Lawfare Daily: Christina Knight on AI Safety Institutes

The Lawfare Podcast

The Importance of Red Teaming and Evaluations in AI Reliability

This chapter discusses the role of red teaming in assessing AI models, the importance of thorough evaluations in uncovering potential risks, the differences between safety and capability evaluations, and the need for ongoing updates to keep evaluations effective.
