
Lawfare Daily: Christina Knight on AI Safety Institutes
The Lawfare Podcast
The Importance of Red Teaming and Evaluations in AI Reliability
This chapter discusses the essential role of red teaming in assessing AI models, highlighting how thorough evaluations can uncover potential risks. It also examines the differences between safety evaluations and capability evaluations, and underscores the need for ongoing updates to keep them effective.