Model Evaluation for Extreme Risks

AI Safety Fundamentals

Model Evaluation for Extreme Risks in AI Safety and Governance

This chapter discusses the importance of model evaluation for extreme risks in AI safety and governance, and highlights the challenges of designing effective evaluations and building governance regimes around them. It points to research, internal policies, and support from frontier AI developers as key elements in addressing these risks, along with recommendations for policymakers to track, invest in, mandate, and regulate evaluations and AI deployment.
