
Model Evaluation for Extreme Risks

AI Safety Fundamentals

CHAPTER

Model Evaluation for Extreme Risks in AI Safety and Governance

This chapter discusses the importance of model evaluation for extreme risks in AI safety and governance, and highlights the challenges of designing effective evaluations and building governance regimes around them. It points to further research, internal policies, and support from frontier AI developers as key elements in addressing these risks, along with recommendations for policymakers to track frontier AI development, invest in evaluations, and mandate and regulate the deployment of AI systems.
