AI Safety Fundamentals

Model Evaluation for Extreme Risks

May 13, 2023
This episode highlights the significance of model evaluation in addressing extreme risks posed by AI systems: evaluating dangerous capabilities and assessing a model's propensity to cause harm. The chapters cover different aspects of model evaluation, including alignment evaluations and evaluating agency in AI systems. The episode also examines the limitations and hazards of model evaluation, the risks of conducting dangerous capability evaluations and sharing their materials, and the role of effective evaluations in AI safety and governance.