The Lawfare Podcast

Lawfare Daily: Christina Knight on AI Safety Institutes

Jun 11, 2025
Christina Knight, Machine Learning Safety and Evals Lead at Scale AI and former senior policy advisor at the U.S. AI Safety Institute, discusses key aspects of evaluating frontier AI models. She emphasizes the need for rigorous testing and for addressing vulnerabilities to improve AI safety. The conversation also covers the urgency of global AI governance, tactics such as red teaming to mitigate risks, and the importance of adaptable safety measures tailored to specific applications in the face of evolving threats.
INSIGHT

Role of AI Safety Institutes

  • AI safety institutes are government-backed scientific offices advancing AI safety research without necessarily being regulatory bodies.
  • Different countries' institutes have varying mandates; some, such as South Korea's, also handle evaluation roles.
INSIGHT

Government Role in AI Safety

  • Government AI safety institutes exist because independent researchers often lack the compute resources needed for robust safety research.
  • Government resources help sustain a focus on AI safety amid rapid AI development happening outside traditional government research.
INSIGHT

Evaluating AI Risks Spectrum

  • AI risk evaluation weighs the likelihood of a harm against its impact, ranging from common low-impact issues to rare, high-impact threats such as chemical or nuclear dangers.
  • The U.S. AI Safety Institute's early focus included national security, biological and cyber threats, and public safety.