"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Guaranteed Safe AI? World Models, Safety Specs, & Verifiers, with Nora Ammann & Ben Goldhaber

Jul 17, 2024
Nora Ammann, co-author of the Guaranteed Safe AI framework, and Ben Goldhaber dive deep into AI safety. They discuss a groundbreaking three-part system for ensuring robust AI behavior. Highlights include the necessity for quantitative safety metrics and collaborative, interdisciplinary approaches to mitigate risks. They tackle ethical dilemmas like the trolley problem in real-world AI applications and advocate for building resilient safety standards. The duo emphasizes the importance of innovative governance structures to foster responsible AI development.
AI Snips
INSIGHT

Need for Higher Safety Standards in AI

  • Current AI safety approaches, especially empirical ones, may be insufficient.
  • The field needs higher expectations and quantitative safety assurances, as in other engineering disciplines.
ANECDOTE

Bridge Collapse Rates

  • In the 1870s, 20-25% of bridges collapsed within 10 years.
  • Modern civil engineering can make precise statements about failure rates, a standard that AI safety should aim for.
INSIGHT

Moving Towards Theoretical Grounding for AI Safety

  • AI safety should shift from black-box testing toward a strong theoretical grounding.
  • This involves precise, quantifiable estimates of failure rates, including known and unknown failures.