Google DeepMind: The Podcast

AI Safety...Ok Doomer: with Anca Dragan

Aug 28, 2024
Anca Dragan, who leads AI safety and alignment at Google DeepMind, dives into the pressing challenges of AI safety. She discusses the urgent need to align artificial intelligence with human values to prevent existential threats, and the conversation covers the ethical dilemmas posed by AI recommendation systems and the interplay of competing objectives. Dragan also highlights innovative uses of AI, such as citizens' assemblies, to promote democratic dialogue. The episode serves as a vital reminder of the importance of human oversight in AI development.
INSIGHT

Integrated AI Safety

  • Short-term and long-term AI safety risks are intertwined, requiring a unified approach.
  • Anca Dragan emphasizes the urgency of addressing both present and future harms.
ANECDOTE

The Bridge Analogy

  • Anca Dragan uses bridge design as an analogy for AI safety.
  • Safety should be integrated from the beginning, not added as an afterthought.
ANECDOTE

Robot-Human Interaction

  • Designing robots to interact safely with humans requires considering human anticipation.
  • Robots must be predictable, allowing humans to understand their actions.