80,000 Hours Podcast

#154 - Rohin Shah on DeepMind and trying to fairly hear out both AI doomers and doubters

Jun 9, 2023
Rohin Shah, a machine learning researcher at Google DeepMind and author of the Alignment Newsletter, shares insights from the cutting-edge world of AI safety. He discusses the pressing issue of aligning AI with human intentions amid rapid technological advances. Shah explores the spectrum of opinions on AI risk, highlighting the balance between caution and optimism. He also critiques misconceptions in AI discourse, emphasizes the need for ethical frameworks, and reflects on the implications of AI's rapid evolution, urging informed public engagement.
AI Snips
INSIGHT

Continuous AI Development

  • Rohin Shah believes AI's rise will be continuous, not a sudden value lock-in.
  • AI will likely collaborate with humans, figuring out values together, rather than having them pre-programmed.
INSIGHT

Worst-Case Reasoning Flaw

  • People learning about AI safety online overuse worst-case reasoning to estimate probabilities.
  • A technique isn't useless just because a theoretical superintelligence could defeat it later on.
ANECDOTE

Evolution Analogy Overuse

  • The evolution analogy, comparing AI training to human evolution, is often overextended.
  • It's often used to justify doom scenarios without engaging with the specifics of how modern AI systems are actually trained.