AI Summer

Ajeya Cotra on AI safety and the future of humanity

Jan 16, 2025
Ajeya Cotra, a Senior Program Manager at Open Philanthropy, focuses on AI safety and capabilities forecasting. She discusses the heated debate between "doomers" and skeptics over AI risk, and envisions how AI personal assistants could transform daily tasks and the workforce by 2027. The conversation turns to the transformative potential of AI in the 2030s, with advances across sectors and the philosophical implications of our digital future, before closing with a look at innovative energy concepts and their technological limits.
INSIGHT

Defining AGI

  • Disagreements about AI's future stem from differing definitions of terms like "AGI."
  • These varying definitions shape expectations about AI's impact, from mild assistance to radical societal change.
INSIGHT

The Skeptics' Perspective

  • AI safety skeptics often emphasize real-world complexities and negative feedback loops.
  • They believe these factors will slow AI's progress, providing time to adapt.
INSIGHT

The Doomers' Critique

  • Doomers criticize skeptics for lacking imagination and being trapped in normalcy bias.
  • They argue that historical precedent shows the potential for rapid, transformative change.