The Trajectory

Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

Aug 12, 2025
Toby Ord, a Senior Researcher at Oxford’s AI Governance Initiative and author of 'The Precipice,' delves into the complexities of AGI risks in this engaging discussion. He highlights the rapid advancements in AI and their ethical implications, urging stronger governance frameworks to keep pace. The conversation explores how AI's evolving moral landscape affects creativity and human agency, while pondering potential rights for advanced AI systems. Ord emphasizes the importance of international collaboration to navigate these challenges effectively.
INSIGHT

Understanding Versus Caring

  • LLMs have read vast amounts of human writing, shifting the question from "can they understand ethics?" to "will they care?"
  • Toby Ord warns that understanding does not imply alignment or moral concern.
INSIGHT

Imitation Pulls Toward Humans

  • Imitation pre-training pulls models toward human-level outputs because they predict human tokens.
  • Ord expects a kink at human-level performance unless reinforcement learning pushes beyond it.
INSIGHT

Measurable Tasks Lead Capability Growth

  • Reasoning models change the landscape: they are general-purpose, but excel most on objectively scorable tasks.
  • Expect the fastest capability gains in domains with clear, measurable objectives.