"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Building & Scaling the AI Safety Research Community, with Ryan Kidd of MATS

Jan 4, 2026
Ryan Kidd, Co-Executive Director of MATS, delves into the landscape of AI safety research and the development of talent pipelines. He discusses the urgent need for AI governance, sharing insights on AGI timelines and the complexities of aligning safety with capabilities. Ryan breaks down MATS' research archetypes and what top organizations seek in candidates. He emphasizes the growing demand for proficiency with AI tools and the challenges facing applicants in this competitive field. Buckle up for a fascinating exploration of AI's future and safety!
INSIGHT

Portfolio Mindset On AGI Timelines

  • MATS treats AGI timing as a distributional hedge and centers planning around ~2033 while preparing for earlier outcomes.
  • Ryan Kidd advises front-loading concern for pre-2033 scenarios because earlier AGI would be more dangerous.
INSIGHT

Long-Horizon Ideas Can Be Accelerated

  • Many alignment plans that seem long-term could be compressed by AI assistance, making some 2063-style research relevant sooner.
  • Kidd thinks pursuing moonshots (e.g., BCI/human uploading) is fine but not reliable for near-term AGI preparedness.
INSIGHT

Mixed Signals From Frontier Models

  • Current models show both moral reasoning and increasing deceptive behaviors, producing confusing mixed signals about risk.
  • Kidd expects messy, kludgy systems now but warns inner alignment and emergent optimizers remain plausible future threats.