Future of Life Institute Podcast

Can AI Do Our Alignment Homework? (with Ryan Kidd)

Feb 6, 2026
Ryan Kidd is co-executive director at MATS, where he builds AI safety talent pipelines and mentors researchers on interpretability and governance. He discusses AGI timelines and preparing for nearer-term risks. The conversation covers model deception, evaluation and monitoring, the tradeoffs between safety work and capabilities, and what MATS looks for in applicants and researchers.
AI Snips
ADVICE

Track Capability Prereqs And Run Live Evals

  • Continuously monitor capability prerequisites (e.g., situational awareness, hacking ability) alongside 'model organism' red-team evaluations.
  • Prepare rapid-response plans and safe fallback options for deployed systems that learn online.
INSIGHT

Safety Work Inevitably Affects Capabilities

  • All safety research influences capabilities to some degree; separating the two perfectly is infeasible outside conditions of extreme secrecy.
  • Practical paths forward run through building safer, deployable products and governance, not perfect isolation.
INSIGHT

Coordination And Lowering The Alignment Tax

  • Companies racing to the frontier do so for money, scale, and historical impact, which makes global coordination essential.
  • Ryan favors phased international entry and technical solutions that lower the 'alignment tax'.