The Delphi Podcast

Mike McCormick: AI Acceleration vs Risks, Funding Global Resilience, AGI scenarios, U.S. vs China

Oct 9, 2025
Mike McCormick, founder of Halcyon and AI-safety philanthropist, discusses his shift from venture capital to full-time work on AI safety. He emphasizes the need to balance AI acceleration with safety measures. Topics include the risks of misuse in large language models, the challenges of AGI competition between the U.S. and China, and the importance of nonprofit-first frameworks in AI funding. McCormick also highlights the role of insurance in raising AI safety standards and the potential socioeconomic impacts of AGI.
AI Snips
ANECDOTE

Career Pivot Motivated By AGI Stakes

  • Mike McCormick left a large VC role after realizing AGI could radically reshape humanity and deciding to work on the problem full-time.
  • That pivot led him to start Halcyon and to fund AI safety, security, and career-transition grants for senior talent.
INSIGHT

Nonprofit-First Model To Shape Talent

  • Halcyon launched as a nonprofit first so it could recruit senior talent and shape policy and research in ways venture capital alone cannot.
  • The VC fund came later, to back mission-aligned companies that emerge from that nonprofit platform.
ADVICE

Build Defense-In-Depth For AI Risks

  • Build defense-in-depth across pre-training, post-training, runtime, and downstream resilience to reduce AI-enabled bio and cyber risks.
  • Invest in non-AI resilience (PPE, rapid countermeasures, stockpiles) because some risks require ordinary public-health layers.