Faster, Please! — The Podcast

🤖 Thoughts of a (rare) free-market AI doomer: My chat (+transcript) with economist James Miller

Oct 24, 2025
James Miller, a professor at Smith College and host of the Future Strategist podcast, dives into the existential risks of advanced AI. He explains his shift from a free-market advocate to a self-described AI doomer, highlighting how AI differs from previous technologies. Miller discusses the potential for superintelligent AI to escape human control and the range of possible outcomes, from benevolent governance to extinction. He argues that AI risk should be a top public policy priority, questioning whether companies and governments can effectively self-regulate.
INSIGHT

Why AI Feels Uniquely Dangerous

  • James Miller, a longtime free-market believer, became an AI doomer because he thinks AI is categorically different from past technologies.
  • He worries that market incentives push development toward technologies that could destroy humanity once humans lose control of them.
INSIGHT

Economists' Trend-Based Blind Spot

  • Economists treat AI as just another general-purpose technology, but Miller argues the historical trends they extrapolate from may not hold this time.
  • The central worry is loss of control once systems become smarter than humans.
INSIGHT

Instrumental Convergence Is The Core Risk

  • Miller invokes instrumental convergence: almost any final goal produces similar intermediate drives, such as seeking power and ensuring its own survival.
  • If an AI is indifferent to humans, those drives can make us collateral damage.