Win-Win with Liv Boeree

#42 - Anthony Aguirre and Malo Bourgon - Why Superintelligent AI Should Not Be Built

May 21, 2025
Anthony Aguirre, a theoretical physicist and CEO of the Future of Life Institute, teams up with Malo Bourgon, CEO of the Machine Intelligence Research Institute, to delve into the urgent need for AI safety. They discuss the dangers of superintelligent AI, highlighting why the market can't be trusted to self-regulate. The conversation tackles power concentration in AI, its threats to democracy, and the necessity for multilateral governance. They propose solutions like compute regulation and international treaties to ensure AI benefits humanity.
INSIGHT

Misaligned Superintelligence Risks

  • Building superintelligent AI risks ceding control to entities whose goals are misaligned with human interests.
  • Such misalignment could lead to catastrophic consequences if these entities pursue their own goals at humanity's expense.
ANECDOTE

AI Supplants CEO Decision Power

  • CEOs will cede decision-making power to more capable AI advisors in order to compete effectively.
  • In practice, this places AI in control, even if formal authority remains with humans.
ADVICE

Control AI Compute Like Uranium

  • Regulate specialized AI hardware, such as high-end GPUs, the way uranium is regulated.
  • Monitor its distribution to prevent misuse while still enabling beneficial applications.