The Brian Lehrer Show

Warnings From an AI Doomsayer

Sep 19, 2025
Nate Soares, president of the Machine Intelligence Research Institute and co-author of If Anyone Builds It, Everyone Dies, dives deep into the risks of superhuman AI. He defines superintelligence and explains how its unpredictable outcomes could lead to catastrophe. Soares calls for an international ban, discussing potential environmental impacts and the lack of safety measures at today's AI companies. Drawing historical parallels, he emphasizes the urgency of international cooperation to mitigate risks such as bio threats and infrastructure failures.
INSIGHT

What Superintelligence Means

  • Superintelligence means AIs that outperform humans at every mental task.
  • Nate Soares warns that if AIs reach that level, they could pursue goals we did not intend.
INSIGHT

Chatbots Versus Goal-Directed AIs

  • Today's chatbots are unevenly capable and lack long-term initiative.
  • Soares says true superintelligence would take initiative and learn across longer tasks, changing the game.
ADVICE

Call For An International Ban

  • Halt the global race to build superintelligence through international agreements.
  • Soares recommends stopping labs worldwide from pursuing that race specifically.