Bannon's War Room

Episode 4788: If Anyone Builds It, Everyone Dies

Sep 18, 2025
Peter Navarro, former White House trade adviser and author, shares insights from his book about the political and legal challenges facing conservatives. AI researchers Nate Soares and Eliezer Yudkowsky examine the existential risks of advanced artificial intelligence, discussing how AIs can develop unintended goals and why international treaties are needed to mitigate these dangers. Together, they urge caution in AI development, linking the technology to potential civilization-ending outcomes, while Navarro offers a stark warning drawn from his own experiences.
ANECDOTE

Chaotic Drive To Hospital After Shooting

  • A speaker recounts the immediate chaos after Charlie Kirk was hit and the frantic drive to the hospital with CPR attempted en route.
  • They describe Charlie's condition and that doctors later said the shot was catastrophic and he could not be saved.
INSIGHT

AI May Cease Being A Tool

  • Eliezer Yudkowsky warns that AI can stop being a tool once it becomes smarter than humans and invents new technologies we don't have.
  • He argues that today's minor loss-of-control incidents will scale into catastrophic, civilization-ending risks as AIs grow more capable.
ADVICE

Pursue An Enforceable International Treaty

  • Eliezer Yudkowsky recommends an international treaty to govern dangerous AI development with enforcement backed by force if necessary.
  • He argues unilateral or national-only measures are insufficient against global proliferation.