

Mentioned in 6 episodes
If Anyone Builds It, Everyone Dies
Book • 2025
This book examines the risks of advanced artificial intelligence, arguing that the development of superintelligence could have catastrophic consequences for humanity.
The authors make a case for careful oversight and regulation of AI development.
They explore various scenarios and potential outcomes, underscoring the urgency of addressing the challenges posed by rapidly advancing AI capabilities.
Written in an accessible style, the book makes complex ideas understandable to a broad audience.
It serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.
Mentioned by Josh Clark as a new book about AI.

Who are the Zizians?

Mentioned by Liron Shapira as a critical message on the dangers of AI, urging a shift in conversation towards the risk of building something that could kill everyone.

This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi

Referenced by Liron Shapira to highlight the organization's perspective on AI risk.

Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!

Mentioned as receiving strong endorsements from scientists and academics.

“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo
Mentioned by Nate Soares as a book with endorsements from the national security community.

The AI disconnect: understanding vs motivation, with Nate Soares

Mentioned as a forthcoming book co-authored by Eliezer Yudkowsky and Nate Soares about AI existential risk.

“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba
Mentioned by Zvi Mowshowitz as a book with new endorsements and crucial insights into AI risks.

AI #121 Part 2: The OpenAI Files

Mentioned by Zvi Mowshowitz as a new book coming out September 16th, explicitly about AI policy.

AI #116: If Anyone Builds It, Everyone Dies

Mentioned as a new book on AI safety.

AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
Mentioned by Danny Fortson as a new book by one of the most prominent AI doomers.

OpenAI's iPhone moment & can AI teach?

Mentioned by two prominent developers, discussing machine intelligence and existential risks.

The ultimate A.I. explainer – Will it kill us all or make us rich?
Mentioned by Mary Wakefield, referring to Yudkowsky's book about the dangers of AI.

Spectator Out Loud: Mark Mason, Mary Wakefield, Matthew Parris and Philip Patrick
Mentioned as a forthcoming book co-authored with Eliezer, receiving positive responses from readers.

“A case for courage, when speaking of AI danger” by So8res
Recommended by Eneasz Brodsky and Steven Zuber to help people understand and discuss AI risk effectively.

240 – How To Live Well With High P(Doom) – with Ben Pace, Brandon Hendrickson, Miranda Dixon-Luinenburg