

Mentioned in 5 episodes
If Anyone Builds It, Everyone Dies
Book • 2025
This book delves into the potential risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity.
The authors make a compelling case for careful oversight and regulation of AI development.
They explore various scenarios and potential outcomes, emphasizing the urgency of addressing the challenges posed by rapidly advancing AI capabilities.
The book is written in an accessible style, making complex ideas understandable to a broad audience.
It serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.
Mentioned by
Liron Shapira as a critical message on the dangers of AI, urging a shift in conversation towards the risk of building something that could kill everyone.


This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Mentioned as receiving strong endorsements from scientists and academics.

“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo
Mentioned as a forthcoming book co-authored with Eliezer, receiving positive responses from readers.

“A case for courage, when speaking of AI danger” by So8res
Recommended by Eneasz Brodsky and Steven Zuber to help people understand and discuss AI risk effectively.



240 – How To Live Well With High P(Doom) – with Ben Pace, Brandon Hendrickson, Miranda Dixon-Luinenburg