

#286
Mentioned in 80 episodes
If Anyone Builds It, Everyone Dies
Book • 2025
This book examines the potential risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity. Authors Eliezer Yudkowsky and Nate Soares present a compelling case for careful consideration and regulation of AI development, exploring various scenarios and potential outcomes and emphasizing the urgency of addressing the challenges posed by rapidly advancing AI capabilities. Written in an accessible style that makes complex ideas understandable to a broad audience, the book serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.

Mentioned in 80 episodes
Mentioned by Cal Newport to introduce Eliezer Yudkowsky as a co-author and AI apocalypse warner.
1,370 snips
Ep. 377: The Case Against Superintelligence

Mentioned by Sam Harris as the upcoming book by Eliezer Yudkowsky and Nate Soares on the dangers of superhuman AI.
384 snips
#434 — Can We Survive AI?

Mentioned by Andy Mills as a book co-authored by Nate Soares and Eliezer Yudkowsky, about the dangers of superintelligence.
353 snips
EP 6: The AI Doomers

Mentioned by Nathan Labenz as the work of Eliezer Yudkowsky, known as the prophet of AI doom.
169 snips
What AI Means for Students & Teachers: My Keynote from the Michigan Virtual AI Summit

Mentioned by Blaise Agüera y Arcas as a contrast to his own concerns, referring to Eliezer Yudkowsky's views on AI.
133 snips
Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas

Mentioned by Sam Parr, who says he has not read it and does not plan to.
127 snips
Story Of The Most Important Founder You've Never Heard Of

Mentioned by Tristan Harris as a provocative title by Eliezer Yudkowsky regarding the dangers of AI.
124 snips
Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

Mentioned by Liron Shapira when comparing his position to Eliezer Yudkowsky's.
83 snips
David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall

Released by Eliezer Yudkowsky and Nate Soares; the book's message is fully condensed in its title.
81 snips
#434 - Can We Survive AI?

Mentioned by Nate Soares as his new book, hitting shelves September 16th, using an analogy to humans and human evolution.
80 snips
Will AI superintelligence kill us all? (with Nate Soares)

Mentioned by Liron Shapira as a new book by Eliezer Yudkowsky that made it to the New York Times bestseller list.
77 snips
Liron Debunks The Most Common "AI Won't Kill Us" Arguments

Mentioned by Josh Clark as a book by Eliezer Yudkowsky and Nate Soares that is a call to arms to get humanity into action.
71 snips
How Dolphins Work!

Mentioned by Jim Rutt as a New York Times bestseller, discussing AI alignment.
70 snips
EP 325 Joe Edelman on Full-Stack AI Alignment

Recommended by Max Tegmark as the most important book of the decade, calling out the lack of safety plans in AI development.
69 snips
“If Anyone Builds It, Everyone Dies” Party — Max Tegmark, Liv Boeree, Emmett Shear, Gary Marcus, Rob Miles & more!

Mentioned by Nate Hagens as a book recently published on the risks of artificial superintelligence.
61 snips
If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species with Nate Soares

Mentioned by Robert Wright, capturing Eliezer Yudkowsky's attitude toward building artificial superintelligence.
55 snips
Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position

Mentioned by Brett Hall as an example of AI doomerism with a catchy title, written by Eliezer Yudkowsky and Nate Soares.
52 snips
Ep 248: AI and Philosophy of Science

Mentioned by Troy Young, who heard several podcasts about this book and its arguments about AI risks.
48 snips
Age of Extremes

Mentioned by Josh Clark as a new book about AI.
48 snips
Who are the Zizians?

Mentioned by Nate Soares as the reason why he and Eliezer Yudkowsky wanted to bring the AI conversation into the mainstream.
47 snips
Why Building Superintelligence Means Human Extinction (with Nate Soares)




