#428
Mentioned in 50 episodes

If Anyone Builds It, Everyone Dies

Book • 2025
This book delves into the potential risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity.

The authors present a compelling case for the need for careful consideration and regulation of AI development.

They explore various scenarios and potential outcomes, emphasizing the urgency of addressing the challenges posed by rapidly advancing AI capabilities.

The book is written in an accessible style, making complex ideas understandable to a broad audience.

It serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.

Mentioned by

Mentioned by Sam Harris as the upcoming book by Eliezer Yudkowsky and Nate Soares on the dangers of superhuman AI.
348 snips
#434 — Can We Survive AI?
Released by Eliezer Yudkowsky and Nate Soares, its message is fully condensed in that title.
79 snips
#434 - Can We Survive AI?
Mentioned by Josh Clark as a book by Eliezer Yudkowsky and Nate Soares that is a call to arms to spur humanity into action.
72 snips
How Dolphins Work!
Mentioned by Nate Soares as his new book, hitting shelves September 16th, using an analogy to humans and human evolution.
71 snips
Will AI superintelligence kill us all? (with Nate Soares)
Recommended by Max Tegmark as the most important book of the decade, calling out the lack of safety plans in AI development.
69 snips
“If Anyone Builds It, Everyone Dies” Party — Max Tegmark, Liv Boeree, Emmett Shear, Gary Marcus, Rob Miles & more!
Mentioned by Jim Rutt as a New York Times bestseller, discussing AI alignment.
53 snips
EP 325 Joe Edelman on Full-Stack AI Alignment
Mentioned by Blaise Agüera y Arcas as a contrast to his own concerns, referring to Eliezer Yudkowsky's views on AI.
52 snips
Google Researcher Shows Life "Emerges From Code" - Blaise Agüera y Arcas
Mentioned by Josh Clark as a new book about AI.
48 snips
Who are the Zizians?
Mentioned by Troy Young, who heard several podcasts about this book and its arguments about AI risks.
48 snips
Age of Extremes
Mentioned by Brett Hall as an example of AI doomerism with a catchy title, written by Eliezer Yudkowsky and Nate Soares.
46 snips
Ep 248: AI and Philosophy of Science
Mentioned as the new book by Eliezer Yudkowsky and Nate Soares, exploring the dangers of superhuman AI.
44 snips
Book Review: If Anyone Builds It, Everyone Dies
Mentioned by Liron Shapira when comparing his position to Eliezer Yudkowsky's.
44 snips
David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall
Mentioned as the focus of a "circular firing squad" within the rationalist community.
42 snips
Are We A Circular Firing Squad? — with Holly Elmore, Executive Director of PauseAI US
Mentioned by Geoffrey Hinton when disagreeing with the book's categorical prediction of AI leading to human extinction.
39 snips
Geoffrey Hinton vs. The End of the World
Mentioned by Paul Kingsnorth as a book coming out, possibly next week or this month.
38 snips
Paul Kingsnorth: How to fight the Machine
Mentioned by Liron Shapira as a critical message on the dangers of AI, urging a shift in the conversation toward the risk of building something that could kill everyone.
36 snips
This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Mentioned by Nate Soares as the reason why he and Eliezer Yudkowsky wanted to bring the AI conversation into the mainstream.
34 snips
Why Building Superintelligence Means Human Extinction (with Nate Soares)
Mentioned as a book written by Nate Soares and a fellow MIRI colleague that urges humanity to slam the brakes on AI development.
32 snips
They warned about AI before it was cool. They're still worried
Recommended by Liron Shapira as capturing the essence of AI existential risk.
31 snips
Tech CTO Has 99.999% P(Doom) — “This is my bugout house” — Louis Berman, AI X-Risk Activist
Recommended by Henrik Moltke, who describes it as well-written and funny, but also as an acid trip through AI's darkest dreams.
31 snips
AI slår os alle ihjel, Musks MUS-samtale, ChatGPT-alderskontrol og Zuckerberg gør det live
