Mentioned in 6 episodes

If Anyone Builds It, Everyone Dies

Book • 2025
This book delves into the potential risks of advanced artificial intelligence, arguing that the development of superintelligence could lead to catastrophic consequences for humanity.

The authors present a compelling case for the need for careful consideration and regulation of AI development.

They explore various scenarios and potential outcomes, emphasizing the urgency of addressing the challenges posed by rapidly advancing AI capabilities.

The book is written in an accessible style, making complex ideas understandable to a broad audience.

It serves as a call to action, urging policymakers and researchers to prioritize AI safety and prevent potential existential threats.

Mentioned by Josh Clark as a new book about AI.
Who are the Zizians?
Mentioned by Liron Shapira as a critical message on the dangers of AI, urging a shift in conversation towards the risk of building something that could kill everyone.
This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Referenced by Liron Shapira to highlight the organization's perspective on AI risk.
Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
Mentioned as receiving strong endorsements from scientists and academics.
“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo
Mentioned by Nate Soares as a book with endorsements from the national security community.
The AI disconnect: understanding vs motivation, with Nate Soares
Mentioned as a forthcoming book co-authored by Eliezer Yudkowsky and Nate Soares about AI existential risk.
“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba
Mentioned by Zvi Mowshowitz as a book with new endorsements and crucial insights into AI risks.
AI #121 Part 2: The OpenAI Files
Mentioned by Zvi Mowshowitz as a new book coming out September 16th, explicitly about AI policy.
AI #116: If Anyone Builds It, Everyone Dies
Mentioned by Danny Fortson as a new book by one of the most prominent AI doomers.
OpenAI's iPhone moment & can AI teach?
Mentioned by two prominent developers, discussing machine intelligence and existential risks.
The ultimate A.I. explainer – Will it kill us all or make us rich?
Mentioned by Mary Wakefield, referring to Yudkowsky's book about the dangers of AI.
Spectator Out Loud: Mark Mason, Mary Wakefield, Matthew Parris and Philip Patrick
Mentioned as a forthcoming book co-authored with Eliezer, receiving positive responses from readers.
“A case for courage, when speaking of AI danger” by So8res
Recommended by Eneasz Brodsky and Steven Zuber to help people understand and discuss AI risk effectively.
240 – How To Live Well With High P(Doom) – with Ben Pace, Brandon Hendrickson, Miranda Dixon-Luinenburg
