

Poking holes in the AI doom argument — 83 stops where you could get off the “Doom Train”
Skepticism on AGI's Near Arrival
- Many arguments claim AGI (Artificial General Intelligence) is not coming soon, citing its lack of consciousness, emotions, creativity, and agency, as well as scalability limitations.
- Current AI models constantly make basic errors and hit performance walls, suggesting significant gaps before achieving true AGI.
Limits to Superhuman Intelligence
- "Superhuman intelligence" lacks meaningful real-world definition and human collective intelligence surpasses individuals.
- Physical and coordination bottlenecks limit even super-intelligent AI's ability to rapidly outperform humans significantly on large tasks.
AI Poses No Physical Threat
- AI lacks a physical form or actuators, making it vulnerable and unable to directly cause physical harm.
- We can disconnect its power, shut down its networks, or physically disable its hardware to neutralize any physical threat.
I often talk about the “Doom Train”, the series of claims and arguments involved in concluding that P(Doom) from artificial superintelligence is high. In this episode, it’s finally time to show you the whole track!
00:00 Introduction
01:09 “AGI isn’t coming soon”
04:42 “Artificial intelligence can’t go far beyond human intelligence”
07:24 “AI won’t be a physical threat”
08:28 “Intelligence yields moral goodness”
09:39 “We have a safe AI development process”
10:48 “AI capabilities will rise at a manageable pace”
12:28 “AI won’t try to conquer the universe”
15:12 “Superalignment is a tractable problem”
16:55 “Once we solve superalignment, we’ll enjoy peace”
19:02 “Unaligned ASI will spare us”
20:12 “AI doomerism is bad epistemology”
21:42 Bonus arguments: “Fine, P(Doom) is high… but that’s ok!”
Stops on the Doom Train
AGI isn’t coming soon
* No consciousness
* No emotions
* No creativity — AIs are limited to copying patterns in their training data; they can't “generate new knowledge”
* AIs aren’t even as smart as dogs right now, never mind humans
* AIs constantly make dumb mistakes; they can't even do simple arithmetic reliably
* LLM performance is hitting a wall — GPT-4.5 is barely better than GPT-4.1 despite being larger scale
* No genuine reasoning
* No microtubules exploiting uncomputable quantum effects
* No soul
* We’ll need to build tons of data centers and power before we get to AGI
* No agency
* This is just another AI hype cycle; every 25 years people think AGI is coming soon and they're wrong
Artificial intelligence can’t go far beyond human intelligence
* “Superhuman intelligence” is a meaningless concept
* Human engineering is already coming close to the limits of the laws of physics
* Coordinating a large engineering project can’t happen much faster than humans do it
* No individual human is that smart compared to humanity as a whole, including our culture, corporations, and other institutions. Similarly, no individual AI will ever be that smart compared to the sum of human culture and other institutions.
AI won’t be a physical threat
* AI doesn’t have arms or legs, so it has zero control over the real world
* An AI with a robot body can’t fight better than a human soldier
* We can just disconnect an AI’s power to stop it
* We can just turn off the internet to stop it
* We can just shoot it with a gun
* It’s just math
* Any supposed chain of events where AI kills humans is far-fetched science fiction
Intelligence yields moral goodness
* More intelligence is correlated with more morality
* Smarter people commit fewer crimes
* The orthogonality thesis is false
* AIs will discover moral realism
* If we make AIs that smart while also trying to make them moral, then they’ll be smart enough to debug their own morality
* Positive-sum cooperation was the outcome of natural selection
We have a safe AI development process
* Just like every new technology, we’ll figure it out as we go
* We don’t know what problems need to be fixed until we build the AI and test it out
* If an AI causes problems, we’ll be able to turn it off and release another version
* We have safeguards to make sure AI doesn’t become uncontrollable or unstoppable
* If we accidentally build an AI that stops accepting our shutoff commands, it won’t manage to copy versions of itself outside our firewalls which then proceed to spread exponentially like a computer virus
* If we accidentally build an AI that escapes our data center and spreads exponentially like a computer virus, it won’t do too much damage in the world before we can somehow disable or neutralize all its copies
* If we can’t disable or neutralize copies of rogue AIs, we’ll rapidly build other AIs that can do that job for us, and won’t themselves go rogue on us
AI capabilities will rise at a manageable pace
* Building larger data centers will be a speed bottleneck
* Another speed bottleneck is the amount of research that needs to be done, both computational simulation and physical experiments, and that kind of research takes a lot of time
* Recursive self-improvement “foom” is impossible
* Economic growth happens across the whole economy, never as a localized, centralized “foom”
* An AI would need to accumulate cultural learnings over time, the way humanity did as a whole
* AI is just the next phase in the familiar, benign pattern of exponential economic growth eras
AI won’t try to conquer the universe
* AIs can’t “want” things
* AIs won’t have the same “fight instincts” as humans and animals, because they weren’t shaped by a natural selection process that involved life-or-death resource competition
* Smart employees often work for less-smart bosses
* Just because AIs help achieve goals doesn’t mean they have to be hard-core utility maximizers
* Instrumental convergence is false: achieving goals effectively doesn’t mean you have to be relentlessly seizing power and resources
* A resource-hungry goal-maximizing AI wouldn’t seize literally every atom; there’d still be some leftover resources for humanity
* AIs will use new kinds of resources that humans aren’t using: dark energy, wormholes, alternate universes, etc.
Superalignment is a tractable problem
* Current AIs have never killed anybody
* Current AIs are extremely successful at doing useful tasks for humans
* If AIs are trained on data from humans, they’ll be “aligned by default”
* We can just make AIs abide by our laws
* We can align the superintelligent AIs by using a scheme involving cryptocurrency on the blockchain
* Companies have economic incentives to solve superintelligent AI alignment, because unaligned superintelligent AI would hurt their profits
* We’ll build an aligned not-that-smart AI, which will figure out how to build the next-generation AI which is smarter and still aligned to human values, and so on until aligned superintelligence
Once we solve superalignment, we’ll enjoy peace
* The power from ASI won’t be monopolized by a single human government / tyranny
* The decentralized nodes of human-ASI hybrids won’t be like warlords constantly fighting each other, they’ll be like countries making peace
* Defense will have an advantage over attack, so the equilibrium of all the groups of humans and ASIs will be multiple defended regions, not a war of mutual destruction
* The world of human-owned ASIs is a stable equilibrium, not one where ASI-focused projects keep buying out and taking resources away from human-focused ones (Gradual Disempowerment)
Unaligned ASI will spare us
* The AI will spare us because it values the fact that we created it
* The AI will spare us because studying us helps maximize its curiosity and learning
* The AI will spare us because it feels toward us the way we feel toward our pets
* The AI will spare us because peaceful coexistence creates more economic value than war
* The AI will spare us because Ricardo’s Law of Comparative Advantage says you can still benefit economically from trading with someone who’s weaker than you
AI doomerism is bad epistemology
* It’s impossible to predict doom
* It’s impossible to put a probability on doom
* Every doom prediction has always been wrong
* Every doomsayer is either psychologically troubled or acting on corrupt incentives
* If we were really about to get doomed, everyone would already agree about it and be bringing it up all the time
Sure, P(Doom) is high, but let’s race to build it anyway because…
Coordinating to not build ASI is impossible
* China will build ASI as fast as it can, no matter what — because of game theory
* So however low our chance of surviving it is, the US should take the chance first
Slowing down the AI race doesn’t help anything
* Chances of solving AI alignment won’t improve if we slow down or pause the capabilities race
* I personally am going to die soon, and I don’t care about future humans, so I’m open to any hail mary to prevent myself from dying
* Humanity is already going to rapidly destroy ourselves with nuclear war, climate change, etc
* Humanity is already going to die out soon because we won’t have enough babies
Think of the good outcome
* If it turns out that doom from overly fast AI building doesn’t happen, we’ll reach the good outcome more quickly!
* People’s suffering and dying will end sooner
AI killing us all is actually good
* Human existence is morally net-negative, or close to zero net moral value
* Whichever AI ultimately comes to power will be a “worthy successor” to humanity
* Whichever AI ultimately comes to power will be as morally valuable as human descendants generally are to their ancestors, even if their values drift
* The successor AI’s values will be interesting, productive values that let them successfully compete to dominate the universe
* How can you argue with the moral choices of an ASI that’s smarter than you, as if you know goodness better than it does?
* It’s species-ist to judge what a superintelligent AI would want to do. The moral circle shouldn’t be limited to just humanity.
* Increasing entropy is the ultimate north star for techno-capital, and AI will increase entropy faster
* Human extinction will solve the climate crisis, and pollution, and habitat destruction, and let mother earth heal
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com