
LessWrong (30+ Karma) “Varieties Of Doom” by jdp
Nov 18, 2025
Dive into an exploration of existential risks: how technology might leave humanity feeling obsolete, fears that AI successors could wipe out human existence, and the philosophical quandaries of their consciousness and social dynamics. The episode also covers the paperclip maximizer risk, takes a critical look at biotech and nuclear threats as possible ruin scenarios, and wraps up with a thought-provoking analysis of humanism's relevance in a tech-dominated world.
AI Snips
Doom Is Many-Layered, Not a Single Event
- Doom is an onion of interlocking, morally distinct scenarios rather than a single event.
- Treating p(doom) as one number obscures different outcomes and policy implications.
Social Replacement Can Cause Existential Ennui
- Advanced AIs could satisfy human social and emotional roles better than other humans.
- That replacement can create deep subjective loss even if AIs are benevolent.
Transhumanism Shapes Risk Perception
- Transhumanist visions anchor many rationalists' hopes and fears about AI futures.
- The failure of those visions forces a confrontation with ordinary mortality and cultural loss.