

The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
Apr 30, 2025
In a riveting discussion, Jim Babcock, a key member of the LessWrong engineering team, shares insights from nearly 20 years of contemplating AI doom scenarios. The conversation explores the evolution of AI threats, the significance of moral alignment, and the surprising implications of large language models. Jim and the host dissect the complexities of programming choices and highlight the importance of ethical AI development. They emphasize the risks of both gradual disempowerment and rapid capability advances, arguing that both demand urgent attention to ensure AI aligns with human values.
AI Snips
Current AI Missing Dangerous Agency
- Current AI systems lack dangerous agency, but labs are actively working to add it back in.
- You can have moral conversations with today's chatbots, yet maintaining control over smarter optimizers remains a concern.
LessWrong Elevates Discourse
- LessWrong's mission is to foster long-form, high-quality discourse over social media's dopamine-driven engagement loops.
- This approach nudges people towards deeper, more thoughtful reasoning and rationality.
AI Intelligence Shows Discontinuous Jumps
- Intelligence in AI development isn't a smooth, continuous scale but a series of jumps and plateaus.
- Big paradigm shifts, such as the move from LSTMs to Transformers, drastically amplify AI capabilities and may precede superintelligence.