
LessWrong (Curated & Popular) “Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development” by Jan_Kulveit, Raymond D, Nora_Ammann, Deger Turan, David Scott Krueger (formerly: capybaralet), David Duvenaud
Feb 4, 2025
Explore the hidden dangers of incremental AI advancements that could gradually disempower humanity. The discussion delves into the risks of AI taking over roles in labor, governance, and even creative fields. Hear how small technological changes could misalign societal structures, threatening human influence and welfare. The experts highlight the slippery slope of losing control over our civilization, raising crucial questions about our future with AI.
AI Snips
Gradual Disempowerment Risk
- AI risk scenarios often depict a sudden loss of control, driven by rapid AI advancement or a coordinated betrayal by AI systems.
- The authors argue that even incremental AI development poses a substantial risk of gradual human disempowerment.
Shifting Alignment of Societal Systems
- Societal systems have historically stayed aligned with human interests because humans play an essential role in economies and cultures.
- As AI becomes more competitive than humans in these roles, that alignment may break down, leading institutions to prioritize growth over human flourishing.
Interconnected Societal Pressures
- Resisting AI-driven disempowerment is difficult because societal pressures are interconnected.
- Economic incentives for AI adoption will shape state policies and public opinion, creating feedback loops that accelerate disempowerment.
