“Gradual Disempowerment, Shell Games and Flinches” by Jan_Kulveit
Feb 5, 2025
In this engaging discussion, Jan Kulveit, author and insightful thinker on AI risks, delves into the concept of Gradual Disempowerment. He examines how, as human cognition loses its value, societal systems may become misaligned with human interests. Kulveit highlights recurring patterns of avoidance in conversations about AI, captured by the ideas of 'shell games' and 'flinches.' He also warns against the dangers of delegating too much to future AI, encouraging more proactive engagement with the complex challenges ahead.
Socioeconomic systems may become less aligned with human interests as automation reduces the utility of human cognition, risking gradual disempowerment.
Many individuals exhibit cognitive flinches, often reverting to familiar narratives instead of addressing the nuanced implications of technological risks.
Deep dives
The Threat of Gradual Disempowerment
Human civilization remains aligned with human interests largely because humans provide essential value to economic and social systems. As automation and advanced technologies reduce the relevance of human cognition, there is a risk that these systems will become less responsive to human needs, leading to gradual disempowerment. This dynamic could manifest across various sectors, raising concerns that states and cultural institutions may drift away from prioritizing human welfare as their reliance on human input diminishes. Consequently, the potential for systemic failure increases if all these interconnected systems become misaligned simultaneously, undermining human power and influence.
Shell Games in Discussion
A recurring theme in conversations about alignment and gradual disempowerment is what Kulveit terms 'shell games', in which responsibility for preserving human influence is shifted among different societal frameworks. When discussing the socioeconomic impacts of automation, for instance, people often dismiss concerns by suggesting that governments will manage redistribution or that cultural frameworks will safeguard human values. However, this perspective overlooks the overarching issue: reduced reliance on humans affects all of these systems collectively, not in isolation. The disconnect highlights a tendency among specialists to apply their domain expertise without recognizing how correlated failures could undermine the very frameworks they are counting on.
Cognitive Flinch and Future AI
A notable reaction to the gradual disempowerment argument is a cognitive flinch, where individuals instinctively pivot to more familiar narratives rather than engage with the complex implications involved. Even highly intelligent individuals might acknowledge the economic risks of AI yet avoid discussing its broader impacts on state and cultural evolution, often redirecting focus onto more tangible technical concerns. Additionally, some researchers believe that the challenges posed by disempowerment will either be solved by future aligned AI or are not immediately pressing issues. This mindset underscores a tendency to underestimate the socio-technical processes that will shape AI development and the human agency that may be eroded along the way.
1. Exploring Gradual Disempowerment and Cognitive Disengagement in AI Discourse
Over the past year and a half, I've had numerous conversations about the risks we describe in Gradual Disempowerment. (The shortest useful summary of the core argument is: to the extent human civilization is human-aligned, most of the reason for that alignment is that humans are extremely useful to various social systems, like the economy and states, or as the substrate of cultural evolution. When human cognition ceases to be useful, we should expect these systems to become less aligned, leading to human disempowerment.) This post is not about repeating that argument - it might be quite helpful to read the paper first, as it has more nuance and more than just the central claim - but is mostly me sharing some parts of the experience of working on this and discussing it.
What fascinates me isn't just the substance of these conversations, but the relatively consistent patterns in how people avoid engaging [...]