We’ve written a new report on the threat of AI-enabled coups.
I think this is a very serious risk – comparable in importance to AI takeover but much more neglected.
In fact, AI-enabled coups and AI takeover have pretty similar threat models. To see this, here's a very basic threat model for AI takeover:
- Humanity develops superhuman AI
- Superhuman AI is misaligned and power-seeking
- Superhuman AI seizes power for itself
And now here's a closely analogous threat model for AI-enabled coups:
- Humanity develops superhuman AI
- Superhuman AI is controlled by a small group
- Superhuman AI seizes power for the small group
While the report focuses on the risk that someone seizes power over a country, I think that similar dynamics could allow someone to take over the world. In fact, if someone wanted to take over the world, their best strategy might well be to first stage an AI-enabled [...]
---
Outline:
(02:39) Summary
(03:31) An AI workforce could be made singularly loyal to institutional leaders
(05:04) AI could have hard-to-detect secret loyalties
(06:46) A few people could gain exclusive access to coup-enabling AI capabilities
(09:46) Mitigations
(13:00) Vignette
The original text contained 2 footnotes which were omitted from this narration.
---
First published: April 16th, 2025
Source: https://www.lesswrong.com/posts/6kBMqrK9bREuGsrnd/ai-enabled-coups-a-small-group-could-use-ai-to-seize-power-1
---
Narrated by TYPE III AUDIO.
---