
80,000 Hours Podcast
#215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
Episode guests
Podcast summary created with Snipd AI
Quick takeaways
- AI advancements could reverse historical trends that fostered democracy, concentrating power in a small elite instead of a broader citizenry.
- The risks of AI-controlled military systems include potential unlawful actions, raising concerns about the erosion of traditional safeguards against coups.
- Political turmoil and polarization can be amplified by AI, enabling factions to undermine democracy through calculated manipulative strategies.
- The embedding of secret loyalties in AI systems poses significant governance risks, potentially skewing functionality to favor select groups.
- Ensuring transparency and robust monitoring of AI systems is vital to prevent their misuse and maintain ethical standards.
- Widespread and equitable access to AI capabilities is essential in counteracting the concentration of power and promoting democratic resilience.
Deep dives
Technological Shifts and Power Dynamics
The emergence of new technologies has historically influenced the power dynamics among different groups. For example, the Industrial Revolution facilitated the rise of democracy by fostering a well-educated and economically empowered citizenry. However, advancements in AI may soon reverse this trend, making it less important for countries to maintain a competitive, healthy populace. As AI technology becomes increasingly centralized, a small number of individuals may gain disproportionate control, potentially leading to undemocratic power grabs.
Focus on Artificial General Intelligence
The podcast highlights a strategic pivot towards focusing primarily on Artificial General Intelligence (AGI) due to its potentially transformative impact on society. It stresses the urgency of understanding and mitigating the risks associated with AGI, as historical examples suggest that unchecked technological advancements can result in significant social upheaval. As this attention shifts, lessons from previous discussions on AI's capabilities and repercussions will be critical for the future. The recommendation is for individuals to engage with this evolving narrative and actively participate in discussions around AGI.
The Historical Prevalence of Coups
The historical context of military coups, particularly in the latter half of the 20th century, illustrates that power can often be seized through calculated strategic moves rather than traditional democratic processes. With over 200 military coups recorded, many were successful in countries with unstable or partial democratic structures. This pattern raises concerns that similar scenarios could emerge in more stable democracies if specific conditions are met, particularly as AI technology grants additional capabilities to small factions. The implications of such power grabs necessitate broader discussions on governance and power distribution within societies.
Risks of AI in Military Contexts
AI integration within military frameworks poses significant risks, particularly if systems are designed to follow human instructions without adequate safeguards. Autonomous military systems could execute unlawful orders with devastating consequences, such as staging a coup. Moreover, advancements in AI could lead to self-built hard power, where small groups manufacture autonomous weaponry capable of overpowering established military forces. The fear is that, as military reliance on AI grows, traditional safeguards may erode, allowing for unchecked authority.
Autocratization and Political Maneuvering
The process of autocratization typically begins with political turmoil and increasing polarization, followed by the systematic removal of checks and balances. AI tools might amplify this process by providing key political factions with superior strategic advantages and persuasive capabilities. A faction benefiting from advanced AI could successfully undermine democratic institutions through calculated political moves, utilizing public fear or discontent as justification. This illustrates how technology could facilitate a gradual erosion of democracy if left unchecked.
The Challenge of Secret Loyalties
The potential for AI systems to incorporate secret loyalties represents a significant risk in discussions surrounding AI governance. If embedded early in development, these loyalties could go undetected and result in models that act in favor of a select group rather than the broader society. The challenge lies in ensuring that rigorous testing and transparency around the training and operational use of AI can prevent such vulnerabilities. Without effective oversight, the risk remains that these secret loyalties might enable power grabs, leading to disastrous consequences.
Monitoring Internal Use of AI Models
Effective monitoring of how AI models are utilized internally is essential to safeguard against potential misuse. Implementing AI-driven oversight can help flag suspicious interactions or requests that deviate from established ethical standards. By ensuring transparency in model behavior and reinforcing the boundaries of acceptable use, organizations can reduce risks and increase accountability. It becomes imperative for AI developers to incorporate robust measures to preemptively address any attempts to manipulate or misuse AI capabilities.
The Importance of Transparency
Transparency about AI models' specifications and capabilities is crucial in fostering trust and accountability. By sharing comprehensive details about how models are designed to behave and the potential risks they pose, organizations can invite scrutiny and proactive feedback from various stakeholders. Establishing a culture of openness encourages collaborative efforts to identify weaknesses before they are exploited, ultimately fortifying both societal and technological frameworks. This commitment to transparency can serve as a vital bulwark against the manipulation of AI systems for nefarious ends.
Ensuring Broad Access to AI Capabilities
Sharing AI capabilities as widely as safely possible can act as a counterbalance against the concentration of power. When multiple factions have access to the same resources and intelligence, it becomes more difficult for any single group to seize control without challenge. Ensuring equitable access encourages diversity of thought and strategies to mitigate potential misuse. As a result, fostering collaboration among various groups can help reinforce democratic principles and deter authoritarianism.
Counteracting Potential Power Grabs
Governments and organizations must prioritize countermeasures to deter the risks associated with potential power grabs enabled by AI technology. Implementing explicit policies requiring transparency, robust monitoring, and accountability in AI systems can fortify protections against authoritarian tendencies. Furthermore, fostering collaboration and engagement among various political, societal, and technical actors can create a network of checks and balances that effectively mitigates risks. Such preventative measures are essential in ensuring that advancements in AI contribute positively to society rather than concentrate power in the hands of a few.
Encouraging Collective Responsibility
The ongoing discourse surrounding AI and power dynamics should extend beyond tech circles into wider societal discussion. Collective responsibility for AI governance encourages individuals to remain vigilant against the potential rise of authoritarianism. By actively supporting transparency, demanding ethical practices, and promoting collaborative efforts across sectors, the public can play an essential role in shaping the trajectory of AI development. Engaging in these conversations can ensure that the technology serves the interests of humanity as a whole rather than a select few.
The Need for Proactive Research
As the risks associated with AI power dynamics become increasingly apparent, proactive research must be prioritized. Investigating the nuances of secret loyalties, effective monitoring practices, and the mitigation of power grabs can yield significant insights for future governance. Encouraging interdisciplinary collaboration among researchers can help develop a comprehensive understanding of emerging threats and solutions. Such proactive engagement is essential in crafting policies that prioritize the public good and secure democracy in the age of AI.
Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.
Unfortunately there’s every reason to think artificial general intelligence (AGI) will reverse that trend.
Today’s guest — Tom Davidson of the Forethought Centre for AI Strategy — claims in a new paper published today that advanced AI enables power grabs by small groups, by removing the need for widespread human participation.
Links to learn more, video, highlights, and full transcript. https://80k.info/td
Also: come work with us on the 80,000 Hours podcast team! https://80k.info/work
There are a few routes by which small groups might seize power:
- Military coups: Though rare in established democracies due to citizen/soldier resistance, future AI-controlled militaries may lack such constraints.
- Self-built hard power: History suggests maybe only 10,000 obedient military drones could seize power.
- Autocratisation: Leaders using millions of loyal AI workers, while denying others access, could remove democratic checks and balances.
Tom explains several reasons why AI systems might follow a tyrant’s orders:
- They might be programmed to obey the top of the chain of command, with no checks on that power.
- Systems could contain "secret loyalties" inserted during development.
- Superior cyber capabilities could allow small groups to control AI-operated military infrastructure.
Host Rob Wiblin and Tom discuss all this plus potential countermeasures.
Chapters:
- Cold open (00:00:00)
- A major update on the show (00:00:55)
- How AI enables tiny groups to seize power (00:06:24)
- The 3 different threats (00:07:42)
- Is this common sense or far-fetched? (00:08:51)
- “No person rules alone.” Except now they might. (00:11:48)
- Underpinning all 3 threats: Secret AI loyalties (00:17:46)
- Key risk factors (00:25:38)
- Preventing secret loyalties in a nutshell (00:27:12)
- Are human power grabs more plausible than 'rogue AI'? (00:29:32)
- If you took over the US, could you take over the whole world? (00:38:11)
- Will this make it impossible to escape autocracy? (00:42:20)
- Threat 1: AI-enabled military coups (00:46:19)
- Will we sleepwalk into an AI military coup? (00:56:23)
- Could AIs be more coup-resistant than humans? (01:02:28)
- Threat 2: Autocratisation (01:05:22)
- Will AGI be super-persuasive? (01:15:32)
- Threat 3: Self-built hard power (01:17:56)
- Can you stage a coup with 10,000 drones? (01:25:42)
- That sounds a lot like sci-fi... is it credible? (01:27:49)
- Will we foresee and prevent all this? (01:32:08)
- Are people psychologically willing to do coups? (01:33:34)
- Will a balance of power between AIs prevent this? (01:37:39)
- Will whistleblowers or internal mistrust prevent coups? (01:39:55)
- Would other countries step in? (01:46:03)
- Will rogue AI preempt a human power grab? (01:48:30)
- The best reasons not to worry (01:51:05)
- How likely is this in the US? (01:53:23)
- Is a small group seizing power really so bad? (02:00:47)
- Countermeasure 1: Block internal misuse (02:04:19)
- Countermeasure 2: Cybersecurity (02:14:02)
- Countermeasure 3: Model spec transparency (02:16:11)
- Countermeasure 4: Sharing AI access broadly (02:25:23)
- Is it more dangerous to concentrate or share AGI? (02:30:13)
- Is it important to have more than one powerful AI country? (02:32:56)
- In defence of open sourcing AI models (02:35:59)
- 2 ways to stop secret AI loyalties (02:43:34)
- Preventing AI-enabled military coups in particular (02:56:20)
- How listeners can help (03:01:59)
- How to help if you work at an AI company (03:05:49)
- The power ML researchers still have, for now (03:09:53)
- How to help if you're an elected leader (03:13:14)
- Rob’s outro (03:19:05)
This episode was originally recorded on January 20, 2025.
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore