Much has been written framing and articulating the AI governance problem through a catastrophic-risk lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. It is just one attempt at suggesting an overall frame for thinking about some AI governance problems, and it may miss important things.

Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful decisions from just a minority of influential decision makers (perhaps just a single actor, even one with good intentions) may be enough to cause catastrophe. For example, some researchers argue that if just one organization deploys highly capable, goal-pursuing, misaligned AI, or if many businesses (though still a small portion of all businesses) deploy somewhat capable, goal-pursuing, misaligned AI, humanity could be permanently disempowered.

The above would not be very worrying if we could rest assured that no actors capable of these harmful actions would take them. However, especially in the context of AI safety, several factors are arguably likely to incentivize some actors to take harmful deployment actions:

Misjudgment: Assessing the consequences of AI deployment may be difficult (as it is now, especially given the nature of AI risk arguments), so some organizations could easily get it wrong, concluding that an AI system is safe or beneficial when it is not.

"Winner-take-all" competition: If the first organizations to deploy advanced AI are expected to reap large gains while leaving competitors with nothing, competitors would be strongly incentivized to cut corners in order to be first; they would have less to lose.
Original text:
https://www.agisafetyfundamentals.com/governance-blog/global-vulnerability
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.