TL;DR:
- Most “AGI ban” proposals define AGI by outcome: whatever potentially leads to human extinction. That's legally insufficient: regulation has to act before harm occurs, not after.
- Strict liability is essential. High-stakes domains (health & safety, product liability, export controls) already impose liability for risky precursor states, not outcomes or intent. AGI regulation must do the same.
- Fuzzy definitions won’t work here. Courts can tolerate ambiguity in ordinary crimes because errors aren’t civilisation-ending and penalties bite. An AGI ban will likely follow the EU AI Act model (civil fines, ex post enforcement), which companies can Goodhart around. We cannot afford an “80% avoided” ban.
- Define crisp thresholds. Nuclear treaties succeeded by banning concrete precursors (zero-yield tests, 8kg plutonium, 25kg HEU, 500kg/300km delivery systems), not by banning “extinction-risk weapons.” AGI bans need analogous thresholds: capabilities like autonomous replication, scalable resource acquisition, and systematic deception.
- Bring lawyers in. If this [...]
---
Outline:
(00:12) TL;DR
(02:07) Why outcome-based AGI ban proposals don't work
(03:52) The luxury of defining the thing ex post
(05:43) Actually defining the thing we want to ban
(08:06) Credible bans depend on bright lines
(08:44) Learning from nuclear treaties
The original text contained 2 footnotes which were omitted from this narration.
---
First published: September 20th, 2025
Source: https://www.lesswrong.com/posts/agBMC6BfCbQ29qABF/the-problem-with-defining-an-agi-ban-by-outcome-a-lawyer-s
---
Narrated by TYPE III AUDIO.