[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)]
TL;DR: Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the current path. This post makes the case that such people should speak out publicly[2] against the current AI R&D regime and in favor of an AGI ban[3]. They should explicitly communicate that a saner world would coordinate not to build existentially dangerous intelligences, at least until we know how to do it in a principled, safe way. They could choose to maintain their political capital by not calling the current AI R&D regime insane, or find a way to lean into this valid persona of “we will either cooperate (if enough others cooperate) or win [...]
---
Outline:
(00:16) TL;DR
(02:02) Quotes
(03:22) The default strategy of marginal improvement from within the belly of a beast
(06:59) Noble intention murphyjitsu
(09:35) The need for a better strategy
The original text contained 8 footnotes which were omitted from this narration.

---
First published: September 19th, 2025
Source: https://www.lesswrong.com/posts/fF8pvsn3AGQhYsbjp/safety-researchers-should-take-a-public-stance

---
Narrated by TYPE III AUDIO.