TL;DR: In AI safety, we systematically undervalue founders and field-builders relative to researchers and prolific writers. This status gradient pushes talented would-be founders and amplifiers out of the ecosystem, slows the growth of research orgs and talent funnels, and bottlenecks our capacity to scale the field. We should deliberately raise the status of founders and field-builders and lower the friction of starting and scaling new AI safety orgs.
Epistemic status: A lot of hot takes with less substantiation than I'd like. Also, there is an obvious COI in that I am an AI safety org founder and field-builder.
Coauthored with ChatGPT.
Why boost AI safety founders?
- Multiplier effects: Great founders and field-builders have multiplier effects on recruiting, training, and deploying talent to work on AI safety. At MATS, mentor applications are growing 1.5x/year and scholar applications even faster, but deployed research talent is growing only 1.25x/year. If we want to 10-100x the AI safety field in the next 8 years, we need multiplicative capacity, not just marginal hires; training programs and founders are the primary constraints.
- Anti-correlated attributes: "Founder-mode" is somewhat anti-natural to "AI concern." The cognitive style most attuned to AI catastrophic [...]
---
Outline:
(00:53) Why boost AI safety founders?
(03:42) How did we get here?
(06:13) Potential counter-arguments
(08:45) What should we do?
(09:57) How to become a founder
(10:54) Closing
---
First published:
November 16th, 2025
Source:
https://www.lesswrong.com/posts/yw9B5jQazBKGLjize/ai-safety-undervalues-founders
---
Narrated by TYPE III AUDIO.