
LessWrong (30+ Karma) “Help us find founders for new AI safety projects” by lukeprog
Dec 6, 2025
Explore the gaps in AI safety funding that urgently need filling. Discover neglected areas like policy advocacy in underrepresented countries and essential model specifications. Learn about critical needs in AI infosecurity, including confidential computing workflows and detection tools. lukeprog highlights the importance of de-escalation mechanisms and incident tracking. The plan to scale active grant-making involves headhunting founders for impactful projects, while offering a $5,000 referral reward for suitable candidates. Tune in to find out more!
AI Snips
Funding Exists But Gaps Remain
- Coefficient Giving has funded many AI safety projects over the past decade, but major gaps remain in the ecosystem.
- A lack of promising applications, not a lack of funding, explains why many desirable activities have no projects working on them.
Concrete Gaps Across Multiple Domains
- Many high-impact activities lack any significant projects, including policy, infosecurity, monitoring, and treaty verification.
- The podcast lists specific neglected areas to illustrate the breadth of missing work.
Use Active Grant-Making To Fill Gaps
- Coefficient Giving uses active grant-making to scope projects and headhunt founders for critical gaps.
- They recruit founders, match funding to their strengths, and reshape projects around founders' perspectives.
