“It looks like there are some good funding opportunities in AI safety right now” by Benjamin_Todd
Dec 22, 2024
Benjamin Todd, a recommender in the SFF grant round, dives into the dynamic funding landscape for AI safety. He discusses the rapid growth of the AI safety community post-ChatGPT and highlights the challenges in funding that haven't quite kept up. Todd reveals recent trends, notably that Good Ventures has changed its funding priorities, leaving gaps in support for various groups. He suggests promising philanthropic investment opportunities and emphasizes careful research for potential donors. His insights are a must-listen for anyone interested in making an impact in AI safety.
The AI safety sector currently offers unique funding opportunities due to shifts in philanthropic activity creating significant financing gaps for many organizations.
Smaller donors can enhance the effectiveness of AI safety initiatives by diversifying funding sources and supporting promising organizations like SecureBio and CLTR.
Deep dives
Current Funding Landscape in AI Safety
The AI safety sector is currently experiencing a unique funding landscape that presents numerous opportunities for donors. Recent shifts in philanthropic activity have created financing gaps for many organizations, with key players accessing only about 25% of available capital. Furthermore, the recent SFF grant round applied a significantly higher funding bar, opening avenues for new donors to step in and fill these gaps. By diversifying funding sources, smaller donors can play a crucial role in enhancing the effectiveness of AI safety initiatives, particularly given the concentration of funding among a few major foundations.
Specific Recommendations for Donors
Donors looking to make an impactful contribution to AI safety should consider certain organizations that have been highlighted as particularly promising. For instance, SecureBio, focused on AI-related bio-risk, could benefit significantly from increased funding, as could think tanks like CLTR in the UK and the Centre for AI Safety, both of which drive important initiatives in AI governance. Additionally, supporting organizations with proven track records, such as the METR evaluation group, could optimize funding efficacy. By targeting these and other highlighted entities, donors can potentially maximize their impact in an area that is increasingly recognized as crucial.
1. Exploring the Opportunities and Gaps in AI Safety Funding
The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.
However, there's a more recent dynamic that's created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1]
Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures. But they’ve recently stopped funding several categories of work (my own categories, not theirs):
Many Republican-leaning think tanks, such as the Foundation for American Innovation.
“Post-alignment” causes such as digital sentience or regulation of explosive growth.
The rationality community, including LessWrong, Lightcone, SPARC, CFAR, MIRI.
High school outreach, such as Non-trivial.
In addition, they are currently not funding (or not fully funding):
Many non-US think tanks, who don’t want to appear influenced by an American organisation (there's now probably more [...]
The original text contained 2 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.