This is a link post.
The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.
However, there's a more recent dynamic that's created even better funding opportunities, which I witnessed as a recommender in the most recent SFF grant round.[1]
The majority (>50%) of philanthropic AI safety funding (as opposed to government or industry funding) comes from one source: Good Ventures. But they’ve recently stopped funding several categories of work (my own categories, not theirs):
- Many Republican-leaning think tanks, such as the Foundation for American Innovation.
- “Post-alignment” causes such as digital sentience or regulation of explosive growth.
- The rationality community, including LessWrong, Lightcone, SPARC, CFAR, MIRI.
- High school outreach, such as Non-trivial.
In addition, they are currently not funding (or not fully funding):
- Many non-US think tanks, which don’t want to appear influenced by an American organisation (there's now probably more [...]
The original text contained 2 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
---
First published:
December 21st, 2024
Source:
https://forum.effectivealtruism.org/posts/s9dyyge6uLG5ScwEp/it-looks-like-there-are-some-good-funding-opportunities-in
---
Narrated by TYPE III AUDIO.
---