
LessWrong (Curated & Popular) “MIRI’s 2025 Fundraiser” by alexvermeer
Dec 2, 2025
MIRI's fundraiser aims to raise $6M, reflecting what it describes as an urgent need for responsible AI development. MIRI emphasizes its shift from technical research to public advocacy against the dangers of superintelligence. Its bestselling book has raised awareness, and two dedicated teams now tackle communications and governance. Plans for extensive outreach and the creation of policy recommendations showcase MIRI's proactive strategy to avert catastrophic outcomes. With ambitious fundraising goals, the nonprofit calls for collective action to ensure a safer future.
Lethal Default From Current AI Trajectory
- MIRI asserts that building superintelligence with current techniques would kill everyone on Earth if attempted.
- They view this conclusion as a direct extrapolation from current AI knowledge, evidence, and institutional behavior.
Urgency Of Halting The AI Race
- MIRI believes leading AI labs are explicitly racing to build superintelligence and that the world needs to stop this race.
- They argue international coordination is required to halt development safely.
Leverage Media And Partners To Spread The Message
- Focus communications on explaining the book If Anyone Builds It, Everyone Dies and engaging public audiences through interviews.
- Use existing content to support third parties and respond quickly to news events to amplify the message.