“Effective altruism in the age of AGI” by William_MacAskill
This post is based on a memo I wrote for this year's Meta Coordination Forum. See also Arden Koehler's recent post, which hits a lot of similar notes.
Summary
The EA movement stands at a crossroads. In light of AI's very rapid progress and the rise of the AI safety movement, some people view EA as a legacy movement set to fade away; others think it should refocus much more heavily on "classic" cause areas like global health and animal welfare.
I argue for a third way: EA should embrace the mission of making the transition to a post-AGI society go well, significantly expanding its cause-area focus beyond traditional AI safety. This means working on neglected areas like AI welfare, AI character, AI persuasion and epistemic disruption, human power concentration, space governance, and more (while continuing work on global health, animal welfare, AI safety, and biorisk).
These additional [...]
---
Outline:
(00:20) Summary
(02:38) Three possible futures for the EA movement
(07:07) Reason #1: Neglected cause areas
(10:49) Reason #2: EA is currently intellectually adrift
(13:08) Reason #3: The benefits of EA mindset for AI safety and biorisk
(14:53) This isn't particularly Will-idiosyncratic
(15:57) Some related issues
(16:10) Principles-first EA
(17:30) Cultivating vs growing EA
(21:27) PR mentality
(24:48) What I'm not saying
(28:31) What to do?
(29:00) Local groups
(31:26) Online
(35:18) Conferences
(36:05) Conclusion
---
First published:
October 10th, 2025
Source:
https://forum.effectivealtruism.org/posts/R8AAG4QBZi5puvogR/effective-altruism-in-the-age-of-agi
---
Narrated by TYPE III AUDIO.
---