
EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.
If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
Latest episodes

May 8, 2025 • 1h
“Please Donate to CAIP (Post 1 of 3 on AI Governance)” by Jason Green-Lowe
I am Jason Green-Lowe, the executive director of the Center for AI Policy (CAIP). Our mission is to directly convince Congress to pass strong AI safety legislation. As I explain in some detail in this post, I think our organization has been doing extremely important work, and that we’ve been doing well at it. Unfortunately, we have been unable to get funding from traditional donors to continue our operations. If we don’t get more funding in the next 30 days, we will have to shut down, which will damage our relationships with Congress and make it harder for future advocates to get traction on AI governance. In this post, I explain what we’ve been doing, why I think it's valuable, and how your donations could help. This is the first post in what I expect will be a 3-part series. The first post focuses on CAIP's particular need [...]

---
Outline:
(01:33) OUR MISSION AND STRATEGY
(02:59) Our Model Legislation
(04:17) Direct Meetings with Congressional Staffers
(05:20) Expert Panel Briefings
(06:16) AI Policy Happy Hours
(06:43) Op-Eds & Policy Papers
(07:22) Grassroots & Grasstops Organizing
(09:13) What's Unique About CAIP?
(10:26) OUR ACCOMPLISHMENTS
(10:29) Quantifiable Outputs
(11:21) Changing the Media Narrative
(12:23) Proof of Concept
(13:44) Outcomes -- Congressional Engagement
(18:29) Context
(19:54) OUR PROPOSED POLICIES
(19:58) Mandatory Audits for Frontier AI
(21:23) Liability Reform
(22:32) Hardware Monitoring
(24:11) Emergency Powers
(25:31) Further Details
(25:41) RESPONSES TO COMMON POLICY OBJECTIONS
(25:46) 1. Why not push for a ban or pause on superintelligence research?
(30:17) 2. Why not support bills that have a better chance of passing this year, like funding for NIST or NAIRR?
(32:30) 3. If Congress is so slow to act, why should anyone be working with Congress at all? Why not focus on promoting state laws or voluntary standards?
(35:09) 4. Why would you push the US to unilaterally disarm? Don't we instead need a global treaty regulating AI (or subsidies for US developers) to avoid handing control of the future to China?
(37:24) 5. Why haven't you accomplished your mission yet? If your organization is effective, shouldn't you have passed some of your legislation by now, or at least found some powerful Congressional sponsors for it?
(40:56) OUR TEAM
(41:53) Executive Director
(44:04) Government Relations Team
(45:12) Policy Team
(46:08) Communications Team
(47:29) Operations Team
(48:11) Personnel Changes
(48:49) OUR PLAN IF FUNDED
(51:58) OUR FUNDING SITUATION
(52:02) Our Expenses & Runway
(53:02) No Good Way to Cut Costs
(55:22) Our Revenue
(57:02) Surprise Budget Deficit
(59:00) The Bottom Line
---
First published:
May 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/9uZHnEkhXZjWzia7F/please-donate-to-caip-post-1-of-3-on-ai-governance
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 2min
“Can TikToks communicate AI policy and risk?” by Caitlin Borke
Hi everyone! I’m Caitlin, and I’ve just kicked off a 6-month, full-time career-transition grant to dive deep into AI policy and risk. You can learn more about my work here.

What I’m Building: I’m launching a TikTok and Instagram channel, @AICuriousGirl, to document my journey as I explore AI governance, misalignment, and the more tangible risks like job displacement and misuse. My goal is to strike a tone of skeptical optimism[1], acknowledging the risks of AI while finding ways to mitigate them.

How You Can Help: My friends and roommates have already given me invaluable feedback, and would be thrilled to hear I'm not just relying on them anymore to be reviewers. I’m now seeking technical and policy experts or other communicators who can volunteer 10-15 minutes/week to review draft videos (1-2 minutes in length) and share quick thoughts on clarity, accuracy, and suggestions for tighter storytelling. First [...]

The original text contained 1 footnote which was omitted from this narration.
---
First published:
May 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/WzFiEGzxJWrX9pBaw/can-tiktoks-communicate-ai-policy-and-risk
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 36min
“5 Historical Case Studies for an EA in Decline” by JWS 🔸
Introduction: Before November 2022, Effective Altruism only seemed to know success and growth. Ever since November 2022, the EA movement has only seemed to know criticism and scandal. Some have even gone so far as to declare that EA is dead or dying,[1] or no longer worth standing behind,[2] or otherwise disassociate themselves from the movement even if outside observers would clearly identify them as being 'EA'.[3] This negative environment that EA finds itself in is, I think, indicative of its state as a social movement in decline. I was reflecting on this state of affairs, and thinking about what the prospects for recovery for EA as a movement are, and thought that looking at the historical record for comparable case studies might prove enlightening, and provide an interesting perspective to understand the current tribulations EA is facing. This post is the result of that inquiry. Method: To be open [...]

---
Outline:
(00:11) Introduction
(01:07) Method
(04:42) Edits
(04:59) The Case Studies
(05:02) #1 - New Atheism
(08:19) #2 - Saint-Simonianism
(11:07) #3 - The Technocracy Movement
(15:30) #4 - Moral Re-Armament / Buchmanism
(19:07) #5 - Early Quakerism
(22:28) Honourable Mentions
(26:57) Takeaways and Conclusions
(29:55) A Final Note on Historical Violence and its Modern-Day Relevance

The original text contained 17 footnotes which were omitted from this narration.
---
First published:
May 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/uEWZxbLaT2MpuC5BR/5-historical-case-studies-for-an-ea-in-decline
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 3min
“Work with EAs to Build a Campus” by tyleralterman
Fractal is launching a new program to help local networks turn their group chats and meetups into a walkable campus: places that bring together collaboration and classes with co-living and shared gathering spaces. We're inspired by the history of scenius: tight networks of collaborators which have produced innovations and institutions that we now take for granted (e.g. the Founding Fathers, Bell Labs, Y Combinator, Renaissance Florence). Scenius tends to blossom under particular conditions that we suspect are replicable, namely close proximity and a culture of lively collaboration. Our program is designed to replicate these conditions. The win condition would be that sceniuses pop up all over the world, run by collectives that are working on high-impact projects. We're also excited by the idea that campuses can integrate the work life of ambitious people with a "village" lifestyle that makes it easier to take care of kids and see friends [...]
---
First published:
May 6th, 2025
Source:
https://forum.effectivealtruism.org/posts/kHDxccxF8XRGA2sat/work-with-eas-to-build-a-campus
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 17min
“Speedrunning on-demand bliss and peace for improved productivity, wellbeing, and thinking” by kuhanj
Disclaimer: I wanted to get this post out quickly since a work-compatible, remote meditation retreat I’d strongly recommend is starting soon (Thursday May 15th), and likely won’t be run again online for four months. Many of the benefits and insights I discuss are based on personal experience, and not backed with statistics/science. I plan to write a better-researched piece in the future.

Summary: The jhanas are a set of non-addictive states of extraordinary bliss and peace, typically accessed through meditation (though upregulated breathwork like this may yield quicker results initially). There's an upcoming work-compatible Jhourney retreat (which takes a data-driven, secular approach to jhana meditation) starting next week (May 15). My online Jhourney retreat was easily the best week of my life, and many others have had similar experiences. Many EA friends have also sustainably increased their wellbeing and productivity after doing a Jhourney retreat. I [...]

---
Outline:
(00:37) Summary
(01:56) Jhourney's (data-driven, secular) approach to jhana meditation reliably produces transformative experiences.
(03:14) Large, sustained positive changes can happen quite quickly.
(03:53) People often report significant, lasting benefits to jhana meditation.
(04:55) Downsides
(06:01) Recommendations
(07:03) Benefits of jhana meditation that have lasted over a month post-retreat
(12:29) Additional Content Recommendations
(14:27) Hypotheses for why I had an outlier positive retreat
---
First published:
May 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/ProDMd28DBzqQiefA/speedrunning-on-demand-bliss-and-peace-for-improved
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 8min
[Linkpost] “Why ‘Solving Alignment’ Is Likely a Category Mistake” by Nate Sharpe
This is a link post. A common framing of the AI alignment problem is that it's a technical hurdle to be overcome. A clever team at DeepMind or Anthropic would publish a paper titled "Alignment is All You Need," everyone would implement it, and we'd all live happily ever after in harmonious coexistence with our artificial friends. I suspect this perspective constitutes a category mistake on multiple levels. Firstly, it presupposes that the aims, drives, and objectives of both the artificial general intelligence and what we aim to align it with can be simplified into a distinct and finite set of elements, a simplification I believe is unrealistic. Secondly, it treats both the AGI and the alignment target as if they were static systems. This is akin to expecting a single paper titled "The Solution to Geopolitical Stability" or "How to Achieve Permanent Marital Bliss." These are not problems that [...]

---
Outline:
(01:10) The Problem of Aligned To Whom?
(03:27) The Target is Moving
---
First published:
May 6th, 2025
Source:
https://forum.effectivealtruism.org/posts/hs7hATCkBupePZSj3/why-solving-alignment-is-likely-a-category-mistake
Linkpost URL:https://www.lesswrong.com/posts/wgENfqD8HgADgq4rv/why-solving-alignment-is-likely-a-category-mistake
---
Narrated by TYPE III AUDIO.

May 7, 2025 • 17min
[Linkpost] “AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions [MIRI TGT Research Agenda]” by peterbarnett, Aaron_Scher
This is a link post. We’re excited to release a new AI governance research agenda from the MIRI Technical Governance Team. With this research agenda, we have two main aims: to describe the strategic landscape of AI development and to catalog important governance research questions. We base the agenda around four high-level scenarios for the geopolitical response to advanced AI development. Our favored scenario involves building the technical, legal, and institutional infrastructure required to internationally restrict dangerous AI development and deployment (which we refer to as an Off Switch), which leads into an internationally coordinated Halt on frontier AI activities at some point in the future. This blog post is a slightly edited version of the executive summary. We are also looking for someone to lead our team and work on these problems; please reach out here if you think you’d be a good fit. The default trajectory of AI [...]

---
Outline:
(04:48) Off Switch and Halt
(07:37) US National Project
(09:53) Light-Touch
(12:07) Threat of Sabotage
(14:27) Understanding the World
(15:11) Outlook
---
First published:
May 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/QJygRSc5mriCQS6XH/ai-governance-to-avoid-extinction-the-strategic-landscape
Linkpost URL:https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction
---
Narrated by TYPE III AUDIO.

May 6, 2025 • 8min
“EA as a Tree of Questions” by OGTutzauer🔸
Introduction: As a community builder, I sometimes get into conversations with EA-skeptics that aren't going to sway the person I'm talking to. The Tree of Questions is a tool I use to be more sure of having effective conversations and to identify the crux faster. Much of this is inspired by Scott Alexander's "tower of assumptions" and Benjamin Todd's ideas of The Core of EA. The Tree has a trunk with core ideas almost all EAs accept - ideas without which you have to jump through some very specific hoops in order to agree with standard EA stances - and branches for different cause areas or ideas within EA, such as longtermism. If you reject the trunk, there's no point debating branches. All too often, I find people cutting off a branch of this tree, and then believing they've cut down the entire thing. "I'm not an EA because I don't believe in [...]

---
Outline:
(00:09) Introduction
(00:34) The Tree
(02:02) Altruism
(02:26) Concede
(03:02) Debate briefly
(03:26) Effectiveness
(03:49) Concede
(04:28) Debate briefly
(05:09) Comparability
(05:31) Concede
(06:30) Institutional Trust
(07:19) Further Discussions
---
First published:
May 6th, 2025
Source:
https://forum.effectivealtruism.org/posts/4vkwm7drWB9oLNhbx/ea-as-a-tree-of-questions
---
Narrated by TYPE III AUDIO.

May 6, 2025 • 16min
“The crucible — how I think about the situation with AI” by Owen Cotton-Barratt
The basic situation: The world is wild and terrible and wonderful and rushing forwards so so fast. Modern economies are tremendous things, allowing crazy amounts of coordination. People have got really very good at producing stuff. Long-term trends are towards more affluence, and less violence. The enlightenment was pretty fantastic not just for bringing us better tech, but also more truthseeking, better values, etc. People, on the whole, are basically good — they want good things for others, and they want to be liked, and they want the truth to come out. This is some mix of innate and socially conditioned. (It isn’t universal.) But they also often are put in a tight spot and end up looking out for themselves or those they love. The hierarchy of needs bites. Effective altruism often grows from a measure of privilege. The world is shaped by economics and by incentives and [...]

---
Outline:
(00:11) The basic situation
(01:17) AI enters the picture
(02:50) The crucible
(04:44) Heating up
(05:54) The shape of technology
(07:11) The case for optimism
(09:48) What is needed?
(13:26) Against an overly-narrow focus

The original text contained 1 footnote which was omitted from this narration.
---
First published:
May 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/hJfXMffMaT57odFDv/the-crucible-how-i-think-about-the-situation-with-ai
---
Narrated by TYPE III AUDIO.

May 5, 2025 • 53sec
“Vetted Causes’ 2025 Charity Recommendations” by VettedCauses
Vetted Causes is excited to announce our 2025 charity recommendations:

Animal Legal Defense Fund
Fish Welfare Initiative
Shrimp Welfare Project

Each of these recommended charities has received a published review (linked above) and a $1,000 donation in support of their work. Please join us in recognizing these organizations for their outstanding contributions!
---
First published:
May 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/YpwtbCGnaqo5z67yp/vetted-causes-2025-charity-recommendations
---
Narrated by TYPE III AUDIO.