
EA Forum Podcast (Curated & popular)
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Latest episodes

Feb 13, 2025 • 12min
“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by Garrison
This is a link post. This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work. Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI." Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps. OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." [...]

---
Outline:
(02:44) The control premium
(04:19) Conversion significance
(05:45) Musk's suit
(09:26) The stakes
---
First published:
February 11th, 2025
Source:
https://forum.effectivealtruism.org/posts/7iopGPmtEmubSFSP3/why-did-elon-musk-just-offer-to-buy-control-of-openai-for
---
Narrated by TYPE III AUDIO.

Feb 4, 2025 • 6min
“Leadership change at the Center on Long-Term Risk” by JesseClifton, Tristan Cook, Mia_Taylor
The Center on Long-Term Risk (CLR) does research and community building aimed at reducing s-risk. Jesse Clifton is stepping down as CLR's Executive Director. He’ll be succeeded by Tristan Cook as Managing Director and Mia Taylor as Interim Research Director.[1]

Statement from Jesse
Over the past year or so, I’ve become increasingly convinced by arguments that we are clueless about the sign (in terms of expected total suffering reduced) of interventions aimed at reducing s-risk. (And I think it's plausible that we should consider ourselves clueless about interventions aimed at improving expected total welfare, generally.) The other researchers on CLR's Conceptual Research team[2] have come to a similar view,[3] but not the other staff or the board, who are still positive on the pre-cluelessness priorities. Given this, I don’t think it makes sense for me to lead CLR. So, for now, I’ll be transitioning to working [...]

---
Outline:
(00:25) Statement from Jesse
(03:06) Statement from Mia and Tristan

The original text contained 6 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://forum.effectivealtruism.org/posts/YE3tdpE6JdiWRqqKx/leadership-change-at-the-center-on-long-term-risk
---
Narrated by TYPE III AUDIO.

Feb 4, 2025 • 9min
“Climate Change Is Worse Than Factory Farming” by EA Forum Team
This is a link post. Note: This post was crossposted from the United States of Exception Substack by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

Image caption: A good and wholesome K-strategist.

I am a climate change catastrophist, but I’m not like all the others. I don’t think climate change is going to wipe out all life on Earth (as 35% of Americans say they believe) or end the human race (as 31% believe). Nor do I think it's going to end human life on Earth but that human beings will continue to exist somewhere else in the universe (which at least 4% of Americans would logically have to believe). Nevertheless, I think global warming is among the worst things in the world — if not #1 — and addressing it should be among our top priorities. Friend of the blog [...]

---
First published:
January 28th, 2025
Source:
https://forum.effectivealtruism.org/posts/gBSmkRjYLcAvNPoDs/climate-change-is-worse-than-factory-farming
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Jan 29, 2025 • 26min
“The Game Board has been Flipped: Now is a good time to rethink what you’re doing” by LintzA
Recent developments in AI are prompting a critical reassessment of safety strategies. Topics include the implications of tight timelines and the impact of the Trump presidency on AI governance. The discussion explores new compute paradigms like OpenAI's o1 and budget shifts in AI data centers. With mainstream discourse lacking consideration for AI risks, the episode emphasizes what safety-focused individuals should prioritize going forward. The potential effects on US-China competition are also examined, questioning traditional methods and strategies.

Jan 28, 2025 • 9min
“The Upcoming PEPFAR Cut Will Kill Millions, Many of Them Children” by Omnizoid
Edit 1/29: Funding is back, baby! Crossposted from my blog. (This could end up being the most important thing I’ve ever written. Please like and restack it—if you have a big blog, please write about it). A mother holds her sick baby to her chest. She knows he doesn’t have long to live. She hears him coughing—those body-wracking coughs—that expel mucus and phlegm, leaving him desperately gasping for air. He is just a few months old. And yet that's how old he will be when he dies. The aforementioned scene is likely to become increasingly common in the coming years. Fortunately, there is still hope. Trump recently signed an executive order shutting off almost all foreign aid. Most terrifyingly, this included shutting off the PEPFAR program—the single most successful foreign aid program in my lifetime. PEPFAR provides treatment and prevention of HIV and AIDS—it has saved about [...] ---
First published:
January 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/BRqBvkjskZ6c2G6rn/the-upcoming-pepfar-cut-will-kill-millions-many-of-them
---
Narrated by TYPE III AUDIO.

Jan 28, 2025 • 2min
“GiveWell raised less than its 10th percentile forecast in 2023” by Rasool
In 2023[1] GiveWell raised $355 million: $100 million from Open Philanthropy, and $255 million from other donors. In their post on 10th April 2023, GiveWell forecast the amount they expected to raise in 2023, albeit with wide confidence intervals, and stated that their 10th percentile estimate for total funds raised was $416 million, and their 10th percentile estimate for funds raised outside of Open Philanthropy was $260 million.

                              10th percentile estimate   Median estimate   Amount raised
Total                         $416 million               $581 million      $355 million
Excluding Open Philanthropy   $260 million               $330 million      $255 million

Regarding Open Philanthropy, the April 2023 post states that it "tentatively plans to give $250 million in 2023"; however, Open Philanthropy gave a grant of $300 million to cover 2023-2025, to be split however GiveWell saw fit, and GiveWell used $100 million of that grant in 2023. For other donors, I'm not sure what caused the missed estimate. Credit to 'Arnold' on GiveWell's December 2024 Open Thread for [...]

The original text contained 2 footnotes which were omitted from this narration.
---
First published:
January 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/RdbDH4T8bxWwZpc9h/givewell-raised-less-than-its-10th-percentile-forecast-in
---
Narrated by TYPE III AUDIO.

Jan 27, 2025 • 15min
“In defense of the certifiers” by LewisBollard
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post. They’re imperfect agents of change The world's three largest animal welfare groups are under attack. Their antagonists are not factory farmers, but other animal groups. And the ASPCA, HSUS, and RSPCA stand accused not of hurting farmers, but of hurting animals, through their work with GAP and RSPCA Assured, which certify animal products as being less cruelly produced. The attacks began last summer when the UK animal rights group Animal Rising released a report and footage showing abuses on RSPCA Assured farms. They’ve since forced the RSPCA to cancel its 200th year celebrations, plastered portraits of RSPCA patron King Charles, and persuaded the ceremonial president and two vice-presidents of the RSPCA to resign in protest. [...] ---
First published:
January 24th, 2025
Source:
https://forum.effectivealtruism.org/posts/np6vRZvsWgF5rq5W7/in-defense-of-the-certifiers
---
Narrated by TYPE III AUDIO.

Jan 24, 2025 • 2min
“Preparing Effective Altruism for an AI-Transformed World” by Tobias Häberli
In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future. In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought to how to support non-AI causes. This has often led to discussions being framed around "AI Safety vs. everything else", and it feels like the community is somewhat divided along the following lines:

Those working on AI Safety, because they believe that transformative AI is coming.
Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]

Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that? If we [...]

The original text contained 2 footnotes which were omitted from this narration.
---
First published:
January 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/psNGNSoJpXRodmDSg/preparing-effective-altruism-for-an-ai-transformed-world
---
Narrated by TYPE III AUDIO.

Jan 19, 2025 • 23min
“What are we doing about the EA Forum? (Jan 2025)” by Sarah Cheng
This post is my personal perspective. I’m sure that my colleagues on the Forum Team and at CEA disagree with parts of this. However, since I am the Interim EA Forum Project Lead, I recognize that my opinions and beliefs carry extra weight. I’m very happy to receive feedback and pushback from others, since I believe that my decisions matter a fair amount. You’re welcome to reply to this post, DM me, find me at EAG Bay Area, contact our team, or leave our team anonymous feedback here. When I took the role of Interim EA Forum Project Lead in late August 2024, I spent some time investigating where the Forum was at and thinking about what (if anything) our team should prioritize working on. Over the course of 2024 (and indeed, since early 2023), Forum usage metrics have steadily gone down.[1] My subjective opinion was that the [...]

---
Outline:
(01:21) The Forum Team as community builders
(05:41) What does the best version of the Forum community look like?
(07:23) We're not there yet
(09:50) What is the Forum Team doing?
(12:01) What are we not doing?
(13:00) How you can help
(14:31) Appendix: The value of the Forum

The original text contained 27 footnotes which were omitted from this narration.
---
First published:
January 13th, 2025
Source:
https://forum.effectivealtruism.org/posts/wpDGEXjAtHJa2eCFA/what-are-we-doing-about-the-ea-forum-jan-2025
---
Narrated by TYPE III AUDIO.

Jan 19, 2025 • 7min
“What I’m celebrating from EA and adjacent work in 2024” by Emma Richter🔸
As 2024 draws to a close, I’m reflecting on the work and stories that inspired me this year: those from the effective altruism community, those I found out about through EA-related channels, and those otherwise related to EA. I’ve appreciated the celebration of wins and successes over the past few years from @Shakeel Hashim's posts in 2022 and 2023. As @Lizka and @MaxDalton put very well in a post in 2022: We often have high standards in effective altruism. This seems absolutely right: our work matters, so we must constantly strive to do better. But we think that it's really important that the effective altruism community celebrate successes: If we focus too much on failures, we incentivize others/ourselves to minimize the risk of failure, and we will probably be too risk averse. We're humans: we're more motivated if we celebrate things that have gone well. Rather than attempting [...]

---
Outline:
(01:54) What progress in the world did you find exciting?
(03:14) What individual stories inspired you?
(04:29) What popular media or articles did you appreciate?
(05:40) What writing from this year did you appreciate or find compelling?
(06:19) What made you grateful or excited to be involved in or related to effective altruism?
---
First published:
December 31st, 2024
Source:
https://forum.effectivealtruism.org/posts/SkfMyerJ5bGK7scnW/what-i-m-celebrating-from-ea-and-adjacent-work-in-2024
---
Narrated by TYPE III AUDIO.