

EA Forum Podcast (Curated & popular)
EA Forum Team
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Episodes

Mar 1, 2025 • 2min
“How confident are you that it’s preferable for America to develop AGI before China does?” by ScienceMon🔸
The belief that it's preferable for America to develop AGI before China does seems widespread among American effective altruists. Is this belief supported by evidence, or is it just patriotism in disguise? How would you try to convince an open-minded Chinese citizen that it really would be better for America to develop AGI first? Such a person might point out: Over the past 30 years, the Chinese government has done more for the flourishing of Chinese citizens than the American government has done for the flourishing of American citizens. My village growing up lacked electricity, and now I'm a software engineer! Chinese institutions are more trustworthy for promoting the future flourishing of humanity. Commerce in China ditches some of the older ideas of Marxism because it's the means to an end: the China Dream of wealthy communism. As AGI makes China and the world extraordinarily wealthy, we are [...] ---
First published:
February 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/MxPhK4mLRkaFekAmp/how-confident-are-you-that-it-s-preferable-for-america-to
---
Narrated by TYPE III AUDIO.

Feb 25, 2025 • 3min
“Stop calling them labs” by sawyer🔸
Note: This started as a quick take, but it got too long so I made it a full post. It's still kind of a rant; a stronger post would include sources and would have gotten feedback from people more knowledgeable than I. But in the spirit of Draft Amnesty Week, I'm writing this in one sitting and smashing that Submit button. Many people continue to refer to companies like OpenAI, Anthropic, and Google DeepMind as "frontier AI labs". I think we should drop "labs" entirely when discussing these companies, calling them "AI companies"[1] instead. While these companies may have once been primarily research laboratories, they are no longer so. Continuing to call them labs makes them sound like harmless groups focused on pushing the frontier of human knowledge, when in reality they are profit-seeking corporations focused on building products and capturing value in the marketplace. Laboratories do not directly [...] The original text contained 2 footnotes which were omitted from this narration. ---
First published:
February 24th, 2025
Source:
https://forum.effectivealtruism.org/posts/Ap6E2aEFGiHWf5v5x/stop-calling-them-labs
---
Narrated by TYPE III AUDIO.

Feb 25, 2025 • 21min
“Ditching what we are good at: A change of course for Anima International in France” by Keyvan Mostafavi, Anima International
My name is Keyvan, and I lead Anima International's work in France. Our organization went through a major transformation in 2024. I want to share that journey with you. Anima International in France used to be known as Assiettes Végétales (‘Plant-Based Plates’). We focused entirely on introducing and promoting vegetarian and plant-based meals in collective catering. Today, as Anima, our mission is to put an end to the use of cages for laying hens. These changes come after a thorough evaluation of our previous campaign, assessing 94 potential new interventions, making several difficult choices, and navigating emotional struggles. We hope that by sharing our experience, we can help others who find themselves in similar situations. So let me walk you through how the past twelve months have unfolded for us. [Photo: The French team]
Act One: What we did as Assiettes Végétales
Since 2018, we worked with the local [...]
---
Outline:
(01:13) Act One: What we did as Assiettes Végétales
(03:55) Act Two: The moment we realized we needed to measure our impact more precisely
(05:12) Act Three: The evaluation
(07:23) Act Four: Ending our previous campaign
(09:09) Act Five: Searching for a new intervention
(11:30) Act Six: The struggle to choose
(14:11) Act Seven: The strengths of the cage-free campaign
(16:34) Conclusion: Where we stand today
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
February 22nd, 2025
Source:
https://forum.effectivealtruism.org/posts/vfADxsPECqcbd3vs6/ditching-what-we-are-good-at-a-change-of-course-for-anima
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Feb 22, 2025 • 23min
“Teaching AI to reason: this year’s most important story” by Benjamin_Todd
This is a link post. I wrote this to try to explain the key thing going on with AI right now to a broader audience. Feedback welcome. Most people think of AI as a pattern-matching chatbot – good at writing emails, terrible at real thinking. They've missed something huge. In 2024, while many declared AI was reaching a plateau, it was actually entering a new paradigm: learning to reason using reinforcement learning. This approach isn’t limited by data, so it could deliver beyond-human capabilities in coding and scientific reasoning within two years. Here's a simple introduction to how it works, and why it's the most important development that most people have missed.
The new paradigm: reinforcement learning
People sometimes say “chatGPT is just next token prediction on the internet”. But that's never been quite true. Raw next token prediction produces outputs that are regularly crazy. GPT only became useful with the [...]
---
Outline:
(00:51) The new paradigm: reinforcement learning
(02:32) Reasoning models breakthroughs
(04:09) A new rate of progress?
(07:53) Why this is just the beginning
(11:02) Two more accelerants
(16:12) The key thing to watch: AI doing AI research
---
First published:
February 13th, 2025
Source:
https://forum.effectivealtruism.org/posts/ZuWcG3W3rEBxLceWj/teaching-ai-to-reason-this-year-s-most-important-story
---
Narrated by TYPE III AUDIO.

Feb 13, 2025 • 10min
“Using a diet offset calculator to encourage effective giving for farmed animals” by Aidan Alexander, ThomNorman
When we built a calculator to help meat-eaters offset the animal welfare impact of their diet through donations (like carbon offsets), we didn't expect it to become one of our most effective tools for engaging new donors. In this post we explain how it works, why it seems particularly promising for increasing support for farmed animal charities, and what you can do to support this work if you think it's worthwhile. In the comments I’ll also share our answers to some frequently asked questions and concerns some people have when thinking about the idea of an ‘animal welfare offset’.
Background
FarmKind is a donation platform whose mission is to support the animal movement by raising funds from the general public for some of the most effective charities working to fix factory farming. When we built our platform, we directionally estimated how much a donation to each of our [...]
---
Outline:
(00:50) Background
(01:41) What it is and what it isn't
(02:38) How it works
(04:24) Why this is a promising way to encourage effective giving for animals
(06:46) Case study: Bentham's Bulldog
(07:30) How is this actionable for you?
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
February 11th, 2025
Source:
https://forum.effectivealtruism.org/posts/nGQRBWyCAbcEYSyLL/using-a-diet-offset-calculator-to-encourage-effective-giving
---
Narrated by TYPE III AUDIO.

Feb 13, 2025 • 12min
“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by Garrison
This is a link post. This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work. Wow. The Wall Street Journal just reported that, "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI." Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps. OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." [...]
---
Outline:
(02:44) The control premium
(04:19) Conversion significance
(05:45) Musk's suit
(09:26) The stakes
---
First published:
February 11th, 2025
Source:
https://forum.effectivealtruism.org/posts/7iopGPmtEmubSFSP3/why-did-elon-musk-just-offer-to-buy-control-of-openai-for
---
Narrated by TYPE III AUDIO.

Feb 4, 2025 • 6min
“Leadership change at the Center on Long-Term Risk” by JesseClifton, Tristan Cook, Mia_Taylor
The Center on Long-Term Risk (CLR) does research and community building aimed at reducing s-risk. Jesse Clifton is stepping down as CLR's Executive Director. He’ll be succeeded by Tristan Cook as Managing Director and Mia Taylor as Interim Research Director. [1]
Statement from Jesse
Over the past year or so, I’ve become increasingly convinced by arguments that we are clueless about the sign (in terms of expected total suffering reduced) of interventions aimed at reducing s-risk. (And I think it's plausible that we should consider ourselves clueless about interventions aimed at improved expected total welfare, generally.) The other researchers on CLR's Conceptual Research team[2] have come to a similar view,[3] but not the other staff or the board, who are still positive on the pre-cluelessness priorities. Given this, I don’t think it makes sense for me to lead CLR. So, for now, I’ll be transitioning to working [...]
---
Outline:
(00:25) Statement from Jesse
(03:06) Statement from Mia and Tristan
The original text contained 6 footnotes which were omitted from this narration.
---
First published:
January 31st, 2025
Source:
https://forum.effectivealtruism.org/posts/YE3tdpE6JdiWRqqKx/leadership-change-at-the-center-on-long-term-risk
---
Narrated by TYPE III AUDIO.

Feb 4, 2025 • 9min
“Climate Change Is Worse Than Factory Farming” by EA Forum Team
This is a link post. Note: This post was crossposted from the United States of Exception Substack by the Forum team, with the author's permission. The author may not see or respond to comments on this post. [Image caption: A good and wholesome K-strategist.]
I am a climate change catastrophist, but I’m not like all the others. I don’t think climate change is going to wipe out all life on Earth (as 35% of Americans say they believe) or end the human race (as 31% believe). Nor do I think it's going to end human life on Earth but that human beings will continue to exist somewhere else in the universe (which at least 4% of Americans would logically have to believe). Nevertheless, I think global warming is among the worst things in the world — if not #1 — and addressing it should be among our top priorities. Friend of the blog [...] ---
First published:
January 28th, 2025
Source:
https://forum.effectivealtruism.org/posts/gBSmkRjYLcAvNPoDs/climate-change-is-worse-than-factory-farming
---
Narrated by TYPE III AUDIO.

Jan 29, 2025 • 26min
“The Game Board has been Flipped: Now is a good time to rethink what you’re doing” by LintzA
Recent developments in AI are prompting a critical reassessment of safety strategies. Topics include the implications of tight timelines and the impact of the Trump presidency on AI governance. The discussion explores new paradigms like o1 computing and budget shifts in AI data centers. With mainstream discourse lacking consideration for AI risks, the episode emphasizes what safety-focused individuals should prioritize going forward. The potential effects on US-China competition are also examined, questioning traditional methods and strategies.

Jan 28, 2025 • 9min
“The Upcoming PEPFAR Cut Will Kill Millions, Many of Them Children” by Omnizoid
Edit 1/29: Funding is back, baby! Crossposted from my blog. (This could end up being the most important thing I’ve ever written. Please like and restack it—if you have a big blog, please write about it). A mother holds her sick baby to her chest. She knows he doesn’t have long to live. She hears him coughing—those body-wracking coughs—that expel mucus and phlegm, leaving him desperately gasping for air. He is just a few months old. And yet that's how old he will be when he dies. The aforementioned scene is likely to become increasingly common in the coming years. Fortunately, there is still hope. Trump recently signed an executive order shutting off almost all foreign aid. Most terrifyingly, this included shutting off the PEPFAR program—the single most successful foreign aid program in my lifetime. PEPFAR provides treatment and prevention of HIV and AIDS—it has saved about [...] ---
First published:
January 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/BRqBvkjskZ6c2G6rn/the-upcoming-pepfar-cut-will-kill-millions-many-of-them
---
Narrated by TYPE III AUDIO.


