
EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.
If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
Latest episodes

May 22, 2025 • 38min
[Linkpost] “The stakes of AI moral status” by Joe_Carlsmith
This is a link post. Podcast version (read by the author) here, or search for "Joe Carlsmith Audio" on your podcast app.
1. Introduction
Currently, most people treat AIs like tools. We act like AIs don’t matter in themselves. We use them however we please. For certain sorts of beings, though, we shouldn’t act like this. Call such beings “moral patients.” Humans are the paradigm example. But many of us accept that some non-human animals are probably moral patients as well. You shouldn’t kick a stray dog just for fun.[1] Can AIs be moral patients? If so, what sorts of AIs? Will some near-term AIs be moral patients? Are some AIs moral patients now? If so, it matters a lot. We’re on track to build and run huge numbers of AIs. Indeed: if hardware and deployment scale fast in a world transformed by AI, AIs could quickly account for most [...]
---
Outline:
(00:19) 1. Introduction
(02:08) 2. Pain
(04:52) 2.1 That
(06:10) 3. Soul-seeing
(08:30) 4. The flesh fair
(12:04) 5. Historical wrongs
(15:46) 6. A few numbers
(19:45) 7. Over-attribution
(22:09) 8. Good manners
(24:42) 9. Is moral patienthood the crux?
(27:18) 10. The measure of a man
(32:05) 11. Next up: consciousness
The original text contained 21 footnotes which were omitted from this narration.
---
First published:
May 21st, 2025
Source:
https://forum.effectivealtruism.org/posts/PHzWQYQiXu3eHFcwM/the-stakes-of-ai-moral-status
Linkpost URL: https://joecarlsmith.substack.com/p/the-stakes-of-ai-moral-status
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

May 21, 2025 • 7min
“A widely shared AI productivity paper was retracted, is possibly fraudulent” by titotal
Confidence notes: I am a physicist working on computational material science, so I have some familiarity with the field, but I don't know much about R&D firms or economics. Some of the links in this article were gathered from a post at pivot-to-ai.com and the BS detector. The paper "Artificial Intelligence, Scientific Discovery, and Product Innovation" was published as an arXiv preprint last December, roughly 5 months ago, and was submitted to a top economics journal. The paper claimed to show the effect of an experiment at a large R&D company. It claimed the productivity of a thousand material scientists was tracked before and after the introduction of a machine learning material generation tool. The headline result was that the AI caused a 44% increase in materials discovery at the firm, with a productivity increase of 81% for top-decile scientists. This research was breathlessly reported on in [...]
The original text contained 1 footnote which was omitted from this narration.
---
First published:
May 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/YwaJxLEZkFtdzDCeD/a-widely-shared-ai-productivity-paper-was-retracted-is
---
Narrated by TYPE III AUDIO.

May 20, 2025 • 18min
[Linkpost] “Where’s my ten minute AGI?” by Vasco Grilo🔸
This is a link post. This is a crosspost for Where's my ten minute AGI? by Hanson Ho, which was originally published on Gradient Updates on 2 May 2025. Recently, METR released a paper arguing that the length of tasks that AIs can do is doubling every 7 months. We can see this in the following graph, where the best AI system[1] is able to do roughly hour-long tasks at a 50% success rate on average: METR's research finds that AIs are rapidly able to do longer and longer tasks, where length is measured by the time it takes for a human with requisite expertise to do the task. But there's a big problem here – if AIs are actually able to perform most tasks on 1-hour task horizons, why don’t we see more real-world task automation? For example, most emails take less than an hour to write, but [...]
---
Outline:
(01:59) 1. Time-horizon estimates are very domain-specific
(04:54) 2. Task reliability strongly influences task horizons
(08:12) 3. Real-world tasks are bundled together and hard to separate out
(10:57) Discussion
The original text contained 9 footnotes which were omitted from this narration.
---
First published:
May 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/5fA7CXc4WK2nTCz3o/where-s-my-ten-minute-agi
Linkpost URL: https://epoch.ai/gradient-updates/where-is-my-ten-minute-agi
---
Narrated by TYPE III AUDIO.

May 20, 2025 • 3min
[Linkpost] “US credit rating downgraded, $1T in Gulf state investments in the US, Kurdistan Workers’ Party disbanded | Sentinel Global Risks Weekly Roundup #20/2025” by NunoSempere
This is a link post. Executive summary Moody's downgraded the US credit rating as the US budget deficit grows and US borrowing costs rise. House Republicans are advancing a budget that would increase the deficit further. Meanwhile, China is reducing its holdings of US Treasuries and its dependencies on foreign components in its supply chains as it seeks to de-risk its economy from the US and the West. Google announced a new coding agent, AlphaEvolve, that has creative problem-solving abilities. Gulf states agreed to more than $1T in US investments and joint enterprises during a visit by Trump to the Middle East, including joint AI ventures. Forecasters estimated a 28% chance (range, 25-30%) that the US will pass a 10-year ban on states regulating AI by the end of 2025. Negotiations between the US and Iran continue. Iran has signaled willingness to limit uranium enrichment and allow inspections in exchange [...] ---
First published:
May 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/BZ8WQBxNZRSBsWgLL/us-credit-rating-downgraded-usd1t-in-gulf-state-investments
Linkpost URL: https://blog.sentinel-team.org/p/global-risks-weekly-roundup-202025-df7
---
Narrated by TYPE III AUDIO.

May 20, 2025 • 3min
[Linkpost] “One Year in DC” by tlevin
This is a link post. (h/t Otis Reid) I think this post captures a lot of important features of the US policymaking system. Pulling out a few especially relevant/broadly applicable sections:
1. There's No Efficient Market For Policy
There can be a huge problem that nobody is working on; that is not evidence that it's not a huge problem. Conversely, there can be a marginal problem swamped with policy work; that's not evidence it's really all that big of a deal. On the upside, this means there are never-ending arbitrage opportunities in policy. Pick your workstreams wisely.
2. Personnel Really Is The Most Important Thing
The quality of staffers varies dramatically and can make or break policy efforts. Some Hill staffers are just awesome; if they like your idea, they'll take it and run with it, try to find the right cosponsors, understand where it fits procedurally, etc. Other staffers [...]
---
First published:
May 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/mucxWxKjpPQpj5FsD/untitled-draft-eid3
Linkpost URL: https://www.greentape.pub/p/one-year-in-dc
---
Narrated by TYPE III AUDIO.

May 19, 2025 • 4min
[Linkpost] “[Funded Fellowship] AI for Human Reasoning Fellowship, with the Future of Life Foundation” by Oliver Sourbut
This is a link post. The Future of Life Foundation is launching a fellowship on AI for Human Reasoning.
Fellowship on AI for Human Reasoning
Apply by June 9th | $25k–$50k stipend | 12 weeks, from July 14 - October 3
Join us in working out how to build a future which robustly empowers humans and improves decision-making.
FLF's incubator fellowship on AI for human reasoning will help talented researchers and builders start working on AI tools for coordination and epistemics. Participants will scope out and work on pilot projects in this area, with discussion and guidance from experts working in related fields. FLF will provide fellows with a $25k–$50k stipend, the opportunity to work in a shared office in the SF Bay Area, and other support.
In some cases we would be excited to provide support beyond the end of the fellowship period, or [...]
---
First published:
May 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/dQWHB2s3frDjXwGFe/funded-fellowship-ai-for-human-reasoning-fellowship-with-the
Linkpost URL: https://www.flf.org/fellowship
---
Narrated by TYPE III AUDIO.

May 18, 2025 • 20min
“‘Most painful condition known to mankind’: A retrospective of the first-ever international research symposium on cluster headache” by Alfredo Parra 🔸
Article 5 of the 1948 Universal Declaration of Human Rights states: "Obviously, no one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment." OK, it doesn’t actually start with "obviously," but I like to imagine the commissioners all murmuring to themselves “obviously” when this item was brought up. I’m not sure what the causal effect of Article 5 (or the 1984 UN Convention Against Torture) has been on reducing torture globally, though the physical integrity rights index (which “captures the extent to which people are free from government torture and political killings”) has increased from 0.48 in 1948 to 0.67 in 2024 (which is good). However, the index reached 0.67 already back in 2001, so at least according to this metric, we haven’t made much progress in the past 25 years. Reducing government torture and killings seems to be low in tractability. Despite many [...]
The original text contained 1 footnote which was omitted from this narration.
---
First published:
May 18th, 2025
Source:
https://forum.effectivealtruism.org/posts/7FvDvMQypyua4kTL5/most-painful-condition-known-to-mankind-a-retrospective-of
---
Narrated by TYPE III AUDIO.

May 18, 2025 • 24min
“Do primitive sentient organisms feel extreme pain? disentangling intensity range and resolution” by Wladimir J. Alonso, cynthiaschuck
Notes
The following text explores, in a speculative manner, the evolutionary question: Did high-intensity affective states, specifically Pain, emerge early in evolutionary history, or did they develop gradually over time? Note: We are not neuroscientists; our work draws on our evolutionary biology background and our efforts to develop welfare metrics that accurately reflect reality and effectively reduce suffering. We hope these ideas may interest researchers in neuroscience, comparative cognition, and animal welfare science. This discussion is part of a broader manuscript in progress, focusing on interspecific comparisons of affective capacities—a critical question for advancing animal welfare science and estimating the Welfare Footprint of animal-sourced products.
Key points
Ultimate question: Do primitive sentient organisms experience extreme pain intensities, or fine-grained pain intensity discrimination, or both? Scientific framing: Pain functions as a biological signalling system that guides behavior by encoding motivational importance. The evolution of Pain signalling—its [...]
---
Outline:
(00:15) Notes
(00:21) Key points
(01:41) Introduction
(04:17) The Function and Evolution of Affective Scales
(09:02) Which is evolutionarily cheaper for a Pain scale: high resolution or wide range?
(09:22) Costs of Increasing Range
(10:12) Costs of Increasing Resolution
(11:35) Trajectories for the Evolution of Range and Resolution
(13:48) Low Resolution, Low Intensity (LrLi): Basic Survival Signals
(14:53) High Resolution, Low Intensity Range (LiHr): Subtle but Mild Signals
(15:35) Low Resolution, High Intensity (HiLr): Strong but Undifferentiated Signals
(17:15) High Resolution, High Intensity (HiHr): Rich and Extreme Signals
(18:16) Tentative Conclusion
(23:19) Acknowledgements
(23:28) References
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
May 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/novnNcFiWaaAvTKEi/do-primitive-sentient-organisms-feel-extreme-pain
---
Narrated by TYPE III AUDIO.

May 18, 2025 • 16min
“Christians Should Be Effective Altruists” by Bentham’s Bulldog
1 The Christian duty to give to the poor (This is a crosspost from my blog). Most Christians aren’t effective altruists and most effective altruists aren’t Christians. But in my view, the reason for this is sociological; there's no deep conflict between the two ideas. Christians should be effective altruists—they should look to give effectively, just as others should. I’ve written a long piece rebutting the main objections to effective altruism. In short, I think the core idea behind effective altruism is very commonsensical: that we should try to do good effectively. Doing this means not just donating or taking whichever career seems good to us, but actually looking at high quality evidence about what does the most good. As Proverbs 12:15 says “The way of fools seems right to them, but the wise listen to advice.” In my view, Christian scripture emphasizes the core tenets of [...]
---
Outline:
(00:10) 1 The Christian duty to give to the poor
(07:17) 2 Focus on effectiveness
(09:38) 3 Give to foreigners
(15:12) 4 Conclusion
---
First published:
May 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/RoENAh9GhkJkcrH4j/christians-should-be-effective-altruists
---
Narrated by TYPE III AUDIO.

May 18, 2025 • 15min
[Linkpost] “What OpenAI Told California’s Attorney General” by Garrison
This is a link post. In a previously unreported letter, the AI company defends its restructuring plan while attacking critics and making surprising admissions. This is the full text of a post first published on Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work. OpenAI was founded as a counter to the perils of letting profit shape the development of an unprecedentedly powerful technology — one its founders have said could lead to human extinction. But in a newly obtained letter from OpenAI lawyers to California Attorney General Rob Bonta, the company reveals what it apparently fears more: anything that slows its ability to raise gargantuan amounts of money. The previously unreported [...]
---
Outline:
(00:12) In a previously unreported letter, the AI company defends its restructuring plan while attacking critics and making surprising admissions
(03:26) Revelations
(04:20) The key question
(05:45) Competitors and critics
(08:34) Employee motivations
(10:10) Contestable claims
(11:30) What's left unsaid
---
First published:
May 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/9oF3GmMae2ssYSXwP/what-openai-told-california-s-attorney-general
Linkpost URL: https://www.obsolete.pub/p/exclusive-what-openai-told-californias
---
Narrated by TYPE III AUDIO.