
EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.
If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
Latest episodes

Jun 30, 2025 • 12min
“Don’t Eat Honey” by Bentham’s Bulldog
Crosspost from my blog. (I think this is a pretty important article so I’d appreciate you sharing and restacking it—thanks!) There are lots of people who say of themselves “I’m vegan except for honey.” This is a bit like someone saying “I’m a law-abiding citizen, never violating the law, except sometimes I’ll bring a young boy to the woods and slay him.” These people abstain from all the animal products except honey, even though honey is by far the worst of the commonly eaten animal products. Now, this claim sounds outrageous. Why do I think it's worse to eat honey than beef, eggs, chicken, dairy, and even foie gras? Don’t I know about the months-long torture process needed to fatten up ducks sold for foie gras? Don’t I know about the fact that they grind up baby male chicks in the egg industry and keep the females in [...] ---
First published:
June 30th, 2025
Source:
https://forum.effectivealtruism.org/posts/eyRwnes5hDT734GWq/don-t-eat-honey
---
Narrated by TYPE III AUDIO.

Jun 30, 2025 • 1min
“If you want to be vegan but you worry about health effects of no meat, consider being vegan except for mussels/oysters” by Kat Woods
1) They're unlikely to be sentient (few neurons, immobile)
2) If they are sentient, the farming practices look likely to be pretty humane
3) They're extremely nutritionally dense
Buying canned smoked oysters/mussels and eating them plain or on crackers is super easy and cheap. It's an acquired taste for some, but I love them.
---
First published:
June 30th, 2025
Source:
https://forum.effectivealtruism.org/posts/BM38x7QCYrf7MGN5D/if-you-want-to-be-vegan-but-you-worry-about-health-effects
---
Narrated by TYPE III AUDIO.

Jun 30, 2025 • 5min
“Who Are Your EA Role Models?” by EAvalues🔸
Learning from Inspiring Figures in Our Community
Background
This post emerged from discussions during the EA values project, where we observed that many community members cite specific individuals (whether EA founders, organization leaders, or mentors) as key influences in their journey into effective altruism. Understanding who inspires us and why can help us identify the values and approaches that make EA compelling to newcomers and sustaining for existing members. Historical movements have often been shaped by individuals who embodied their core principles in compelling ways. From Gandhi's commitment to nonviolence to scientists like Marie Curie who persevered despite systemic barriers, these figures serve not just as leaders but as concrete examples of abstract values in action.
The Value of Role Models in EA
Role models serve several important functions: They make abstract EA principles concrete and relatable. They demonstrate how EA values translate into career and life decisions. [...]
---
Outline:
(00:10) Learning from Inspiring Figures in Our Community
(00:15) Background
(01:00) The Value of Role Models in EA
(01:46) Categories for Discussion
(02:42) Some Starting Examples
(03:30) Guidelines for discussions and comments
(04:02) The hope moving forward
---
First published:
June 28th, 2025
Source:
https://forum.effectivealtruism.org/posts/EuGiePm7dp9qKi9cR/who-are-your-ea-role-models
---
Narrated by TYPE III AUDIO.

Jun 30, 2025 • 5min
“Welfare tech should be developed by welfare people” by Aaron Boddy🔸
This is a lightly-edited memo I wrote for the 2025 Animal Advocacy Strategy Forum, where memos were encouraged to be highly opinionated to generate strategy discussion. It seems strange to me that animal advocates rely on animal ag to come up with the solutions that we want to see. We identify a problem, we look to see what exists in the world, we try to find the best of those things, and then we try to get them implemented. I would like to see welfare-oriented engineers actively develop the tech that we want to see deployed on farms. For example, Shrimp Welfare Project is trying to get new stunners developed, which I'm excited about [1]. But I think there are a lot of other examples that this would work for. In-ovo sexing seems like another good example of a technology that I'm kind of surprised the animal movement wasn't more [...]
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
June 29th, 2025
Source:
https://forum.effectivealtruism.org/posts/JDDAiMoaeTK6WRNpT/welfare-tech-should-be-developed-by-welfare-people
---
Narrated by TYPE III AUDIO.

Jun 28, 2025 • 25min
“Should EAs pay more attention to Climate Tipping Points? AMOC Collapse as a Case Study” by Rebecca Frank
Climate change doesn't rank as a top EA cause area because it already receives substantial funding, is less neglected than other risks, and—under gradual-warming scenarios—seems unlikely to cause human extinction. I broadly agree with that assessment. Yet some problems under the climate umbrella do fit EA criteria of scale, neglectedness, and tractability. Climate tipping points in particular could trigger catastrophic feedback loops—mass human suffering, great-power conflict, biodiversity collapse, and wild-animal suffering—while still attracting little targeted funding. Conversations with long-time climate experts reinforce that most money still flows to mitigation; far less supports adaptation or contingency planning. From a “maximizing impact at the margins” perspective, work that limits damage once a tipping point is crossed looks unusually cost-effective, even if it is unfashionable in some environmental circles (it can feel like “admitting defeat”). Preparedness and response planning could therefore make a decisive difference. I would welcome the community's perspective [...]
---
Outline:
(02:27) Assessing the Risk of Global Catastrophic Food Failure from AMOC Collapse
(03:53) Introduction
(04:27) Overview of the Atlantic Meridional Overturning Circulation (AMOC)
(07:05) AMOC Weakening and Tipping Potential
(12:59) Impacts
(18:53) Timelines and Early Warnings
(22:15) Recommendations for Resilience and Further Research
(24:03) References
---
First published:
June 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/zCHvZCpyjB8rchyye/untitled-draft-m599
---
Narrated by TYPE III AUDIO.

Jun 28, 2025 • 2min
“You should update on how DC is talking about AI” by Abby Babby
If you are planning on doing AI policy communications to DC policymakers, I recommend watching the full video of the Select Committee on the CCP hearing from this week. In his introductory comments, Ranking Member Representative Krishnamoorthi played a clip of Neo fighting an army of Agent Smiths, described it as misaligned AGI fighting humanity, and then announced he was working on a bill called "The AGI Safety Act" which would require AI to be aligned to human values. On the Republican side, Congressman Moran articulated the risks of AI automated R&D, and how dangerous it would be to let China achieve this capability. Additionally, 250 policymakers (half Republican, half Democrat) signed a letter saying they don't want the Federal government to ban state level AI regulation. The Overton window is rapidly shifting in DC, and I think people should re-evaluate what the [...] ---
First published:
June 27th, 2025
Source:
https://forum.effectivealtruism.org/posts/RPYnR7c6ZmZKBoeLG/you-should-update-on-how-dc-is-talking-about-ai
---
Narrated by TYPE III AUDIO.

Jun 27, 2025 • 29min
“Morality Isn’t Objective” by Noah Birnbaum
In response to Matthew's post about the objectivity of morality, I thought I'd throw out a(n initially -- this became much longer than I was expecting) short post explaining why I think this view is pretty implausible. I have another post on my Substack that shows why I think some other arguments for moral realism fail (though I'm not sure I endorse everything I argue there anymore...). If you like this one, check it out! Note: The formatting of this article got a little messed up (inconsistent ordering of the numbered and lettered bullets, etc.), but it should still be understandable and fairly readable. Also, I didn't spend much time editing because I wanted to get this out, so apologies in advance.
Evolutionary debunking arguments: I see there being two main versions of these arguments: Even if moral realism were true, there is [...]
---
First published:
June 26th, 2025
Source:
https://forum.effectivealtruism.org/posts/n4cNbmAELuKkxD5T2/morality-isn-t-objective
---
Narrated by TYPE III AUDIO.

Jun 27, 2025 • 21min
[Linkpost] “Reducing suffering given long-term cluelessness” by Magnus Vinding
This is a link post. An objection against trying to reduce suffering is that we cannot predict whether our actions will reduce or increase suffering in the long term. Relatedly, some have argued that we are clueless about the effects that any realistic action would have on total welfare, and this cluelessness, it has been claimed, undermines our reason to help others in effective ways. For example, DiGiovanni (2025) writes: “if my arguments [about cluelessness] hold up, our reason to work on EA causes is undermined.” There is a grain of truth in these claims: we face enormous uncertainty when trying to reduce suffering on a large scale. Of course, whether we are bound to be completely clueless about the net effects of any action is a much stronger and more controversial claim (and one that I am not convinced of). Yet my goal here is not to discuss the [...]
---
Outline:
(01:50) A potential approach: Giving weight to scope-adjusted views
(04:40) Asymmetry in practical recommendations
(05:40) Toy models
(08:25) Justifications and motivations
(08:50) Why give weight to multiple views?
(10:47) Why give weight to a scope-adjusted view?
(17:30) Arguments I have not made
(19:12) Conclusion
(20:01) Acknowledgments
---
First published:
June 26th, 2025
Source:
https://forum.effectivealtruism.org/posts/dq7cHFgJrZSQBcNrN/untitled-draft-68pm
Linkpost URL: https://magnusvinding.com/2025/06/25/reducing-suffering-given-long-term-cluelessness/
---
Narrated by TYPE III AUDIO.

Jun 27, 2025 • 32min
[Linkpost] “The Industrial Explosion” by rosehadshar, Tom_Davidson, Forethought
This is a link post.
Summary
To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion"). AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on. The dynamics of the industrial explosion have gotten remarkably little attention. This post lays out how the industrial explosion could play out, and how quickly it might happen. We think the industrial explosion will unfold in three stages:
1. AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities. We argue this could increase physical output by 10X within a few years.
2. Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour. We argue that, with current physical technology and full automation of cognitive [...]
---
Outline:
(00:13) Summary
(01:46) Intro
(04:34) The industrial explosion will start after the intelligence explosion, and will proceed more slowly
(06:56) Three stages of industrial explosion
(08:00) AI-directed human labour
(09:42) Fully autonomous robot factories
(12:26) Nanotechnology
(13:23) How fast could an industrial explosion be?
(13:58) Initial speed
(16:37) Acceleration
(17:55) Maximum speed
(20:17) Appendices
(20:21) How fast could robot doubling times be initially?
(28:03) How fast could robot doubling times accelerate?
---
First published:
June 26th, 2025
Source:
https://forum.effectivealtruism.org/posts/qgMMSnGWEDJwedEtj/the-industrial-explosion
Linkpost URL: https://www.forethought.org/research/the-industrial-explosion
---
Narrated by TYPE III AUDIO.

Jun 27, 2025 • 12min
“Anchoring AI and Animals” by Kevin Xia 🔸
Many thanks to Max Taylor, Alistair Stewart, Albert Didriksen, Jeff and Johannes Pichler for feedback on this post. All mistakes are my own. This post does not necessarily reflect the views of my employer.
Executive Summary
I believe that AI development could have an outsized impact on animal welfare and that these stakes warrant deep investigation. However, the complexity and uncertainty involved make it difficult to approach this. This post outlines three robust anchor points — concepts that we know and that I've found helpful — for navigating the uncertain and high-stakes intersection of AI development and animal advocacy. By focusing on what we do know, I hope to nudge and provide structure to more strategic research. These anchor points are: Influence and Values: Fundamentally, we want to increase pro-animal values in influential AI-related spheres and amplify the influence of the pro-animal movement itself, through both upskilling animal advocates [...]
---
Outline:
(00:24) Executive Summary
(02:07) Introduction
(04:00) Influence and Values
(06:00) Interest and Alternatives
(09:06) Symmetries and Asymmetries
(09:47) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
June 26th, 2025
Source:
https://forum.effectivealtruism.org/posts/soKsgn6DZmTQ4uoQx/anchoring-ai-and-animals
---
Narrated by TYPE III AUDIO.