
EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.
If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
Latest episodes

Jul 7, 2025 • 4min
“The current market price for animal welfare is zero” by Aaron Boddy🔸
This is a lightly-edited memo I wrote for the 2025 Animal Advocacy Strategy Forum, where submissions were encouraged to be highly opinionated to generate strategy discussion. I actually wrote more about Shrimp Welfare Project's exploration of Credits here, but I think this 2-minute pitch for the value of Credits is useful to publish on its own. In 2011, Jayson Lusk published a paper titled The Market for Animal Welfare. The idea, in short, is to create a separate market for animal welfare that is decoupled from the market for eggs, meat, and milk. Farmers have a product they're supplying (animal welfare) that is only indirectly (and poorly) reflected in the price of food. Animal advocacy groups have a product they want to buy (higher animal welfare), but there is currently no mechanism for them to achieve this outcome in a market setting. It is no wonder, then, that they [...] ---
First published:
July 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/7jnRYbZmvf2ngGJG8/the-current-market-price-for-animal-welfare-is-zero
---
Narrated by TYPE III AUDIO.

Jul 7, 2025 • 6min
“Gaslit by humanity” by tobiasleenaert
Hi all, This is a one-time cross-post from my substack. If you like it, you can subscribe to the substack at tobiasleenaert.substack.com. Thanks!

Gaslit by humanity
After twenty-five years in the animal liberation movement, I’m still looking for ways to make people see. I’ve given countless talks, co-founded organizations, written numerous articles and cited hundreds of statistics to thousands of people. And yet, most days, I know none of this will do what I hope: open their eyes to the immensity of animal suffering. Sometimes I feel obsessed with finding the ultimate way to make people understand and care. This obsession is about stopping the horror, but it's also about something else, something harder to put into words: sometimes the suffering feels so enormous that I start doubting my own perception - especially because others don’t seem to see it. It's as if I am being [...] ---
First published:
July 7th, 2025
Source:
https://forum.effectivealtruism.org/posts/28znpN6fus9pohNmy/gaslit-by-humanity
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Jul 7, 2025 • 5min
[Linkpost] “LLMs might already be conscious” by MichaelDickens
This is a link post. Among people who have thought about LLM consciousness, a common belief is something like
LLMs might be conscious soon, but they aren't yet.
How sure are we that they aren't conscious already?
I made a quick list of arguments for/against LLM consciousness, and it seems to me that high confidence in non-consciousness is not justified. I don't feel comfortable assigning less than a 10% chance to LLM consciousness, and I believe a 1% chance is unreasonably confident. But I am interested in hearing arguments I may have missed.
For context, I lean toward the computational theory of consciousness, but I also think it's reasonable to have high uncertainty about which theory of consciousness is correct.
Behavioral evidence
Pro: LLMs have passed the Turing test. If you have a black box containing either a human or an LLM, and [...] ---

Outline:
(00:56) Behavioral evidence
(02:23) Architectural evidence
(03:06) Other evidence
(03:19) My synthesis of the evidence
(04:12) What will change with future AIs?
(04:33) On LLM welfare
---
First published:
July 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/WrLMQjLDbT8nnowGB/untitled-draft-suvg
Linkpost URL: https://mdickens.me/2025/07/05/LLMs_might_already_be_conscious/
---
Narrated by TYPE III AUDIO.

Jul 5, 2025 • 4min
“Buying your way out of personal ethics” by escapealert
I’ve noticed a recurring argument in EA spaces around veganism: “If I donate enough money to effective animal charities, I’ll save more animals than I would by going vegan. So, I don’t need to personally stop consuming animal products.” While this may sound compelling on the surface, I believe it fails for several reasons—both ethically and practically. First-Order Utilitarianism Can Justify Harm This argument relies on a pure first-order utilitarian outlook, where harm is permissible as long as it's “offset” by a greater good. Taken to its logical extreme, this reasoning leads to absurd conclusions: “If I donate $10,000 to save two lives, I’m morally justified in taking one life because it's convenient or enjoyable.” Second-Order Effects: Ethics Become a Privilege for the Wealthy A system where individuals can buy their way out of ethical harm creates an inequitable moral landscape: The Wealthy: Can offset harm without personal [...] ---

Outline:
(00:31) First-Order Utilitarianism Can Justify Harm
(00:57) Second-Order Effects: Ethics Become a Privilege for the Wealthy
(01:41) Personal Sacrifice and Offsetting Aren't Mutually Exclusive
(02:04) Veganism's Signalling Effect: The Power of Visible Ethical Action
---
First published:
July 5th, 2025
Source:
https://forum.effectivealtruism.org/posts/bMbsfx6oDdcgjbpnP/buying-your-way-out-of-personal-ethics
---
Narrated by TYPE III AUDIO.

Jul 4, 2025 • 23min
“How could AI affect different animal advocacy interventions?” by Kevin Xia 🔸, Max Taylor
Many thanks to Alina Salmen, Vince Mak, Constance Li, and Johannes Pichler for feedback on this post. All mistakes are our own. This post does not necessarily reflect the views of our employers.

Introduction
Rapid AI development presents unprecedented opportunities and significant challenges for animal advocacy. AI could either worsen animal suffering by, e.g., making exploitative systems more efficient, or drastically reduce it by enabling new solutions and improving current ones. The stakes are immense: AI could profoundly influence the trajectory of animal welfare at a scale we have not seen before - and it could go in either direction. Understanding these potential shifts now is crucial for developing proactive strategies and ensuring our movement's long-term effectiveness. This piece explores the evolving roles of existing animal advocacy interventions in a post-AI society, looking at how they may change in their nature, feasibility and cost-effectiveness. We don't attempt to assess the [...] ---

Outline:
(00:26) Introduction
(02:01) Common Patterns and Broader Implications
(06:07) Deep Dive into Key Interventions
(21:12) Conclusion
(21:59) Appendix: Other Animal Advocacy Interventions

The original text contained 1 footnote which was omitted from this narration. ---
First published:
July 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/FxDWcTuoH3SuQXD3h/how-could-ai-affect-different-animal-advocacy-interventions
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 2min
“AMA: Saloni Dattani” by Toby Tremlett🔹, salonium
Saloni will answer the questions in this AMA between 6-8pm BST on July 8th. Leave your questions as comments, and upvote other questions you’d like to see answered. If you’ve been around EA for a while, and you’re interested in global health, you’ve probably read Saloni Dattani before. Saloni writes about global health at Our World In Data and is a co-founder and editor of Works in Progress magazine. She's also recently started a podcast, Hard Drugs, with Jacob Trefethen. She also (somehow) finds time to write a great blog, Scientific Discovery. She's recently written on: The decline in cancer mortality, and how it's not all down to decline in smoking. How we calculate fertility rates, and why just calculating the total fertility rate leads us astray. And she delivered a talk at EA Global London on the data that shapes global health. Question ideas: For some question [...] ---
First published:
July 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/i2DtGATx9ZWRcKfAz/ama-saloni-dattani
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 46min
“We should be more uncertain about cause prioritization based on philosophical arguments” by Rethink Priorities, Marcus_A_Davis
Summary In this article, I argue most of the interesting cross-cause prioritization decisions and conclusions rest on philosophical evidence that isn’t robust enough to justify high degrees of certainty that any given intervention (or class of cause interventions) is “best” above all others. I hold this to be true generally because of the reliance of such cross-cause prioritization judgments on relatively weak philosophical evidence. In particular, the case for high confidence in conclusions on which interventions are all things considered best seems to rely on particular approaches to handling normative uncertainty. The evidence for these approaches is weak, and different approaches can produce radically different recommendations, which suggests that cross-cause prioritization intervention rankings or conclusions are fundamentally fragile and that high confidence in any single approach is unwarranted. I think the reliance of cross-cause prioritization conclusions on philosophical evidence that isn’t robust has been previously underestimated in EA circles [...] ---

Outline:
(00:14) Summary
(06:03) Cause Prioritization Is Uncertain and Some Key Philosophical Evidence for Particular Conclusions is Structurally Weak
(06:11) The decision-relevant parts of cross-cause prioritization heavily rely on philosophical conclusions
(09:26) Philosophical evidence about the interesting cause prioritization questions is generally weak
(17:35) Aggregation methods disagree
(21:27) Evidence for aggregation methods is weaker than empirical evidence of which EAs are skeptical
(24:07) Objections and Replies
(24:11) Aren't we here to do the most good? / Aren't we here to do consequentialism? / Doesn't our competitive edge come from being more consequentialist than others in the nonprofit sector?
(25:28) Can't I just use my intuitions or my priors about the right answers to these questions? I agree philosophical evidence is weak so we should just do what our intuitions say
(27:27) We can use common sense / or a non-philosophical approach and conclude which cause area(s) to support. For example, it's common sense that humanity going extinct would be really bad; so, we should work on that
(30:22) I'm an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can't I just endorse whatever views seem best to me?
(31:52) If the evidence in philosophy is as weak as you say, this suggests there are no right answers at all and/or that potentially anything goes in philanthropy. If you can't confidently rule things out, wouldn't this imply that you can't distinguish a scam charity from a highly effective group like Against Malaria Foundation?
(34:08) I have high confidence in MEC (or some other aggregation method) and/or some more narrow set of normative theories so cause prioritization is more predictable than you are suggesting despite some uncertainty in what theories I give some credence to
(41:44) Conclusion (or well, what do I recommend?)
(44:05) Acknowledgements

The original text contained 20 footnotes which were omitted from this narration. ---
First published:
July 3rd, 2025
Source:
https://forum.effectivealtruism.org/posts/nwckstt2mJinCwjtB/we-should-be-more-uncertain-about-cause-prioritization-based
---
Narrated by TYPE III AUDIO.
---

Jul 2, 2025 • 6min
[Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch
This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. “One pump of honey?” the barista asked. “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.” Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle. Bentham Bulldog's Case Against Honey Bentham Bulldog, a young and intelligent [...] ---

Outline:
(01:16) Bentham Bulldog's Case Against Honey
(02:42) Where I agree with Bentham's Bulldog
(03:08) Where I disagree
---
First published:
July 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually
Linkpost URL: https://linch.substack.com/p/eating-honey-is-probably-fine-actually
---
Narrated by TYPE III AUDIO.
---

Jul 2, 2025 • 1min
“1Day Sooner Open Phil Funding Request” by 1Day Sooner
We thought it might be useful to post a (lightly) redacted version of our most recent funding request to Open Philanthropy for our work in 2025-2026. It lays out what we’ve done over the last several years, the lessons we’ve learned, and how we go about accounting for our causal impact in that time. This request led to $3 million in funding from Open Philanthropy. OP has provided about 40% of our funding to date, which is our goal going forward. If you read this request and would like to support our work, please contact us or donate here. ---
First published:
July 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/xERpDXGsQDjrohMi9/1day-sooner-open-phil-funding-request
---
Narrated by TYPE III AUDIO.

Jul 2, 2025 • 2min
[Linkpost] “Senate Strikes Potential AI Moratorium” by Tristan Williams
This is a link post. The vote was 99-1, removing the 10-year moratorium on state-level AI legislation set by the version passed in the House. Interestingly, the attempt to propose an alternative 5-year moratorium with further restrictions, which itself cleared the procedural roadblock that might have prevented the 10-year moratorium, reportedly fell apart as a result of Senator Blackburn pulling her support. Why did she change her mind? "The current language is not acceptable to those who need these protections the most...Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens." - Marsha Blackburn We don't have the counterfactual here, and 99 votes against the amendment is a strong signal. But I think there were some worlds in which the 5-year moratorium passed narrowly [...] ---
First published:
July 1st, 2025
Source:
https://forum.effectivealtruism.org/posts/LJELLYjchxaW5LfCZ/senate-strikes-potential-ai-moratorium
Linkpost URL: https://www.reuters.com/legal/government/us-senate-strikes-ai-regulation-ban-trump-megabill-2025-07-01/?utm_source=chatgpt.com
---
Narrated by TYPE III AUDIO.