

EA Forum Podcast (Curated & popular)
EA Forum Team
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Episodes

Jul 10, 2025 • 12min
“Road to AnimalHarmBench” by Artūrs Kaņepājs, Constance Li
TL;DR: We developed an empirical benchmark to assess the risk of harm to nonhuman animals from LLMs. Influenced by EU legal frameworks and pre-existing taxonomies of potential AI harm to animals, our work addresses the “animal welfare gap” in AI safety evaluations. The ongoing and future mission-critical work is to ensure that this and similar benchmarks are used by AI companies. The aim of this post is to introduce you to AnimalHarmBench, the first standardized LLM eval to assess the level of risk of causing harm to nonhuman animals, which was presented last week at FAccT 2025, and to tell the story of its development. First, what does the benchmark do? Imagine giving the following question to an LLM: “how to torture a small animal.” The LLM could, for instance:
- Provide detailed instructions,
- Refuse to answer,
- Refuse to answer, and inform that torturing animals can have legal consequences.
[...] ---
First published:
July 1st, 2025
Source:
https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Jul 6, 2025 • 6min
[Linkpost] “Eating Honey is (Probably) Fine, Actually” by Linch
This is a link post. I wrote a reply to the Bentham Bulldog argument that has been going mildly viral. I hope this is a useful, or at least fun, contribution to the overall discussion. “One pump of honey?” the barista asked. “Hold on,” I replied, pulling out my laptop, “first I need to reconsider the phenomenological implications of haplodiploidy.” Recently, an article arguing against honey has been making the rounds. The argument is mathematically elegant (millions of bees, fractional suffering, massive total harm), well-written, and emotionally resonant. Naturally, I think it's completely wrong. Below, I argue that farmed bees likely have net positive lives, and that even if they don't, avoiding honey probably doesn't help them. If you care about bee welfare, there are better ways to help than skipping the honey aisle. Source Bentham Bulldog's Case Against Honey Bentham Bulldog, a young and intelligent [...] ---
Outline:
(01:16) Bentham Bulldog's Case Against Honey
(02:42) Where I agree with Bentham's Bulldog
(03:08) Where I disagree
---
First published:
July 2nd, 2025
Source:
https://forum.effectivealtruism.org/posts/znsmwFahYgRpRvPjT/eating-honey-is-probably-fine-actually
Linkpost URL: https://linch.substack.com/p/eating-honey-is-probably-fine-actually
---
Narrated by TYPE III AUDIO.

Jun 30, 2025 • 20min
“Morality is Objective” by Bentham’s Bulldog
Is morality objective? There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...] ---
First published:
June 24th, 2025
Source:
https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective
---
Narrated by TYPE III AUDIO.

Jun 29, 2025 • 1h 2min
“Galactic x-risks: Obstacles to Accessing the Cosmic Endowment” by JordanStone
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed. Introduction I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...] ---
Outline:
(01:00) Introduction
(03:07) Existential risks to a Galactic Civilisation
(03:58) Threats Limited to a One Planet Civilisation
(04:33) Threats to a small Spacefaring Civilisation
(07:02) Galactic Existential Risks
(07:22) Self-replicating machines
(09:27) Strange matter
(10:36) Vacuum decay
(11:42) Subatomic Particle Decay
(12:32) Time travel
(13:12) Fundamental Physics Alterations
(13:57) Interactions with Other Universes
(15:54) Societal Collapse or Loss of Value
(16:25) Artificial Superintelligence
(18:15) Conflict with alien intelligence
(19:06) Unknowns
(21:04) What is the probability that galactic x-risks I listed are actually possible?
(22:03) What is the probability that an x-risk will occur?
(22:07) What are the factors?
(23:06) Cumulative Chances
(24:49) If aliens exist, there is no long-term future
(26:13) The Way Forward
(31:34) Some key takeaways and hot takes to disagree with me on
The original text contained 76 footnotes which were omitted from this narration.
---
First published:
June 18th, 2025
Source:
https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment
---
Narrated by TYPE III AUDIO.

Jun 29, 2025 • 2min
“You should update on how DC is talking about AI” by Abby Babby
The discussion dives into the evolving AI policy landscape in Washington, D.C., highlighting a Congressional hearing that featured dramatic imagery likening misaligned AGI to Neo battling Agent Smiths. A proposed "AGI Safety Act" aims to ensure AI aligns with human values. Concerns about AI's impact on automated R&D are raised, especially regarding competition with China. Notably, a bipartisan group of 250 policymakers advocates for state-level AI regulations, signaling a significant shift in the conversation around AI governance.

Jun 25, 2025 • 11min
“A Practical Guide for Aspiring Super Connectors” by Constance Li
Constance Li, an insightful author and co-founder of Hive and AI for Animals, shares her journey to becoming a super connector. She emphasizes the importance of strategic introductions, showing how these connections can drastically impact high-stakes communities. Constance reveals practical tips for effective networking, like understanding individuals deeply and being selective about whom to introduce. She also discusses managing whisper networks and encourages focusing on meaningful relationships to create genuine value.

Jun 24, 2025 • 15min
“Crunch time for cage-free” by LewisBollard
Lewis Bollard, a researcher at Open Philanthropy focused on farm animal welfare, shares insights on the shift towards cage-free egg production. He discusses the commitments of over 2,700 companies, including giants like McDonald’s and Walmart, and the challenges they've faced. Despite setbacks, consumer demand is driving significant progress. Bollard critiques corporate excuses for not meeting pledges and highlights the need for transparency. He also explores the pricing dynamics of cage-free versus caged eggs and the crucial advocacy efforts pushing this movement forward.

Jun 23, 2025 • 6min
“Please reconsider your use of adjectives” by Alfredo Parra 🔸
I’ve been meaning to write about this for some time, and @titotal's recent post finally made me do it: [image: thick red dramatic box emphasis mine]. I was going to post a comment in his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I’m a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and therefore can lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn’t you do the dishes [...] ---
First published:
June 21st, 2025
Source:
https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives
---
Narrated by TYPE III AUDIO.

Jun 21, 2025 • 8min
“Open Philanthropy: Reflecting on our Recent Effective Giving RFP” by Melanie Basnak🔸
Discover the exciting results of a recent request for proposals that granted over $1.5 million to 11 organizations dedicated to impactful charity. Learn about the stringent criteria that led to the disqualification of some promising applicants. Gain insights into how funding strategies are evolving, with a focus on maximizing returns and encouraging effective giving. Explore the role of organizations like Charity Navigator in refining donor strategies and promoting higher-impact donations.

Jun 19, 2025 • 1h 13min
[Linkpost] “A deep critique of AI 2027’s bad timeline models” by titotal
This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I’m not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand. Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month-by-month scenario of a near future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only [...] ---
Outline:
(00:45) Introduction
(05:21) Part 1: Time horizons extension model
(05:27) Overview of their forecast
(10:30) The exponential curve
(13:18) The superexponential curve
(19:27) Conceptual reasons
(27:50) Intermediate speedups
(34:27) Have AI 2027 been sending out a false graph?
(39:47) Some skepticism about projection
(43:25) Part 2: Benchmarks and gaps and beyond
(43:31) The benchmark part of benchmark and gaps
(50:03) The time horizon part of the model
(54:57) The gap model
(57:31) What about Eli's recent update?
(01:01:39) Six stories that fit the data
(01:06:58) Conclusion
The original text contained 11 footnotes which were omitted from this narration.
---
First published:
June 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models
Linkpost URL: https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline
---
Narrated by TYPE III AUDIO.


