
EA Forum Podcast (Curated & popular)
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Latest episodes

Jun 30, 2025 • 20min
“Morality is Objective” by Bentham’s Bulldog
Is morality objective? [Agree/disagree poll in the original post.] There is dispute among EAs--and the general public more broadly--about whether morality is objective. So I thought I'd kick off a [...] ---
First published:
June 24th, 2025
Source:
https://forum.effectivealtruism.org/posts/n5bePqoC46pGZJzqL/morality-is-objective
---
Narrated by TYPE III AUDIO.

Jun 29, 2025 • 1h 2min
“Galactic x-risks: Obstacles to Accessing the Cosmic Endowment” by JordanStone
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed.

Introduction: I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...]
---
Outline:
(01:00) Introduction
(03:07) Existential risks to a Galactic Civilisation
(03:58) Threats Limited to a One Planet Civilisation
(04:33) Threats to a small Spacefaring Civilisation
(07:02) Galactic Existential Risks
(07:22) Self-replicating machines
(09:27) Strange matter
(10:36) Vacuum decay
(11:42) Subatomic Particle Decay
(12:32) Time travel
(13:12) Fundamental Physics Alterations
(13:57) Interactions with Other Universes
(15:54) Societal Collapse or Loss of Value
(16:25) Artificial Superintelligence
(18:15) Conflict with alien intelligence
(19:06) Unknowns
(21:04) What is the probability that galactic x-risks I listed are actually possible?
(22:03) What is the probability that an x-risk will occur?
(22:07) What are the factors?
(23:06) Cumulative Chances
(24:49) If aliens exist, there is no long-term future
(26:13) The Way Forward
(31:34) Some key takeaways and hot takes to disagree with me on
The original text contained 76 footnotes which were omitted from this narration.
---
First published:
June 18th, 2025
Source:
https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment
---
Narrated by TYPE III AUDIO.

Jun 29, 2025 • 2min
“You should update on how DC is talking about AI” by Abby Babby
The discussion dives into the evolving AI policy landscape in Washington, D.C., highlighting a Congressional hearing that featured dramatic imagery likening misaligned AGI to Neo battling Agent Smiths. A proposed "AGI Safety Act" aims to ensure AI aligns with human values. Concerns about AI's impact on automated R&D are raised, especially regarding competition with China. Notably, a bipartisan group of 250 policymakers advocates for state-level AI regulations, signaling a significant shift in the conversation around AI governance.

Jun 25, 2025 • 11min
“A Practical Guide for Aspiring Super Connectors” by Constance Li
Constance Li, an insightful author and co-founder of Hive and AI for Animals, shares her journey to becoming a super connector. She emphasizes the importance of strategic introductions, showing how these connections can drastically impact high-stakes communities. Constance reveals practical tips for effective networking, like understanding individuals deeply and being selective about whom to introduce. She also discusses managing whisper networks and encourages focusing on meaningful relationships to create genuine value.

Jun 24, 2025 • 15min
“Crunch time for cage-free” by LewisBollard
Lewis Bollard, a researcher at Open Philanthropy focused on farm animal welfare, shares insights on the shift towards cage-free egg production. He discusses the commitments of over 2,700 companies, including giants like McDonald’s and Walmart, and the challenges they've faced. Despite setbacks, consumer demand is driving significant progress. Bollard critiques corporate excuses for not meeting pledges and highlights the need for transparency. He also explores the pricing dynamics of cage-free versus caged eggs and the crucial advocacy efforts pushing this movement forward.

Jun 23, 2025 • 6min
“Please reconsider your use of adjectives” by Alfredo Parra 🔸
I’ve been meaning to write about this for some time, and @titotal's recent post finally made me do it: [screenshot from that post; thick red dramatic box emphasis mine]. I was going to post a comment on his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I’m a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and therefore can lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn’t you do the dishes [...] ---
First published:
June 21st, 2025
Source:
https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives
---
Narrated by TYPE III AUDIO.

Jun 21, 2025 • 8min
“Open Philanthropy: Reflecting on our Recent Effective Giving RFP” by Melanie Basnak🔸
Discover the exciting results of a recent request for proposals that granted over $1.5 million to 11 organizations dedicated to impactful charity. Learn about the stringent criteria that led to the disqualification of some promising applicants. Gain insights into how funding strategies are evolving, with a focus on maximizing returns and encouraging effective giving. Explore the role of organizations like Charity Navigator in refining donor strategies and promoting higher-impact donations.

Jun 19, 2025 • 1h 20min
[Linkpost] “A deep critique of AI 2027’s bad timeline models” by titotal
This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I'm not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand.

Introduction: AI 2027 is an article written by the “AI futures team”. The primary piece is a short story penned by Scott Alexander, depicting a month-by-month scenario of a near future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only [...]
---
Outline:
(00:45) Introduction
(05:27) Part 1: Time horizons extension model
(05:33) Overview of their forecast
(10:23) The exponential curve
(13:25) The superexponential curve
(20:20) Conceptual reasons
(28:38) Intermediate speedups
(36:00) Have AI 2027 been sending out a false graph?
(41:50) Some skepticism about projection
(46:13) Part 2: Benchmarks and gaps and beyond
(46:19) The benchmark part of benchmark and gaps
(52:53) The time horizon part of the model
(58:02) The gap model
(01:00:58) What about Eli's recent update?
(01:05:19) Six stories that fit the data
(01:10:46) Conclusion
The original text contained 11 footnotes which were omitted from this narration.
---
First published:
June 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models
Linkpost URL: https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline
---
Narrated by TYPE III AUDIO.

Jun 19, 2025 • 1h 2min
“An invasion of Taiwan is uncomfortably likely, potentially catastrophic, and we can help avoid it.” by JoelMcGuire
Formosa: Fulcrum of the Future?

An invasion of Taiwan is uncomfortably likely and potentially catastrophic. We should research better ways to avoid it.

TLDR: I forecast that an invasion of Taiwan increases the total anthropogenic risk of a catastrophe killing 10% or more of the population by 2100 by ~1.5 percentage points (nuclear risk by 0.9 points, AI + biorisk by 0.6 points). This would imply it constitutes a sizable share of the total catastrophic risk burden expected over the rest of this century by skilled and knowledgeable forecasters (8% of the total risk of 20% according to domain experts, and 17% of the total risk of 9% according to superforecasters). I think this means that we should research ways to cost-effectively decrease the likelihood that China invades Taiwan. This could mean exploring the prospect of advocating that Taiwan increase its deterrence by investing in cheap but lethal weapons platforms [...]
---
Outline:
(00:13) Formosa: Fulcrum of the Future?
(02:04) Part 0: Background
(03:44) Part 1: Invasion -- uncomfortably possible
(08:33) Part 2: Why an invasion would be bad
(10:27) 2.1 War and nuclear war
(19:20) 2.2 The end of cooperation: AI and bio-risk
(22:44) 2.3 Appeasement or capitulation and the end of the liberal-led order: Value risk
(26:04) Part 3: How to prevent a war
(29:39) 3.1 Diplomacy: speaking softly
(31:21) 3.2 Deterrence: carrying a big stick
(34:16) Toy model of deterrence
(37:58) Toy cost-effectiveness of deterrence
(41:13) How to cost-effectively increase deterrence
(43:30) Risks of a deterrence strategy
(44:12) 3.3 What can be done?
(44:42) How tractable is it to increase deterrence?
(45:43) A theory of change for philanthropy increasing Taiwan's military deterrence
(45:56) [Flow chart showing policy influence between think tanks and Taiwan security outcomes]
(48:55) 4. Conclusion and further work
(50:53) With more time
(52:00) Bonus thoughts
(52:09) 1. Reminder: a catastrophe killing 10% or more of humanity is pretty unprecedented
(53:06) 2. Where's the Effective Altruist think tank for preventing global conflict?
(54:11) 3. Does forecasting risks based on scenarios change our view on the likelihood of catastrophe?
The original text contained 16 footnotes which were omitted from this narration.
---
First published:
June 15th, 2025
Source:
https://forum.effectivealtruism.org/posts/qvzcmzPcR5mDEhqkz/an-invasion-of-taiwan-is-uncomfortably-likely-potentially
---
Narrated by TYPE III AUDIO.

Jun 18, 2025 • 22min
“From feelings to action: spreadsheets as an act of compassion” by Zachary Robinson🔸
Zachary Robinson, CEO of the Centre for Effective Altruism, challenges the common belief that effective altruism is devoid of feelings. He emphasizes that personal emotions, like anger and sadness, drive individuals to take impactful action. Through relatable stories, he illustrates how frustrations can spark community initiatives—like addressing potholes in Omaha. Robinson argues that rational analysis and compassion should coexist, portraying analytical tools as manifestations of deep emotional commitment to alleviating suffering.