
EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing.
If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
Latest episodes

Jun 23, 2025 • 8min
“The Lies of Big Bug” by Bentham’s Bulldog
Crosspost of this blog article. The majority of farmed animals killed each year are insects, and this number is only expected to increase. By 2033, it's estimated that around 5 trillion insects will be slaughtered annually—more than 50 times the number of cows, pigs, chickens, turkeys, and the like currently slaughtered. But insect farming is built on a dark secret: its foundational premises are all lies! There has been a calculated plot by the insect farming industry to mislead the public. They know that if the public knew the truth about them, they’d never support subsidizing them. The insect farms can only thrive in darkness—shielded from public scrutiny. Insect farming was promised as an environmentally-friendly alternative to meat. In reality, however, there's virtually no consumer market for insects, so the insect farming industry mostly feeds insects to farmed animals like chickens and fish. Insect farming is not a [...] ---
First published:
June 23rd, 2025
Source:
https://forum.effectivealtruism.org/posts/ekqBRzREhxkDrGLz9/the-lies-of-big-bug
---
Narrated by TYPE III AUDIO.

Jun 23, 2025 • 5min
“Arkose is Closing” by Arkose
Summary
Arkose is an early-stage AI safety fieldbuilding nonprofit focused on accelerating the involvement of experienced machine learning professionals in technical AI safety research through direct outreach, one-on-one calls, and public resources. Between December 2023 and June 2025, we had one-on-one calls with 311 such professionals. 78% of those professionals said their initial call accelerated their involvement in AI safety[1]. Unfortunately, we’re closing due to a lack of funding. We remain excited about other attempts at direct outreach to this population, and think the right team could have impact here. Why are we closing? Over the past year, we’ve applied for funding from all of the major funders interested in AI safety fieldbuilding work, and several minor funders. Rather than try to massively change what we're doing to appeal to funders, with a short funding runway and little to no feedback, we’re choosing to close down and pursue [...]
---
Outline:
(00:09) Summary
(00:51) Why are we closing?
(01:13) What were we doing? Why?
(04:02) What do we think about other people doing similar things?
The original text contained 1 footnote which was omitted from this narration.
---
First published:
June 23rd, 2025
Source:
https://forum.effectivealtruism.org/posts/wEKmiLASbNAj8FDzk/arkose-is-closing
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Jun 23, 2025 • 10min
“Should we launch the ‘Animal Advocacy Corps’? Seeking Feedback and a Founder” by haven
Summary
This post considers the idea of an Animal Advocacy Corps, which would be modeled after the US Peace Corps program and involve recent graduates spending a year working in-person at various pro-animal organizations/companies. The main goal of this project, were it to happen, would be to a) increase the number of young people engaged in animal advocacy in some capacity, and b) increase the level of in-person organization in the movement*. How you can help: I’m not going to start this idea myself (right now at least). Here, I’m just interested in hearing input on whether this idea is worth proceeding with. If so, I’d also love to hear from people potentially interested in working on it. *Note that when I refer to the “movement” throughout, I’m talking about the animal advocacy movement. Though perhaps some of these ideas apply to other social change movements [...]
---
Outline:
(00:12) Summary
(01:09) Context
(01:34) The Problem
(04:03) Proposed Solution
(06:56) Possible Next Steps / A Call to Ownership
(08:02) Uncertainties
The original text contained 1 footnote which was omitted from this narration.
---
First published:
June 21st, 2025
Source:
https://forum.effectivealtruism.org/posts/8HvHD4oY9wseAeaRi/should-we-launch-the-animal-advocacy-corps-seeking-feedback
---
Narrated by TYPE III AUDIO.

Jun 22, 2025 • 24min
“Should more EAs become technical sales professionals?” by Patrick Hoang
Sales is rarely discussed as a career path in EA. This post explores whether technical sales roles — especially B2B Sales Engineering — might be a fast, high-leverage path for building career capital. It compares sales to other roles, and makes the case that sales deserves more serious attention from EA students and professionals. Depth: Medium. I am just a college student, but I believe this could provide significant value. (I completed a sales bootcamp a month ago, and spent a good amount of time reflecting and connecting with SEs on LinkedIn. I'll try not to let recency/optimism bias creep in, but it inevitably will.) I'll use "sales engineer" and "technical sales" interchangeably. Companies have a lot of names for sales roles, but fundamentally they do the same thing: bring in revenue.
Introduction
Imagine you asked high school and university students what they wish to do with their careers. Maybe they want [...]
---
Outline:
(01:03) Introduction
(02:07) What is B2B Sales?
(02:38) Marketing
(03:40) B2C Sales
(04:43) B2B Sales
(06:48) Why do B2B Sales?
(07:07) Building Career Capital
(07:30) Time Management & Prioritization
(08:45) Communication
(09:25) Relationship Building
(10:02) Networking
(10:49) Earning to Give Potential
(12:10) Pivoting Potential
(14:30) Why not do B2B Sales?
(14:34) You might not perform well.
(15:48) You could sell for a bad industry.
(17:16) Sales is not prestigious or EA-coded.
(18:46) You could be locked into golden handcuffs
(19:42) EA is Sales
(20:52) Learning More & Next Steps
(21:01) Testing your fit (Fast, low-risk)
(22:16) Explore Further Resources
(23:01) Let's Talk
---
First published:
June 20th, 2025
Source:
https://forum.effectivealtruism.org/posts/Y6oAgAfpvwegLB4J7/should-more-eas-become-technical-sales-professionals
---
Narrated by TYPE III AUDIO.

Jun 21, 2025 • 6min
“Please reconsider your use of adjectives” by Alfredo Parra 🔸
I’ve been meaning to write about this for some time, and @titotal's recent post finally made me do it (thick red dramatic box emphasis mine). I was going to post a comment on his post, but I think this topic deserves a post of its own. My plea is simply: Please, oh please reconsider using adjectives that reflect a negative judgment (“bad”, “stupid”, “boring”) on the Forum, and instead stick to indisputable facts and observations (“I disagree”, “I doubt”, “I dislike”, etc.). This suggestion is motivated by one of the central ideas behind nonviolent communication (NVC), which I’m a big fan of and which I consider a core life skill. The idea is simply that judgments (typically in the form of adjectives) are disputable/up to interpretation, and can therefore lead to completely unnecessary misunderstandings and hurt feelings: Me: Ugh, the kitchen is dirty again. Why didn’t you do the dishes [...] ---
First published:
June 21st, 2025
Source:
https://forum.effectivealtruism.org/posts/Fkh2Mpu3Jk7iREuvv/please-reconsider-your-use-of-adjectives
---
Narrated by TYPE III AUDIO.

Jun 20, 2025 • 15min
“Crunch time for cage-free” by LewisBollard
Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.
Despite setbacks, battery cages are on the retreat
My colleague Emma Buckland contributed (excellent) research to this piece. All opinions and errors are mine alone. It's deadline time. Over the last decade, many of the world's largest food companies — from McDonald's to Walmart — pledged to stop sourcing eggs from caged hens in at least their biggest markets. All in, over 2,700 companies globally have now pledged to go cage-free. Good things take time, and companies insisted they needed a lot of it to transition their egg supply chains — most set 2025 deadlines to do so. Over the years, companies reassured anxious advocates that their transitions were on track. But now, with just [...] ---
First published:
June 20th, 2025
Source:
https://forum.effectivealtruism.org/posts/5DTrsKCSYhp9gnpAi/crunch-time-for-cage-free
---
Narrated by TYPE III AUDIO.

Jun 20, 2025 • 1min
“Expression of Interest: Rethink Priorities’ AI Strategy Contractor Pool” by kierangreig🔸, Rethink Priorities
We’re looking to expand our network of collaborators interested in working on strategy questions related to AI safety and governance, as well as AI's intersection with other cause areas we care about—particularly animal welfare, global health and development, and digital sentience. If you have relevant expertise or perspectives and would be open to short-term or long-term contracting or project-based collaboration, we’d love to hear from you. This Expression of Interest form will help us build a pool of aligned contractors we can reach out to as we:
- Explore opportunities in AI x Animals and AI x GHD
- Bring in talent to support the development of our broader AI strategy
- Commission work on cross-cutting questions (e.g., the future value of money, and potential shifts in research productivity)
Please submit here (ideally by June 29th). ---
First published:
June 17th, 2025
Source:
https://forum.effectivealtruism.org/posts/7oMgFizAEi9mAhaAy/expression-of-interest-rethink-priorities-ai-strategy
---
Narrated by TYPE III AUDIO.

Jun 19, 2025 • 9min
“CEA is hiring for a Director of EA Funds” by Zachary Robinson🔸
Note: the deadline for applications has been extended to June 29th. The Centre for Effective Altruism (CEA) is seeking an experienced leader to join our senior leadership team as the Director of EA Funds. EA Funds is an established grantmaking organization that currently operates as an independent project but is merging into CEA. We wrote about the merger of CEA and EA Funds here, and now we’re looking for an ambitious leader to scale EA Funds and move (at least) hundreds of millions of dollars.
Apply now
About EA Funds and CEA
Effective Altruism Funds (EA Funds) is an existing foundation that directs financial resources to particularly cost-effective and altruistically impactful projects. The platform makes funding accessible for high-impact projects and maintains specialized funds in key focus areas, managed by subject-matter experts who identify the highest-impact opportunities. EA Funds is composed of four separate funds: the EA Infrastructure Fund [...]
---
Outline:
(00:47) About EA Funds and CEA
(04:10) Key Responsibilities
(04:13) Transition Leadership
(05:07) Team Leadership
(05:33) Grantmaking
(06:04) Stakeholder Relations
(06:27) Strategic Alignment
(06:49) What we're looking for
---
First published:
June 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/Z6xqJXy7t7aau4pha/cea-is-hiring-for-a-director-of-ea-funds
---
Narrated by TYPE III AUDIO.

Jun 19, 2025 • 1h 2min
“Galactic x-risks: Obstacles to Accessing the Cosmic Endowment” by JordanStone
Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time and across multiple independent actors. So galactic x-risks may severely limit the total potential value that human civilisation can attain in the long-term future. The requirements for a governance system to prevent galactic x-risks are outlined, and updates for space governance and big picture cause prioritisation are discussed.
Introduction
I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It's a [...]
---
Outline:
(01:00) Introduction
(03:07) Existential risks to a Galactic Civilisation
(03:58) Threats Limited to a One Planet Civilisation
(04:33) Threats to a small Spacefaring Civilisation
(07:02) Galactic Existential Risks
(07:22) Self-replicating machines
(09:27) Strange matter
(10:36) Vacuum decay
(11:42) Subatomic Particle Decay
(12:32) Time travel
(13:12) Fundamental Physics Alterations
(13:57) Interactions with Other Universes
(15:54) Societal Collapse or Loss of Value
(16:25) Artificial Superintelligence
(18:15) Conflict with alien intelligence
(19:06) Unknowns
(21:04) What is the probability that galactic x-risks I listed are actually possible?
(22:03) What is the probability that an x-risk will occur?
(22:07) What are the factors?
(23:06) Cumulative Chances
(24:49) If aliens exist, there is no long-term future
(26:13) The Way Forward
(31:34) Some key takeaways and hot takes to disagree with me on
The original text contained 76 footnotes which were omitted from this narration.
---
First published:
June 18th, 2025
Source:
https://forum.effectivealtruism.org/posts/x7YXxDAwqAQJckdkr/galactic-x-risks-obstacles-to-accessing-the-cosmic-endowment
---
Narrated by TYPE III AUDIO.

Jun 19, 2025 • 1h 13min
[Linkpost] “A deep critique of AI 2027’s bad timeline models” by titotal
This is a link post. Thank you to Arepo and Eli Lifland for looking over this article for errors. I am sorry that this article is so long. Every time I thought I was done with it I ran into more issues with the model, and I wanted to be as thorough as I could. I’m not going to blame anyone for skimming parts of this article. Note that the majority of this article was written before Eli's updated model was released (the site was updated June 8th). His new model improves on some of my objections, but the majority still stand.
Introduction:
AI 2027 is an article written by the “AI Futures Team”. The primary piece is a short story penned by Scott Alexander, depicting a month by month scenario of a near-future where AI becomes superintelligent in 2027, proceeding to automate the entire economy in only [...]
---
Outline:
(00:45) Introduction:
(05:21) Part 1: Time horizons extension model
(05:27) Overview of their forecast
(10:30) The exponential curve
(13:18) The superexponential curve
(19:27) Conceptual reasons:
(27:50) Intermediate speedups
(34:27) Have AI 2027 been sending out a false graph?
(39:47) Some skepticism about projection
(43:25) Part 2: Benchmarks and gaps and beyond
(43:31) The benchmark part of benchmark and gaps:
(50:03) The time horizon part of the model
(54:57) The gap model
(57:31) What about Eli's recent update?
(01:01:39) Six stories that fit the data
(01:06:58) Conclusion
The original text contained 11 footnotes which were omitted from this narration.
---
First published:
June 19th, 2025
Source:
https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models
Linkpost URL:
https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline
---
Narrated by TYPE III AUDIO.