
EA Forum Podcast (Curated & popular)
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Latest episodes

May 9, 2024 • 7min
“Why I’m doing PauseAI” by Joseph Miller
GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it's hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict fairly well what the cross-entropy loss will be, but pretty much nothing else. Maybe we will suddenly discover that the difference between GPT-4 and superhuman level is actually quite small. Maybe GPT-5 will be extremely good at interpretability, such that it can recursively self-improve by rewriting its own weights. Hopefully model evaluations can catch catastrophic risks before wide deployment, but again, it's hard to be sure. GPT-5 could plausibly be devious enough to circumvent all of our black-box testing. Or it may be that it's too late as soon as the model has been trained. These [...]
Outline:
(01:10) How do we do better for GPT-6?
(02:02) Plan B: Mass protests against AI
(03:06) No innovation required
(04:36) The discomfort of doing something weird
(05:53) Preparing for the moment
---
First published:
April 30th, 2024
Source:
https://forum.effectivealtruism.org/posts/J8sw7o5mWbGFaBW4o/why-i-m-doing-pauseai
---
Narrated by TYPE III AUDIO.

May 7, 2024 • 4min
“Updates on the EA catastrophic risk landscape” by Benjamin_Todd
Around the end of Feb 2024 I attended the Summit on Existential Risk and EAG: Bay Area (GCRs), during which I did 25+ one-on-ones about the needs and gaps in the EA-adjacent catastrophic risk landscape, and how they’ve changed. The meetings were mostly with senior managers or researchers in the field who I think are worth listening to (unfortunately I can’t share names). Below is how I’d summarise the main themes in what was said. If you have different impressions of the landscape, I’d be keen to hear them. There's been a big increase in the number of people working on AI safety, partly driven by a reallocation of effort (e.g. Rethink Priorities starting an AI policy think tank), and partly driven by new people entering the field after its newfound prominence. Allocation in the landscape seems more efficient than in the past – it's harder to identify [...]
---
First published:
May 6th, 2024
Source:
https://forum.effectivealtruism.org/posts/YDjH6ACPZq889tqeJ/updates-on-the-ea-catastrophic-risk-landscape
---
Narrated by TYPE III AUDIO.

May 6, 2024 • 15min
“My Lament to EA” by kta
I am dealing with repetitive strain injury and don’t foresee being able to really respond to any comments. I’m a little hesitant to post this, but I thought I should be vulnerable. Honestly, I'm relieved that I finally get to share my voice. I know some people may want me to discuss this privately – but that might not be helpful to me, as I know that some issues have been silenced by the very people who were meant to help. And to be honest, the fear of criticizing EA is something I have disliked about EA – I’ve been behind the scenes enough to know that despite being well-intentioned, criticizing EA (especially openly) can privately get you excluded from opportunities and circles, often even silently. This is an internal battle I’ve had with EA for a while (years). Still, I thought by sharing my experiences I [...]
Outline:
(00:55) Appreciation and disillusionment
(03:30) Specific challenges
(03:33) When it has been uncomfortable for diversity and inclusion
(04:33) When it primarily became about prestige or funding
(07:13) When professional social dynamics were unhealthy
(09:46) When empathy is deprioritized and logic/consequentialism/utilitarianism becomes toxic
(13:38) Parting ways
---
First published:
May 3rd, 2024
Source:
https://forum.effectivealtruism.org/posts/3GjstAyhH9cDeNar4/my-lament-to-ea
---
Narrated by TYPE III AUDIO.

May 1, 2024 • 1h 8min
“Émile P. Torres’s history of dishonesty and harassment” by anonymous-for-obvious-reasons
This is a cross-post and you can see the original here, written in 2022. I am not the original author, but I thought it was good for more EAs to know about this. I am posting anonymously for obvious reasons, but I am a longstanding EA who is concerned about Torres's effects on our community. An incomplete summary Introduction. This post compiles evidence that Émile P. Torres, a philosophy student at Leibniz Universität Hannover in Germany, has a long pattern of concerning behavior, which includes gross distortion and falsification, persistent harassment, and the creation of fake identities. Note: Since Torres has recently claimed that they have been the target of threats from anonymous accounts, I would like to state that I condemn any threatening behavior in the strongest terms possible, and that I have never contacted Torres or posted anything about Torres other than in this Substack [...]
Outline:
(00:25) An incomplete summary
(01:16) Stalking and harassment
(01:20) Peter Boghossian
(11:48) Helen Pluckrose
(19:02) Demonstrable falsehoods and gross distortions
(19:07) “Forcible” removal
(24:04) “Researcher at CSER”
(27:30) Giving What We Can
(31:20) Brief Digression on Effective Altruism
(33:53) More falsehoods and distortions
(33:57) Hilary Greaves
(38:25) Andreas Mogensen
(41:16) Nick Beckstead
(45:29) Tyler Cowen
(48:50) Olle Häggström
(56:44) Sockpuppetry
(57:01) “Alex Williams”
(01:03:57) Conclusion
---
First published:
May 1st, 2024
Source:
https://forum.effectivealtruism.org/posts/yAHcPNZzx35i25xML/emile-p-torres-s-history-of-dishonesty-and-harassment
---
Narrated by TYPE III AUDIO.

Apr 29, 2024 • 4min
“Joining the Carnegie Endowment for International Peace” by Holden Karnofsky
Effective today, I’ve left Open Philanthropy and joined the Carnegie Endowment for International Peace[1] as a Visiting Scholar. At Carnegie, I will analyze and write about topics relevant to AI risk reduction. In the short term, I will focus on (a) what AI capabilities might increase the risk of a global catastrophe; (b) how we can catch early warning signs of these capabilities; and (c) what protective measures (for example, strong information security) are important for safely handling such capabilities. This is a continuation of the work I’ve been doing over the last ~year. I want to be explicit about why I’m leaving Open Philanthropy. It's because my work no longer involves significant involvement in grantmaking, and given that I’ve overseen grantmaking historically, it's a significant problem for there to be confusion on this point. Philanthropy comes with particular power dynamics that I’d like to move away from, and [...] The original text contained 1 footnote which was omitted from this narration. ---
First published:
April 29th, 2024
Source:
https://forum.effectivealtruism.org/posts/7gzgwgwefwBku2cnL/joining-the-carnegie-endowment-for-international-peace
---
Narrated by TYPE III AUDIO.

Apr 26, 2024 • 13min
“Priors and Prejudice” by MathiasKB
Topics discussed: the hypothetical Effective Samaritans, a movement influenced by socialist ideas; the randomista charity approach vs. societal transformation; tracing Scandinavian social models back to labor unions; debates on labor unions, social democratic movements, and charter cities for institutional reform; contrasting perspectives on logical inferences in competitive gaming; personal biases in assessing gaming factions; and navigating differing beliefs in Effective Altruism through experimental collaboration.

Apr 24, 2024 • 2min
“Announcing The New York Declaration on Animal Consciousness” by Sofia_Fogel
The last ten years have witnessed rapid advances in the science of animal cognition and behavior. Striking results have hinted at surprisingly rich inner lives in a wide range of animals, driving renewed debate about animal consciousness. To highlight these advances, the NYU Mind, Ethics and Policy Program and NYU Wild Animal Welfare Program co-hosted a conference on the emerging science of animal consciousness on Friday April 19 at New York University. This conference also served as the launch event for The New York Declaration on Animal Consciousness. This short statement, signed by leading scientists who research a wide range of taxa, holds that all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including cephalopod mollusks, decapod crustaceans, and insects) have a realistic chance of being conscious, and that their welfare merits consideration. We now welcome signatures from others as well. If you have relevant [...] ---
First published:
April 21st, 2024
Source:
https://forum.effectivealtruism.org/posts/Pqkf5N7LkHfd7rRBf/announcing-the-new-york-declaration-on-animal-consciousness
---
Narrated by TYPE III AUDIO.

Apr 23, 2024 • 35min
[Linkpost] “Motivation gaps: Why so much EA criticism is hostile and lazy” by titotal
Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). Introduction. I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar Time article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees [...]
Outline:
(02:21) No door to door atheists
(04:51) What went wrong here?
(08:40) Motivation gaps in AI x-risk
(10:59) EA gap analysis
(15:12) Counter-motivations
(25:49) You can’t rely on ingroup criticism
(29:10) How to respond to motivation gaps
---
First published:
April 22nd, 2024
Source:
https://forum.effectivealtruism.org/posts/CfBNdStftKGc863o6/motivation-gaps-why-so-much-ea-criticism-is-hostile-and-lazy
Linkpost URL:https://titotal.substack.com/p/motivation-gaps-why-so-much-ea-criticism
---
Narrated by TYPE III AUDIO.

Apr 23, 2024 • 12min
“How good it is to donate and how hard it is to get a job” by Elijah Persson-Gordon
Summary In this post, I hope to inspire other Effective Altruists to focus more on donation, and to commiserate with those who have been disappointed in their ability to get an altruistic job. First, I argue that the impact of having a job that helps others is complicated. In this section, I discuss the annual donation statistics of people in the Effective Altruism community, which I find quite low. In the rest of the post, I describe my recent job search, my experience substituting at public schools, and my expenses. Having a job that helps others might be overemphasized Doing a job that helps others seems like a good thing to do. Weirdly, it's not as simple as that. While some job vacancies last for years, other fields are very competitive and have many qualified applicants for most position listings. In the latter case, if you [...]
Outline:
(00:05) Summary
(00:42) Having a job that helps others might be overemphasized
(02:08) Donations are an amazing opportunity, and I think they are underemphasized
(03:42) I used to really want an animal welfare-related job. Then I wanted to donate more. Now I am a substitute at a public school
(06:13) I live frugally and donate
(08:07) I have been disappointed in my ability to find a job that would allow me to donate more
(09:52) It's okay
(10:25) Additional reading
---
First published:
April 16th, 2024
Source:
https://forum.effectivealtruism.org/posts/G9ocwYA2LpLqC4vmq/how-good-it-is-to-donate-and-how-hard-it-is-to-get-a-job
---
Narrated by TYPE III AUDIO.

Apr 18, 2024 • 1min
“Personal reflections on FTX” by William_MacAskill
William MacAskill shares personal reflections on FTX in various podcasts, offering insights and updates for different audiences. Topics covered include updates, lessons learned, and responses to questions raised on the EA Forum. MacAskill recommends listening to the podcasts for a more comprehensive understanding.