
EA Forum Podcast (Curated & popular)

Latest episodes

Mar 8, 2025 • 29min

“From Comfort Zone to Frontiers of Impact: Pursuing A Late-Career Shift to Existential Risk Reduction” by Jim Chapman

By Jim Chapman, LinkedIn. TL;DR: In 2023, I was a 57-year-old urban planning consultant and non-profit professional with 30 years of leadership experience. After talking with my son about rationality, effective altruism, and AI risks, I decided to pursue a pivot to existential risk reduction work. The last time I had to apply for a job was in 1994. By the end of 2024, I had spent ~740 hours on courses, conferences, meetings with ~140 people, and 21 job applications. I hope that by sharing my experiences, you can gain practical insights, inspiration, and resources to navigate your own career transition, especially if you are later in your career and interested in making an impact in similar fields. I share my experience in six sections: sparks, take stock, start, do, meta-learnings, and next steps. [Note - as of 03/05/2025, I am still pursuing my career shift.] Sparks – [...]

---

Outline:
(01:16) Sparks - 2022
(02:29) Take Stock - 2023
(03:36) Start
(04:15) Do - 2023 and 2024
(05:13) Learn
(10:46) Get a Job
(14:21) Create a Job
(16:49) Contractor
(18:16) Meta-Learnings
(19:50) Next Steps
(20:48) Appendix A - Helpful Feedback

The original text contained 30 footnotes which were omitted from this narration. The original text contained 9 images which were described by AI.

---

First published: March 4th, 2025
Source: https://forum.effectivealtruism.org/posts/FcKpAGn75pRLsoxjE/from-comfort-zone-to-frontiers-of-impact-pursuing-a-late-1

---

Narrated by TYPE III AUDIO.

---

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Mar 5, 2025 • 13min

“On deference to funders” by abrahamrowe

This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. Commenting and feedback guidelines: I'm posting this to get it out there. I'd love to see comments that take the ideas forward, but criticism of my argument won't be as useful at this time, in part because I won't do any further work on it. This is a post I drafted in November 2023, then updated for an hour in March 2025. I don't think I'll ever finish it, so I am just leaving it in this draft form for Draft Amnesty Week (I know I'm late). I don't think it is particularly well calibrated; it mainly just makes a bunch of points that I haven't seen assembled elsewhere. Please take it as extremely low-confidence, with a low likelihood of describing these dynamics perfectly. I've [...]

---

Outline:
(02:45) Deference is everywhere
(04:39) Funders often lack information you have access to
(08:29) Funders often don't share your values
(09:58) Funders have experience in grantmaking. That is different from experience doing the work.
(11:48) What can we do to make this better?
(12:22) There are lots of issues with over-updating on this!

---

First published: March 3rd, 2025
Source: https://forum.effectivealtruism.org/posts/adZEA4SEkab4SZhTx/on-deference-to-funders

---

Narrated by TYPE III AUDIO.
Mar 3, 2025 • 2min

“The lost art of the cheap office lunch” by Julia_Wise🔸

I feel silly writing this up, but it's Draft Amnesty Week. Caveat: I've been a visitor to several EA offices but haven't worked regularly in any of them, and maybe I'm overly nostalgic about reheated falafel. Some EA offices have catered lunch, or lunch cooked on the premises, every day. This is nice, but not every workplace can afford it. 5+ years ago, when everything in EA was lower-budget, the main way EA offices did lunch was to provide sandwich / wrap ingredients. Ops staff would order the groceries, put out the spread about 15 minutes before lunchtime, and microwave some of the foods. There was a designated time to show up, often 1 pm. This method works pretty well for a crowd because you don't all have to wait for the microwave. It was pretty flexible for different tastes and diets. People who wanted [...]

---

First published: February 28th, 2025
Source: https://forum.effectivealtruism.org/posts/EyXWx8stxSzgAMzJX/the-lost-art-of-the-cheap-office-lunch

---

Narrated by TYPE III AUDIO.
Mar 2, 2025 • 10min

“The catastrophic situation with U.S. foreign aid just got worse - why the EA community should care” by Dorothy M.

For those in the EA community who may not typically engage with politics/government, this is the time to do so. If you are American and/or based in the U.S., reaching out to lawmakers, supporting organizations that are mobilizing on this issue, and helping amplify the urgency of this crisis can make a difference.

Why this matters:
- Millions of lives are at stake
- Decades of progress, and prior investment, in global health and wellbeing are at risk
- Government funding multiplies the impact of philanthropy

Where things stand today (February 27, 2025)
The Trump Administration's foreign aid freeze has taken a catastrophic turn: rather than complying with a court order to restart paused funding, they have chosen to terminate more than 90% of all USAID grants and contracts. This stunningly reckless decision comes just 30 days into a supposed 90-day review of foreign aid. This will cause a devastating loss [...]

---

Outline:
(00:43) Where things stand today (February 27, 2025)
(03:22) Some of the few lifesaving programs that were terminated are:
(04:47) Why this matters for the future of global health and wellbeing
(07:03) Your action and engagement is needed NOW

---

First published: February 27th, 2025
Source: https://forum.effectivealtruism.org/posts/TbZAkjJQn8kPDodXG/the-catastrophic-situation-with-u-s-foreign-aid-just-got

---

Narrated by TYPE III AUDIO.
Mar 1, 2025 • 2min

“How confident are you that it’s preferable for America to develop AGI before China does?” by ScienceMon🔸

The belief that it's preferable for America to develop AGI before China does seems widespread among American effective altruists. Is this belief supported by evidence, or is it just patriotism in disguise? How would you try to convince an open-minded Chinese citizen that it really would be better for America to develop AGI first? Such a person might point out:

- Over the past 30 years, the Chinese government has done more for the flourishing of Chinese citizens than the American government has done for the flourishing of American citizens. My village growing up lacked electricity, and now I'm a software engineer!
- Chinese institutions are more trustworthy for promoting the future flourishing of humanity.
- Commerce in China ditches some of the older ideas of Marxism because it's the means to an end: the China Dream of wealthy communism. As AGI makes China and the world extraordinarily wealthy, we are [...]

---

First published: February 22nd, 2025
Source: https://forum.effectivealtruism.org/posts/MxPhK4mLRkaFekAmp/how-confident-are-you-that-it-s-preferable-for-america-to

---

Narrated by TYPE III AUDIO.
Feb 25, 2025 • 3min

“Stop calling them labs” by sawyer🔸

Note: This started as a quick take, but it got too long so I made it a full post. It's still kind of a rant; a stronger post would include sources and would have gotten feedback from people more knowledgeable than I. But in the spirit of Draft Amnesty Week, I'm writing this in one sitting and smashing that Submit button. Many people continue to refer to companies like OpenAI, Anthropic, and Google DeepMind as "frontier AI labs". I think we should drop "labs" entirely when discussing these companies, calling them "AI companies"[1] instead. While these companies may have once been primarily research laboratories, they are no longer so. Continuing to call them labs makes them sound like harmless groups focused on pushing the frontier of human knowledge, when in reality they are profit-seeking corporations focused on building products and capturing value in the marketplace. Laboratories do not directly [...]

The original text contained 2 footnotes which were omitted from this narration.

---

First published: February 24th, 2025
Source: https://forum.effectivealtruism.org/posts/Ap6E2aEFGiHWf5v5x/stop-calling-them-labs

---

Narrated by TYPE III AUDIO.
Feb 25, 2025 • 21min

“Ditching what we are good at: A change of course for Anima International in France” by Keyvan Mostafavi, Anima International

My name is Keyvan, and I lead Anima International's work in France. Our organization went through a major transformation in 2024. I want to share that journey with you. Anima International in France used to be known as Assiettes Végétales ('Plant-Based Plates'). We focused entirely on introducing and promoting vegetarian and plant-based meals in collective catering. Today, as Anima, our mission is to put an end to the use of cages for laying hens. These changes come after a thorough evaluation of our previous campaign, assessing 94 potential new interventions, making several difficult choices, and navigating emotional struggles. We hope that by sharing our experience, we can help others who find themselves in similar situations. So let me walk you through how the past twelve months have unfolded for us. [Image: The French team]

Act One: What we did as Assiettes Végétales
Since 2018, we worked with the local [...]

---

Outline:
(01:13) Act One: What we did as Assiettes Végétales
(03:55) Act Two: The moment we realized we needed to measure our impact more precisely
(05:12) Act Three: The evaluation
(07:23) Act Four: Ending our previous campaign
(09:09) Act Five: Searching for a new intervention
(11:30) Act Six: The struggle to choose
(14:11) Act Seven: The strengths of the cage-free campaign
(16:34) Conclusion - Where we stand today

The original text contained 10 footnotes which were omitted from this narration. The original text contained 5 images which were described by AI.

---

First published: February 22nd, 2025
Source: https://forum.effectivealtruism.org/posts/vfADxsPECqcbd3vs6/ditching-what-we-are-good-at-a-change-of-course-for-anima

---

Narrated by TYPE III AUDIO.

---

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Feb 22, 2025 • 23min

“Teaching AI to reason: this year’s most important story” by Benjamin_Todd

This is a link post. I wrote this to try to explain the key thing going on with AI right now to a broader audience. Feedback welcome. Most people think of AI as a pattern-matching chatbot – good at writing emails, terrible at real thinking. They've missed something huge. In 2024, while many declared AI was reaching a plateau, it was actually entering a new paradigm: learning to reason using reinforcement learning. This approach isn't limited by data, so it could deliver beyond-human capabilities in coding and scientific reasoning within two years. Here's a simple introduction to how it works, and why it's the most important development most people have missed.

The new paradigm: reinforcement learning
People sometimes say "chatGPT is just next token prediction on the internet". But that's never been quite true. Raw next token prediction produces outputs that are regularly crazy. GPT only became useful with the [...]

---

Outline:
(00:51) The new paradigm: reinforcement learning
(02:32) Reasoning models breakthroughs
(04:09) A new rate of progress?
(07:53) Why this is just the beginning
(11:02) Two more accelerants
(16:12) The key thing to watch: AI doing AI research

The original text contained 10 images which were described by AI.

---

First published: February 13th, 2025
Source: https://forum.effectivealtruism.org/posts/ZuWcG3W3rEBxLceWj/teaching-ai-to-reason-this-year-s-most-important-story

---

Narrated by TYPE III AUDIO.

---

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Feb 13, 2025 • 10min

“Using a diet offset calculator to encourage effective giving for farmed animals” by Aidan Alexander, ThomNorman

When we built a calculator to help meat-eaters offset the animal welfare impact of their diet through donations (like carbon offsets), we didn't expect it to become one of our most effective tools for engaging new donors. In this post we explain how it works, why it seems particularly promising for increasing support for farmed animal charities, and what you can do to support this work if you think it's worthwhile. In the comments I'll also share our answers to some frequently asked questions and concerns some people have when thinking about the idea of an 'animal welfare offset'.

Background
FarmKind is a donation platform whose mission is to support the animal movement by raising funds from the general public for some of the most effective charities working to fix factory farming. When we built our platform, we directionally estimated how much a donation to each of our [...]

---

Outline:
(00:50) Background
(01:41) What it is and what it isn't
(02:38) How it works
(04:24) Why this is a promising way to encourage effective giving for animals
(06:46) Case study: Bentham's Bulldog
(07:30) How is this actionable for you?

The original text contained 2 footnotes which were omitted from this narration. The original text contained 4 images which were described by AI.

---

First published: February 11th, 2025
Source: https://forum.effectivealtruism.org/posts/nGQRBWyCAbcEYSyLL/using-a-diet-offset-calculator-to-encourage-effective-giving

---

Narrated by TYPE III AUDIO.

---

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Feb 13, 2025 • 12min

“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by Garrison

This is a link post: the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to build Machine Superintelligence. Consider subscribing to stay up to date with my work. Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI." Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps. OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." [...]

---

Outline:
(02:44) The control premium
(04:19) Conversion significance
(05:45) Musk's suit
(09:26) The stakes

---

First published: February 11th, 2025
Source: https://forum.effectivealtruism.org/posts/7iopGPmtEmubSFSP3/why-did-elon-musk-just-offer-to-buy-control-of-openai-for

---

Narrated by TYPE III AUDIO.
