
EA Forum Podcast (Curated & popular)

Latest episodes

Feb 22, 2025 • 23min

“Teaching AI to reason: this year’s most important story” by Benjamin_Todd

This is a link post. I wrote this to try to explain the key thing going on with AI right now to a broader audience. Feedback welcome. Most people think of AI as a pattern-matching chatbot – good at writing emails, terrible at real thinking. They've missed something huge. In 2024, while many declared AI was reaching a plateau, it was actually entering a new paradigm: learning to reason using reinforcement learning. This approach isn't limited by data, so could deliver beyond-human capabilities in coding and scientific reasoning within two years. Here's a simple introduction to how it works, and why it's the most important development that most people have missed.

The new paradigm: reinforcement learning

People sometimes say "ChatGPT is just next token prediction on the internet". But that's never been quite true. Raw next token prediction produces outputs that are regularly crazy. GPT only became useful with the [...]

Outline:
(00:51) The new paradigm: reinforcement learning
(02:32) Reasoning models breakthroughs
(04:09) A new rate of progress?
(07:53) Why this is just the beginning
(11:02) Two more accelerants
(16:12) The key thing to watch: AI doing AI research

First published: February 13th, 2025
Source: https://forum.effectivealtruism.org/posts/ZuWcG3W3rEBxLceWj/teaching-ai-to-reason-this-year-s-most-important-story
Narrated by TYPE III AUDIO.

Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Feb 13, 2025 • 10min

“Using a diet offset calculator to encourage effective giving for farmed animals” by Aidan Alexander, ThomNorman

When we built a calculator to help meat-eaters offset the animal welfare impact of their diet through donations (like carbon offsets), we didn't expect it to become one of our most effective tools for engaging new donors. In this post we explain how it works, why it seems particularly promising for increasing support for farmed animal charities, and what you can do to support this work if you think it's worthwhile. In the comments I'll also share our answers to some frequently asked questions and concerns some people have when thinking about the idea of an 'animal welfare offset'.

Background

FarmKind is a donation platform whose mission is to support the animal movement by raising funds from the general public for some of the most effective charities working to fix factory farming. When we built our platform, we directionally estimated how much a donation to each of our [...]

Outline:
(00:50) Background
(01:41) What it is and what it isn't
(02:38) How it works
(04:24) Why this is a promising way to encourage effective giving for animals
(06:46) Case study: Bentham's Bulldog
(07:30) How is this actionable for you?

The original text contained 2 footnotes which were omitted from this narration.

First published: February 11th, 2025
Source: https://forum.effectivealtruism.org/posts/nGQRBWyCAbcEYSyLL/using-a-diet-offset-calculator-to-encourage-effective-giving
Narrated by TYPE III AUDIO.
Feb 13, 2025 • 12min

“Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion?” by Garrison

This is a link post. This is the full text of a post from "The Obsolete Newsletter," a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I'm a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to build Machine Superintelligence. Consider subscribing to stay up to date with my work. Wow. The Wall Street Journal just reported that "a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI." Technically, they can't actually do that, so I'm going to assume that Musk is trying to buy all of the nonprofit's assets, which include governing control over OpenAI's for-profit, as well as all the profits above the company's profit caps. OpenAI CEO Sam Altman already tweeted, "no thank you but we will buy twitter for $9.74 billion if you want." [...]

Outline:
(02:44) The control premium
(04:19) Conversion significance
(05:45) Musk's suit
(09:26) The stakes

First published: February 11th, 2025
Source: https://forum.effectivealtruism.org/posts/7iopGPmtEmubSFSP3/why-did-elon-musk-just-offer-to-buy-control-of-openai-for
Narrated by TYPE III AUDIO.
Feb 4, 2025 • 6min

“Leadership change at the Center on Long-Term Risk” by JesseClifton, Tristan Cook, Mia_Taylor

The Center on Long-Term Risk (CLR) does research and community building aimed at reducing s-risk. Jesse Clifton is stepping down as CLR's Executive Director. He'll be succeeded by Tristan Cook as Managing Director and Mia Taylor as Interim Research Director.[1]

Statement from Jesse

Over the past year or so, I've become increasingly convinced by arguments that we are clueless about the sign (in terms of expected total suffering reduced) of interventions aimed at reducing s-risk. (And I think it's plausible that we should consider ourselves clueless about interventions aimed at improving expected total welfare, generally.) The other researchers on CLR's Conceptual Research team[2] have come to a similar view,[3] but not the other staff or the board, who are still positive on the pre-cluelessness priorities. Given this, I don't think it makes sense for me to lead CLR. So, for now, I'll be transitioning to working [...]

Outline:
(00:25) Statement from Jesse
(03:06) Statement from Mia and Tristan

The original text contained 6 footnotes which were omitted from this narration.

First published: January 31st, 2025
Source: https://forum.effectivealtruism.org/posts/YE3tdpE6JdiWRqqKx/leadership-change-at-the-center-on-long-term-risk
Narrated by TYPE III AUDIO.
Feb 4, 2025 • 9min

“Climate Change Is Worse Than Factory Farming” by EA Forum Team

This is a link post. Note: This post was crossposted from the United States of Exception Substack by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

[Image caption: A good and wholesome K-strategist.]

I am a climate change catastrophist, but I'm not like all the others. I don't think climate change is going to wipe out all life on Earth (as 35% of Americans say they believe) or end the human race (as 31% believe). Nor do I think it's going to end human life on Earth but that human beings will continue to exist somewhere else in the universe (which at least 4% of Americans would logically have to believe). Nevertheless, I think global warming is among the worst things in the world — if not #1 — and addressing it should be among our top priorities. Friend of the blog [...]

First published: January 28th, 2025
Source: https://forum.effectivealtruism.org/posts/gBSmkRjYLcAvNPoDs/climate-change-is-worse-than-factory-farming
Narrated by TYPE III AUDIO.
Jan 29, 2025 • 26min

“The Game Board has been Flipped: Now is a good time to rethink what you’re doing” by LintzA

Recent developments in AI are prompting a critical reassessment of safety strategies. Topics include the implications of tight timelines and the impact of the Trump presidency on AI governance. The discussion explores new paradigms like o1 computing and budget shifts in AI data centers. With mainstream discourse lacking consideration for AI risks, the episode emphasizes what safety-focused individuals should prioritize going forward. The potential effects on US-China competition are also examined, questioning traditional methods and strategies.
Jan 28, 2025 • 9min

“The Upcoming PEPFAR Cut Will Kill Millions, Many of Them Children” by Omnizoid

Edit 1/29: Funding is back, baby!

Crossposted from my blog. (This could end up being the most important thing I've ever written. Please like and restack it — if you have a big blog, please write about it). A mother holds her sick baby to her chest. She knows he doesn't have long to live. She hears him coughing — those body-wracking coughs — that expel mucus and phlegm, leaving him desperately gasping for air. He is just a few months old. And yet that's how old he will be when he dies. The aforementioned scene is likely to become increasingly common in the coming years. Fortunately, there is still hope. Trump recently signed an executive order shutting off almost all foreign aid. Most terrifyingly, this included shutting off the PEPFAR program — the single most successful foreign aid program in my lifetime. PEPFAR provides treatment and prevention of HIV and AIDS — it has saved about [...]

First published: January 27th, 2025
Source: https://forum.effectivealtruism.org/posts/BRqBvkjskZ6c2G6rn/the-upcoming-pepfar-cut-will-kill-millions-many-of-them
Narrated by TYPE III AUDIO.
Jan 28, 2025 • 2min

“GiveWell raised less than its 10th percentile forecast in 2023” by Rasool

In 2023[1] GiveWell raised $355 million: $100 million from Open Philanthropy, and $255 million from other donors. In their post of 10th April 2023, GiveWell forecast the amount they expected to raise in 2023, albeit with wide confidence intervals, and stated that their 10th percentile estimate for total funds raised was $416 million, and their 10th percentile estimate for funds raised outside of Open Philanthropy was $260 million.

                               10th percentile estimate   Median estimate   Amount raised
Total                          $416 million               $581 million      $355 million
Excluding Open Philanthropy    $260 million               $330 million      $255 million

Regarding Open Philanthropy, the April 2023 post states that it "tentatively plans to give $250 million in 2023". However, Open Philanthropy gave a grant of $300 million to cover 2023-2025, to be split however GiveWell saw fit, and GiveWell used $100 million of that grant in 2023. For other donors, I'm not sure what caused the missed estimate.

Credit to 'Arnold' on GiveWell's December 2024 Open Thread for [...]

The original text contained 2 footnotes which were omitted from this narration.

First published: January 19th, 2025
Source: https://forum.effectivealtruism.org/posts/RdbDH4T8bxWwZpc9h/givewell-raised-less-than-its-10th-percentile-forecast-in
Narrated by TYPE III AUDIO.
Jan 27, 2025 • 15min

“In defense of the certifiers” by LewisBollard

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

They're imperfect agents of change

The world's three largest animal welfare groups are under attack. Their antagonists are not factory farmers, but other animal groups. And the ASPCA, HSUS, and RSPCA stand accused not of hurting farmers, but of hurting animals, through their work with GAP and RSPCA Assured, which certify animal products as being less cruelly produced. The attacks began last summer when the UK animal rights group Animal Rising released a report and footage showing abuses on RSPCA Assured farms. They've since forced the RSPCA to cancel its 200th year celebrations, plastered portraits of RSPCA patron King Charles, and persuaded the ceremonial president and two vice-presidents of the RSPCA to resign in protest. [...]

First published: January 24th, 2025
Source: https://forum.effectivealtruism.org/posts/np6vRZvsWgF5rq5W7/in-defense-of-the-certifiers
Narrated by TYPE III AUDIO.
Jan 24, 2025 • 2min

“Preparing Effective Altruism for an AI-Transformed World” by Tobias Häberli

In recent years, many in the Effective Altruism community have shifted to working on AI risks, reflecting the growing consensus that AI will profoundly shape our future. In response to this significant shift, there have been efforts to preserve a "principles-first EA" approach, or to give special thought into how to support non-AI causes. This has often led to discussions being framed around "AI Safety vs. everything else". And it feels like the community is somewhat divided along the following lines:

1. Those working on AI Safety, because they believe that transformative AI is coming.
2. Those focusing on other causes, implicitly acting as if transformative AI is not coming.[1]

Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that? If we [...]

The original text contained 2 footnotes which were omitted from this narration.

First published: January 22nd, 2025
Source: https://forum.effectivealtruism.org/posts/psNGNSoJpXRodmDSg/preparing-effective-altruism-for-an-ai-transformed-world
Narrated by TYPE III AUDIO.
