EA Forum Podcast (All audio)

Latest episodes

May 5, 2025 • 2min

“We don’t have evidence that the best charities are over 1000x more cost effective than the average” by Will_Davison

I very frequently hear the statement "the best charities are over 1000x more cost effective than the average", often presented alongside an accompanying graph. Where does this figure come from? Most sources link it to Toby Ord's 2013 paper "The Moral Imperative toward Cost-Effectiveness in Global Health". The data in that paper comes from the 2006 paper "Disease Control Priorities in Developing Countries". How should this information affect our claims as EAs?
1. We should not extrapolate this claim to charities unless we have direct evidence (please comment with the best evidence you have seen). We should also not extrapolate it to fields outside of health development, in particular fields that involve creating change in complex systems, for which change-making is less measurable and less linear.
2. We should be transparent about what we mean by 'effective'. Just because some charities use less measurable methods [...]
First published: May 5th, 2025
Source: https://forum.effectivealtruism.org/posts/KKrSTafBqCcz9YXF9/we-don-t-have-evidence-that-the-best-charities-are-over
Narrated by TYPE III AUDIO.
May 5, 2025 • 13min

“Interpretability Will Not Reliably Find Deceptive AI” by Neel Nanda

(Disclaimer: Post written in a personal capacity. These are personal hot takes and do not in any way represent my employer's views.) TL;DR: I do not think we will produce high-reliability methods to evaluate or monitor the safety[1] of superintelligent systems via current research paradigms, with interpretability or otherwise[2]. Interpretability seems a valuable tool here and remains worth investing in, as it will hopefully increase the reliability we can achieve. However, interpretability should be viewed as part of an overall portfolio of defences: a layer in a defence-in-depth strategy. It is not the one thing that will save us, and it still won't be enough for high reliability.
Introduction: There's a common, often implicit, argument made in AI safety discussions: interpretability is presented as the only reliable path forward for detecting deception in advanced AI - notably argued in Dario Amodei's recent "The Urgency of [...]
Outline:
(00:58) Introduction
(02:59) High Reliability Seems Unattainable
(05:16) Why Won't Interpretability be Reliable?
(07:50) The Potential of Black-Box Methods
(08:52) The Role of Interpretability
(12:07) Conclusion
The original text contained 5 footnotes which were omitted from this narration.
First published: May 4th, 2025
Source: https://forum.effectivealtruism.org/posts/Th4tviypdKzeb59GN/interpretability-will-not-reliably-find-deceptive-ai
Narrated by TYPE III AUDIO.
May 4, 2025 • 7min

“Update: EAIF is now more funding constrained” by Jamie_Harris, hbesceli

Tl;dr: Over the past few months, the EA Infrastructure Fund's (EAIF) grantmaking has increased significantly, and funding has become more of a constraint. Your donations would help us continue supporting impactful projects.
The situation: EA Infrastructure Fund aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge. In November 2024, we posted saying that EAIF wasn't funding constrained at the time. We've recently increased our grantmaking. In 2024 we made $1.9M in grants. So far in 2025 we have made $0.9M in grants and are only a third of the way through the year. Our total available fund balance is currently $2.6M, down from $3.3M in November 2024 when we last posted. We project that at our 2024 funding bar (i.e. our sense of how promising an application needs to be for us to fund it), we [...]
Outline:
(00:28) The situation
(01:54) Why has this happened?
(01:57) 1. Growth in demand from established organisations
(02:38) 2. Our active grantmaking has worked
(03:33) 3. We continue to fill an important role in the EA funding ecosystem
(04:25) Room for additional funding
The original text contained 4 footnotes which were omitted from this narration.
First published: May 3rd, 2025
Source: https://forum.effectivealtruism.org/posts/vdLWt6A7DZuZqCLFA/update-eaif-is-now-more-funding-constrained
Narrated by TYPE III AUDIO.
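For readers who want the arithmetic behind "funding has become more of a constraint", here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted in the summary above. The linear extrapolation of the 2025 run rate is an assumption of this sketch, not a claim made in the post.

```python
# Back-of-the-envelope sketch (not from the post itself): extrapolating EAIF's
# 2025 grantmaking run rate from the figures quoted in the episode summary.
# Assumes grantmaking continues at the same pace for the rest of the year.

grants_2024 = 1.9e6          # USD granted in 2024
grants_2025_so_far = 0.9e6   # USD granted so far in 2025
fraction_of_year = 1 / 3     # "only a third of the way through the year"
balance_now = 2.6e6          # current available fund balance
balance_nov_2024 = 3.3e6     # balance when they last posted

projected_2025 = grants_2025_so_far / fraction_of_year
print(f"Projected 2025 grantmaking at current pace: ${projected_2025 / 1e6:.1f}M")
print(f"Available balance: ${balance_now / 1e6:.1f}M "
      f"(down ${(balance_nov_2024 - balance_now) / 1e6:.1f}M since Nov 2024)")
# At this pace, projected 2025 grants (~$2.7M) exceed the current $2.6M balance,
# which is consistent with the fund describing itself as funding constrained.
```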
May 3, 2025 • 16min

“Creating Market Incentives or, Shrimp Stunning Credits” by Aaron Boddy🔸

Introduction: This post is intended to be read after reading our previous post outlining Shrimp Welfare Project's 2030 Vision & Absorbency Plans. This post therefore assumes some baseline knowledge of our Humane Slaughter Initiative, which can be found in that post.
Problem(s): Market Incentives. How do we create the incentives for the market to shift to pre-slaughter stunning? The suffering shrimps experience at the end of their lives can be reduced by rendering them unconscious prior to slaughter through the use of electrical stunning technology. However, producers currently have very little incentive to implement pre-slaughter stunning as it's expensive and few buyers require it. Even relatively forward-thinking retailers often can't push an initiative through internally without some leverage (as they need to be able to convince their higher-ups that this is something worth doing and to persuade their suppliers to invest in the technology). To solve this, we buy [...]
Outline:
(00:12) Introduction
(00:30) Problem(s)
(00:33) Market Incentives
(01:51) Market Transition
(02:54) Strategy
(02:57) Existing Policy Tools
(04:23) Cage-Free Impact Incentives
(06:38) Shrimp Stunning Credits
(07:31) Buying Credits
(08:21) Precision Aquaculture
(10:17) How it works
(12:16) Further thoughts
(12:20) Cost-Effectiveness
(13:25) Expanding Credits
(15:28) How You Can Help
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/tzEaFktvPP8XA9j4n/creating-market-incentives-or-shrimp-stunning-credits
Narrated by TYPE III AUDIO.
May 3, 2025 • 34min

“Shrimp Welfare Project’s 2030 Vision & Absorbency Plans” by Aaron Boddy🔸

Introduction: ~440 billion shrimps are farmed each year [1]. This is over 5x the total number of all farmed land animals put together [2]. Many farmed shrimps suffer from conditions that can and should be addressed, such as poor water quality, high stocking densities, inhumane slaughter methods, and avoidable mutilations (such as eyestalk ablation) [3]. Shrimp Welfare Project is an organisation of people who believe that shrimps are capable of suffering and deserve our moral consideration [4]. We aim to cost-effectively reduce the suffering of billions of farmed shrimps. This post is essentially an expanded version of our 2025 Funding Proposal. If you want the TL;DR version of this post, I'd recommend reading that.
(Shr)Impact and Vision: Shrimp Welfare Project has four workstreams, two of which we consider our Core or Foundational workstreams - those are Corporate Engagement and Farmer Support. Two more are relatively new, but we [...]
Outline:
(00:12) Introduction
(01:01) (Shr)Impact and Vision
(01:39) Core: Corporate Engagement
(02:05) Problem (and Context)
(03:12) Strategy
(06:01) Achievements
(06:31) 2030 Vision
(08:24) Core: Farmer Engagement
(08:45) Problem (and Context)
(09:36) Strategy
(11:42) Achievements
(13:51) 2030 Vision
(16:52) New: Research & Policy
(17:16) Problem (and Context)
(18:26) Strategy
(19:35) 2030 Vision
(22:17) New: Precision Welfare
(22:42) Problem (and Context)
(23:50) Strategy
(25:15) 2030 Vision
(26:49) Absorbency Plans
(27:26) ~$50k-250k+
(27:31) Scaling the Humane Slaughter Initiative (High)
(28:37) Traceability (High)
(29:11) Shrimp Welfare industry Marketing Campaign, Conference, or Webinars (Low)
(29:52) ~$250k-1M
(29:57) Large Stunner commitment (Medium)
(30:40) New Stunners (Medium)
(31:00) ~$1M+
(31:04) Creating Market Incentives or, Shrimp Stunning Credits (High)
(31:48) Supporting Us
The original text contained 2 footnotes which were omitted from this narration.
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/Ztqzt4BWMPZW5ECfP/shrimp-welfare-project-s-2030-vision-and-absorbency-plans
Narrated by TYPE III AUDIO.
May 3, 2025 • 2min

“Oliver Habryka on OpenPhil and GoodVentures” by TimothyTelleenLawton

In this episode of our podcast, Elizabeth Van Nostrand and I talk to Oliver Habryka of Lightcone Infrastructure about his thoughts on the Open Philanthropy Project, which he believes has become stifled by the PR interests of its primary funder, Good Ventures. Oliver's main claim is that around mid-2023 or early 2024, Good Ventures founder Dustin Moskovitz became more concerned about his reputation, and this put a straitjacket on what Open Phil could fund. Moreover, it was not enough for a project to be good and pose low reputational risk; it had to be obviously low reputational risk, because OP employees didn't have enough communication with Good Ventures to pitch exceptions. All of this is according to Habryka, and that's a big caveat; this podcast is pretty one-sided. We invited OpenPhil to send a representative to record their own episode, but they decided to just send a written response (which is [...]
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/s38brRDm7sG8JgHxB/oliver-habryka-on-openphil-and-goodventures
Narrated by TYPE III AUDIO.
May 2, 2025 • 13min

“Why I am Still Skeptical about AGI by 2030” by James Fodor

Introduction: I have been writing posts critical of mainstream EA narratives about AI capabilities and timelines for many years now. Compared to the situation when I wrote my posts in 2018 or 2020, LLMs now dominate the discussion, and timelines have also shrunk enormously. The 'mainstream view' within EA now appears to be that human-level AI will be arriving by 2030, even as early as 2027. This view has been articulated by 80,000 Hours, on the forum (though see this excellent piece arguing against short timelines), and in the highly engaging science fiction scenario of AI 2027. While my piece is directed generally against all such short-horizon views, I will focus on responding to relevant portions of the article 'Preparing for the Intelligence Explosion' by Will MacAskill and Fin Moorhouse.
Rates of Growth: The authors summarise their argument as follows: Currently, total global research effort [...]
Outline:
(00:11) Introduction
(01:05) Rates of Growth
(04:55) The Limitations of Benchmarks
(09:26) Real-World Adoption
(11:31) Conclusion
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/meNrhbgM3NwqAufwj/why-i-am-still-skeptical-about-agi-by-2030
Narrated by TYPE III AUDIO.
May 2, 2025 • 8min

“EA North 2025 retrospective” by matthes

intro: EA North was a one-day conference in Sheffield (UK) aimed at people in the North of England (Manchester, Liverpool, Sheffield, Leeds, etc.). The event had 35 attendees on the day.[1] The cost per attendee was £50 and the cost per new connection was £11. The total cost was £1765. The attendee feedback was quite positive (see below). I am happy with how it all turned out and think it was a very worthwhile use of my time. Thank you to everyone who attended! Special thanks to the speakers and meet-up facilitators.
venue: Our venue was The Showroom in Sheffield. I booked two rooms. The main room had a projector and was used for the talks. The second room was laid out like a cafe/bar. This was to allow space for 1-1s to happen at any point during the day.
agenda: Here is the schedule for the [...]
Outline:
(00:20) intro
(01:02) venue
(01:38) agenda
(02:44) feedback from attendees
(02:52) big ones
(03:20) other things people care about with events
(04:21) what people found the most useful
(04:46) what people liked the least
(05:01) applicants and attendees
(05:19) numbers
(05:36) who applied?
(06:19) what I would do differently next time
(07:10) resources for you to steal
The original text contained 2 footnotes which were omitted from this narration.
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/eQcpxyLfXoA8bXY6K/ea-north-2025-retrospective
Narrated by TYPE III AUDIO.
May 2, 2025 • 14min

“12x more cost-effective than EAG - how I organised EA North 2025 (and how you could, too)” by matthes

I put on a small one-day conference. The cost per attendee was £50 (vs £1.2k for EAGs) and the cost per new connection was £11 (vs £130 for EAGs).
intro: EA North was a one-day event for the North of England. 35 people showed up on the day. In total, I spent £1765 (≈ $2.4k), including paying myself £20/h for 30h total. This money will be reimbursed by EA UK[1]. The cost per attendee was £50 and the cost per new connection was £11. These are significantly lower than for EAG events, suggesting that we should be putting on more, smaller events. I am not arguing that EAGs should not exist at all. A local event will likely never let me connect in person with someone living on another continent. My main goal with this post is to encourage individuals to put on more events [...]
Outline:
(00:29) intro
(01:38) why you can probably do this, too
(02:26) what I spent the money on and a comparison with EAG London 2023
(03:12) budget breakdown
(04:48) cost per attendee per day
(05:17) cost per connection
(07:19) what I spent my time on
(08:24) ideas for being even more cost-effective
(09:27) recommendations to funders
(09:49) reconsider how much resources you spend on small applications
(10:44) consider providing funding upfront
(11:17) thermal printers are cool and cheap
(11:57) conclusion
The original text contained 7 footnotes which were omitted from this narration.
First published: May 2nd, 2025
Source: https://forum.effectivealtruism.org/posts/m9sTFoAsE8dSnzoBt/untitled-draft-tr7p
Narrated by TYPE III AUDIO.
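A quick sketch reproducing the headline comparison from the figures quoted above. The EAG figures (£1.2k per attendee, £130 per connection) are taken from the summary; the number of EA North connections is not stated there, so it is backed out from the quoted £11 cost per connection, and the roughly 12x figure in the episode title appears to correspond to the per-connection comparison rather than the per-attendee one.

```python
# Rough sketch (not from the post): recomputing the cost comparison from the
# figures quoted in the episode summary. Connection count is inferred, not stated.

total_cost_gbp = 1765
attendees = 35
eag_cost_per_attendee = 1200
eag_cost_per_connection = 130
ea_north_cost_per_connection = 11

ea_north_cost_per_attendee = total_cost_gbp / attendees  # ~£50, matching the summary
print(f"EA North cost per attendee: £{ea_north_cost_per_attendee:.0f}")
print(f"Attendee-cost ratio vs EAG: {eag_cost_per_attendee / ea_north_cost_per_attendee:.0f}x")
print(f"Connection-cost ratio vs EAG: {eag_cost_per_connection / ea_north_cost_per_connection:.0f}x")

# Inferred quantity: how many new connections the £11 figure implies.
implied_connections = total_cost_gbp / ea_north_cost_per_connection
print(f"Implied new connections at EA North: ~{implied_connections:.0f}")
```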
May 1, 2025 • 5min

“Reflections on 7 years building the EA Forum — and moving on” by JP Addison🔸

I’m ironically not a very prolific writer. I’ve preferred to stay behind the scenes here and leave the writing to my colleagues who have more of a knack for it. But a goodbye post is something I must write for myself. Perhaps I’m getting old and nostalgic, because what came out wound up being a wander down memory lane. I probably am getting old and nostalgic, but I also hope I’ve communicated something about my love for this community and my gratitude for the chance to serve you all.
My story of the EA Forum: Few things have lasted as long in my life as my work on the Forum. I’ve spent more time working on the EA Forum than I’ve spent living anywhere since I was 0-12 years old. I've worked on the Forum longer than I've known my partner—whom I've known long enough to get married to. [...]
Outline:
(00:40) My story of the EA Forum
(03:47) What's next
The original text contained 1 footnote which was omitted from this narration.
First published: May 1st, 2025
Source: https://forum.effectivealtruism.org/posts/4ckgvqohXTBy6hCap/untitled-draft-a4kx
Narrated by TYPE III AUDIO.
