

EA Forum Podcast (Curated & popular)
EA Forum Team
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Episodes

Jun 18, 2024 • 11min
“Why so many ‘racists’ at Manifest?” by Austin
Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to “would you recommend to a friend” was a 9.0/10. Reviewers said nice things like “one of the best weekends of my life” and “dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams” and “I’ve always found tribalism mysterious, but perhaps that was just because I hadn’t yet found my tribe.”

[Image: Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here.]

However, a recent article in The Guardian and a review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as “racist”. Why did we invite these folks? First: our sessions and guests were mostly not controversial — [...]

---

Outline:
(01:01) First: our sessions and guests were mostly not controversial — despite what you may have heard
(03:03) Okay, but there sure seemed to be a lot of controversial ones…
(06:03) Bringing people together with prediction markets
(07:31) Anyways, controversy bad
(08:57) Aside: Is Manifest an Effective Altruism event?

---
First published:
June 18th, 2024
Source:
https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest
---
Narrated by TYPE III AUDIO.

Jun 15, 2024 • 4min
“Help Fund Insect Welfare Science” by Bob Fischer, Daniela R. Waldhorn, abrahamrowe
The Arthropoda Foundation

Tens of trillions of insects are used or killed by humans across dozens of industries. Despite being the most numerous animals reared by industry, we know next to nothing about what's good or bad for these animals. And right now, funding for this work is scarce. Traditional science funders won’t pay for it, and within EA the focus is on advocacy, not research. So, welfare science needs your help. We’re launching the Arthropoda Foundation, a fund to ensure that insect welfare science gets the essential resources it needs to provide decision-relevant answers to pressing questions. Every dollar we raise will be granted to research projects that can’t be funded any other way. We’re in a critical moment for this work. Over the last year, field-building efforts have accelerated, setting up academic labs that can tackle key studies. However, funding for these studies is [...]

---

Outline:
(00:10) The Arthropoda Foundation
(01:17) Why do we need a fund?
(02:55) Team

---
First published:
June 14th, 2024
Source:
https://forum.effectivealtruism.org/posts/2NsS7gjccJAKMf4co/help-fund-insect-welfare-science
---
Narrated by TYPE III AUDIO.

Jun 15, 2024 • 14min
“Maybe let the non-EA world train you” by ElliotT
This post is for EAs at the start of their careers who are considering which organisations to apply to, and their next steps in general. Conclusion up front: It can be really hard to get that first job out of university. If you don’t get your top picks, your less exciting backup options can still be great for having a highly impactful career. If those first few years of work experience aren’t your best pick, they will still be useful as a place where you can ‘learn how to job’, save some money, and then pivot or grow from there. The main reasons are: The EA job market can be grim. Securing a job at an EA organisation out of university is highly competitive, often resulting in failing to get a job, or chaotic job experiences due to the nascent nature of many EA orgs. An alternative [...]

---

Outline:
(01:58) What's the problem? Three failure modes of trying to get an EA job
(06:15) Maybe let the non-EA world train you
(08:50) Let's get specific. Some of my story
(11:45) Caveats
(12:58) Wrapping up

---
First published:
June 14th, 2024
Source:
https://forum.effectivealtruism.org/posts/ZvXBSs9Nz3dKBKcAo/maybe-let-the-non-ea-world-train-you
---
Narrated by TYPE III AUDIO.

Jun 13, 2024 • 5min
“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman
Crossposted from AI Lab Watch. Subscribe on Substack.

Introduction. Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to it to argue that Anthropic will be able to promote safety and benefit-sharing over profit.[1] But the Trust's details have not been published and some information Anthropic has shared is concerning. In particular, Anthropic's stockholders can apparently overrule, modify, or abrogate the Trust, and the details are unclear. Anthropic has not publicly demonstrated that the Trust would be able to actually do anything that stockholders don't like.

The facts. There are three sources of public information on the Trust:
The Long-Term Benefit Trust (Anthropic 2023)
Anthropic Long-Term Benefit Trust (Morley et al. 2023)
The $1 billion gamble to ensure AI doesn't destroy humanity (Vox: Matthews 2023)
They say there's [...]

---

Outline:
(00:53) The facts
(02:51) Conclusion

The original text contained 2 footnotes which were omitted from this narration.

---
First published:
May 27th, 2024
Source:
https://forum.effectivealtruism.org/posts/JARcd9wKraDeuaFu5/maybe-anthropic-s-long-term-benefit-trust-is-powerless
---
Narrated by TYPE III AUDIO.

Jun 12, 2024 • 37min
“Summary of Situational Awareness - The Decade Ahead” by OscarD🔸
Original by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.

Short Summary

Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027. AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI. Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology. Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas. AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets. Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of [...]

---

Outline:
(00:13) Short Summary
(02:16) 1. From GPT-4 to AGI: Counting the OOMs
(02:24) Past AI progress
(05:38) Training data limitations
(06:42) Trend extrapolations
(07:58) The modal year of AGI is soon
(09:30) 2. From AGI to Superintelligence: the Intelligence Explosion
(09:37) The basic intelligence explosion case
(10:47) Objections and responses
(14:07) The power of superintelligence
(16:29) III. The Challenges
(16:32) IIIa. Racing to the Trillion-Dollar Cluster
(21:12) IIIb. Lock Down the Labs: Security for AGI
(21:20) The power of espionage
(22:24) Securing model weights
(24:01) Protecting algorithmic insights
(24:56) Necessary steps for improved security
(26:50) IIIc. Superalignment
(29:41) IIId. The Free World Must Prevail
(32:41) 4. The Project
(35:12) 5. Parting Thoughts
(36:17) Responses to Situational Awareness

The original text contained 1 footnote which was omitted from this narration.

---
First published:
June 8th, 2024
Source:
https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead
---
Narrated by TYPE III AUDIO.

Jun 11, 2024 • 9min
“I doubled the world record cycling without hands for AMF” by Vincent van der Holst
A couple of weeks ago I announced I was going to try to break the world record for cycling without hands, for AMF. That post also explains why I wanted to break that record. Last Friday we broke that record and raised nearly €10,000 for AMF. Here's what happened on Friday. You can still donate here.

What was the old record? Canadian Robert John Murray set the old record of 130.29 kilometers in 5 hours 37 minutes in Calgary on June 12, 2023. His average speed was 23.2 kilometers per hour. See the Guinness World Records page here. I managed to double the record, and these were my stats.

How did the record attempt itself go? On Friday, June 7, I started the record attempt on the closed cycling course of WV Amsterdam just after 6 am. I got up at half past four and immediately drank a [...]

---
First published:
June 11th, 2024
Source:
https://forum.effectivealtruism.org/posts/5ru7nEtC6mufuBXbk/i-doubled-the-world-record-cycling-without-hands-for-amf
---
Narrated by TYPE III AUDIO.

Jun 9, 2024 • 2min
“Announcing a $6,000,000 endowment for NYU Mind, Ethics, and Policy” by Sofia_Fogel
The NYU Mind, Ethics, and Policy Program will soon become the NYU Center for Mind, Ethics, and Policy (CMEP), our future secured by a generous $6,000,000 endowment. The CMEP Endowment Fund was established in May 2024 with a $5,000,000 gift from The Navigation Fund and a $1,000,000 gift from Polaris Ventures. We now welcome contributions from other supporters too, with deep gratitude to our founding supporters. Since our launch in Fall 2022, the NYU Mind, Ethics, and Policy Program has stood at the forefront of academic inquiry into the nature and intrinsic value of nonhuman minds. CMEP will continue this work, seeking to advance understanding of the consciousness, sentience, sapience, moral status, legal status, and political status of animals and AI systems via research, outreach, and field building in science, philosophy, and policy. You can read the press release about the endowment here. Thanks to everyone who [...] ---
First published:
May 31st, 2024
Source:
https://forum.effectivealtruism.org/posts/eu5ykCAKLtPTyb8eM/announcing-a-usd6-000-000-endowment-for-nyu-mind-ethics-and
---
Narrated by TYPE III AUDIO.

Jun 5, 2024 • 6min
“I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027” by Vasco Grilo
Agreement

78% of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the 1st person I thought of. He said: "I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90%". I doubt doom in the sense of human extinction is anywhere near as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between [...]

---

Outline:
(00:07) Agreement
(03:53) Impact
(05:18) Acknowledgements

The original text contained 5 footnotes which were omitted from this narration.

---
First published:
June 4th, 2024
Source:
https://forum.effectivealtruism.org/posts/GfGxaPBAMGcYjv8Xd/i-bet-greg-colbourn-10-keur-that-ai-will-not-kill-us-all-by
---
Narrated by TYPE III AUDIO.

Jun 4, 2024 • 4min
“Review of Past Grants: The $100,000 Grant for a Video Game?” by Nicolae
Since 2017, EA Funds has been providing grants across four distinct cause areas. While payout reports are available, there is a lack of reports detailing the outcomes of these grants, so out of curiosity I delved into the Grants Database to review some of the proposals that received funding and evaluate their outcomes. Some of the findings were quite unexpected, particularly for the Long-Term Future Fund and the EA Infrastructure Fund.

The case involving a $100,000 grant for a video game

In July 2022, EA Funds approved a $100,000 grant to Lone Pine Games, LLC, for developing and marketing a video game designed to explain the Stop Button Problem to the public and STEM professionals. Outcomes from looking into Lone Pine Games, LLC: After almost two years, there are no online mentions of such a game being developed by this company, except for the note on the [...]

---
First published:
June 3rd, 2024
Source:
https://forum.effectivealtruism.org/posts/7Dp9phDw28h3dbAns/review-of-past-grants-the-usd100-000-grant-for-a-video-game
---
Narrated by TYPE III AUDIO.

Jun 2, 2024 • 5min
“A Scar Worth Bearing: My Improbable Story of Kidney Donation” by Elizabeth Klugh
TL;DR: I donated my kidney and you can too. If that's too scary, consider blood donation, the bone marrow registry, post-mortem organ donation, or other living donations (birth tissue, liver donation). Kidney donation sucks. It's scary, painful, disruptive, scarring. My friends and family urged me not to; words were exchanged, tears were shed. My risk of preeclampsia tripled; my risk of end-stage renal disease multiplied by five. I had to turn down two job offers while prepping for donation. It is easy to read philosophical arguments in favor of donation, agree with them, and put the book back on the shelf. But it is different when your friend needs a kidney: Love bears all things, believes all things, hopes all things, endures all things. Eighteen months ago, at 28 years old, my friend Alan started losing weight. He developed a distinctive butterfly-shaped rash and became too weak to eat. On February [...]

---
First published:
May 30th, 2024
Source:
https://forum.effectivealtruism.org/posts/xiDKb3XvJxKiwNevJ/a-scar-worth-bearing-my-improbable-story-of-kidney-donation
---
Narrated by TYPE III AUDIO.