

EA Forum Podcast (Curated & popular)
EA Forum Team
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Episodes

Jun 12, 2024 • 37min
“Summary of Situational Awareness - The Decade Ahead” by OscarD🔸
Original by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.

Short Summary: Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027. AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI. Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology. Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas. AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets. Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of [...]

Outline:
(00:13) Short Summary
(02:16) 1. From GPT-4 to AGI: Counting the OOMs
(02:24) Past AI progress
(05:38) Training data limitations
(06:42) Trend extrapolations
(07:58) The modal year of AGI is soon
(09:30) 2. From AGI to Superintelligence: the Intelligence Explosion
(09:37) The basic intelligence explosion case
(10:47) Objections and responses
(14:07) The power of superintelligence
(16:29) III. The Challenges
(16:32) IIIa. Racing to the Trillion-Dollar Cluster
(21:12) IIIb. Lock Down the Labs: Security for AGI
(21:20) The power of espionage
(22:24) Securing model weights
(24:01) Protecting algorithmic insights
(24:56) Necessary steps for improved security
(26:50) IIIc. Superalignment
(29:41) IIId. The Free World Must Prevail
(32:41) 4. The Project
(35:12) 5. Parting Thoughts
(36:17) Responses to Situational Awareness

The original text contained 1 footnote which was omitted from this narration.
---
First published:
June 8th, 2024
Source:
https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead
---
Narrated by TYPE III AUDIO.

Jun 11, 2024 • 9min
“I doubled the world record cycling without hands for AMF” by Vincent van der Holst
A couple of weeks ago I announced I was going to try to break the world record for cycling without hands for AMF. That post also explains why I wanted to break the record. Last Friday we broke it and raised nearly €10,000 for AMF. Here's what happened on Friday. You can still donate here. What was the old record? Canadian Robert John Murray set the old record of 130.29 kilometers in 5:37 hours in Calgary on June 12, 2023, at an average speed of 23.2 kilometers per hour. See the Guinness World Records page here. I managed to double the record; these were my stats. How did the record attempt itself go? On Friday, June 7, I started the record attempt on the closed cycling course of WV Amsterdam just after 6 am. I got up at half past four and immediately drank a [...] ---
First published:
June 11th, 2024
Source:
https://forum.effectivealtruism.org/posts/5ru7nEtC6mufuBXbk/i-doubled-the-world-record-cycling-without-hands-for-amf
---
Narrated by TYPE III AUDIO.

Jun 9, 2024 • 2min
“Announcing a $6,000,000 endowment for NYU Mind, Ethics, and Policy” by Sofia_Fogel
The NYU Mind, Ethics, and Policy Program will soon become the NYU Center for Mind, Ethics, and Policy (CMEP), our future secured by a generous $6,000,000 endowment. The CMEP Endowment Fund was established in May 2024 with a $5,000,000 gift from The Navigation Fund and a $1,000,000 gift from Polaris Ventures. We now welcome contributions from other supporters too, with deep gratitude to our founding supporters. Since our launch in Fall 2022, the NYU Mind, Ethics, and Policy Program has stood at the forefront of academic inquiry into the nature and intrinsic value of nonhuman minds. CMEP will continue this work, seeking to advance understanding of the consciousness, sentience, sapience, moral status, legal status, and political status of animals and AI systems via research, outreach, and field building in science, philosophy, and policy. You can read the press release about the endowment here. Thanks to everyone who [...] ---
First published:
May 31st, 2024
Source:
https://forum.effectivealtruism.org/posts/eu5ykCAKLtPTyb8eM/announcing-a-usd6-000-000-endowment-for-nyu-mind-ethics-and
---
Narrated by TYPE III AUDIO.

Jun 5, 2024 • 6min
“I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027” by Vasco Grilo
Agreement: 78% of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the first person I thought of. He said: "I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90%." I doubt doom in the sense of human extinction is anywhere near as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between [...]

Outline:
(00:07) Agreement
(03:53) Impact
(05:18) Acknowledgements

The original text contained 5 footnotes which were omitted from this narration.
---
First published:
June 4th, 2024
Source:
https://forum.effectivealtruism.org/posts/GfGxaPBAMGcYjv8Xd/i-bet-greg-colbourn-10-keur-that-ai-will-not-kill-us-all-by
---
Narrated by TYPE III AUDIO.

Jun 4, 2024 • 4min
“Review of Past Grants: The $100,000 Grant for a Video Game?” by Nicolae
Since 2017, EA Funds has been providing grants across four distinct cause areas. While payout reports are available, there is a lack of reports detailing the outcomes of these grants, so, out of curiosity, I delved into the Grants Database to review some of the proposals that received funding and evaluate their outcomes. Some of the findings were quite unexpected, particularly for the Long-Term Future Fund and the EA Infrastructure Fund. The case involving a $100,000 grant for a video game: In July 2022, EA Funds approved a $100,000 grant to Lone Pine Games, LLC, for developing and marketing a video game designed to explain the Stop Button Problem to the public and STEM professionals. Outcomes from looking into Lone Pine Games, LLC: After almost two years, there are no online mentions of such a game being developed by this company, except for the note on the [...] ---
First published:
June 3rd, 2024
Source:
https://forum.effectivealtruism.org/posts/7Dp9phDw28h3dbAns/review-of-past-grants-the-usd100-000-grant-for-a-video-game
---
Narrated by TYPE III AUDIO.

Jun 2, 2024 • 5min
“A Scar Worth Bearing: My Improbable Story of Kidney Donation” by Elizabeth Klugh
TL;DR: I donated my kidney and you can too. If that's too scary, consider blood donation, the bone marrow registry, post-mortem organ donation, or other living donations (birth tissue, liver donation). Kidney donation sucks. It's scary, painful, disruptive, scarring. My friends and family urged me not to; words were exchanged, tears were shed. My risk of preeclampsia tripled, and that of end-stage renal disease multiplied by five. I had to turn down two job offers while prepping for donation. It is easy to read philosophical arguments in favor of donation, agree with them, and put the book back on the shelf. But it is different when your friend needs a kidney: Love bears all things, believes all things, hopes all things, endures all things. Eighteen months ago, at 28 years old, my friend Alan started losing weight. He developed a distinctive butterfly-shaped rash and became too weak to eat. On February [...] ---
First published:
May 30th, 2024
Source:
https://forum.effectivealtruism.org/posts/xiDKb3XvJxKiwNevJ/a-scar-worth-bearing-my-improbable-story-of-kidney-donation
---
Narrated by TYPE III AUDIO.

Jun 2, 2024 • 22min
“Introducing Ansh: A Charity Entrepreneurship Incubated Charity” by Supriya
Ansh, a charity focused on delivering Kangaroo Care to premature babies in India, discusses its impact of saving 4 lives per month per facility. It aims to double this impact by expanding to more hospitals. The episode explores the challenges and successes of implementing Kangaroo Care, with plans to scale up and expand to other countries in the future.

May 30, 2024 • 11min
“Against a Happiness Ceiling: Replicating Killingsworth & Kahneman (2022)” by charlieh943
Epistemic Status: somewhat confident; I may have made coding mistakes. R code is here if you feel like checking. Introduction: In their 2022 article, Matthew Killingsworth and Daniel Kahneman looked to reconcile the results from two of their papers. Kahneman (2010) had reported that above a certain income level ($75,000 USD), extra income had no association with increases in individual happiness. Killingsworth (2021) suggested that it did. Kahneman and Killingsworth (henceforth KK) claimed they had resolved this conflict by (correctly) hypothesizing that: 1) there is an unhappy minority, whose unhappiness diminishes with rising income up to a threshold, then shows no further progress (i.e., Kahneman's leveling off); 2) in the happier majority, happiness continues to rise with income even in the high range of incomes (i.e., Killingsworth's continued log-linear finding). (More info on this discussion can be found in Spencer Greenberg's thoroughly enjoyable blog post. Spencer [...]

Outline:
(00:18) Introduction
(03:04) Summary of Findings
(04:07) Results
(05:07) Median Regressions
(05:21) Figure 1
(06:16) Regressions at Various Percentiles
(06:55) Figure 2
(08:38) Implications
(10:50) Table 1: Happiness at Different Percentiles (above, KK; below, me)

The original text contained 2 footnotes which were omitted from this narration.
---
First published:
May 28th, 2024
Source:
https://forum.effectivealtruism.org/posts/A5voYMFhPkWTrGkuJ/against-a-happiness-ceiling-replicating-killingsworth-and
---
Narrated by TYPE III AUDIO.

May 29, 2024 • 1min
“89% of cage-free egg commitments with deadlines of 2023 or earlier have been fulfilled” by ASuchy
This is a link post. The report concludes that the cage-free fulfillment rate is maintaining its momentum at 89%. The producer, retailer, and manufacturer industries are some of the most cage-free forward sectors when it comes to fulfillment. Some major companies across sectors that fulfilled their commitments in 2023 (or years ahead of schedule) include Hershey (Global), Woolworths (South Africa), Famous Brands (Africa), Scandic Hotels (Europe), Monolog Coffee (Indonesia), Special Dog (Brazil), Azzuri Group (Europe), McDonald's (US), TGI Fridays (US), and The Cheesecake Factory (US). ---
First published:
May 24th, 2024
Source:
https://forum.effectivealtruism.org/posts/SG38cPw5C7wLXAeFn/89-of-cage-free-egg-commitments-with-deadlines-of-2023-or
---
Narrated by TYPE III AUDIO.

May 24, 2024 • 2min
“Articles about recent OpenAI departures” by bruce
This is a link post. A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them. Some quotes perhaps worth highlighting: "Even when the team was functioning at full capacity, that 'dedicated investment' was home to a tiny fraction of OpenAI's researchers and was promised only 20 percent of its computing power — perhaps the most important resource at an AI company. Now, that computing power may be siphoned off to other OpenAI teams, and it's unclear if there'll be much focus on avoiding catastrophic risk from future AI models." - Jan Leike, suggesting that compute for safety may have been deprioritised even despite the 20% commitment. (Wired claims that OpenAI confirms that their "superalignment team is no more".) "I joined with substantial hope that OpenAI [...] The original text contained 1 footnote which was omitted from this narration. ---
First published:
May 17th, 2024
Source:
https://forum.effectivealtruism.org/posts/ckYw5FZFrejETuyjN/articles-about-recent-openai-departures
---
Narrated by TYPE III AUDIO.