
EA Forum Podcast (Curated & popular)

Latest episodes

Jun 24, 2024 • 33min

“Kaya Guides Pilot Results” by RachelAbbott

Summary. Who We Are: Kaya Guides runs a self-help course on WhatsApp to reduce depression at scale in low- and middle-income countries. We help young adults with moderate to severe depression. Kaya currently operates in India. We are the world's first nonprofit implementer of Step-by-Step, the World Health Organization's digital guided self-help program, which was proven effective in two RCTs. Pilot: We ran a pilot with 103 participants in India to assess the feasibility of implementing our program on WhatsApp with our target demographic and to generate early indicators of its effectiveness. Results: 72% of program completers experienced a depression reduction of 50% or greater. 36% were depression-free. 92% moved down at least one classification in severity (i.e., they shifted from severe to moderately severe, moderately severe to moderate, etc.). The average reduction in score was 10 points on the 27-point PHQ-9 depression questionnaire. Context: To offer a few [...]

Outline:
(04:44) Part 1. About the Kaya Guides Program
(04:49) What is Kaya Guides and what do we do?
(05:13) How the program works
(05:35) Evidence base
(06:11) Why guided self-help is effective
(06:50) Why this work matters
(07:52) Program design
(08:46) Target participant profile
(09:14) Impact measurement
(10:00) Part 2. Pilot Impact and Cost-Effectiveness
(10:18) Impacts on depression
(11:01) Comparison
(12:10) Effect Size Estimate
(14:35) Takeaway
(15:02) Cost-Effectiveness
(15:29) Pilot Cost-Effectiveness
(17:15) 2025 Projected Cost-Effectiveness
(19:15) Program Impacts According to Participants
(22:50) Part 3. Recruitment
(22:55) Quick Stats
(24:20) Participant Profile
(25:38) Part 4. Retention
(27:45) Part 5. Participant Feedback
(31:19) What's Next
(32:05) Support Us

First published: June 16th, 2024
Source: https://forum.effectivealtruism.org/posts/6NaRJpSn2zfRSnGYN/kaya-guides-pilot-results
Narrated by TYPE III AUDIO.
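The "classification" language in the results above refers to PHQ-9 severity bands. As a rough illustration (my own sketch in Python; the cut-offs below are the commonly published PHQ-9 bands and are an assumption, not taken from the episode, about what Kaya Guides uses):

```python
# Illustrative sketch: standard PHQ-9 severity bands (assumed to match the
# classifications referenced in the episode; not code from Kaya Guides).
def phq9_severity(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to a severity classification."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 scores range from 0 to 27")
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"

# Hypothetical participant: a 10-point reduction (the pilot's average) can move
# someone from "severe" (e.g. 21) down to "moderate" (11), i.e. down two bands.
print(phq9_severity(21), "->", phq9_severity(21 - 10))
```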
Jun 23, 2024 • 51min

“Are our Top Charities saving the same lives each year?” by GiveWell

This is a link post. Author: Adam Salisbury, Senior Research Associate

In a nutshell: We’ve had a longstanding concern that some of our top charity programs, including insecticide-treated nets, seasonal malaria chemoprevention (SMC), and vitamin A supplementation (VAS), may have less impact than we've estimated due to “repetitive saving.” These programs provide health interventions to the same children under 5 years old annually or every 3 years. Our cost-effectiveness models currently assume that different lives are saved each year from these interventions. We think it's possible the programs are actually saving the same high-risk children over and over. In a worst-case scenario, this could mean the programs are saving 80% fewer cumulative lives than we thought. Based on a shallow review of empirical evidence and conversations with experts, our best guess is that we're only overstating the total lives saved by these programs by around 10%, because: Under-5 deaths [...]

Outline:
(00:12) In a nutshell
(02:46) What's the issue?
(06:44) What did we find?
(11:53) How could we be wrong?
(14:31) What's the issue?
(17:35) Why we don’t think this is a big concern
(18:22) Driver 1: Skewness of mortality risk
(20:42) Driver 2: Persistence of the at-risk population
(25:12) Modeling these drivers
(34:08) Sensitivity checks
(35:35) Outside the model checks
(37:34) How could we be wrong?
(40:28) Are we returning children to normal life expectancy?
(42:34) Driver 1: Skewness of mortality risk across the life cycle
(43:43) Driver 2: Persistence of the at-risk population
(48:13) Moral difficulties raised by the life expectancy question

First published: June 18th, 2024
Source: https://forum.effectivealtruism.org/posts/jNAFTJWpKK89pisaQ/are-our-top-charities-saving-the-same-lives-each-year
Narrated by TYPE III AUDIO.
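To make the "repetitive saving" concern concrete, here is a toy simulation (my own illustration, not GiveWell's cost-effectiveness model; the population size, risk levels, and program effect are made-up assumptions). It compares a naive count that treats every averted death as a new life saved with the number of distinct children actually saved, under evenly spread versus concentrated mortality risk:

```python
import random

# Toy illustration, not GiveWell's model. Assumptions: 10,000 covered children,
# 5 annual program rounds, and a program that averts half of covered deaths.
random.seed(0)
N, YEARS, EFFECT = 10_000, 5, 0.5

def simulate(risks):
    """Return (naively counted averted deaths, distinct children saved)."""
    events, saved = 0, set()
    for _ in range(YEARS):
        for child, risk in enumerate(risks):
            if random.random() < risk and random.random() < EFFECT:
                events += 1        # a model assuming "different lives each year"
                saved.add(child)   # the child actually saved (possibly repeated)
    return events, len(saved)

# Scenario A: 1% annual risk spread evenly across all children.
even_risks = [0.01] * N
# Scenario B: the same total risk concentrated in a persistent high-risk 10%.
skewed_risks = [0.10] * (N // 10) + [0.0] * (N - N // 10)

for label, risks in [("even risk", even_risks), ("concentrated risk", skewed_risks)]:
    events, distinct = simulate(risks)
    print(f"{label}: {events} averted deaths counted naively, "
          f"{distinct} distinct children saved")
```

The gap between the two counts grows as mortality risk is more skewed and more persistent, which is what the post's Driver 1 and Driver 2 examine.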
Jun 18, 2024 • 11min

“Why so many ‘racists’ at Manifest?” by Austin

Manifest 2024 is a festival that we organized last weekend in Berkeley. By most accounts, it was a great success. On our feedback form, the average response to “would you recommend to a friend” was a 9.0/10. Reviewers said nice things like “one of the best weekends of my life” and “dinners and meetings and conversations with people building local cultures so achingly beautiful they feel almost like dreams” and “I’ve always found tribalism mysterious, but perhaps that was just because I hadn’t yet found my tribe.”

[Image: Arnold Brooks running a session on Aristotle's Metaphysics. More photos of Manifest here.]

However, a recent post in The Guardian and a review on the EA Forum highlight an uncomfortable fact: we invited a handful of controversial speakers to Manifest, whom these authors call out as “racist”. Why did we invite these folks? First: our sessions and guests were mostly not controversial — [...]

Outline:
(01:01) First: our sessions and guests were mostly not controversial — despite what you may have heard
(03:03) Okay, but there sure seemed to be a lot of controversial ones…
(06:03) Bringing people together with prediction markets
(07:31) Anyways, controversy bad
(08:57) Aside: Is Manifest an Effective Altruism event?

First published: June 18th, 2024
Source: https://forum.effectivealtruism.org/posts/34pz6ni3muwPnenLS/why-so-many-racists-at-manifest
Narrated by TYPE III AUDIO.
Jun 15, 2024 • 4min

“Help Fund Insect Welfare Science” by Bob Fischer, Daniela R. Waldhorn, abrahamrowe

The Arthropoda Foundation

Tens of trillions of insects are used or killed by humans across dozens of industries. Although insects are the most numerous animals reared by animal industries, we know next to nothing about what's good or bad for them. And right now, funding for this work is scarce. Traditional science funders won’t pay for it, and within EA, the focus is on advocacy, not research. So, welfare science needs your help. We’re launching the Arthropoda Foundation, a fund to ensure that insect welfare science gets the essential resources it needs to provide decision-relevant answers to pressing questions. Every dollar we raise will be granted to research projects that can’t be funded any other way. We’re in a critical moment for this work. Over the last year, field-building efforts have accelerated, setting up academic labs that can tackle key studies. However, funding for these studies is [...]

Outline:
(00:10) The Arthropoda Foundation
(01:17) Why do we need a fund?
(02:55) Team

First published: June 14th, 2024
Source: https://forum.effectivealtruism.org/posts/2NsS7gjccJAKMf4co/help-fund-insect-welfare-science
Narrated by TYPE III AUDIO.
Jun 15, 2024 • 14min

“Maybe let the non-EA world train you” by ElliotT

This post is for EAs at the start of their careers who are considering which organisations to apply to, and their next steps in general.

Conclusion up front: It can be really hard to get that first job out of university. If you don’t get your top picks, your less exciting backup options can still be great for having a highly impactful career. If those first few years of work experience aren’t your best pick, they will still be useful as a place where you can ‘learn how to job’, save some money, and then pivot or grow from there.

The main reasons are: The EA job market can be grim. Securing a job at an EA organisation straight out of university is highly competitive, and often results in not getting a job at all, or in chaotic job experiences due to the nascent nature of many EA orgs. An alternative [...]

Outline:
(01:58) What's the problem? Three failure modes of trying to get an EA job
(06:15) Maybe let the non-EA world train you
(08:50) Let's get specific. Some of my story
(11:45) Caveats
(12:58) Wrapping up

First published: June 14th, 2024
Source: https://forum.effectivealtruism.org/posts/ZvXBSs9Nz3dKBKcAo/maybe-let-the-non-ea-world-train-you
Narrated by TYPE III AUDIO.
Jun 13, 2024 • 5min

“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman

Crossposted from AI Lab Watch. Subscribe on Substack.

Introduction. Anthropic has an unconventional governance mechanism: an independent "Long-Term Benefit Trust" elects some of its board. Anthropic sometimes emphasizes that the Trust is an experiment, but mostly points to it to argue that Anthropic will be able to promote safety and benefit-sharing over profit.[1] But the Trust's details have not been published and some information Anthropic has shared is concerning. In particular, Anthropic's stockholders can apparently overrule, modify, or abrogate the Trust, and the details are unclear. Anthropic has not publicly demonstrated that the Trust would be able to actually do anything that stockholders don't like.

The facts. There are three sources of public information on the Trust:
The Long-Term Benefit Trust (Anthropic 2023)
Anthropic Long-Term Benefit Trust (Morley et al. 2023)
The $1 billion gamble to ensure AI doesn't destroy humanity (Vox: Matthews 2023)
They say there's [...]

Outline:
(00:53) The facts
(02:51) Conclusion

The original text contained 2 footnotes which were omitted from this narration.

First published: May 27th, 2024
Source: https://forum.effectivealtruism.org/posts/JARcd9wKraDeuaFu5/maybe-anthropic-s-long-term-benefit-trust-is-powerless
Narrated by TYPE III AUDIO.
Jun 12, 2024 • 37min

“Summary of Situational Awareness - The Decade Ahead” by OscarD🔸

Original by Leopold Aschenbrenner; this summary is not commissioned or endorsed by him.

Short Summary: Extrapolating existing trends in compute, spending, algorithmic progress, and energy needs implies AGI (remote jobs being completely automatable) by ~2027. AGI will greatly accelerate AI research itself, leading to vastly superhuman intelligences being created ~1 year after AGI. Superintelligence will confer a decisive strategic advantage militarily by massively accelerating all spheres of science and technology. Electricity use will be a bigger bottleneck on scaling datacentres than investment, but is still doable domestically in the US by using natural gas. AI safety efforts in the US will be mostly irrelevant if other actors steal the model weights of an AGI. US AGI research must employ vastly better cybersecurity, to protect both model weights and algorithmic secrets. Aligning superhuman AI systems is a difficult technical challenge, but probably doable, and we must devote lots of [...]

Outline:
(00:13) Short Summary
(02:16) 1. From GPT-4 to AGI: Counting the OOMs
(02:24) Past AI progress
(05:38) Training data limitations
(06:42) Trend extrapolations
(07:58) The modal year of AGI is soon
(09:30) 2. From AGI to Superintelligence: the Intelligence Explosion
(09:37) The basic intelligence explosion case
(10:47) Objections and responses
(14:07) The power of superintelligence
(16:29) III. The Challenges
(16:32) IIIa. Racing to the Trillion-Dollar Cluster
(21:12) IIIb. Lock Down the Labs: Security for AGI
(21:20) The power of espionage
(22:24) Securing model weights
(24:01) Protecting algorithmic insights
(24:56) Necessary steps for improved security
(26:50) IIIc. Superalignment
(29:41) IIId. The Free World Must Prevail
(32:41) 4. The Project
(35:12) 5. Parting Thoughts
(36:17) Responses to Situational Awareness

The original text contained 1 footnote which was omitted from this narration.

First published: June 8th, 2024
Source: https://forum.effectivealtruism.org/posts/zmRTWsYZ4ifQKrX26/summary-of-situational-awareness-the-decade-ahead
Narrated by TYPE III AUDIO.
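As a rough illustration of the summary's "counting the OOMs" extrapolation (my own sketch; the annual growth rates and the size of the target jump below are placeholder assumptions, not Aschenbrenner's exact figures):

```python
# Illustrative sketch of "counting the OOMs" of effective compute.
# All rates are placeholder assumptions, not figures from the original essay.
OOMS_PER_YEAR = {
    "physical compute": 0.5,                  # assumed OOMs/year
    "algorithmic efficiency": 0.5,            # assumed OOMs/year
    "unhobbling (post-training gains)": 0.3,  # assumed OOMs/year
}
TARGET_OOMS = 5.0  # assumed size of another GPT-2 -> GPT-4-sized jump

total_per_year = sum(OOMS_PER_YEAR.values())
years_needed = TARGET_OOMS / total_per_year
print(f"~{total_per_year:.1f} OOMs of effective compute per year")
print(f"-> a GPT-2-to-GPT-4-sized jump in ~{years_needed:.1f} years")
```

Under these made-up rates the jump arrives in roughly four years, which is the shape of the argument behind the ~2027 AGI estimate; the essay's actual numbers and caveats are in the linked summary.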
Jun 11, 2024 • 9min

“I doubled the world record cycling without hands for AMF” by Vincent van der Holst

A couple of weeks ago I announced I was going to try to break the world record for cycling without hands for AMF. That post also explains why I wanted to break that record. Last Friday we broke that record and raised nearly €10,000 for AMF. Here's what happened on Friday. You can still donate here.

What was the old record? Canadian Robert John Murray rode the old record of 130.29 kilometers in 5:37 hours in Calgary on June 12, 2023. His average speed was 23.2 kilometers per hour. See the Guinness World Records page here. I managed to double the record, and these were my stats.

How did the record attempt itself go? On Friday, June 7, I started the record attempt on the closed cycling course of WV Amsterdam just after 6 am. I got up at half past four and immediately drank a [...]

First published: June 11th, 2024
Source: https://forum.effectivealtruism.org/posts/5ru7nEtC6mufuBXbk/i-doubled-the-world-record-cycling-without-hands-for-amf
Narrated by TYPE III AUDIO.
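A quick arithmetic check of the old record's quoted average speed (using only the distance and time stated above):

```python
# Check: 130.29 km in 5:37 hours gives the quoted ~23.2 km/h average.
distance_km = 130.29
hours = 5 + 37 / 60
print(round(distance_km / hours, 1), "km/h")
```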
Jun 9, 2024 • 2min

“Announcing a $6,000,000 endowment for NYU Mind, Ethics, and Policy” by Sofia_Fogel

The NYU Mind, Ethics, and Policy Program will soon become the NYU Center for Mind, Ethics, and Policy (CMEP), our future secured by a generous $6,000,000 endowment. The CMEP Endowment Fund was established in May 2024 with a $5,000,000 gift from The Navigation Fund and a $1,000,000 gift from Polaris Ventures. We now welcome contributions from other supporters too, with deep gratitude to our founding supporters. Since our launch in Fall 2022, the NYU Mind, Ethics, and Policy Program has stood at the forefront of academic inquiry into the nature and intrinsic value of nonhuman minds. CMEP will continue this work, seeking to advance understanding of the consciousness, sentience, sapience, moral status, legal status, and political status of animals and AI systems via research, outreach, and field building in science, philosophy, and policy. You can read the press release about the endowment here. Thanks to everyone who [...]

First published: May 31st, 2024
Source: https://forum.effectivealtruism.org/posts/eu5ykCAKLtPTyb8eM/announcing-a-usd6-000-000-endowment-for-nyu-mind-ethics-and
Narrated by TYPE III AUDIO.
Jun 5, 2024 • 6min

“I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027” by Vasco Grilo

Agreement

78% of my donations so far have gone to the Long-Term Future Fund[1] (LTFF), which mainly supports AI safety interventions. However, I have become increasingly sceptical about the value of existential risk mitigation, and currently think the best interventions are in the area of animal welfare[2]. As a result, I realised it made sense for me to arrange a bet with someone very worried about AI in order to increase my donations to animal welfare interventions. Gregory Colbourn (Greg) was the 1st person I thought of. He said: "I think AGI [artificial general intelligence] is 0-5 years away and p(doom|AGI) is ~90%". I doubt doom in the sense of human extinction is anywhere near as likely as suggested by the above. I guess the annual extinction risk over the next 10 years is 10^-7, so I proposed a bet to Greg similar to the end-of-the-world bet between [...]

Outline:
(00:07) Agreement
(03:53) Impact
(05:18) Acknowledgements

The original text contained 5 footnotes which were omitted from this narration.

First published: June 4th, 2024
Source: https://forum.effectivealtruism.org/posts/GfGxaPBAMGcYjv8Xd/i-bet-greg-colbourn-10-keur-that-ai-will-not-kill-us-all-by
Narrated by TYPE III AUDIO.
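For context on the quoted risk figure, an annual extinction risk of 10^-7 compounds to roughly 10^-6 over the 10-year window (a quick check of the arithmetic implied above; the bet's payoff terms are not reproduced here):

```python
# Cumulative extinction probability over 10 years, assuming an independent
# annual risk of 1e-7 (the figure quoted in the post).
annual_risk = 1e-7
years = 10
cumulative = 1 - (1 - annual_risk) ** years
print(f"{cumulative:.2e}")  # ~1.00e-06
```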
