
EA Forum Podcast (Curated & popular)

Latest episodes

Jul 19, 2024 • 2min

“Warren Buffett changes giving plans (for the worse)” by katriel

This is a link post. Folks in philanthropy and development definitely know that the Gates Foundation is the largest private player in that realm by far. Until recently it was likely to get even larger, as Warren Buffett had stated that the Foundation would receive the bulk of his assets when he died. A few weeks ago, Buffett announced that he had changed his mind and was instead going to create a new trust for his assets, to be jointly managed by his children. It's a huge change, but I don't think very many people took note of what it means ("A billionaire is going to create his own foundation rather than giving to an existing one; seems unsurprising."). So I created this chart: The new Buffett-funded trust is going to be nearly twice as large as the Gates Foundation, and nearly 150% larger than most of the other brand [...]
First published: July 15th, 2024
Source: https://forum.effectivealtruism.org/posts/bqi2M8oayRDvuGQg9/warren-buffett-changes-giving-plans-for-the-worse
Narrated by TYPE III AUDIO.
Jul 19, 2024 • 30min

“Rethink Priorities’ Moral Parliament Tool” by Derek Shiller, arvomm, Bob Fischer, Hayley Clatterbuck

Link to tool: https://parliament.rethinkpriorities.org
(1 min) Introductory Video
(6 min) Basic Features Video

Executive Summary: This post introduces Rethink Priorities’ Moral Parliament Tool, which models ways an agent can make decisions about how to allocate goods in light of normative uncertainty. We treat normative uncertainty as uncertainty over worldviews. A worldview encompasses a set of normative commitments, including first-order moral theories, values, and attitudes toward risk. We represent worldviews as delegates in a moral parliament who decide on an allocation of funds to a diverse array of charitable projects. Users can configure the parliament to represent their own credences in different worldviews and choose among several procedures for finding their best all-things-considered philanthropic allocation. The relevant procedures are metanormative methods. These methods take worldviews and our credences in them as inputs and produce some action guidance as an output. Some proposed methods have taken inspiration from political or market processes involving agents [...]

Outline:
(00:24) Executive Summary
(02:18) Introduction
(03:47) How does it work?
(04:21) Worldviews
(08:07) Projects
(10:45) Metanormative parliament
(12:11) The Moral Parliament Tool at work
(12:16) (How) do empirical assumptions matter?
(12:20) Uncertainties about scale
(14:13) How much does scale matter?
(16:10) An example project: The Cassandra Fund
(19:15) What would an EA parliament do?
(19:21) Normative uncertainty among EAs
(21:17) Results
(24:12) Takeaways
(26:40) Getting Started
(27:04) Acknowledgments

The original text contained 9 footnotes which were omitted from this narration.
First published: July 17th, 2024
Source: https://forum.effectivealtruism.org/posts/HxphJhSiXBQ74uxJX/rethink-priorities-moral-parliament-tool
Narrated by TYPE III AUDIO.
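For readers who want a concrete feel for the input/output shape described above (worldviews plus credences in, an allocation out), here is a minimal, hypothetical Python sketch of one simple metanormative aggregation: credence-weighted scoring with a proportional split. The worldview names, project names, scores, and credences are all made up for illustration, and this is not the tool's own procedure, which offers several different methods.

```python
# Illustrative sketch only: a toy credence-weighted allocation across worldviews.
# Worldview names, projects, scores, and credences are hypothetical, and this is
# just one possible aggregation rule, not the Moral Parliament Tool's own methods.

worldviews = {
    # name: (credence, {project: value the worldview assigns to $1 sent there})
    "worldview_A": (0.5, {"global_health": 0.6, "animal_welfare": 0.9, "x_risk": 1.0}),
    "worldview_B": (0.3, {"global_health": 1.0, "animal_welfare": 0.4, "x_risk": 0.2}),
    "worldview_C": (0.2, {"global_health": 0.8, "animal_welfare": 0.5, "x_risk": 0.3}),
}

def credence_weighted_scores(worldviews):
    """Sum each project's score across worldviews, weighted by credence."""
    totals = {}
    for credence, scores in worldviews.values():
        for project, score in scores.items():
            totals[project] = totals.get(project, 0.0) + credence * score
    return totals

def proportional_allocation(budget, worldviews):
    """Split the budget across projects in proportion to the weighted scores."""
    totals = credence_weighted_scores(worldviews)
    grand_total = sum(totals.values())
    return {project: budget * score / grand_total for project, score in totals.items()}

if __name__ == "__main__":
    for project, amount in proportional_allocation(1_000_000, worldviews).items():
        print(f"{project}: ${amount:,.0f}")
```

A real parliamentary procedure would differ (for example, bargaining between delegates rather than simple weighted averaging), which is exactly the kind of choice the tool lets users compare.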
Jul 17, 2024 • 1h 24min

“Destabilization of the United States: The top X-factor EA neglects?” by Yelnats T.J.

Highlights:
- Destabilization could be the biggest setback for great power conflict, AI, bio-risk, and climate disruption.
- Polarization plays a role in nearly every causal pathway leading to destabilization of the United States, and there is no indication polarization will decrease.
- The United States fits the pattern of past democracies that have descended into authoritarian regimes in many key aspects.
- The most recent empirical research on civil conflicts suggests the United States is in a category that has a 4% annual risk of falling into a civil conflict.
- In 2022 (when this was originally written), Mike Berkowitz, ED of Democracy Funders Network and 80,000 Hours guest, believed there was a 50% chance American democracy fails in the next 6 years.
- For every dollar spent on depolarization efforts, there are probably at least a hundred dollars spent aggravating the culture war.
- Destabilization of the United States could wipe out billions of dollars of pledged EA funds.

Note following the [...]

Outline:
(00:07) Highlights
(01:16) Note following the assassination attempt of former President Trump
(02:45) Preface
(06:10) Acknowledgements
(06:24) Summary
(09:08) Possibility
(10:02) Big picture
(10:06) Authoritarianism
(12:57) Civil conflict
(16:50) Polarization
(20:40) How close we already came (January 6th)
(26:42) A note on the military counter argument
(28:18) Top reasons why the United States wouldn’t destabilize
(29:14) What I would have included in a longer version
(29:51) Conclusion
(31:36) Importance
(31:59) Global ramifications and great power conflict
(33:17) Artificial Intelligence and bio-risk
(33:22) Applicable to both
(34:27) Artificial Intelligence
(34:48) Accelerating climate disruption
(34:52) Authoritarianism
(35:25) Civil conflict
(36:08) Significance
(37:02) Effects on the Effective Altruism movement
(37:06) Talent
(37:29) Funds
(38:15) Plausible scenario
(39:01) Neglectedness
(40:36) Through the lens of polarization
(42:03) Tractability
(44:15) What is needed
(44:18) The broad needs
(44:46) Structural-reform needs
(46:03) Needs for stopping polarizing forces
(46:38) Needs for depolarizing the population
(46:58) Why it's difficult
(47:02) Structural reform
(47:36) Stopping polarizing forces
(49:01) Depolarize the population
(50:36) Where there is traction
(50:40) Ballot initiatives
(51:29) Robust federalism
(51:48) Prescription (what OP/EA could do)
(53:04) Funding and scaling existing efforts
(53:09) Create an operation focused on recruiting more funders and key non-funder partners to this effort
(54:00) Fund ballot initiative efforts and organizations
(56:15) Fund existing depolarization efforts and organizations
(56:51) Fund new organizations to fill gaps through an approach similar to the arrangement between CE and FTX for biosecurity
(57:54) Fund experiments/projects that will give us actionable information
(58:30) Miscellaneous interventions
(58:34) Preempting accelerationist events
(01:00:19) Invest in local journalism
(01:00:51) Promote sincere populist leadership in the Republican apparatus to replace culture warriors
(01:03:49) Invest in mutual aid networks
(01:04:17) Strengthening unions and preparing for a general strike
(01:05:32) My personal favorite
(01:05:36) Left-Right coalitions to run a slate of ballot-initiatives for structural reform
(01:06:49) Uncertainties (why OP/EA should do a medium-level investigation)
(01:08:21) Conclusion/call to action

First published: July 15th, 2024
Source: https://forum.effectivealtruism.org/posts/kmx3rKh2K4ANwMqpW/destabilization-of-the-united-states-the-top-x-factor-ea
Narrated by TYPE III AUDIO.
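A side note on the 4% annual civil-conflict figure quoted in the highlights above: annual risks compound, so the implied multi-year risk is larger than 4%. A quick illustrative calculation, assuming (purely for the sake of arithmetic) a constant and independent annual risk:

```python
# Illustrative arithmetic only: how a constant 4% annual risk compounds over time,
# assuming independence between years (an assumption made here for illustration).
annual_risk = 0.04

for years in (1, 6, 10):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"over {years:2d} year(s): {cumulative:.1%}")
# over  1 year(s): 4.0%
# over  6 year(s): 21.7%
# over 10 year(s): 33.5%
```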
Jul 17, 2024 • 42min

“Against Aschenbrenner: How ‘Situational Awareness’ constructs a narrative that undermines safety and threatens humanity” by Gideon Futerman

Summary/Introduction: Aschenbrenner's ‘Situational Awareness’ (Aschenbrenner, 2024) promotes a dangerous narrative of national securitisation. This narrative is not descriptive, despite what Aschenbrenner suggests; rather, it is performative, constructing a particular notion of security that makes the dangerous world Aschenbrenner describes more likely to happen. This piece draws on the work of Nathan A. Sears (2023), who argues that the failure to sufficiently eliminate plausible existential threats throughout the 20th century emerges from a ‘national securitisation’ narrative winning out over a ‘humanity macrosecuritization narrative’. National securitisation privileges extraordinary measures to defend the nation, often centred around military force and logics of deterrence/balance of power and defence. Humanity macrosecuritization suggests the object of security is to defend all of humanity, not just the nation, and often invokes logics of collaboration, mutual restraint and constraints on sovereignty. Sears uses a number of examples to show that when issues are constructed as issues [...]

Outline:
(03:08) Section 1 - What is securitisation
(07:45) Section 2 - Sears 2023: The macrosecuritization of Existential Threats to humanity
(16:30) Section 3 - How does this relate to Aschenbrenner's 'Situational Awareness'?
(19:54) Section 4 - Why Aschenbrenner's narrative is dangerous and the role of expert communities
(29:40) Section 5 - The possibility of a moratorium, military conflict and collaboration
(36:56) Conclusion

First published: July 15th, 2024
Source: https://forum.effectivealtruism.org/posts/H6xEhur9Lbbv9dhBC/against-aschenbrenner-how-situational-awareness-constructs-a
Narrated by TYPE III AUDIO.
Jul 14, 2024 • 33min

“The Precipice Revisited” by Toby_Ord

I'm often asked about how the existential risk landscape has changed in the years since I wrote The Precipice. Earlier this year, I gave a talk on exactly that, and I want to share it here. Here's a video of the talk and a full transcript. In the years since I wrote The Precipice, the question I’m asked most is how the risks have changed. It's now almost four years since the book came out, but the text has to be locked down a long time earlier, so we are really coming up on about five years of changes to the risk landscape. I’m going to dive into four of the biggest risks — climate change, nuclear, pandemics, and AI — to show how they’ve changed. Now a lot has happened over those years, and I don’t want this to just be recapping the news in fast-forward. But [...]

Outline:
(01:30) Climate Change
(01:58) Carbon Emissions
(03:18) Climate Sensitivity
(06:43) Nuclear
(06:46) Heightened Chance of Onset
(08:16) Likely New Arms Race
(09:54) Funding Collapse
(10:53) Pandemics
(10:56) Covid
(16:03) Protective technologies
(18:59) AI in Biotech
(20:32) AI
(20:50) RL agents ⇒ language models
(24:59) Racing
(27:05) Governance
(30:14) Conclusions

First published: July 12th, 2024
Source: https://forum.effectivealtruism.org/posts/iKLLSYHvnhgcpoBxH/the-precipice-revisited
Narrated by TYPE III AUDIO.
Jul 13, 2024 • 28min

“Most smart and skilled people are outside of the EA/rationalist community: an analysis” by titotal

This is a link post. Introduction: The (highly interrelated) effective altruist and Rationalist communities are very small on a global scale. Therefore, in general, most intelligence, skill and expertise is outside of the community, not within it. I don’t think many people will disagree with this statement. But sometimes it's worth reminding people of the obvious, and also it is worth quantifying and visualizing the obvious, to get a proper feel for the scale of the difference. I think some people are acting like they have absorbed this point, and some people definitely are not. In this post, I will try and estimate the size of these communities. I will compare how many smart people are in the community vs outside the community. I will do the same for people in a few professions, and then I will go into controversial mode and try and give some advice that I [...]

First published: July 12th, 2024
Source: https://forum.effectivealtruism.org/posts/Bz4McBt62p63Zkjzb/most-smart-and-skilled-people-are-outside-of-the-ea
Narrated by TYPE III AUDIO.
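To make the scale point above concrete, here is a small back-of-the-envelope sketch in Python. The population figure, community size, and ability threshold are placeholder assumptions chosen for illustration, not numbers from the post.

```python
# Illustrative back-of-the-envelope only: even if everyone in a small community
# cleared a high ability bar, they would be a tiny fraction of the people
# worldwide who clear it. All numbers below are placeholder assumptions.
from statistics import NormalDist

ability = NormalDist(mu=100, sigma=15)     # IQ-style scale, for illustration
threshold = 130                            # hypothetical "very smart" cutoff
fraction_above = 1 - ability.cdf(threshold)

world_adults = 5_000_000_000               # rough placeholder
community_size = 10_000                    # rough placeholder

people_above_worldwide = fraction_above * world_adults

print(f"Fraction above {threshold}: {fraction_above:.2%}")
print(f"People above that bar worldwide: ~{people_above_worldwide:,.0f}")
print(f"Entire community, even if all were above the bar: {community_size:,}")
print(f"Community share of that pool, at most: {community_size / people_above_worldwide:.4%}")
```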
Jul 11, 2024 • 1h 30min

“Reliable Sources: The Story of David Gerard” by TracingWoodgrains

This is a linkpost for https://www.tracingwoodgrains.com/p/reliable-sources-how-wikipedia-admin, posted in full here given its relevance to this community. Gerard has been one of the longest-standing malicious critics of the rationalist and EA communities and has done remarkable amounts of work to shape their public images behind the scenes. Note: I am closer to this story than to many of my others. As always, I write aiming to provide a thorough and honest picture, but this should be read as the view of a close onlooker who has known about much within this story for years and has strong opinions about the matter, not a disinterested observer coming across something foreign and new. If you’re curious about the backstory, I encourage you to read my companion article after this one. Introduction: Reliable Sources. Wikipedia administrator David Gerard cares a great deal about Reliable Sources. For the past half-decade, he has torn [...]

Outline:
(00:55) Introduction: Reliable Sources
(06:00) Gerard's Standards for Reliable Sources
(13:48) Who Is David Gerard?
(16:49) The Early Romantic Years
(27:52) Gerard's fling with LessWrong in the twilight of the old internet
(37:44) The bitter end
(45:19) The Vindictive Ex
(49:53) LessWrong
(01:04:08) Effective Altruism
(01:07:47) Scott Alexander
(01:16:14) Conclusion
(01:21:49) Companion article: A Young Mormon Discovers Online Rationality

The original text contained 24 footnotes which were omitted from this narration.
First published: July 10th, 2024
Source: https://forum.effectivealtruism.org/posts/D8GmTE9jvJg44GTAg/reliable-sources-the-story-of-david-gerard
Narrated by TYPE III AUDIO.
Jul 11, 2024 • 13min

“80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly)” by Raemon

I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but I prefer that discussion to be public. I think 80,000 Hours should remove OpenAI from its job board, and similar EA job placement services should do the same. (I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear.) I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into.) There [...]

Outline:
(04:41) FAQ / Appendix
(04:51) Q: It seems that, like it or not, OpenAI is a place transformative AI research is likely to happen, and having good people work there is important.
(05:02) Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?
(07:06) Q: What about jobs like "security research engineer"?
(07:12) That seems straightforwardly good for OpenAI to have competent people for, and probably doesn't require a good Safety Culture to pay off?
(08:09) Q: What about offering a path towards "good standing" to OpenAI?
(10:44) Q: What if we left up job postings, but with an explicit disclaimer linking to a post saying why people should be skeptical?

First published: July 3rd, 2024
Source: https://forum.effectivealtruism.org/posts/DjCXPkGDisS6oj6Ga/80-000-hours-should-remove-openai-from-the-job-board-and
Narrated by TYPE III AUDIO.
Jul 11, 2024 • 8min

“We’ve renamed the Giving What We Can Pledge” by Alana HF, Giving What We Can

This is a link post. The Giving What We Can Pledge is now the 🔸10% Pledge! We cover the why (along with our near-term plans and how you can help!) below.

TL;DR: The name change will help us grow awareness of the pledge by reducing brand confusion and facilitating partnerships. We see it as an important part of reaching our goal of 10,000 pledgers by the end of 2024. You can help by adding the orange diamond emoji 🔸 to your social profiles if you’ve taken the 10% Pledge (or a small blue diamond emoji 🔹 if you’ve taken the Trial Pledge), as described below.

Full post: For the better part of a year, Giving What We Can has been thinking more deliberately about how our brand choices could accelerate or hinder progress towards our mission of making giving effectively and significantly a cultural [...]

Outline:
(02:24) What will this help us achieve?
(03:34) How can you help?
(04:53) More about our new partnerships
(06:12) What's staying the same?
(06:53) Questions?
(07:07) A big thanks

The original text contained 1 footnote which was omitted from this narration.
First published: July 1st, 2024
Source: https://forum.effectivealtruism.org/posts/uZzXRyAwkDHLfu94W/we-ve-renamed-the-giving-what-we-can-pledge
Narrated by TYPE III AUDIO.
Jul 9, 2024 • 1min

“AMA: Beast Philanthropy’s Darren Margolias” by Beast Philanthropy, GiveDirectly

From Darren Margolias: I'm the Executive Director of Beast Philanthropy, the charity founded by the world's most popular YouTuber, MrBeast. We recently collaborated with GiveDirectly on the video below. You can read background on the project from our LinkedIn here and here (plus GiveDirectly's blog). On Thursday, July 18th, I'll be recording a video AMA with CEA's Emma Richter. Her questions will come from you, and we'll post the video and transcript here afterwards. Please post your questions as comments to this post and upvote the questions you’d like me to answer most. Emma and I will do our best to get to as many as we can. Feel free to ask anything you'd like to know about Beast Philanthropy's process, projects, and goals!

First published: July 9th, 2024
Source: https://forum.effectivealtruism.org/posts/7QfKaF2bnCbuREJNx/ama-beast-philanthropy-s-darren-margolias
Narrated by TYPE III AUDIO.
