
EA Forum Podcast (All audio)

Latest episodes

May 1, 2025 • 3min

“Community Polls for the Community” by Will Aldred

The Meta Coordination Forum (MCF) is a place where EA leaders are polled on matters of EA community strategy. I thought it could be fun (and interesting) to run these same polls on EAs at large.[1] Note: I link to the corresponding MCF results throughout this post, but I recommend readers don’t look at those until after voting themselves, to avoid anchoring. [The post's interactive polls are omitted from this narration; each links to the corresponding MCF results, and some also link to the AIS field-building survey results.] I’m sneaking in this meta-level poll to finish. For previous discussion, see this thread. The idea, in my mind, is that these independent EAs would be invited for [...]

The original text contained 1 footnote which was omitted from this narration.

---
First published: May 1st, 2025
Source: https://forum.effectivealtruism.org/posts/EYcFujQqWhzoSadh9/community-polls-for-the-community
---
Narrated by TYPE III AUDIO.
May 1, 2025 • 5min

“Arkose is closing, but you can help” by Arkose

Arkose is an AI safety fieldbuilding organisation that supports experienced machine learning professionals — such as professors and research engineers — to engage with the field. We focus on those new to AI safety, and have strong evidence that our work helps them take meaningful first steps. Since December 2023, we’ve held nearly 300 one-on-one calls with senior machine learning researchers and engineers. In follow-up surveys, 79% reported that the call accelerated their involvement in AI safety[1]. Nonetheless, we’re at serious risk of shutting down in the coming weeks due to a lack of funding. Several funders have told us that we’re close to meeting their bar, but not quite there, leaving us in a precarious position. Without immediate support, we won’t be able to continue this work. If you're interested in supporting Arkose, or would like to learn more, please reach out here or email victoria@arkose.org. What [...]

---
Outline:
(01:07) What evidence is there that Arkose is impactful?
(02:50) What would the funding allow you to achieve?
(03:13) How can I help?

The original text contained 1 footnote which was omitted from this narration.

---
First published: May 1st, 2025
Source: https://forum.effectivealtruism.org/posts/wZc2jyJyNe6DLrr2Y/arkose-is-closing-but-you-can-help
---
Narrated by TYPE III AUDIO.
May 1, 2025 • 37min

“Should we expect the future to be good?” by Neil Crawford

Audio note: this article contains 54 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

1. Introduction

Should we expect the future to be good? This is an important question for many reasons. One such reason is that the answer to this question has implications for what our intermediate goals should be. If we should expect the future to be good, then it would be relatively more important for us to focus on ensuring that we survive long into the future, e.g. by working on mitigating extinction risks. If we should not expect the future to be good, then it would be relatively more important for us to focus on mitigating risks of astronomical suffering. In this paper, I critique Paul Christiano's (2013) argument that the future will be good. In Section 2 [...]

---
Outline:
(00:21) 1. Introduction
(02:34) 2. Christiano's argument
(02:51) 2.1 First premise
(03:46) 2.2 Second premise
(04:38) 2.3 Third premise and conclusion
(05:25) 2.4 Simplifying assumptions
(07:08) 3. Model
(08:46) 4. Assuming that longtermists are all aligned
(11:50) 5. Fanatical values
(17:52) 6. Different levels of influence
(19:58) 7. Aligned longtermists versus shorttermists
(23:56) 8. Confidence in aligned longtermists
(24:59) 9. Conclusion
(26:34) Acknowledgements
(27:07) Appendix A
(27:10) Theorem
(27:47) Proof
(29:12) Appendix B

The original text contained 17 footnotes which were omitted from this narration.

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/j2HaibJEqjs38Rv2n/should-we-expect-the-future-to-be-good
---
Narrated by TYPE III AUDIO.
May 1, 2025 • 6min

“Debate: should EA avoid using AI art outside of research?” by titotal

There is a growing movement to ban or discourage the use of AI art, citing ethical concerns over unethical data scraping, environmental cost, and harm to the incomes of real artists. This sentiment seems most prevalent in left-leaning online spaces like Reddit and Bluesky. Some are even starting to associate AI art with the far-right, with one popular article declaring it to be “the new aesthetics of fascism”. As an example of how far this movement is spreading, the subreddit for the poker roguelike video game Balatro had a kerfuffle a few months ago, when a volunteer moderator for the subreddit stated that AI art was allowed. A person on Bluesky screenshotted the post and declared that if they had known the Balatro creator was okay with AI art, they wouldn’t have bought or owned the game. In response, the creator of the game stated that “Neither Playstack [...]

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/ABzpzKFQhNxdEzayb/debate-should-ea-avoid-using-ai-art-outside-of-research
---
Narrated by TYPE III AUDIO.
May 1, 2025 • 2min

“Prioritizing Work” by Jeff Kaufman 🔸

I recently read a blog post that concluded with:

When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved.

Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or AI-supported totalitarianism; improving humanity's treatment of animals; fighting global poverty) or the undervalued, less direct approach of earning money and donating it to enable others to work on pressing problems. Definitely spend time with your friends, family, and those you love. Don't work to the exclusion of everything else [...]

---
First published: May 1st, 2025
Source: https://forum.effectivealtruism.org/posts/cF6eumerCq8hnb9YT/prioritizing-work
---
Narrated by TYPE III AUDIO.
Apr 30, 2025 • 33min

“New Funding Round on Hardware-Enabled Mechanisms (HEMs)” by aog, Longview Philanthropy

Longview Philanthropy is launching a new request for proposals on hardware-enabled mechanisms (HEMs). We think HEMs are a promising method to enforce export controls, secure model weights, and verify compliance with international agreements on AI. We’d like to fund the development of designs, prototypes, and secure enclosures for various kinds of HEMs, as well as complementary red-teaming and field-building efforts. We plan to distribute $2M - $10M through this RFP. If you’d like to apply, please submit an expression of interest. Below is the text of the request for proposals.

Table of Contents:
Motivations
Areas of Interest
Designs and prototypes of HEMs
Location Verification
Offline Licensing
Bandwidth Limiters for Accessing Model Weights
Analog Sensors for Workload Verification
Other HEMs
Tamper Resistance Measures
Threat model
Funding priorities
Adversarial Testing
Funding priorities
How should adversarial testing be structured?
Focused Research Organization (FRO) for FlexHEGs [...]

---
Outline:
(00:50) Motivations
(07:05) Areas of Interest
(07:43) Designs and prototypes of HEMs
(08:23) Location Verification
(11:16) Offline Licensing
(12:41) Bandwidth Limiters for Accessing Model Weights
(14:03) Analog Sensors for Workload Verification
(15:13) Other HEMs
(17:37) Tamper Resistance Measures
(17:56) Threat model
(19:28) Funding priorities
(21:02) Adversarial Testing
(21:34) Funding priorities
(22:49) How should adversarial testing be structured?
(24:00) Focused Research Organization (FRO) for FlexHEGs
(26:01) Field-Building for HEMs
(26:43) Evaluation Criteria
(30:53) Application Process

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/aiStCBjfutWqwsxx4/new-funding-round-on-hardware-enabled-mechanisms-hems
---
Narrated by TYPE III AUDIO.
Apr 30, 2025 • 5min

“EMERGENCY CALL FOR SUPPORT: Mitigating Global Catastrophic Risks (GCRs)” by JorgeTorresC, JuanGarcia, Mónica Ulloa, Michelle Bruno Hz, Jaime Sevilla, Roberto Tinoco, Guillem Bas

Observatorio de Riesgos Catastróficos Globales (ORCG) is at a critical juncture. We have secured funds for AI governance projects, but we are at risk of discontinuing all projects in other GCR areas, which would have a detrimental impact on GCR scenario preparedness. To decide whether to continue the lines of work below, we want to know whether there is enough demand from funders, which means raising at least $55,000 within the next month. For this reason, we would like to urge funders in EA to consider this proposal. If you are not interested in supporting these projects, it would be invaluable if you could direct us to other potential funders who you think might be interested.

Why Support ORCG?

ORCG has a proven track record of translating research into actionable policy and building resilience. Our approach is evidence-based, collaborative, and focused on practical solutions. Supporting ORCG means investing [...]

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/4tqfnJYmT53BdkjXB/emergency-call-for-support-mitigating-global-catastrophic
---
Narrated by TYPE III AUDIO.
Apr 30, 2025 • 14min

“EA Funds and CEA are merging” by calebp, Zachary Robinson🔸, Oscar Howie

Caleb is Project Lead of EA Funds. Zach is CEO of the Centre for Effective Altruism, and Oscar is CEA's Chief of Staff.

EA Funds and CEA are currently separate projects within Effective Ventures. EV is winding down, and so EA Funds and CEA will spin out. We have decided to spin out as one organization rather than two, and that organization will be called CEA. Our target date for spinning out and merging is 1 July 2025.

Why we’re merging

We believe this is the best way to achieve our common goal of contributing to a radically better world. While the merger process is not straightforward, making and implementing this decision has been made easier by both having impact as our north star and operating according to shared EA principles. EA Funds is a natural fit for CEA's stewardship strategy and our focus on building sustainable momentum for [...]

---
Outline:
(00:47) Why we're merging
(03:20) What EA Funds will look like post-merger
(07:28) Tradeoffs we're making by merging
(09:55) What the merger mechanics mean for you if you're a donor or grantee
(10:01) For donors
(10:18) For grantees

The original text contained 6 footnotes which were omitted from this narration.

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/BLvTcMMEQGFYsh6Jw/ea-funds-and-cea-are-merging
---
Narrated by TYPE III AUDIO.
Apr 30, 2025 • 2min

“New EA-adjacent Philosophy Lab” by Walter Veit

Hi everyone,

I am a lecturer in philosophy at the University of Reading and currently trying to set up a lab focused on animal and AI sentience and welfare. Since many EAs are doing research in this area, I thought it might be useful to make an announcement here.

Lab description: It is intended as an interdisciplinary group of philosophers and scientists using conceptual and computational methods to solve pressing philosophical, ethical, and scientific challenges at the intersection of the biological, social, and cognitive sciences, such as:
Do insects feel pain?
How can AI be used to improve animal welfare?
How should economists and policy-makers include animal welfare in social welfare functions?
Can we create artificial consciousness, and how could we ensure AI welfare?
How should new technologies be regulated?

The lab is led by Dr. Walter Veit at the University of Reading. If you are interested [...]

---
First published: April 30th, 2025
Source: https://forum.effectivealtruism.org/posts/owKfpJZa9NabLHJsr/new-ea-adjacent-philosophy-lab
---
Narrated by TYPE III AUDIO.
Apr 30, 2025 • 20min

“My Research Process: Key Mindsets - Truth-Seeking, Prioritisation, Moving Fast” by Neel Nanda

This is post 2 of a sequence on my framework for doing and thinking about research. Start here.

Before I get into what exactly to do at each stage of the research process, it's worth reflecting on the key mindsets that are crucial throughout the process, and how they should manifest at each stage. I think the most important mindsets are:

Truth-seeking: By default, many research insights will be false - finding truth is hard. It's not enough to just know this; you must put in active effort to be skeptical and resist bias, lest you risk your research being worthless.

Prioritisation: You have finite time and a lot of possible actions. Your project will live or die according to whether you pick good ones.

Moving fast: You have finite time and a lot to do. This doesn’t just mean “push yourself to go faster” - there's a [...]

---
Outline:
(01:44) Truth Seeking
(06:59) Prioritisation
(12:24) Moving Fast
(18:25) Taking action under uncertainty

---
First published: April 27th, 2025
Source: https://forum.effectivealtruism.org/posts/igS3T3QG6i7iLdptn/my-research-process-key-mindsets-truth-seeking
---
Narrated by TYPE III AUDIO.
