

EA Forum Podcast (Curated & popular)
EA Forum Team
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.
Episodes
Mentioned books

Dec 13, 2024 • 3min
“Technical Report on Mirror Bacteria: Feasibility and Risks” by Aaron Gertler 🔸
This is a link post. Science just released an article, with an accompanying technical report, about a neglected source of biological risk. From the abstract of the technical report: This report describes the technical feasibility of creating mirror bacteria and the potentially serious and wide-ranging risks that they could pose to humans, other animals, plants, and the environment... In a mirror bacterium, all of the chiral molecules of existing bacteria—proteins, nucleic acids, and metabolites—are replaced by their mirror images. Mirror bacteria could not evolve from existing life, but their creation will become increasingly feasible as science advances. Interactions between organisms often depend on chirality, and so interactions between natural organisms and mirror bacteria would be profoundly different from those between natural organisms. Most importantly, immune defenses and predation typically rely on interactions between chiral molecules that could often fail to detect or kill mirror bacteria due to their reversed [...] ---
First published:
December 12th, 2024
Source:
https://forum.effectivealtruism.org/posts/9pkjXwe2nFun32hR2/technical-report-on-mirror-bacteria-feasibility-and-risks
---
Narrated by TYPE III AUDIO.

Dec 12, 2024 • 2min
“EA Forum audio: help us choose the new voice” by peterhartree, TYPE III AUDIO
We’re thinking about changing our narrator's voice. There are three new voices on the shortlist. They’re all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do.
We think they all sound similarly agreeable. But thousands of listening hours are at stake, so we thought it’d be worth giving listeners an opportunity to vote—just in case there’s a strong collective preference.
Listen and vote
Please listen here: https://files.type3.audio/ea-forum-poll/
And vote here: https://forms.gle/m7Ffk3EGorUn4XU46
It’ll take 1-10 minutes, depending on how much of the sample you decide to listen to. We'll collect votes until Monday December 16th. Thanks!
---
Outline:
(00:47) Listen and vote
(01:11) Other feedback?
The original text contained 1 footnote which was omitted from this narration.
---
First published:
December 10th, 2024
Source:
https://forum.effectivealtruism.org/posts/Bhd5GMyyGbusB22Hp/ea-forum-audio-help-us-choose-the-new-voice
---
Narrated by TYPE III AUDIO.

Dec 11, 2024 • 0sec
Podcast and transcript: Allan Saldanha on earning-to-give
Allan and I recorded this podcast on Tuesday 10th December, based on the questions in this AMA. I used Claude to edit the transcript, but I've read it over for accuracy.

Dec 7, 2024 • 1h 52min
“Where I Am Donating in 2024” by MichaelDickens
Summary
It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year, and I'm changing my mind on some things.
I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions.
Within x-risk:
AI is the most important source of risk.
There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising.
Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.
In the rest of this post, I will explain:
Why I prioritize x-risk over animal-focused [...]
---
Outline:
(00:04) Summary
(01:30) I don't like donating to x-risk
(03:56) Cause prioritization
(04:00) S-risk research and animal-focused longtermism
(05:52) X-risk vs. global priorities research
(07:01) Prioritization within x-risk
(08:08) AI safety technical research vs. policy
(11:36) Quantitative model on research vs. policy
(14:20) Man versus man conflicts within AI policy
(15:13) Parallel safety/capabilities vs. slowing AI
(22:56) Freedom vs. regulation
(24:24) Slow nuanced regulation vs. fast coarse regulation
(27:02) Working with vs. against AI companies
(32:49) Political diplomacy vs. advocacy
(33:38) Conflicts that aren't man vs. man but nonetheless require an answer
(33:55) Pause vs. Responsible Scaling Policy (RSP)
(35:28) Policy research vs. policy advocacy
(36:42) Advocacy directed at policy-makers vs. the general public
(37:32) Organizations
(39:36) Important disclaimers
(40:56) AI Policy Institute
(42:03) AI Safety and Governance Fund
(43:29) AI Standards Lab
(43:59) Campaign for AI Safety
(44:30) Centre for Enabling EA Learning and Research (CEEALAR)
(45:13) Center for AI Policy
(47:27) Center for AI Safety
(49:06) Center for Human-Compatible AI
(49:32) Center for Long-Term Resilience
(55:52) Center for Security and Emerging Technology (CSET)
(57:33) Centre for Long-Term Policy
(58:12) Centre for the Governance of AI
(59:07) CivAI
(01:00:05) Control AI
(01:02:08) Existential Risk Observatory
(01:03:33) Future of Life Institute (FLI)
(01:03:50) Future Society
(01:06:27) Horizon Institute for Public Service
(01:09:36) Institute for AI Policy and Strategy
(01:11:00) Lightcone Infrastructure
(01:12:30) Machine Intelligence Research Institute (MIRI)
(01:15:22) Manifund
(01:16:28) Model Evaluation and Threat Research (METR)
(01:17:45) Palisade Research
(01:19:10) PauseAI Global
(01:21:59) PauseAI US
(01:23:09) Sentinel rapid emergency response team
(01:24:52) Simon Institute for Longterm Governance
(01:25:44) Stop AI
(01:27:42) Where I'm donating
(01:28:57) Prioritization within my top five
(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)
The original text contained 58 footnotes which were omitted from this narration.
---
First published:
November 19th, 2024
Source:
https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Dec 5, 2024 • 3min
“I’m grateful for you” by Sarah Cheng 🔸
I recently wrote up some EA Forum-related strategy docs for a CEA team retreat, which meant I spent a bunch of time reflecting on the Forum and why I think it's worth my time to work on it. Since it's Thanksgiving here in the US, I wanted to share some of the gratitude that I felt. 🙂 I strongly believe in the principles of EA. I’ve been doing effective giving for about a decade now. But before joining CEA in 2021, I had barely used the Forum, and I had no other people in my life who identified with EA in the slightest. Most of the people that I know, have worked with, or have interacted with are not EA. When I bring up EA to people in my personal life, they are usually not that interested, or are quite cynical about the idea, or they just want to [...] ---
First published:
November 28th, 2024
Source:
https://forum.effectivealtruism.org/posts/f2c2to4KpW59GRoyj/i-m-grateful-for-you
---
Narrated by TYPE III AUDIO.

Dec 5, 2024 • 10min
“Still donating half” by Julia_Wise🔸
Crossposted from Otherwise. My husband and I were donating about 50% of our income until two years ago, when he took a significant pay cut to work at a nonprofit. We planned to cut our donation percentage at that time, but then FTX collapsed. In the time since, we’ve decided to keep donating half, although the absolute amount is a lot smaller. In a sense this is nothing special, because it was remarkably good luck that we were ever able to afford to donate at this rate at all. But I’ll spell out our process over time, in case it helps others realize they can also afford to donate more than they thought. How we got here Getting interested in donation In my teens and early twenties, I thought it was really unfair that my family had plenty of stuff while other people (especially in low-income countries) [...]
---
Outline:
(00:41) How we got here
(00:45) Getting interested in donation
(01:09) Early years with Jeff
(02:18) When we earned less
(03:17) Earning to give
(04:15) Both at nonprofits
(04:55) EA funding declines
(05:33) Currently
(05:51) Avoiding spending creep
(07:19) Becoming older and more boring
(08:44) Habits and commitment mechanisms
---
First published:
December 4th, 2024
Source:
https://forum.effectivealtruism.org/posts/mEQTxDGp4MxMSZA74/still-donating-half
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Dec 4, 2024 • 4min
“Factory farming as a pressing world problem” by 80000_Hours, Benjamin Hilton
This is a link post. 80,000 Hours recently updated our problem profile on factory farming, and we now rank it among the most pressing problems in the world. We're sharing the summary of the article here, and there's much more detail at the link. The author, Benjamin Hilton, published the article with us before moving on to a new role outside of 80k back in July, so he may have limited ability to engage with comments. But we welcome feedback and may incorporate it into future updates. Summary History is littered with moral mistakes — things that once were common, but we now consider clearly morally wrong, for example: human sacrifice, gladiatorial combat, public executions, witch hunts, and slavery. In my opinion, there's one clear candidate for the biggest moral mistake that humanity is currently making: factory farming. The rough argument is: There are trillions of farmed animals, making [...] The original text contained 1 footnote which was omitted from this narration. ---
First published:
October 29th, 2024
Source:
https://forum.effectivealtruism.org/posts/goTRwb49riDvXGdy8/factory-farming-as-a-pressing-world-problem
---
Narrated by TYPE III AUDIO.

Nov 29, 2024 • 4min
“Bequest: An EA-ish TV show that didn’t make it” by Keiran Harris 🔸
Hey everyone, I’m the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs on his writing, and experiments, and EA. And I said that my guess was that the best approach to making a high-impact TV show was something like: You make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1950-70s, it's an Open Phil-like org. So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (That's not especially close to the story, and not claiming it's [...] ---
First published:
November 21st, 2024
Source:
https://forum.effectivealtruism.org/posts/HjKpghhowBRLat4Hq/bequest-an-ea-ish-tv-show-that-didn-t-make-it
---
Narrated by TYPE III AUDIO.

Nov 28, 2024 • 13min
“GWWC’s 2024 evaluations of evaluators” by Giving What We Can, Aidan Whitfield🔸, Sjir Hoeijmakers🔸
Introduction The Giving What We Can research team is excited to share the results of our 2024 round of evaluations of charity evaluators and grantmakers! In this round, we completed three evaluations that will inform our donation recommendations for the 2024 giving season. As with our 2023 round, there are substantial limitations to these evaluations, but we nevertheless think that they are a significant improvement to a landscape in which there were no independent evaluations of evaluators’ work. In this post, we share the key takeaways from each of our 2024 evaluations and link to the full reports. We also include an update explaining our decision to remove The Humane League from our list of recommended programs. Our website has now been updated to reflect the new fund and charity recommendations that came out of these evaluations. Please also see our website for more context on [...]
---
Outline:
(00:14) Introduction
(01:16) Key takeaways from each of our 2024 evaluations
(01:39) Global health and wellbeing
(01:44) Founders Pledge Global Health and Development Fund (FP GHDF)
(04:07) Animal welfare
(04:11) Animal Charity Evaluators' Movement Grants (ACE MG)
(06:08) Animal Charity Evaluators' Charity Evaluation Program
(08:33) Additional recommendation updates
(08:37) The Humane League's corporate campaigns program
(11:26) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
November 27th, 2024
Source:
https://forum.effectivealtruism.org/posts/NhpAHDQq6iWhk7SEs/gwwc-s-2024-evaluations-of-evaluators-1
---
Narrated by TYPE III AUDIO.

Nov 27, 2024 • 10min
“Research report: ‘Meaningfully reducing consumption of meat and animal products is an unsolved problem: A meta-analysis’” by Seth Ariel Green, Benny Smith, MMathur
The podcast dives into a meta-analysis exploring effective ways to reduce meat and animal product consumption. It reveals that no well-validated methods currently exist to achieve significant reductions, though cutting back on red and processed meat shows promise. However, this may inadvertently boost chicken and fish consumption, raising concerns about animal welfare and environmental impacts. Future research is essential for developing better strategies. The discussion also highlights the evolution of research methods and the complexities consumers face.


