
EA Forum Podcast (Curated & popular)

Latest episodes

Dec 17, 2024 • 10min

“My experience with the Community Health team at CEA” by frances_lorenz

Summary

This post shares my personal experience with CEA's Community Health team, focusing on how they helped me navigate a difficult situation in 2021. I aim to provide others with a concrete example of when and how to reach out to Community Health, supplementing the information on their website with a first-hand account. I also share why their work has helped me remain engaged with the EA community. Further, I try to highlight why a centralised Community Health team is crucial for identifying patterns of concerning behaviour.

Introduction

The Community Health team at the Centre for Effective Altruism has been an important source of support throughout my EA journey. As stated on their website, they “aim to strengthen the effective altruism community's ability to fulfil its potential for impact, and to address problems that could prevent that.” I don’t know the details of their day-to-day, but I understand that [...]

Outline:
(00:05) Summary
(00:41) Introduction
(01:32) My goals with this post are:
(02:05) My experience in 2021
(05:17) Three personal takeaways
(07:22) What is the team like now?

First published: December 16th, 2024
Source: https://forum.effectivealtruism.org/posts/aTmzt4TbTx7hiSAN8/my-experience-with-the-community-health-team-at-cea
Narrated by TYPE III AUDIO.
Dec 16, 2024 • 4min

“Gwern on creating your own AI race and China’s Fast Follower strategy.” by Larks

This is a link post.

Gwern recently wrote a very interesting thread about Chinese AI strategy and the downsides of US AI racing. It's both quite short and hard to excerpt, so here is almost the entire thing:

Hsu is a long-time China hawk and has been talking up the scientific & technological capabilities of the CCP for a long time, saying they were going to surpass the West any moment now, so I found this interesting when Hsu explains that:

- the scientific culture of China is 'mafia' like (Hsu's term, not mine) and focused on legible easily-cited incremental research, and is against making any daring research leaps or controversial breakthroughs... but is capable of extremely high quality world-class followup and large scientific investments given a clear objective target and government marching orders
- there is no interest or investment in an AI arms race, in part [...]

First published: November 25th, 2024
Source: https://forum.effectivealtruism.org/posts/Kz8WpQkCckN9JNHCN/gwern-on-creating-your-own-ai-race-and-china-s-fast-follower
Narrated by TYPE III AUDIO.
Dec 13, 2024 • 3min

“Technical Report on Mirror Bacteria: Feasibility and Risks” by Aaron Gertler 🔸

This is a link post.

Science just released an article, with an accompanying technical report, about a neglected source of biological risk. From the abstract of the technical report:

This report describes the technical feasibility of creating mirror bacteria and the potentially serious and wide-ranging risks that they could pose to humans, other animals, plants, and the environment... In a mirror bacterium, all of the chiral molecules of existing bacteria—proteins, nucleic acids, and metabolites—are replaced by their mirror images. Mirror bacteria could not evolve from existing life, but their creation will become increasingly feasible as science advances. Interactions between organisms often depend on chirality, and so interactions between natural organisms and mirror bacteria would be profoundly different from those between natural organisms. Most importantly, immune defenses and predation typically rely on interactions between chiral molecules that could often fail to detect or kill mirror bacteria due to their reversed [...]

First published: December 12th, 2024
Source: https://forum.effectivealtruism.org/posts/9pkjXwe2nFun32hR2/technical-report-on-mirror-bacteria-feasibility-and-risks
Narrated by TYPE III AUDIO.
Dec 12, 2024 • 2min

“EA Forum audio: help us choose the new voice” by peterhartree, TYPE III AUDIO

We’re thinking about changing our narrator's voice. There are three new voices on the shortlist. They’re all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do. We think they all sound similarly agreeable. But, thousands of listening hours are at stake, so we thought it’d be worth giving listeners an opportunity to vote—just in case there’s a strong collective preference.

Listen and vote

Please listen here: https://files.type3.audio/ea-forum-poll/
And vote here: https://forms.gle/m7Ffk3EGorUn4XU46

It’ll take 1-10 minutes, depending on how much of the sample you decide to listen to. We'll collect votes until Monday December 16th. Thanks!

Outline:
(00:47) Listen and vote
(01:11) Other feedback?

The original text contained 1 footnote which was omitted from this narration.

First published: December 10th, 2024
Source: https://forum.effectivealtruism.org/posts/Bhd5GMyyGbusB22Hp/ea-forum-audio-help-us-choose-the-new-voice
Narrated by TYPE III AUDIO.
Dec 11, 2024 • 0sec

Podcast and transcript: Allan Saldanha on earning-to-give

Allan and I recorded this podcast on Tuesday 10th December, based on the questions in this AMA. I used Claude to edit the transcript, but I've read over it for accuracy.
Dec 7, 2024 • 1h 52min

“Where I Am Donating in 2024” by MichaelDickens

Summary

It's been a while since I last put serious thought into where to donate. Well, I'm putting thought into it this year, and I'm changing my mind on some things. I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I've managed to reason myself out of those emotions.

Within x-risk: AI is the most important source of risk. There is a disturbingly high probability that alignment research won't solve alignment by the time superintelligent AI arrives. Policy work seems more promising. Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.

In the rest of this post, I will explain: Why I prioritize x-risk over animal-focused [...]

Outline:
(00:04) Summary
(01:30) I don't like donating to x-risk
(03:56) Cause prioritization
(04:00) S-risk research and animal-focused longtermism
(05:52) X-risk vs. global priorities research
(07:01) Prioritization within x-risk
(08:08) AI safety technical research vs. policy
(11:36) Quantitative model on research vs. policy
(14:20) Man versus man conflicts within AI policy
(15:13) Parallel safety/capabilities vs. slowing AI
(22:56) Freedom vs. regulation
(24:24) Slow nuanced regulation vs. fast coarse regulation
(27:02) Working with vs. against AI companies
(32:49) Political diplomacy vs. advocacy
(33:38) Conflicts that aren't man vs. man but nonetheless require an answer
(33:55) Pause vs. Responsible Scaling Policy (RSP)
(35:28) Policy research vs. policy advocacy
(36:42) Advocacy directed at policy-makers vs. the general public
(37:32) Organizations
(39:36) Important disclaimers
(40:56) AI Policy Institute
(42:03) AI Safety and Governance Fund
(43:29) AI Standards Lab
(43:59) Campaign for AI Safety
(44:30) Centre for Enabling EA Learning and Research (CEEALAR)
(45:13) Center for AI Policy
(47:27) Center for AI Safety
(49:06) Center for Human-Compatible AI
(49:32) Center for Long-Term Resilience
(55:52) Center for Security and Emerging Technology (CSET)
(57:33) Centre for Long-Term Policy
(58:12) Centre for the Governance of AI
(59:07) CivAI
(01:00:05) Control AI
(01:02:08) Existential Risk Observatory
(01:03:33) Future of Life Institute (FLI)
(01:03:50) Future Society
(01:06:27) Horizon Institute for Public Service
(01:09:36) Institute for AI Policy and Strategy
(01:11:00) Lightcone Infrastructure
(01:12:30) Machine Intelligence Research Institute (MIRI)
(01:15:22) Manifund
(01:16:28) Model Evaluation and Threat Research (METR)
(01:17:45) Palisade Research
(01:19:10) PauseAI Global
(01:21:59) PauseAI US
(01:23:09) Sentinel rapid emergency response team
(01:24:52) Simon Institute for Longterm Governance
(01:25:44) Stop AI
(01:27:42) Where I'm donating
(01:28:57) Prioritization within my top five
(01:32:17) Where I'm donating (this is the section in which I actually say where I'm donating)

The original text contained 58 footnotes which were omitted from this narration. The original text contained 1 image which was described by AI.

First published: November 19th, 2024
Source: https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024
Narrated by TYPE III AUDIO.
Dec 5, 2024 • 3min

“I’m grateful for you” by Sarah Cheng

I recently wrote up some EA Forum-related strategy docs for a CEA team retreat, which meant I spent a bunch of time reflecting on the Forum and why I think it's worth my time to work on it. Since it's Thanksgiving here in the US, I wanted to share some of the gratitude that I felt. 🙂

I strongly believe in the principles of EA. I’ve been doing effective giving for about a decade now. But before joining CEA in 2021, I had barely used the Forum, and I had no other people in my life who identified with EA in the slightest. Most of the people that I know, have worked with, or have interacted with are not EA. When I bring up EA to people in my personal life, they are usually not that interested, or are quite cynical about the idea, or they just want [...]

First published: November 28th, 2024
Source: https://forum.effectivealtruism.org/posts/f2c2to4KpW59GRoyj/i-m-grateful-for-you
Narrated by TYPE III AUDIO.
Dec 5, 2024 • 10min

“Still donating half” by Julia_Wise🔸

Crossposted from Otherwise

My husband and I were donating about 50% of our income until two years ago, when he took a significant pay cut to work at a nonprofit. We planned to cut our donation percentage at that time, but then FTX collapsed. In the time since, we’ve decided to keep donating half, although the absolute amount is a lot smaller.

In a sense this is nothing special, because it was remarkably good luck that we were ever able to afford to donate at this rate at all. But I’ll spell out our process over time, in case it helps others realize they can also afford to donate more than they thought.

How we got here

Getting interested in donation

In my teens and early twenties, I thought it was really unfair that my family had plenty of stuff while other people (especially in low-income countries) [...]

Outline:
(00:41) How we got here
(00:45) Getting interested in donation
(01:09) Early years with Jeff
(02:18) When we earned less
(03:17) Earning to give
(04:15) Both at nonprofits
(04:55) EA funding declines
(05:33) Currently
(05:51) Avoiding spending creep
(07:19) Becoming older and more boring
(08:44) Habits and commitment mechanisms

The original text contained 2 images which were described by AI.

First published: December 4th, 2024
Source: https://forum.effectivealtruism.org/posts/mEQTxDGp4MxMSZA74/still-donating-half
Narrated by TYPE III AUDIO.
Dec 4, 2024 • 4min

“Factory farming as a pressing world problem” by 80000_Hours, Benjamin Hilton

This is a link post.

80,000 Hours recently updated our problem profile on factory farming, and we now rank it among the most pressing problems in the world. We're sharing the summary of the article here, and there's much more detail at the link. The author, Benjamin Hilton, published the article with us before moving on to a new role outside of 80k back in July, so he may have limited ability to engage with comments. But we welcome feedback and may incorporate it into future updates.

Summary

History is littered with moral mistakes — things that once were common, but we now consider clearly morally wrong, for example: human sacrifice, gladiatorial combat, public executions, witch hunts, and slavery. In my opinion, there's one clear candidate for the biggest moral mistake that humanity is currently making: factory farming. The rough argument is: There are trillions of farmed animals, making [...]

The original text contained 1 footnote which was omitted from this narration.

First published: October 29th, 2024
Source: https://forum.effectivealtruism.org/posts/goTRwb49riDvXGdy8/factory-farming-as-a-pressing-world-problem
Narrated by TYPE III AUDIO.
Nov 29, 2024 • 4min

“Bequest: An EA-ish TV show that didn’t make it” by Keiran Harris 🔸

Hey everyone, I’m the producer of The 80,000 Hours Podcast, and a few years ago I interviewed AJ Jacobs on his writing, his experiments, and EA. And I said that my guess was that the best approach to making a high-impact TV show was something like: you make Mad Men — same level of writing, directing, and acting — but instead of Madison Avenue in the 1950s-70s, it's an Open Phil-like org.

So during COVID I wrote a pilot and series outline for a show called Bequest, and I ended up with something like that (in that the characters start an Open Phil-like org by the middle of the season, in a world where EA doesn't exist yet), combined with something like: Breaking Bad, but instead of raising money for his family, Walter White is earning to give. (That's not especially close to the story, and I'm not claiming it's [...]

First published: November 21st, 2024
Source: https://forum.effectivealtruism.org/posts/HjKpghhowBRLat4Hq/bequest-an-ea-ish-tv-show-that-didn-t-make-it
Narrated by TYPE III AUDIO.