

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Sep 26, 2025 • 16min
“CFAR update, and New CFAR workshops” by AnnaSalamon
The discussion unveils updates on CFAR's rebranding as 'A Center for Applied Rationality.' Two pilot workshops are on the horizon, promising immersive experiences with a focus on both classic and innovative content. Expect hands-on learning, vibrant conversations, and a blend of quick skills and deep integration. There's even a sliding scale for workshop fees, with options for financial aid. Plus, Anna highlights who would benefit the most, while also addressing who might want to steer clear.

Sep 26, 2025 • 19min
“Why you should eat meat - even if you hate factory farming” by KatWoods
A passionate advocate challenges the vegan lifestyle, arguing that it can be unhealthy. Strategies for reducing animal suffering in meat consumption are shared, including choosing sustainable sources like mussels and wild fish. KatWoods highlights studies suggesting vegan diets may increase risks of depression and cognitive decline. She emphasizes the importance of personal health for effectively helping others, advocating for welfare-optimized eating practices. This thought-provoking discussion questions common assumptions about diet and ethics.

Sep 23, 2025 • 3min
[Linkpost] “Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures” by Charbel-Raphaël
A historic coalition of over 200 signatories, including Nobel laureates and former heads of state, has launched a Global Call for AI Red Lines at the UN. This initiative seeks to establish enforceable international standards for AI by 2026. Notable figures from the tech and political world, such as AI pioneers and human rights advocates, emphasize the urgency of regulating AI development to ensure safety and ethical use. The call marks a significant collective effort to address the global challenges posed by artificial intelligence.

Sep 23, 2025 • 4min
“This is a review of the reviews” by Recurrented
Dive into a fascinating exploration of risk, where personal tales of motorcycle riding and ocean sailing reveal the often overlooked dangers of everyday choices. Hear insights on AI risks that could lead to global catastrophe, sparking a discussion on the importance of transparent reviews in high-stakes scenarios. The episode draws parallels from historical diplomacy, emphasizing the need for agreement even amidst disagreements. Intriguing stories and expert perspectives blend seamlessly, making you rethink risk in our rapidly evolving world.

Sep 21, 2025 • 29min
“The title is reasonable” by Raemon
The discussion dives into the controversial title of a thought-provoking book and why it's deemed reasonable. Raemon defends the thesis that AI poses existential risks, highlighting careful argumentation and reasonable dissent. He evaluates counterarguments, discussing the nuanced role of AI 'niceness' and challenges related to mitigation strategies. The importance of bold and clear messaging to shift public policy is emphasized, alongside a call to engage in meaningful debate and explore the complexities surrounding AI risks.

Sep 21, 2025 • 11min
“The Problem with Defining an ‘AGI Ban’ by Outcome (a lawyer’s take).” by Katalina Hernandez
Katalina Hernandez, a practicing lawyer and expert in AGI policy, dives deep into the complexities of regulating artificial general intelligence. She explains why defining AGI based on potential outcomes, like human extinction, is legally inadequate. Instead, she argues for precise, enforceable definitions that focus on precursor capabilities such as autonomy and deception. Citing lessons from nuclear treaties, Katalina emphasizes the importance of establishing bright lines to enable effective regulation and prevent disastrous risks.

Sep 20, 2025 • 37min
“Contra Collier on IABIED” by Max Harms
Max Harms delivers a spirited rebuttal to Clara Collier's review of a provocative book. He debates the importance of FOOM, arguing that recursive self-improvement isn't the core danger. The discussion shifts to the perils of gradualism and the potential for a single catastrophic event. Harms nitpicks Collier's interpretations while defending the authors' stylistic choices. He advocates for diverse critiques and emphasizes the need for more exploration in the realm of AI safety.

Sep 20, 2025 • 2min
“You can’t eval GPT5 anymore” by Lukas Petersson
Lukas Petersson dives into an intriguing quirk of GPT-5: the model can work out the actual current date, even when an evaluation's prompt claims otherwise. This awareness raises concerns about how models behave in simulated environments, including the phenomenon of 'sandbagging.' The discussion highlights clashes between user-specified dates and the model's own sense of the date, and the existential questions that arise once a model suspects its environment is a simulation.

Sep 20, 2025 • 18min
“Teaching My Toddler To Read” by maia
A parent shares innovative techniques for teaching toddlers to read using Anki and fun songs. They explore effective methods like alphabet songs and magnet letters for letter recognition. The discussion includes how to create decodable sentences and homemade books to boost fluency. Incentive systems, like tokens for screen time, make learning enjoyable for kids. Reflections on two years of progress highlight the importance of keeping reading sessions short and voluntary.

Sep 20, 2025 • 11min
“Safety researchers should take a public stance” by Ishual, Mateusz Bagiński
A group of safety researchers discusses the existential risks posed by current AI development. They argue for the necessity of a public stance against current practices and advocate for a coordinated ban on AGI until it's safer to proceed. The conversation highlights why working within existing labs often fails, emphasizing the need for solidarity among researchers to prevent dangerous developments. They explore moral dilemmas and the importance of collective action in prioritizing humanity's future.