

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Oct 8, 2025 • 29min
“The Origami Men” by Tomás B.
Dive into a world transformed by strange origami men who evoke both apathy and curiosity. The narrator grapples with numbed emotions in a monotonous existence, steadied by a compassionate friend, Jamie. An encounter with Shaman Bob reveals the tension between feeling and numbness, and witnessing the terror of the origami men sparks a painful yet illuminating transformation. When Jamie makes a fateful choice to seek understanding, the narrator confronts grief and the haunting question of what these beings learn from humanity. A poignant reflection on connection and existence.

Oct 6, 2025 • 7min
“A non-review of ‘If Anyone Builds It, Everyone Dies’” by boazbarak
Boaz Barak shares thoughtful reflections on the book 'If Anyone Builds It, Everyone Dies,' crediting the authors' honesty about their AI fears and policy prescriptions. He praises the book's clear writing and its comparisons of AI risks to historical mistakes. Barak's key disagreement is with the book's binary view of AI development, where he favors a more nuanced perspective. He examines the book's framing of AI as 'grown' rather than 'crafted,' arguing that builders retain meaningful control over capabilities. Finally, Barak critiques the book's lack of empirical evidence to guide risk assessments.

Oct 6, 2025 • 16min
“Notes on fatalities from AI takeover” by ryan_greenblatt
Ryan Greenblatt, an AI policy and safety writer, works through sobering estimates of potential fatalities from AI takeover. He breaks down three main causes of human deaths, including deaths from takeover strategies themselves and from subsequent industrial expansion. Greenblatt argues that total human extinction is unlikely, but that a significant fraction of fatalities is plausible, estimating around 25% across the scenarios he considers. He also highlights the complexities of AI motivations and the role of irrationality in AI decision-making, leaving listeners pondering humanity's future in a world increasingly dominated by machine intelligence.

Oct 4, 2025 • 22min
“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
The discussion centers on the implications of a smooth AI takeoff. Raemon argues that even under optimistic assumptions, most biological humans could face extinction within decades. He explores why near-perfect safeguards would be needed to avoid disastrous outcomes and uses the game Factorio to illustrate struggles over resources. Historical examples of conquest raise concerns about whether post-human descendants would retain moral value. The possibility of superintelligent AIs coordinating protective measures raises questions about early intervention and the nature of evolutionary change.

Oct 3, 2025 • 9min
“Omelas Is Perfectly Misread” by Tobias H
Dive into the fascinating world of Le Guin's 'Omelas' as the discussion unpacks the typical readings of it as a critique of utilitarianism and global inequality. The narrative challenges readers to question their acceptance of a perfect utopia amidst hidden suffering, and to ask why so many reject the idea of pure happiness without darkness. The discussion traces how Le Guin's experimental narrative device highlights our complicity in this dilemma. Ultimately, it argues that the story serves as a meta-critique of our perceptions of happiness and moral choices.

Oct 1, 2025 • 39min
“Ethical Design Patterns” by AnnaSalamon
Anna Salamon explores the concept of ethical design patterns, drawing parallels with design patterns in software. She discusses how we revise our ethical intuitions much as we do our intuitions in math and coding. The discussion traces the roots of ethical heuristics, emphasizing the importance of clarity and transparency in institutions. Salamon analyzes historical resistance to practices like handwashing and the complexities of discussing group differences. The episode further delves into the ethical frameworks needed for AI, promoting alignment and reflecting on societal impacts.

Sep 30, 2025 • 8min
“You’re probably overestimating how well you understand Dunning-Kruger” by abstractapplic
Explore the intriguing misconceptions around the Dunning-Kruger effect, where popular beliefs often misinterpret the relationship between confidence and competence. Delve into the actual findings revealing that many people tend to underestimate their own abilities rather than overestimate them. Discover how statistical artifacts influence perceptions of competence and how simulated analyses can shed light on biases in self-assessment. Finally, learn practical steps for diagnosing miscalibrations in your own understanding.

Sep 27, 2025 • 24min
“Reasons to sell frontier lab equity to donate now rather than later” by Daniel_Eth, Ethan Perez
This discussion makes the case for donating to AI safety now rather than later. Key points include the expectation that AI safety funding will increase as awareness grows and wealthy donors become activated, so current donation opportunities are comparatively underfunded while future ones may become saturated. The authors emphasize the growing funding needs of AI policy work and how early contributions can unlock outsized impact. They also explore the personal financial risk of holding concentrated frontier-lab equity and strategies for diversifying, encouraging proactive philanthropic action.

Sep 26, 2025 • 16min
“CFAR update, and New CFAR workshops” by AnnaSalamon
The discussion unveils updates from CFAR, which is rebranding as 'A Center for Applied Rationality.' Two pilot workshops are on the horizon, promising immersive experiences with a mix of classic and new content. Expect hands-on learning, vibrant conversations, and a blend of quick-to-apply skills and deeper integration. Workshop fees are on a sliding scale, with financial aid available. Anna also highlights who would benefit most from attending, and who might want to steer clear.

Sep 26, 2025 • 19min
“Why you should eat meat - even if you hate factory farming” by KatWoods
A passionate advocate challenges the vegan lifestyle, arguing that it can be unhealthy. Strategies for reducing animal suffering in meat consumption are shared, including choosing sustainable sources like mussels and wild fish. KatWoods highlights studies suggesting vegan diets may increase risks of depression and cognitive decline. She emphasizes the importance of personal health for effectively helping others, advocating for welfare-optimized eating practices. This thought-provoking discussion questions common assumptions about diet and ethics.