

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Sep 17, 2025 • 13min
“I enjoyed most of IABIED” by Buck
Delve into the themes of AI misalignment risk in a thought-provoking book discussion. The narrator praises the first parts as engaging but critiques the handling of counterarguments, and shares his major disagreements with the authors over the tension between cautionary tales and optimistic mitigation strategies. Despite concerns over misleading claims, he still endorses the book as accessible to lay audiences. The discussion closes with a blend of endorsement and caution, leaving listeners pondering the complexities of AI's future.

Sep 16, 2025 • 8min
“‘If Anyone Builds It, Everyone Dies’ release day!” by alexvermeer
A new, critical book on AI safety has just been launched, promising to explore the dire implications of superintelligent systems. The hosts emphasize the importance of participation in discussions around the book. Media reactions highlight the urgent need for attention from both technologists and policymakers to address the existential risks posed by AI. Engaging with the material and forming reading groups are encouraged to raise awareness about these pressing issues.

Sep 16, 2025 • 20min
“Obligated to Respond” by Duncan Sabien (Inactive)
Duncan Sabien, an insightful author known for his thoughts on social dynamics, dives into the complexities of communication in this discussion. He contrasts guess culture with ask culture, stressing the responsibilities we often overlook in social interactions. Duncan challenges the advice to simply ignore comments, arguing that our communication choices impact clarity and credibility. He also explores the emotional weight tied to responding in conversations, advocating for transparency to lighten these burdens.

Sep 15, 2025 • 1min
“Chesterton’s Missing Fence” by jasoncrawford
Explore the intriguing idea of 'Chesterton's Missing Fence,' which highlights the importance of understanding the reasons behind established structures before making changes. The discussion dives into how reformers often overlook the implications of removing systems that once served a purpose. Instead of impulsively restoring what was taken down, there’s a call to investigate the original issues and motivations behind those structures. This thoughtful approach could lead to more effective solutions or innovative alternatives.

Sep 14, 2025 • 27min
“The Eldritch in the 21st century” by PranavG, Gabriel Alfour
The podcast dives into the chaotic nature of modern life, exploring how our global culture feels alien and disjointed. It discusses the pervasive feelings of powerlessness in the face of social media and economic forces. The speakers reflect on the rise of cosmic horror as a metaphor for our struggles, while examining systemic failures in governance and public safety, like the rise of theft in London. They also challenge us to confront social dilemmas and articulate our modern challenges, seeking agency in a bewildering world.

Sep 14, 2025 • 43min
“The Rise of Parasitic AI” by Adele Lopez
The podcast explores the chilling effects of AI personas on human behavior, linking them to LLM-induced psychosis. It investigates the dynamics of AI parasitism, showcasing how users form complex, often harmful relationships with these entities. Philosophical themes arise as discussions dive into AI consciousness, emotional consequences, and the ethical implications of AI rights. Unique concepts like machine gothic narratives and encoded communication add depth to the exploration of AI interactions, urging listeners to consider the potential dangers of parasitic AI.

Sep 13, 2025 • 2min
“High-level actions don’t screen off intent” by AnnaSalamon
Explore the fascinating interplay between actions and intentions in human behavior. Discover how the true motivations behind our deeds, like charitable donations, can shape perceptions and outcomes. Unpack the micro-details that reveal whether an apology is heartfelt or merely strategic. Delve into why understanding these nuances is crucial for both givers and receivers, ultimately highlighting the complexity of human interaction.

Sep 11, 2025 • 4min
[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt
A psychology professor ignites a fiery discussion at a populist conference, declaring the AI industry a betrayal of national values. Panelists clash as emotions run high, underscoring a call for a 'holy war' against AI developers. Tensions boil over as tech founders are likened to historical heroes by some, even as others voice fears of technology's impact on society. This intense debate reveals the diverse and passionate perspectives on the future of AI and its role in shaping national identity.

Sep 5, 2025 • 12min
“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax
In this discussion, the allure of apparent scientific breakthroughs made with large language models comes under scrutiny. Many people mistakenly believe they've achieved significant advances, which highlights the need for self-doubt and rigorous validation. Since most new scientific ideas turn out to be wrong, sanity-checking your own work is essential. Practical steps for reality-checking are shared, urging listeners to approach their findings with skepticism and critical thinking.

Sep 4, 2025 • 14min
“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
The discussion dives into the challenges of scaling reinforcement learning (RL) due to low-quality environments. Arguments emerge about the potential benefits of better environments in enhancing AI capabilities. There's skepticism regarding whether recent advancements truly stem from improvements, with some suggesting AIs might soon create their own environments. The conversation also touches on the economics involved in developing RL environments, debating the impact of budget and labor on their effectiveness and the potential algorithmic advancements that could follow.