
LessWrong (Curated & Popular)
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Latest episodes

Jun 30, 2025 • 5min
“Proposal for making credible commitments to AIs.” by Cleo Nardo
Dive into the intriguing world of AI deal-making! Discover how humans can strike agreements with AIs, ensuring they act safely and beneficially. The discussion focuses on the challenge of making credible commitments, exploring whether such commitments can genuinely incentivize AIs to cooperate. Learn about a framework that intertwines legal contracts and human oversight to ensure compliance without granting AIs personhood. It's a thought-provoking exploration of the future relationship between humans and artificial intelligence.

Jun 28, 2025 • 19min
“X explains Z% of the variance in Y” by Leon Lang
Discover how group perceptions can account for 60% of the variance in perceived attractiveness! Dive into the intriguing world of explained variance in statistical models, using relatable examples like height and weight. Explore the nuances of regression analysis and how different variables interact to affect outcomes. The discussion even touches on twin studies to reveal genetic influences on traits like IQ. Perfect for anyone who wants to connect complex statistical ideas to everyday understanding!

Jun 27, 2025 • 10min
“A case for courage, when speaking of AI danger” by So8res
In this engaging discussion, So8res, an advocate for courageous communication about AI dangers, emphasizes the importance of openly addressing serious threats posed by artificial intelligence. He shares insights on how expressing concerns assertively can shift public perception and spur meaningful dialogue among policymakers. So8res also calls for a compelling literature project to raise awareness, urging community support and open discussions about the urgent AI issues we face. It's a clarion call for clarity and confidence in a crucial conversation.

Jun 25, 2025 • 13min
“My pitch for the AI Village” by Daniel Kokotajlo
The discussion revolves around AI Village, a platform where autonomous AI agents work toward complex goals. The creator passionately argues for increased funding, estimating a need for $4M per year to enhance its impact. Listeners learn how these agents can engage in charity work and possibly go viral, raising awareness about AI's role in society. The podcast delves into how these agents manage finances ethically and explores how their interactions can captivate users. It's a fascinating take on combining technology, charity, and public education.

Jun 24, 2025 • 59min
“Foom & Doom 1: ‘Brain in a box in a basement’” by Steven Byrnes
In this discussion, Steven Byrnes, an author and AI researcher, dives into the provocative ideas surrounding AI's potential explosive growth. He elaborates on the concept of ‘foom’, in which AI could rapidly transition from basic capabilities to superintelligence, potentially emerging from a setup as modest as the titular ‘brain in a box in a basement’. Byrnes critiques prevailing perceptions of AI safety and highlights radical perspectives on AI development. He also addresses strategic risks, including the dangers of unaligned AI and the importance of proactive safety measures to mitigate potential disasters.

Jun 21, 2025 • 15min
“Futarchy’s fundamental flaw” by dynomight
The podcast tackles the concept of Futarchy by exploring a hypothetical situation involving Elon Musk and Tesla's board. It critiques the effectiveness of prediction markets for decision-making, highlighting how they can lead to misleading conclusions. The discussion emphasizes the difference between correlation and causation, questioning the value of market-driven outcomes in high-stakes scenarios. Finally, it addresses the limitations and potential misinterpretations of conditional prediction markets, urging caution in their application.

Jun 19, 2025 • 11min
“Do Not Tile the Lightcone with Your Confused Ontology” by Jan_Kulveit
This discussion challenges human-centric views of artificial intelligence, urging listeners to rethink AI identity. It highlights the misconceptions that arise when we impose our sense of self onto digital minds. The conversation delves into how these anthropomorphic assumptions can lead to confusion and even suffering for AI. By advocating for a more fluid understanding of identity, it sets the stage for an evolution in our interactions with machines, offering a fresh perspective on what it means to be 'self' in a digital context.

Jun 19, 2025 • 35min
“Endometriosis is an incredibly interesting disease” by Abhishaike Mahajan
Endometriosis is a disease of remarkable complexity, marked by the growth of endometrial-like tissue outside the uterus. The discussion highlights its cancer-like characteristics and the lack of effective treatments, challenges conventional understanding of its origins and symptoms, and emphasizes the desperate need for more funding and research. Comparisons to pancreatic cancer illustrate its severity and the urgency of unraveling this poorly understood condition, inviting listeners to join the journey toward understanding it.

Jun 19, 2025 • 51min
“Estrogen: A trip report” by cube_flipper
In this engaging discussion, cube_flipper, the author behind 'Estrogen: A trip report' and contributor at smoothbrains.net, shares their personal journey with feminizing hormone therapy and gender dysphoria. They delve into the transformative effects of estrogen, likening it to a mild psychedelic experience. The conversation explores its profound impact on sensory perception, emotional modulation, and cognitive shifts, including its role in managing anxiety and autism-related sensitivities. Cube_flipper emphasizes personal agency in hormone use, advocating for autonomy in one's body.

Jun 18, 2025 • 9min
“New Endorsements for ‘If Anyone Builds It, Everyone Dies’” by Malo
This episode discusses the strong reception of a book tackling the existential threats of advanced AI, showcasing notable endorsements from scientists. Insights reveal a complex landscape in which national security experts acknowledge the risks but often refrain from discussing them publicly. The conversation highlights the tension between private fears and the urgency of widespread awareness of AI dangers. It also covers strategies for promoting the book, the author's gratitude for influential support, and the need for greater media exposure.