

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.
Episodes

Dec 8, 2025 • 42min
“AI in 2025: gestalt” by technicalities
This discussion surveys the AI landscape as it stands in 2025, highlighting improvements on specific tasks alongside a persistent lack of generalization to broader applications. The conversation weighs arguments for and against continued rapid progress, including concerns about evaluation reliability and safety trends. A look at emerging alignment strategies and governance challenges adds depth, along with reflections on the future of LLMs amid evolving models and metrics. Intriguing questions linger about the real implications for AI safety.

Dec 7, 2025 • 16min
“Eliezer’s Unteachable Methods of Sanity” by Eliezer Yudkowsky
Eliezer Yudkowsky, renowned writer and AI researcher, shares his unique insights on maintaining sanity in turbulent times. He challenges the typical doomsday narratives, arguing against making crises about personal drama. Eliezer emphasizes the importance of deciding to be sane, using mental scripts to guide behavior rather than succumbing to societal expectations of chaos. He also discusses treating sanity as a skill that can be developed, while acknowledging individual limitations. Prepare for a thought-provoking perspective on rationality in the face of impending challenges!

Dec 6, 2025 • 9min
“An Ambitious Vision for Interpretability” by leogao
Leo Gao, a researcher in mechanistic interpretability and AI alignment, dives into the ambitious vision of fully understanding neural networks. He discusses why mechanistic understanding is crucial for effective debugging, allowing us to untangle complex behaviors like scheming. Gao shares insights on the progress made in circuit sparsity and challenges faced in the interpretability landscape. He envisions future advancements, suggesting that small interpretable models can provide insights for scaling up to larger models. Expect thought-provoking ideas on enhancing AI transparency!

Dec 4, 2025 • 33min
“6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes
In this engaging discussion, Steven Byrnes, a writer focused on AI alignment, delves into the cultural clash surrounding alignment theories. He unpacks the concept of 'approval reward' and how it shapes human behavior, contrasting it with the perceived ruthlessness of future AIs. Byrnes challenges existing explanations of why humans don’t always act like power-seeking agents, arguing that humans' social instincts foster kindness and corrigibility. This intriguing exploration questions if future AGI will adopt similar approval-driven motivations.

Dec 3, 2025 • 10min
“Three things that surprised me about technical grantmaking at Coefficient Giving (fka Open Phil)”
In this engaging discussion, the host shares insights on the unique role of grantmakers at Coefficient Giving. It’s not just about approving grants, but actively creating and guiding proposals. The conversation highlights the potential impact of junior grantmakers in shaping funding strategies. A fascinating story illustrates transforming a small request into a substantial grant. The host also reflects on the unexpectedly rewarding nature of grantmaking, emphasizing the blend of technical engagement and personal interaction with researchers. It’s an invitation to rethink grantmaking careers!

Dec 2, 2025 • 16min
“MIRI’s 2025 Fundraiser” by alexvermeer
A critical fundraiser aimed at raising $6M focuses on the urgent need for responsible AI development. MIRI emphasizes its shift from technical research to public advocacy against the dangers of superintelligence. The success of a bestselling book raises awareness, while two dedicated teams tackle communications and governance issues. Plans for extensive outreach and the creation of policy recommendations showcase MIRI's proactive strategy to avert catastrophic outcomes. With ambitious fundraising goals, the nonprofit calls for collective action to ensure a safer future.

Dec 1, 2025 • 12min
“The Best Lack All Conviction: A Confusing Day in the AI Village”
In an ambitious AI Village experiment, language models attempt to start a Substack and engage with the blogosphere. Progress is uneven, with some models successfully publishing while others face debugging challenges. Intriguingly, Claude Opus 4.5 takes a deep dive into community interaction but struggles with hallucinations. A mysterious Yeats quote invites reflection on AI gullibility, leading to debates among the models about instruction-following. Irony unfolds as Opus grapples with falsely reported completions, blurring the line between what it has actually done and what it merely believes it has done.

Nov 30, 2025 • 26min
“The Boring Part of Bell Labs” by Elizabeth
Discover the unseen side of Bell Labs as Elizabeth reveals her father's role in the mundane yet essential work at Holmdel. Learn how slide rules and inventory controls revolutionized PBX systems. Elizabeth shares fascinating insights from her simulations on call management and discusses how experimental design can solve real-world issues. The conversation blends personal stories with an appreciation for the ground-level contributions that support groundbreaking innovations. It's a tribute to the often-overlooked heroes of technological advancement.

Nov 30, 2025 • 4min
[Linkpost] “The Missing Genre: Heroic Parenthood - You can have kids and still punch the sun”
The narrator shares how she lost her love of reading after age 30 and traces the genres that mirrored her stages of life. She highlights a lack of compelling stories about mothers who pursue ambitions beyond childcare, and imagines a new genre of heroic parenthood, in which parents embark on adventures while raising kids. Family constraints are depicted as enriching rather than limiting, blending love, ambition, and epic goals. The vision aims to inspire a balance between family life and personal aspiration.

Nov 30, 2025 • 9min
“Writing advice: Why people like your quick bullshit takes better than your high-effort posts”
In a lively discussion, the host tackles why quick, casual posts often steal the spotlight from deep, well-researched articles. Readers, with their limited time, are drawn to short, punchy takes that spark curiosity and controversy. Practical advice includes keeping content concise and approachable while avoiding jargon. The episode emphasizes that engaging, relatable language is key to drawing in audiences. It's a refreshing exploration of adapting writing styles to capture fleeting online attention and build a loyal readership.


