LessWrong (Curated & Popular)

Oct 19, 2025 • 27min

“That Mad Olympiad” by Tomás B.

Dive into a world where precocious child authors like Chen and Adrian navigate the intriguing landscape of distilled literature and today's Lit Olympiad. Explore the tension between AI creativity and organic writing, as the competition unfolds under strict rules. The allure of celebrity writers like Melissa Lee surfaces, while questions about AI love and dating culture emerge. From bittersweet bowling dates to evolving friendships, this tale weaves a rich tapestry of modern relationships amid the lingering impact of a pre-AI society.
Oct 17, 2025 • 14min

“The ‘Length’ of ‘Horizons’” by Adam Scholl

Adam Scholl, an insightful author on AI measurement, explores the quirky world of artificial intelligence. He discusses METR's 'horizon length' benchmark, questioning its ability to reflect true AI task difficulty and its predictive value for transformative advancements. Adam highlights the weird blend of AI capabilities and failures, pointing out the biases inherent in benchmark selection. He urges caution in treating performance on simple tasks as an indicator of future AI breakthroughs, sparking a deeper conversation about how we measure progress in this fascinating field.
Oct 15, 2025 • 4min

“Don’t Mock Yourself” by Algon

Discover the transformative power of stopping self-insults for two weeks. The host shares surprising insights about how often they caught themselves on the verge of putting themselves down. They explore how self-deprecating humor casts them as the butt of the joke and the challenge of shifting to other comedic styles. The experiment leads to a newfound distaste for negative media and a boost in confidence. They argue that self-mockery limits ambition and reinforces unhelpful identities, encouraging listeners to drop the habit.
Oct 14, 2025 • 26min

“If Anyone Builds It, Everyone Dies, a semi-outsider review” by dvd

A semi-outsider critiques AI risk theories, questioning why we assume AI will want to survive or possess coherent drives. He challenges the book’s analogy of evolution, suggesting it might lack explanatory power. The discussion includes concerns about the potential for intermediate phases in AI's development and critiques a proposed international treaty to manage AI risks, comparing it to historical failures. Ultimately, there's an evaluation of the book's readability alongside its shortcomings in factual detail and insight.
Oct 12, 2025 • 8min

“The Most Common Bad Argument In These Parts” by J Bostock

J. Bostock, a contributor to LessWrong, explores a troubling reasoning flaw known as 'exhaustive free association,' frequently seen in rationalist communities. He illustrates how this pattern misleads: a quickly brainstormed list of possibilities gets treated as if it were exhaustive, producing false conclusions. Bostock critiques superforecasters for underestimating AI risks on this basis and discusses the pattern's implications for welfare estimates. The episode dives into why bad arguments of this kind are so persuasive and emphasizes the importance of challenging the reasoning style to improve discourse.
Oct 11, 2025 • 18min

“Towards a Typology of Strange LLM Chains-of-Thought” by 1a3orn

Explore the intriguing phenomenon of strange chains-of-thought in reinforcement learning-trained language models. The discussion dives into six fascinating hypotheses, ranging from the evolution of a new, more efficient language to accidental byproducts known as spandrels. There's also a look at how context refresh can help reset reasoning, and at whether models intentionally obfuscate their thought processes. The ideas of natural drift and of conflicting learned sub-algorithms further highlight the complexities of language development in AI.
Oct 10, 2025 • 6min

“I take antidepressants. You’re welcome” by Elizabeth

In this entertaining discussion, Elizabeth hilariously reveals how her antidepressants make everyone seem sharper and her role as the arbiter of behavior. She shares a quirky experience about how her musical taste changed and how antidepressants influence her enjoyment. There's a deep dive into how these medications can improve motivation for health habits and enhance social interactions, challenging common misconceptions. With insights into her unique experience, she opens up about the benefits and caveats of her journey with medication.
Oct 10, 2025 • 4min

“Inoculation prompting: Instructing models to misbehave at train-time can improve run-time behavior” by Sam Marks

Discover the fascinating concept of inoculation prompting, where models are deliberately instructed to misbehave during training in order to behave better at deployment. Sam Marks dives into examples like coding test cases, revealing how this technique can prevent models from learning harmful hacks. He discusses two impactful papers exploring selective trait learning and the balance between capabilities and safety. Learn how modifying training prompts can effectively reduce unwanted behaviors without diminishing desired skills. It's a blend of creativity and science!
Oct 10, 2025 • 19min

“Hospitalization: A Review” by Logan Riggs

Logan Riggs, an author and essayist, shares a gripping personal account of his hospitalization for a spontaneous pneumothorax. He narrates the surreal experience of rushing to the ER under the fear of a heart attack. Listeners will learn about the diagnostic journey, including a mis-scanned X-ray that delayed treatment. With humor and candor, he offers practical advice on patient advocacy and communication with medical staff, and reflects on the emotional impact on loved ones, emphasizing gratitude for caregivers.
Oct 9, 2025 • 24min

“What, if not agency?” by abramdemski

Abram Demski, an insightful commentator on AI and author, breaks down Sahil's complex ideas about high-actuation and distributed care. He explores why high actuation captures technology's role better than automation, while clarifying the difference between agentic and co-agentic AI. Demski discusses solarware—AI-enabled custom interfaces that could transform user experiences. He also highlights the importance of networks of care and reframes AI threats as indifference risks, tackling the intricate relationship between agency, goals, and alignment.
