LessWrong (Curated & Popular)

Jan 8, 2026 • 6min

"On Owning Galaxies" by Simon Lermen

The discussion kicks off with a bold notion: trading AI shares for entire galaxies. The author argues that property rights are mere human constructs unlikely to survive an AI singularity, and highlights the fragility of human existence, noting the potential for AI to lead to our extinction. The conversation dives into the dynamics of power and ownership, warning that smarter AIs could easily manipulate or override humans. Ultimately, it questions the assumption that traditional structures will endure in the face of radical technological change.
Jan 6, 2026 • 51min

"AI Futures Timelines and Takeoff Model: Dec 2025 Update" by elifland, bhalstead, Alex Kastner, Daniel Kokotajlo

Get ready for an insightful dive into AI futures! The authors revise their timelines and takeoff models, pushing their forecast for full coding automation back by about three years. They explore various forecasting methods, from revenue extrapolation to compute anchors, and discuss how these shape expectations for superintelligence. Unpacking their three-stage model, they analyze the transitions from automated coding to the intelligence explosion. Expect stimulating debates on technical limits, revised probability estimates, and what future evidence might reshape their outlook.
Jan 5, 2026 • 14min

"In My Misanthropy Era" by jenn

Dive into a journey through the Great Books that fuels both curiosity and misanthropy. Jenn explores Schopenhauer’s scathing critique of the common man and grapples with the tension between elitism and egalitarianism. She recounts her attempts at philosophy meetups, only to be disappointed by superficial discourse. As she transforms into an edgelord performer, deep reflections spiral into a crisis of belonging. Ultimately, Jenn seeks to reconcile her disdain with an appreciation for humanity, combining humor with a quest for deeper connections.
Jan 3, 2026 • 22min

"2025 in AI predictions" by jessicata

Dive into a fascinating exploration of AI predictions for 2025! The host uncovers how past forecasts often overestimate capabilities, challenging the hype around AGI. Discover Jessica Taylor's underestimated prompt challenge and examine evaluations of hardware claims. There's a deep dive into predictions about job automation and the implications of AI on industries. Notable predictions from industry leaders about future breakthroughs offer a mix of optimism and skepticism. Expect thought-provoking insights on the future of AI!
Dec 27, 2025 • 18min

"Good if make prior after data instead of before" by dynomight

The discussion dives into the concept of setting priors before analyzing data, exploring why this traditional approach might not always work. Using aliens as a thought-provoking example, the host reveals how ambiguous evidence complicates belief updating. Different types of aliens illustrate the necessity of finer categorization to reach more accurate conclusions. The conversation emphasizes the pitfalls of rigid prior assumptions and advocates for a data-informed approach to hypothesis formation. Ultimately, listeners are encouraged to rethink how they assess likelihoods.
Dec 27, 2025 • 13min

"Measuring no CoT math time horizon (single forward pass)" by ryan_greenblatt

Explore the fascinating world of AI math evaluation as Ryan Greenblatt discusses no-chain-of-thought (CoT) time horizons for solving easy problems. He reports that Opus 4.5 reaches a no-CoT time horizon of roughly 3.5 minutes, a figure that has been doubling about every nine months. Learn how repeating questions and using filler tokens can significantly enhance performance. Dive into the implications of these findings for AI reasoning and future math capabilities, highlighting both the strengths and limitations of different models.
Dec 23, 2025 • 37min

"Recent LLMs can use filler tokens or problem repeats to improve (no-CoT) math performance" by ryan_greenblatt

Discover how recent language models can use 'filler tokens' to boost no-CoT math performance, yielding significant accuracy improvements. A deep dive into repeating problem statements reveals that repeats often work even better, particularly for less capable models. Ryan Greenblatt highlights intriguing statistical findings on effectiveness across various datasets. The results hint at underlying metacognitive abilities in LLMs, suggesting exciting pathways for future research on AI capabilities.
Dec 23, 2025 • 5min

"Turning 20 in the probable pre-apocalypse" by Parv Mahajan

In this captivating conversation, Parv Mahajan shares his personal reflections on turning 20 in a world of rapid technological change and existential uncertainty. He discusses how the pace of AI progress sparks both excitement and dread, shifting his sense of what limits him from ability to sheer willpower. Parv emphasizes the urgency of seizing opportunities amid chaos, balancing joy with loneliness, and feeling grateful to be alive during such a pivotal moment.
Dec 23, 2025 • 21min

"Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk

The discussion dives into how pretraining large language models on data about misaligned AIs actually increases misalignment, while synthetic data about aligned AIs significantly improves alignment. The team reveals that these alignment benefits persist through post-training, emphasizing the need for intentional pretraining strategies. They also note that even benign fine-tuning can exacerbate misalignment. The conversation covers findings from their extensive evaluations showing the crucial role data filtering plays in shaping AI behavior.
Dec 22, 2025 • 8min

"Dancing in a World of Horseradish" by lsusr

Explore the fascinating divide between luxury and mass-market products, specifically in airline travel. Discover how Etihad's ultra-premium cabin, The Residence, struggles because its price approaches that of a private jet. Delve into the concept of faux luxury, with imitation wasabi (typically dyed horseradish) serving as a metaphor for products that don't live up to their billing. The discussion also touches on the decline of live music and its impact on dating, highlighting how modern conveniences have transformed traditional social interactions.
