LessWrong (Curated & Popular)

LessWrong
Jan 29, 2026 • 17min

"The Possessed Machines (summary)" by L Rudolf L

A narrated tour of an anonymous critique that reads AI culture through Dostoevsky. Short sketches link literary character types to modern lab personalities and failure modes. Topics include moral erosion in institutions, reasoning that leads to despotism, and a shared AI social organism that pressures dissent. The piece frames intelligence outrunning conscience as a cultural pathology.
Jan 28, 2026 • 26min

"Ada Palmer: Inventing the Renaissance" by Martin Sustrik

A professor stages a month-long 1492 papal election simulation to teach Machiavelli by immersion. Sixty students play historical figures with detailed character packets and chroniclers. Single actions like assassinations ripple into wildly different continental outcomes. The simulation is compared to ensemble forecasting, wargaming, and experimental history to reveal patterns amid chaos.
Jan 28, 2026 • 20min

"AI found 12 of 12 OpenSSL zero-days (while curl cancelled its bug bounty)" by Stanislav Fort

An AI system found 12 new OpenSSL zero-days, including high- and moderate-severity flaws. The team explains automated vulnerability hunting, the history of low-severity CVEs, and the AI-suggested patches that were accepted. They also describe curl canceling its bug bounty after a flood of AI-generated spam and discuss what AI-driven security means for the future.
Jan 28, 2026 • 1h 54min

"Dario Amodei – The Adolescence of Technology" by habryka

A deep dive into an essay on AI's perilous technological adolescence. Topics include autonomy risks and defenses, misuse for destruction alongside biothreat countermeasures, and AI-enabled seizures of power with possible policy responses. They also cover economic disruption, labor market shocks, wealth concentration, and indirect existential uncertainties facing humanity.
Jan 27, 2026 • 22min

"AlgZoo: uninterpreted models with fewer than 1,500 parameters" by Jacob_Hilton

Tiny RNNs and transformers trained on algorithmic tasks serve as surprising testbeds for mechanistic interpretability. The show walks through models ranging from 8 to 1,408 parameters and explains why fully understanding even few-hundred-parameter systems matters. Concrete case studies include second-argmax RNNs at several hidden sizes, analyses of decision geometry and neuron subcircuits, and a clear challenge to scale interpretability methods.
Jan 27, 2026 • 11min

"Does Pentagon Pizza Theory Work?" by rba

The episode digs into a Cold War stock-sleuthing story that linked lithium trades to H-bomb secrets, then introduces the quirky Pentagon Pizza theory and traces its origins. It walks through data-collection hurdles, from scraping tweets to building controls, and preregisters tests around three military events, reporting backtest results that show no meaningful pizza signal.
Jan 27, 2026 • 3min

"The inaugural Redwood Research podcast" by Buck, ryan_greenblatt

A behind-the-scenes chat about building a custom command-line video editor and automating shot-cutting with Claude Code. They discuss creating transcript line IDs and a small DSL to reorder lines into ffmpeg commands. Practical constraints such as file size, disk space, and network limits come up. They also reflect on what worked, what needed manual fixes, and lessons learned about tooling and agents.
Jan 26, 2026 • 16min

"Canada Lost Its Measles Elimination Status Because We Don’t Have Enough Nurses Who Speak Low German" by jenn

A deep look at why Canada lost measles elimination status and how outbreaks clustered in old-order Mennonite communities. Mapping and local reporting explain the unusual geographic pattern. The episode highlights language barriers in Low German as a key obstacle to vaccination access. A practical policy focus: hiring Low German–speaking health workers to improve outreach.
Jan 24, 2026 • 1h 12min

"Deep learning as program synthesis" by Zach Furman

Zach Furman, author of the essay 'Deep Learning as Program Synthesis' and mechanistic interpretability researcher, presents a hypothesis that deep nets search for simple, compositional programs. He traces evidence from grokking, vision circuits, and induction heads. He explores paradoxes of approximation, generalization, and convergence, and sketches how SGD and representational structure could enable program-like solutions.
Jan 24, 2026 • 21min

"Why I Transitioned: A Response" by marisa

A clear-eyed response to a previous personal account, focusing on biology, social incentives, and methodology. The speaker discusses twin studies, prenatal influences like CAH, and how genetic and environmental factors might be bounded. They introduce the “trans double bind” concept and share a personal timeline showing late-onset dysphoria and the interplay of social dynamics and medical framing.
