

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes
Mentioned books

Sep 4, 2025 • 12min
“‘I’d accepted losing my husband, until others started getting theirs back’” by Ariel Zeleznikow-Johnston
Note: While this article presents one possibility for how the future may play out, it should not be taken as a specific prediction by either the author or the Brain Preservation Foundation. Amy Robertson-Wong, August 18, 2085. ‘My husband had only died because saving his life would have been too weird.’ Annabel Walker had this realisation earlier this year after watching a video of Marcus Chen, a retired primary school teacher, sitting on a bed. His face is slack with relief, and teary-eyed family members are seated all around him. The scene is striking, but not particularly shocking - unless one surmises that Chen has recently been revived from preservation, after having been legally dead for fifteen years. Annabel barely made it to the toilet before vomiting. Chen had been the mentor of her late husband Julian, who had died at sixty-four in 2076. Six months earlier, he'd been [...] ---Outline:(02:24) Available But Invisible(04:16) A Growing Phenomenon(06:35) The Historical Context(09:23) The End of Preservation---
First published:
September 3rd, 2025
Source:
https://www.lesswrong.com/posts/Ek4Qk6iyKv9MNF8QD/i-d-accepted-losing-my-husband-until-others-started-getting
---
Narrated by TYPE III AUDIO.

Sep 4, 2025 • 3min
“Three main views on the future of AI” by Alex Amadori, Eva_B, Gabriel Alfour, Andrea_Miotti
Expert opinions about future AI development span a wide range, from predictions that we will reach ASI soon and then humanity goes extinct, to predictions that AI progress will plateau soon, resulting in weaker AI that presents much more mundane risks and benefits. However, non-experts often encounter only a single narrative, depending on things like their social circles and social media feed.
In our paper, we propose a taxonomy that divides expert views on AIs into three main clusters. We hope to provide a concise and digestible way to explain the most common views and the crucial disagreements between them.
You can read the paper at: doctrines.ai. Here is a brief summary:
Dominance doctrine. The Dominance doctrine predicts that the first actor to develop sufficiently advanced AI will gain a decisive strategic advantage over all others. This is the natural consequence of two beliefs commonly held [...] ---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/wh6cZbKohCqYLXWsr/three-main-views-on-the-future-of-ai
---
Narrated by TYPE III AUDIO.

Sep 4, 2025 • 42min
“Natural Latents: Latent Variables Stable Across Ontologies” by johnswentworth, David Lorell
Audio note: this article contains 369 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. There's a details box here with the title "Background on where this post/paper came from". The box contents are omitted from this narration. This post is an overhaul of (and large improvement to) a paper we wrote about a year ago. The PDF version will hopefully be up on arXiv shortly, and we will link that from here once it's available. As of posting, this is probably the best first-stop mathematical intro to natural latents. Abstract Suppose two Bayesian agents each learn a generative model of the same environment. We will assume the two have converged on the predictive distribution (i.e. distribution over some observables in the environment), but may have different generative models containing different latent variables. Under [...] ---Outline:(00:52) Abstract(01:42) Background(04:12) The Math(04:15) Setup & Objective(07:05) Notation(09:45) Foundational Concepts: Mediation, Redundancy & Naturality(12:59) Core Theorems(14:51) Naturality _\implies_ Minimality Among Mediators(15:59) Naturality _\implies_ Maximality Among Redunds(16:47) Isomorphism of Natural Latents(17:14) Application To Translatability(17:18) Motivating Question(18:27) Guaranteed Translatability(21:09) Natural Latents: Intuition & Examples(21:24) When Do Natural Latents Exist? Some Intuition From The Exact Case(23:11) Worked Quantitative Example of the Mediator Determines Redund Theorem(26:54) Intuitive Examples of Natural Latents(27:17) Ideal Gas(28:22) Biased Die(29:36) Timescale Separation In A Markov Chain(30:24) Discussion & Conclusion(33:56) Acknowledgements(34:05) Appendices(34:08) Graphical Notation and Some Rules for Graphical Proofs(35:02) Frankenstein Rule(37:39) Factorization Transfer(39:15) Bookkeeping Rule(40:03) The Dangly Bit Lemma(41:09) Graphical Proofs(41:18) Python Script for Computing _D_{KL}_ in Worked Example. The original text contained 5 footnotes which were omitted from this narration. ---
First published:
September 4th, 2025
Source:
https://www.lesswrong.com/posts/Qdgo2jYAuFRMeMRJT/natural-latents-latent-variables-stable-across-ontologies
---
Narrated by TYPE III AUDIO.
---Images from the article (figure captions; the diagrams themselves are not included): the mediation condition _P[X_1, X_2, \Lambda^i] = P[\Lambda^i] \prod_j P[X_j|\Lambda^i]_; the approximate version _\epsilon \geq D_{KL}(P[\Lambda^i, X] || P[\Lambda^i]\prod_j P[X_j|\Lambda^i])_; and the redundancy condition: _\Lambda'_ is a redund over _X_1, ..., X_n_ if and only if _P[X, \Lambda']_ satisfies the graph for all _j_; intuitively, any one component of _X_ is enough to fully determine _\Lambda'_. Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
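A minimal sketch of how the mediation condition above can be checked numerically for a small discrete distribution; this is not the post's own Python script (which is omitted from the narration), and the toy joint distribution and variable names below are illustrative assumptions rather than the post's worked example.

```python
import numpy as np

# Toy joint distribution P[lam, x1, x2] over a binary latent and two binary
# observables. Purely illustrative; not the worked example from the post.
P = np.array([
    [[0.20, 0.05],   # lam = 0
     [0.05, 0.20]],
    [[0.05, 0.20],   # lam = 1
     [0.20, 0.05]],
])
P /= P.sum()  # normalize, just in case

def mediation_dkl(P):
    """D_KL( P[lam, x1, x2] || P[lam] P[x1|lam] P[x2|lam] ): how far lam is
    from exactly mediating between x1 and x2 (0 means exact mediation)."""
    p_lam = P.sum(axis=(1, 2))                          # P[lam]
    p_x1_given_lam = P.sum(axis=2) / p_lam[:, None]     # P[x1 | lam]
    p_x2_given_lam = P.sum(axis=1) / p_lam[:, None]     # P[x2 | lam]
    Q = (p_lam[:, None, None]
         * p_x1_given_lam[:, :, None]
         * p_x2_given_lam[:, None, :])
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

print(f"mediation D_KL: {mediation_dkl(P):.4f} nats")
```

The same pattern extends to the approximate condition in the captions: a latent approximately mediates when this divergence is at most epsilon.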

Sep 3, 2025 • 43min
“Traffic and Transit Roundup #1” by Zvi
Traffic and transit are finally getting a roundup all their own.
I’ll start out with various victory laps on the awesomeness that is New York City Congestion pricing, which should hopefully now be a settled matter, then do a survey of everything else.
New York City Congestion Pricing Redux
We spent years fighting to get congestion pricing passed in New York City.
Once it was in place, everyone saw that it worked. The city was a better place, almost everyone was better off, and also we got bonus tax revenue. And this is only the bare minimum viable product, we could do so much better.
We had a big debate over whether congestion pricing is good until it was implemented. At this point, with traffic speeds downtown up 15% and business visits improving, it is very clear this was exactly the huge [...] ---Outline:(00:24) New York City Congestion Pricing Redux(10:34) New York City Light Rail(11:51) Fare Evasion (1)(14:55) Fare Evasion (2)(16:07) Free Transit(19:52) The Last Train(20:11) Smooth Operator(20:49) Mass Transit And Health(21:51) Ah California High Speed Rail(23:58) Amtrak(25:18) The High Speed Rail We Actually Need Is Acela(30:05) The Other High Speed Rail Project(31:22) Caltrain(32:07) Bike Lanes(32:24) Airports(32:54) Jones Act Bonus Episodes(36:55) In Brief---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/qtZ5CgDWe3BuiH8Wn/traffic-and-transit-roundup-1
---
Narrated by TYPE III AUDIO.
---Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Sep 3, 2025 • 14min
“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like:
RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch.
Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...] ---Outline:(04:18) Counterargument: Actually, companies haven't gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments etc.) so better RL environments didn't drive much of late 2024 and 2025 progress(05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments(06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, we'll see fast progress(08:34) Counterargument: This isn't that related to RL scale up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold and this will cause (much) faster progress late this year or early next year(10:12) Thoughts and speculation on scaling up the quality of RL environments. The original text contained 5 footnotes which were omitted from this narration. ---
First published:
September 3rd, 2025
Source:
https://www.lesswrong.com/posts/HsLWpZ2zad43nzvWi/trust-me-bro-just-one-more-rl-scale-up-this-one-will-be-the
---
Narrated by TYPE III AUDIO.

Sep 3, 2025 • 5min
“When Both People Are Interested, How Often Is Flirtatious Escalation Mutual?” by johnswentworth
Here are two models of “the norm” when it comes to flirtatious escalation, assuming both people are in fact interested: “Mutual Escalation”: both people go back-and-forth making gradually more escalative moves. One might tend to escalate first, but the other will at least match escalation level. “Active/Passive”: one person makes roughly all of the escalative moves. The other person passively approves/disapproves, and the active participant tracks their approval when deciding to dial up/down. Multiple people (including agree-votes and offline) have told me that the first is most common. Multiple people (including agree-votes and offline) have told me that the second is most common. What I really want to know is: Conditional on mutual interest, what are the relative frequencies of these two kinds of flirtatious escalation? Are there particular settings/cultures/etc in which those frequencies are very different? These are both fairly difficult questions to answer, because of [...] ---Outline:(01:54) Some Priors(02:47) Some Data(04:22) Some Better Answers?---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/dpfYKEC2NijExZQQ6/when-both-people-are-interested-how-often-is-flirtatious
---
Narrated by TYPE III AUDIO.

Sep 3, 2025 • 1h 47min
“How To Become A Mechanistic Interpretability Researcher” by Neel Nanda
Note: If you'll forgive the shameless self-promotion, applications for my MATS stream are open until Sept 12. I help people write a mech interp paper, often accept promising people new to mech interp, and alumni often have careers as mech interp researchers. If you're interested in this post I recommend applying! The application should be educational whatever happens: you spend a weekend doing a small mech interp research project, and show me what you learned. Last updated Sept 2 2025. TL;DR: This post is about the mindset and process I recommend if you want to do mechanistic interpretability research. I aim to give a clear sense of direction, so I give opinionated advice and concrete recommendations. Mech interp is high-leverage, impactful, and learnable on your own with short feedback loops and modest compute. Learn the minimum viable basics, then do research. Mech interp is an empirical science. Three stages: [...] ---Outline:(00:45) TL;DR(06:26) Introduction(07:42) High-Level Framing(09:36) Stage 1: Learning the Ropes(10:41) Machine Learning & Transformer Basics(13:22) Mechanistic Interpretability Techniques(15:57) Mechanistic Interpretability Coding & Tooling(19:40) Understanding the literature(22:45) Using LLMs for Learning(27:38) Interlude: What is mech interp?(31:27) The Big Picture: Learning the Craft of Research(33:24) Unpacking the Research Process(37:04) What is research taste?(38:28) Stage 2: Practicing Research with Mini-Projects(39:28) Choose A Project(43:04) Practicing Exploration(45:51) Practicing Understanding(48:09) Using LLMs for Research Code(49:33) Interlude: What's New In Mechanistic Interpretability?(49:56) Avoiding Fads(53:58) What's New In Mech Interp?(01:00:10) A Pragmatic Vision for Mech Interp(01:01:42) Stage 3: Working Up To Full Research Projects(01:03:12) Key Research Mindsets(01:09:22) Deepening Your Skills(01:16:59) Practicing Ideation(01:21:19) Write up your work!(01:27:33) Mentorship, Collaboration and Sharing Your Work(01:28:02) So what does a research mentor actually do?(01:30:23) Advice on finding a mentor(01:35:53) Community & collaborators(01:37:54) Careers(01:37:57) Where to apply(01:39:30) What do hiring managers look for(01:42:44) Should you do a PhD? The original text contained 36 footnotes which were omitted from this narration. ---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/jP9KDyMkchuv6tHwm/how-to-become-a-mechanistic-interpretability-researcher
---
Narrated by TYPE III AUDIO.

Sep 2, 2025 • 5min
“Simulating the *rest* of the political disagreement” by Raemon
There's a mistake I made a couple times and didn't really internalize the lesson as fast as I'd like. Moreover, it wasn't even a failure to generalize; it was basically a failure to even have a single update stick about a single situation. The particular example was me saying, roughly: Look, I'm 60%+ on "Alignment is quite hard, in a way that's unlikely to be solved without a 6+ year pause." I can imagine believing it was lower, but it feels crazy to me to think it's lower than like 15%. And at 15%, it's still horrendously irresponsible to try to solve AI takeoff by rushing forward and winging it rather than "everybody stop, and actually give yourselves time to think." (The error mode here is something like "I was imagining what I'd think if you slid this one belief slider from ~60%+ to 15%, without imagining all the other beliefs that would [...] ---
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/vJrP7nbnJTwk4fcbk/simulating-the-rest-of-the-political-disagreement
---
Narrated by TYPE III AUDIO.

Sep 2, 2025 • 7min
“%CPU Utilization Is A Lie” by Brendan Long
I deal with a lot of servers at work, and one thing everyone wants to know about their servers is how close they are to being at max utilization. It should be easy, right? Just pull up top or another system monitor tool, look at network, memory and CPU utilization, and whichever one is the highest tells you how close you are to the limits. For example, this machine is at 50% CPU utilization, so it can probably do twice as much of whatever it's doing. And yet, whenever people actually try to project these numbers, they find that CPU utilization doesn't quite increase linearly. But how bad could it possibly be? To answer this question, I ran a bunch of stress tests and monitored both how much work they did and what the system-reported CPU utilization was, then graphed the results. Setup I vibe-coded a script that runs stress-ng [...] ---Outline:(01:06) Setup(02:01) Results(02:07) General CPU(02:45) 64-bit Integer Math(03:19) Matrix Math(04:08) What's Going On?(04:11) Hyperthreading(04:54) Turbo(05:33) Does This Matter? The original text contained 2 footnotes which were omitted from this narration. ---
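For a sense of what such a measurement loop might look like, here is a minimal sketch, not the author's actual vibe-coded script: it runs stress-ng at increasing worker counts and records both the reported throughput (bogo-ops/s) and the system-wide CPU utilization as seen by psutil. The stress-ng flags and the metrics parsing are assumptions and may need adjusting for your stress-ng version.

```python
import re
import subprocess

import psutil  # assumed available: pip install psutil

def run_stress(workers: int, seconds: int = 10, method: str = "matrixprod"):
    """Run stress-ng with `workers` CPU stressors; return (bogo_ops_per_sec, avg_cpu_percent)."""
    psutil.cpu_percent(interval=None)  # reset psutil's utilization counter
    result = subprocess.run(
        ["stress-ng", "--cpu", str(workers), "--cpu-method", method,
         "--timeout", f"{seconds}s", "--metrics-brief"],
        capture_output=True, text=True,
    )
    cpu_util = psutil.cpu_percent(interval=None)  # average since the reset above
    # stress-ng's --metrics-brief summary (on stderr) includes a "cpu" row whose
    # fifth numeric column is bogo ops/s in real time. The format is
    # version-dependent; adjust this parsing if it doesn't match your output.
    match = re.search(r"cpu\s+[\d.]+\s+[\d.]+\s+[\d.]+\s+[\d.]+\s+([\d.]+)", result.stderr)
    bogo_ops_per_sec = float(match.group(1)) if match else float("nan")
    return bogo_ops_per_sec, cpu_util

for n in (1, 2, 4, 8, 16):
    ops, util = run_stress(n)
    print(f"{n:>2} workers: {ops:>12.1f} bogo-ops/s at {util:5.1f}% CPU")
```

Plotting bogo-ops/s against reported CPU utilization across stressor methods is what exposes the non-linearity the post discusses, with hyperthreading and turbo both bending the curve.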
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/mwsLdPoEQBrSEKgRy/cpu-utilization-is-a-lie
---
Narrated by TYPE III AUDIO.
---Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

Sep 2, 2025 • 6min
“But Have They Engaged With The Arguments? [Linkpost]” by Noosphere89
There's an interestingly pernicious version of a selection effect that occurs in epistemology, one that can lead people into false claims: when people try to engage with a chain of arguments, they drop out at random steps, and past a few steps or so, those who believe all the arguments are left with a secure-feeling position that the arguments are right, and that anyone who objects to them is (insane/ridiculous/obviously trolling), no matter whether the claim is true: What's going wrong, I think, is something like this. People encounter uncommonly-believed propositions now and then, like “AI safety research is the most valuable use of philanthropic money and talent in the world” or “Sikhism is true”, and decide whether or not to investigate them further. If they decide to hear out a first round of arguments but don't find them compelling enough, they drop out of the process. [...] ---
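As a toy illustration of this dynamic (my own sketch, not anything from the linked post; the drop-out probability and population size are made up), the simulation below has people abandon an argument chain at each step with a fixed probability that has nothing to do with whether the claim is true, so the survivors of a long chain are, by construction, exactly the people who accepted every step.

```python
import random

def survivors(n_people: int, n_steps: int, p_drop: float, seed: int = 0) -> int:
    """Count how many people hear out all n_steps arguments, when each person
    independently drops out at each step with probability p_drop, regardless
    of whether the underlying claim is true."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_people):
        if all(rng.random() > p_drop for _ in range(n_steps)):
            count += 1
    return count

# The truth of the claim never enters the model, yet whoever is left after a
# long argument chain sees near-unanimous agreement among "those who engaged".
for steps in (1, 3, 5, 8):
    n = survivors(n_people=10_000, n_steps=steps, p_drop=0.5)
    print(f"{steps} argument steps: {n} of 10000 still engaged, all persuaded")
```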
First published:
September 2nd, 2025
Source:
https://www.lesswrong.com/posts/LLiZEnnh3kK3Qg7qf/but-have-they-engaged-with-the-arguments-linkpost
---
Narrated by TYPE III AUDIO.