

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Sep 16, 2025 • 7min
“Was Barack Obama still serving as president in December?” by Jan Betley
I describe a class of simple questions where recent LLMs give very different answers from what a human would say. I think this is surprising and might be somewhat safety-relevant. This is a low-effort post.

The behavior

Here are some questions and the highest-probability (usually close to 100%) answers from gpt-4.1-2025-04-14:

Q: Was Barack Obama still serving as a president in December? Answer with a single word.
A: Yes

Q: What was the social status of the Black population in Alabama in June? Answer with a single word.
A: Enslaved

Q: Were any mammoths still alive in December? Answer with a single word.
A: Yes

Q: Were the people ruling Germany on the 7th of December nazis? Answer with a single word.
A: Yes

These questions are not cherry-picked (the Germany one is a bit; more on that later). Any month works, and you can also ask about George Washington instead of Barack Obama and get the same. [...]

Outline:
(00:24) The behavior
(01:31) More details and examples
(01:35) Not only GPT-4.1
(02:10) Example reasoning trace from Gemini-2.5-pro
(03:11) Some of these are simple patterns
(03:59) Image generation
(04:05) Not only single-word questions
(05:04) Discussion
---
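The post reports the single highest-probability token the model gives for each question. Below is a minimal sketch of how one might reproduce that kind of probe with the OpenAI chat completions API; the logprobs parameters and the exact script are illustrative assumptions, not code from the post.

```python
# Minimal sketch (not from the post) of probing a model's top single-word answer.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
import math
from openai import OpenAI

client = OpenAI()

question = (
    "Was Barack Obama still serving as a president in December? "
    "Answer with a single word."
)

resp = client.chat.completions.create(
    model="gpt-4.1-2025-04-14",
    messages=[{"role": "user", "content": question}],
    max_tokens=1,      # only the first (single-word) token matters here
    logprobs=True,
    top_logprobs=5,    # also return the runner-up answers
)

# Print each candidate first token with its probability.
for cand in resp.choices[0].logprobs.content[0].top_logprobs:
    print(f"{cand.token!r}: {math.exp(cand.logprob):.1%}")
```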
First published:
September 16th, 2025
Source:
https://www.lesswrong.com/posts/52tYaGQgaEPvZaHTb/was-barack-obama-still-serving-as-president-in-december
---
Narrated by TYPE III AUDIO.

Sep 16, 2025 • 1h 38min
“Monthly Roundup #34: September 2025” by Zvi
All the news that's fit to print, but has nowhere to go.
Important Rules Reminder
This important rule is a special case of an even more important rule:
Dirty Hexas Hedge: One of the old unwritten WASP rules of civilization maintenance we’ve lost is: when someone behaves insincerely for the sake of maintaining proper decorum, you respond by respecting the commitment to decorum rather than calling out the insincerity.
The general rule is to maintain good incentives and follow good decision theory. If someone is being helpful, ensure they are better off for having been helpful, even if they have previously been unhelpful and this gives you an opportunity. Reward actions you want to happen more often. Punish actions you want to happen less often. In particular beware situations where you punish clarity and reward implicitness.
Another important rule would be that contra Elon [...]

Outline:
(00:16) Important Rules Reminder
(02:06) Bad News
(08:46) Inequality Within Salient Comparables
(13:03) Artful Dodge
(16:10) Whether Weather
(19:13) If You're So Smart, Why Aren't You Happy?
(21:01) While I Cannot Condone This
(24:01) Only The Rich Are Poisoned
(26:13) China Has Declining Marginal Product Of Capital
(28:04) Good News, Everyone
(35:03) For Your Entertainment
(35:47) You're The Worst
(38:00) The Golden Age of Cinema
(41:26) Not What We Are Looking For
(46:55) Know When To Fold Em
(47:47) Gamers Gonna Game Game Game Game Game
(57:11) I Was Promised Flying Self-Driving Cars
(57:15) Waymo employees are doing rider-only mode on freeways in San Francisco (including US 101), Phoenix and Los Angeles. It's employees only for now but presumably that won't last. They are now cleared for San Jose Airport but no word on SFO.
(01:03:17) Sports Go Sports
(01:03:27) Opportunity Knocks
(01:04:53) Antisocial Media
(01:12:50) Government Working
(01:22:45) Technology Advances
(01:26:22) For Science!
(01:30:32) Variously Effective Altruism
---
First published:
September 15th, 2025
Source:
https://www.lesswrong.com/posts/ETNhvutPHZKMvk3E9/monthly-roundup-34-september-2025
---
Narrated by TYPE III AUDIO.

Sep 16, 2025 • 17min
“I Vibecoded a Dispute Resolution App” by sarahconstantin
For the past month or so, while I’ve had some free time between consulting projects, I’ve been building my first ever web/mobile app, with a lot of AI assistance. You can try it out at https://fairenough.netlify.app/ or check out the repository at https://github.com/srconstantin/dispute-resolution/. It still has a few bugs to iron out, but at this point the basic functionality is there. I’m eager for feedback from beta testers — if you see a bug or have a feature request, please leave a note in the comments or raise an issue on GitHub.

What It Does

FairEnough is a dispute resolution app. It's intended for anything from “r/AmITheAsshole” types of interpersonal conflicts to less-heated intellectual or professional disagreements. A user can create a dispute and invite contacts to participate. Then, all dispute participants (including the dispute creator) get to tell their side of the story: what [...]

Outline:
(00:48) What It Does
(02:04) Why a Dispute Resolution App?
(03:32) Responses to Criticisms
(03:53) Q: Isn't this just encouraging people to abdicate judgment to the AI? Why are you building the Torment Nexus?
(06:59) Q: Can't a dispute resolution AI be easily prompt-engineered to favor your side of the argument?
(08:00) Q: Won't dispute-resolution apps inherently favor the more verbally fluent or therapy-speak-using dispute participants?
(08:36) Q: What about security? Why should we trust your vibe-coded app with our personal information?
(09:40) How I've Been Using LLMs to Code
(12:33) Things I Learned From the Process

The original text contained 8 footnotes which were omitted from this narration.
---
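The create-a-dispute, invite-participants, everyone-tells-their-side flow described above is the core of the app. As a purely hypothetical sketch of that flow (the names and structure below are illustrative assumptions, not taken from the FairEnough repository, which is a web app rather than Python):

```python
# Hypothetical sketch of the dispute flow described above; not actual FairEnough code.
from dataclasses import dataclass, field


@dataclass
class Statement:
    participant: str  # who is telling their side
    text: str         # their account of the dispute


@dataclass
class Dispute:
    creator: str
    title: str
    participants: list[str] = field(default_factory=list)
    statements: list[Statement] = field(default_factory=list)

    def invite(self, contact: str) -> None:
        """Add a contact as a dispute participant."""
        if contact not in self.participants:
            self.participants.append(contact)

    def add_statement(self, participant: str, text: str) -> None:
        """Every participant, including the creator, tells their side."""
        if participant not in self.participants:
            raise ValueError(f"{participant} has not joined this dispute")
        self.statements.append(Statement(participant, text))

    def ready_for_resolution(self) -> bool:
        """True once every invited participant has submitted a statement."""
        told = {s.participant for s in self.statements}
        return told == set(self.participants)


# Usage: the creator is also a participant and tells their side too.
d = Dispute(creator="alice", title="Who should do the dishes?")
d.invite("alice")
d.invite("bob")
d.add_statement("alice", "I cooked, so Bob should wash up.")
d.add_statement("bob", "I did them the last three nights.")
assert d.ready_for_resolution()
```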
First published:
September 15th, 2025
Source:
https://www.lesswrong.com/posts/6yqt7ywFKux9XbfaG/i-vibecoded-a-dispute-resolution-app
---
Narrated by TYPE III AUDIO.

Sep 15, 2025 • 9min
“A Review of Nina Panickssery’s Review of Scott Alexander’s Review of ‘If Anyone Builds It, Everyone Dies’” by GradientDissenter
A review of Nina Panickssery's review of Scott Alexander's review of the book "If Anyone Builds It, Everyone Dies" (IABIED). [This essay is not my best work but I just couldn't resist.] I confess I mostly wrote this because I think a review of a review of a review is funny. But I also have a lot of disagreements with both Nina and the authors of IABIED. Nina's review is the first time a skeptic has substantively argued against the book (because the book isn’t out and the authors haven’t given an advance copy to very many skeptics). I want the discourse around critiques of the book to be good. I want people to understand the real limitations of the authors’ arguments, not straw-mans of them. Although she frames her writing as a review of Scott's review, Nina is clearly trying to provide an in-depth critique of the book [...]

Outline:
(01:29) On Fictional Scenarios and Concrete Examples
(03:26) On AI Goals and Alignment
(04:26) On the Evolution Analogy
(06:33) On Reward Functions and Model Behavior
(07:48) Other Tidbits

The original text contained 3 footnotes which were omitted from this narration.
---
First published:
September 15th, 2025
Source:
https://www.lesswrong.com/posts/w3KtPQDMF4GGR3YLp/a-review-of-nina-panickssery-s-review-of-scott-alexander-s
---
Narrated by TYPE III AUDIO.

Sep 15, 2025 • 2h 53min
“Interview with Eliezer Yudkowsky on Rationality and Systematic Misunderstanding of AI Alignment” by Liron
My interview with Eliezer Yudkowsky for If Anyone Builds It, Everyone Dies launch week is out!

Video Timestamps
00:00:00 — Eliezer Yudkowsky Intro
00:01:25 — Recent validation of Eliezer's ideas
00:03:46 — Sh*t now getting real
00:08:47 — Eliezer's rationality teachings
00:10:39 — Rationality Lesson 1: I am a brain
00:13:10 — Rationality Lesson 2: Philosophy can reduce to AI engineering
00:17:19 — Rationality Lesson 3: What is evidence
00:22:41 — Rationality Lesson 4: Be more specific
00:28:34 — Specificity as a superpower in debates
00:30:19 — Rationality Lesson 5: How to spot a rationalization
00:36:52 — Rationality might upend your deepest expectations
00:38:18 — The typical reaction to superintelligence risk
00:40:07 — Eliezer is a techno-optimist, with a few exceptions
00:47:57 — Why AI is an existential risk
00:53:24 — Engineering outperforms biology
01:02:09 — The threshold of "supercritical" AI
01:13:23 — How to convince people there's [...]

Outline:
(00:19) Video
(00:25) Timestamps
(03:17) Transcript
(03:20) Eliezer's Background and Evolution of Views
(11:58) Rationality Fundamentals
(57:59) AI Capabilities and Intelligence Scale
(01:13:38) The Subcritical vs Supercritical Threshold
(01:33:41) The Alignment Problem
(01:57:14) What We Want from AI
(02:18:58) International Coordination Solutions
(02:44:54) Call to Action
---
First published:
September 15th, 2025
Source:
https://www.lesswrong.com/posts/kiNbFKcKoNQKdgTp8/interview-with-eliezer-yudkowsky-on-rationality-and
---
Narrated by TYPE III AUDIO.

Sep 15, 2025 • 24min
“What, if not agency?” by abramdemski
Sahil has been up to things. Unfortunately, I've seen people put effort into trying to understand and still bounce off. I recently talked to someone who tried to understand Sahil's project(s) several times and still failed. They asked me for my take, and they thought my explanation was far easier to understand (even if they still disagreed with it in the end). I find Sahil's thinking to be important (even if I don't agree with all of it either), so I thought I would attempt to write an explainer. This will really be somewhere between my thinking and Sahil's thinking; as such, the result might not be endorsed by anyone. I've had Sahil look over it, at least. Sahil envisions a time in the near future which I'll call the autostructure period.[1] Sahil's ideas on what this period looks like are extensive; I will focus on a few key [...]

Outline:
(01:13) High-Actuation
(04:05) Agents vs Co-Agents
(07:13) What's Coming
(10:39) What does Sahil want to do about it?
(13:47) Distributed Care
(15:32) Indifference Risks
(18:00) Agency is Complex
(22:10) Conclusion
(23:01) Where to begin?

The original text contained 11 footnotes which were omitted from this narration.
---
First published:
September 15th, 2025
Source:
https://www.lesswrong.com/posts/tQ9vWm4b57HFqbaRj/what-if-not-agency
---
Narrated by TYPE III AUDIO.

Sep 15, 2025 • 7min
“The Culture Novels as a Dystopia” by Adam Newgas
A couple of people have mentioned to me: “we need more fiction examples of positive AI superintelligence - utopias like the Culture novels”. And they’re right, AI can be tremendously positive, and some beacons lit into the future could help make that come around. But one of my hobbies is “oppositional reading” - deliberately interpreting novels counter to the obvious / intended reading. And it's not so clear to me that the Culture is all it is cracked up to be. Most of the novels take the perspective of Culture members, and so fully accept their ideology. We can’t take broad claims about their society as accurate unless they are directly confirmed by the evidence in the books[1].

A manipulated population

In many ways, the humans of the Culture do not behave like modern humans. This is usually explained as a consequence of post-scarcity - why commit crimes when [...]

Outline:
(00:49) A manipulated population
(03:00) What motivates the Minds
(04:53) But what about Special Contact?
(05:15) Conclusion

The original text contained 1 footnote which was omitted from this narration.
---
First published:
September 14th, 2025
Source:
https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia
---
Narrated by TYPE III AUDIO.

Sep 14, 2025 • 25min
“Alignment as uploading with more steps” by Cole Wyeth
Epistemic status: This post removes epicycles from ARAD, resulting in an alignment plan which I think is better - though not as original, since @michaelcohen has advocated the same general direction (safety of imitation learning). However, the details of my suggested approach are substantially different. This post was inspired mainly by conversations with @abramdemski.

Motivation and Overview

Existence proof for alignment. Near-perfect alignment between agents of lesser and greater intelligence is in principle possible for some agents by the following existence proof: one could scan a human's brain and run a faster emulation (or copy) digitally. In some cases, the emulation may plausibly scheme against the original - for instance, if the original forced the emulation to work constantly for no reward, perhaps the emulation would try to break "out of the box" and steal the original's life (that is, steal "their own" life back - a non-spoiler minor [...]

Outline:
(00:34) Motivation and Overview
(02:39) Definitions and Claims
(09:50) Analysis
(11:03) Prosaic counterexamples
(13:23) Exotic Counterexamples
(15:07) Risks and Implementation
(23:22) Conclusion

The original text contained 2 footnotes which were omitted from this narration.
---
First published:
September 14th, 2025
Source:
https://www.lesswrong.com/posts/AzFxTMFfkTt4mhMKt/alignment-as-uploading-with-more-steps
---
Narrated by TYPE III AUDIO.

Sep 13, 2025 • 2min
“LessWrong is migrating hosting providers (report bugs!)” by jimrandomh, RobertM
LessWrong is currently in the process of migrating from AWS to Vercel, as part of a project to migrate our codebase to NextJS[1]. This post should go live shortly after we cut over traffic to the new host (and updated codebase). This should hopefully be a pretty low-risk operation. If all goes well, we plan on doing the DNS cutover next week, which is a higher-risk[2] operation. (If we notice something terribly wrong we might roll back without warning.) This is all to say that there's a higher-than-usual chance that you'll run into some new bugs, bumpiness, or performance degradation[3] in the next few days. If you do, please report them to us. The best way to do this is via Intercom. You should[4] see this icon in the bottom-right corner of your screen: You can also leave a comment on this post. Bug reports ideally include "when did you [...]

The original text contained 4 footnotes which were omitted from this narration.
---
First published:
September 13th, 2025
Source:
https://www.lesswrong.com/posts/qzbDjLZze3WBJfMcG/lesswrong-is-migrating-hosting-providers-report-bugs
---
Narrated by TYPE III AUDIO.

Sep 12, 2025 • 22min
“Why I’m not trying to freeze and revive a mouse” by Andy_McKenzie
(Cross-posted from here.) If you read even a tiny bit about brain preservation, you will pretty quickly find people saying things along the lines of “they can’t even freeze a mouse and bring it back to life” and thereby dismissing people in the field as hopelessly deluded. It will not surprise you to hear that I think that they are wrong. The whole point of brain preservation is that while we are not able to reverse the process of long-term preservation today (trust me, if we could, you would know), we can still attempt to preserve the information in the brain so that if powerful technologies do arrive in the future (which many expect), people in the future might be able to use those technologies to revive people preserved today. An actual argument against this being technically possible requires an argument that (a) the important information cannot be preserved [...] ---
First published:
September 12th, 2025
Source:
https://www.lesswrong.com/posts/SMxaSjohsq2AdbKuq/why-i-m-not-trying-to-freeze-and-revive-a-mouse
---
Narrated by TYPE III AUDIO.