

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Aug 25, 2025 • 5min
“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth
A studio executive has no beliefs
That's the way of a studio system
We've bowed to every rear of all the studio chiefs
And you can bet your ass we've kissed 'em
Even the birds in the Hollywood hills
Know the secret to our success
It's those magical words that pay the bills
Yes, yes, yes, and yes!
“Don’t Say Yes Until I Finish Talking”, from SMASH

So there's this thing where someone talks to a large language model (LLM), and the LLM agrees with all of their ideas, tells them they’re brilliant, and generally gives positive feedback on everything they say. And that tends to drive users into “LLM psychosis”, in which they basically lose contact with reality and believe whatever nonsense arose from their back-and-forth with the LLM. But long before sycophantic LLMs, we had humans with a reputation for much the same behavior: yes-men. [...]
---
First published:
August 25th, 2025
Source:
https://www.lesswrong.com/posts/dX7gx7fezmtR55bMQ/before-llm-psychosis-there-was-yes-man-psychosis
---
Narrated by TYPE III AUDIO.

Aug 25, 2025 • 48min
“Arguments About AI Consciousness Seem Highly Motivated And At Best Overconfident” by Zvi
I happily admit I am deeply confused about consciousness.
I don’t feel confident I understand what it is, what causes it, which entities have it, what future entities might have it, to what extent it matters and why, or what we should do about these questions. This applies both in terms of finding the answers and what to do once we find them, including the implications for how worried we should be about building minds smarter and more capable than human minds.
Some people respond to this uncertainty by trying to investigate these questions further. Others seem highly confident that they know many or all of the answers we need, and in particular that we should act as if AIs will never be conscious or in any way carry moral weight.
Claims about all aspects of the future of AI are often highly motivated.
[...]
---
Outline:
(02:40) Asking The Wrong Questions
(08:47) How To Usefully Predict And Interact With an AI
(11:23) The Only Thing We Have To Fear
(14:45) Focused Fixation
(20:13) What Even Is Consciousness Anyway?
(23:03) Most People Who Think An AI Is Currently Conscious Are Thinking This For Unjustified Reasons
(24:42) Mustafa's Case Against AI Consciousness
(36:20) Where Are They Going Without Ever Knowing The Way
(37:38) When We Talk About AI Consciousness Things Get Weird
(41:03) We Don't Talk About AI Consciousness
(44:41) Some Things To Notice
---
First published:
August 25th, 2025
Source:
https://www.lesswrong.com/posts/F6Q3kC7ATjQpC4YAP/arguments-about-ai-consciousness-seem-highly-motivated-and
---
Narrated by TYPE III AUDIO.

Aug 25, 2025 • 6min
“The Best Materials To Build Any Intuition” by Algon
Many textbooks, tutorials or ... tapes leave out the ways people actually think about a subject, and leave you to fumble your way to your own picture. They don't even attempt to help you build intuitions. (Looking at you, Bourbaki.) This sucks. I love it when explanations try to tap into what I can touch, feel and see. Yes, you still have to put in work to understand why an idea's intuitive, but it leaves you much richer than when you started. I've occasionally found Luke's The Best Textbooks on Every Subject thread useful[1], and this tweet reminded me that I don't have an analogue for intuitive explanations. Time to fix that.
Rules
Share links to materials below! Share them frivolously! I will add the shared materials to the post. Here are the loose rules: Recall a material you've seen that conveys an intuition (or even [...]
---
Outline:
(00:49) Rules
(02:31) List of Materials
(02:39) Bayes Rule: Guide
(03:07) Dirac's belt trick
(03:40) Thinking and Explaining
(04:35) Proofs and Refutations
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
August 24th, 2025
Source:
https://www.lesswrong.com/posts/Z4LXJBuWKkkvj7NXH/the-best-materials-to-build-any-intuition
---
Narrated by TYPE III AUDIO.

Aug 25, 2025 • 5min
“Kids and Cleaning” by jefftk
Before having kids I thought teaching them to clean up would be similar to the rest of parenting: once they're physically able to do it you start practicing with them, and after a while they're independent and do it reliably. You invest time and effort up front, but it pays back reasonably quickly with benefits for both you and the kid. While we've (n=3) had good success in some areas (street safety, microwave usage, walking to school, tooth brushing, ...), tidying has not been one of these. Early on I tried a lot of getting them to clean up, but it was very slow, tended to dissolve into fights, and didn't seem to be getting much better over time. Instead, over time we've mostly moved to finding specific places where they can take on a bounded cleaning responsibility. The goal is to give them practice without overwhelming [...]
---
First published:
August 24th, 2025
Source:
https://www.lesswrong.com/posts/zmDx4cecdTmZNiYbq/kids-and-cleaning
---
Narrated by TYPE III AUDIO.

Aug 24, 2025 • 9min
“Futility Illusions” by silentbob
…or the it doesn’t make a difference anyway fallacy.
Improving Productivity is Futile
I once had a coaching call on some generic productivity topic along the lines of “I’m not getting done as much as I’d like to”. My hope was that we might identify ways for me to become more productive and get more done. The coach, however, very quickly narrowed in on figuring out what I typically work on in order to eliminate the least valuable things – also a good idea for sure, but this approach seemed a bit disappointing to me. I had the impression I already had a good selection of high-value things, and really only wanted to do more of them, rather than dropping some in favor of others. When I asked about this, he seemed to have a strong conviction that “getting more done” is futile – you can’t [...]
---
Outline:
(00:22) Improving Productivity is Futile
(02:12) Futility Everywhere?
(05:47) Futility is Rare
(07:26) Putting it all Together
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
August 23rd, 2025
Source:
https://www.lesswrong.com/posts/6HJckDZuj2j5KGdhY/futility-illusions
---
Narrated by TYPE III AUDIO.

Aug 24, 2025 • 1h 1min
“Notes on cooperating with unaligned AI” by Lukas Finnveden
These are some research notes on whether we could reduce AI takeover risk by cooperating with unaligned AIs. I think the best and most readable public writing on this topic is “Making deals with early schemers”, so if you haven't read that post, I recommend starting there. These notes were drafted before that post existed, and the content is significantly overlapping. Nevertheless, these notes do contain several points that aren’t in that post, and I think it could be helpful for people who are (thinking about) working in this area. Most of the novel content can be found in the sections on What do AIs want?, Payment structure, What I think should eventually happen, and the appendix BOTEC. (While the sections on “AI could do a lot to help us” and “Making credible promises” are mostly overlapping with “Making deals with early schemers”.) You can also see my recent [...]
---
Outline:
(01:05) Summary
(06:13) AIs could do a lot to help us
(10:24) What do AIs want?
(11:20) Non-consequentialist AI
(12:05) Short-term offers
(13:48) Long-term offers
(15:38) Unified AIs with ~linear returns to resources
(17:04) Uncoordinated AIs with ~linear returns to resources
(18:36) AIs with diminishing returns to resources
(22:10) Making credible promises
(25:38) Payment structure
(28:49) What I think should eventually happen
(29:30) AI companies should make AIs short-term offers
(32:22) AI companies should make long-term promises
(33:03) AI companies should commit to some honesty policy and abide by it
(33:21) AI companies should be creative when trying to communicate with AIs
(35:51) Conclusion
(38:06) Acknowledgments
(38:22) Appendix: BOTEC
(43:33) Full BOTEC
(46:06) BOTEC structure
(48:21) What's the probability that our efforts would make the AIs decide to cooperate with us?
(53:35) There are multiple AIs
(56:34) How valuable would AI cooperation be?
(58:43) Discount for misalignment
(59:03) Discount for lack of understanding
(01:00:19) Putting it together
The original text contained 23 footnotes which were omitted from this narration.
---
First published:
August 24th, 2025
Source:
https://www.lesswrong.com/posts/oLzoHA9ZtF2ygYgx4/notes-on-cooperating-with-unaligned-ai
---
Narrated by TYPE III AUDIO.

Aug 24, 2025 • 9min
“Shorter Tokens Are More Likely” by Brendan Long
I was thinking about LLM tokenization (as one does) and had a thought: We select the next output token for an LLM based on its likelihood, but shorter tokens are more likely. Why? Shorter common tokens are (correctly) learned to be higher-probability because they have the combined probability of any word they could complete. However, standard generation techniques will only consider a subset of probabilities and scale the largest probabilities. Both of these will take the highest probabilities and increase them further, meaning short/common tokens become significantly more likely to be generated just because they're shorter. I ran an experiment to investigate this, showing that the first-character distribution of words generated by nanoGPT[1] is similar regardless of tokenization without top-K or temperature scaling, but if we use common settings (top-K=200 and temperature=0.8), we can increase the likelihood that a word starts with 'c' from 4% up to 10% just [...]
---
Outline:
(01:36) Why?
(02:27) Top-K Sampling
(02:52) Temperature 1.0
(04:19) The Experiment
(05:42) Results
(06:47) Shakespeare Experiment
(07:08) Results
(08:15) Why Does It Matter?
The original text contained 4 footnotes which were omitted from this narration.
---
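For readers who want to see the mechanism the episode describes, here is a minimal sketch of top-K plus temperature sampling over a vector of logits. It is an illustration only, not the author's experiment code: the function name, the NumPy implementation, and the default settings (top_k=200, temperature=0.8, matching the values quoted above) are assumptions made for this example, and nanoGPT's actual sampler may differ in details.

```python
import numpy as np

def sample_next_token(logits, top_k=200, temperature=0.8, rng=None):
    """Sketch of top-K + temperature sampling from raw logits.

    Both steps push probability mass toward already-likely tokens:
    temperature < 1 sharpens the softmax, and top-K discards every token
    outside the K highest-scoring ones before renormalizing. Short, common
    tokens (which already carry the combined probability of every word
    they could start) benefit most from this reweighting.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    if top_k is not None and top_k < scaled.size:
        kth_largest = np.sort(scaled)[-top_k]                     # K-th largest logit
        scaled = np.where(scaled < kth_largest, -np.inf, scaled)  # crop to top-K
    probs = np.exp(scaled - scaled.max())                         # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(scaled.size, p=probs))                  # index of sampled token
```

With temperature below 1 and a hard top-K cutoff, any token already near the top of the distribution, which disproportionately includes short prefix tokens, ends up with an even larger share of the renormalized probability mass.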
First published:
August 24th, 2025
Source:
https://www.lesswrong.com/posts/iZPKuuWsDXAcQWbLJ/shorter-tokens-are-more-likely
---
Narrated by TYPE III AUDIO.

Aug 24, 2025 • 8min
“DeepSeek v3.1 Is Not Having a Moment” by Zvi
What if DeepSeek released a model claiming 66 on SWE and almost no one tried using it? Would it be any good? Would you be able to tell? Or would we get the shortest post of the year?
Why We Haven’t Seen v4 or r2
Why are we settling for v3.1, and why have we yet to see DeepSeek release v4 or r2?
Eleanor Olcott and Zijing Wu: Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei's chips, highlighting the limits of Beijing's push to replace US technology.
DeepSeek was encouraged by authorities to adopt Huawei's Ascend processor rather than use Nvidia's systems after releasing its R1 model in January, according to three people familiar with the matter.
But the Chinese start-up encountered persistent technical issues during its R2 training process using Ascend chips, prompting it to use Nvidia chips [...]
---
Outline:
(00:24) Why We Haven't Seen v4 or r2
(01:49) Introducing DeepSeek v3.1
(04:34) Signs of Life
(06:57) How Should We Update?
---
First published:
August 22nd, 2025
Source:
https://www.lesswrong.com/posts/gBnfwLqxcF4zyBE2J/deepseek-v3-1-is-not-having-a-moment
---
Narrated by TYPE III AUDIO.

Aug 23, 2025 • 9min
“Yudkowsky on ‘Don’t use p(doom)’” by Raemon
For a while, I kinda assumed Eliezer had basically coined the concept of p(Doom). Then I was surprised one day to hear him complaining about it being an antipattern he specifically thought was unhelpful and wished people would stop. He noted: "If you want to trade statements that will actually be informative about how you think things work, I'd suggest, "What is the minimum necessary and sufficient policy that you think would prevent extinction?"
Complete text of the corresponding X Thread:
I spent two decades yelling at nearby people to stop trading their insane made-up "AI timelines" at parties. Just as it seemed like I'd finally gotten them to listen, people invented "p(doom)" to trade around instead. I think it fills the same psychological role. If you want to trade statements that will actually be informative about how you think things work, I'd suggest, "What is the minimum necessary and [...]
---
Outline:
(03:32) Quick Takes
(06:50) Appendix: The Rob Bensinger Files
(08:15) ...but you only get five words a couple questions.
---
First published:
August 22nd, 2025
Source:
https://www.lesswrong.com/posts/4mBaixwf4k8jk7fG4/yudkowsky-on-don-t-use-p-doom
---
Narrated by TYPE III AUDIO.

Aug 22, 2025 • 52min
“Banning Said Achmiz (and broader thoughts on moderation)” by habryka
It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.[1] Today I am declaring defeat on that goal and am giving him a 3 year ban. What follows is an explanation of the models of moderation that convinced me this is a good idea, the history of past moderation actions we've taken for Said, and some amount of case law that I derive from these two. If you just want to know the moderation precedent, you can jump straight there. I think few people have done as much to shape the culture [...]
---
Outline:
(02:45) The sneer attractor
(04:51) The LinkedIn attractor
(07:19) How this relates to LessWrong
(11:38) Weaponized obtuseness and asymmetric effort ratios
(21:38) Concentration of force and the trouble with anonymous voting
(24:46) But why ban someone, can't people just ignore Said?
(30:25) Ok, but shouldn't there be some kind of justice process?
(36:28) So what options do I have if I disagree with this decision?
(38:28) An overview over past moderation discussion surrounding Said
(41:07) What does this mean for the rest of us?
(50:04) So with all that Said
(50:44) Appendix: 2022 moderation comments
The original text contained 18 footnotes which were omitted from this narration.
---
First published:
August 22nd, 2025
Source:
https://www.lesswrong.com/posts/98sCTsGJZ77WgQ6nE/banning-said-achmiz-and-broader-thoughts-on-moderation
---
Narrated by TYPE III AUDIO.