

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Sep 11, 2025 • 5min
“How I tell human and AI flash fiction apart” by DirectedEvolution
I got a perfect score on the recent AI writing Turing test. It was easy and I was confident in my predictions. My two main AI tipoffs are: cliché or arbitrary metaphors and imagery, jammed in to no purpose; and vague scenes, purposeless activity, a letdown at the end. My four main human tipoffs are: genuine humor and language play, including onomatopoeia and the visual appearance of the text on the page; specific, detailed cultural references; imagery that makes the scene specific and furthers the plot; and the ability to use subtext to drive a specific, meaningful plot. Stories 6-8 were easiest to categorize. Story 5 is full of egregious metaphors and imagery, an instant AI writing tipoff. Also, there's no plot payoff and the settings are vague. Story 6 has legit wordplay ("possession is nine tenths of the law") and it's culture-aware, cleverly referencing exorcism tropes. Easy [...] ---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/nAoXqeYPsTa4vsX4e/how-i-tell-human-and-ai-flash-fiction-apart
---
Narrated by TYPE III AUDIO.

Sep 11, 2025 • 5min
“AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy” by Katalina Hernandez, Kabir Kumar
[LessWrong Community event announcement: https://www.lesswrong.com/events/rRLPycsLdjFpZ4cKe/ai-safety-law-a-thon-we-need-more-technical-ai-safety] Many talented lawyers do not contribute to AI Safety, simply because they've never had a chance to work with AIS researchers or don’t know what the field entails. I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans: a hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks. From my time in the tech industry, my suspicion is that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms would focus on more "obvious" contractual considerations, IP rights or privacy clauses when giving advice to their clients, not on whether [...]
---
Outline:
(01:16) Who's coming?
(02:33) The technical AI Safety challenge: What to expect if you join
(03:40) Logistics
---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/xGNnBmtAL5F5vBKRf/ai-safety-law-a-thon-turning-alignment-risks-into-legal
---
Narrated by TYPE III AUDIO.

Sep 11, 2025 • 2min
“Apply to MATS 9.0!” by Ryan Kidd
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts. MATS has accelerated 450+ researchers in the past 3.5 years. 80% of MATS alumni who graduated before 2025 are working on AI safety/security, including 200+ at Anthropic, Google DeepMind, OpenAI, UK AISI, RAND, Redwood Research, METR, Apollo Research and more! 10% of MATS alumni who graduated before 2025 co-founded active AI safety/security start-ups, including Apollo Research, Atla AI, Timaeus, Leap Labs, Theorem Labs, Workshop Labs, and more! In 3.5 years, MATS researchers have coauthored 115+ arXiv papers, with 5200+ citations and an org h-index of 32. We are experts at accelerating awesome researchers with mentorship, compute, support, and community! MATS connects researchers with world-class mentors, including Sam Bowman, Nicholas Carlini, Neel Nanda, Ethan Perez, Stephen [...] ---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/WGFT4a37LmJM8f7i8/apply-to-mats-9-0
---
Narrated by TYPE III AUDIO.

Sep 11, 2025 • 53min
“Childhood and Education #14: The War On Education” by Zvi
The purported main purpose of school, and even of childhood, is educating children.
Many people are actively opposed to this idea.
Either they have other priorities that matter more, or sometimes they outright think that your child learning things is bad and you should feel bad for wanting that.
Some even say it openly. They actively and openly want to stop your child from learning things, and want to put your child into settings where they will not learn things. And they say that this is good.
Or they simply assert that the primary point of education is as a positional good, where what matters is your relative standing. And then they pretend this doesn’t imply they both should and are going around preventing children from learning.
In other places, we simply epically fail at education and don’t seem to care. Or education ‘experts’ [...]
---
Outline:
(01:07) An Introductory Example
(10:33) Total Failure
(11:35) Partial Failure
(13:38) The War on Algebra
(16:14) The War on Evaluation
(20:00) The War on Reading
(22:37) The War to Enslave Bright Students
(27:17) The War on Ability Grouping
(38:01) The Battle of North Carolina Ability Grouping
(44:01) The War on Reading Ability Grouping In Particular
(52:10) The War Will Continue
---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/aGFX5yydKXRC9RFD3/childhood-and-education-14-the-war-on-education
---
Narrated by TYPE III AUDIO.

Sep 11, 2025 • 2min
“AI Generated Podcast of the 2021 MIRI Conversations” by peterbarnett
I made an AI generated podcast of the 2021 MIRI Conversations. There are different voices for the different participants, to make it easier and more natural to follow along with. I tried to choose voices that were reasonably distinct and also not too annoying. This was done entirely in my personal capacity, and not as part of my job at MIRI.[1] I did this because I like listening to audio and there wasn’t a good audio version of the conversations.
Spotify: https://open.spotify.com/show/6I0YbfFQJUv0IX6EYD1tPe
RSS: https://anchor.fm/s/1082f3c7c/podcast/rss
Apple Podcasts: https://podcasts.apple.com/us/podcast/2021-miri-conversations/id1838863198
Pocket Casts: https://pca.st/biravt3t
I’ve found it really interesting listening through the conversations from 2021. Overall I think people were very prescient, especially given that this was way before the world woke up to the prospect of AGI. I also enjoyed listening to the Paul and Eliezer predictions, as I think we are starting to get to the [...] The original text contained 1 footnote which was omitted from this narration. ---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/YtdAdAmmHKPyYdSA8/ai-generated-podcast-of-the-2021-miri-conversations
---
Narrated by TYPE III AUDIO.

Sep 10, 2025 • 12min
“Large Language Models and the Critical Brain Hypothesis” by David Africa
Summary: I argue for a picture of developmental interpretability from neuroscience. A useful way to study and control frontier-scale language models is to treat their training as a sequence of physical phase transitions. Singular learning theory (SLT) rigorously characterizes singularities and learning dynamics in overparameterized models; developmental interpretability hypothesizes that large networks pass through phase transitions - regimes where behaviour changes qualitatively. The Critical Brain Hypothesis (CBH) proposes that biological neural systems may operate near phase transitions, and I claim that modern language models resemble this. Some transformer phenomena may be closely analogous: in-context learning and long-range dependency handling strengthen with scale and training distribution (sometimes appearing threshold-like), short fine-tunes or jailbreak prompts can substantially shift behaviour on certain evaluations, and “grokking” shows delayed generalization with qualitative similarities to critical slowing-down. Finally, I outline tests and implications under this speculative lens.
Phase Transitions
Developmental interpretability proposes that we use [...]
---
Outline:
(01:19) Phase Transitions
(02:22) The Critical Brain Hypothesis (CBH) and Alignment
(06:55) Critical Daydreaming
(08:33) Critical Surfing and some practical thoughts
(11:20) Collaborate with us
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
September 9th, 2025
Source:
https://www.lesswrong.com/posts/Ntdwc5nrPGZMicAWz/large-language-models-and-the-critical-brain-hypothesis-1
---
Narrated by TYPE III AUDIO.

Sep 10, 2025 • 14min
“The Thalamus: Heart of the Brain and Seat of Consciousness” by Shiva’s Right Foot
I've looked through LessWrong's archives for mentions of the thalamus and its role in consciousness and I'm a little surprised that it is something which hasn't been discussed here before. In recent years it has become very clear that the thalamus is vitally important in consciousness and very arguably is the seat of consciousness [1]. The main discussant of the thalamus on LessWrong is Steven Byrnes, and the concepts I'm laying out here generally align very well with Byrnes's writing, with perhaps the minor addition of asserting the thalamus as the chief originator of the groups of options that are then run through cortical circuits. I agree with the central importance Byrnes places on the flow of thought through cortico-basal ganglia-thalamo-cortical loops, though I place special emphasis on the thalamus. Integrated Information Theory proponent Giulio Tononi's mentor, Gerald Edelman, also recognized the central importance of the thalamus. In a [...] The original text contained 3 footnotes which were omitted from this narration. ---
First published:
September 10th, 2025
Source:
https://www.lesswrong.com/posts/bmE8oRBaN5vgWngRa/the-thalamus-heart-of-the-brain-and-seat-of-consciousness
---
Narrated by TYPE III AUDIO.

Sep 10, 2025 • 26min
“AIs will greatly change research engineering within AI companies well before AGI” by ryan_greenblatt
In response to my recent post arguing against above-trend progress from better RL environments, yet another argument for short(er) AGI timelines was raised to me:
Sure, at the current rate of progress, it would take a while (e.g. 5 years) to reach AIs that can automate research engineering within AI companies while being cheap and fast (superhuman coder). But we'll get large increases in the rate of AI progress well before superhuman coder due to AIs accelerating AI R&D. Probably we'll see big enough accelerations to really cut down on timelines to superhuman coder once AIs can somewhat reliably complete tasks that take top human research engineers 8 hours. After all, such AIs would probably be capable enough to greatly change research engineering in AI companies.
I'm skeptical of this argument: I think that it's unlikely (15%) that we see large speed ups (>2x faster overall [...]
---
Outline:
(03:48) AI progress speed ups don't seem large enough
(08:25) Interpolating between now and superhuman coder doesn't indicate large speed ups within 2 years
(12:56) What about speedups from mechanisms other than accelerating engineering?
(14:16) Other reasons to expect very short timelines
(14:33) Implications of a several year period prior to full automation of AI R&D where research engineering is greatly changed
(20:56) Appendix: sanity check of engineering speed up
(22:45) Appendix: How do speed ups to engineering translate to overall AI progress speed ups?
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
September 9th, 2025
Source:
https://www.lesswrong.com/posts/uRdJio8pnTqHpWa4t/ais-will-greatly-change-research-engineering-within-ai
---
Narrated by TYPE III AUDIO.

Sep 10, 2025 • 20min
“Obligated to Respond” by Duncan Sabien (Inactive)
And, a new take on guess culture vs ask culture
Author's note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it's only paid subscriptions that make it possible for me to write. If you find a coffee's worth of value in this or any of my other work, please consider signing up to support me; every bill I can pay with writing is a bill I don’t have to pay by doing other stuff instead. I also accept and greatly appreciate one-time donations of any size.
There's a piece of advice I see thrown around on social media a lot that goes something like: “It's just a comment! You don’t have to respond! You can just ignore it!” I think this advice is (a little bit) naïve, and the situation is generally [...]
---
Outline:
(00:10) And, a new take on guess culture vs ask culture
(07:10) On guess culture and ask culture
---
First published:
September 9th, 2025
Source:
https://www.lesswrong.com/posts/8jkB8ezncWD6ai86e/obligated-to-respond
---
Narrated by TYPE III AUDIO.

Sep 9, 2025 • 44min
“Yes, AI Continues To Make Rapid Progress, Including Towards AGI” by Zvi
That does not mean AI will successfully make it all the way to AGI and superintelligence, or that it will make it there soon or on any given time frame.
It does mean that AI progress, while it could easily have been even faster, has still been historically lightning fast. It has exceeded almost all expectations from more than a few years ago. And it means we cannot disregard the possibility of High Weirdness and profound transformation happening within a few years.
GPT-5 had a botched rollout and was only an incremental improvement over o3, o3-Pro and other existing OpenAI models, but was very much on trend and a very large improvement over the original GPT-4. Nor would one disappointing model from one lab have meant that major further progress must be years away.
Imminent AGI (in the central senses in which that term AGI used [...]
---
Outline:
(01:47) Why Do I Even Have To Say This?
(04:21) It Might Be Coming
(10:32) A Prediction That Implies AGI Soon
(12:48) GPT-5 Was On Trend
(15:04) The Myths Of Model Equality and Lock-In
(16:28) The Law of Good Enough
(18:05) Floor Versus Ceiling
(21:25) New York Times Platforms Gary Marcus Saying Gary Marcus Things
(28:25) Gary Marcus Does Not Actually Think AGI Is That Non-Imminent
(31:21) Can Versus Will Versus Always, Typical Versus Adversarial
(35:17) Counterpoint
(37:35) 'Superforecasters' Have Reliably Given Unrealistically Slow AI Projections Without Reasonable Justifications
(42:15) What To Expect When You're Expecting AI Progress
---
First published:
September 9th, 2025
Source:
https://www.lesswrong.com/posts/kYL7fH2Gc9M7igqyy/yes-ai-continues-to-make-rapid-progress-including-towards
---
Narrated by TYPE III AUDIO.