[Linkpost] “Emergent Introspective Awareness in Large Language Models” by Drake Thomas

LessWrong (Curated & Popular)

Why introspection in LLMs is hard to verify

Drake explains why conversation alone cannot distinguish genuine introspection from confabulation and motivates the experiments.
