“Tests of LLM introspection need to rule out causal bypassing” by Adam Morris, Dillon Plunkett

Implications for AI Safety and Generalization

They argue that grounded introspection, in which a model's self-reports are causally connected to its internal states, may generalize to novel contexts, whereas static, cached self-knowledge would not.
