LessWrong (30+ Karma)

“Evaluating honesty and lie detection techniques on a diverse suite of dishonest models” by Sam Marks, Johannes Treutlein, evhub, Fabien Roger

Study goals: truth serum and safety value

Narrator explains the hypothetical 'truth serum' for AIs and why honest models would aid AI safety and auditing.
