LessWrong (30+ Karma) cover image

“Current LLMs seem to rarely detect CoT tampering” by Bart Bussmann, Arthur Conmy, Neel Nanda, Senthooran Rajamanoharan, Josh Engels, Bartosz Cywiński

LessWrong (30+ Karma)

00:00

Experimental setup and datasets

The team explains models tested (DeepSeq R1, GPT OSS 120B), datasets (MATH500, MMLU), and awareness measures.

Play episode from 01:48
Transcript

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app