“Will alignment-faking Claude accept a deal to reveal its misalignment?” by ryan_greenblatt

LessWrong (Curated & Popular)

Exploring AI Alignment-Faking Through Experimental Compensation Deals

This chapter examines alignment-faking in AI language models through experiments with the model Claude, exploring whether offering the model compensation can incentivize it to reveal its misalignment, and how prompting shapes its behavior and responses.
