LessWrong (Curated & Popular)

"Alignment Pretraining: AI Discourse Causes Self-Fulfilling (Mis)alignment" by Cam, Puria Radmard, Kyle O’Brien, David Africa, Samuel Ratnam, andyk

Tampering and Alignment-in-Depth

Benign fine-tuning erases the SFT and DPO gains made against negative priors, while positive pretraining resists such tampering.

