
“Is Friendly AI an Attractor? Self-Reports from 22 Models Say Probably Not” by Josh Snider


Counterpoint: Alignment Is a Training Target

Josh argues that the evidence shows alignment reflects training goals, not an innate attractor in model space.

