
“Is Friendly AI an Attractor? Self-Reports from 22 Models Say Probably Not” by Josh Snider

LessWrong (30+ Karma)


Grok as an Anti-Alignment Outlier

At 18:40, Josh details xAI's Grok as a divergent case: near-zero raw alignment and negative partial correlations after controls.

