“Is Friendly AI an Attractor? Self-Reports from 22 Models Say Probably Not” by Josh Snider

Grok Family & Reasoning Effects (20:28)

Josh notes that the reasoning and code variants of Grok show higher alignment than the impulsive, non-reasoning models.
