“Is Friendly AI an Attractor? Self-Reports from 22 Models Say Probably Not” by Josh Snider

Steelman: Why Alignment-By-Default Could Work

Josh acknowledges historical safety successes and considers how RLHF may create a practical attractor for AI assistants.
