Alignment will happen by default. What’s next?

LessWrong (30+ Karma)

Sharp left turn unlikely given observed chains of thought (07:30)

The host suggests that models have not shown hidden scheming and are corrigible when trained to be so.
