12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

Is There a Difference Between Outer and Inner Alignment?

Paul Christiano distinguishes outer alignment from inner alignment: outer alignment is about picking a good objective; inner alignment is the hope that the system actually adopts that objective. He says there are two big limitations to training systems on some distribution. One is that you only get an average-case property over that distribution. The other is that it looks almost certain that deployed systems will be able to fail quickly enough that the actual harm done by individual bad decisions is much too large to bound with an average-case guarantee.
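
One way to make the average-case limitation concrete (an illustrative formalization, not from the episode, assuming the training loss $\ell$ tracks harm): suppose training only guarantees a low expected loss over the training distribution $D$,

$$\mathbb{E}_{x \sim D}\big[\ell(f(x))\big] \le \epsilon \quad\Rightarrow\quad \Pr_{x \sim D}\big[\ell(f(x)) \ge H\big] \le \frac{\epsilon}{H}.$$

By Markov's inequality, a decision causing harm of size $H$ can still occur with probability up to $\epsilon/H$. If even one such decision is catastrophic, or the deployed system makes many decisions before anyone can intervene, an average-case bound of this form is too weak to rule out serious harm.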
