12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

CHAPTER

How to Train a System Like This?

The main idea is to use objectives other than just a function of what it outputs. That is, they're not the supervised objective of how well its outputs match human outputs. The central case we're thinking about is a kind of mismatch between the way the AI most naturally seems to be thinking about what's happening (you don't like the way the AI is thinking about what's happening) and the way a human would think about what they're doing. And so most of the time has gone into ideas that basically take those consistency conditions. So saying, we expect that, like, when there is a bark, it's most likely…

