AXRP - the AI X-risk Research Podcast

12 - AI Existential Risk with Paul Christiano

Is It the Right Thing to Do in Computer Science?

I'm wondering what you think of these approaches, things that look more like outer alignment, more like trying to specify what a good objective is. I don't really know that much about the history of science, though, so I'm just guessing that that might be a good approach sometimes. So perhaps it's like: the human wants something. There's some human who has some desires, and they act a certain way because of those desires. And we use that to do some kind of inference. This might look like inverse reinforcement learning. A simple version of it might look like imitation learning. Anyway, there's then this further thing, an [inaudible]
