12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

How to Optimize a Reference Manual

We need some objective to actually optimize. The proposal for the objective is: give that reference manual to some humans, ask them to do the task, or to eventually break down the task, of predicting the next word of a web page. And then we can get an objective for this reference manual. So instead of optimizing your neural network by stochastic gradient descent in order to make good predictions, optimize that reference manual in order to cause the humans to make good predictions. It's not obvious that humans can't just do all the tasks you want to apply AI to. You could imagine a world where we're just applying AI to tasks where…
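The setup described above can be sketched as a toy optimization loop. This is a minimal, purely illustrative sketch, not Christiano's actual proposal: the `human_prediction_loss` function here is a hypothetical stand-in for real humans reading the manual and predicting next words, and the hill-climbing edit step stands in for whatever search procedure would actually be used over manuals.

```python
import random

def human_prediction_loss(manual, dataset):
    # Hypothetical stand-in for the real objective. In the actual proposal,
    # humans would read the manual and try to predict the next word of a web
    # page; here we just count how many target "rules" the manual is missing.
    return sum(1 for rule in dataset if rule not in manual)

def optimize_manual(dataset, steps=200, seed=0):
    # Optimize the manual itself, rather than network weights by SGD:
    # propose a small edit (appending a candidate rule), and keep it
    # whenever the simulated human prediction loss goes down.
    rng = random.Random(seed)
    manual = ""
    loss = human_prediction_loss(manual, dataset)
    for _ in range(steps):
        candidate = manual + " " + rng.choice(dataset)
        cand_loss = human_prediction_loss(candidate, dataset)
        if cand_loss < loss:
            manual, loss = candidate, cand_loss
    return manual, loss

rules = ["u follows q", "i before e", "plural adds s"]
manual, loss = optimize_manual(rules)
```

The point of the sketch is only the shape of the objective: the thing being scored and improved is the human-readable manual, and the score comes from how well predictors using it do, not from a differentiable model.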

