AXRP - the AI X-risk Research Podcast

2 - Learning Human Biases with Rohin Shah


CHAPTER

Learning the Planner and Reward Function - How to Predict the Highest Reward

In a tabular setting, if your reward function is a little bit off, then the optimal policy only gets a little bit less reward. If you don't make any assumptions at all, well, that lets you get around the impossibility results. You do have to make some assumptions to deal with it. But for the most part, you just mostly need to predict where the highest reward is in this grid world.
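The claim above can be sketched numerically. This is a minimal, hypothetical example (the reward values and dimensions are invented, not from the episode): in a one-step tabular setting, if a learned reward is within eps of the true reward everywhere, then acting greedily on the learned reward loses at most 2*eps of true reward per state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true tabular reward: 5 states x 3 actions.
true_r = rng.normal(size=(5, 3))
eps = 0.05
# A learned reward that is "a little bit off": within eps everywhere.
learned_r = true_r + rng.uniform(-eps, eps, size=true_r.shape)

# In this one-step setting, the optimal policy just takes the
# argmax action in each state.
pi_learned = learned_r.argmax(axis=1)
pi_true = true_r.argmax(axis=1)

# True reward achieved by each policy, averaged over states.
v_learned = true_r[np.arange(5), pi_learned].mean()
v_true = true_r[np.arange(5), pi_true].mean()

# Regret is bounded by 2*eps: the learned argmax can lose at most
# eps from overestimating its own action plus eps from
# underestimating the truly best action.
regret = v_true - v_learned
assert 0 <= regret <= 2 * eps
```

The bound follows per state: if a' maximizes the learned reward, then true_r[s, a'] >= learned_r[s, a'] - eps >= learned_r[s, a*] - eps >= true_r[s, a*] - 2*eps.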
