
Chelsea Finn: how to build AI that can keep up with an always changing world

The Robot Brains Podcast

CHAPTER

Reward Functions in Deep Reinforcement Learning

In reinforcement learning, the typical formulation is that an agent is asked to optimize a reward function. For example, the score in a game, or task completion for a robot. I've heard you say that you actually don't like reward functions. What do you want instead? You've done a lot of the leading work in deep reinforcement learning over the past several years.
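
As a rough illustration of the formulation described above, here is a minimal sketch of the standard RL loop, in which the environment returns a scalar reward at each step and the agent's objective is to maximize the cumulative total. The Gymnasium library and the CartPole-v1 environment are assumptions chosen for the example; neither is mentioned in the episode, and the random policy is only a placeholder.

```python
# Minimal sketch of the standard RL reward formulation:
# at each step the agent acts, and the environment returns a scalar
# reward that the agent is asked to optimize (e.g., a game score).
# Assumes the Gymnasium library; nothing here comes from the episode.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward              # the scalar signal RL optimizes
    done = terminated or truncated

print(f"episode return: {total_reward}")
env.close()
```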

