
Chelsea Finn: how to build AI that can keep up with an always changing world

The Robot Brains Podcast


Reward Functions in Deep Reinforcement Learning

In reinforcement learning, the typical formulation is that an agent is asked to optimize a reward function, for example the score in a game, or task completion for a robot. I've heard you say that you actually don't like reward functions. What do you want instead? You've done a lot of the leading work in deep reinforcement learning over the past several years.
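For context, the formulation being referenced is the standard reinforcement-learning objective: the agent seeks a policy that maximizes expected cumulative discounted reward. A minimal sketch in conventional notation (policy \(\pi\), reward function \(r\), discount factor \(\gamma\), trajectory \(\tau\)), not anything specific to this episode:

```latex
% Standard RL objective: find a policy \pi maximizing expected
% discounted cumulative reward over trajectories \tau = (s_0, a_0, s_1, ...).
% Notation is conventional textbook usage, not taken from the episode.
J(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[\, \sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t) \right]
```

The question in the excerpt is essentially about the hand-specified term \(r(s_t, a_t)\), such as a game score or a task-completion signal, and what might replace it.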
