Danijar Hafner 2

TalkRL: The Reinforcement Learning Podcast

The Importance of Objective Functions in Agent Design

The big question this is addressing is: what objective function should your agent optimize? And the objective function shouldn't just be a reward. You can have objective functions that depend not just on what the agent is seeing or receiving, but also on the agent's internal variables and its actions. The most complete way of doing that is to model your complete past trajectories.
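The distinction above can be sketched in code. The snippet below is a minimal illustration (not from the episode, and all names are hypothetical): a conventional objective sums an external reward, while a more general objective can score the whole past trajectory, including the agent's internal variables and its actions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    # One step of a trajectory: what the agent saw, an internal
    # variable (e.g. a belief or latent state), and the action taken.
    observation: float
    internal: float
    action: float
    reward: float

def reward_objective(trajectory: List[Step]) -> float:
    # Conventional RL objective: depends only on external rewards.
    return sum(step.reward for step in trajectory)

def trajectory_objective(trajectory: List[Step],
                         score: Callable[[Step], float]) -> float:
    # General objective: a score over the complete past trajectory,
    # which may depend on observations, internal variables, and actions.
    return sum(score(step) for step in trajectory)

traj = [Step(observation=1.0, internal=0.5, action=-1.0, reward=0.0),
        Step(observation=2.0, internal=0.25, action=1.0, reward=1.0)]

# Illustrative score mixing all three quantities (weights are arbitrary).
general = trajectory_objective(
    traj, lambda s: s.observation + s.internal - abs(s.action))
```

With `reward_objective` the agent only ever sees the reward channel; `trajectory_objective` makes explicit that the objective is a function of the trajectory itself, which is what allows objectives defined on internal variables and actions.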
