
Danijar Hafner

TalkRL: The Reinforcement Learning Podcast


The Importance of Objective Functions in Agent Design

The big question this is addressing is: what objective function should your agent optimize? And the objective function shouldn't just be a reward. You can have objective functions that depend not just on what the agent is seeing or receiving, but also on the agent's internal variables and its actions. The most complete way of doing that is to model the complete trajectories of the past.
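The idea above can be sketched in code. This is a minimal illustration, not anything from the episode: all function and variable names are assumptions. It shows an objective that scores an entire past trajectory and depends on the agent's observations, actions, and internal variables, rather than summing external rewards alone.

```python
import math
from collections import Counter

def trajectory_objective(observations, actions, internal_states):
    """Score a whole trajectory using more than external reward.

    Illustrative terms only:
    - novelty of observations (fraction of distinct observations seen),
    - entropy of the empirical action distribution (diverse behavior),
    - a small penalty on the magnitude of internal state variables.
    """
    # Novelty: how varied were the observations along the trajectory?
    obs_novelty = len(set(observations)) / len(observations)

    # Action entropy: empirical distribution over actions taken.
    counts = Counter(actions)
    total = len(actions)
    probs = [c / total for c in counts.values()]
    action_entropy = -sum(p * math.log(p) for p in probs)

    # Internal-variable term: mean absolute magnitude of internal state.
    internal_penalty = sum(
        abs(v) for state in internal_states for v in state
    ) / len(internal_states)

    return obs_novelty + action_entropy - 0.01 * internal_penalty
```

The point of the sketch is structural: because the function receives the full trajectory, any mix of extrinsic reward, intrinsic motivation, and internal-state terms can be expressed in one place, which a per-step scalar reward alone cannot.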
