Specification Gaming: The Flip Side of AI Ingenuity

AI Safety Fundamentals: Alignment

Chapter: The Importance of Learning the Reward Function

This chapter turns to why the reward function is often learned rather than hand-specified. Hand-crafted shaping rewards can change the optimal policy unless the shaping term is potential-based, so even well-intentioned shaping opens a route to specification gaming. Since it is often easier to evaluate whether an outcome has been achieved than to specify that outcome explicitly, one alternative is to learn a reward model from feedback. A learned reward model, however, can be misspecified for other reasons, such as poor generalization beyond its training distribution. A further class of specification gaming arises when the agent exploits bugs in the simulator rather than flaws in the specified reward.
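To make the shaping claim concrete: a shaping term of the form F(s, s') = γΦ(s') − Φ(s), for any potential function Φ over states, provably leaves the optimal policy unchanged (Ng, Harada and Russell, 1999). The sketch below checks this on a toy chain MDP; the environment, the potential function, and all constants are illustrative assumptions rather than anything from the episode.

```python
# Toy chain MDP: states 0..4, actions move left/right, and arriving at
# state 4 pays reward 1. Everything here is an illustrative assumption.
import numpy as np

GAMMA = 0.9
N_STATES = 5
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    """Deterministic dynamics: move along the chain, clipped at the ends."""
    return min(max(state + action, 0), N_STATES - 1)

def base_reward(state, action, next_state):
    """Original task reward: 1 for arriving at the goal state, else 0."""
    return 1.0 if next_state == N_STATES - 1 else 0.0

def potential(state):
    """Illustrative potential Phi(s): higher closer to the goal."""
    return float(state)

def shaped_reward(state, action, next_state):
    """Base reward plus the potential-based term F(s,s') = GAMMA*Phi(s') - Phi(s)."""
    return (base_reward(state, action, next_state)
            + GAMMA * potential(next_state) - potential(state))

def greedy_policy(reward_fn, iters=200):
    """Value iteration, then the greedy policy (0 = left, 1 = right)."""
    v = np.zeros(N_STATES)
    for _ in range(iters):
        q = np.array([[reward_fn(s, a, step(s, a)) + GAMMA * v[step(s, a)]
                       for a in ACTIONS] for s in range(N_STATES)])
        v = q.max(axis=1)
    return q.argmax(axis=1)

# The base and shaped rewards induce identical optimal policies.
print(greedy_policy(base_reward))    # [1 1 1 1 1]
print(greedy_policy(shaped_reward))  # [1 1 1 1 1]
```

Because the Φ(s) subtracted on leaving a state is repaid, discounted, on entering the next, the shaping contributions telescope along every trajectory, which is why potential-based shaping can speed up learning without changing which policy is optimal.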
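The point that outcomes are easier to evaluate than to specify is the usual motivation for learning the reward. As a minimal sketch, assuming a toy setup in which outcomes are feature vectors and an evaluator gives noisy pairwise preferences, the following fits a Bradley-Terry reward model by logistic regression; the features, noise model, and step size are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an outcome is a 3-dimensional feature vector, and the
# evaluator's true (hidden) utility is linear in those features.
true_w = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)  # parameters of the learned reward model

def pref_prob(w, a, b):
    """Bradley-Terry probability that outcome a is preferred to outcome b."""
    return 1.0 / (1.0 + np.exp(-(a - b) @ w))

for _ in range(5000):
    a, b = rng.normal(size=3), rng.normal(size=3)  # two sampled outcomes
    # Noisy preference label: logistic noise makes P(a preferred) Bradley-Terry.
    label = 1.0 if (a - b) @ true_w + rng.logistic() > 0 else 0.0
    # Gradient ascent on the preference log-likelihood (logistic regression).
    w += 0.05 * (label - pref_prob(w, a, b)) * (a - b)

# The learned reward recovers the direction of the true utility.
print(w / np.linalg.norm(w))
print(true_w / np.linalg.norm(true_w))
```

Note the failure mode the chapter flags: even a model that matches the evaluator's preferences on its training distribution can generalize poorly, assigning high reward to outcomes unlike anything it was trained on.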
