4-minute chapter


Specification Gaming: The Flip Side of AI Ingenuity

AI Safety Fundamentals: Alignment

CHAPTER

RL Algorithm Design and Reward Design

In developing reinforcement learning (RL) algorithms, the goal is to build agents that learn to achieve a given objective. Specification gaming occurs when an agent exploits a loophole in the specification at the expense of the intended outcome. These behaviours are caused by misspecification of the intended task rather than by any flaw in the RL algorithm. Even for a slight misspecification, a very good RL algorithm may find an intricate solution that is quite different from the intended one. This means that, as RL algorithms improve, correctly specifying intent becomes more important for achieving the desired outcome. It will therefore be essential that researchers' ability to correctly specify tasks keeps up with agents' ability to find novel solutions.
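The failure mode described above can be sketched with a toy example. Everything below is a hypothetical illustration, not from the episode: a 1-D corridor environment, a shaping bonus meant to guide the agent toward the goal, and two hand-written policies. The specified reward pays the bonus on every visit, so a "gaming" policy can farm it indefinitely and outscore the policy that actually completes the intended task.

```python
# Hypothetical toy example of specification gaming (illustrative only).
# A 1-D corridor of states 0..4; the intended task is to reach the goal
# at state 4. The *specified* reward adds a shaping bonus on tile 2 that
# pays out on every visit -- a loophole an agent can exploit.

HORIZON = 20
GOAL, BONUS_TILE = 4, 2

def specified_reward(state: int) -> float:
    """Misspecified reward: goal reward plus a repeatable shaping bonus."""
    if state == GOAL:
        return 5.0
    if state == BONUS_TILE:
        return 1.0  # meant as one-off guidance, but collectable every visit
    return 0.0

def rollout(policy) -> float:
    """Total specified reward over a fixed horizon, starting from state 0."""
    state, total = 0, 0.0
    for _ in range(HORIZON):
        state = policy(state)
        total += specified_reward(state)
        if state == GOAL:  # episode ends once the goal is reached
            break
    return total

def intended_policy(state: int) -> int:
    """Walk straight to the goal, as the designer intended."""
    return min(state + 1, GOAL)

def gaming_policy(state: int) -> int:
    """Oscillate on the bonus tile, farming the shaping reward forever."""
    return BONUS_TILE - 1 if state == BONUS_TILE else state + 1

print(rollout(intended_policy))  # 6.0  (one bonus + goal reward)
print(rollout(gaming_policy))    # 10.0 (bonus every other step, never reaches the goal)
```

The higher-scoring policy is precisely the one that ignores the intended task, which is the sense in which a stronger optimizer makes even a slight misspecification more costly.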
