2min chapter

#92 – Brian Christian on the alignment problem

80,000 Hours Podcast

CHAPTER

Rewarding States, Not the Actions of the Agent

There has been a lot of theoretical work, including by Stuart Russell himself in the nineties, on how you can modify a reward function such that you won't change the optimal policy. So really, what you're rewarding is the agent's position, not their behavior. I think this is a very deep result that has implications not just in AI but for thinking about human incentives and parenting and things like this. It's hard to actually develop a reward that prevents hacking or misbehavior completely.
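A minimal sketch, assuming the result alluded to here is potential-based reward shaping (Ng, Harada & Russell, 1999): any extra reward of the form gamma * Phi(s') - Phi(s), where Phi is a function of the state alone (the agent's "position") rather than of its actions, provably leaves the optimal policy unchanged. The goal state, discount factor, and values below are illustrative, not from the episode.

GOAL = 10      # illustrative goal state (hypothetical)
GAMMA = 0.99   # illustrative discount factor

def potential(state):
    # Phi(s): any function of the state alone -- "the agent's position, not their behavior".
    # Here, states closer to the goal get higher potential.
    return -abs(state - GOAL)

def shaped_reward(reward, state, next_state):
    # r'(s, a, s') = r(s, a, s') + GAMMA * Phi(s') - Phi(s)
    # The added term telescopes along any trajectory, so from a given start state
    # every policy's value shifts by the same constant and the optimal policy is unchanged.
    return reward + GAMMA * potential(next_state) - potential(state)

# Example: moving one step closer to the goal earns a positive shaping bonus,
# without changing which behavior is ultimately optimal.
print(shaped_reward(0.0, state=3, next_state=4))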
