3min chapter

80,000 Hours Podcast cover image

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

80,000 Hours Podcast

CHAPTER

The Evolution of the Neural Network

In most realistic training setups we could imagine we're actively rewarding the model sometimes for doing Y for doing the lying. I think in fact it's worse than that I think the policy sometimes lie will get accidentally rewarded. So one example might be suppose you're getting your model to write some code for you and you give it some kind of computation budget to run experiments and you reward it based on like how cheaply were these experiments run and then how good is the resulting code. If the model is able to use a lot more computation surreptitiously without letting you realize that it actually spent this computation by attributing the budget to some other team that you're not paying attention to or siph

00:00

Get the Snipd
podcast app

Unlock the knowledge in podcasts with the podcast player of the future.
App store bannerPlay store banner

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode

Save any
moment

Hear something you like? Tap your headphones to save it with AI-generated key takeaways

Share
& Export

Send highlights to Twitter, WhatsApp or export them to Notion, Readwise & more

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode