3min chapter

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

80,000 Hours Podcast

CHAPTER

How to Assess How Far Away a Model Is From Passing the Test

To pull off the plan, would the model have to produce a constant series of sensible actions, or would a single bad action trip it up? What you're trying to assess is how far away the model is from being able to do this autonomously. So they're doing something that is, qualitatively speaking, tracking how often the model went off the rails and how often it had to be put back on track by a human. And that's something you can see decreasing over time.
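
The signal described here, how often a human has to step in and put the model back on track, can be tracked quantitatively. Below is a minimal, hypothetical sketch (not from the episode) of how one might log and compare that intervention rate across model versions; all class and function names are illustrative assumptions.

```python
# Hypothetical sketch: quantify "how often did a human have to put the
# model back on track" across autonomous task attempts.
from dataclasses import dataclass
from typing import List


@dataclass
class TaskAttempt:
    """One autonomous run of a model on a long-horizon task (illustrative)."""
    actions_taken: int        # total actions the model produced
    human_interventions: int  # times a human had to correct course
    succeeded: bool           # whether the task was eventually completed


def intervention_rate(attempts: List[TaskAttempt]) -> float:
    """Average number of human corrections per 100 model actions."""
    total_actions = sum(a.actions_taken for a in attempts)
    total_interventions = sum(a.human_interventions for a in attempts)
    if total_actions == 0:
        return 0.0
    return 100.0 * total_interventions / total_actions


# Comparing successive model versions: the trend to watch is this number
# falling toward zero as the model gets closer to full autonomy.
old_model = [TaskAttempt(200, 12, True), TaskAttempt(150, 9, False)]
new_model = [TaskAttempt(210, 4, True), TaskAttempt(180, 3, True)]

print(f"old model: {intervention_rate(old_model):.1f} corrections per 100 actions")
print(f"new model: {intervention_rate(new_model):.1f} corrections per 100 actions")
```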
