2-minute chapter

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

80,000 Hours Podcast

CHAPTER

Reinforcement Learning From Human Feedback

In all of these cases, humans are only reviewing a really tiny fraction of what's happening. So even in the outcomes-based case, you're not looking at the outcomes most of the time. Just sometimes you sit at your desk, you're told your AI system did this and then this happened, and then you think about it for 10 minutes and decide whether you like it. Similarly, with the plan-making AI system, the AI system is making dozens and dozens of plans a day and sending them to the other AI system that's executing them. It would just be far too slow to look at everything.
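
As a rough illustration of the sampling problem described here, below is a minimal Python sketch of a review loop in which only a small random fraction of a planner's outputs ever reaches a human reviewer. All names, fractions, and functions are hypothetical stand-ins, not anything from the episode or a real system.

```python
import random

# Hypothetical illustration: a plan-making AI emits far more plans per day
# than humans can inspect, so only a small random fraction is sampled for
# human review. Everything here is made up for illustration.

REVIEW_FRACTION = 0.02  # humans see roughly 2% of outputs


def generate_plans(n):
    """Stand-in for the plan-making AI system's daily output."""
    return [f"plan-{i}" for i in range(n)]


def human_review(plan):
    """Stand-in for a human spending ~10 minutes judging one outcome."""
    return random.random() < 0.9  # most sampled plans look fine to the reviewer


def run_day(num_plans=1000):
    plans = generate_plans(num_plans)
    sampled = [p for p in plans if random.random() < REVIEW_FRACTION]
    feedback = {p: human_review(p) for p in sampled}
    # The vast majority of plans are executed without any human ever looking
    # at them; whatever training signal exists comes only from this subset.
    return len(plans), feedback


if __name__ == "__main__":
    total, feedback = run_day()
    print(f"{total} plans generated, {len(feedback)} reviewed by a human")
```

The point of the sketch is simply the ratio: the reviewed subset is tiny relative to everything the system does, so the feedback signal is sparse.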
