Preventing an AI-related catastrophe (Article)

80,000 Hours Podcast

CHAPTER

Controlling the Objectives of AI Systems

In general, we might expect that even if a proxy appears to correlate well with successful outcomes, it might not do so when that proxy is optimized for. We'll look at a more specific example of how problems with proxies could lead to an existential catastrophe here. For more on the specific difficulty of controlling the objectives given to deep neural networks, trained using self-supervised learning and reinforcement learning, we recommend OpenAI governance researcher Richard Ngo's discussion of how realistic training processes lead to the development of misaligned goals.
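
To make the proxy point concrete, here is a minimal toy sketch (not from the article; the `true_objective` and `proxy` functions and all constants are invented for illustration). It shows a proxy that correlates strongly with the true objective on typical inputs, yet an optimizer that maximizes the proxy ends up where the true objective is close to zero.

```python
import numpy as np

# Toy illustration: a "true" objective and a proxy score that track each
# other on typical inputs, but come apart once the proxy is optimized hard.
rng = np.random.default_rng(0)

def true_objective(x):
    # What we actually care about: peaks at x = 1 and falls off smoothly.
    return np.exp(-(x - 1.0) ** 2)

def proxy(x):
    # A measurable stand-in: matches the true objective near typical inputs,
    # but keeps rewarding ever-larger x.
    return np.exp(-(x - 1.0) ** 2) + 0.05 * x

# On typical (randomly sampled) inputs the two scores correlate strongly...
xs = rng.normal(loc=1.0, scale=1.0, size=1000)
corr = np.corrcoef(true_objective(xs), proxy(xs))[0, 1]
print(f"correlation on random inputs: {corr:.2f}")

# ...but an optimizer that pushes the proxy as high as it can lands at an
# extreme x where the true objective is essentially zero.
candidates = np.linspace(-10, 100, 10_000)
x_star = candidates[np.argmax(proxy(candidates))]
print(f"proxy-optimal x: {x_star:.1f}, "
      f"true objective there: {true_objective(x_star):.4f}")
```

The numbers here are arbitrary; the point is only that a high correlation measured under normal conditions says little about what happens under strong optimization pressure on the proxy.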
