
Bonus: Preventing an AI-Related Catastrophe

Hear This Idea

Why We Should Not Give AI Systems Bad Goals

Some of the more cartoonish illustrations of AI risk involve giving the AI goals that are clearly not what we really want. For example, producing as many paperclips as possible is no one's real overriding goal. So it might seem like we could easily avoid the problems discussed above by making sure to give AI systems only goals we actually want achieved. But we also gave a few reasons above why controlling these goals could be hard. In short, the measurable proxies used to specify goals can come apart from what we really want, and modern ML systems' goals emerge implicitly through training rather than being explicitly programmed. There's also a concern that even if one group succeeds in giving an AI only correct goals, other groups might not be so careful.
