Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

Introduction

This is an audio version of "Counterarguments to the basic AI x-risk case". The basic case it responds to runs roughly as follows: if superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights. Even if humanity found acceptable goals, giving a powerful AI system any specific goals appears to be hard: we don't know of any procedure for doing so, and we have theoretical reasons to expect that AI systems produced through machine learning training will generally end up with goals other than those they were trained for.
