Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

How to Align AI to Human-Like Goals

There are many superintelligent goal-directed AI systems around. They are trained to have human-like goals, but we know that their training is imperfect, and none of them has goals exactly like those presented in training. Still, if you just heard about a particular system's intentions, you wouldn't be able to guess whether it was an AI or a human. And things are not obviously heading in a direction less broadly in line with human goals than when humans were in charge.
