
Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)


The Argument for AI Safety

Key concepts such as control, power, and alignment with human values all seem vague. The argument overall also proves too much: the same reasoning applies to corporations, which are goal-directed, collectively superhuman, and imperfectly aligned with human values, yet have not destroyed humanity. At the same time, finding useful goals that aren't extinction-level bad appears to be hard: we don't have a way to usefully point at human goals, and divergences from human goals seem likely to produce goal-directed systems whose goals are in intense conflict with human goals.

