Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

Arguments for AI Risk

If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad. This is supported by at least one of the following being true:

1. It's hard to find goals that aren't extinction-level bad.
2. Superhuman AI would gradually come to control the future via accruing power and resources. Power and resources would be more available to the AI systems than to humans on average, because of the AI having far greater intelligence.

A "gap" is not necessarily unfillable, and may have been filled in any of the countless writings on this topic that I haven't read.
