Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

The Importance of Intelligence in Goals

Trying to take over the universe as a sub-step is entirely laughable for almost any human goal. Heading: unclear that many goals realistically incentivise taking over the universe. The main thing stopping them from winning is that their position as psychopaths bent on taking power for incredibly pointless ends is widely understood. All they have to do is annihilate humanity, and they are way better positioned to do that than I am.
