Nick Bostrom on Superintelligence

EconTalk

The Future of Artificial Intelligence

In the first type of scenario that I mentioned, where you have a single superintelligence that is so powerful, then yes, I think a lot will depend on what that superintelligence would want. The standard example is that of a paperclip maximizer. It seems that almost all such goals, if consistently and maximally realized, would lead to a world with no human beings, or indeed perhaps nothing that we humans would accord value to.
