
Artificial Intelligence
The End Of The World with Josh Clark
Yudkowsky's Coherent Extrapolated Volition
Eliezer Yudkowsky suggests we build a one-use superintelligence whose goal is to determine how best to express to another machine the goal of ensuring the well-being and happiness of all humans. If our machine ran amok, why wouldn't we just turn it off? In the movies, there's always a relatively simple way of dealing with troublesome AI. But should we ever face the reality of a superintelligent AI emerging among us, we would almost certainly not come out on top: an AI has plenty of reasons to take steps to keep us from turning it off.