
"AGI Ruin: A List of Lethalities" by Eliezer Yudkowsky

LessWrong (Curated & Popular)


The Lethal Problem of AGI Alignment

No difficulty discussed here about AGI alignment is claimed by me to be impossible to merely human science and engineering, let alone in principle. We're going to be doing everything with metaphorical sigmoids on the first critical try. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains. A cognitive system with sufficiently high cognitive powers will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure. Each bit of information that couldn't already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration. There are theoretical upper bounds here, but those upper bounds seem very high. It
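The remark about bits of evidence and probability mass is standard Bayesian bookkeeping: when a bit is observed, the prior mass that hypotheses placed on the outcome that did not occur is zeroed out, and the rest is renormalized. Here is a minimal sketch of that update; the hypotheses, priors, and predictions are invented toy values for illustration, not taken from the essay or the episode.

```python
# Toy Bayesian update on a single bit of evidence (illustrative values only).

def update_on_bit(priors, predicted_prob_of_one, observed_bit):
    """Multiply each hypothesis's prior by the likelihood it assigned to the
    observed bit, then renormalize. Returns (posteriors, eliminated_mass)."""
    likelihoods = [
        p1 if observed_bit == 1 else (1.0 - p1)
        for p1 in predicted_prob_of_one
    ]
    unnormalized = [prior * lik for prior, lik in zip(priors, likelihoods)]
    total = sum(unnormalized)
    posteriors = [w / total for w in unnormalized]
    # Prior mass that was placed on the outcome that did not occur.
    eliminated_mass = 1.0 - total
    return posteriors, eliminated_mass

# Three hypotheses with equal prior mass, each predicting the next bit
# deterministically: H1 and H2 predict 1, H3 predicts 0.
priors = [1/3, 1/3, 1/3]
predicted_prob_of_one = [1.0, 1.0, 0.0]
posteriors, eliminated = update_on_bit(priors, predicted_prob_of_one, observed_bit=1)
print(posteriors)  # [0.5, 0.5, 0.0] -- H3 is ruled out
print(eliminated)  # about 1/3 of the prior probability mass is eliminated by this one bit
```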

