"AGI Ruin: A List of Lethalities" by Eliezer Yudkowsky

LessWrong (Curated & Popular)

Facebook AI Research

The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power become able to build AGI and destroy the world. Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit; it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth. The current state of this cooperation, to have every big actor refrain from doing the stupid thing, is that some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety.

