"Where I agree and disagree with Eliezer" by Paul Christiano

The Cinematic Universe of Eliezer's Stories of Doom

Many research problems in other areas are chosen for tractability, or for being just barely out of reach. Alignment isn't like that; it was chosen for its importance, and there is no one ensuring the game is fair. Still, we will be able to learn a lot about alignment from experiments and trial and error. I think we can get a lot of feedback about what works and deploy a more traditional R&D methodology.

The cinematic universe of Eliezer's stories of doom doesn't seem to me like it holds together. By the time we have AI systems that can overpower humans decisively with nanotech, we have other AI systems that will either kill humans in more boring ways, or else radically…
