"Where I agree and disagree with Eliezer" by Paul Christiano

LessWrong (Curated & Popular)

Introduction

Powerful AI systems have a good chance of deliberately and irreversibly disempowering humanity. It's wishful thinking to look at possible stories of doom and say, "We wouldn't let that happen." Humanity is fully capable of messing up even very basic challenges, especially if they are novel. There are relatively few researchers who are effectively focused on the technical problems most relevant to existential risk from alignment failures. People seem to consistently round this risk down to more boring stories that fit better with their narratives about the world. The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from having a large impact on the world to unrecognizably transforming it.
