"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

LessWrong (Curated & Popular)

The Problems With AGI Proliferation

We can't just build a very weak system which is less dangerous because it is so weak and declare victory. We need at least one system strong enough to help in some pivotal act, unless we find some way to globally limit AGI proliferation without the help of STEM-level AGI. Point 10: powerful AGIs doing dangerous things that will kill you if misaligned must have an alignment property that generalized far out of distribution from safer building or training operations that didn't kill you. Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability. Point 15: fast capability gains seem likely and may break lots of previous alignment-required invariants simultaneously.
