"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

The Difficulty of AI Alignment

Nate Soares: It just looks too difficult for humanity to do, under time pressure, given anything remotely like our current technical understanding. If cognitive machinery doesn't generalise far out of the distribution where you did tons of training, it can't solve problems on the order of "build nanotechnology". Nate Soares gives his views in "Ensuring smarter-than-human intelligence has a positive outcome", linked here.
