LessWrong (Curated & Popular)

"An artificially structured argument for expecting AGI ruin" by Rob Bensinger

The Importance of Goals in AI

Microscope AI involves using transparency tools to verify that the model isn't performing any optimization. It's extremely unlikely that we'll be able to get major new scientific or predictive insights from an AI without it doing any optimizing. If operators have enough visibility into the AI's mind to get it to think useful and relevant thoughts at all, then they may be able to avoid microscope AI approaches. In real life, however, it's very unlikely that we'll have that level of mastery of the first STEM-level AGI systems.
