
Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)


The Importance of Utility Maximization in the AI Economy

There are not that many systems doing something like utility maximization in the new AI economy. It is unclear whether a particular qualitatively identified force for goal-directedness will cause disaster within a particular time. Incoherent AIs are never observed making themselves more coherent, and training has never unexpectedly produced an agent. There are lots of vaguely agentic things, but they don't pose much of a problem. Some amount of AI divergence from your own values is probably broadly fine, that is, not worse than what you should otherwise expect without AI.

