
Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)


The Evolution of Utility Maximizers in AI

There is a large space of systems which strongly increase the chance of some desirable objective O occurring without, for instance, searching out novel ways of making O occur, or modifying themselves to be more consistently O-maximizing. Call these weak pseudo-agents. Humans may not be very far along this spectrum, but they seem enough like utility maximizers already to be alarming. To the extent that any kind of goal-directedness is incentivized in AI systems, it is not clear that economic incentives favor the far end of this spectrum over weak pseudo-agency.
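As a purely illustrative sketch (not from the post itself, and with all names hypothetical), the distinction might be cashed out roughly like this: a weak pseudo-agent follows a fixed heuristic that reliably raises the probability of O, while a stronger O-maximizer searches over plans for whatever action best brings O about.

```python
# Illustrative toy contrast on a gridworld where the objective O is
# "the agent reaches the goal cell". Nothing here comes from the post.
import itertools

GOAL = (3, 3)
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}


def step(pos, move):
    dx, dy = MOVES[move]
    return (pos[0] + dx, pos[1] + dy)


def weak_pseudo_agent(pos):
    """Fixed heuristic that raises the chance of reaching GOAL: step
    greedily toward it, never searching for novel routes or modifying
    its own policy."""
    if pos[0] != GOAL[0]:
        return "right" if pos[0] < GOAL[0] else "left"
    if pos[1] != GOAL[1]:
        return "up" if pos[1] < GOAL[1] else "down"
    return "up"  # arbitrary choice once already at the goal


def o_maximizer(pos, horizon=4):
    """Searches over every action sequence up to `horizon` steps and
    returns the first action of a plan ending nearest GOAL -- the kind
    of open-ended search the weak pseudo-agent never performs."""
    def dist(p):
        return abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])

    best_first, best_score = None, float("inf")
    for plan in itertools.product(MOVES, repeat=horizon):
        p = pos
        for move in plan:
            p = step(p, move)
        if dist(p) < best_score:
            best_first, best_score = plan[0], dist(p)
    return best_first


if __name__ == "__main__":
    print(weak_pseudo_agent((0, 0)))  # 'right'
    print(o_maximizer((0, 0)))        # first move of a 4-step plan ending nearest GOAL
```

Both agents push the world toward O here, but only the second one would, given a richer action space, discover unanticipated routes to O; the argument in the excerpt is that economic incentives may be satisfied by the first kind of system.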

