Counterarguments to the basic AI x-risk case

LessWrong (Curated & Popular)

Utility Maximization: A Concept of Goal-Directedness

One well-defined concept of goal-directedness is utility maximization: always doing whatever maximizes a particular utility function. If you want things to go a certain way, you have reason to take control of anything that gives you any leverage over that, and this puts you in serious conflict with anyone else whose goals are also sensitive to how resources are used. Call machines that push the world toward particular goals but are not utility maximizers "pseudo-agents".
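
A rough formalization, with notation that is illustrative rather than from the episode: a utility maximizer with utility function U over outcomes selects, in each state s, the action

a^* = \arg\max_{a \in \mathcal{A}} \mathbb{E}\left[ U(s') \mid s, a \right],

i.e. whichever available action has the highest expected utility over the resulting states s'. A pseudo-agent, by contrast, reliably pushes the world toward its goal without choosing the expectation-maximizing action at every step.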
