
8 - Assistance Games with Dylan Hadfield-Menell
AXRP - the AI X-risk Research Podcast
The Goal Achievement Component of Intelligence Is Maximizing Expected Utility
I could certainly see that perhaps some of the intuitive reasons for why I would want risk aversion have to do with my statement earlier about utilities: the goal achievement part of intelligence is smaller than we think it is. But if you imagine that the goal achievement component is one part of the system that's going to be working with representations and abstractions of the real world, then planning conservatively is probably better in expectation. There's also this, um, the initial model of the world that, like, bears really heavily on this analysis.
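
This isn't from the episode, but here is a minimal sketch of that last point, under an assumed setup: a planner that only sees noisy, per-action estimates of utility (its abstraction of the world). A planner that discounts the actions it is less certain about can end up with higher true expected utility than one that naively maximizes under its estimated model. The function and parameter names (one_trial, penalty, the noise ranges) are illustrative assumptions, not anything discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n_actions=50, penalty=1.0):
    """One decision problem: the planner only sees noisy estimates of each action's utility."""
    true_means = rng.normal(0.0, 1.0, n_actions)            # what the world actually pays out, on average
    noise = rng.uniform(0.1, 2.0, n_actions)                # fidelity of the planner's abstraction, per action
    estimates = true_means + rng.normal(size=n_actions) * noise

    naive = np.argmax(estimates)                             # maximize utility under the estimated model
    conservative = np.argmax(estimates - penalty * noise)    # discount actions the model is less sure about

    return true_means[naive], true_means[conservative]

results = np.array([one_trial() for _ in range(5000)])
print("naive planner, average true utility:        %.3f" % results[:, 0].mean())
print("conservative planner, average true utility: %.3f" % results[:, 1].mean())
```

In this toy setup the naive argmax tends to select actions whose estimates were inflated by noise (the optimizer's curse), which is one way to read "planning conservatively is probably better in expectation" when the planner works with abstractions rather than the real world.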


