
Counterarguments to the basic AI x-risk case
LessWrong (Curated & Popular)
The Disadvantages of AI Utility Functions
If we cared to make the ML training process at least as accurate as human value learning, it seems likely that we could. For the things I have seen AI learn so far, the distance from the real thing is intuitively small. If humans also substantially learn their values by observing examples, then the variation in AI-learned values might be expected to be on a similar scale to the variation among human values. It remains important to actually do this training, rather than deploying AI systems that were never trained to have human values.


