
"Where I agree and disagree with Eliezer" by Paul Christiano
LessWrong (Curated & Popular)
The Importance of List of Lethalities in AI
List of Lethalities #13 makes a particular argument that we won't see many AI problems in advance. As far as I can tell, most of Eliezer's position here comes from general intuitions rather than arguments. When he dismisses the possibility of AI systems performing safer tasks millions of times in training and then safely transferring to building nanotechnology, he is not engaging with the kind of system that is likely to be built.


