
Counterarguments to the basic AI x-risk case
LessWrong (Curated & Popular)
The Importance of Feedback Loops in AI Performance
A number of arguments have been posed for expecting very fast growth in intelligence at around human level. These include the claim that any movement past human level will take us to unimaginably superhuman levels. I don't know of a strong reason to expect this, though there are counterarguments. AI systems just past human level would still perform at around human level on various tasks and would contribute to research alongside everything else; this would not produce a superintelligent AI system within minutes.


