Is there some parallelism between the human brain and the neural network of this, the AI that you're effectively leveraging? Or do you think it's a generalizable claim without that parallel? What I'm talking about is hill climbing optimization that spits out intelligences that generalize. As you grind these things further and further, they can do more and more stuff, including stuff they were never trained on. That was always the goal of artificial general intelligence.
Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts about why we should be very, very afraid, and why we're not prepared or able to manage the terrifying risks of artificial intelligence.