"The idea of explainability is very tricky in AI. There's almost an inverse relationship between the success of a system and its explainability, at least with these deep neural networks. A lot of people are working on ways to make the machines more explainable, or on what amount to virtual microscopes that let you go in and look. But it's certainly an unsolved problem. And I think there's also a problem like, what is an explanation? What counts as an explanation?"
Computer scientist and author Melanie Mitchell of Portland State University and the Santa Fe Institute talks about her book Artificial Intelligence with EconTalk host Russ Roberts. Mitchell explains where we are today in the world of artificial intelligence (AI) and where we might be going. Despite the hype and excitement surrounding AI, Mitchell argues that much of what is called "learning" and "intelligence" when done by machines is not analogous to human capabilities. Machines remain confined to explicit, narrow tasks, with little transfer to similar but different challenges. Along the way, Mitchell explains some of the techniques used in AI and how progress has been made in many areas.