Interpretability, as a goal in AI research, means being able to look inside our systems and say what they're actually doing. Sam says there are two main ways to approach this problem. One is to try to decipher the systems we already have, to understand what all those billions of numbers going up and down actually mean. The other avenue of research is trying to build systems that can do a lot of the powerful things we're excited about with something like GPT-4, but without giant inscrutable piles of numbers in the middle.
AI has the potential to impact our society in dramatic ways, but researchers can’t explain precisely how it works or how it might evolve. Will they ever understand it?
This is the first episode of our new two-part series, The Black Box.
For more, go to http://vox.com/unexplainable
It’s a great place to view show transcripts and read more about the topics on our show.
Also, email us! unexplainable@vox.com
We read every email.
Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts
Learn more about your ad choices. Visit podcastchoices.com/adchoices