Some researchers have put forward another idea: monitoring AIs with more AIs, at the very least just to alert users when an AI seems to be behaving erratically. But it's a little circular, because then you have to ask: how can we be sure our helper AI isn't tricking us in the same way we worry our original AI is? So if these tech-centric solutions aren't the way forward, the best path could be political: just trying to reduce the power and ubiquity of certain kinds of AI.
AI can often solve problems in unexpected, undesirable ways. So how can we make sure it does what we want, the way we want? And what happens if we can’t?
This is the second episode of our new two-part series, The Black Box.
For more, go to http://vox.com/unexplainable
It’s a great place to view show transcripts and read more about the topics on our show.
Also, email us! unexplainable@vox.com
We read every email.
Support Unexplainable by making a financial contribution to Vox! bit.ly/givepodcasts