
Episode 27: How could self-driving cars change the world? - Part 2
The Received Wisdom
The Problem of Interpretability in Machine Learning
Alex Kendall accepts that there may need to be some give and take, but he doesn't want to dwell on that. If something does go wrong, as when a self-driving Uber killed Elaine Herzberg, would we know why the car did what it did? Machine learning researchers call this the problem of interpretability. As machine learning systems grow more complex, interpretability may become harder and harder. For Kendall, trust comes first and foremost from performance and safety.