Bias gets a bad rap in machine learning. And yet, the whole point of a machine learning model is that it biases certain inputs to certain outputs — a picture of a cat to a label that says “cat”, for example. Machine learning is bias-generation.
So removing bias from AI isn’t an option. Rather, we need to think about which biases are acceptable to us, and how extreme they can be. These questions call for a mix of technical and philosophical insight that’s hard to find. Luckily, I’ve found someone with exactly that combination by inviting onto the podcast none other than Margaret Mitchell, a former Senior Research Scientist in Google’s Research and Machine Intelligence Group, whose work focuses on practical AI ethics. And by practical, I really do mean the nuts and bolts of how AI ethics can be baked into real systems, and how to navigate the complex moral issues that come up when the AI rubber meets the road.
***
Intro music:
➞ Artist: Ron Gelinas
➞ Track Title: Daybreak Chill Blend (original mix)
➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
***
Chapters:
- 0:00 Intro
- 1:20 Margaret’s background
- 8:30 Meta learning and ethics
- 10:15 Margaret’s day-to-day
- 13:00 Sources of ethical problems within AI
- 18:00 Aggregated and disaggregated scores
- 24:02 How much bias will be acceptable?
- 29:30 What biases does the AI ethics community hold?
- 35:00 The overlap of these fields
- 40:30 The political aspect
- 45:25 Wrap-up