Getting Rid of Bias in Algorithms
Bias in machine learning algorithms is inherited from the data and systems they are built on. Is it the responsibility of the machine learning engineer to find and eliminate these biases, or should it work more like software testing? What do you think is the right team structure for rooting out these inherited biases? While designing an algorithm, no one can actually foresee what sorts of biases it may end up with.
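As a minimal sketch of what "inherited" bias looks like in practice, here is a hypothetical toy example: historical hiring records encode a skew between two groups, so any model trained to imitate those records inherits the skew. The data, group labels, and the disparate-impact check below are all illustrative assumptions, not from the episode.

```python
# Toy illustration (assumed data): a bias baked into historical records
# is inherited by anything trained to reproduce them.

def selection_rate(records, group):
    """Fraction of records in `group` with a positive outcome."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

# Hypothetical historical data: group B was hired far less often.
history = (
    [{"group": "A", "hired": 1}] * 8 + [{"group": "A", "hired": 0}] * 2 +
    [{"group": "B", "hired": 1}] * 4 + [{"group": "B", "hired": 0}] * 6
)

rate_a = selection_rate(history, "A")  # 0.8
rate_b = selection_rate(history, "B")  # 0.4

# Disparate-impact ratio: a common rule of thumb flags values below 0.8.
disparate_impact = rate_b / rate_a
print(disparate_impact)
```

The point of the check is that the bias is measurable before any model is trained: the skew lives in the data itself, which is why no one designing the algorithm afterward can fully foresee it.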