How to Quantify a Model's Uncertainty About Its Output
A lot of work has been happening in the last three or four years on quantifying a model's uncertainty about its output. The idea is that even if you can't solve this training data bias issue, a model that is operating outside its training distribution should at least know that it is. And there are some cool techniques you can use. For example, you can use something called dropout, where you randomly deactivate certain parts of the network and then rerun the prediction with different parts of the network turned off. If those predictions disagree with each other, that disagreement is a signal that the model is uncertain.
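As a rough illustration of that idea, here is a minimal sketch of dropout-based uncertainty estimation (often called Monte Carlo dropout) in PyTorch. The model, layer sizes, and inputs are placeholders I've assumed for the example, not anything specific from the conversation; the point is simply that keeping dropout active at prediction time and repeating the forward pass gives you a spread of outputs whose standard deviation acts as an uncertainty estimate.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty estimation.
# The architecture and data below are illustrative placeholders.
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    def __init__(self, in_dim=10, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=0.5),        # randomly deactivates hidden units
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, x, n_samples=50):
    """Rerun the prediction with dropout left on and summarise the spread."""
    model.train()                     # keep dropout active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    # Mean acts as the prediction; std acts as an uncertainty estimate.
    return preds.mean(dim=0), preds.std(dim=0)


model = SmallNet()
x = torch.randn(4, 10)                # a batch of 4 made-up inputs
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())  # larger std -> less confident output
```

In practice you would compare the resulting spread against values seen on in-distribution data; an unusually large spread is a hint the input falls outside what the model was trained on.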