
Making Machines More Human: Best-Selling Author Brian Christian on the Alignment Problem - Ep. 135
NVIDIA AI Podcast
How to Quantify a Model's Uncertainty About Its Output
A lot of work has been happening in the last three or four years on quantifying a model's uncertainty about its output. The idea would be that even if you can't solve this training-data bias issue, if a model is operating outside its training distribution, it should know that. And there are some cool techniques that you can do. For example, you can use something called dropout, where you randomly deactivate certain parts of the network and then you rerun the prediction with different parts of the network turned off.
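The dropout technique described here is often called Monte Carlo dropout: keep dropout active at inference, rerun the prediction many times with different units switched off, and treat the spread of the outputs as an uncertainty estimate. Below is a minimal NumPy sketch of that idea on a toy two-layer network; the network, its weights, and the function names are illustrative, not from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with fixed (pretend-trained) weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass: dropout stays ON at inference."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p  # randomly deactivate units
    h = h * mask / (1.0 - drop_p)        # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=100):
    """Rerun the prediction with different parts of the network off."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    # Mean = point prediction; std = uncertainty about that prediction.
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
print("prediction:", mean.ravel(), "uncertainty:", std.ravel())
```

Inputs far from the training distribution tend to produce a larger spread across the stochastic passes, which is what gives the model a signal that it is out of its depth.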