3min chapter


Making Machines More Human: Best-Selling Author Brian Christian on the Alignment Problem - Ep. 135

The AI Podcast

CHAPTER

How to Quantify a Model's Uncertainty About Its Output

A lot of work has been happening in the last three or four years on quantifying a model's uncertainty about its output. The idea would be that even if you can't solve this training-data bias issue, if a model is operating outside its training distribution, it should know that. And there are some cool techniques you can use. For example, you can use something called dropout, where you randomly deactivate certain parts of the network and then rerun the prediction with different parts of the network turned off.
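The technique described here is often called Monte Carlo dropout: keep dropout active at prediction time, run the model several times with different units turned off, and treat the spread across runs as an uncertainty estimate. A minimal NumPy sketch, with made-up random weights standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: one hidden layer with random weights,
# standing in for a model that has already been trained.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def predict(x, drop_rate=0.5):
    """One stochastic forward pass with dropout left ON at test time."""
    h = np.maximum(0.0, x @ W1)             # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate  # randomly deactivate units
    h = h * mask / (1.0 - drop_rate)        # inverted-dropout scaling
    return h @ W2

def mc_dropout(x, n_samples=100):
    """Rerun the prediction with different parts of the network turned
    off; the spread across runs serves as an uncertainty estimate."""
    preds = np.stack([predict(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([[0.7]])
mean, std = mc_dropout(x)
```

A larger `std` on an input far from the training data would signal that the model is less sure of its answer there.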
