
From the archive: Why do we dream?

Many Minds


Overfitting in the Deep Learning World

When you train a neural network on one particular task, it generally gets worse at other things. If I take a neural network and train it to be really good at chess, the chances that I can immediately transition it over to Go (even Go, which is somewhat similar to chess) are very low. This is a broader sense of overfitting: not overfitting to a single dataset within one task, but overfitting across tasks. Generally, we should think about learning as a trade-off.
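The trade-off described above can be seen in a toy sketch (not from the episode, just an illustration): fit a single-parameter model to a hypothetical "task A", then continue training it on a conflicting "task B", and measure how its error on task A grows.

```python
import numpy as np

# Toy illustration of cross-task "overfitting"/forgetting: a single
# weight w is fit to task A (y = 2x), then retrained on task B
# (y = -3x). After learning B, the model's error on A has grown --
# gains on the new task trade away performance on the old one.
# Task definitions and hyperparameters here are arbitrary choices.

xs = np.linspace(-1.0, 1.0, 20)
task_a = 2.0 * xs    # task A targets
task_b = -3.0 * xs   # task B targets (conflicts with A)

def train(w, ys, lr=0.1, steps=200):
    # Plain gradient descent on squared error for y_hat = w * x.
    for _ in range(steps):
        grad = np.mean(2.0 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def loss(w, ys):
    return float(np.mean((w * xs - ys) ** 2))

w = train(0.0, task_a)           # fit task A
loss_a_before = loss(w, task_a)  # near zero: A is learned
w = train(w, task_b)             # now fit task B from the same weights
loss_a_after = loss(w, task_a)   # error on A has grown substantially

print(loss_a_before, loss_a_after)
```

The point is not the specific numbers but the direction: updating the same parameters to serve task B moves them away from the solution for task A, which is the trade-off the speaker describes.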

