
Yoshua Bengio: equipping AI with higher level cognition and creativity

The Robot Brains Podcast

CHAPTER

Inductive Bias in Machine Learning

In RL, we have a very sparse dependency graph, where concepts can only enter into relationships with others through dependencies that involve maybe two, three, four, five things at most. And our memory is also structured around these little chunks, these dependencies, like a sentence. So it must be because it has an evolutionary advantage, and it must be a learning advantage, I think, because it's a constraint. We know that in machine learning, constraints, like regularizers and things like that, usually represent a strong inductive bias.
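
The point about regularizers acting as inductive biases can be made concrete with a minimal sketch, not from the episode: a toy regression where only a few of many features matter, and an L1 penalty (one common sparsity-inducing constraint) biases the learner toward solutions with few active dependencies. The data, penalty strength, and threshold below are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: only 2 of 20 input features actually matter.
rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[[3, 7]] = [2.0, -1.5]          # sparse ground truth
y = X @ true_w + 0.1 * rng.normal(size=n)

def fit(lam, steps=2000, lr=0.01):
    """Least squares with an L1 penalty, trained by (sub)gradient descent."""
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam * np.sign(w)
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)    # unconstrained: weight spreads over many features
w_sparse = fit(lam=0.1)   # L1 constraint: biased toward few dependencies

print("features used without penalty:", int(np.sum(np.abs(w_plain) > 0.05)))
print("features used with L1 penalty:", int(np.sum(np.abs(w_sparse) > 0.05)))
```

The constraint does not add information about which features matter; it only encodes the prior assumption that few of them do, which is exactly the sense in which a regularizer is an inductive bias.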
