
Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from
Generally Intelligent
Is There a Sweet Spot of Learning in Machine Learning?
One theory is that there's some mechanism that drives you toward optimal learning, the idea being that there is some optimal amount of learning to aim for. Other theories are more about information overload: you can only process so much information at a time, so you want to prefer doing just a little bit rather than overloading your system. The fourth possibility is the one that I think is most likely, under a model of how infants are parsing the incoming sequential information. It's a very simple model that's looking at unigram statistics and transitional statistics, but that's not the only way in which you could represent the world. And if you concluded that this was random, you're actually done. That's a higher-order representation.
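The excerpt doesn't spell out the model itself, so here is a minimal sketch of the kind of "very simple model" it describes: tracking unigram and transitional (bigram) statistics over a sequence of discrete events and scoring each event's surprisal. This is an illustrative Python toy with hypothetical names and add-one smoothing, not the actual model from Kidd's work on infant attention.

```python
from collections import Counter, defaultdict
import math

def unigram_and_transitional_surprisal(events):
    """Score each event's surprisal under running unigram and
    transitional (bigram) statistics, with add-one smoothing."""
    vocab = set(events)
    unigram_counts = Counter()
    bigram_counts = defaultdict(Counter)

    results = []
    prev = None
    for e in events:
        # Unigram probability: how often this event type has occurred so far.
        p_uni = (unigram_counts[e] + 1) / (sum(unigram_counts.values()) + len(vocab))
        # Transitional probability: how often this event has followed the previous one.
        if prev is not None:
            p_trans = (bigram_counts[prev][e] + 1) / (sum(bigram_counts[prev].values()) + len(vocab))
        else:
            p_trans = p_uni
        results.append({
            "event": e,
            "unigram_surprisal": -math.log2(p_uni),
            "transitional_surprisal": -math.log2(p_trans),
        })
        # Update counts after scoring, so each event is scored predictively.
        unigram_counts[e] += 1
        if prev is not None:
            bigram_counts[prev][e] += 1
        prev = e
    return results

if __name__ == "__main__":
    sequence = ["A", "B", "A", "B", "A", "C", "A", "B"]
    for row in unigram_and_transitional_surprisal(sequence):
        print(row)
```

Under this kind of model, highly predictable events (low surprisal) and highly unpredictable ones (high surprisal) are both less engaging than events of intermediate surprisal, which is the sense in which concluding a stream is random, or fully predictable, means you're "done" with it.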