How Can the Brain Learn Multilayer Nets?
My main interest has always been unsupervised learning, because that's what most human learning is. If we can get the idea of top-down predictions and bottom-up predictions agreeing in a contrastive sense, that may explain how the brain can learn multilayer nets. Ting Chen made it work really well for static images, and we're now trying to extend that to video. We're trying to extend it using attention, which is going to be very important for video - you can't possibly process everything in a video at high resolution. The primary question in vision is: where should I look next? Attention is potentially crucial because it's central to human vision.
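The "agreeing in a contrastive sense" idea can be sketched with a small, hypothetical loss function: matching pairs of top-down and bottom-up representations are pushed to agree, while mismatched pairs are pushed apart. This is a minimal NT-Xent-style sketch in the spirit of Ting Chen's contrastive work on static images (SimCLR); the function name, temperature value, and use of NumPy are illustrative assumptions, not the actual method described.

```python
import numpy as np

def contrastive_agreement_loss(top_down, bottom_up, temperature=0.1):
    """Sketch of a contrastive agreement loss (NT-Xent-style, assumed here):
    each top-down prediction should agree with its matching bottom-up
    representation (the diagonal) and disagree with all others."""
    # L2-normalize rows so dot products become cosine similarities
    td = top_down / np.linalg.norm(top_down, axis=1, keepdims=True)
    bu = bottom_up / np.linalg.norm(bottom_up, axis=1, keepdims=True)
    logits = td @ bu.T / temperature  # pairwise similarity matrix
    # Softmax cross-entropy with matching pairs (the diagonal) as targets
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Illustrative usage: agreement between identical representations
# yields a lower loss than agreement between unrelated ones.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
y = rng.normal(size=(8, 16))
print(contrastive_agreement_loss(x, x) < contrastive_agreement_loss(x, y))
```

The temperature controls how sharply the loss concentrates on the hardest negative pairs; smaller values make disagreement with non-matching pairs more strongly penalized.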