The Science of Generalization in Machine Learning
About 10,000 deep learning papers have been written about how hard-coding priors for a specific task into a neural network architecture works better than a lack of a prior. Do you think we can go far by coming up with better methods for this kind of cheating, for better methods of large-scale annotation of data? So, building better priors. If you made it, it's not cheating anymore. Right.