Evolution of Inductive Bias in Neural Nets
The journey of neural nets over the last 30-35 years reflects the concept of inductive bias: learning from data requires underlying assumptions, a point the No Free Lunch theorem establishes mathematically. Earlier, simpler neural nets required extensive human expert knowledge and hand-crafted assumptions, while modern models such as large-scale transformers rely on the inductive biases built into attention. Even with new generative pre-trained models, inductive bias is still imposed at the inference stage, which keeps deep learning rich in concepts such as pre-training, transfer learning, and zero-shot learning.
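To make the attention-based inductive bias mentioned above concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of transformers. This is an illustrative toy (the function name, shapes, and random inputs are assumptions for the example), not a production implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: the transformer's inductive bias is
    that any position may attend to any other, weighted by similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of values

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

Unlike earlier architectures that hard-coded locality (e.g. convolutions), this bias is comparatively weak: the model learns from data which positions should influence which.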