
AI Frontiers: A deep dive into deep learning with Ashley Llorens and Chris Bishop

Microsoft Research Podcast


Evolution of Inductive Bias in Neural Nets

The journey of neural nets over the last 30-35 years reflects the concept of inductive bias: learning from data always rests on underlying assumptions, a point the No Free Lunch theorem makes mathematically precise. Earlier, simpler neural nets depended heavily on hand-crafted expert knowledge and assumptions, while modern models such as large-scale transformers build their inductive bias chiefly around attention. Even with today's generative pre-trained models, inductive bias can still be imposed at the inference stage, which is part of why deep learning is now rich in concepts like pre-training, transfer learning, and zero-shot learning.
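As a rough illustration of that last point, the sketch below shows one way an inductive bias can be supplied at inference time: a zero-shot classifier is given candidate labels as an assumption, with no task-specific training. It assumes the Hugging Face transformers library and the facebook/bart-large-mnli checkpoint, neither of which is mentioned in the episode.

```python
# Minimal sketch: imposing an inductive bias at inference time via zero-shot classification.
# Assumes the Hugging Face `transformers` library and the public
# `facebook/bart-large-mnli` checkpoint (not discussed in the episode).
from transformers import pipeline

# A pre-trained model is used as-is; no task-specific fine-tuning.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# The candidate labels are the assumption we hand the model at inference.
result = classifier(
    "The optimizer diverged after the learning rate was increased.",
    candidate_labels=["machine learning", "cooking", "sports"],
)
print(result["labels"][0])  # highest-scoring label, e.g. "machine learning"
```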
