
Vladimir Vapnik: Statistical Learning
Lex Fridman Podcast
Deep Learning Is Not a Very Difficult Problem
Deep learning has outperformed the best human players at games despite lacking a complete mathematical theory, which Vapnik takes as a sign that it is not a particularly difficult problem. Empirical evidence suggests that deep learning does not necessarily require large datasets: far fewer training samples can still yield effective models, and some problems to which deep learning is applied may not need it at all. Formulating a deep architecture amounts to choosing a suitable set of admissible functions, and that set should be derived from the data rather than from theoretical speculation alone. Training relies on the law of large numbers for convergence, but a well-chosen, restricted set of admissible functions can greatly reduce the amount of data required, achieving convergence more efficiently.
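The link between the size of the admissible function class and data requirements can be made concrete with the classical VC generalization bound. A minimal sketch, assuming hypothetical values for the VC dimension `h`, sample count `n`, and confidence `delta` (none taken from the conversation), shows that a smaller class yields a tighter bound on the gap between training and true error at the same sample size:

```python
import math

def vc_gap_bound(h, n, delta=0.05):
    """Classical VC-style bound on the generalization gap:
    sqrt( (h * (ln(2n/h) + 1) + ln(4/delta)) / n ).
    h: VC dimension of the admissible function class,
    n: number of training samples, delta: confidence level."""
    return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

n = 10_000  # hypothetical sample size
small_class = vc_gap_bound(h=10, n=n)    # restricted admissible set
large_class = vc_gap_bound(h=1_000, n=n) # rich admissible set

# At equal n, the restricted class has the smaller gap bound,
# i.e. it needs less data to reach the same guarantee.
print(small_class < large_class)
```

The comparison illustrates the point in the summary: choosing the admissible set carefully (lowering `h`) buys the same convergence guarantee with fewer samples.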


