
29 - Science of Deep Learning with Vikrant Varma

AXRP - the AI X-risk Research Podcast


Exploring Implicit Priors and Optimization in Deep Learning Models

The chapter analyzes how probability mass at initialization over weights and parameters shapes what Stochastic Gradient Descent finds, discussing implicit priors, regularization preferences, and the effect of scaling up the parameters in layers. It also delves into phenomena like semi-grokking and ungrokking in deep learning models, examining their implications for efficiency and optimization difficulty.
