#206 – Ishan Misra: Self-Supervised Deep Learning in Computer Vision

Lex Fridman Podcast

Optimizing RegNets for Efficiency

This chapter explores the RegNet family of architectures, emphasizing the trade-off between computational cost (FLOPs) and memory usage in deep learning models. It discusses techniques such as squeeze-and-excitation blocks, training methodologies, and the role of self-supervised learning in handling large datasets. The chapter also addresses the challenges of scaling neural networks and the importance of libraries like VISSL in standardizing self-supervised learning practices.
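The squeeze-and-excitation block mentioned above can be sketched as follows. This is a minimal NumPy illustration of the idea (global average pool, a small bottleneck MLP, and per-channel sigmoid gating), not the implementation discussed in the episode; the weight shapes and reduction ratio are illustrative assumptions.

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Apply squeeze-and-excitation to a (C, H, W) feature map.

    Squeeze:  global average pool -> one summary value per channel.
    Excite:   two-layer bottleneck MLP + sigmoid -> per-channel gate in (0, 1).
    Scale:    re-weight each channel of x by its gate.
    """
    squeezed = x.mean(axis=(1, 2))                 # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck, (C // r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid, (C,)
    return x * gates[:, None, None]                # broadcast gate over H, W

# Toy example: 8 channels, hypothetical reduction ratio r = 4.
rng = np.random.default_rng(0)
c, r = 8, 4
x = rng.standard_normal((c, 6, 6))
w1 = rng.standard_normal((c // r, c)) * 0.1       # squeeze weights (illustrative)
w2 = rng.standard_normal((c, c // r)) * 0.1       # excite weights (illustrative)
y = squeeze_excitation(x, w1, w2)
```

Because each gate lies in (0, 1), the block can only attenuate channels, never amplify them, which is one reason it adds useful capacity at very low compute cost.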

