
#206 – Ishan Misra: Self-Supervised Deep Learning in Computer Vision

Lex Fridman Podcast

CHAPTER

Designing Neural Network Architectures That Are Efficient in the Memory Space

A lot of neural networks are characterized in terms of FLOPs, right? FLOPs basically being the floating point operations. Now it turns out that FLOPs are really not a good indicator of how good a particular network is. A better indicator is the activations, or the memory, that are used by this particular model. RegNet is an architecture family that's particularly good on both FLOPs and memory.
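The FLOPs-versus-activation-memory distinction is easy to measure directly. Below is a minimal sketch, not from the episode: it uses PyTorch forward hooks to tally a rough multiply-accumulate count (a common FLOP proxy) and the total activation elements produced by each layer. The model choice (torchvision's resnet18), the input size, and the per-layer estimates are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # illustrative model choice, not from the episode

def profile(model, x):
    """Return (approx. MAC count, total activation elements) for one forward pass."""
    flops = 0
    activation_elems = 0

    def hook(module, inputs, output):
        nonlocal flops, activation_elems
        if isinstance(output, torch.Tensor):
            # Every leaf layer's output tensor contributes to activation memory.
            activation_elems += output.numel()
        # Rough multiply-accumulate estimates for the two dominant layer types.
        if isinstance(module, nn.Conv2d):
            kernel_ops = (module.in_channels // module.groups
                          * module.kernel_size[0] * module.kernel_size[1])
            flops += output.numel() * kernel_ops
        elif isinstance(module, nn.Linear):
            flops += output.numel() * module.in_features

    # Hook only leaf modules so nothing is double-counted.
    handles = [m.register_forward_hook(hook)
               for m in model.modules() if len(list(m.children())) == 0]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return flops, activation_elems

model = resnet18().eval()
x = torch.randn(1, 3, 224, 224)
flops, acts = profile(model, x)
print(f"~{flops / 1e9:.2f} GFLOPs (MACs), ~{acts * 4 / 1e6:.1f} MB of fp32 activations")
```

Two architectures with similar FLOP counts can differ substantially on the second number, which is the point being made about why activation memory is the more telling efficiency measure.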
