
[28] Karen Ullrich - A Coding Perspective on Deep Latent Variable Models
The Thesis Review
Optimizing Neural Network Efficiency
This chapter explores the challenges and advances in optimizing parameterized posteriors for coding and neural network compression. It traces the evolution from traditional compression methods to modern strategies that directly measure energy consumption, including techniques such as soft weight sharing and reduced integer precision. The discussion also covers the implications of compression for model performance, accuracy metrics, and how compressed networks handle underrepresented classes.
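One of the techniques mentioned, soft weight sharing, compresses a network by encouraging many weights to take one of a few shared values. The full method learns those shared values as a Gaussian mixture prior during retraining; the sketch below shows only the final quantization step, where each weight is snapped to its nearest cluster center. All names and values here are illustrative, not taken from the thesis.

```python
import numpy as np

def quantize_weights(weights, centers):
    """Snap each weight to its nearest cluster center (hard assignment).

    This is only the last step of soft weight sharing; in the full
    method the centers themselves are learned as a mixture prior
    while the network is retrained.
    """
    w = np.asarray(weights, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # For each weight, find the index of the closest center.
    idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
    return centers[idx]

# Toy example: five weights collapse onto three shared values,
# so only the center table plus per-weight indices need storing.
centers = np.array([-0.5, 0.0, 0.5])
w = np.array([-0.48, 0.02, 0.51, -0.03, 0.47])
print(quantize_weights(w, centers))  # → [-0.5  0.   0.5  0.   0.5]
```

After quantization, the weight matrix can be stored as small integer indices into the center table, which is where the compression gain comes from.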