
[28] Karen Ullrich - A Coding Perspective on Deep Latent Variable Models
The Thesis Review
Optimizing Neural Network Efficiency
This chapter explores the challenges and advances in optimizing parameterized posteriors at the intersection of coding and neural network architectures. It traces the evolution from traditional compression methods to strategies that account for energy consumption more directly, including techniques such as soft weight sharing and reduced integer precision. The discussion also covers the implications of compression for model performance, accuracy metrics, and how well compressed networks handle underrepresented classes.
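
To make the weight-sharing idea mentioned above concrete, here is a minimal, hypothetical sketch assuming NumPy and scikit-learn's GaussianMixture. It shows only the post-hoc clustering step behind soft weight sharing: fit a small Gaussian mixture over a layer's weights and snap each weight to its component mean, so the layer stores a handful of shared values plus small integer indices. The function name cluster_weights and the toy layer are illustrative, not code from the thesis or episode.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def cluster_weights(weights: np.ndarray, n_components: int = 16, seed: int = 0) -> np.ndarray:
    """Snap each weight to the mean of its most likely Gaussian mixture component.

    After clustering, the layer only needs n_components distinct float values
    plus a small index per weight, which is where the compression comes from.
    """
    flat = weights.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(flat)
    assignments = gmm.predict(flat)  # index of the most likely component per weight
    return gmm.means_[assignments].reshape(weights.shape)


# Toy usage: a random "layer" reduced to 16 shared weight values.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 128))
w_shared = cluster_weights(w, n_components=16)
print("distinct values before:", np.unique(w).size, "after:", np.unique(w_shared).size)
```

This omits the key step discussed in the episode, namely training with the mixture prior so weights move toward the shared values before clustering; the sketch only illustrates why clustered weights compress well.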