Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero

Machine Learning Street Talk (MLST)

CHAPTER

Exploring Representation Learning in Neural Networks

This chapter examines the nuances of learning representations through reconstruction, comparing autoencoders with contrastive and non-contrastive methods. It highlights how the choice of noise strategy and dataset bias shape what autoencoders learn, illustrating how engineered noise can improve performance on complex downstream tasks such as toxicity detection. The discussion also covers the geometric characteristics of layers in large language models and the role of differentiable features in improving interpretability and robustness.
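The episode discusses denoising as a reconstruction objective only in general terms; as a hedged illustration of the idea, the sketch below trains a tiny linear autoencoder (NumPy, toy data invented here) to reconstruct clean inputs from inputs corrupted with engineered Gaussian noise, so the code must learn the data's underlying subspace rather than the noise.

```python
import numpy as np

# Minimal denoising-autoencoder sketch (illustrative toy, not the
# speaker's implementation): a linear encoder/decoder trained by
# gradient descent to map noisy inputs back to clean inputs.
rng = np.random.default_rng(0)

# Toy data: 200 samples lying on a 2-D subspace of R^8.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis

# Engineered corruption: additive Gaussian noise on the inputs.
X_noisy = X + 0.1 * rng.normal(size=X.shape)

# Encoder W_e (8 -> 2 bottleneck) and decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

def loss(W_e, W_d):
    # Mean squared reconstruction error against the CLEAN targets.
    return np.mean((X_noisy @ W_e @ W_d - X) ** 2)

lr = 0.01
initial = loss(W_e, W_d)
for _ in range(500):
    H = X_noisy @ W_e               # latent codes
    R = H @ W_d                     # reconstructions
    G = 2 * (R - X) / X.size        # dL/dR for the mean-squared loss
    W_d -= lr * (H.T @ G)           # chain rule through the decoder
    W_e -= lr * (X_noisy.T @ (G @ W_d.T))  # ... and the encoder
final = loss(W_e, W_d)
```

Because the targets are the clean samples, minimizing this loss pushes the bottleneck toward the 2-D structure of the data; `final` should be well below `initial` after training.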
