Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero

Machine Learning Street Talk (MLST)

Exploring Representation Learning in Neural Networks

This chapter examines the nuances of learning representations through reconstruction, comparing autoencoders with contrastive and non-contrastive models. It highlights how the choice of noise strategy and dataset bias shape what autoencoders learn, illustrating how carefully engineered noise can improve performance on complex downstream tasks such as toxicity detection. The discussion also covers the geometric characteristics of layers in large language models and the role of differentiable features in improving interpretability and robustness.
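The denoising idea mentioned above can be sketched in a few lines: corrupt the input with engineered noise, but train the autoencoder to reconstruct the *clean* signal. The setup below is a minimal hypothetical illustration (a linear autoencoder on toy low-dimensional data), not the specific models discussed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points on a 1-D line embedded in 5-D (low intrinsic dimension).
t = rng.uniform(-1, 1, size=(200, 1))
X = t @ rng.normal(size=(1, 5))

# Minimal linear autoencoder: 5-D input, 2-D bottleneck (illustrative sizes).
d, h = 5, 2
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))

mse_before = np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 0.05
for step in range(2000):
    # Denoising objective: corrupt the input, reconstruct the clean target.
    X_noisy = X + rng.normal(scale=0.3, size=X.shape)
    Z = X_noisy @ W_enc          # encode corrupted input
    X_hat = Z @ W_dec            # decode
    err = X_hat - X              # error against the CLEAN data
    # Gradient descent on mean squared error.
    g_dec = Z.T @ err / len(X)
    g_enc = X_noisy.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse_after = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(mse_before, mse_after)
```

Because the network never sees the clean input, it cannot learn the identity map; the noise acts as a regularizer that pushes the bottleneck toward the data's low-dimensional structure, which is the intuition behind tuning the noise distribution to the task.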
