Exploring Failure in Machine Learning Research and Parameter-Efficient Techniques for Large Language Models
The chapter covers the concept of a failure CV, the ML Collective, and the speaker's machine learning research at Google DeepMind. It focuses on training dynamics, model capacity, scaling, intrinsic dimension, and low-rank adaptation (LoRA) for parameter-efficient fine-tuning of large language models. The discussion also touches on balancing curiosity-driven and goal-driven research, and the differences between the roles of ML engineers and researchers.
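The core idea behind low-rank adaptation mentioned above is to freeze the pretrained weights and train only a small low-rank update on top of them. Below is a minimal PyTorch sketch of that idea, not the speakers' own code: the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, though the zero-initialized `B` factor and the `alpha / r` scaling follow the original LoRA paper's convention.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where W is frozen
    and only A (r x in_features) and B (out_features x r) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A starts as small random values, B as zeros, so the
        # adapter is a no-op at initialization.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Example: adapt a single 768x768 projection; only A and B get gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 8 * 768 = 12288, vs 589824 frozen
```

With rank 8, the adapter trains roughly 2% of the parameters of the full 768x768 weight matrix, which is what makes the approach parameter-efficient for fine-tuning large models.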