
Lewis Tunstall: Hugging Face, SetFit and Reinforcement Learning | Learning from Machine Learning #6
Learning from Machine Learning
Adapters in Fine-Tuning Pre-Trained Transformers
This chapter discusses the concept of adapters in fine-tuning pre-trained transformers. It explains how adapters, small weight matrices inserted into the transformer's linear layers, reduce the number of trainable parameters and make fine-tuning more memory-efficient and faster. It also touches on the challenges of working with very large models and introduces proximal policy optimization (PPO) in reinforcement learning.
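To make the idea concrete, here is a minimal sketch (not from the episode) of the adapter pattern the summary describes: a frozen pre-trained linear layer augmented with two small trainable matrices. The class and parameter names (LowRankAdapter, rank) are illustrative, not part of any specific library.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen linear layer with a small trainable low-rank update.

    Only the two small matrices are trained, so the trainable parameter count
    is rank * (in_features + out_features) instead of in_features * out_features.
    """

    def __init__(self, base_linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Small adapter matrices: in_features -> rank -> out_features
        self.down = nn.Linear(base_linear.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base_linear.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the small trainable correction
        return self.base(x) + self.up(self.down(x))


# Usage: wrap a projection layer and train only the adapter parameters
layer = nn.Linear(768, 768)            # stands in for a transformer linear layer
adapted = LowRankAdapter(layer, rank=8)
out = adapted(torch.randn(4, 768))
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable vs. 589,824 weights in the frozen base layer
```

Because only the adapter weights receive gradients, optimizer state and gradient memory shrink accordingly, which is the memory and speed benefit discussed in the chapter.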