
Pre-training LLMs: One Model To Rule Them All? with Talfan Evans, DeepMind
Thinking Machines: AI & Philosophy
00:00
Navigating the Heterogeneous Model Landscape
This chapter delves into the complexities of serving multiple Low-Rank Adaptations (LoRAs) on a single GPU, addressing the engineering hurdles faced by companies like OpenAI. It also considers the role of cloud providers in hosting LoRAs and weighs the trade-offs between generality and specialization in model training.