
Pre-training LLMs: One Model To Rule Them All? with Talfan Evans, DeepMind
Thinking Machines: AI & Philosophy
Navigating the Heterogeneous Model Landscape
This chapter examines the complexities of serving multiple Low-Rank Adaptations (LoRAs) on a single GPU, the engineering hurdles faced by companies such as OpenAI, the role cloud providers could play in hosting LoRAs, and the trade-offs between generality and specialization in model training.
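To make the idea concrete, here is a minimal NumPy sketch of what "multiple LoRAs on a single GPU" means at the tensor level: one frozen base weight is shared across tenants, and each request applies only its own low-rank update. The tenant names and dimensions are illustrative assumptions, not details from the episode.

```python
import numpy as np

# Illustrative sketch (not the method discussed in the episode):
# W is the frozen base weight; each adapter contributes a low-rank update B @ A.
rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 8, 2

W = rng.normal(size=(d_out, d_in))            # shared base weight, loaded once
adapters = {                                  # per-tenant low-rank factors (hypothetical tenants)
    name: (rng.normal(size=(d_out, rank)),    # B: d_out x r
           rng.normal(size=(rank, d_in)))     # A: r x d_in
    for name in ("tenant_a", "tenant_b")
}

def forward(x: np.ndarray, adapter: str) -> np.ndarray:
    """Apply the shared base layer plus the requested adapter's low-rank update."""
    B, A = adapters[adapter]
    # Equivalent to x @ (W + B @ A).T, but without merging the adapter into W,
    # so many adapters can share one copy of the base weights.
    return x @ W.T + x @ A.T @ B.T

x = rng.normal(size=(4, d_in))                # a batch of 4 requests
y_a = forward(x, "tenant_a")
y_b = forward(x, "tenant_b")
```

Because the rank-r factors are tiny relative to W, many adapters fit in memory alongside a single base model, which is the core of the hosting question the chapter raises.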