
Pre-training LLMs: One Model To Rule Them All? with Talfan Evans, DeepMind

Thinking Machines: AI & Philosophy

CHAPTER

Navigating the Heterogeneous Model Landscape

This chapter delves into the complexities of serving multiple Low-Rank Adaptations (LoRAs) on a single GPU, addressing the engineering hurdles faced by companies like OpenAI. It also considers the role of cloud providers in facilitating LoRA hosting and weighs the trade-offs between generality and specialization in model training.
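As a rough illustration of the hosting problem discussed in the chapter, the sketch below shows one common way to serve several LoRA adapters from a single frozen base weight on one GPU: the expensive base matrix is shared across tenants, and each request only adds a small per-adapter low-rank update. This is not the speakers' implementation; the class name `MultiLoRALinear`, the adapter names, and the ranks are illustrative assumptions.

```python
# Minimal sketch: many LoRA adapters sharing one frozen base weight on one GPU.
# Class/adapter names and ranks are hypothetical, not from the episode.
import torch


class MultiLoRALinear(torch.nn.Module):
    """One frozen base weight shared by many low-rank adapters."""

    def __init__(self, d_in: int, d_out: int, ranks: dict[str, int]):
        super().__init__()
        self.base = torch.nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # base model stays frozen
        # Per-adapter low-rank factors: delta_W = B @ A, with rank r << d
        self.adapters = torch.nn.ModuleDict({
            name: torch.nn.ParameterDict({
                "A": torch.nn.Parameter(torch.randn(r, d_in) * 0.01),
                "B": torch.nn.Parameter(torch.zeros(d_out, r)),
            })
            for name, r in ranks.items()
        })

    def forward(self, x: torch.Tensor, adapter: str) -> torch.Tensor:
        y = self.base(x)              # shared compute, reused by every tenant
        lora = self.adapters[adapter]
        # Low-rank path adds only O(r * (d_in + d_out)) parameters per adapter
        return y + x @ lora["A"].T @ lora["B"].T


if __name__ == "__main__":
    layer = MultiLoRALinear(d_in=512, d_out=512,
                            ranks={"legal": 8, "medical": 16})
    x = torch.randn(4, 512)
    print(layer(x, adapter="legal").shape)  # torch.Size([4, 512])
```

The design choice this sketch highlights is the one driving the hosting economics: because only the tiny A and B factors differ per customer, a provider can keep one copy of the base model resident on the GPU and swap or batch adapters cheaply, rather than loading a full fine-tuned model per tenant.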
