How to train a Million Context LLM — with Mark Huang of Gradient.ai

Latent Space: The AI Engineer Podcast

CHAPTER

Understanding Context Length in Language Model Training

This chapter explores how varying context lengths affect the training of large language models, comparing a curriculum-learning approach with traditional training. It emphasizes the importance of data quality and positional encodings, surveys recent advances in context-scaling techniques, and addresses the difficulties of evaluating long-context model performance.
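The mention of positional encodings points at a core lever for context extension. As a hedged sketch (the episode's exact recipe is not stated in this summary), one widely used technique is to raise the RoPE base frequency when fine-tuning for longer contexts; the head_dim, base, and position values below are illustrative assumptions, not Gradient's actual settings:

```python
import numpy as np

def rope_angles(positions: np.ndarray, head_dim: int, base: float) -> np.ndarray:
    """Rotation angles for rotary positional embeddings (RoPE).

    Each pair of head dimensions rotates at frequency base ** (-2i / head_dim);
    the returned matrix has shape (num_positions, head_dim // 2).
    """
    freqs = base ** (-np.arange(0, head_dim, 2) / head_dim)
    return np.outer(positions, freqs)

# Pre-training regime (illustrative values): base=10_000, max position ~8K.
pretrain = rope_angles(np.array([8_191]), head_dim=128, base=10_000.0)

# "Theta scaling" for context extension: raising the base slows the
# low-frequency rotations, so even at position ~1M the slowest pair's
# angle stays inside the range the model saw during pre-training.
extended = rope_angles(np.array([1_048_575]), head_dim=128, base=4_000_000.0)

print(f"slowest-pair angle at 8K (base 1e4): {pretrain[0, -1]:.3f} rad")
print(f"slowest-pair angle at 1M (base 4e6): {extended[0, -1]:.3f} rad")
```

Because the lowest-frequency pairs never rotate further than they did during pre-training, attention patterns learned at short context transfer to the extended window; this is one common rationale for base-frequency scaling, not necessarily the method discussed in the episode.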
