Simon Karasik, an experienced ML Engineer, discusses handling multi-terabyte LLM checkpoints. Topics include managing massive models, cloud storage options, comparing Slurm and Kubernetes, navigating data processing challenges, monitoring Kubernetes nodes with faulty GPUs, and simplifying model training processes.
Podcast summary created with Snipd AI
Quick takeaways
Managing terabyte-sized LLM checkpoints requires a deep understanding of scaling laws and strategic checkpoint frequency planning.
Utilizing Nebius AI's cloud resources for LLM training can offer tailored tools, GPU availability, and user-friendly interfaces for engineers.
Deep dives
Training Large Language Models at Nebius AI
Nebius AI, a cloud company known for AI-specific cloud services, is currently training large language models (LLMs). Simon, an ML engineer at Nebius AI, discusses his work on a 300-billion-parameter model, highlighting the challenges of managing checkpoints in such large-scale training: a single checkpoint runs to multiple terabytes, so teams need to understand how checkpoint size scales with model size and plan checkpoint frequency strategically.
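To see why checkpoints reach terabyte scale, a back-of-the-envelope estimate helps. The sketch below is illustrative, not from the episode: it assumes bf16 weights plus an fp32 master copy and Adam optimizer moments (byte counts per parameter are the assumption).

```python
def checkpoint_size_bytes(n_params: float,
                          weight_bytes: int = 2,   # bf16 weights
                          master_bytes: int = 4,   # fp32 master copy
                          optim_bytes: int = 8) -> float:
    """Rough size of one training checkpoint: weights plus fp32 master
    weights plus Adam first/second moments (4 + 4 bytes per parameter)."""
    return n_params * (weight_bytes + master_bytes + optim_bytes)

size = checkpoint_size_bytes(300e9)  # the 300B-parameter model from the episode
print(f"{size / 1e12:.1f} TB")  # ~4.2 TB per checkpoint under these assumptions
```

With mixed-precision training, optimizer state dominates: the 2-byte weights are a small fraction of the 14 bytes stored per parameter here.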
Nebius AI's Specialized Cloud Infrastructure
Nebius AI offers tailored tools for training and inference processes, with a focus on GPU availability and user-friendly interfaces for engineers and developers. Simon emphasizes the importance of utilizing Nebius AI's cloud resources for LLM training, detailing the unique challenges and benefits of working within their specialized infrastructure.
Transition from Traditional ML to Large Language Models
Simon reflects on transitioning from traditional machine learning practices, like scikit-learn models, to training massive LLMs. He discusses the shift in complexity and resource requirements, emphasizing the need to keep the deployment simple when venturing into deep learning and LLM training.
Challenges and Strategies with Checkpoints and Storage
The discussion delves into critical aspects of managing checkpoints and storage in LLM training. Simon highlights the differences in approach between pre-training and fine-tuning, stressing the need for scalable and efficient storage solutions and network management to handle the massive data volumes and complexities of large-scale training.
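One standard way to reason about checkpoint frequency, as a trade-off between write overhead and recomputation after failures, is the Young/Daly approximation. The episode does not name this formula; it is offered here as a common rule of thumb, and the example numbers (10-minute writes, one failure per 24 hours) are hypothetical.

```python
import math

def optimal_checkpoint_interval(write_secs: float, mtbf_secs: float) -> float:
    """Young/Daly approximation: the interval that roughly minimises
    expected time lost to checkpoint writes plus lost work after a failure."""
    return math.sqrt(2 * write_secs * mtbf_secs)

# Hypothetical numbers: 10-minute checkpoint writes, mean time between
# failures of 24 hours across the cluster.
interval = optimal_checkpoint_interval(600, 24 * 3600)
print(f"checkpoint every {interval / 3600:.1f} h")  # roughly every 2.8 h
```

The intuition: checkpointing more often wastes time on writes; checkpointing less often risks losing more compute when a node with a faulty GPU takes the job down.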
Simon Karasik is a proactive and curious ML Engineer with 5 years of experience, having developed and deployed ML models at web scale for Ads and Tax.
Huge thank you to Nebius AI for sponsoring this episode. Nebius AI - https://nebius.ai/
MLOps podcast #228 with Simon Karasik, Machine Learning Engineer at Nebius AI, Handling Multi-Terabyte LLM Checkpoints.
// Abstract
The talk provides a gentle introduction to the topic of LLM checkpointing: why it is hard and how big the checkpoints get. It covers various tips and tricks for saving and loading multi-terabyte checkpoints, as well as how to choose among cloud storage options for checkpointing.
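One generic trick in this space is to split a serialized checkpoint into shards and write them in parallel, so no single writer has to stream the whole multi-terabyte blob. The exact technique discussed in the episode is not specified here; this is a minimal local-file sketch standing in for parallel uploads to object storage.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def save_sharded(blob: bytes, out_dir: str, n_shards: int = 8) -> list[str]:
    """Split a serialized checkpoint into fixed-size shards and write
    them concurrently; each writer handles a much smaller object."""
    shard_size = -(-len(blob) // n_shards)  # ceiling division

    def write(i: int) -> str:
        path = os.path.join(out_dir, f"shard_{i:04d}.bin")
        Path(path).write_bytes(blob[i * shard_size:(i + 1) * shard_size])
        return path

    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        return list(pool.map(write, range(n_shards)))

blob = os.urandom(1 << 20)  # 1 MiB demo payload standing in for a checkpoint
with tempfile.TemporaryDirectory() as d:
    paths = save_sharded(blob, d)
    restored = b"".join(Path(p).read_bytes() for p in paths)
    print(restored == blob)  # True: the shards reassemble losslessly
```

In practice frameworks often shard along tensor or rank boundaries rather than raw bytes, but the storage-side benefit, many smaller parallel writes instead of one huge one, is the same.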
// Bio
Full-stack Machine Learning Engineer, currently working on infrastructure for LLM training, with previous experience in ML for Ads, Speech, and Tax.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Simon on LinkedIn: https://www.linkedin.com/in/simon-karasik/
Timestamps:
[00:00] Simon's preferred beverage
[01:23] Takeaways
[04:22] Simon's tech background
[08:42] Zombie models garbage collection
[10:52] The road to LLMs
[15:09] Trained models Simon worked on
[16:26] LLM Checkpoints
[20:36] Confidence in AI Training
[22:07] Different Checkpoints
[25:06] Checkpoint parts
[29:05] Slurm vs Kubernetes
[30:43] Storage choices lessons
[36:02] Paramount components for setup
[37:13] Argo workflows
[39:49] Kubernetes node troubleshooting
[42:35] Cloud virtual machines have pre-installed monitoring
[45:41] Fine-tuning
[48:16] Storage, networking, and complexity in network design
[50:56] Start simple before advanced; consider model needs.
[53:58] Join us at our first in-person conference on June 25 all about AI Quality