

Insights from Building AI Systems At Google Scale: In Conversation With Kyle Nesbit
Dec 10, 2024
Kyle Nesbit, a longtime Googler with 17 years of experience in AI and distributed systems, shares insights about building AI at scale. He discusses the challenges of transitioning traditional engineering teams to LLM development and underscores the importance of starting with solid evaluation metrics. Kyle covers strategies for iterative improvement, tackling data discovery issues, and maintaining product quality while scaling. He also offers a look at the real story behind AI demos and the complexities of integration within large organizations.
Early LLM Experiences
- Kyle Nesbit worked on a receipt scanning project involving LSTMs and RNNs, which required extensive hand-tuning.
- Transformers simplified the process, replacing much of the manual effort and improving NLP tasks.
LLM Ease vs. Fundamentals
- LLMs have made achieving decent quality easier, but some fundamental ML practices are being overlooked.
- Overemphasis on scaling might be ending, shifting focus back to engineering and fine-tuning.
Improving LLM Quality
- Focus on specific use cases such as Natural Language to SQL, where out-of-the-box quality tends to plateau.
- Careful engineering, RAG systems, and intent analysis become crucial for hyper-optimizing specific problems.
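The RAG-plus-intent-analysis pattern mentioned above can be sketched in miniature: retrieve the schema context most relevant to the user's question, then build a constrained prompt for SQL generation. This is an illustrative assumption, not the pipeline discussed in the episode; the schemas, the keyword-overlap retriever, and the prompt shape are all toy stand-ins.

```python
# Minimal sketch of a RAG-style Natural Language to SQL pipeline.
# All names here (schemas, retriever, prompt shape) are illustrative
# assumptions, not from the episode.

SCHEMAS = {
    "orders": "orders(id, customer_id, total, created_at)",
    "customers": "customers(id, name, region)",
    "products": "products(id, name, price)",
}

def retrieve_schemas(question: str, k: int = 2) -> list[str]:
    """Toy retriever: rank table schemas by keyword overlap with the question."""
    words = set(question.lower().split())
    def score(schema: str) -> int:
        tokens = schema.replace("(", " ").replace(")", " ").replace(",", " ").split()
        return len(words & set(tokens))
    ranked = sorted(SCHEMAS.values(), key=score, reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Retrieved context plus tight instructions stand in for intent analysis."""
    context = "\n".join(retrieve_schemas(question))
    return (
        "You write SQL for the schema below. Answer with one SELECT only.\n"
        f"Schema:\n{context}\n"
        f"Question: {question}\nSQL:"
    )

prompt = build_prompt("total orders per customer region")
print(prompt)
```

In a real system the keyword retriever would be replaced with embedding search over table and column descriptions, but the structure — retrieve, then prompt with only the relevant context — is the part that matters for quality on a narrow use case.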