Insights from Building AI Systems At Google Scale: In Conversation With Kyle Nesbit
Dec 10, 2024
Kyle Nesbit, a longtime Googler with 17 years of experience in AI and distributed systems, shares invaluable insights about building AI at scale. He discusses the challenges of transitioning traditional engineering teams to embrace LLM development and underscores the importance of starting with solid evaluation metrics. Kyle reveals strategies for iterative improvement, tackling data discovery issues, and balancing product quality during scaling. He also provides a peek into the real story behind AI demos and the complexities of integration within organizations.
Transitioning teams to AI-focused workflows requires a careful blend of iterative feedback loops and a deep understanding of evaluation metrics.
The advent of transformer models has simplified LLM development, but a renewed focus on fundamental ML principles is still necessary.
Semantic data models are pivotal for enhancing AI applications in business intelligence, making data more accessible and user-friendly for analysis.
Deep dives
The Importance of Iterative Problem Framing
Addressing complex problems requires a continuous iterative approach rather than a one-time solution. As users interact with products, feedback and metrics must be regularly reviewed to identify gaps in quality and engagement. This iterative process helps refine metrics and improve data collection, ensuring that the product evolves with user needs. Adapting to new signals from users is crucial for maintaining high standards in product offerings.
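The feedback loop described above can be sketched in a few lines. This is an illustrative example only, not anything from the episode; the names (`Interaction`, `evaluate_release`, `REVIEW_THRESHOLD`) and the rating-based metric are all assumptions chosen to show the shape of an iterative review cycle.

```python
# Hypothetical sketch of an iterative evaluation loop: each release,
# aggregate user feedback into metrics, flag quality gaps, and feed the
# worst interactions back into the eval set for the next iteration.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    response: str
    user_rating: int  # 1-5, collected from in-product feedback

REVIEW_THRESHOLD = 4.0  # average rating below this triggers a deep dive

def evaluate_release(interactions: list[Interaction]) -> dict:
    """Turn raw feedback into signals that guide the next iteration."""
    avg = sum(i.user_rating for i in interactions) / len(interactions)
    low = [i for i in interactions if i.user_rating <= 2]
    return {
        "avg_rating": avg,
        "needs_review": avg < REVIEW_THRESHOLD,
        "worst_examples": low,  # candidates for new eval cases
    }

sample = [
    Interaction("summarize Q3 sales", "Sales rose 12%...", 5),
    Interaction("explain churn spike", "Churn is when...", 2),
]
report = evaluate_release(sample)
print(report["avg_rating"], report["needs_review"])
```

The point of the sketch is the cycle, not the metric: low-rated interactions become new evaluation cases, so the eval set grows with real user signals.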
Experiences with Large Language Models
Early experiences with large language models (LLMs) marked a significant shift from older architectures such as LSTMs and RNNs, which demanded extensive hand tuning. The release of transformer models drastically simplified development, enabling strong results with far less effort. However, the ease of achieving decent performance has weakened teams' grasp of the fundamental ML principles that earlier approaches made essential. This shift suggests a need to pair advanced model capabilities with a return to more engineering-driven discipline.
Semantic Data Models and User Engagement
The concept of semantic data models is emerging as a critical area for AI applications, particularly in business intelligence. By transforming raw data into familiar representations for users, such models enhance accessibility and usability. This approach enables users to perform more flexible analyses, including natural language inquiries about their data. Integrating AI-driven solutions with semantic models ultimately aims to improve user satisfaction and effectiveness in data handling.
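A minimal sketch of the idea, with entirely hypothetical names (`SEMANTIC_MODEL`, `resolve_metric`, the table and column names): business-friendly metric definitions sit on top of raw tables, so a natural-language or AI-driven query can be resolved against familiar concepts instead of raw schemas. Real semantic layers are far richer than this keyword match.

```python
# Illustrative semantic layer (not from the episode): each entry maps a
# business term to the table and SQL expression that compute it.
SEMANTIC_MODEL = {
    "revenue": {
        "table": "fct_orders",
        "expression": "SUM(order_total_usd)",
        "description": "Total order value in USD",
    },
    "active customers": {
        "table": "dim_customers",
        "expression": "COUNT(DISTINCT customer_id)",
        "description": "Customers with at least one order",
    },
}

def resolve_metric(question: str) -> str:
    """Match a user question to a defined metric and emit SQL for it."""
    for name, spec in SEMANTIC_MODEL.items():
        if name in question.lower():
            return f"SELECT {spec['expression']} FROM {spec['table']}"
    raise KeyError("no metric in the semantic model matches this question")

print(resolve_metric("What was our revenue last quarter?"))
# (time filtering and joins are omitted to keep the sketch small)
```

Because the model, not the user, owns the mapping from "revenue" to the correct table and expression, an AI layer built on top answers against vetted definitions rather than guessing at raw column names.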
The Dangers of Overhype and AI Demos
The current trend of AI demos tends to exaggerate capabilities, often leading to skepticism among potential users and investors. While demonstrations are valuable for communicating concepts, they often lack substance and realism in practical applications. Developers and stakeholders are encouraged to focus on specific metrics and quality evaluations rather than relying heavily on flashy presentations. Companies that demonstrate a thoughtful understanding of evaluation may hold more promise for future development.
Prioritizing Challenges in AI Development
In the evolving landscape of AI development, it's essential to establish clear metrics for evaluating progress and performance in AI models. When faced with multiple performance issues, teams should focus on solving one specific problem at a time, ensuring that any changes made can be accurately measured. This disciplined approach to problem-solving, coupled with regular evaluations and reflections on workflows, can drive innovation while avoiding confusion and inefficiency. Ultimately, successful teams will prioritize deep dives into quality metrics to refine their products continually.
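The one-change-at-a-time discipline can be made concrete with a small, assumed example: hold the eval set fixed, vary a single component, and accept the change only if the measured metric improves. The metric (`exact_match_rate`) and the two prediction sets are stand-ins, not anything from the episode.

```python
# Hedged sketch: isolate one change (e.g. a single prompt tweak) and
# measure it against a fixed reference set before shipping.
def exact_match_rate(predictions: list[str], references: list[str]) -> float:
    """Fraction of outputs that exactly match the expected answer."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

references = ["42", "Paris", "blue"]
baseline_preds = ["42", "paris", "blue"]   # current system's outputs
candidate_preds = ["42", "Paris", "blue"]  # outputs after one isolated edit

baseline = exact_match_rate(baseline_preds, references)
candidate = exact_match_rate(candidate_preds, references)

# Ship only when the single isolated change measurably helps.
ship = candidate > baseline
print(f"baseline={baseline:.2f} candidate={candidate:.2f} ship={ship}")
```

Keeping the eval set and every other component fixed is what makes the comparison attributable to the one change under test.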
Kyle Nesbit, a longtime Googler and AI expert, joins us on Deployed to share lessons from his 17+ years at the forefront of distributed systems, machine learning, and AI-driven product innovation.
Kyle has helped build foundational technologies like BigQuery and worked on early large language model (LLM) development at Google, giving him a unique perspective on how teams can successfully transition from traditional engineering to modern AI-focused workflows.
In this episode, we explore:
The challenges and opportunities of transitioning traditional engineering teams to LLM development
Why starting with evaluation metrics is the foundation for success
Practical strategies for iterative improvement and guardrail design
How to balance product priorities and quality trade-offs when scaling AI systems
The real story behind AI demos and how to communicate progress effectively
Foundational issues in data discovery and access—and why solving them matters more than chasing trends