Google AI: Release Notes

Latest episodes

May 2, 2025 • 60min

Deep Dive into Long Context

Nikolay Savinov, a Staff Research Scientist at Google DeepMind, delves into the cutting-edge realm of long context in AI. He emphasizes the crucial role of large context windows in enhancing AI agents' performance. The discussion reveals the synergy between long context models and Retrieval Augmented Generation, addressing scaling challenges beyond 2 million tokens. Savinov also shares insights into optimizing context management, improving AI reasoning capabilities, and the future of long context technologies in enhancing user interactions.
Mar 28, 2025 • 28min

Launching Gemini 2.5

Tulsee Doshi, Head of Product for Gemini Models at Google, discusses the launch of Gemini 2.5 Pro, a cutting-edge multimodal thinking model. The conversation highlights its advanced reasoning and coding abilities, enabling the creation of complex web applications. Doshi elaborates on balancing academic evaluations with user satisfaction and shares community use cases that showcase its enhanced understanding of physics. The episode emphasizes the collaborative efforts behind the model’s development and the exciting enhancements motivated by user feedback.
Mar 20, 2025 • 37min

Gemini app: Canvas, Deep Research and Personalization

Dave Citron, Senior Director of Product Management at Google and the driving force behind the Gemini app, dives into the latest innovations like Canvas for collaborative content creation. He reveals how Deep Research is enhanced with new Thinking Models and automated reasoning, making it smarter and more efficient. Personalization takes center stage, too, showcasing how user preferences shape responses while balancing privacy concerns. Citron’s insights promise a future of seamless interactions tailored to every user.
Feb 24, 2025 • 1h 4min

Developing Google DeepMind's Thinking Models

Jack Rae, Principal Scientist at Google DeepMind, shares insights on advancing reasoning models like Gemini. He discusses how increased 'thinking time' enhances model performance and the significance of long context in language modeling. Rae also highlights the evolution from gaming memory systems to real-world AI applications, emphasizing the need for developer feedback and user interaction. The conversation delves into practical uses, the future of AI reasoning, and innovative evaluation methods that reflect real-world scenarios.
Dec 11, 2024 • 35min

Behind the Scenes of Gemini 2.0

Tulsee Doshi, model product lead for Gemini at Google, shares insights on the groundbreaking Gemini 2.0. She discusses the model's significant improvements over its predecessor, including enhanced multimodal capabilities and native tool use, which boost productivity in Google products. Doshi highlights the thrill of launching experimental models while emphasizing the importance of user feedback in refining AI technology. The conversation also unveils innovations like function calling and sophisticated AI agents that lead to richer, personalized user experiences.
Dec 5, 2024 • 43min

Smaller, Faster, Cheaper & The Story of Flash 8B

Emanuel Taropa, a leading developer of Google’s Gemini AI, shares his expertise on the technical intricacies of large language models. He discusses the challenges and triumphs during the launch of the Flash 8B model, emphasizing the shift to smaller, cost-effective models for enhanced accessibility. The conversation also touches on the art of naming models and how these names can inspire innovation amidst launch pressures. Taropa reflects on the teamwork and culture at Google that fuels ongoing advancements in AI technology.
