

Training Data
Sequoia Capital
Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies, and their implications for technology, business and society.
The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.
Episodes

126 snips
Jan 21, 2026 • 40min
Context Engineering Our Way to Long-Horizon Agents: LangChain’s Harrison Chase
Harrison Chase, cofounder of LangChain and a pioneer in AI agent frameworks, dives into the world of long-horizon agents capable of autonomous operation. He explains how context engineering has become vital for agent development, emphasizing improvements in harnesses over mere model upgrades. Harrison shares fascinating applications of coding agents and highlights the importance of traces as new sources of truth. He also contrasts building agents with traditional software, revealing insights into memory and self-improvement mechanisms that set them apart.

90 snips
Jan 14, 2026 • 37min
How Recursive Intelligence’s Founders Are Using AI to Shape the Future of Chip Design
Azalia Mirhoseini and Anna Goldie, co-founders of Recursive Intelligence, revolutionized chip design at Google with AlphaChip, drastically speeding up the process. They discuss how chip design bottlenecks hinder AI's progress and their vision for 'designless' custom silicon, making it accessible for all companies. The duo shares insights into using AI for advanced placement, novel chip shapes, and recursive self-improvement, where AI enhances its own designs. Their optimism for AGI and a 'Cambrian explosion' of custom silicon applications paints an exciting future for technology.

147 snips
Jan 6, 2026 • 1h 2min
Training General Robots for Any Task: Physical Intelligence’s Karol Hausman and Tobi Springenberg
Karol Hausman and Tobi Springenberg from Physical Intelligence discuss the groundbreaking potential of robotic foundation models. They argue that the intelligence bottleneck, not hardware, limits robotics and explain their mission to create models capable of performing diverse tasks. The duo dives into their end-to-end learning approach, emphasizing recent improvements in reinforcement learning and real-world deployment. Insights into unexpected applications from open-sourced models and the aspiration for continual robot learning highlight a pivotal shift in intelligent machine design.

146 snips
Dec 16, 2025 • 38min
Why the Next AI Revolution Will Happen Off-Screen: Samsara CEO Sanjit Biswas
Sanjit Biswas, Co-founder and CEO of Samsara, shares his unique insights on scaling AI in the physical world. He explains the key differences between physical AI and cloud-based systems, especially how real-world data like weather influences outcomes. Sanjit also reveals how Samsara leverages extensive driving data to enhance safety and efficiency, while highlighting the role of AI in coaching frontline workers. He discusses the future of autonomy in various industries, emphasizing edge computing's advantages in operational control.

69 snips
Dec 10, 2025 • 1h 2min
The Rise of Generative Media: fal's Bet on Video, Infrastructure, and Speed
In this conversation, Gorkem Yurtseven, co-founder of fal, and Batuhan Taskaya, Head of Engineering, share insights into the rapidly evolving world of generative media. They delve into the computational challenges of video models compared to LLMs and discuss the performance gains from fal's tracing compiler and custom kernels. The team highlights the booming demand from AI-native studios and the future of generative video in educational contexts. They also explore how rapid iteration in video model design is shaping the landscape, paving the way for new creative possibilities.

122 snips
Dec 2, 2025 • 40min
Why IDEs Won't Die in the Age of AI Coding: Zed Founder Nathan Sobo
Nathan Sobo, founder of Zed and former lead of GitHub's Atom, dives into the world of integrated development environments (IDEs) and AI coding collaboration. He argues that despite the rise of terminal-based tools, IDEs remain invaluable for keeping code readable to humans. Nathan shares insights on the Agent Client Protocol, which lets diverse AI tools work with Zed seamlessly. He also discusses the future of coding conversations linked to fine-grained edits, envisioning a more interactive way for developers to collaborate and innovate.

102 snips
Nov 18, 2025 • 42min
How End-to-End Learning Created Autonomous Driving 2.0: Wayve CEO Alex Kendall
Alex Kendall, Founder and CEO of Wayve, discusses his revolutionary approach to autonomous driving. He explains how end-to-end deep learning can replace traditional methods, enabling rapid adaptations across cities. Kendall delves into the power of world models for reasoning in complex scenarios and the significance of partnerships with automotive manufacturers for scaling benefits. He highlights the potential of AI breakthroughs, including language integration, to open new avenues for driving technology and transform the physical economy.

201 snips
Nov 11, 2025 • 44min
How Google’s Nano Banana Achieved Breakthrough Character Consistency
Nicole Brichtova, the product lead for Google's Nano Banana, and Hansa Srinivasan, the engineering lead, delve into the groundbreaking character consistency of their AI image model. They share their journey of creating a platform where users can see themselves in vibrant AI worlds. The duo emphasizes the importance of human evaluation and data quality, reveals unexpected community uses, and discusses the model's whimsical name. They also touch on the future of visual AI, advocating for accessibility and creativity as essential companions in technology.

69 snips
Nov 6, 2025 • 1h
OpenAI Sora 2 Team: How Generative Video Will Unlock Creativity and World Models
Bill Peebles, head of OpenAI's Sora team and co-creator of the diffusion transformer, discusses compressing filmmaking timelines from months to days. Along with Thomas Dimson, who optimizes for creative engagement, and Rohan Sahai, product lead focused on user diversity, they explore how Sora’s innovative tech redefines video creation. Topics include the design against mindless scrolling, future world simulators for scientific breakthroughs, and the potential for AI-generated content to win awards, all while aiming to democratize creativity.

163 snips
Oct 28, 2025 • 42min
Nvidia CTO Michael Kagan: Scaling Beyond Moore's Law to Million-GPU Clusters
Michael Kagan, CTO of NVIDIA and co-founder of Mellanox, discusses the transformative impact of Mellanox on NVIDIA's AI infrastructure. He delves into the technical challenges of scaling GPU clusters to million-GPU data centers and emphasizes that network performance is key to efficiency, not just raw compute power. Kagan envisions AI as a 'spaceship of the mind' that could unlock new physics laws. He also explores the differences in training versus inference workloads and the critical role of high-performance networking in enhancing data center operations.


