Latent Space AI

Nvidia States Its Latest GPUs Leave Google’s TPUs a Generation Behind

Nov 27, 2025
The hosts dive into Nvidia's assertive claim that its GPUs outpace Google's TPUs by a generation. They weigh the TPU architecture against Nvidia's broader ecosystem advantages, and analyze the market reaction, including Nvidia's stock dip after the TPU news, to map the competitive landscape. The debate over flexibility highlights the contrast between Nvidia's general-purpose chips and Google's ASIC designs. Finally, they touch on how Google's Gemini is trained on TPUs yet also served on Nvidia GPUs, showcasing the intricate interplay between the two technologies.
INSIGHT

NVIDIA Frames A Generational Lead

  • NVIDIA claims its GPUs are a generation ahead of Google's TPUs based on datacenter performance.
  • Jaeden Schafer notes NVIDIA emphasizes software, infrastructure, and flexibility as competitive advantages.
INSIGHT

Architecture Vs. Ecosystem Advantage

  • TPUs may have a superior architecture for AI but NVIDIA's ecosystem multiplies GPU value.
  • Jaeden highlights that NVIDIA's software and multi-chip systems make GPUs broadly useful beyond pure AI tasks.
INSIGHT

TPUs Are Highly Specialized

  • Google's TPUs are ASICs optimized specifically for training AI models, making them cost-effective for that task.
  • Jaeden explains TPUs are in-house for Google and rented via Google Cloud rather than sold like NVIDIA GPUs.