
The New Stack Podcast

Latest episodes

May 22, 2025 • 18min

Google Cloud Next Wrap-Up

Janakiram MSV, an independent analyst specializing in cloud computing and AI, joins a vibrant discussion on the latest from Google Cloud Next. The focus is on the rapid evolution of AI agents, particularly Google’s new Agent Development Kit, which positions it against competitors like Microsoft and OpenAI. The conversation highlights the challenges companies face in integrating AI into workflows and emphasizes Google's commitment to full-stack AI development with innovative tools. The synergy between Kubernetes and AI is also explored, revealing a new ecosystem in the making.
May 20, 2025 • 19min

Agentic AI and A2A in 2025: From Prompts to Processes

Kevin Laughridge, a Deloitte consultant with 20 years of experience, discusses the future of AI, particularly Agentic AI, which goes beyond traditional generative models to enable autonomous business processes. He highlights the shift from AI pilots to full-scale deployment, facilitated by Google's new Agent2Agent protocol, allowing seamless communication between AI agents. Laughridge emphasizes the importance of a robust AI platform for integrating tools across systems and the evolving role of developers from coding to architecting adaptive AI solutions.
May 15, 2025 • 21min

Your AI Coding Buddy Is Always Available at 2 a.m.

Aja Hammerly, Director of Developer Relations at Google, discusses the transformative power of AI in coding. She envisions AI as a virtual coding partner, ready to assist even during late-night projects. Hammerly emphasizes starting with simple AI tools for code writing, advocating for a tailored approach to integrate AI into personal workflows. She highlights Firebase Studio, showcasing how it simplifies app development with automation and collaboration, allowing developers to focus on creative aspects while AI handles repetitive tasks.
May 13, 2025 • 20min

Google AI Infrastructure PM On New TPUs, Liquid Cooling and More

Chelsie Czop, Senior Product Manager for AI Infrastructure at Google Cloud, dives into cutting-edge developments in AI hardware. She discusses the impressive new Ironwood TPUs, boasting 42.5 exaflops, and the advancements in liquid cooling, essential for managing heat. Chelsie weighs in on the ongoing TPU-versus-GPU debate, noting significant performance boosts for some users. She also highlights the collaboration with DeepMind to stay ahead of evolving model architectures and the sustainable innovations shaping the future of data centers.
May 8, 2025 • 24min

Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure

Bobby Allen, the 'cloud therapist' and Group Product Manager for Google Kubernetes Engine, discusses the essential role of GKE in supporting AI workloads like Vertex AI. He emphasizes how Kubernetes enables efficient operation with features that ensure high availability and secure orchestration. Bobby highlights the importance of flexibility in cloud-native infrastructure, the ongoing trend of using AI in operational efficiency, and how organizations are increasingly seeking seamless integration of advanced hardware like GPUs and TPUs for optimized performance.
May 6, 2025 • 31min

VMware's Kubernetes Evolution: Quashing Complexity

Paul Turner, Vice President of Products for VMware Cloud Foundation at Broadcom, discusses the evolving landscape of Kubernetes and virtualization. He reveals how VMware's integrated solutions simplify infrastructure management, allowing developers to focus on coding rather than troubleshooting. Turner emphasizes AI's impact on nearly half of Kubernetes deployments and details VMware's partnership with Nvidia for GPU virtualization. He also highlights VMware's commitment to open-source projects, ensuring Kubernetes remains cloud-independent and tailored for efficient AI workloads.
May 5, 2025 • 5min

Prequel: Software Errors Be Gone

Discover how Prequel is shaking up the software reliability scene by democratizing error detection. Its co-founders, both ex-NSA engineers, unveil an innovative approach using Common Reliability Enumerations (CREs) to pinpoint performance issues. Dive into the community-driven tools that let developers quickly build and share bug detectors. With the rapid rise of AI development and third-party integrations, Prequel aims to give engineers actionable insights rather than just symptoms, challenging the dominance of traditional observability giants.
May 1, 2025 • 18min

Arm’s Open Source Leader on Meeting the AI Challenge

Andrew Wafaa, Fellow and Senior Director for Software Communities at Arm, advocates for open source as the default in tech. He reveals Arm's commitment to enhancing essential projects like the Linux kernel and PyTorch. Wafaa challenges the supremacy of GPUs in AI, arguing that advanced CPUs with Arm’s technologies often outperform them. He champions PyTorch as the cornerstone of open source AI development, urging for broader community involvement to ensure sustainability in this rapidly evolving landscape.
Apr 29, 2025 • 17min

Why Kubernetes Cost Optimization Keeps Failing

In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails, leading to 70–80% resource waste as well as performance issues. Developers typically prioritize application performance over operational cost, and AI workloads strain resources further. Existing optimization tools offer static recommendations that quickly become outdated as workloads shift, risking downtime. Shafrir argued that real-time, fully automated solutions like ScaleOps' platform are crucial: by dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste, shifting Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap.

Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs: "ScaleOps Adds Predictive Horizontal Scaling, Smart Placement" and "ScaleOps Dynamically Right-Sizes Containers at Runtime."
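The static sizing the episode describes is typically expressed as fixed requests and limits on each container. A minimal, illustrative Kubernetes manifest is sketched below; the workload name, image, and values are hypothetical, not ScaleOps recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app                     # hypothetical workload
spec:
  containers:
    - name: app
      image: example/web-app:1.0    # placeholder image
      resources:
        requests:                   # what the scheduler reserves on a node
          cpu: "500m"
          memory: 512Mi
        limits:                     # hard ceiling; exceeding memory triggers an OOM kill
          cpu: "1"
          memory: 1Gi
```

Because these values are fixed at deploy time, teams tend to size them for peak load and over-provision the rest of the day; dynamic right-sizing tools instead adjust them continuously from observed consumption.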
Apr 24, 2025 • 18min

How Heroku Is ‘Re-Platforming’ Its Platform

Betty Junod, the CMO and SVP at Heroku, shares insights on Heroku's ambitious re-platforming efforts. She discusses the shift to Kubernetes and open-source engagement, including becoming a platinum member of the CNCF. Heroku's new features like Cloud Native Buildpacks allow developers to create container images more efficiently. Additionally, Betty highlights innovations aimed at supporting data scientists and the impact of AI and 'vibe coding' on developer productivity. This transformation aligns Heroku with contemporary cloud-native needs.
