
Eye On A.I. #308 Christopher Bergey: How Arm Enables AI to Run Directly on Devices
Dec 19, 2025

Christopher Bergey, Executive VP at Arm, shares insights from his 30-year semiconductor career. He discusses the shift of AI from the cloud to devices, emphasizing the practicality of edge AI across smartphones and wearables. Learn how Arm's v9 architecture enhances AI inference, and why memory bandwidth is a critical challenge. Bergey also covers the role of heterogeneous computing, the importance of latency and security, and real-world examples from smart cameras to robotics, predicting a future where AI is embedded in everything we use.
AI Snips
V9 Brings AI To Familiar CPUs
- Arm's V9 architecture targets security, performance, and AI to enable edge intelligence.
- SME (the Scalable Matrix Extension) brings CPU-friendly AI without forcing an accelerator-only programming model.
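
To make the "CPU-friendly" point concrete, here is a minimal sketch (not Arm's API; it assumes NumPy backed by whatever BLAS the platform provides): the application writes an ordinary matrix multiply, and any matrix-instruction acceleration such as SME lives below the library boundary, so the programming model does not change.

```python
# Illustrative sketch of the CPU-path programming model, not Arm-specific code.
# The application-level kernel is a plain matmul; whether the underlying BLAS
# uses newer matrix instructions is invisible at this layer (an assumption
# about the software stack, not a claim about any specific BLAS build).
import numpy as np

def attention_scores(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Toy attention-score kernel: one matrix multiply plus a scale factor."""
    d = q.shape[-1]
    return (q @ k.T) / np.sqrt(d)

q = np.random.rand(128, 64).astype(np.float32)
k = np.random.rand(128, 64).astype(np.float32)
print(attention_scores(q, k).shape)  # (128, 128)
```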
Heterogeneity And Memory Are Key
- Heterogeneous computing mixes CPUs, GPUs, and NPUs because different tasks need different trade-offs.
- Memory bandwidth, not raw compute, is often the limiting factor for AI systems.
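
A back-of-envelope roofline check illustrates why bandwidth often dominates; the compute and bandwidth figures below are assumed round numbers for the sake of the arithmetic, not specifications of any Arm part.

```python
# Roofline-style estimate for one decode step of a large language model:
# every 16-bit weight is read once per token, so the memory-bound time
# dwarfs the compute-bound time under these assumed peak figures.
peak_flops = 4e12        # assumed peak compute, FLOP/s
peak_bw    = 100e9       # assumed DRAM bandwidth, bytes/s

params      = 7e9                   # assumed model size (parameters)
bytes_moved = params * 2            # fp16 weights read once per token
flops       = params * 2            # one multiply + one add per weight

t_compute = flops / peak_flops      # time if compute were the limit
t_memory  = bytes_moved / peak_bw   # time if bandwidth were the limit

print(f"compute-bound: {t_compute * 1e3:.1f} ms")  # ~3.5 ms
print(f"memory-bound:  {t_memory * 1e3:.0f} ms")   # ~140 ms
```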
Move Workloads Only When Worthwhile
- Prefer CPUs when developer friendliness and flexibility outweigh accelerator gains.
- Move workloads to accelerators only when you get large performance uplifts that justify the complexity.
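
As a sketch of that trade-off, here is a hypothetical placement heuristic (names, latencies, and the threshold are illustrative, not from the episode): offload only when the end-to-end speedup, including transfer overhead, clears a deliberately high bar, otherwise stay on the flexible CPU path.

```python
# Hypothetical CPU-vs-accelerator placement heuristic illustrating the snip
# above: the accelerator must win by a wide margin once transfer cost and
# added complexity are counted.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cpu_ms: float        # measured CPU latency
    accel_ms: float      # measured accelerator kernel latency
    transfer_ms: float   # cost of moving tensors to/from the accelerator

def place(c: Candidate, min_speedup: float = 3.0) -> str:
    """Return 'accelerator' only if the end-to-end uplift is large."""
    effective_accel = c.accel_ms + c.transfer_ms
    speedup = c.cpu_ms / effective_accel
    return "accelerator" if speedup >= min_speedup else "cpu"

print(place(Candidate("image-preproc", cpu_ms=4.0, accel_ms=0.8, transfer_ms=2.5)))   # cpu
print(place(Candidate("conv-backbone", cpu_ms=40.0, accel_ms=5.0, transfer_ms=2.5)))  # accelerator
```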
