Nandan Nayampally, Chief Commercial Officer at Baya Systems, shares insights from his extensive background in chip design at ARM and on Amazon's Alexa team. He argues that data movement, not just processing speed, is the key bottleneck in AI chips. The conversation covers network-on-chip design, silicon photonics for faster data transfer, and the arc of computing from punch cards to neuromorphic hardware. Nayampally emphasizes the need for smarter architecture to meet future AI demands.
Duration: 46:33
INSIGHT: Data Movement Bottleneck in AI Chips
The biggest bottleneck in AI chip performance is data movement, not just computation speed.
Most of the energy use and cost comes from moving data between memory and processors, which is why efficient data-flow architectures matter more than ever.
INSIGHT: Chiplets Improve Yield and Speed
Chiplet design lets manufacturers combine best-in-class components, such as CPUs and GPUs, within a single package. Because several small dies yield better than one large monolithic die, this approach also improves manufacturing yield and time to market.
What if the biggest challenge in AI isn't how fast chips can compute, but how quickly data can move? In this episode of Eye on AI, Nandan Nayampally, Chief Commercial Officer at Baya Systems, shares how the next era of computing is being shaped by smarter architecture, not just raw processing power. With experience leading teams at ARM, Amazon Alexa, and BrainChip, Nandan brings a rare perspective on how modern chip design is evolving. We dive into the world of chiplets, network-on-chip (NoC) technology, silicon photonics, and neuromorphic computing. Nandan explains why the traditional path of scaling transistors is no longer enough, and how Baya Systems is solving the real bottlenecks in AI hardware through efficient data movement and modular design. From punch cards to AGI, this conversation maps the full arc of computing innovation. If you want to understand how to build hardware for the future of AI, this episode is a must-listen.

Subscribe to Eye on AI for more conversations on the future of artificial intelligence and system design.

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

Chapters:
(00:00) Why AI's Bottleneck Is Data Movement
(01:26) Nandan's Background and Semiconductor Career
(03:06) What Baya Systems Does: Network-on-Chip + Software
(08:40) A Brief History of Computing: From Punch Cards to AGI
(11:47) Silicon Photonics and the Evolution of Data Transfer
(20:04) How Baya Is Solving Real AI Hardware Challenges
(22:13) Understanding CPUs, GPUs, and NPUs in AI Workloads
(24:09) Building Efficient Chips: Cost, Speed, and Customization
(27:17) Performance, Power, and Area (PPA) in Chip Design
(30:55) Partnering to Build Next-Gen Photonic and Copper Systems
(32:29) Why Moore's Law Has Slowed and What Comes Next
(34:49) Wafer-Scale vs Traditional Die: Where Baya Fits In
(36:10) Chiplet Stacking and Composability Explained
(39:44) The Future of On-Chip Networking
(41:10) Neuromorphic Computing: Energy-Efficient AI
(43:02) Edge AI, Small Models, and Structured State Spaces