

Computer Architecture Podcast
comparchpodcast
A show that brings you closer to the cutting edge in computer architecture and the remarkable people behind it. Hosted by Dr. Suvinay Subramanian, a computer architect in Google’s Systems Infrastructure group working on the design of Google’s machine learning accelerators (TPUs), and Dr. Lisa Hsu, who is semi-retired and works part-time in Reality Labs Research at Meta on optics and display technologies for AR.
Episodes

Dec 10, 2025 • 1h 3min
Ep 22: Measuring Datacenter Efficiency and Visioning the Future of Computer Architecture with Dr. Babak Falsafi, EPFL
Dr. Babak Falsafi, a professor at EPFL and president of the Swiss Data Center Efficiency Association, delves into the intricate world of data center efficiency. He highlights the challenges of measuring energy flows, contrasting established facility-level metrics like PUE with the far less visible energy use of the IT equipment itself. Babak warns against overprovisioning by customers and discusses the benefits of emerging technologies like Rack2.0 for disaggregated computing. Sharing insights on GPU utilization for AI, he emphasizes the importance of co-design in crafting sustainable, efficient architectures for the future.

Oct 1, 2025 • 1h 1min
Ep 21: High-assurance Computer Architectures with Dr. Caroline Trippel, Stanford University
Dr. Caroline Trippel is an Assistant Professor in the Computer Science and Electrical Engineering Departments at Stanford University. Caroline's research operates at the critical intersection of hardware and software, focusing on developing high-assurance computer architectures. Her work tackles the challenge of ensuring that complex hardware designs are correct and secure. She has pioneered automated tools that bridge the gap between a processor's implementation (its RTL) and its formal specification, as well as frameworks and compilers that find and mitigate hardware-related security vulnerabilities in software.

Jun 17, 2025 • 1h 8min
Ep 20: The Tech Transfer Playbook – Bridging Research to Production with Dr. Ricardo Bianchini, Microsoft
Dr. Ricardo Bianchini is a Technical Fellow and Corporate Vice President at Microsoft Azure, where he leads the team responsible for managing Azure’s compute workload, server capacity, and datacenter infrastructure with a strong focus on efficiency and sustainability. Before joining Azure, Ricardo led the Systems Research Group and the Cloud Efficiency team at Microsoft Research (MSR). He created research projects in power efficiency and intelligent resource management that resulted in large-scale production systems across Microsoft. Prior to Microsoft, he was a Professor at Rutgers University, where he conducted research in datacenter power and energy management, cluster-based systems, and other cloud-related topics. Ricardo is a Fellow of both the ACM and IEEE.

Mar 17, 2025 • 1h 8min
Ep 19: Memory Management and Software Reliability with Dr. Arkaprava Basu, Indian Institute of Science
Dr. Arkaprava Basu, an Associate Professor at the Indian Institute of Science, specializes in memory management and software reliability for CPUs and GPUs. He discusses innovative strategies for optimizing memory systems in chiplet-based GPUs. The conversation spans the transition from CPU to GPU programming, highlighting synchronization issues and the significance of reliability. Arka emphasizes tools that improve performance and correctness in GPU programming, while also reflecting on mentorship's impact in shaping future talent in systems research.

Dec 17, 2024 • 54min
Ep 18: Codesign for Industrial Robotics and the Startup Pivot with Dr. Dan Sorin, Duke University
Dr. Dan Sorin is a Professor of Electrical and Computer Engineering at Duke University, and a co-founder of Realtime Robotics. Dan is widely known for his pioneering work in memory systems. He has co-authored the seminal Primer on Memory Consistency and Cache Coherence, which has become a foundational resource for students and researchers alike. Dan’s contributions span from developing resilient systems that tolerate hardware faults to innovations in cache coherence protocols, and his work has been recognized with multiple best paper awards and patents. His work at Realtime Robotics has pushed the boundaries of autonomous motion planning, enabling real-time decision-making in dynamic environments.

Sep 3, 2024 • 60min
Ep 17: Architecture 2.0 and AI for Computer Systems Design with Dr. Vijay Janapa Reddi, Harvard University
Dr. Vijay Janapa Reddi is an Associate Professor at Harvard University, and Vice President and Co-founder of MLCommons. He has made substantial contributions to mobile and edge computing systems, and played a key role in developing the MLPerf Benchmarks. Vijay has authored the machine learning systems book mlsysbook.ai, as part of his twin passions of education and outreach. He received the IEEE TCCA Young Computer Architect Award in 2016, has been inducted into the MICRO and HPCA Halls of Fame, and is a recipient of multiple best paper awards.

Jun 19, 2024 • 1h 7min
Ep 16: Sustainability in a Post-AI World with Dr. Carole-Jean Wu, Meta
Dr. Carole-Jean Wu is a Director of AI Research at Meta. She is a founding member and a Vice President of MLCommons – a non-profit organization that aims to accelerate machine learning innovations for the benefit of all. Dr. Wu also serves on the MLCommons Board as a Director, chaired the MLPerf Recommendation Benchmark Advisory Board, and co-chaired MLPerf Inference. Prior to Meta/Facebook, Dr. Wu was a professor at Arizona State University (ASU). She earned her M.A. and Ph.D. degrees in Electrical Engineering from Princeton University and a B.Sc. degree in Electrical and Computer Engineering from Cornell University. Dr. Wu’s expertise sits at the intersection of computer architecture and machine learning. Her work spans datacenter infrastructures and edge systems, including energy- and memory-efficient systems and microarchitectures, optimizing systems for machine learning execution at scale, and designing learning-based approaches for system design and optimization. She is passionate about pathfinding and tackling system challenges to enable efficient and responsible AI technologies.

Mar 28, 2024 • 1h 3min
Ep 15: The Hardware Startup Experience from Business Case to Software with Dr. Karu Sankaralingam, University of Wisconsin-Madison/Nvidia
Dr. Karu Sankaralingam is a Professor at the University of Wisconsin-Madison, an entrepreneur and inventor, and a Principal Research Scientist at NVIDIA. His work has been featured in industry forums of Mentor and Synopsys, and has been covered by the New York Times, Wired, and IEEE Spectrum. In 2017 he founded the hardware startup SimpleMachines, which developed chip designs applying dataflow computing to push the limits of AI generality in hardware and built the Mozart chip. In his career, he has led three chip projects: Mozart (a 16nm, HBM2-based design), the MIAOW open-source GPU on FPGA, and the TRIPS chip as a student during his PhD. In his research he has pioneered the principles of dataflow computing, focusing on the role of architecture, microarchitecture, and the compiler. He has published over 100 research papers, including 9 award papers, has graduated 9 PhD students, and is an inventor on 21 patents. He is a Fellow of the IEEE.

Dec 6, 2023 • 1h 5min
Ep 14: System Design for Exascale Computing and Advanced Memory Technologies with Dr. Gabriel Loh, AMD
Dr. Gabriel Loh is a Senior Fellow at AMD Research and Advanced Development. Gabe is known for his contributions to 3D die-stacked architectures, memory organization and caching techniques, and chiplet multicore architectures. His ideas have influenced multiple commercial products and industry standards. He is a recipient of ACM SIGARCH’s Maurice Wilkes Award, a Hall of Fame member of MICRO, HPCA, and ISCA, and a recipient of the NSF CAREER award.

Sep 27, 2023 • 58min
Ep 13: Energy-efficient Algorithm-hardware Co-design with Dr. Vivienne Sze, MIT
Dr. Vivienne Sze is an Associate Professor in the EECS department at MIT. Vivienne is recognized for her leading work on energy-efficient computing systems spanning a wide range of domains, from video compression to machine learning, robotics, and digital health. She received the DARPA Young Faculty Award, the Edgerton Faculty Award, faculty grants from Google, Facebook, and Qualcomm, and a Primetime Engineering Emmy as a member of the team that developed the High Efficiency Video Coding (HEVC) standard.


