

171: GreyBeards talk Storage.AI with Dr. J Metz, SNIA Chair and Technical Director, AMD
SNIA’s Storage Developer Conference (SDC) was held last week in California and, although I didn’t attend, I heard it was quite a gathering. Just prior to the show, I was talking with Jason about the challenges of storage for AI and he mentioned that SNIA had a new Storage.AI initiative focused on these issues. I called Dr. J Metz, Chair of SNIA & Technical Director @ AMD (@drjmetz, blog), and asked if he wanted to talk to us about SNIA’s new initiative.
Storage.AI is a SNIA standards development community tasked with addressing the myriad problems AI has with data. Under its umbrella, a number of technical working groups (TWGs) will work on standards to improve AI data access. Just about every IT vendor in the universe is listed as a participating company in the initiative. Listen to the podcast to learn more.
We started by discussing Dr. J’s current roles at SNIA and AMD and how SDC went last week. It turns out it was the best attended SDC ever, and Dr. J’s keynote on Storage.AI was a highlight of the show.
The storage/data needs for AI span a wide spectrum of activities or workloads. Dr. J spoke on the lengthy data pipeline, e.g., ingest, prep/clean, transform, train, checkpoint/reload, RAG upload/update, and inference, to name just a few. In all these diverse activities, storage’s job is getting the right data bits to the right process (GPUs/accelerators for training) throughout the pipeline. Inferencing has a somewhat less convoluted data journey but is still complex and performance critical.
To take just one component of the data pipeline, checkpointing is a data-intensive process. When training a multi-billion parameter model or, dare I say, a multi-trillion parameter model with 10K to millions of GPUs, failures happen, often. Checkpoints are the only way model training can make progress in the face of significant GPU failures. And of course, any checkpoint needs to be reloaded to verify it’s correct.
So checkpointing and reloading is an IO activity that happens constantly when models are trained. Checkpoints essentially save the current model parameters during training. Speeding up checkpoint/reload could increase AI model training throughput considerably.
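To make the mechanics concrete, here’s a minimal sketch of train-time checkpoint save and reload, assuming a PyTorch-style training stack; the path and function names are illustrative, not anything discussed in the episode:

```python
# A minimal sketch of train-time checkpointing, assuming a PyTorch-style
# stack; CKPT_PATH and the function names are hypothetical.
import torch

CKPT_PATH = "checkpoint.pt"

def save_checkpoint(model, optimizer, step):
    # Persist the current parameters and optimizer state. For multi-billion
    # parameter models this single write is many gigabytes of IO.
    torch.save(
        {
            "step": step,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
        },
        CKPT_PATH,
    )

def load_checkpoint(model, optimizer):
    # Reload after a failure (and to verify the checkpoint is readable).
    ckpt = torch.load(CKPT_PATH)
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    return ckpt["step"]
```

In practice the training loop calls something like save_checkpoint() every so many steps, and that recurring burst of writes is exactly the IO that faster storage paths aim to shrink.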
And of course, GPUs, and the power they consume, are expensive. When one has thousands to millions of GPUs in a data center, having them sit idle is a vast waste of resources. Anything that helps speed up accelerator data access could potentially save millions.
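As a back-of-the-envelope illustration (every input below is my assumption, not a figure from the episode), even a small idle fraction across a large fleet adds up fast:

```python
# Illustrative idle-GPU cost estimate; all inputs are assumptions,
# not figures from the episode.
NUM_GPUS = 10_000
COST_PER_GPU_HOUR = 2.00  # assumed all-in $/GPU-hour (power + amortization)
IDLE_FRACTION = 0.05      # assume GPUs stall 5% of the time waiting on data
HOURS_PER_YEAR = 24 * 365

idle_cost = NUM_GPUS * COST_PER_GPU_HOUR * IDLE_FRACTION * HOURS_PER_YEAR
print(f"${idle_cost:,.0f} per year idle")  # ~$8,760,000
```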
In the old days, compute, storage, and networking were isolated, separate silos of technology. Nowadays, the walls between them have been blown away, mostly by the advent of AI.
Dr. J talked about first principles, such as the speed of light, which determines the time it takes for data to move from one place to another. These limits exist throughout IT infrastructure. But the OS stacks surrounding these activities have spawned layer upon layer of software to perform these actions. If one can wipe the slate clean, infrastructure activities can get closer to those first principles and reduce overhead.
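A quick first-principles sanity check, with illustrative numbers of my own: propagation delay over data center distances is tiny compared with what layered software stacks can add on top:

```python
# First-principles floor on data movement: propagation delay alone.
# The fiber speed and distances below are illustrative assumptions.
C_FIBER_M_PER_S = 2.0e8  # light in fiber travels at roughly two-thirds of c

def propagation_delay_ns(distance_m: float) -> float:
    """One-way propagation delay in nanoseconds over the given distance."""
    return distance_m / C_FIBER_M_PER_S * 1e9

print(propagation_delay_ns(30))   # ~150 ns across a 30 m data center run
print(propagation_delay_ns(0.3))  # ~1.5 ns across a 30 cm board trace
# Layered OS/storage stacks can add tens of microseconds on top of this,
# i.e., orders of magnitude more latency than physics requires.
```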
SNIA has current TWGs focused on a number of activities that could help speed up AI IO. We talked about SNIA’s Smart Data Accelerator Interface (SDXI), but there are others in process as well. SNIA has also identified a few new ones it plans to fire up, such as GPU direct access bypass and GPU-initiated IO, to address other gaps in Storage.AI.
In today’s performance driven AI environments, proprietary solutions are often developed to address some of these same issues. We ended up discussing the role of standards vs. proprietary solutions in IT in general and in today’s AI infrastructure.
Yes, there’s a place for proprietary solutions and there’s also a place for standards. Sometimes they merge, sometimes not, but they can often help inform each other on industry trends and challenges.
I thought that proprietary technologies always seem to emerge early and then transition to standards over time. Dr. J said it’s more of an ebb and flow between proprietary and standards, and mentioned as one example the ESCON-FC-FICON-Fabric proprietary/standards activities from last century.
As always, it was an interesting conversation with Dr. J, and Jason and I look forward to seeing how SNIA’s Storage.AI evolves over time.
Dr. J. Metz, Chair and Chief Executive of SNIA & Technical Director, AMD

J is Technical Director for Systems Design for AMD where he works to coordinate and lead strategy on various industry initiatives related to systems architecture, including advanced networking and storage. He has a unique ability to dissect and explain complex concepts and strategies, and is passionate about the inner workings and application of emerging technologies.
J has previously held roles in both startup and Fortune 100 companies as a Field CTO, R&D Engineer, Solutions Architect, and Systems Engineer. He is and has been a leader in several key industry standards groups, currently Chair of SNIA as well as the Chair of the Ultra Ethernet Consortium (UEC). Previously, he was on the board of the Fibre Channel Industry Association (FCIA) and Non-Volatile Memory Express (NVMe) organizations. A popular blogger and active on Twitter, his areas of expertise include both storage and networking for AI and HPC environments.
Additionally, J is an entertaining presenter and prolific writer. He has won multiple awards as a speaker and author, writing over 300 articles and giving presentations and webinars attended by over 10,000 people. He earned his PhD from the University of Georgia.