The Cloudcast

How AI is evolving Enterprise Infrastructure

Dec 10, 2025
Dan McConnell, Senior VP of Product Management at Hitachi Vantara, delves into the critical evolution of enterprise infrastructure in response to AI's challenges. He explains how AI workloads differ fundamentally from traditional ones and highlights the limitations of existing cloud environments. Dan discusses the pressure points organizations face and common misconceptions about preparing for AI. He also outlines the benefits of unified data platforms and shares insights on Hitachi's role in meeting growing infrastructure demands, emphasizing the importance of integrated systems for analytics.
INSIGHT

AI Phases Move Toward Data Management

  • AI adoption has progressed from a GPU arms race to practical use cases, and now to a heavy focus on data management.
  • Dan McConnell emphasizes that data is the fuel for AI and requires cleansing, classification, and tagging to power pipelines (a minimal sketch of such a prep step follows below).
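
To make the cleanse/classify/tag idea concrete, here is a minimal, hypothetical Python sketch of a data-prep stage; the Record type, the keyword-based classifier, and the tag format are illustrative assumptions, not anything described by Hitachi Vantara or in the episode.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Record:
    """One raw record moving through the prep pipeline (hypothetical shape)."""
    text: str
    label: Optional[str] = None
    tags: List[str] = field(default_factory=list)


def cleanse(rec: Record) -> Record:
    # Normalize whitespace so downstream steps see consistent input.
    rec.text = " ".join(rec.text.split())
    return rec


def classify(rec: Record) -> Record:
    # Toy keyword rule standing in for a real classifier or rules engine.
    rec.label = "incident" if "error" in rec.text.lower() else "general"
    return rec


def tag(rec: Record) -> Record:
    # Attach metadata tags that later pipeline stages can filter and route on.
    rec.tags.extend([f"label:{rec.label}", f"chars:{len(rec.text)}"])
    return rec


def prepare(records: List[Record]) -> List[Record]:
    # Cleanse -> classify -> tag, in that order, for every incoming record.
    return [tag(classify(cleanse(r))) for r in records]


if __name__ == "__main__":
    raw = [Record("  Disk ERROR on node 7  "), Record("quarterly  report  draft")]
    for rec in prepare(raw):
        print(rec.label, rec.tags, repr(rec.text))
```

In a real deployment each step would be backed by proper validation, a trained classifier, and a metadata catalog, with the tagged output feeding the AI pipeline rather than being printed.
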
INSIGHT

AI Workloads Require Multi-Modal Performance

  • AI workloads demand varied performance profiles across pipelines, from high-throughput streaming to low-latency transactional IOPS.
  • Dan says a common storage platform that meets those differing performance needs avoids costly data movement.
ADVICE

Move Compute To Your Data

  • Bring compute close to where data lives to reduce latency and honor data gravity.
  • Favor hybrid/cloud-capable storage architectures that operate across core, cloud, and edge locations.