The Ravit Show

Ravit Jain
Jan 30, 2026 • 22min

From Knowledge to Autonomy: How BMC Is Shaping the AI Journey for the Mainframe

Most mainframe challenges today are not caused by broken systems. They are caused by disappearing knowledge. Senior experts are retiring. Documentation is incomplete or outdated. And newer team members are expected to operate some of the most critical systems in the enterprise with very little context.

In Part 1 of this podcast, we focus on where the AI journey really begins:
- Why mainframe teams are under pressure right now
- What types of institutional knowledge are most at risk
- And how AI changes the way that knowledge is captured, shared, and used

Liat Sokolov walks through why AI is showing up at this moment and why traditional approaches have not been enough. Anthony DiStauro explains how AI helps close the skills gap by shortening learning curves and guiding teams through complex systems with more confidence.

This episode is not about autonomy yet. It is about preserving what matters and helping people work better with the systems they already depend on.

Part 1 is live now.

#data #ai #mainframe #agents #skills #bmc #theravitshow
Jan 28, 2026 • 20min

Mainframe Modernization: A Practical Conversation with BMC on Understanding Before Converting

I had a blast chatting with Anthony Anter, DevOps Evangelist at BMC Software, on The Ravit Show, and this one goes deep into a topic many enterprises are struggling with quietly: mainframe modernization. Not tools. Not hype. Real ground reality.

We started with a simple but uncomfortable truth Tony writes about: before you even think about converting code, explain what you already have. That single line sets the tone for the entire conversation.

- We talked about why so many COBOL to Java projects fail even before they begin
- Why teams rush into conversion without understanding decades of business logic buried in code
- And why mainframe systems often look like a long game of telephone, where intent is lost but code survives

A big part of the discussion focused on generative AI. Not as a magic converter, but as a way to explain, map, and document existing systems before touching a single line of Java. When teams finally see dependencies and flows clearly, the surprises are often eye opening.

We also broke down a critical distinction that is often ignored: code explanation is not the same as code translation. Missing this is where most modernization programs go wrong.

Tony also shared why refactoring before rewriting matters, what practical cleanup really looks like, and how GenAI can help create Java code that is actually maintainable, not just converted.

One part I personally found valuable was the balance between automation and human expertise: where AI helps, where humans are still irreplaceable, and what governance is needed so AI output can be trusted.

We wrapped with Tony's checklist for smarter modernization and one clear takeaway for anyone working on or around mainframes today. If you are a CIO, architect, or mainframe professional thinking about modernization, this conversation will save you from expensive mistakes.

#data #ai #mainframes #bmc #theravitshow
Jan 27, 2026 • 25min

Bridging the Data–AI Divide: Why DataPelago Was Founded

AI is moving fast. But the data foundations underneath it are not. That is the core theme of my latest interview with Rajan Goyal, Founder and CEO of DataPelago, on The Ravit Show, recorded at their office in Mountain View. Here is what we unpacked:

Why now - Three shifts are colliding at once. Hardware acceleration is now mainstream. Generative AI has changed how data is created and consumed. Data complexity has exploded beyond what existing systems were designed to handle. The result is clear: enterprises do not just need faster systems. They need a more unified data foundation.

Where the tension is - AI models are advancing quickly, from multimodal systems to agents and domain-specific LLMs. But the data infrastructure beneath them is still built for an analytics-first world. Most companies spend more time moving data between systems than actually innovating with it.

What DataPelago is building - We spent time breaking down DataPelago Nucleus, described as the world's first universal data processing engine. One engine that can handle batch, streaming, relational, vector, and tensor workloads together. The key idea is simple but powerful: ingest, transform, and query data without constantly moving it across systems.

We also talked about what makes their approach different:
- A DataOS layer that intelligently maps workloads across CPUs, GPUs, and other accelerators
- A DataApp layer that plugs into engines like Spark and Trino
- And DataVM, a data-focused virtual machine that unifies execution across heterogeneous hardware

Why Spark acceleration matters - For teams running Spark today, we discussed the DataPelago Accelerator for Spark. It runs existing Spark workloads on accelerated compute with zero code changes. Faster joins, shuffles, and preprocessing, at lower cost, without rewriting pipelines.

Why today's stack is breaking - Warehouses, lakes, and lakehouses were built for SQL analytics. AI workloads need tight coupling between data and compute. The separation we see today leads to redundant pipelines, silos, and expensive data movement. Many teams are forced to optimize for analytics or AI, but not both.

Why DataPelago was founded and what customers see - The founding insight was clear: data systems were never designed for AI-scale throughput. Customers adopting this approach are unifying analytics and AI pipelines on one platform, simplifying infrastructure while improving performance, governance, and observability. Rajan made an interesting comparison: this shift for data processing is similar to what GPUs did for compute.

What's next - We closed by talking about how the data and AI relationship will evolve over the next few years, and what this looks like in real-world deployments. That is what the next episode will dive into.

If you are building AI systems and still relying on analytics-era data foundations, this one is worth your time.

#data #ai #gpu #datapelago #lakehouse #sql #analytics #theravitshow
Jan 26, 2026 • 14min

Why Enterprise AI Stalls And How Glean Gets It Moving

AI pilots are not the story anymore. The real story is who can turn AI into everyday, reliable work. Here at AWS re:Invent, I had the chance to sit down with Arvind Jain, Founder and CEO of Glean, for a conversation I have wanted to do for a long time!

We spoke about why so many enterprises are stuck in pilot mode and what actually has to change for AI to move from experiments to real impact on how people work. Arvind went deep on why enterprise context is still missing in most AI projects and what it looks like in practice when you finally get that piece right.

We also talked about what CIOs and CTOs are really worried about this week at re:Invent. Not the buzzwords, but the hard problems around adoption, scale, and the misconceptions that keep slowing teams down.

Since so many organizations here already run on AWS, I asked Arvind how the Glean and AWS partnership shows up in the real world. He shared how customers are thinking about reliability, security, and scale, and what a simple, low-risk starting point looks like if you want to see value fast.

To close it out, Arvind shared one clear prediction for where AI is heading by 2026 and how it will change the way organizations think about work.

Thanks Arvind Jain for always sharing amazing insights!

#data #ai #awsreinvent #aws #agents #awscompetencypartners #agenticai #theravitshow
Jan 23, 2026 • 8min

How SREs are Leveraging AI: Coding Agents and the Future of Shell Scripting

The future of reliability is not one tool. It is a team of agents working together. At AWS re:Invent, I had a chat with Francois Martel, Field CTO at NeuBird.ai, to talk about how AI is changing the way developers and SREs handle reliability in the real world. Here are the key takeaways from our conversation:

- Coding agents are becoming the front door to AI. Tools like GitHub Copilot and Cursor are getting massive adoption. When paired with NeuBird's Hawkeye agentic SRE server, these agents can jump straight into root cause analysis and even take action to remediate issues.
- SREs are a natural fit for agents. SREs already live in the command line and think in scripts. Coding agents are an easy and practical entry point for bringing AI into day-to-day SRE workflows.
- Agent adoption is speeding up. We are past experimentation. Customers are seeing value from early use cases, which is pushing broader and faster adoption of agent-based systems.
- Enterprise security still matters. For larger organizations, NeuBird can deploy the agent inside the customer's VPC. The data stays in their environment and the full data path remains under their control.
- AWS partnership momentum. NeuBird is launching a pay-as-you-go offering on the AWS Marketplace. This makes it one of the first agentic SRE servers you can try without long-term commitment and connect to tools like AWS, Datadog, Dynatrace, and Grafana.

If you want to see how agentic SRE works in practice, you can start with the pay-as-you-go option or the two-week free trial and pair it with your favorite coding agent.

It was great catching up with Francois again and seeing how NeuBird is pushing the agentic SRE space forward.

#data #ai #awsreinvent #aws #agents #awspartners #copilot #theravitshow
Jan 22, 2026 • 11min

Inside “TouchPoint GPT”: Bringing Qlik Answers To 650+ Healthcare Facilities

Frontline healthcare is where AI either proves itself or stays a buzzword. At AWS re:Invent, I sat down with Max Mosky from TouchPoint Support Services, a Qlik customer working across more than 650 healthcare facilities!

Max and his team support the people who are closest to patients. I wanted to understand how they are using data and AI to make life easier for frontline staff, not harder.

We spoke about:
- What TouchPoint Support Services does and Max's role in supporting frontline teams
- The day-to-day challenges their staff face inside hospitals and care facilities
- Why they chose to partner with Qlik and AWS instead of trying to build everything in house
- How Qlik Answers is bringing AI into frontline operations and what has changed for staff and patients so far
- How Amazon Bedrock and Qlik work together behind the scenes to deliver real-time support at scale
- What it means for Max and the team to be recognized as an AWS GenAI Trailblazer

If you care about how AI can support real people in real hospitals, this one will be worth your time.

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow
Jan 21, 2026 • 7min

Qlik, AWS, and Data Sovereignty: What the AWS European Sovereign Cloud Unlocks

Where your data lives will decide what AI you can run tomorrow. On The Ravit Show at AWS re:Invent, I spoke to Jessica DuBois, Senior Director for the AWS partnership at Qlik. Jessica is right at the center of how Qlik and AWS work together for customers.

We spoke about:
- Her role in leading the Qlik x AWS partnership and how that work shows up for customers in real projects
- How she describes the Qlik x AWS partnership today and why it has become so important for customers who want trusted analytics and AI on AWS
- Why data sovereignty is now a non-negotiable topic in boardrooms and how Qlik becoming a launch partner for the AWS European Sovereign Cloud opens new options for customers with strict regulatory needs
- What this means in practice for organizations in regulated industries that want to move faster with analytics and AI without losing control of their data
- Where she sees the Qlik and AWS partnership heading as we move into 2026 and the next wave of AI-driven use cases

If you care about data sovereignty, trust, and how partners like Qlik and AWS are shaping the next phase of AI, this one is a good listen. Feel free to follow Jessica and learn more about the partnership!

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow
Jan 20, 2026 • 8min

Agentic AI At Qlik: What It Means For Customers In 2025–2026

At AWS re:Invent in Vegas, I spoke to Christopher Powell, CMO at Qlik. We last spoke at Qlik Connect 2025 in Florida, so this was a good moment to step back and look at what has changed since then.

In our conversation, we covered:
- What Chris and the Qlik team have been focused on since Qlik Connect and how priorities have evolved through the year
- The announcements from AWS re:Invent that stood out most to him and why they matter for customers
- Qlik's new master reseller partnership with Megazone Cloud in Korea and why this collaboration is strategically important for the region
- How Qlik's push into agentic AI is shaping customer adoption and what to expect over the next year
- Why the AWS partnership continues to be a core pillar for Qlik and where the biggest opportunities lie
- The trends Chris is watching as we move into 2026 and what they mean for data and AI leaders

If you want a clear view of how Qlik is thinking about AI, partnerships, and what is coming next, this conversation is worth your time.

#data #ai #aws #reinvent2025 #awspartners #analytics #agents #agenticai #qlik #theravitshow
Jan 19, 2026 • 7min

Inside Iceberg: Real Recovery Paths Across S3, DynamoDB, And Iceberg

At AWS re:Invent, I spoke to Woon Ho Jung, CTO for Cloud Native at Commvault, about how they are helping AWS customers protect more than just one type of workload.

We spoke about how they started with BackTrack for S3 and now support DynamoDB and Apache Iceberg, and what real problem that solves when your data is spread across so many services.

For teams who are new to Apache Iceberg on AWS, I asked Woon to break down the basics: what do you need in place so that recovery is not a theory, but something you can rely on when a table, job, or pipeline goes wrong?

If you care about resilience across modern AWS workloads, this one will be worth watching.

#data #ai #awsreinvent #aws #agents #awspartners #awscompetencypartners #agenticai #theravitshow
Jan 16, 2026 • 7min

Design Rules For Resilient AI On S3, DynamoDB, And Iceberg

AI workloads do not fail where people expect. They fail where resilience was never designed in. At AWS re:Invent, I spoke with my friend Michael Fasulo, Senior Director for Portfolio, AI Resilience and ResOps at Commvault, to understand where AI systems on AWS are actually breaking and what teams should do differently.

This conversation was practical and grounded in what customers are seeing today. We covered:
- Where resilience breaks first in real-world AI workloads running on AWS
- What ResOps means in simple terms and why it is becoming essential for AWS customers running AI at scale
- How architects should think about recovery when AI spreads across S3, DynamoDB, and Iceberg-based data lakes
- The design rules that make recovery easier instead of more complex
- The most important AI-driven resilience capability Commvault is building into its AWS portfolio over the next year

If you are building AI on AWS and assuming resilience will take care of itself, this conversation is a good wake-up call.

#data #ai #awsreinvent #aws #agents #awspartners #awscompetencypartners #agenticai #theravitshow
