

The Ravit Show
Ravit Jain
The Ravit Show interviews interesting guests, panels, and companies to help the community gain valuable insights into trends in the Data Science and AI space! The show features CEOs, CTOs, Professors, Tech Authors, Data Scientists, Data Engineers, Data Analysts and many more from industry and academia.
We do live shows on LinkedIn, YouTube, Facebook and other platforms. The motto of The Ravit Show is to help the Data Science/AI community grow together!
Episodes

Dec 9, 2025 • 14min
HPE Agentic Smart City Solution
AI in the public sector isn’t a pilot anymore. It’s running in the real world. Check out my conversation from NVIDIA GTC in Washington with Robin Braun, VP AI Business Development, Hybrid Cloud at Hewlett Packard Enterprise, and Russell Forrest, Town Manager of the Town of Vail. This one is important because it’s about AI for cities, not just AI for big tech. I had a blast interviewing both Robin and Russell.

We talked about how HPE is using AI to tackle real problems every city deals with: traffic, safety, and energy efficiency. Robin walked through how you build a smarter, more connected city by turning live data into decisions that actually help people on the ground.

Russell brought the city view from Vail. He explained what it takes to move from “we’re testing AI” to “we’re using this in operations.” We got into risk, cost, and how you deploy without adding complexity or slowing down public services.

We also discussed agentic AI. Not as a buzzword, but as something that can help a town react in real time while still keeping humans in control.

Better safety. Better visitor experience. Better use of resources. Same team.

This is AI as public service infrastructure.

The full interview is now live on LinkedIn and YouTube.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #storage #theravitshow

Dec 8, 2025 • 9min
HPE’s Path to Sovereign AI: Securing Data, Enabling Innovation
Public sector AI is moving fast. The big question is how to build it the right way. I had a blast chatting with Andrew Wheeler, Hewlett Packard Enterprise Fellow and Vice President, Hewlett Packard Labs and HPC & AI Advanced Development, at NVIDIA GTC in Washington, DC.

We talked about:
* How HPE helps agencies build and scale sovereign AI ecosystems
* Why the public sector is a core focus for HPE in AI
* Practical steps for data sovereignty, compliance, and security without slowing innovation
* Where sovereign AI shows up first: government, defense, citizen services, large-scale research
* How HPC and supercomputing power national-scale AI
* What quantum could unlock for government programs and where HPE fits

If you care about trusted AI for cities, states, and national labs, this one is worth a watch.

Full interview now live!

#data #ai #hpe #nvidiagtc #gtc2025 #gpu #sovereign #nvidia #theravitshow

Dec 3, 2025 • 12min
Inside the AI Factory: How DDN is Powering the Next Wave of Enterprise AI
AI ROI is now the real test. I got a chance to chat with Joe Corvaia, SVP Sales at DDN, at NVIDIA GTC in Washington. This one is for CEOs and exec teams who are being pushed to “do AI” but still can’t show a return.

We started with a simple question: why are some companies actually getting ROI from AI while others are still stuck in pilots? Joe was very direct on what separates the ones who are scaling from the ones who are still presenting slides.

We talked about infrastructure as a board-level strategy. Not just “buy more GPUs,” but “are you using the GPUs you already bought?” Joe walked through how data infrastructure and data flow have to be part of the conversation in the boardroom, not just in IT.

We got into AI factories and the new DDN Enterprise AI HyperPOD. Built with Supermicro and accelerated by NVIDIA, HyperPOD is designed to take teams from first deployment to serious scale. The idea is you should be able to stand up production AI without rebuilding the stack every time you grow the workload.

Joe also broke down why platforms like HyperPOD, and GPU-aware networking and acceleration like NVIDIA BlueField 4, are about more than performance. They are about efficiency. Max GPU utilization. No idle spend. Faster time to value. This matters not just for big tech, but for regulated industries and sovereign AI programs that need capability and control.

We closed on one topic every CEO is thinking about right now: how do you future-proof AI investments? Joe shared the one principle leaders should follow so they are not buying hardware for headlines, but building an AI foundation that still makes financial sense five years out.

If you own AI strategy, budget, or delivery, watch this.

#data #ai #nvidiagtc #nvidia #hammerspace #gpu #storage #theravitshow

Dec 3, 2025 • 28min
Inside AI Magic Wand: Building Web Data Agents With One Click
I have seen a lot of AI demos this year. Very few make hard, messy work feel simple. Next week I am going live with Sarah McKenna, CEO of Sequentum, for an AI Magic Wand Launch Celebration on The Ravit Show.

What is happening: we are going to walk through how Sequentum is using AI to change web data work. Not slides. Actual product.

Here is what we will get into during the show:

- AI Magic Wand (beta): a new feature that turns high-level intent into working web data flows. Think less trial and error, more “tell it what you want and refine.”
- Command Templates: how reusable templates help teams stop rebuilding the same patterns and start sharing what works across the company.
- New tools coming in the next few weeks: Unblocker, Paginations and more, all focused on enhancing Sequentum’s data collection capabilities.
- Latest in standards: the standards and good practices that matter if you want web data and AI that can stand up in an enterprise.

Why I am excited about this one: most teams I meet are still stuck between scripts, manual fixes, and brittle tools when it comes to web data. Sequentum is trying to give them a cleaner path with AI on top. This session is about showing that work in public and talking through the real trade-offs.

If you care about web data, automation, and using AI for real work, this will be a good one to watch.

Dec 2, 2025 • 22min
Inside IBM’s Vision for India: AI, Hybrid Cloud, and Building a Future-Ready Workforce
Think Mumbai was electric. India’s AI build-out just moved into a higher gear. I sat down with Sandip Patel, Managing Director, IBM India & South Asia, at IBM’s Mumbai office. We unpack what Think Mumbai means for teams building with AI, hybrid cloud, and data at scale.

What stood out and why it matters:

IBM and Airtel partnership
• Aim: give regulated industries a safe and fast path to run AI at scale
• How: combine Airtel’s cloud footprint and network with IBM’s hybrid cloud and watsonx stack
• Why it helps: data stays controlled and compliant while workloads flex across on-prem, cloud, and edge
• Impact areas: banking, insurance, public sector, large enterprises with strict governance

First cricket activation on watsonx
• What: AI-driven insights powering a live cricket experience
• Why it matters: shows real-time analytics, content, and decisioning are ready for prime time
• Enterprise takeaway: the same pattern applies to contact centers, fraud, supply chains, and field ops where seconds count

AI value for Indian enterprises today
• Start with governed data and clear ownership
• Use hybrid patterns so models run where the work and data live
• Blend predictive models with generative workflows inside watsonx for measurable lift
• Track outcomes in productivity, risk reduction, customer experience, and time to value

Skills as the force multiplier
• Priority skills: data governance, MLOps, orchestration, security on hybrid cloud
• Team model: small core teams operating a shared platform, federated use cases across business units
• Result: faster move from pilots to production with repeatable guardrails

My take: India is moving from talk to build. The blueprint is open, hybrid, and governed. Partnerships that keep control local while staying flexible will unlock scale. Sports gave us a sharp demo of real-time AI. The next wins will be in operations, customer journeys, and risk.

The interview is live now.
Link to the complete interview in the comments!

#data #ai #agentic #ibm #ThinkMumbai #governance #cloud #watsonx #IBMPartner #theravitshow

Nov 28, 2025 • 16min
Inside Flink Agents: Open Source Agents for the Enterprise
Flink Forward Barcelona 2025 was not just about streaming. It was about what comes next for enterprise AI. I sat down with Qingsheng Ren, Team Lead, Flink Connectors & Catalogs at Ververica, and Xintong Song, Staff Software Engineer at Alibaba Cloud, to talk about something that could change how enterprises build AI systems in production: Flink Agents.

Flink Agents is being introduced as an open source sub-project under Apache Flink. The goal is simple and ambitious at the same time: bring agentic AI into the same reliable, scalable, fault-tolerant world that already powers real-time data infrastructure.

We talked about why this matters.

First, why Flink Agents and why now? They walked me through the motivation. Most AI agent frameworks today look exciting in a demo, but they break once you try to run them against live data, streaming events, strict SLAs, audit requirements, cost pressure, and real users. There’s a big gap between prototypes and reliable operations. That’s the gap Flink Agents is aiming to close.

Why open source? Both Ververica and Alibaba made it clear that this is not meant to be a proprietary, closed feature. They want this to be a community effort under Apache Flink, not a vendor lock-in story. The belief is that enterprises will only bet on AI agents at scale if the runtime is open, portable, and battle tested.

How is building an AI agent different from building a normal Flink job? This part was interesting. A standard Flink job processes streams. An agent has to do more. It has to reason, take actions, call tools, maintain context, react to feedback, and keep doing that continuously. You’re not just transforming data. You’re orchestrating behavior. Flink Agents is meant to give you those building blocks on top of Flink instead of forcing teams to stitch this together themselves.

What kind of companies is this for? We got into enterprise workloads that actually need this.
Think about environments where fast decisions matter and you can’t afford to go offline:
-- Fraud detection and response
-- Customer support and workflow automation
-- Operational monitoring, alert triage, and remediation
-- Real-time personalization and recommendations
-- Anywhere you need an autonomous loop, not just a dashboard

And finally, the roadmap. We talked about the next 2 to 3 years. The focus is on deeper runtime primitives for agent behavior, cleaner developer experience, and patterns that large enterprises can trust and repeat.

My takeaway: Flink Agents is not just “yet another agent framework.” It’s an attempt to operationalize agentic AI on top of a streaming backbone that already runs at massive scale in production.

This is the conversation every enterprise AI team needs to be having right now.

#FlinkForward #Ververica #Streaming #RealTime #DataEngineering #AI #TheRavitShow
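To make the “normal Flink job vs. agent” distinction above concrete, here is a tiny, framework-free Python sketch. It is not the Flink Agents API (all names here are made up for illustration); it only contrasts a stateless stream transform with an agentic loop that keeps context, picks an action, and calls a tool.

```python
# Hypothetical sketch, NOT Flink Agents code: a plain stream transform
# versus a minimal agent loop (reason -> act -> remember).

def transform(event):
    """A classic stream job: stateless, data in -> data out."""
    return {"user": event["user"], "amount_usd": event["amount"] / 100}

class MiniAgent:
    """An agent also keeps context, chooses actions, and invokes tools."""

    def __init__(self, tools):
        self.tools = tools      # callable actions the agent may take
        self.context = {}       # running per-user state across events

    def handle(self, event):
        user = event["user"]
        state = self.context.setdefault(user, {"events": 0})
        state["events"] += 1
        # "Reason": a trivial policy standing in for an LLM or rules engine.
        action = "flag" if event["amount"] > 10_000 else "log"
        # "Act": call a tool, then fold the outcome back into context.
        state["last_action"] = action
        return self.tools[action](event)

tools = {
    "flag": lambda e: f"flagged {e['user']}",
    "log": lambda e: f"logged {e['user']}",
}
agent = MiniAgent(tools)
print(agent.handle({"user": "a", "amount": 50_000}))  # flagged a
print(agent.handle({"user": "a", "amount": 20}))      # logged a
```

The point of the sketch: the transform is a pure function, while the agent carries state and orchestrates behavior over time, which is exactly the extra machinery (durable context, tool calls, feedback) a production runtime has to make fault tolerant.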

Nov 26, 2025 • 8min
From Kafka to Flink: What Aiven and Ververica Can Do Together
Real time is getting simpler. At Flink Forward, I sat down with Josep Prat, Director at Aiven. We discussed the new partnership between Aiven and Ververica, the original creators of Apache Flink®, and what it unlocks for data teams!

What we covered:
• Why this partnership makes sense now and the outcomes it targets
• Fastest-ROI use cases for joint customers
• How Aiven and Ververica split support, SLAs, and upgrades
• The first deployment patterns teams should try: POCs, phased rollouts, or full cutovers
• Support for AI projects that need fresh data with low latency
• What is coming next on the shared roadmap over the next two quarters

If you care about streaming in production and a cleaner path to value, this one is worth a watch.

Full interview now live!

#data #ai #streaming #Flink #Aiven #Ververica #realtimestreaming #theravitshow
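On phased rollouts and cutovers: one common way to let an old consumer and a new pipeline overlap without double processing is an idempotent sink keyed on a stable event ID. A minimal Python sketch of that idea, with entirely hypothetical names (this is a generic pattern, not Aiven or Ververica code):

```python
# Hypothetical sketch of a phased cutover: while the legacy consumer and
# the new pipeline both run, the sink de-duplicates on a stable event ID
# so overlapping deliveries are processed exactly once.

class IdempotentSink:
    def __init__(self):
        self.seen = set()   # IDs already written by either path
        self.out = []       # accepted events, in arrival order

    def write(self, event):
        if event["id"] in self.seen:
            return False    # duplicate from the other path; drop it
        self.seen.add(event["id"])
        self.out.append(event)
        return True

sink = IdempotentSink()
legacy = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
new_pipeline = [{"id": 2, "v": "b"}, {"id": 3, "v": "c"}]  # overlap window
for e in legacy + new_pipeline:
    sink.write(e)
print([e["id"] for e in sink.out])  # [1, 2, 3]
```

In practice the “seen” set would live in durable keyed state (or the dedup would be a keyed operator upstream of the sink), but the contract is the same: overlapping delivery becomes safe, so the cutover can be gradual.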

Nov 25, 2025 • 20min
Building With Fluss: Real Use Cases and Patterns
Flink Forward Barcelona 2025 was a big week for streaming and the streamhouse. I sat down with Jark Wu, Staff Software Engineer at Alibaba Cloud, and Giannis Polyzos, Staff Streaming Architect at Ververica, to talk about Apache Fluss and what is coming next.

First, a quick primer. Fluss is built for real-time data at scale. It sits cleanly in the broader ecosystem, connects to the tools teams already use, and focuses on predictable performance and simple operations.

What stood out in our chat:

• Enterprise features that matter: security, durability, and consistent throughput. Cleaner ops, stronger governance, and a smoother path from POC to production.
• Zero-state analytics: they walked me through how Fluss cuts network hops and lowers latency. Less shuffling. Faster results. More efficient pipelines.
• Fluss 0.8 highlights: better developer experience, more stable primitives, and upgrades that help teams standardize on one streaming backbone.
• AI-ready direction: vendors are shifting to AI. Fluss is adapting with functions that support agents, retrieval, and low-latency model workflows without bolting on complexity.
• Streamhouse alignment: the new capabilities strengthen Fluss in a streamhouse architecture. One place to handle fast ingest, storage, and analytics so teams do not stitch together five systems.

We also covered the roadmap. Expect continued work on latency, cost control, and easier day-two operations, plus patterns that large teams can repeat with confidence.

Want to get involved? Join the community, review the open issues, try the latest builds, and share feedback from real workloads. That is how this moves forward.

The full conversation with Jark and Giannis is live now on The Ravit Show.

#data #ai #FlinkForward #Flink #Streaming #Ververica #TheRavitShow

Nov 24, 2025 • 10min
PlayStation at Scale with Flink: Telemetry, Latency, and Reliability
How does PlayStation run real time at massive scale? I sat down with Bahar Pattarkine from the PlayStation team to unpack how they use Apache Flink across telemetry and player experiences.

What we covered:
-- Why they chose Flink and what problem it solved first
-- Running 15,000+ events per second, launch peaks, regional latency SLOs, and avoiding hot partitions across titles
-- Phasing the move from Kafka consumers to a unified Flink pipeline without double processing during cutover
-- How checkpointing and async I/O keep latency low during spikes or failures
-- Privacy controls and regional rules enforced in real time
-- What Flink simplified in their pipelines and the impact on cost and ops

#data #ai #streaming #Flink #Playstation #Ververica #realtimestreaming #theravitshow
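The async I/O point above is worth a quick illustration. The idea (which Flink exposes as `AsyncDataStream` in Java) is to keep many external lookups in flight with a bounded cap, so one slow call does not stall the stream. This is a generic asyncio sketch of that pattern, not PlayStation's code; the names and the 100-in-flight cap are illustrative.

```python
# Hypothetical sketch of the async-I/O pattern: enrich a stream of events
# with concurrent external lookups, capped so a spike cannot exhaust the
# downstream service. Mirrors the idea behind Flink's AsyncDataStream.
import asyncio

async def enrich(event, sem):
    async with sem:                    # cap concurrent external calls
        await asyncio.sleep(0.001)     # stand-in for a remote lookup
        return {**event, "enriched": True}

async def run(events, max_in_flight=100):
    sem = asyncio.Semaphore(max_in_flight)
    # Launch lookups concurrently instead of one blocking call per event;
    # gather() preserves the input order, like Flink's ordered async wait.
    return await asyncio.gather(*(enrich(e, sem) for e in events))

events = [{"id": i} for i in range(500)]
out = asyncio.run(run(events))
print(len(out))  # 500
```

With sequential blocking calls, 500 lookups at even a few milliseconds each would dominate latency; with bounded concurrency, the wall-clock cost is roughly the lookup time times the number of batches, which is why the pattern holds up during spikes.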

Nov 21, 2025 • 13min
Convergence Is Here: On Prem Meets Cloud for AI with Cloudera
#EVOLVE25 London was a clear signal that Big Data is entering its third era, and it is about outcomes, not buzzwords. I sat down with Sergio Gago, CTO, Cloudera, and we went straight to the shift everyone is feeling. This era is defined by convergence. The work now is to bring on-prem and cloud together so teams can move fast, stay compliant, and keep costs in check. That is where the real AI wins will come from.

Here is what we covered in the interview I am publishing next:
- What truly defines the third era of Big Data and how it differs from the last decade
- Why convergence matters now for performance, cost, and control
- Where Cloudera wins today, and where it chooses not to compete
- How a unified data foundation raises trust in AI
- The new Cloudera + Dell “AI-in-a-box” approach for private, trusted AI
- A five-year view of on-prem, cloud, and AI working together
- Cloudera’s vision to support this shift end to end

If you care about building trustworthy AI on real enterprise data, this conversation will be useful.

#data #ai #EVOLVE25 #cloudera #theravitshow


