
What's New In Data
From Data Pipelines to Agentic Applications: Deploying LLM Apps That Actually Work
May 1, 2025
Spencer Cook, Lead Solutions Architect at Databricks, helps financial services firms leverage cloud solutions for real-world challenges. He discusses the journey from data hype to practical AI, focusing on real-time data pipelines and vector search. Spencer explains how to minimize AI hallucinations while maximizing business value through sound data governance. The conversation highlights innovations like Retrieval-Augmented Generation (RAG) for improving AI accuracy, and the critical role of data management in effective LLM deployment and customer experience.
41:34
Podcast summary created with Snipd AI
Quick takeaways
- Real-time, high-quality data management is crucial for enterprises to achieve measurable business outcomes with generative AI and LLM applications.
- Implementing strict data governance and safety measures is essential for ensuring accuracy and fostering trust in AI-generated outputs for end users.
Deep dives
Deployment of Generative AI at Scale
Enterprises are increasingly implementing generative AI and agentic applications to achieve measurable business outcomes. The discussion highlights the importance of real-time, high-quality data management as companies navigate the challenges of leveraging large language models (LLMs). Successful use cases often turn information retrieval and coding assistance into tangible business value, even as organizations grapple with preparing their data for AI applications. Strict data governance and security measures are essential to balancing innovative AI deployment with operational safety and reliability.
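The RAG pattern discussed in the episode can be illustrated with a minimal sketch: retrieve the documents most similar to a user's question, then ground the LLM prompt in that retrieved context so the model answers from governed data rather than guessing. The bag-of-words embedding and function names below are illustrative assumptions for the sketch; production systems like the vector search tools mentioned in the episode use learned embedding models and approximate-nearest-neighbor indexes.

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words embedding; real RAG systems use a learned embedding model."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query, docs):
    """Ground the LLM prompt in retrieved context to reduce hallucination."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The key design point mirrors the episode's theme: the model only sees context drawn from a curated, governed corpus, which is where data quality and access controls directly shape answer accuracy.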