
AI Explained
AI Explained is a series hosted by Fiddler AI featuring industry experts on the most pressing issues facing AI and machine learning teams.
Learn more about Fiddler AI: www.fiddler.ai
Latest episodes

Mar 20, 2025 • 44min
AI Observability and Security for Agentic Workflows with Karthik Bharathy
Karthik Bharathy, General Manager of AI Ops & Governance at AWS, brings over 20 years of AI and ML expertise. He delves into critical aspects of AI security and observability, emphasizing the need for human oversight in agentic workflows. The conversation covers the evolution of AI in enterprises, the transformative impact of generative AI, and the complexities of implementing AI security and compliance. Karthik also discusses strategic partnerships that enhance AI workflows and the significance of robust data infrastructure in financial services.

Feb 28, 2025 • 40min
GenAI Use Cases and Challenges in Healthcare with Dr. Girish Nadkarni
In this episode of AI Explained, Dr. Girish Nadkarni from the Icahn School of Medicine at Mount Sinai discusses the implementation and impact of AI, specifically generative AI, in healthcare. He covers topics such as clinical implementation, risk prediction, the interplay between predictive and generative AI, the importance of governance and ethical considerations in AI deployment, and the future of personalized medicine.

Dec 7, 2024 • 57min
GRC in Generative AI with Navrina Singh
Navrina Singh, Founder and CEO of Credo AI, dives into the pivotal role of AI governance in driving innovation. With a rich background at Microsoft and Qualcomm, she discusses why responsible AI practices are critical for all industries, not just the regulated ones. Gain insights on the challenges of implementing transparent and fair AI, the impact of the EU AI Act, and the complexities of state-level regulations in the U.S. Singh also underscores the need for trust, accountability, and collaboration to navigate the evolving landscape of generative AI.

Nov 9, 2024 • 53min
Inference, Guardrails, and Observability for LLMs with Jonathan Cohen
Jonathan Cohen, VP of Applied Research at NVIDIA and leader of the NeMo platform, dives into the vital role of AI in enterprise applications. He discusses how NeMo Guardrails enhance AI security and observability, crucial for responsible deployments. Jonathan shares insights on the evolving landscape of AI agents, balancing automation with human oversight. Real-world examples illustrate the power of AI, like successful implementations in telecommunications, showcasing how organizations can leverage advanced AI while navigating security challenges.

Oct 25, 2024 • 46min
What the EU AI Act Really Means with Kevin Schawinski
On this episode, we’re joined by Kevin Schawinski, CEO and Co-Founder at Modulos AG.
The EU AI Act was passed to redefine the landscape for AI development and deployment in Europe. But what does it really mean for enterprises, AI innovators, and industry leaders?
Schawinski will share actionable insights to help organizations stay ahead of the EU AI Act and discuss the risk implications of meeting transparency requirements while advancing responsible AI practices.

Jul 29, 2024 • 48min
Productionizing GenAI at Scale with Robert Nishihara
In this insightful discussion, Robert Nishihara, Co-founder and CEO of Anyscale, dives into the complexities of scaling generative AI in enterprises. He highlights the challenges of building robust AI infrastructure and the journey from theoretical concepts to practical applications. Key topics include the integration of Ray and PyTorch for efficient distributed training and the critical role of observability in AI workflows. Nishihara also addresses the nuances of evaluating AI performance metrics and the evolution of retrieval-augmented generation.

May 2, 2024 • 59min
Metrics to Detect Hallucinations with Pradeep Javangula
In this episode, we’re joined by Pradeep Javangula, Chief AI Officer at RagaAI.
Deploying LLM applications for real-world use cases requires a comprehensive workflow to ensure they generate high-quality, accurate content. Testing, fixing issues, and measuring impact are critical steps of that workflow and help LLM applications deliver value.
Javangula will discuss strategies and practical approaches organizations can follow to maintain high-performing, correct, and safe LLM applications.

Mar 7, 2024 • 57min
AI Safety and Alignment with Amal Iyer
In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI.
Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. However, this progress emphasizes the importance of aligning AI with human values to ensure its safe and beneficial societal integration. In this talk, we will provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness and interpretability.

Jan 23, 2024 • 57min
Managing the Risks of Generative AI with Kathy Baxter
On this episode, we’re joined by Kathy Baxter, Principal Architect of Responsible AI & Tech at Salesforce.
Generative AI has become widely popular as organizations find ways to drive innovation and business growth. Adoption of generative AI, however, remains low due to ethical implications and unintended consequences that can negatively impact organizations and their consumers.
Baxter will discuss ethical AI practices organizations can follow to minimize potential harms and maximize the social benefits of AI.

Dec 21, 2023 • 59min
Legal Frontiers of AI with Patrick Hall
On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI.
We will delve into critical aspects of AI, such as model risk management, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.