
ESC TV Today – Your Cardiovascular News Season 3 - Ep.27: Extended interview on 'ChatGPT, MD?': large language models at the bedside
Nov 20, 2025
Folkert Asselbergs, Professor of Cardiology and digital health expert, dives into the transformative role of large language models in clinical settings. He discusses how these models can automate administrative tasks and even serve as AI companions grounded in ESC guidelines. Folkert addresses concerns about hallucinations and bias, and argues for innovative evaluation methods over traditional trials. He encourages democratized patient access to AI tools while emphasizing data security and the potential for personal health apps in the future.
AI Snips
AI Frees Clinicians From Administrative Burden
- Large language models can take over repetitive clinical tasks such as reporting, coding, and administrative work.
- Folkert Asselbergs expects AI to free physicians to focus on patient care rather than paperwork.
Guideline-Tuned LLMs Reduce Hallucinations
- ESC built a guideline-tuned large language model to provide trustworthy, source-linked answers.
- The model highlights the originating guideline section for each answer, reducing hallucinations and increasing clinician trust.
Hallucinations And Bias Are Real Risks
- Hallucinations occur when models fabricate references or facts, which can mislead clinicians and patients.
- Asselbergs warns that biased recommendations based on patient attributes are dangerous and must be addressed.
