

From Clinical Notes to GPT-4: Dr. Emily Alsentzer on Natural Language Processing in Medicine
Feb 19, 2025
Dr. Emily Alsentzer, an assistant professor at Stanford University and an expert in natural language processing, traces her path from a pre-med background to specializing in clinical AI. The conversation digs into bias in AI-assisted diagnosis, particularly with GPT-4, and the disparities in diagnostic accuracy across demographic groups. Emily also emphasizes the need for collaboration between clinicians and computer scientists and looks ahead to the future of AI in medical research.
AI Snips
Balancing New and Old NLP
- Modern NLP models make it possible to tackle broader clinical tasks, such as summarizing medical records.
- However, valuable principles from older pipeline-based NLP approaches are worth reintroducing.
ClinicalBERT's Power
- ClinicalBERT, a BERT model further pretrained on clinical notes, outperforms general-domain language models on clinical NLP tasks.
- Publicly available on Hugging Face, it better handles medical terminology and the way patients present in notes (a minimal loading sketch follows below).
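As a minimal sketch of what "publicly available on Hugging Face" looks like in practice (not shown in the episode), the snippet below loads a clinical BERT checkpoint with the transformers library and extracts contextual embeddings from a short clinical-style sentence. The model ID "emilyalsentzer/Bio_ClinicalBERT" is an assumption based on the commonly used public release.

```python
# Minimal sketch: load a clinical BERT checkpoint from Hugging Face and embed a note.
# Assumes `pip install transformers torch` and the "emilyalsentzer/Bio_ClinicalBERT" model ID.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "emilyalsentzer/Bio_ClinicalBERT"  # BERT further pretrained on clinical notes

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Encode a short clinical-style sentence with typical abbreviations.
note = "Pt presents with SOB and chest pain, r/o MI."
inputs = tokenizer(note, return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per token; usable as features for downstream clinical NLP tasks.
print(outputs.last_hidden_state.shape)
```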
Start Small, Think Clinical
- Consider smaller, specialized models before reaching for large general-purpose language models on medical NLP tasks.
- Clinical language models are more parameter-efficient and cost-effective, often surpassing larger models on clinical tasks (a rough sizing sketch follows below).
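As a rough illustration of the parameter-efficiency point (my sketch, not from the episode), the snippet below puts a freshly initialized classification head on a clinical BERT backbone, the kind of small model one would fine-tune on a labeled clinical dataset, and counts its parameters. The model ID is the same assumed checkpoint as above; the two-label setup is hypothetical.

```python
# Rough sketch: a fine-tunable clinical classifier is ~110M parameters,
# orders of magnitude smaller than general-purpose LLMs.
from transformers import AutoModelForSequenceClassification

MODEL_ID = "emilyalsentzer/Bio_ClinicalBERT"  # assumed Hugging Face checkpoint

# Classification head is randomly initialized here and would be trained on a
# labeled clinical dataset (e.g., a hypothetical two-label note classification task).
clf = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

n_params = sum(p.numel() for p in clf.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # roughly 110M, vs billions for general LLMs
```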