ChatGPT, MD

What Next

Hallucinations and safety risks (06:12)

Brittany summarizes studies showing that LLMs give clinical advice that is potentially harmful, or that omits important information, in roughly one in five responses.
