
Data Privacy and Security // LLMs in Production Conference Panel Discussion


The Challenges of Hallucination

Shraya: There's obviously a lot of technical work to be done on reducing hallucination, essentially better grounding a lot of these models. But on the other side, just because something hallucinates doesn't mean it's not a useful tool. So how do we make sure that people have the right expectations when they're using a product built on a large language model, so they can get the most out of it?

Shraya: I think grounding, honestly, is the way to go. It's the way to solve these very domain-specific hallucination problems.
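The "grounding" the panel refers to is often implemented as retrieval-augmented prompting: fetch domain documents relevant to the user's question and instruct the model to answer only from them. The sketch below illustrates the idea with a deliberately naive word-overlap retriever and a hypothetical prompt template; the document store, scoring function, and instruction wording are all illustrative assumptions, not details from the discussion.

```python
# Minimal sketch of grounding an LLM answer in retrieved domain documents.
# The retriever, documents, and prompt template are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Naive relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_grounded_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Pick the k highest-scoring docs and constrain the model to them."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical domain knowledge base.
docs = [
    "Our refund window is 30 days from purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "The premium plan includes priority support.",
]

prompt = build_grounded_prompt("What is the refund window?", docs, k=1)
print(prompt)
```

The "say you don't know" instruction is the expectation-setting half of the panel's point: a grounded prompt both narrows the model to domain facts and gives it an explicit escape hatch instead of inviting a confident fabrication.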
