LLMs: Internals, Hallucinations, and Applications | Data Brew | Episode 33

Risks of Hallucinations in Language Models and Mitigation Strategies

This chapter explores the risks of hallucinations in language models and strategies to mitigate them, including adjusting the LLM's sampling temperature, guiding models with prompts, improving model architectures and training-data quality, conducting offline and online evaluations, enriching prompts with relevant context, and using chain-of-thought prompting to improve reasoning.
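To make two of these strategies concrete, here is a minimal sketch assuming the OpenAI Python client; the model name, prompts, and context placeholder are illustrative assumptions, not taken from the episode. It lowers the sampling temperature to reduce random, confabulated completions, and uses a chain-of-thought instruction to elicit step-by-step reasoning grounded in the supplied context:

```python
# Minimal sketch of two hallucination mitigations discussed in this chapter,
# assuming the OpenAI Python client. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    # Lower temperature -> less random sampling, fewer confabulated details.
    temperature=0.2,
    messages=[
        {
            # Chain-of-thought prompting: ask the model to reason step by
            # step and to admit uncertainty rather than guess.
            "role": "system",
            "content": (
                "Answer using only the provided context. Think step by "
                "step, and say 'I don't know' if the context is insufficient."
            ),
        },
        {"role": "user", "content": "Context: ...\n\nQuestion: ..."},
    ],
)
print(response.choices[0].message.content)
```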
