LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)

Data Science at Home

Mitigating Hallucinations in Large Language Models

This chapter explores the obstacles in training large language models, focusing on overfitting and the hallucination of facts. It introduces the Lamini memory tuning approach for reducing hallucinations and discusses the associated computational costs and environmental concerns.
