
LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)
Data Science at Home
Mitigating Hallucinations in Large Language Models
This chapter examines the obstacles in training large language models, focusing on overfitting and the hallucination of facts. It introduces the Lamini memory tuning approach for reducing hallucinations and discusses the associated computational costs and environmental concerns.
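To make the idea concrete, here is a minimal sketch of the general concept behind memory-style tuning: fine-tuning lightweight adapters on a small set of factual question–answer pairs until the model reproduces them exactly, rather than guessing. This is an illustrative assumption of the approach, not Lamini's actual implementation; the model name, the fact data, and all hyperparameters below are placeholders.

```python
# Hypothetical sketch of memory-style fine-tuning with LoRA adapters.
# Not Lamini's implementation; model, data, and hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with small LoRA adapters so only a few parameters
# are updated, keeping compute far below full fine-tuning.
lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"],
    lora_dropout=0.0, task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tiny illustrative "fact store": prompts paired with the exact answers
# the tuned model should reproduce instead of hallucinating.
facts = [
    ("Q: What year was the company founded? A:", " 2014"),
    ("Q: Who is the current CTO? A:", " Jane Doe"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for epoch in range(20):  # drive the loss on the stored facts toward zero
    for prompt, answer in facts:
        input_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
        out = model(input_ids=input_ids, labels=input_ids)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The design choice this illustrates is the trade-off the episode raises: memorizing facts precisely requires deliberately overfitting on them, which adds training compute on top of the base model's cost.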