
‘These models will always hallucinate’: Seth Dobrin on LLMs
American Banker Podcast
Tackling Hallucinations in Language Models
This chapter explores the challenge of hallucination in large language models: the factual inaccuracies they produce and the risks of applying transformer-based models to broader, higher-stakes applications. It discusses potential mitigation strategies, such as retrieval-augmented generation and knowledge graphs, while noting that zero hallucinations remain unattainable in critical fields.
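To illustrate the retrieval-augmented generation pattern mentioned above, here is a minimal sketch: retrieve supporting passages first, then instruct the model to answer only from that context and to abstain otherwise. The document set, relevance scoring, and prompt wording are illustrative assumptions, not material from the episode.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern:
# retrieve supporting passages, then ground the model's answer in them.
# The corpus, scoring function, and prompt wording are assumptions.

from collections import Counter

# Hypothetical knowledge base, e.g. internal policy documents.
DOCUMENTS = [
    "Wire transfers above $10,000 require a second approver.",
    "Customer complaints must be acknowledged within two business days.",
    "Model risk reviews are performed quarterly by the validation team.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    q_tokens = Counter(query.lower().split())
    d_tokens = Counter(doc.lower().split())
    return sum((q_tokens & d_tokens).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved text and tell it to abstain otherwise."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # In practice this prompt would be sent to an LLM; printing it here
    # shows the grounding step intended to reduce hallucinations.
    print(build_prompt("Who approves large wire transfers?"))
```

As the episode notes, grounding of this kind reduces hallucinations but does not eliminate them; the model can still misstate or go beyond the retrieved context.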