
‘These models will always hallucinate’: Seth Dobrin on LLMs
American Banker Podcast
Tackling Hallucinations in Language Models
This chapter explores the challenge of hallucinations in large language models, particularly their factual inaccuracies and the risks of applying the transformer architecture to broader applications. It discusses potential mitigation strategies, such as retrieval-augmented generation and knowledge graphs, while noting the limits of achieving zero hallucinations in critical fields.
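The summary names retrieval-augmented generation without walking through it, so the minimal sketch below illustrates its basic shape: retrieve relevant passages, then ground the prompt in them so the model is constrained to cited material. The document set, the keyword-overlap retriever, and the helper names here are hypothetical stand-ins (a real system would use vector search and an LLM call); none of it is drawn from the episode itself.

```python
from collections import Counter

# Toy "knowledge base" standing in for a document store or knowledge graph.
documents = [
    "Retrieval augmented generation supplies source passages to the model at query time.",
    "Knowledge graphs encode entities and relations that can be checked against model output.",
    "Transformer models predict the next token and can produce fluent but unsupported claims.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for vector search)."""
    query_words = Counter(query.lower().split())
    def score(doc: str) -> int:
        return sum(query_words[w] for w in doc.lower().split())
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt that asks the model to answer only from retrieved passages."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the passages below; say 'unknown' if they do not contain the answer.\n"
        f"Passages:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("How does retrieval augmented generation reduce hallucinations?", documents))
```

The point of the sketch is the prompt-assembly step: grounding the model in retrieved text narrows what it can assert, though, as the episode's framing suggests, it does not eliminate hallucinations outright.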
Transcript