‘These models will always hallucinate’: Seth Dobrin on LLMs

American Banker Podcast

Tackling Hallucinations in Language Models

This chapter explores the challenge of hallucinations in large language models: their factual inaccuracies and the risks of applying the transformer architecture to broader applications. It discusses mitigation strategies such as retrieval-augmented generation (RAG) and knowledge graphs, while noting that zero hallucinations remain out of reach in critical fields.
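
As a rough illustration of the RAG idea mentioned in the chapter (not something described in the episode itself), the sketch below grounds a model's answer in retrieved documents so the model is asked to answer only from supplied context. The `generate_answer` stub and the toy keyword-overlap retriever are hypothetical placeholders for a real LLM API call and embedding-based search.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: generate_answer() is a hypothetical stand-in for a real LLM call,
# and retrieval uses naive keyword overlap instead of vector search.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def generate_answer(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke a model API here."""
    return f"[model response conditioned on a prompt of {len(prompt)} characters]"


def rag_answer(query: str, documents: list[str]) -> str:
    """Build a prompt that instructs the model to answer only from retrieved context."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate_answer(prompt)


if __name__ == "__main__":
    corpus = [
        "Basel III sets minimum capital requirements for banks.",
        "Retrieval grounds model outputs in source documents.",
    ]
    print(rag_answer("What do capital requirements cover?", corpus))
```

Constraining the prompt to retrieved context reduces, but does not eliminate, hallucinations, which is consistent with the episode's point that these models will always hallucinate to some degree.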
