
The Promise of Language Models for Search: Generative Information Retrieval

Neural Search Talks — Zeta Alpha


Hallucination and Faithfulness

The problem is not going to be solved easily, as we have now seen. I think, connecting to faithfulness: hallucination is when you have some input that should be used to produce the answer, and the answer is not faithful to that input. Regardless of whether the answer is factual or supported elsewhere, it's simply not faithful to the input it should be using. But then how would hallucination be related to alignment?
