
How to test, optimize, and reduce hallucinations of AIs with Thomas Natschlaeger
PurePerformance
Using LLMs as judges and measuring faithfulness (from 20:17)
Thomas explains how a stronger LLM can be used as a judge of another model's outputs, assessing whether answers stay faithful to the source documents and flagging hallucinations.
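The segment describes the LLM-as-judge pattern only at a high level. As a rough illustration, the sketch below asks a stronger judge model to rate an answer's faithfulness to its source documents and list unsupported claims. The OpenAI Python SDK, the gpt-4o model name, and the prompt wording are assumptions chosen for the example, not tools Thomas names in the episode.

```python
# Minimal sketch of an LLM-as-judge faithfulness check: a stronger "judge"
# model scores whether an answer is supported by its source documents.
# The OpenAI SDK and "gpt-4o" are illustrative assumptions only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are a strict evaluator. Given source documents and an
answer, rate how faithful the answer is to the sources on a scale of 1-5
(5 = fully supported, 1 = contradicted or unsupported). List any claims in
the answer that the sources do not support (hallucinations).
Respond as JSON: {"faithfulness": <1-5>, "unsupported_claims": [...]}"""


def judge_faithfulness(sources: list[str], answer: str) -> dict:
    """Ask a stronger judge model to score an answer against its sources."""
    user_content = (
        "Source documents:\n" + "\n---\n".join(sources)
        + f"\n\nAnswer to evaluate:\n{answer}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed judge model; any stronger model works here
        temperature=0,   # deterministic judging
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": user_content},
        ],
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    verdict = judge_faithfulness(
        sources=["The cache TTL defaults to 300 seconds and is configurable."],
        answer="The cache TTL is hard-coded to 60 seconds.",
    )
    print(verdict)  # e.g. {"faithfulness": 1, "unsupported_claims": [...]}
```

Scoring with temperature 0 and a fixed JSON schema keeps the judge's verdicts comparable across test runs, which is what makes this usable as a regression check rather than a one-off spot check.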


