
How to test, optimize, and reduce hallucinations of AIs with Thomas Natschlaeger
PurePerformance
Using LLMs as judges and measuring faithfulness
Thomas explains how a stronger LLM can act as a judge of another model's outputs, scoring how faithful each answer is to its source documents and flagging hallucinations.
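
As a minimal sketch of the idea discussed here, the snippet below shows what an LLM-as-judge faithfulness check might look like, assuming an OpenAI-compatible chat API; the model name, prompt wording, and 1-to-5 scale are illustrative choices, not the setup described in the episode.

```python
# Minimal sketch of an LLM-as-judge faithfulness check.
# Assumes the OpenAI Python client; the judge model, prompt wording,
# and 1-5 scale are illustrative, not the speaker's actual setup.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are a strict evaluator.
Given a source document and an answer, rate how faithful the answer is
to the document on a scale of 1 (contradicts or invents facts) to 5
(fully supported by the document). Reply with the number only.

Source document:
{document}

Answer to evaluate:
{answer}
"""

def judge_faithfulness(document: str, answer: str, judge_model: str = "gpt-4o") -> int:
    """Ask a stronger 'judge' model to score an answer's faithfulness to its source."""
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(document=document, answer=answer),
        }],
        temperature=0,  # deterministic scoring
    )
    return int(response.choices[0].message.content.strip())

# Usage: a score of 1 or 2 would flag the answer as a likely hallucination.
# score = judge_faithfulness(retrieved_chunk, model_answer)
```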


