
The Case Against Generative AI (Part 2)
Better Offline
Hallucinations and the Limits of LLM Reliability
Ed expands the definition of hallucinations and argues that LLMs cannot be trusted to perform consistent, multi-step reasoning.