
CZM Rewind: The Case Against Generative AI (Part 2)
Better Offline
Hallucinations and Unreliable LLM Behavior (04:22)
Ed expands the definition of hallucinations and argues that LLMs fail on consistency, complex tasks, and multi-step reasoning.
Transcript