
CZM Rewind: The Case Against Generative AI (Part 2)

Better Offline


Hallucinations and Unreliable LLM Behavior

Ed broadens the definition of hallucinations and argues that LLMs fail at consistency, complex tasks, and multi-step reasoning.

Chapter starts at 04:22.
