
Tech Lead Journal #238 - AI is Smart Until It's Dumb: Why LLM Will Fail When You Least Expect It - Emmanuel Maggiori
Nov 10, 2025
Emmanuel Maggiori, an AI expert and author, dives into the pitfalls of AI in this insightful discussion. He explains why large language models (LLMs) can excel yet fail dramatically, particularly when handling basic tasks. Emmanuel details the common reasons behind AI project failures, emphasizing the importance of realistic AI adoption in businesses. He also highlights the concept of hallucinations in AI and shares advice for engineers to stay relevant in a rapidly evolving landscape. His perspective helps demystify the tech while grounding expectations.
Smart Until It’s Dumb
- AI systems appear brilliant until they make mistakes humans never would, revealing their lack of real understanding.
- These 'epic mistakes' (hallucinations) are intrinsic to current machine-learning methods and persist unpredictably.
LLMs Are Predictors With Wrappers
- A language model's core task is predicting the next token from context; all higher-level behaviors are wrappers around that.
- Web search or tools are separate programs that gather data and insert it into prompts for the model to use.
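The two points above can be sketched in a few lines of Python. This is a toy illustration only: the canned `next_token` rules stand in for a real model, and `fake_web_search` stands in for a real tool; none of these names correspond to any actual library API.

```python
def next_token(context: str) -> str:
    """Toy stand-in for the model's one core operation: given the text so
    far, return a single most-likely next token (here, from canned rules)."""
    rules = {
        "Answer:": " Paris",   # after the prompt, emit the answer token
        " Paris": ".",         # then a period
        ".": "",               # then stop
    }
    for suffix, tok in rules.items():
        if context.endswith(suffix):
            return tok
    return ""

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Autoregressive loop: everything the model does is repeated
    next-token prediction appended to its own context."""
    text = prompt
    for _ in range(max_tokens):
        tok = next_token(text)
        if not tok:
            break
        text += tok
    return text

def fake_web_search(query: str) -> str:
    """Stand-in for a search tool: ordinary code, separate from the model."""
    return "France's capital city is Paris."

def answer_with_search(question: str) -> str:
    """'Tool use' wrapper: a separate program gathers data and splices it
    into the prompt before the model predicts anything."""
    results = fake_web_search(question)  # the model never 'calls' anything
    prompt = f"Context: {results}\nQuestion: {question}\nAnswer:"
    return generate(prompt)
```

The point of the sketch is the division of labor: `generate` only ever repeats `next_token`, while the search happens entirely outside the model and reaches it as plain prompt text.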
Training Data Limits Truth
- LLMs are trained on massive amounts of internet text plus human-ranked responses, which shape style and alignment but do not guarantee factual accuracy.
- Because no training set exhaustively covers the facts, models produce plausible but incorrect outputs, such as invented citations or arithmetic errors.