

#278: Is AI Good at Data Analysis? That's the Wrong Question? with Juliana Jackson
Aug 19, 2025
Juliana Jackson, an Associate Director of Data and Digital Experience at Monks and co-host of the Standard Deviation podcast, dives deep into the intersection of AI and data analysis. She shares her experiences with the limitations and challenges of large language models, warning against oversimplification in the analytics space. The discussion ranges from the psychological pressures faced by analysts to understanding data's probabilistic nature. Juliana emphasizes the essential role of human insight in unraveling complex data narratives amid evolving AI technology.
How A Podcast Partnership Began
- Juliana recounts how Simo Ahava contacted her after a talk and joined her podcast, boosting its reach and longevity.
- Their partnership grew organically, and they still experiment with format and topics after nearly three years.
LLMs Are Not A Replacement For Statistical Rigor
- Large language models (LLMs) are probabilistic next-word predictors and introduce unnecessary uncertainty for deterministic numeric analysis.
- Statistical methods and code produce repeatable results, so LLMs alone are a poor replacement for rigorous data analysis.
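The contrast the snip draws can be made concrete: a statistical computation in code is a pure function of its input, so re-running it is byte-for-byte repeatable, whereas sampling an LLM at nonzero temperature is a draw from a distribution over tokens. A minimal sketch of the deterministic side, using Python's standard `statistics` module on made-up revenue figures:

```python
import statistics

# A deterministic analysis: the same input always yields the same output.
# (The numbers are illustrative, not from the episode.)
revenue = [1200.0, 980.0, 1430.0, 1150.0, 990.0]

mean = statistics.mean(revenue)    # arithmetic mean
stdev = statistics.stdev(revenue)  # sample standard deviation

# Running this twice gives identical results -- unlike asking an LLM
# "what is the average?", where each answer is a probabilistic draw
# rather than a repeatable computation.
assert statistics.mean(revenue) == mean
print(mean)  # 1150.0
```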
Combine LLMs With Code For Reliable Analysis
- Use LLMs combined with code (Python/R) rather than alone for reliable tabular analysis and reproducible pipelines.
- Treat LLMs as a tool for discovery and prototyping, not the final production analytic engine.
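One way to read this advice is as a division of labor: the model drafts analysis code, and a deterministic runtime executes it, so the final numbers come from code rather than from token sampling. A minimal sketch of that pattern, where `ask_llm` is a hypothetical stand-in for any provider's API (a real implementation would call an LLM here):

```python
import statistics

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call. The key point: it returns
    *code to run*, not a numeric answer. Here it is stubbed with a
    fixed response for illustration."""
    return "statistics.median(values)"

def run_analysis(values: list[float]) -> float:
    # The LLM proposes an expression; deterministic Python evaluates it.
    # Because the number comes from code, re-running is reproducible
    # (and the generated code can be reviewed and version-controlled).
    expr = ask_llm("Write a Python expression for the median of `values`.")
    return eval(expr, {"statistics": statistics, "values": values})

print(run_analysis([3.0, 1.0, 2.0]))  # 2.0
```

In a production pipeline you would review and pin the generated code rather than `eval` it on the fly, which matches the snip's point: the LLM is the prototyping tool, not the analytic engine.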