
Data Skeptic
The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.
Latest episodes

Jun 21, 2021 • 36min
Automatic Identification of Outlier Galaxy Images
Lior Shamir, Associate Professor of Computer Science at Kansas State University, joins us today to talk about the recent paper Automatic Identification of Outliers in Hubble Space Telescope Galaxy Images. Follow Lior on Twitter @shamir_lior

Jun 16, 2021 • 29min
Do We Need Deep Learning in Time Series
Shereen Elsayed and Daniela Thyssens, both PhD students at the University of Hildesheim in Germany, come on today to talk about their work “Do We Really Need Deep Learning Models for Time Series Forecasting?”

Jun 11, 2021 • 27min
Detecting Drift
Sam Ackerman, Research Data Scientist at IBM Research Labs in Haifa, Israel, joins us today to talk about his work Detection of Data Drift and Outliers Affecting Machine Learning Model Performance Over Time. Check out Sam's IBM statistics/ML blog at: http://www.research.ibm.com/haifa/dept/vst/ML-QA.shtml

May 31, 2021 • 25min
Darts Library for Time Series
Julien Herzen, a PhD graduate from EPFL in Switzerland, comes on today to talk about his work with Unit8 and the development of the Python library Darts.

May 24, 2021 • 32min
Forecasting Principles and Practice
Welcome to Time Series! Today’s episode is an interview with Rob Hyndman, Professor of Statistics at Monash University in Australia, and author of Forecasting: Principles and Practice.

May 21, 2021 • 9min
Prerequisites for Time Series
Today's experimental episode uses sound to describe some basic ideas from time series. This episode includes lag, seasonality, trend, noise, heteroskedasticity, decomposition, smoothing, feature engineering, and deep learning.

May 7, 2021 • 33min
Orders of Magnitude
Today’s show is in two parts. First, Linhda joins us to review the episodes from Data Skeptic: Pilot Season and give her feedback on each of the topics. Second, we introduce our new segment “Orders of Magnitude”. It’s a statistical game show in which participants must identify the true statistic hidden in a list of statistics which are off by at least an order of magnitude. Claudia and Vanessa join as our first contestants. Below are the sources of our questions.

Heights
https://en.wikipedia.org/wiki/Willis_Tower
https://en.wikipedia.org/wiki/Eiffel_Tower
https://en.wikipedia.org/wiki/Great_Pyramid_of_Giza
https://en.wikipedia.org/wiki/International_Space_Station

Bird Statistics
Birds in the US since 2000
Causes of Bird Mortality

Amounts of Data
Our statistics come from this post

May 3, 2021 • 44min
They're Coming for Our Jobs
AI has, is, and will continue to facilitate the automation of work done by humans. Sometimes this may be an entire role. Other times it may automate a particular part of a role, scaling a person’s effectiveness. Unless progress in AI inexplicably halts, the division of tasks between humans and machines will continue to evolve. Today’s episode is a speculative conversation about what the future may hold. Celestia Ward, co-host of the Squaring the Strange podcast, caricature artist, and academic editor, joins us today! Kyle and Celestia discuss whether or not her jobs as a caricature artist and as an academic editor are under threat from AI automation.

Mentions
https://squaringthestrange.wordpress.com/
https://twitter.com/celestiaward
The legendary Dr. Jorge Pérez and his work studying unicorns
Supernormal stimulus
International Society of Caricature Artists
Two Heads Studios

Apr 26, 2021 • 40min
Pandemic Machine Learning Pitfalls
Today on the show, Derek Driggs, a PhD student at the University of Cambridge, joins us to discuss the work Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans. Help us vote for the next theme of Data Skeptic! Vote here: https://dataskeptic.com/vote

Apr 19, 2021 • 20min
Flesch Kincaid Readability Tests
Given a document in English, how can you estimate the ease with which someone will find they can read it? Does it require a college level of reading comprehension, or is it something a much younger student could read and understand? While these questions are useful to ask, they don't admit a simple answer. One option is to use one of the two closely related Flesch-Kincaid readability tests, which are computed from the same inputs. These are simple calculations which provide a rough estimate of reading ease. In this episode, Kyle shares his thoughts on this tool and when it could be appropriate to use as part of your feature engineering pipeline towards a machine learning objective. For empirical validation of these metrics, the plot below compares English language Wikipedia pages with "Simple English" Wikipedia pages. The analysis Kyle describes in this episode yields the intuitively pleasing histogram below. It summarizes the distribution of Flesch reading ease scores for 1000 pages examined from both Wikipedias.
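The formulas behind the two tests are simple functions of words per sentence and syllables per word. As a minimal sketch (not the analysis code from the episode), here is a Python implementation that uses a naive vowel-group heuristic for syllable counting; production tools typically use pronunciation dictionaries or more careful rules:

```python
import re


def count_syllables(word):
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def _counts(text):
    # Split sentences on terminal punctuation and words on letter runs.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(sentences), len(words), syllables


def flesch_reading_ease(text):
    # Higher scores indicate easier reading (roughly 0-100 for typical prose).
    n_sent, n_words, n_syll = _counts(text)
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (n_syll / n_words)


def flesch_kincaid_grade(text):
    # Maps the same inputs onto an approximate US school grade level.
    n_sent, n_words, n_syll = _counts(text)
    return 0.39 * (n_words / n_sent) + 11.8 * (n_syll / n_words) - 15.59
```

With short, monosyllabic sentences the reading-ease score climbs above 100, while long sentences full of polysyllabic words drive it toward zero, which matches the Simple English vs. standard Wikipedia comparison described in the episode.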