Data Skeptic

Detecting Drift

Jun 11, 2021
27:19
Chapters
1. Introduction (00:00 • 2min)
2. How Did You Get a Taste for Machine Learning? (01:48 • 2min)
3. Is There a Difference Between Overfit and Drift? (03:28 • 2min)
4. Is It Possible to Retrain a Machine Learning Model? (05:22 • 2min)
5. How to Measure a Model's Drift? (07:06 • 2min)
6. Is the Underlying Data Distribution Changing? (08:37 • 2min)
7. Is There a Gap in Confidence? (10:21 • 2min)
8. How Do We Know if Something Is a Seven? (12:01 • 2min)
9. Vertica Analytics for Pioneers - vertica.com/insights (14:28 • 3min)
10. Is the Cost of Retraining a Good Thing? (17:37 • 3min)
11. Is the CPM Scalable? (20:27 • 2min)
12. Is This Really Necessary? (22:24 • 2min)
13. Change Point Modelling (24:35 • 2min)
14. Machine Learning and Data Science in Practical Situations (26:15 • 58sec)

Episode notes

Sam Ackerman, Research Data Scientist at IBM Research Labs in Haifa, Israel, joins us today to talk about his work "Detection of Data Drift and Outliers Affecting Machine Learning Model Performance Over Time."
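The episode's core idea is measuring drift by comparing the distribution of data a deployed model sees against the distribution it was trained on. As a minimal illustration of that idea (a generic sketch using the two-sample Kolmogorov-Smirnov statistic, not Ackerman's specific method; the sample values are made up):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples (0 = identical
    distributions, values near 1 = strongly shifted)."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    max_gap = 0.0
    for v in set(a) | set(b):
        # Empirical CDF of each sample evaluated at v.
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

# Hypothetical feature values: training-time vs. production.
train = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6]
live  = [0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 1.0, 1.1]

drift_score = ks_statistic(train, live)
print(drift_score)  # a large gap here would suggest drift
```

In practice this statistic feeds a threshold or hypothesis test per feature; the change-point modelling (CPM) discussed later in the episode instead watches such a score over time and flags the point where its behaviour changes.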

Check out Sam's IBM statistics/ML blog at: http://www.research.ibm.com/haifa/dept/vst/ML-QA.shtml