Data Skeptic

Kyle Polich
Nov 11, 2016 • 34min

Unstructured Data for Finance

Financial analysis techniques for studying numeric, well-structured data are very mature. While using unstructured data in finance is not necessarily a new idea, the area is still very greenfield. On this episode, Delia Rusu shares her thoughts on the potential of unstructured data and discusses her work analyzing Wikipedia to help inform financial decisions. Delia's talk at PyData Berlin can be watched on YouTube (Estimating stock price correlations using Wikipedia). The slides can be found here, and all related code is available on GitHub.
Nov 4, 2016 • 11min

[MINI] AdaBoost

AdaBoost is a canonical example of the class of AnyBoost algorithms that create ensembles of weak learners. We discuss how a complex problem like predicting restaurant failure (which is surely caused by different problems in different situations) might benefit from this technique.
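To make the weak-learner idea concrete, here is a minimal sketch using scikit-learn; the data is synthetic and merely stands in for a hypothetical restaurant-failure dataset, not anything used in the episode.

```python
# Minimal AdaBoost sketch on synthetic data standing in for a hypothetical
# restaurant-failure dataset (features might be rent, foot traffic, review scores).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The default weak learner is a depth-1 decision tree ("stump"). After each round,
# AdaBoost reweights the training examples so misclassified cases get more attention,
# and the final prediction is a weighted vote over all of the stumps.
model = AdaBoostClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```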
Oct 28, 2016 • 37min

Stealing Models from the Cloud

Platform as a service is a growing trend in data science, where services like fraud analysis and face detection can be provided via APIs. Such services turn the underlying model into a black box for the consumer. But can the model be reverse engineered? In this episode, Florian Tramèr shares his work showing that it can. The paper Stealing Machine Learning Models via Prediction APIs is definitely worth your time to read if you enjoy this episode. Related source code can be found at https://github.com/ftramer/Steal-ML.
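The paper presents specific extraction attacks; purely as a rough illustration of the general idea (not the authors' method), a black-box prediction API can be approximated by querying it and fitting a local surrogate to its answers. Everything below is synthetic and hypothetical.

```python
# Illustrative sketch only: approximate a black-box classifier by querying it
# and training a surrogate model on its responses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the remote service: we can see its predictions but not its weights.
secret_model = LogisticRegression().fit(rng.normal(size=(500, 5)),
                                         rng.integers(0, 2, 500))

def query_api(X):
    return secret_model.predict(X)  # black-box access only

# "Attacker": sample query points, collect the API's labels, fit a surrogate.
X_query = rng.normal(size=(2000, 5))
surrogate = DecisionTreeClassifier().fit(X_query, query_api(X_query))

# How often the surrogate agrees with the black box on fresh inputs.
X_test = rng.normal(size=(1000, 5))
print("agreement:", (surrogate.predict(X_test) == query_api(X_test)).mean())
```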
Oct 21, 2016 • 13min

[MINI] Calculating Feature Importance

For machine learning models created with the random forest algorithm, there is no obvious diagnostic to tell you which features matter most to the model's output. Some straightforward but useful techniques exist, revolving around permuting or removing a feature and measuring the resulting drop in accuracy, or examining the decrease in Gini impurity at the splits where the feature is used. We broadly discuss these techniques in this episode.
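Both ideas are available off the shelf in scikit-learn; a minimal sketch on synthetic data (not anything discussed in the episode) might look like this:

```python
# Impurity-based (Gini) importances come for free from the fitted forest;
# permutation importance measures how much held-out accuracy drops when a
# single feature's values are shuffled.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Gini importances:", forest.feature_importances_)

perm = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", perm.importances_mean)
```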
Oct 14, 2016 • 30min

NYC Bike Share Rebalancing

As cities provide bike sharing services, they must also plan for how to redistribute bicycles as they inevitably build up at more popular destination stations. In this episode, Hui Xiong talks about the solution he and his colleagues developed to rebalance bike sharing systems.
Oct 7, 2016 • 13min

[MINI] Random Forest

The podcast discusses the Random Forest Algorithm, its use in ensemble learning, and its analogy to running a bookstore. It explores scenarios of helping customers find books, the wisdom of the crowds, and customer interactions. The hosts also delve into the distinction between machine learning algorithms and human judgment.
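As a rough, hypothetical illustration of the wisdom-of-the-crowds point, here is a small scikit-learn comparison of a single decision tree against a forest of them on synthetic data:

```python
# Many decorrelated trees, each trained on a bootstrap sample with random feature
# subsets, usually beat any single tree when their votes are combined.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("single tree CV accuracy:", cross_val_score(single_tree, X, y, cv=5).mean())
print("random forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())
```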
Sep 30, 2016 • 22min

Election Predictions

Jo Hardin joins us this week to discuss the ASA's Election Prediction Contest, a competition aimed at forecasting the results of the upcoming US presidential election. More details are available in Jo's blog post found here. You can find some useful R code for automatically gathering data from 538 on Jo's GitHub, and official contest details are available here. During the interview we also mention Daily Kos and 538.
Sep 23, 2016 • 9min

[MINI] F1 Score

The F1 score is a model diagnostic that combines precision and recall into a single score for model comparison. In this episode we discuss how it applies to selecting an interior designer.
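For reference, the F1 score is the harmonic mean of precision and recall, F1 = 2 · precision · recall / (precision + recall). A tiny sketch with made-up labels (say, 1 means a designer you would actually hire):

```python
# F1 = 2 * (precision * recall) / (precision + recall); labels below are invented.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 0, 1]   # hypothetical ground truth
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]   # hypothetical model predictions

p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
print("precision:", p, "recall:", r)
print("F1:", f1_score(y_true, y_pred), "==", 2 * p * r / (p + r))
```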
Sep 16, 2016 • 35min

Urban Congestion

Urban congestion affects every person living in a city of any reasonable size. Lewis Lehe joins us in this episode to share his work on downtown congestion pricing. We explore how different pricing mechanisms affect congestion, as well as how data visualization can inform choices. You can find examples of Lewis's work at setosa.io. The paper we discussed during the interview is Distance-dependent congestion pricing for downtown zones. On this episode, we discuss State of California data which can be found at pems.dot.ca.gov.
Sep 9, 2016 • 9min

[MINI] Heteroskedasticity

Heteroskedasticity describes a relationship between two variables in which the variance of one is not constant over the range of the other. For example, the variance in the length of a cat's tail almost certainly changes (grows) with age. On the other hand, the amount of chewing gum a person consumes probably has a consistent variance over a wide range of human heights. We also discuss some issues with the visualization shared in the tweet accompanying this episode.
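A quick, entirely synthetic simulation of the cat-tail example shows what heteroskedastic data looks like when plotted; the numbers are invented for illustration.

```python
# Response variance grows with age, so the scatter plot fans out to the right.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(0, 15, size=500)                    # cat ages in years (synthetic)
noise_sd = 0.5 + 0.4 * age                            # spread increases with age
tail_length = 5 + 1.2 * age + rng.normal(0, noise_sd)

plt.scatter(age, tail_length, s=8)
plt.xlabel("age (years)")
plt.ylabel("tail length (made-up units)")
plt.title("Variance increasing with age: heteroskedasticity")
plt.show()
```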
