Linear Digressions
Ben Jaffe and Katie Malone
Linear Digressions is a podcast about machine learning and data science. Machine learning is being used to solve a ton of interesting problems, and to accomplish goals that were out of reach even a few short years ago.
Episodes
Nov 6, 2017 • 22min
Machine Learning: The High Interest Credit Card of Technical Debt
This week, we've got a fun paper by our friends at Google about the hidden costs of maintaining machine learning workflows. If you've worked in software before, you're probably familiar with the idea of technical debt: the inefficiencies that crop up in code when you're trying to go fast. You take shortcuts, hard-code variable values, skimp on the documentation, and generally write not-that-great code in order to get something done quickly, and then end up paying for it later on. This is technical debt, and it's particularly easy to accrue with machine learning workflows. That's the premise of this episode's paper.
Oct 30, 2017 • 15min
Improving Upon a First-Draft Data Science Analysis
There are a lot of good resources out there for getting started with data science and machine learning, where you can walk through starting with a dataset and ending up with a model and set of predictions. Think something like the homework for your favorite machine learning class, or your most recent online machine learning competition. However, if you've ever tried to maintain a machine learning workflow (as opposed to building it from scratch), you know that taking a simple modeling script and turning it into clean, well-structured and maintainable software is way harder than most people give it credit for. That said, if you're a professional data scientist (or want to be one), this is one of the most important skills you can develop.
In this episode, we'll walk through a workshop Katie is giving at the Open Data Science Conference in San Francisco in November 2017, which covers building a machine learning workflow that's more maintainable than a simple script. If you'll be at ODSC, come say hi, and if you're not, here's a sneak preview!
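If you want a taste of what "more maintainable than a simple script" can look like, here's a tiny sketch of one early step: pulling preprocessing and modeling into a single scikit-learn Pipeline built by a function you can test and reuse. This is our own toy example (toy dataset and all), not the actual workshop material.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

def build_pipeline():
    # Keeping construction in a function makes it easy to test and reconfigure.
    return Pipeline([
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])

def main():
    # Stand-in dataset; in practice this would be your own data-loading step.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    pipeline = build_pipeline()
    pipeline.fit(X_train, y_train)
    print("held-out accuracy:", pipeline.score(X_test, y_test))

if __name__ == "__main__":
    main()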
Oct 23, 2017 • 17min
Survey Raking
It's quite common for survey respondents not to be representative of the larger population from which they are drawn. But if you're a researcher, you need to study the larger population using data from your survey respondents, so what should you do? Reweighting the survey data, so that things like demographic distributions look similar between the survey and general populations, is a standard technique, and in this episode we'll talk about survey raking, a way to calculate survey weights when there are several distributions of interest that all need to be matched.
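To make raking a little more concrete, here's a small sketch on made-up data: start everyone at weight 1, then repeatedly rescale each demographic group until the weighted sample matches known population margins for gender and age group. The respondents and target shares below are invented purely for illustration.

import numpy as np
import pandas as pd

respondents = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age":    ["18-34", "35+", "18-34", "35+", "35+", "18-34", "18-34", "35+"],
})
# Assumed population margins we want the weighted sample to match.
targets = {
    "gender": {"F": 0.52, "M": 0.48},
    "age":    {"18-34": 0.30, "35+": 0.70},
}

weights = np.ones(len(respondents))
for _ in range(50):  # iterate until the margins settle down
    for var, shares in targets.items():
        for level, share in shares.items():
            mask = (respondents[var] == level).to_numpy()
            current = weights[mask].sum() / weights.sum()
            weights[mask] *= share / current  # scale this group up or down

respondents["weight"] = weights
for var in targets:
    # Weighted shares should now match the target margins for each variable.
    print(respondents.groupby(var)["weight"].sum() / weights.sum())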
Oct 16, 2017 • 16min
Happy Hacktoberfest
It's the middle of October, so you've already made two pull requests to open source repos, right? If you have no idea what we're talking about, spend the next 20 minutes or so with us talking about the importance of open source software and how you can get involved. You can even get a free t-shirt!
Hacktoberfest main page: https://hacktoberfest.digitalocean.com/#details
Oct 9, 2017 • 18min
Re-Release: Kalman Runners
In honor of the Chicago marathon this weekend (and due in large part to Katie recovering from running in it...) we have a re-release of an episode about Kalman filters, which is part algorithm, part elaborate metaphor for figuring out how fast you're going when you're running a race but don't have a watch. There's also a tiny toy code sketch of the idea below, after Katie's race report.
Katie's Chicago race report:
miles 1-13: light ankle pain, lovely cool weather, the most fun EVAR
miles 13-17: no more ankle pain but quads start getting tight, it's a little more effort
miles 17-20: oof, really tight legs but still plenty of gas in the tank.
miles 20-23: it's warmer out now, legs hurt a lot but running through Pilsen and Chinatown is too fun to notice
mile 24: ugh cramp everything hurts
miles 25-26.2: awesome crowd support, really tired and loving every second
Final time: 3:54:35
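And here's that toy sketch, if you'd like to poke at the Kalman filter idea yourself: a bare-bones 1D constant-velocity filter that estimates a runner's speed from noisy position observations, like occasional glances at mile markers. This is our own illustration, not code from the original episode, and all the noise settings are made up.

import numpy as np

dt = 60.0                      # seconds between observations
F = np.array([[1.0, dt],       # state transition: position += speed * dt
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # we only observe position, not speed
Q = np.diag([1.0, 0.01])       # process noise: speed drifts a little
R = np.array([[100.0]])        # measurement noise: mile markers are fuzzy

x = np.array([[0.0], [3.0]])   # initial guess: start line, ~3 m/s
P = np.eye(2) * 10.0           # initial uncertainty

true_speed = 3.4               # m/s, hidden from the filter
rng = np.random.default_rng(0)
for step in range(1, 11):
    # Noisy observation of how far along the course we are.
    z = np.array([[true_speed * dt * step + rng.normal(0, 10)]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    print(f"t={step*dt:5.0f}s  estimated speed = {x[1, 0]:.2f} m/s")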
Oct 2, 2017 • 19min
Neural Net Dropout
Neural networks are complex models with many parameters and can be prone to overfitting. There's a surprisingly simple way to guard against this: randomly drop hidden units (and their connections) during training, a technique known as dropout. It seems counterintuitive that undermining the structural integrity of the neural net makes it robust against overfitting, but in the world of neural nets, weirdness is just how things go sometimes.
Relevant links: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
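Here's a quick sketch of the core trick applied to some fake activations. Note that this is the "inverted" dropout formulation most modern frameworks use (rescale the surviving units at training time), which matches the paper's approach of scaling at test time in expectation; the numbers below are purely illustrative.

import numpy as np

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    if not training or p == 0.0:
        return activations          # at test time, leave activations alone
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p   # keep each unit with prob 1 - p
    return activations * mask / (1.0 - p)       # rescale so the expected value is unchanged

hidden = np.random.default_rng(1).random((4, 8))  # fake activations: 4 examples, 8 units
print(dropout(hidden, p=0.5, rng=np.random.default_rng(0)))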
Sep 25, 2017 • 30min
Disciplined Data Science
As data science matures as a field, it's becoming clearer what attributes a data science team needs to have to elevate their work to the next level. Most of our episodes are about the cool work being done by other people, but this one summarizes some thinking Katie's been doing herself around how to guide data science teams toward more mature, effective practices. We'll go through five key characteristics of great data science teams, which we collectively refer to as "disciplined data science," and why they matter.
Sep 18, 2017 • 28min
Hurricane Forecasting
It's been a busy hurricane season in the Southeastern United States, with millions of people making life-or-death decisions based on the forecasts around where the hurricanes will hit and with what intensity. In this episode we'll deconstruct those models, talking about the different types of models, the theory behind them, and how they've evolved through the years.
Sep 11, 2017 • 18min
Finding Spy Planes with Machine Learning
There are law enforcement surveillance aircraft circling over the United States every day, and in this episode, we'll talk about how some folks at BuzzFeed used public data and machine learning to find them. The fun thing here, in our opinion, is the blend of intrigue (spy planes!) with tech journalism and a heavy dash of publicly available and reproducible analysis code so that you (yes, you!) can see exactly how BuzzFeed identifies the surveillance planes.
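BuzzFeed's own analysis code is publicly available if you want the real thing; purely to give the flavor of the approach, here's a hypothetical scikit-learn version with invented feature names and simulated data: train a classifier (a random forest, as one reasonable choice) on track features of known surveillance planes, then score every other aircraft by how surveillance-like its flights look.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
flights = pd.DataFrame({
    "mean_speed":    rng.normal(120, 30, n),      # knots (made-up feature)
    "mean_altitude": rng.normal(5000, 1500, n),   # feet (made-up feature)
    "turn_rate":     rng.exponential(1.0, n),     # circling planes turn a lot
    "flight_hours":  rng.exponential(2.0, n),
})
# Pretend label: 1 = known surveillance aircraft, 0 = ordinary traffic.
is_spy_plane = (flights["turn_rate"] > 2.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(flights, is_spy_plane)

# Rank aircraft by how "surveillance-like" their tracks look.
flights["spy_score"] = model.predict_proba(flights)[:, 1]
print(flights.sort_values("spy_score", ascending=False).head())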
Sep 4, 2017 • 23min
Data Provenance
Software engineers are familiar with the idea of versioning code, so you can go back later and revive a past state of the system. For data scientists who might want to reconstruct past models, though, it's not just about keeping the modeling code. It's also about saving a version of the data that made the model. There are a lot of other benefits to keeping track of datasets, so in this episode we'll talk about data lineage or data provenance.
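As a concrete (if minimal) illustration of the idea, and not any particular tool's API, here's one lightweight way to record provenance: fingerprint the exact training data file with a hash and save it alongside the model version, so you can later verify which data produced which model.

import hashlib
import json
import datetime

def fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 hash of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_provenance(data_path, model_version, out_path="provenance.json"):
    # Store just enough to tie a trained model back to the exact data it saw.
    record = {
        "data_path": data_path,
        "data_sha256": fingerprint(data_path),
        "model_version": model_version,
        "trained_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(out_path, "w") as f:
        json.dump(record, f, indent=2)
    return record

# Usage (hypothetical file name): record_provenance("training_data.csv", model_version="2017-09-04")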