Logan: DAGs work really great sort of the smaller they are. And so you can personally inject a DAG that is less likely to be wrong. When you grow things, when you make them really big, then you just have a lot more surface area to get wrong. It takes a lot more time for you to really build up a DAG that you're confident in.
What causes us to keep returning to the topic of causal inference on this show? DAG if we know! Whether or not you're familiar with directed acyclic graphs (or… DAGs) in the context of causal inference, this episode is likely for you! DJ Rich, a data scientist at Lyft, joined us to discuss causality — why it matters, why it's tricky, and what happens when you tackle causally modelling the complexity of a large-scale, two-sided market! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.