The Analytics Power Hour

Michael Helbling, Moe Kiss, Tim Wilson, Val Kroll, and Julie Hoyer
Dec 3, 2019 • 55min

#129: Data Accuracy and Completeness with Yali Sassoon

How accurate is your data? How accurate is any of our data? If our data is more accurate, will we make better decisions? How MUCH better? Why do the show blurbs of late have so many questions? THAT is a question we can ACCURATELY answer: because the shows grapple with challenging questions! On this episode, Snowplow co-founder Yali Sassoon joined us to chat about the nuts and bolts of data accuracy: the inherent messiness of client-side tracking (but, also, the limitations of server-side tracking), strategies for incrementally improving data accuracy (and the costs therein), and the different types of scenarios where different aspects of data accuracy matter in different ways! Pour yourself a drink (a 2 oz. shot of a fine Scotch will do... which would be 59.1471 ml if you want an accurate and precise metric pour), settle in, and give it a listen! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
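As an aside (not from the episode itself), the "metric pour" joke is a handy way to see the accuracy-versus-precision distinction the conversation circles around. A minimal sketch, assuming the standard conversion of 29.5735295625 ml per US fluid ounce:

```python
# Illustrative only: accuracy vs. precision, using the blurb's "metric pour".
# 1 US fluid ounce = 29.5735295625 ml (exact by definition).
OZ_TO_ML = 29.5735295625

pour_oz = 2.0
pour_ml = pour_oz * OZ_TO_ML

# Extra decimal places make a number more precise, but they only make it more
# accurate if the underlying measurement (the pour itself) can be trusted.
print(f"Precise: {pour_ml:.4f} ml")   # 59.1471 ml, matching the blurb
print(f"Rounded: {pour_ml:.0f} ml")   # 59 ml -- less precise, no less accurate
```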
Nov 19, 2019 • 55min

#128: Neuroscience, the Customer Experience, and the Data Therein with Diana Lucaci

READ ME!!! LISTEN!!! DO YOU KNOW WHY THIS IS IN ALL CAPS?! IS IT RAISING YOUR HEART RATE?! IS IT MAKING YOU A LITTLE IRRITATED?! IT MIGHT BE! IF IT IS, WE COULD MEASURE IT, AND MAYBE WE WOULD REALIZE THAT WE WERE INDUCING A SUBCONSCIOUS EMOTIONAL RESPONSE AND REALLY SHOULD TURN OFF THE CAPS LOCK! That's the topic of this episode: the brain. Specifically: neuroscience. Even more specifically: neurodesign and neuromarketing and the measurement and analytics therein. We're talking EEGs, eye tracking, predictive eye tracking, heart rate monitoring, and the like (and why it matters) with Diana Lucaci from True Impact. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Nov 5, 2019 • 52min

#127: Is Multi-Touch Marketing Attribution Dead? Should It Be? With Priscilla Cheung

Multi-touch attribution is like fat-free cheese: it sounds like a great idea, it seems like technology would have made it amazing and delicious by now, and, yet, the reality is incredibly unsatisfying. Since we've recently covered how browsers are making the analyst's lot in life more difficult, and since multi-touch attribution is affected by those changes, we figured it was high time to revisit the topic. It's something we've covered before (twice, actually). But interest in the topic has not diminished, while a claim could be made that reality has gone from being merely a cold dishrag to the face to being a bucket of ice over the head. We sat down with Priscilla Cheung to hash out the topic. No fat-free cheese was consumed during the making of the episode. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Oct 22, 2019 • 56min

#126: When the Data Contradicts Conventional Wisdom with Emily Oster

Did you hear the one about the Harvard-educated economist who embraced her inner wiring as a lateral thinker to explore topics ranging from HIV/AIDS in Africa to the impact of Hepatitis B on male-biased sex ratios in China to the range of advice and dicta doled out by doctors and parents and in-laws and friends about what to do (and not do!) during pregnancy? It's a data-driven tale if ever there was one! Emily Oster, economics professor at Brown University and bestselling author of Expecting Better and Cribsheet, joined the show to chat about what happens when the evidence (the data!) doesn't match conventional wisdom, and strategies for presenting and discussing topics where that's the case. Plus causal inference! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Oct 8, 2019 • 55min

#125: Modern Browsers and the Destruction of the Analyst's Dreams with Cory Underwood

Are you down with ITP? What about ETP? Are you pretty sure that the decline in returning visitors to your site that has everyone in a tizzy is largely due to increasingly restrictive cookie handling by browsers? Do you really, really, REALLY want Google, Apple, Mozilla, and even Microsoft to get on the same page when it comes to cookie handling and JavaScript subtleties? So many questions! Lucky for us (and you!), Measure Slack legend (and L.L. Bean Senior Programmer/Analyst) Cory Underwood has some answers. Or, at least, he will depress you in delightful ways. For complete show notes, including links to items mentioned in this episode, a transcript of the show, and an update on ITP 2.3 from Cory, visit the show page.
Sep 24, 2019 • 59min

#124: Image-ine What the Analyst Can Do Using Machine Vision with Ali Vanderveld

Have you ever noticed that 68.2% of the people who explain machine learning use a "this picture is a cat" example, and another 24.3% use "this picture is a dog"? Is there really a place for machine learning and the world of computer vision (or machine vision, which we have conclusively determined is a synonym) in the real world of digital analytics? The short answer is the go-to answer of every analyst: it depends. On this episode, we sat down with Ali Vanderveld, Director of Data Science at ShopRunner, to chat about some real world applications of computer vision, as well as the many facets and considerations therein! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Sep 10, 2019 • 1h 1min

#123: Ad Fraud with Augustine Fou

What percentage of digital ad impressions and clicks do you think is actually the work of non-human bots? Pick a number. Now double it. Double it again. You're getting close. A recent study by Pixalate found that 19 percent of traffic from programmatic ads in the U.S. is fraudulent. David Raab from the CDP Institute found this number to be "optimistic." Ad fraud historian Dr. Augustine Fou, our guest on this show, has compelling evidence that the actual number could easily be north of 50 percent. Why? Who benefits? Why is it so hard to stamp out? Is it illegal (it isn't!)? We explore these topics and more on this episode! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Aug 27, 2019 • 60min

#122: Dealing with Disparate Stakeholders with Astrid Illum

It's 1:00 AM, and you can't sleep. The paid search manager needs to know whether brand keywords can be turned off without impacting revenue. The product team needs the latest A/B test results analyzed before they can start on their next sprint. The display media intern urgently needs your help figuring out why the campaign tracking parameters he added for the campaign that launches in two days are breaking the site (you're pretty sure he's confusing "&" and "?" again). And the team running the site redesign needs to know YESTERDAY what fields they need to include in the new headless CMS to support analytics. You're pulled in a million directions, and every request is valid. How do you manage your world without losing your sanity? On this episode, analytics philosopher Astrid Illum from DFDS joins the gang to discuss those challenges. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
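For the curious (and for the hypothetical intern), the "&" vs. "?" mix-up is just a question of query-string separators: the first parameter on a URL follows "?", and every subsequent one follows "&". A minimal sketch, not from the episode, with a made-up helper name and illustrative utm_* parameters:

```python
# A small sketch of appending campaign tracking parameters with the correct
# separator. urlparse/urlencode are from the Python standard library.
from urllib.parse import urlencode, urlparse

def add_campaign_params(url: str, params: dict) -> str:
    """Append tracking parameters, using '?' if the URL has no query yet, else '&'."""
    separator = "&" if urlparse(url).query else "?"
    return f"{url}{separator}{urlencode(params)}"

print(add_campaign_params(
    "https://example.com/landing",
    {"utm_source": "display", "utm_campaign": "launch"},
))
# https://example.com/landing?utm_source=display&utm_campaign=launch
```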
Aug 13, 2019 • 50min

#121: Onboarding the Analyst

Somewhere between "welcome to the company, now get to work!" and weeks of tedious orientation sessions (that, presumably, include a few hours with the legal department explaining that, should you be on a podcast, you need to include a disclaimer that the views expressed on the podcast are your own and not those of the company for which you now work), is a happy medium when it comes to onboarding an analyst. What is that happy medium, and how does one find it? It turns out the answer is that favorite of analyst phrases: "it depends." Unsatisfying? Perhaps. But, listeners who have been properly onboarded to this podcast know that "unsatisfying" is our bread and butter. So, in this episode, Moe and Michael share their thoughts and their emotional intelligence on the subject of analyst onboarding, while Tim works to make up for recent deficiencies in the show's use of the "explicit" tag. For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
Jul 30, 2019 • 49min

#120: Causal Inference with Bradley Fay

Listen. Really. That's what you can do. You can listen to this episode and find out what you learn. Or you can NOT listen to the show and NOT find out what you learn. You can't do both, which means that, one way or the other, you WILL be creating your very own counterfactual! That, dear listener, is a fundamental concept when it comes to causal inference. Smart analysts and data scientists the world over are excited about the subject, because it provides both a way of thinking and a set of techniques for actually getting at causality. Bradley Fay from DraftKings is one of those smart data scientists, so the gang sat down with him to discuss the subject! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.
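If the listen/don't-listen framing is new to you, here is a toy sketch (not from the episode, with made-up numbers) of the idea it captures: every listener has two potential outcomes, but we only ever get to observe one of them, and the other is the missing counterfactual:

```python
# A toy illustration of potential outcomes: for each listener we record
# (did they listen, outcome if they listened, outcome if they did not).
listeners = [
    (True,  1, 0),
    (False, 1, 0),
    (True,  1, 1),
    (False, 0, 0),
]

# The observed outcome matches the choice actually made; the other potential
# outcome is the counterfactual that is never observed in the data.
for listened, y1, y0 in listeners:
    observed = y1 if listened else y0
    counterfactual = y0 if listened else y1
    print(f"listened={listened}: observed={observed}, counterfactual={counterfactual} (unobserved)")
```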
