Navigating Marketing Metrics and ROI at Uber
This chapter explores the complexities of predictive modeling and measuring marketing effectiveness within Uber, particularly in the context of churn, lifetime value, and brand marketing. The conversation underscores the necessity of clear communication around measurement expectations, as well as the importance of curiosity and strategic thinking in bridging the gap between data science and marketing efforts. Additionally, it highlights the balance between quantifying marketing ROI and fostering growth, emphasizing the need for resilience in both personal and professional dimensions.
What’s up everyone, today we have the pleasure of sitting down with Sundar Swaminathan, author of the experiMENTAL newsletter and part-time marketing and data science advisor!
Summary: After leading Uber's Marketing Data Science teams, Sundar shares insights that apply to both tech giants and startups. Beyond uncovering that Meta ads generated zero incremental value (saving $30 million annually), his teams mastered measuring brand impact through geo testing and predicting LTV from first-week behaviors. Small companies can adapt these methods through strategic A/B testing and simplified attribution models, even with limited sample sizes. He also covers building data science teams that value business impact over technical complexity, and the payoff of staying curious, like when direct driver engagement revealed that recommending Saturday afternoon starts over Friday peak hours improved retention.
About Sundar
Marketing Incrementality Testing Reveals Meta Ads Ineffective at Uber
Performance marketing often reveals surprising truths about channel effectiveness, as demonstrated by a fascinating case study from Uber's marketing operations. When confronted with unstable customer acquisition costs (CAC) that fluctuated 10-20% week over week despite consistent ad spend on Meta platforms, Uber's performance marketing team, led by Sundar, decided to investigate the underlying causes.
The investigation began when the team noticed significant volatility in signup rates despite maintaining steady advertising investments. This inconsistency prompted a deeper analysis of Meta's effectiveness as a primary performance marketing channel. The timing of this analysis was particularly relevant, as Uber had already achieved substantial market penetration eight years after its launch, especially in major urban markets where awareness wasn't the primary barrier to adoption.
Through rigorous data analysis, the team implemented a three-month incrementality test to measure Meta's true impact on user acquisition. The test utilized a classic A/B testing methodology, comparing a control group receiving no paid ads against a treatment group exposed to Meta advertising. The results were striking: Meta advertising showed virtually no incremental value in driving new user acquisition, a finding that was validated by Meta's own data science team.
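To make the mechanics concrete, here is a minimal sketch of how the readout of such a holdout test might be computed. The counts, group sizes, and the choice of a two-proportion z-test below are illustrative assumptions, not Uber's actual data or methodology.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: users in each split and how many of them signed up.
holdout_signups, holdout_users = 4_950, 500_000      # saw no paid Meta ads
treatment_signups, treatment_users = 5_010, 500_000  # exposed to Meta ads

# Two-proportion z-test: is the treatment signup rate distinguishable from the holdout's?
_, p_value = proportions_ztest(
    count=[treatment_signups, holdout_signups],
    nobs=[treatment_users, holdout_users],
)

holdout_rate = holdout_signups / holdout_users
treatment_rate = treatment_signups / treatment_users
incremental_lift = (treatment_rate - holdout_rate) / holdout_rate

print(f"holdout signup rate:   {holdout_rate:.3%}")
print(f"treatment signup rate: {treatment_rate:.3%}")
print(f"incremental lift: {incremental_lift:+.1%} (p = {p_value:.3f})")
# A lift statistically indistinguishable from zero, as in the episode,
# suggests the channel is not driving incremental signups.
```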
The outcome of this experiment led to a significant strategic shift, resulting in annual savings of approximately $30 million in the U.S. market alone. While this figure might seem modest for a company of Uber's scale, its implications were far-reaching when considered across global markets. The success of this experiment also highlighted the importance of data-driven decision-making and the willingness to challenge assumptions about established marketing channels.
Key takeaway: Established marketing channels should never be exempt from rigorous effectiveness testing. Regular incrementality testing can reveal unexpected insights about channel performance and lead to substantial cost savings. Marketing teams should prioritize data-driven decision-making over assumptions about channel effectiveness, even for seemingly essential platforms.
How to Run Marketing Experiments With Limited Data
Most companies don’t have the volume of signups or users that an Uber does. Marketing experiments require a mindset shift when working with small data samples. While A/B testing remains the gold standard for measuring marketing effectiveness, Sundar believes that companies with limited data can still validate their marketing efforts through strategic pre-post testing approaches.
Pre-post testing, when properly implemented, serves as a valuable tool for measuring marketing impact. The key lies in isolation: controlling variables and measuring the impact of a single change. For instance, a marketplace company successfully conducted a pre-post test on branded search keywords in France by isolating specific terms in a defined region. This focused approach provided reliable insights despite not having the massive data volumes typically associated with incrementality testing.
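A pre-post comparison like this can be sketched in a few lines. The weekly numbers and the use of Welch's t-test below are assumptions for illustration, not the marketplace company's actual analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical weekly branded-search conversions for one isolated region.
pre_weeks  = np.array([412, 398, 430, 405, 419, 401])   # before the change
post_weeks = np.array([371, 365, 388, 360, 377, 369])   # after the change

change = post_weeks.mean() / pre_weeks.mean() - 1
_, p_value = stats.ttest_ind(pre_weeks, post_weeks, equal_var=False)  # Welch's t-test

print(f"average change after the intervention: {change:+.1%} (p = {p_value:.3f})")
# The comparison only means something if nothing else changed in the region
# during the window: no other campaigns, pricing changes, or seasonal shocks.
```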
That being said, Sundar adds that early-stage companies should prioritize high-impact experiments capable of delivering substantial results rather than testing tiny changes whose effects will barely be detectable. With small sample sizes, tests should target minimum detectable effects (MDE) of 30-40%. These larger effect sizes remain measurable even with limited data, making them ideal for fundamental changes such as exploring new ideal customer profiles (ICPs) or revamping core value propositions, rather than pursuing minor optimizations.
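A quick power check makes the MDE point tangible. The baseline conversion rate and signup volume below are assumptions chosen for illustration; statsmodels' standard power calculator for proportions does the rest.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10     # assumed baseline conversion rate
nobs_per_arm = 150 * 4   # e.g. ~150 signups per week per arm, over four weeks

analysis = NormalIndPower()
for relative_lift in (0.10, 0.20, 0.30, 0.40):
    effect = proportion_effectsize(baseline_rate * (1 + relative_lift), baseline_rate)
    power = analysis.solve_power(effect_size=effect, nobs1=nobs_per_arm, alpha=0.05)
    print(f"relative lift of {relative_lift:.0%}: power ≈ {power:.2f}")
# With only a few hundred users per arm, 10-20% lifts are nearly invisible,
# while the 30-40% effects of a fundamental change at least stand a chance.
```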
An example Sundar recalls from his time at a travel tech startup demonstrates the value of running A/B tests even with limited data. Despite having only 100-200 weekly signups, the team detected a 40% conversion drop after modifying their onboarding flow. While the test might have been considered "poorly powered" by strict statistical standards, it successfully prevented a significant negative impact on the business. This illustrates how even small-scale testing can provide crucial insights: it's better to catch a catastrophic drop with 60% confidence than to miss it entirely while waiting for 95%.
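As a rough sketch of what that small-sample readout could look like, the counts below are hypothetical, and Fisher's exact test is used because it behaves well on small tables.

```python
from scipy import stats

# Hypothetical 2x2 table after one week on the new onboarding flow:
# rows = [old flow, new flow], columns = [converted, did not convert].
old_flow = [45, 105]   # ~30% conversion on ~150 signups
new_flow = [27, 123]   # ~18% conversion, roughly a 40% relative drop
_, p_value = stats.fisher_exact([old_flow, new_flow])

relative_drop = 1 - (new_flow[0] / sum(new_flow)) / (old_flow[0] / sum(old_flow))
print(f"relative conversion drop: {relative_drop:.0%} (p = {p_value:.3f})")
# Even when a result like this only buys moderate confidence by textbook
# standards, a drop this large is a signal worth acting on immediately.
```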
The confidence level in marketing experiments operates on a spectrum, with A/B tests providing the highest confidence and pre-post tests offering valuable but less definitive insights. Success depends on maintaining experimental discipline, carefully controlling variables, and understanding the tradeoff between the confidence a method can deliver and the practical constraints it operates under. Marketing teams must balance their confidence requirements against their risk tolerance when designing and interpreting tests.
Key takeaway: Companies with limited data should focus on measuring high-impact marketing changes through carefully controlled pre-post tests. Success comes from isolating variables, targeting substantial effect sizes, and maintaining experimental discipline. This approach enables meaningful measurement while acknowledging the practical constraints of smaller data sets.
The Difference Between A/B Testing and Incrementality Testing
Marketing experimentation terminology often creates unnecessary complexity in what should be straightforward concepts. The fundamental structure of both A/B testing and incrementality testing follows the same principle: comparing outcomes between groups that receive different treatments.
Statistical analysis remains consistent across both testing approaches. Whether using Bayesian or frequentist methods, the underlying comparison examines differences between groups, regardless of what those groups receive. The statistical calculations remain indifferent to whether one group receives no treatment (as in incrementality tests) or a variation of the treatment (as in traditional A/B tests).
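One way to see this equivalence: the same comparison function works whether the "control" received nothing at all or simply a different creative. The numbers and helper below are illustrative assumptions, not a prescribed workflow.

```python
from statsmodels.stats.proportion import proportions_ztest

def compare(conversions_a, users_a, conversions_b, users_b):
    """Return the rate difference and p-value between two groups."""
    diff = conversions_a / users_a - conversions_b / users_b
    _, p = proportions_ztest([conversions_a, conversions_b], [users_a, users_b])
    return diff, p

# Incrementality framing: treatment saw Meta ads, control saw none.
print(compare(5_010, 500_000, 4_950, 500_000))
# A/B framing: both groups saw ads, just with different creative.
print(compare(5_200, 500_000, 4_900, 500_000))
```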
Incrementality testing extends beyond simple presence versus absence comparisons. For example, marketers can test spending increments, comparing a higher budget level against current spend rather than simply switching a channel on or off.