This chapter critiques the common 30% win rate benchmark used in experimentation, arguing that it does little to enhance organizational intelligence. The discussion highlights the need for more meaningful metrics and connects the limitations of benchmarks to the philosophical implications of generative AI in problem identification.
It's human nature to want to compare yourself or your organization against the competition, but how valuable are benchmarks to your business strategy? Benchmarks can be dangerous: because benchmark data is, by definition, external to your organization, you can rarely get your hands on all of its background and context. You can also argue that benchmarking is a lazy way to evaluate performance (at least, some co-hosts on this episode feel that way!). Eric Sandosham, founder and partner at Red & White Consulting Partners (and prolific writer), joins Moe, Tim, and Val to break down the problems with benchmarking and offer some alternatives to consider the next time you get the itch to reach for one! For complete show notes, including links to items mentioned in this episode and a transcript of the show, visit the show page.