Chad Sanderson breaks down the most successful types of experimentation based on company size and growth ambitions
For Chad Sanderson, head of product – data platform at Convoy, the roles of data and experimentation are inextricably intertwined.
At Convoy, he oversees the end-to-end data platform team — which spans data engineering, machine learning, experimentation and data pipelines — alongside a multitude of other teams, all in service of helping thousands of carriers ship freight more efficiently. The role has given him a broad overview of the process, from ideation and construction to execution.
As a result, Chad has had a front-row seat that most practitioners never get: the end-to-end process of experimentation, from hypothesis, data definitions and analysis to reporting and year-end financials. Naturally, he had a few thoughts to share with AB Tasty’s VP of Marketing Marylin Montoya in their conversation on the experimentation discipline and the complexities of identifying trustworthy metrics.
Introducing experimentation as a discipline
Experimentation, despite all of its accolades, is still relatively new. You’ll be hard-pressed to find a great body of literature or an academic treatment of it (although Ronny Kohavi has penned some thoughts on the subject). Furthermore, experimentation has not been considered a data science discipline, especially when compared to areas like machine learning or data warehousing.
While there are a few tips here and there available from blogs, you end up missing out on the deep technical knowledge and best practices of setting up a platform, building a metrics library and selecting the right metrics in a systematic way.
Chad sees experimentation’s accessibility as a double-edged sword. Because it’s easy to start from a marketing standpoint, many companies have yet to apply the same rigor they bring to other data science fields. But as a business grows, so do the maturity and complexity of its experimentation. At that point, the literature on creating and scaling a platform is scant, which leaves the field undervalued and makes it hard to recruit the right profiles.
When small-scale experimentation is your best bet
When you’re a massive-scale company — such as Microsoft or Google, with different business units, data sources, technologies and operations — rolling out new features or changes is an incredibly risky endeavour, given that any mistake could impact millions of users. Imagine accidentally introducing a bug into Microsoft Word or PowerPoint: the impact on the bottom line would be severe.
The best way for these companies to experiment is with a cautious, small-scale approach. The aim is to focus on immediate action, catching things quickly in real time and rolling them back.
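The cautious approach Chad describes is often implemented as a small percentage rollout with an automatic rollback trigger. Below is a minimal, hypothetical sketch of that idea — none of these names or thresholds come from Convoy or Microsoft; the hashing scheme, 1% exposure and 2% error-rate guardrail are all illustrative assumptions.

```python
import hashlib

ROLLOUT_PERCENT = 1.0  # hypothetical: expose the change to ~1% of users


def in_rollout(user_id: str, feature: str, percent: float = ROLLOUT_PERCENT) -> bool:
    """Deterministically bucket a user into a small rollout cohort.

    Hashing user_id + feature yields a stable bucket in [0, 100), so the
    same user always sees the same variant, and cohorts are independent
    across features.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10000) / 100.0  # 0.00 .. 99.99
    return bucket < percent


def safe_rollout(users, feature, guardrail_error_rate, max_error_rate=0.02):
    """Expose a small cohort and roll back immediately if a guardrail breaches.

    guardrail_error_rate is a caller-supplied function returning the
    observed error rate for the exposed cohort (an illustrative stand-in
    for real-time monitoring).
    """
    exposed = [u for u in users if in_rollout(u, feature)]
    if guardrail_error_rate(exposed) > max_error_rate:
        return "rolled_back", exposed  # catch it quickly, revert the change
    return "healthy", exposed
```

The key property is determinism: a user is either in or out of the cohort for the lifetime of the rollout, which keeps measurement clean while the blast radius of any bug stays small.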
On the other hand, if you’re a startup in a hyper-growth stage, your approach will vastly differ. These smaller businesses typically have to show double-digit gains with every new feature rollout to their investors, so their actions are focused more on proving the feature’s positive impact and the longevity of its success.
Make metrics your trustworthy allies
Every business will have very different metrics depending on what they’re looking for; it’s essential to define what you want before going down the path of experimentation and building your program.
The questions you’ll need to ask yourself are: what do my decision-makers care about? What is leadership looking to achieve? This is the key to defining the set of metrics that actually moves your business in the right direction. Chad recommends doing this by distinguishing your front-end and back-end metrics: the former is readily available, the latter not so much. Client-side metrics — what he refers to as front-end metrics — measure things like revenue per transaction. All metrics then lead back to revenue, which in and of itself is not necessarily a bad thing, but it does mean your decisions rest on revenue growth rather than on proving the scalability or winning impact of a feature.
Chad’s advice is to start with the measurement problems that you have, and from there, build out your experimentation culture, build out the system and lastly choose a platform.
What else can you learn from our conversation with Chad Sanderson?
- Different experimentation needs for engineering and marketing
- Building a culture of experimentation from the top down
- The downside of scaling MVPs
- Why marketers are flagbearers of experimentation
About Chad Sanderson
Chad Sanderson is an expert on digital experimentation and analysis at scale. He is a product manager, writer and public speaker, who has given lectures on topics such as advanced experimentation analysis, the statistics of digital experimentation, small-scale experimentation for small businesses and more. He previously worked as senior program manager for Microsoft’s AI platform. Prior to that, Chad worked for Subway’s experimentation team as a personalization manager.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.