How To Build a Culture of Experimentation at Work
Unlock business growth through experimentation
While working at VWO, I got the opportunity to study businesses like Netflix, Amazon, Booking, Google, and Microsoft Bing — organizations that have developed something we called a culture of experimentation.
We interviewed veterans like Ronny Kohavi, Lucas Vermeer, Brian Massey, and Chris Goward to learn how they drive growth through conversion optimization.
One theme that echoed through these conversations was that all of these leaders and companies have successfully democratized experimentation.
That means if an employee has a potent hypothesis, then they’re free to launch an experiment to validate it.
Culture of experimentation, what’s that?
Fortune 100 businesses credit experimentation as one of the reasons for their hypergrowth. These organizations prioritize data over the highest-paid person’s opinion (HiPPO).
For instance, did you know that Booking.com employees once vetoed their CEO’s recommended version of the company logo because it wasn’t A/B tested?
Google famously tested dozens of shades of blue (41, by most accounts) before settling on the link color for its search results.
That’s what a culture of experimentation looks like in action: data is the backbone of decision-making, and everyone is free to build and test their hypotheses.
By the end of this article, you’ll:
- Understand how to become an experimentation-driven organization
- Learn which data to look at to minimize the risk of failure
- Know the trade-offs you’ll need to make to prioritize better
So, let’s dive in.
How to become experimentation-driven?
Amazon, Google, Booking.com, Netflix, and Facebook are all flag bearers of experimentation-led growth. However, none of these companies was running tests from day one. They all built a culture of experimentation over time. Like them, you too can build this culture at your workplace.
Paras Chopra, the founder of VWO, explains —
There are four progressions or stages to creating a culture of experimentation:
Stage 1: Where you might not be testing at all
Stage 2: You occasionally test, like testing your newsletter subject line or creating two versions of headlines of your ad campaigns, etc.
Stage 3: Where you hire someone to own the process of experimentation.
Stage 4: Where every team in your organization collects evidence and tests their assumptions to impact their KPIs.
Now, on your road to becoming an experimentation-led organization, you first need to assess which stage you’re currently at.
Then assign a goal to these stages.
Say you’re at stage 1: your goal is to start running at least one experiment a week.
Now, chances are your teammates or your manager might discourage you from testing: experiments take effort, gathering test ideas takes time, and so on.
So, find the low-hanging fruit and go after it. Focus on demonstrating wins to everyone early on.
The objective here is not to move the needle on macro goals like revenue, conversion, etc.
Show value to others before expecting their support. Publish experimentation wins and failures across your organization; it will help warm your team to the idea of experimentation.
If you’re at stage 2, your goal should be to get a dedicated person, or adopt a process, to keep an experiment running at all times.
If you’re at stage 3, you need to spread the experimentation bug and make it contagious across other teams.
That’s how you build experimentation maturity at work. Now, let’s talk about the ingredients of running experiments in the first place.
The nuts and bolts of conversion optimization
Any experimentation idea, big or small, comes from digging into and observing the behavior of your users or customers as they interact with you, your website, or your platform.
To start looking in the right places, you begin by asking the right questions.
There are two methods to collect data to understand user or visitor behavior. If I talk in the context of a website, then quantitative research tells you “what’s” happening on your website, whereas qualitative research suggests “why” visitors behave in a certain way.
To gather quantitative insights, ask yourself: which pages or paths do your visitors and prospects use to convert on your website? What’s the conversion rate of these pages? What’s your bounce or exit rate?
Answer these “what” questions by looking into your Google Analytics reports, landing page heatmaps, and funnel reports.
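The arithmetic behind these “what” metrics is simple. Here’s a minimal sketch with hypothetical numbers; your real counts come from your analytics reports:

```python
# Minimal sketch: computing basic "what" metrics from raw analytics counts.
# All numbers below are hypothetical, for illustration only.

def conversion_rate(conversions, visitors):
    """Share of visitors who completed the goal action."""
    return conversions / visitors

def bounce_rate(single_page_sessions, total_sessions):
    """Share of sessions that left after viewing only one page."""
    return single_page_sessions / total_sessions

# Hypothetical pricing page: 4,000 visitors, 120 conversions,
# 2,200 single-page sessions out of 3,800 total sessions.
print(f"Conversion rate: {conversion_rate(120, 4000):.1%}")   # 3.0%
print(f"Bounce rate: {bounce_rate(2200, 3800):.1%}")
```

Tracking these per page, week over week, gives you the baseline any experiment has to beat.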
To gather qualitative insights, ask yourself: why aren’t your visitors converting on your pages? Why is your bounce rate high? Why are only, say, X% of visitors clicking on your call to action? Why do some visitors complete an action while others don’t?
Answer these “why” questions by watching your visitor recordings, running website surveys, and conducting visitor interviews.
A great example is a thank-you page survey that asks converted users the following questions:
What motivated you to buy today? What other websites did you visit before purchasing from us? Why did you buy this product over the others?
Asking such questions helps you understand what motivates your buyers to purchase or take action. Such insights are valuable for converting other prospects.
During my VWO days, I helped an auto insurance company improve its pricing page experience. It had two insurance plans: lite and pro.
The problem was that website visitors were not buying the pro plan. I observed that the plan comparison section had too much information related to each plan.
We validated this observation by looking into the visitor recording and survey responses.
We saw that after spending ten or more seconds on the pricing page, visitors were retreating to the easier option: either buying the cheaper plan or hitting the exit button.
When the website owner tested a page variant that listed only the differentiators between the two plans, instead of showing all the details, she saw a 13% increase in order value.
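Before declaring a winner like this, you’d normally check that the observed lift is statistically significant rather than noise. Here’s a minimal sketch for the common conversion-rate case (an order-value lift, being a continuous metric, would need a t-test instead); the visitor and conversion counts are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variant B's conversion rate differs from control A's.

    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: control converts 100/1000 (10%), variant 130/1000 (13%)
z, p = two_proportion_z_test(100, 1000, 130, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests the lift is real
```

Running the numbers before celebrating keeps a lucky streak from becoming a misleading “win” in your published results.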
When you start asking those what and why questions, you’ll see yourself observing more and more opportunities to run experiments.
How to prioritize your experimentation roadmap?
However, what do you do when you have too many such observations? You prioritize. You can use the ICE framework to rank your hypotheses; I use it for pretty much everything.
ICE stands for impact, confidence, and effort.
Score each of your hypotheses on these parameters on a scale of 1 to 3, where one is the lowest and three is the highest.
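In code, the ranking step might look like the sketch below. The hypotheses and scores are hypothetical, and combining the three scores as impact × confidence ÷ effort is one common convention (not prescribed above), chosen so that high-effort ideas rank lower:

```python
# Minimal ICE prioritization sketch. Hypotheses and their 1-3 scores
# are hypothetical examples, not data from the article.

hypotheses = [
    {"name": "Simplify plan comparison", "impact": 3, "confidence": 2, "effort": 1},
    {"name": "Rewrite homepage headline", "impact": 2, "confidence": 2, "effort": 1},
    {"name": "Redesign checkout flow",    "impact": 3, "confidence": 1, "effort": 3},
]

def ice_score(h):
    # Higher impact and confidence raise the score; higher effort lowers it.
    return h["impact"] * h["confidence"] / h["effort"]

for h in sorted(hypotheses, key=ice_score, reverse=True):
    print(f"{h['name']}: {ice_score(h):.1f}")
```

Whatever formula you pick, apply it consistently so the scores stay comparable across your backlog.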
Prioritization is about making trade-offs. If you’re just starting with experimentation, I’d recommend plucking your low-hanging fruit first: the optimization opportunities you see on your checkout, pricing, or conversion pages.
Take the first step towards experimentation-driven growth
Score some early wins and then take the momentum from there.
If you’re already testing, then build a cadence of testing continuously. Set a goal of keeping at least one experiment running at any given point in time.
These are the building blocks of becoming the Netflixes, Booking.coms, and Amazons of the world. They all started small and endured hundreds of experimentation failures for the wins that shaped their growth stories.