One accurate measurement is worth a thousand expert opinions

– Admiral Grace Hopper

Since Ogury's inception in 2014, we haven't been short of great, innovative ideas from both our brilliant people and astute clients. The challenge comes when deciding which ideas to pursue, even after categorizing them into short-, medium-, and long-term options. We needed a way to easily test and compare different ideas and get reliable results. That's when we turned to online controlled experiments, also known as A/B testing.

What is A/B testing?

A/B tests are sometimes called online controlled experiments, field experiments or split tests. They are heavily used at companies like Airbnb, Amazon, eBay, Facebook, Google, Microsoft, Netflix, Twitter and Uber, which run thousands to tens of thousands of experiments every year to evaluate and validate different ideas, hypotheses and opinions. The basic idea is to randomly separate users into groups, or variations, each exposed to a different variable. You then run the experiment for a set period of time, analyze key metrics for each group, and interpret the results.
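That random, stable split of users into groups can be sketched in a few lines. This is an illustrative pattern, not Ogury's implementation; the function name and variant labels are our own. Hashing the user id together with the experiment name keeps each user's assignment stable across sessions while keeping assignments independent between experiments:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    The hash of (experiment, user_id) acts as a stable random draw:
    the same user always lands in the same group for a given
    experiment, but different experiments split users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# A hypothetical experiment name, echoing the example in the next section:
print(assign_variant("user-42", "lemon-zest-cupcakes"))
```

Deterministic hashing avoids having to store per-user assignments, which matters when traffic is large.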

Cupcakes, you say?

Let’s take a look at a real-life example. Imagine you want to assess the effect of adding lemon zest to cupcakes. You bake two batches: one with lemon zest and one without (plain). You hand the plain cupcakes to one group of friends, while another group gets the new lemon zest version. To avoid bias, each person is assigned to a group at random. After they’ve eaten the cupcakes, you gather feedback. If the lemon zest version wins the most votes, it becomes your go-to cupcake recipe.

In the AdTech industry, it’s not always simple to gather user feedback. We can run surveys, but the response rate is often too low to yield statistically significant results. A/B testing is a great solution, as it gives us and advertisers a scientific method to optimize ad spend instead of relying on guesswork. A big bonus in AdTech is the huge amount of traffic that online advertising generates: with such a large sample size readily available, reaching statistical significance is rarely a problem for us.
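To make "statistically significant" concrete, here is a rough sketch (not Ogury's tooling) of how many users per group a standard two-proportion z-test needs in order to detect a given relative lift in a conversion-style rate. The base rate and lift below are illustrative numbers, and the z-values are the usual normal approximations for 5% significance and 80% power:

```python
import math

def sample_size_per_group(p_base: float, lift: float,
                          z_alpha: float = 1.96,  # two-sided alpha = 0.05
                          z_beta: float = 0.84):  # power = 0.80
    """Approximate users needed per group to detect a relative lift
    in a rate, using the two-proportion z-test sample-size formula."""
    p_new = p_base * (1 + lift)
    pooled = (p_base + p_new) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p_base * (1 - p_base)
                                      + p_new * (1 - p_new))) ** 2
    return math.ceil(numerator / (p_new - p_base) ** 2)

# Detecting a 16% relative lift on a 2% base rate (illustrative numbers):
print(sample_size_per_group(0.02, 0.16))
```

The required sample grows quickly as the base rate or the lift shrinks, which is why high-traffic industries like AdTech are well suited to experimentation.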

Ogury’s A/B test ice-breaker

Our first step was to build an in-house experimentation platform that met our experiment and analysis requirements. The platform comprised two major parts: a real-time delivery decision-making system, which split inventory traffic across different strategies, and an analysis layer with dashboards, which enabled us to compare results across those strategies.

Once the team produced the first experiment design, we ran our first test in our ad delivery process.

We tested several ways of choosing the best ads to display to customers. The winning approach used Ogury’s proprietary historical mobile journey data to predict user engagement. This first A/B test showed a 16% lift in the accomplished rate, the rate at which customers engage with an ad after seeing it. After this successful ice-breaker, we were able to scale both the number of experiments we ran and the maturity of our platform.
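How would you check that a lift like that is real rather than noise? A common approach, sketched here with hypothetical counts rather than Ogury's actual data, is a two-proportion z-test on the engagement rates of the two groups:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion-style rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: control engages at 2.0%, variant at 2.32% (a 16% lift).
z = two_proportion_z(conv_a=1000, n_a=50000, conv_b=1160, n_b=50000)
lift = (1160 / 50000) / (1000 / 50000) - 1
print(f"lift = {lift:.0%}, z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

With sample sizes this large, even modest relative lifts clear the significance threshold comfortably.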

The experimentation platform dashboard displays the results from the A/B tests.

We’re doing a walk-run

As a rough rule of thumb, A/B testing maturity stages are defined by the number of experiments an organization runs:

  • Crawl phase: approximately one test per month (~10/year)
  • Walk phase: approximately one test per week (~50/year)
  • Run phase: approximately one test per day (~250/year)
  • Fly phase: thousands per year!

Today at Ogury, we are between the walk and run phases. The culture of A/B testing has grown so much that it’s now a fixture of many product engineering meetings. Someone will always ask, “Did you run an experiment?”, or simply respond to an idea with, “Let’s run an experiment on that.” It’s almost second nature now. This reinforces our data-driven culture at Ogury, as part of which we make sure results are highlighted to the entire company.

Chen Dai,
Senior Software Engineer