# How Calypso’s A/B Test Module Works

Automattic recently open sourced Calypso, a JavaScript and REST-API powered interface that runs WordPress.com. I was fortunate to get to work on a few pieces of it, mainly its Analytics and A/B Test modules. In this post I’ll walk through how the A/B test module works because it might give you a few things to consider if you find yourself rolling your own A/B testing solution like we’ve done at Automattic.

## Bucketing and Reporting

When it comes to A/B testing, there are two tools you need: one that buckets users and one that reports on the results of your tests. The bucketing tool lets you say “Show 50% of users a green button, show the other 50% the red button”. The analysis tool then lets you measure the impact of the green vs red button on other actions like signing up for an account, publishing a post, upgrading, etc.

The Calypso A/B Test module that I’ll be discussing in this post is our bucketing tool. We also have a separate internal tool for analyzing the results of the A/B tests, but that’s a topic for another post.

## A/B Testing in Calypso

The A/B Test module’s README provides detailed instructions for how it works. You can also check out the module itself if you’re interested. I’ll give an overview here and elaborate on some of the decisions that went into it.

We have a file called active-tests.js that contains configuration information for all of the active tests we’re running in Calypso. For example, here’s one of the tests:

```js
businessPluginsNudge: {
	datestamp: '20151119',
	variations: { drake: 50, nudge: 50 },
	defaultVariation: 'drake'
}
```

This says that we have a test called businessPluginsNudge, started on November 19th, with two variations, drake and nudge, each of which is shown 50% of the time. It also says that users who are ineligible to participate in the test should be shown the drake variation (more on what ineligible means below).

To assign a user to a test, there’s a function the A/B test module exports called abtest. It’s used like so:

```js
// Here, the user is assigned one of the variations for this test
// so variation will be either drake or nudge
var variation = abtest( 'businessPluginsNudge' );

// We can then vary what the user sees based on the variation
// he or she was assigned to
if ( variation === 'drake' ) {
	// Do something
} else {
	// Do something else
}
```

The abtest function assigns the user to a variation and returns that variation. For this particular test, if the user is eligible then 50% of the time the function will return drake and the other 50% of the time it will return nudge. We can then use the variation to determine what the user sees.

The abtest function also sends the test name and the user’s variation back to us via an API endpoint so that we can record it and later use it to measure the impact on other events using our internal reporting tool.
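
To make that concrete, here's a minimal sketch of what a bucketing function like this might look like. This is not Calypso's actual implementation (see the module itself for that); the storage key format and the trackAssignment helper are made up for illustration:

```js
// A minimal sketch of weighted bucketing; not Calypso's actual implementation.
// `activeTests` mirrors the shape of active-tests.js shown above.
var activeTests = {
	businessPluginsNudge: {
		datestamp: '20151119',
		variations: { drake: 50, nudge: 50 },
		defaultVariation: 'drake'
	}
};

// Hypothetical stand-in for the API call that records the assignment
function trackAssignment( testName, variation ) {
	console.log( 'assigned', testName, variation );
}

function abtest( testName ) {
	var test = activeTests[ testName ],
		key = testName + '_' + test.datestamp, // hypothetical storage key
		saved = localStorage.getItem( key );

	// Always show a returning user the variation they originally saw
	if ( saved ) {
		return saved;
	}

	// Pick a variation at random, weighted by the percentages in the config
	var names = Object.keys( test.variations ),
		total = names.reduce( function( sum, name ) {
			return sum + test.variations[ name ];
		}, 0 ),
		roll = Math.random() * total,
		variation = names[ names.length - 1 ];

	for ( var i = 0, sum = 0; i < names.length; i++ ) {
		sum += test.variations[ names[ i ] ];
		if ( roll < sum ) {
			variation = names[ i ];
			break;
		}
	}

	localStorage.setItem( key, variation );
	trackAssignment( testName, variation );
	return variation;
}
```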

## Eligibility

Consider an A/B test that tests the wording of a button. If the new wording isn't properly translated and a large percentage of your users don't speak English, it can throw off the results of the test. For example, if the new wording underperforms, was it because the new wording was truly inferior, or was it because a lot of non-English users saw the English wording and simply couldn't read it?

To account for that and similar issues, we have the idea of eligibility. In certain situations we don't want users to count towards the test. We need to show them something, of course, but we don't want to track it. That's what the defaultVariation property in the test configuration is for. Ineligible users are shown that variation, but we don't send the information about the test and the user's variation back to our servers. By default in the A/B test module, only English-language users are eligible for the tests, so non-English users will always be shown the variation specified by defaultVariation.

We also only want to include users that have local storage enabled because that’s where we save the user’s variation. We save the variation locally because we always want to show the user the same variation that they originally saw. Saving it to local storage keeps things fast. We could fetch it from the server, but we don’t want to slow down the UI while we wait for the response so we read it from local storage instead. A side effect of this is that we don’t handle situations where users change browsers, switch devices, or clear their local storage. That’s only a fraction of users though so it doesn’t impact the results very much.

One last point on eligibility: imagine you're testing the wording on a particular button. You run one test where you show "Upgrade Now" and another "Upgrade Today" (this is a silly test, but just to give you an idea). Let's say "Upgrade Today" wins and you make that the default. Then you run another test comparing "Upgrade Today" to "Turbocharge Your Site". If a user participated in the original test and saw the "Upgrade Now" variation, it could impact their behavior on this new test. To account for that, if a user has participated in a previous test with the same name as a new test, then he or she won't be eligible for the new test. We only want to include users who are participating in the test for the first time because it results in numbers that better represent the impact of each variation.
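
Pulling those rules together, the eligibility check might look roughly like the sketch below. The helpers and details here are stand-ins, not Calypso's actual API: for instance, the real module checks the WordPress.com user's locale rather than navigator.language, and it stores variations differently.

```js
// A rough sketch of the eligibility rules described above; the details are
// hypothetical, not Calypso's actual implementation.
function isEligible( testName, test ) {
	// Only English-language users are eligible by default, since the test's
	// variations may not be translated (navigator.language is a stand-in here)
	if ( ( navigator.language || '' ).indexOf( 'en' ) !== 0 ) {
		return false;
	}

	// We need localStorage to remember which variation the user saw
	try {
		localStorage.setItem( 'abtest_check', '1' );
		localStorage.removeItem( 'abtest_check' );
	} catch ( e ) {
		return false;
	}

	// Exclude users who participated in an earlier test with the same name,
	// i.e. a saved variation under the same name but an older datestamp
	for ( var i = 0; i < localStorage.length; i++ ) {
		var key = localStorage.key( i );
		if ( key.indexOf( testName + '_' ) === 0 && key !== testName + '_' + test.datestamp ) {
			return false;
		}
	}

	return true;
}
```

An ineligible user would then simply get the test's defaultVariation back from abtest, with nothing sent to the tracking endpoint.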

## Multiple active tests

In A/B testing parlance, there's a concept known as multivariate testing. The idea is that if you have a test running on one page (green button vs. red button) and another test running on another page ("Upgrade Now" vs. "Upgrade Today"), the combination of the variations from those tests might be important. For example, what if the green button plus "Upgrade Today" leads to a higher conversion rate than the other combinations? That's a real possibility, but we generally don't worry about it in order to keep the analysis simpler.

## Dealing with A/B tests that span multiple pages

There’s one final situation I want to note:

Imagine you have two pricing pages, one for your Silver Plan and one for your Gold Plan. On the Silver Plan's pricing page, you assign the user a variation and use that to adjust the page:

```js
var variation = abtest( 'silverPlan' );
```

So far so good. Now imagine that you want to adjust the payment form on a different page if the user saw a particular variation for the Silver Plan.

If you call abtest( 'silverPlan' ) to grab the variation on the payment page, it will also assign the user to a variation for that test. Many of the users viewing the payment page, though, will be purchasing the Gold Plan and may never have seen the Silver Plan's pricing page at all. Assigning those users to a variation would distort the results of the test. To account for that, the A/B test module also exports a getABTestVariation function that just returns a user's variation without assigning him to one:

```js
var variation = getABTestVariation( 'silverPlan' );
```

This doesn’t come up in simple tests, but for complex tests that affect multiple parts of the user’s experience, it’s essential to be able to determine if a user is part of a variation without assigning him or her to one.

## Wrapping Up

As you can see, there are a lot of subtle issues that can impact the results of your tests. Hopefully this gives you an idea of a few of the things to watch out for if you do roll your own A/B testing tools.

If you have any questions, suggestions on how to improve it, or just want to chat about A/B testing tools, don’t hesitate to drop me a note.

# Visualizing the Sampling Distribution of a Proportion with R

In yesterday's post, we showed that a binomial distribution can be approximated by a normal distribution and walked through some of the math behind it.

Today we’ll take it a step further, showing how those results can help us understand the distribution of a sample proportion.

Consider the following example:

Out of the last 250 visitors to your website, 40 signed up for an account.

The conversion rate for that group is 16%, but it’s possible (and likely!) that the true conversion rate differs from this. Statistics can help us determine a range (a confidence interval) for what the true conversion rate actually is.

Recall that in the last post we said that a binomial distribution can be approximated by a normal distribution with mean and standard deviation given by:

$\mu = np$

$\sigma = \sqrt{npq}$

For a proportion, we want to figure out the mean and standard deviation on a per-trial basis so we divide each formula by n, the number of trials:

$\mu = \frac{np}{n} = p$

$\sigma = \frac{\sqrt{npq}}{n} = \sqrt{\frac{npq}{n^2}} = \sqrt{\frac{pq}{n}}$
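
Plugging in the example above ($n = 250$, $p \approx 40/250 = 0.16$, $q = 0.84$) gives:

$\sigma = \sqrt{\frac{0.16 \times 0.84}{250}} \approx 0.023$

So if the true conversion rate were 16%, roughly 95% of samples of this size would land within about 1.96 standard deviations of it, or about ±4.5 percentage points, which is the basis for the confidence interval mentioned above.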

With the mean and standard deviation of the sample proportion in hand, we can plot the distribution for this example:

As you can see, the most likely conversion rate is 16% (which is no surprise), but the true conversion rate can fall anywhere under that curve, becoming less and less likely as you move farther from the center.

Where it gets really interesting is when you want to compare multiple proportions.

Let’s say we’re running an A/B test and the original had 40 conversions out of 250 like the example above and the experimental version had 60 conversions out of 270 participants. We can plot the distribution of both sample proportions with this code:

```r
setClass(
	Class = "Distribution",
	representation = representation(
		name = "character",
		participants = "numeric",
		conversions = "numeric",
		sample_proportion = "numeric",
		se = "numeric",
		color = "character",
		x = "vector",
		y = "vector"
	)
)

# We rewrite the initialize method for Distribution objects so that we can
# set the x and y values which are used throughout the plotting process
setMethod(
	f = "initialize",
	signature = "Distribution",
	definition = function( .Object, name, participants, conversions, color ) {
		.Object@name = name
		.Object@sample_proportion = conversions / participants
		.Object@se = sqrt( ( .Object@sample_proportion * ( 1 - .Object@sample_proportion ) ) / participants )
		.Object@color = color
		.Object@x = seq( -4, 4, length = 100 ) * .Object@se + .Object@sample_proportion
		.Object@y = dnorm( .Object@x, .Object@sample_proportion, .Object@se )

		return ( .Object )
	}
)

# Given a list of distributions, this returns a list of the x and y axis range
get_axis_ranges = function( distributions ) {
	x_all = vector()
	y_all = vector()

	for ( distribution in distributions ) {
		x_all = c( x_all, distribution@x )
		y_all = c( y_all, distribution@y )
	}

	xlim = c( min( x_all ), max( x_all ) )
	ylim = c( min( y_all ), max( y_all ) )

	# Note that by forming a list of the vectors, the vectors get converted to lists
	# which we then have to convert back to vectors in order to use them for plotting
	return ( list( xlim, ylim ) )
}

get_x_axis_values = function( x_range ) {
	by = 0.01
	min_x = floor( min( x_range ) / by ) * by
	max_x = ceiling( max( x_range ) / by ) * by
	return ( seq( min_x, max_x, by = by ) )
}

# Define the distributions that we want to plot
distributions = list(
	new( Class = "Distribution", name = "original", participants = 250, conversions = 40, color = "#00cc00" ),
	new( Class = "Distribution", name = "variation", participants = 270, conversions = 60, color = "blue" )
)

# Determine the range to use for each axis
axis_range = get_axis_ranges( distributions )
xlim = axis_range[[ 1 ]]
ylim = axis_range[[ 2 ]]

# Create the plot
plot( NULL, NULL, type = "n", xlim = xlim, ylim = ylim, xlab = "Conversion Rate", ylab = "", main = "", axes = FALSE )

# Render each of the curves
line_width = 3
for ( distribution in distributions ) {
	polygon( distribution@x, distribution@y, col = adjustcolor( distribution@color, alpha.f = 0.3 ), border = NA )
	lines( distribution@x, distribution@y, col = adjustcolor( distribution@color, alpha.f = 1 ), lwd = line_width )

	# Draw a line down the center of the curve
	ci_center_y = dnorm( distribution@sample_proportion, distribution@sample_proportion, distribution@se )
	coords = xy.coords( c( distribution@sample_proportion, distribution@sample_proportion ), c( 0, ci_center_y ) )
	lines( coords, col = adjustcolor( distribution@color, alpha.f = 0.4 ), lwd = 1 )
}

# Render the x axis
axis( side = 1, at = get_x_axis_values( xlim ), pos = 0, col = "#777777", col.axis = "#777777", lwd = line_width )

# Finally, render a legend
legend_text = vector()
legend_colors = vector()
for ( distribution in distributions ) {
	legend_text = c( legend_text, distribution@name )
	legend_colors = c( legend_colors, distribution@color )
}
legend( 'right', legend_text, lty = 1, lwd = line_width, col = legend_colors, bty = 'n' )
```

Here’s the result:

What can we determine from this? Is the experimental version better than the original? What if the true proportion for the original is 20% (towards the upper end of its distribution) and the true proportion for the experimental version is 16% (towards the lower end of its distribution)?

We’ll save the answer for a future post :)

# Visualizing How a Normal Distribution Approximates a Binomial Distribution with R

My handy Elementary Statistics textbook, which I’m using to get smart on the math behind A/B testing, states the following:

Normal Distribution as Approximation to Binomial Distribution

If $np \geq 5$ and $nq \geq 5$, then the binomial random variable has a probability distribution that can be approximated by a normal distribution with the mean and standard deviation given as:

$\mu = np$

$\sigma = \sqrt{npq}$

In easier to understand terms, take the following example:

Each visitor to your website has a 30% chance of signing up for an account. Over the next 250 visitors, how many can you expect to sign up for an account?

The first formula lets us figure out the mean by simply multiplying the number of visitors by the probability of a successful conversion:

$\mu = np = 250 \times 0.30 = 75$

Simple enough and fairly easy to understand.

The second formula, the one to figure out the standard deviation, is less intuitive:

$\sigma = \sqrt{npq} = \sqrt{250 \times 0.30 \times (1 - 0.30)} \approx 7.25$

Why are we taking the square root of the product of these three values? The textbook doesn’t explain, noting that “the formal justification that allows us to use the normal distribution as an approximation to the binomial distribution results from more advanced mathematics”.
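
For what it's worth, the standard deviation formula itself has a short derivation; the "more advanced mathematics" the textbook alludes to is the normal approximation (the Central Limit Theorem). Each visitor is an independent Bernoulli trial with variance $pq$, and the variances of independent trials add, so:

$\sigma^2 = npq \implies \sigma = \sqrt{npq}$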

Because this standard deviation formula plays a big role in calculating the confidence intervals for sample proportions, I decided to simulate the scenario above to prove to myself that the standard deviation formula is accurate.

The R script below simulates 250 visitors coming to a website, each with a 30% chance of signing up. After each group of 250 visitors, we track how many of them wound up converting. After all of the runs (the default is 1,000; the higher the number, the more accurate the distribution) we plot the probability distribution of the results in blue, along with a red curve showing what we'd expect the distribution to look like if the standard deviation formula above is correct.

```r
experiments = 1000
visitors = 250
conversion_rate = 0.3
expected_conversions = visitors * conversion_rate
expected_sd = sqrt( visitors * conversion_rate * ( 1 - conversion_rate ) )
sd_for_axis_range = 4.5
axis_divisions = 5

results = vector()
for ( experiment in 1:experiments ) {
	conversions = 0
	for ( visitor in 1:visitors ) {
		if ( runif( 1 ) <= conversion_rate ) {
			conversions = conversions + 1
		}
	}
	results = c( results, conversions )
}

par( oma = c( 0, 2, 0, 0 ) )

axis_min = floor( ( expected_conversions - sd_for_axis_range * expected_sd ) / axis_divisions ) * axis_divisions
axis_max = ceiling( ( expected_conversions + sd_for_axis_range * expected_sd ) / axis_divisions ) * axis_divisions

hist( results, axes = FALSE, breaks = seq( axis_min, axis_max, by = 1 ), ylab = 'Probability', xlab = 'Conversions', freq = FALSE, col = '#4B85ED', main = 'Distribution of Results' )
axis( side = 1, at = seq( axis_min, axis_max, by = axis_divisions ), pos = 0, col = "#666666", col.axis = "#666666", lwd = 1, tck = -0.015 )
axis( side = 2, col = "#666666", col.axis = "#666666", lwd = 1, tck = -0.015 )

curve( dnorm( x, mean = conversion_rate * visitors, sd = expected_sd ), add = TRUE, col = "red", lwd = 4 )
```

The distribution of results from this experiment paints a telling picture:

Not only is the mean what we expect (around 75), but the standard deviation formula (which said it would be 7.25) accurately predicts the standard deviation observed in the experiment (7.25). Go figure :)

As we'll see, we can use the fact that a normal distribution approximates a binomial distribution to figure out the distribution of a sample proportion, which we can then compare to other sample proportion distributions to draw conclusions about whether they differ and by how much (i.e., how to analyze the results of an A/B test).

# Rendering Two Normal Distribution Curves on a Single Plot with R

As a follow-up to my last post about how to render a normal distribution curve with R, here’s how you can render two on the same plot:

```r
setClass(
	Class = "Distribution",
	representation = representation(
		name = "character",
		mean = "numeric",
		sd = "numeric",
		color = "character",
		x = "vector",
		y = "vector"
	)
)

# We rewrite the initialize method for Distribution objects so that we can
# set the x and y values which are used throughout the plotting process
setMethod(
	f = "initialize",
	signature = "Distribution",
	definition = function( .Object, name, mean, sd, color ) {
		.Object@name = name
		.Object@mean = mean
		.Object@sd = sd
		.Object@color = color
		.Object@x = seq( -4, 4, length = 1000 ) * sd + mean
		.Object@y = dnorm( .Object@x, mean, sd )

		return ( .Object )
	}
)

# Given a list of distributions, this returns a list of the x and y axis range
get_axis_ranges = function( distributions ) {
	x_all = vector()
	y_all = vector()

	for ( distribution in distributions ) {
		x_all = c( x_all, distribution@x )
		y_all = c( y_all, distribution@y )
	}

	xlim = c( min( x_all ), max( x_all ) )
	ylim = c( min( y_all ), max( y_all ) )

	# Note that by forming a list of the vectors, the vectors get converted to lists
	# which we then have to convert back to vectors in order to use them for plotting
	return ( list( xlim, ylim ) )
}

# Define the distributions that we want to plot
distributions = list(
	new( Class = "Distribution", name = "women", mean = 63.6, sd = 2.5, color = "pink" ),
	new( Class = "Distribution", name = "men", mean = 69, sd = 2.8, color = "blue" )
)

# Determine the range to use for each axis
axis_range = get_axis_ranges( distributions )
xlim = unlist( axis_range[ 1 ] )
ylim = unlist( axis_range[ 2 ] )

# Create the plot
plot( NULL, NULL, type = "n", xlim = xlim, ylim = ylim, xlab = "Height (inches)", ylab = "", main = "Distribution of Heights", axes = FALSE )

# Render each of the curves
line_width = 3
for ( distribution in distributions ) {
	lines( distribution@x, distribution@y, col = distribution@color, lwd = line_width )
}

# Render the x axis
axis_bounds = seq( min( xlim ), max( xlim ) )
axis( side = 1, at = axis_bounds, pos = 0, col = "#aaaaaa", col.axis = "#444444" )

# Finally, render a legend
legend_text = vector()
legend_colors = vector()
for ( distribution in distributions ) {
	legend_text = c( legend_text, distribution@name )
	legend_colors = c( legend_colors, distribution@color )
}
legend( 'right', legend_text, lty = 1, lwd = line_width, col = legend_colors, bty = 'n' )
```