An impractical guide to doubling your conversion rates

Let’s imagine a standard web app with three main steps:

  1. Viewed Homepage
  2. Signed Up
  3. Purchased

Furthermore, let’s say 20% of the visitors to the homepage sign up, and 5% of the users who sign up complete a purchase, giving you a 1% overall conversion rate.

Your boss comes to you and says “You need to double the overall conversion rate from 1% to 2%. Do whatever it takes.”

As a thought experiment, consider at a high level what the numbers would have to be for this to work out. Your first thought might be to double the homepage conversion rate from 20% to 40% (giving you a 40% * 5% = 2% overall conversion rate) or to double the purchase conversion rate from 5% to 10% (20% * 10% = 2%). You could also increase both by some smaller amount to get a similar result: 30% * 6.67% = 2%.
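
To make the arithmetic concrete, here’s a quick Ruby sketch (the method name is mine, just for illustration):

    # Overall conversion rate is the product of the per-step rates.
    def overall_rate(homepage_to_signup, signup_to_purchase)
      homepage_to_signup * signup_to_purchase
    end

    puts overall_rate(0.20, 0.05)   # baseline: 1% overall
    puts overall_rate(0.40, 0.05)   # double step one: 2%
    puts overall_rate(0.20, 0.10)   # double step two: 2%
    puts overall_rate(0.30, 0.0667) # raise both a bit: ~2%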

In the real world, this winds up being really hard. If you increase the percentage of people signing up, it’s probably going to decrease the percentage of people who then purchase. Why? Consider what would happen to a site that has a pricing section on its homepage and then removes it completely. More people will sign up, but once they do and see the pricing, many of those extra sign-ups (people who thought your service was free) will leave (because they realized after signing up that it isn’t). So if you somehow increase the sign-up rate to 40%, your sign-up-to-purchase conversion rate might drop in half from 5% to, say, 2.5%, giving you that same 40% * 2.5% = 1% overall conversion rate. If you’re lucky, some of those extra users will convert anyway and you’ll wind up with something like 40% * 2.8% = 1.12% overall. Closer to that 2%, but still a long way away.

The trick to getting to 2% is to improve the quality of the traffic at each step.

Consider that homepage conversion rate of 20%. How could you increase that without making any changes to your website?

That 20% conversion rate is actually a composite of different segments. For example, imagine your website has three traffic sources:

  1. Direct traffic (50% of your traffic) converts at 30%
  2. Search traffic (40% of your traffic) converts at 10%
  3. Social traffic (10% of your traffic) converts at 10%

50% * 30% + 40% * 10% + 10% * 10% = 20% homepage conversion rate.

What would happen to your conversion rate if you delisted your site from search engines completely? The search traffic disappears and the remaining shares are renormalized over what’s left (50% and 10% of the old traffic become 50%/60% and 10%/60% of the new traffic):

  1. Direct traffic (83.3% of your traffic) converts at 30%
  2. Social traffic (16.7% of your traffic) converts at 10%

83.3% * 30% + 16.7% * 10% = 26.7% homepage conversion rate.

Without making any changes to your site itself, you increased the homepage conversion rate by 26.7%/20% – 1 = +34%. Because the quality of your traffic has improved, the sign-up-to-purchase conversion rate will likely increase as well. Instead of 5% upgrading, it might wind up being 8% (+60%). What’s your overall conversion rate now? 26.7% * 8% = 2.14%!
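
If it’s easier to follow in code, here’s the whole calculation, including the renormalization step and the assumed 8% purchase rate from above:

    # Each traffic source is [share_of_traffic, homepage_conversion_rate].
    before = [
      [0.50, 0.30], # direct
      [0.40, 0.10], # search
      [0.10, 0.10], # social
    ]

    # Weighted average conversion rate across the traffic mix.
    def blended_rate(segments)
      segments.sum { |share, rate| share * rate }
    end

    puts blended_rate(before) # 0.20 -> 20%

    # Delist from search: drop that segment, then renormalize the
    # remaining shares so they sum to 1 again.
    remaining = [[0.50, 0.30], [0.10, 0.10]] # direct and social only
    total = remaining.sum { |share, _| share } # 0.60
    after = remaining.map { |share, rate| [share / total, rate] }

    puts blended_rate(after)        # ~0.267 -> 26.7%
    puts blended_rate(after) * 0.08 # ~0.021 -> just over 2% overall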

So in order to double your conversion rate in this made-up example, all you had to do was delist your site from search engines. Mission accomplished 🍻. As an added bonus, many of your other metrics will improve as well. Because your site’s traffic is of a higher quality, your retention rates will go up, your churn will go down, your average revenue per user will go up, your refunds will go down, and more.

You could also pull similar hacks: make your website non-mobile-friendly so Google decreases its mobile rankings, which (assuming mobile users convert more poorly than desktop users) would increase your conversion rates. You could block certain poorly converting browsers from visiting your site (I’m looking at you, IE). You could block users from poorly converting countries, or even non-English-speaking users if your site isn’t translated.

Of course you should never do any of these things.

In the process of improving your metrics by hacking off a large portion of your traffic, you’ll also wind up decreasing the number of people who make it all the way through the funnel (purchasing in this case).

It’s tempting to want to focus on a single metric like conversion rate, but it’s also important to remember that individual metrics can almost always be artificially boosted:

  • You can increase your conversion rates by preventing low converting traffic from reaching your site (at the expense of revenue)
  • You can increase your revenue by increasing paid advertising (at the expense of profit)
  • You can increase your profit by laying off a bunch of employees (possibly at the expense of your long term growth and profitability).

Instead, try to identify the set of metrics that are most important to your company and pay attention to the group as a whole. More often than not, when one metric goes up, another will go down, but with solid execution and a little luck, the overall impact of your changes will be a net win for your website or company.

Anyone interested in collaborating on an A/B test simulator?

I recently started working on a small side project to build an A/B test simulator. My goal is to measure the long-term conversion rate impact of different testing approaches and to answer questions like:

  • What impact do various significance levels (90%, 95%, 99%) have?
  • Is it better to run more, shorter tests with less statistical significance or fewer, longer tests with more statistical significance?
  • What’s the best strategy if your site does not receive a lot of traffic?

I have a preliminary script, but there’s still a lot of work to be done and I’d love to collaborate with somebody on it going forward. I think the results will run contrary to a lot of the current A/B test best practices and they could have a big impact on how people run tests in the future.
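
To give a flavor of what I have in mind, here’s a rough sketch of the core loop (this isn’t the preliminary script itself; the sample size, conversion rates, and significance threshold are all made-up inputs):

    # One simulated A/B test: draw conversions for A and B, then run a
    # one-sided two-proportion z-test at the given critical value.
    def simulate_test(n, p_a, p_b, z_crit)
      conv_a = n.times.count { rand < p_a }
      conv_b = n.times.count { rand < p_b }
      pooled = (conv_a + conv_b).to_f / (2 * n)
      se = Math.sqrt(pooled * (1 - pooled) * 2.0 / n)
      return :no_winner if se.zero?
      z = (conv_b - conv_a).to_f / n / se
      z > z_crit ? :b_wins : :no_winner
    end

    # How often does a test this size detect a true 5.0% -> 5.2% lift
    # at roughly 95% one-sided significance (z > 1.645)?
    trials = 1_000
    wins = trials.times.count do
      simulate_test(10_000, 0.050, 0.052, 1.645) == :b_wins
    end
    puts "B declared the winner in #{100.0 * wins / trials}% of simulations"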

If you have experience running A/B tests, a stats background, or familiarity with Ruby, those skills will come in handy but they’re not required.

Drop me a note if you’re interested in working together on it.

The Hidden Complexity of Measuring Return on Ad Spend

I recently switched from WordPress.com’s data team to its marketing team where I’ll be using my dev and data-science background to help with our marketing efforts. I’m really excited by it because marketing is an area where I don’t feel particularly strong and now I’ll have a lot of opportunities to learn the ins and outs. As I pick up new things that are generally applicable, I’ll try to share them here on this blog.

This week I’ve been learning about how we calculate the profitability of our online advertising campaigns. I’ve done some work on this previously, but I’m really getting a chance to dive into it now. At first it might seem like a simple calculation, but this rabbit hole goes quite deep.

Let’s start with a very simple example: you spend $100 on AdWords; a handful of people click on your ads and it leads to 2 purchases totaling $120. Your ROI is $120/$100 – 1 = +20%. There’s also a metric known as Return on Ad Spend (ROAS), which is just revenue divided by cost: in this case, $120/$100 = 120%.
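
Spelled out in a few lines of Ruby:

    spend   = 100.0
    revenue = 120.0

    roi  = revenue / spend - 1 # return on investment: +0.20 (+20%)
    roas = revenue / spend     # return on ad spend:    1.20 (120%)

    puts "ROI: #{(roi * 100).round}%, ROAS: #{(roas * 100).round}%"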

Nothing too complicated here… or is there?

How long do people who click on the ad have to make a purchase for that purchase to be attributed to the ad? If they clicked on your ad on May 1 and purchased the same day, you probably want to attribute it to the ad. But what if they don’t purchase until August 1? Did the ad on May 1 cause the person to make a purchase 3 months later? Probably not, but maybe.

Now, what if a person sees your ad, doesn’t click it, but winds up finding their way to your site and making a purchase? Should you attribute the purchase to the ad? Facebook ads, for example, have a 1-day conversion window for views by default, so if a person sees a Facebook ad for your service, then makes a purchase, that purchase gets attributed to Facebook even if he or she didn’t click the ad. Should it count? Maybe.

What if the person clicks on a Facebook ad, then clicks on your AdWords ad, then makes a purchase? Does Facebook get credit or does AdWords? Or do you do fractional attribution, where Facebook gets, say, 20% of the credit and AdWords 80%? Both services will count that conversion and there’s no way to configure them not to. If you then add up your Facebook ad revenue and AdWords ad revenue, you’ll be double counting the conversion.

One option is to build an internal tool that keeps track of which ads a user clicks on prior to signing up and making a purchase. When they click on an ad, you store details about the ad in a cookie. Then when they sign up for an account, you tag their account with those details. When they make a purchase, you know which ads they clicked and in which order. Even then, though, you won’t be able to tell who saw your ads but did not click on one.
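
As an illustration, here’s roughly how such a tool might assign credit once it knows the ordered clicks (the field names and the 20/80 split are made up):

    # Sketch of assigning credit for a $120 purchase given the ordered
    # ad clicks stored against a user's account.
    clicks = [
      { network: 'facebook', clicked_at: '2016-05-01' },
      { network: 'adwords',  clicked_at: '2016-05-03' },
    ]
    purchase = 120.0

    # Last-touch attribution: the final click gets all the credit.
    last_touch = { clicks.last[:network] => purchase }

    # Fractional attribution: first touch 20%, last touch 80%.
    fractional = {
      clicks.first[:network] => purchase * 0.2, # facebook: $24
      clicks.last[:network]  => purchase * 0.8, # adwords:  $96
    }

    puts last_touch  # adwords gets the full $120
    puts fractional  # facebook $24, adwords $96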

What if a person clicks on your AdWords ad on their laptop, signs up (at which point you can tag the user in your internal database so you know they came from AdWords), then jumps on their tablet and makes a purchase? Because you tagged the user, your internal tool can attribute the purchase to AdWords, but AdWords won’t because they didn’t start and complete the purchase in the same browser. Unless they’re signed into a Google account, that is, in which case AdWords can figure out that they’re the same person.

Similarly, if they click on your ad on their laptop, then sign up and make a purchase on their tablet, you won’t be able to tag their account because the tablet can’t access the laptop’s cookies. Here again, AdWords might still count it as a conversion if the user is signed into their Google account.

If you run a service with recurring revenue, do you try to estimate customers’ lifetime value (LTV) when calculating the profitability of the ads? You should, but that also introduces complexity. The type of people you advertise to might have different purchasing behavior than your other users. They might spend more on average – or less. They might renew at a higher rate – or lower.
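
One common back-of-the-envelope estimate (an assumption on my part, not a description of what we actually do) is revenue per billing period divided by churn per period:

    # Crude subscription LTV: expected lifetime is 1 / churn periods,
    # so LTV = revenue_per_period / churn_rate. Use numbers from the
    # ad-acquired cohort, since their behavior can differ from your
    # other users.
    def simple_ltv(revenue_per_period, churn_rate)
      revenue_per_period / churn_rate
    end

    puts simple_ltv(10.0, 0.05) # $10/month at 5% monthly churn -> $200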

If you run a service with a large free userbase, how do you factor in the people who Google your service’s name to sign in to their existing account, then click on your ad at the top of the search results? If a user signed up in 2012, then clicks on your ad today and makes a purchase tomorrow, do you attribute that purchase to the ad? What if they manually renew a subscription that they purchased a year ago? Do you attribute that revenue to the ad? What if they have an existing subscription that automatically renews several days after they click the ad? Does that count?

You can tell AdWords not to show your ads to existing users, but should you? Maybe seeing an ad about your plans is exactly what nudges some of them to buy.

And if any of these people receive a refund after making a purchase, do you factor that into the profitability of your ads?

This, I’m afraid, is just the tip of the iceberg. How do you quantify the impact of the ads on your brand? How do you measure how much your acquired users contribute to acquiring more users (your product’s virality)? And none of this even considers offline advertising.

Depending on your service and its offerings, you might be able to get by without worrying about a lot of these considerations, but if you run a freemium SaaS business and advertise on multiple ad platforms, it’s worth thinking about what makes the most sense for your business.

If you’re interested in this type of thing, drop me an email – I’d love to chat.

Calculating D1 Retention Rate

I recently learned something new about how retention rate calculations are performed and wanted to share. Consider this situation:

A user signs up for your service at 11:59pm on Tuesday, does something at 12:01am on Wednesday, leaves and never comes back again.

Should this user count towards your D1 retention rate?

When I initially built Retentioneer, a retention rate analysis script, it would say yes: the user should count towards the D1 retention rate because he was active on Tuesday and then again on Wednesday.

But I realized that way of performing the calculation would inflate the D1 retention rate, because it’s very common for a user to sign up for a service, try it out for a few minutes or an hour, then leave and never come back. Any of these users whose visit happened to straddle midnight UTC would wind up inflating the D1 retention rate. In some test data I analyzed, the D1 retention rate was 7% when D1 meant a user was active on adjacent calendar days, but only 2% when D1 meant the user had to perform an event 24–48 hours after signing up (the magnitude of the difference will vary based on the type of service).

I decided the latter was a better reflection of the true retention rate, updated the script to perform the calculations that way, shipped it, and thought that was the end of it.

At some point, though, I got the idea in my head that most other analytics services don’t calculate retention rate this way because it would be too slow, and that if a user is active on Tuesday and again on Wednesday, they’re counted as retained. I wanted to standardize my script with those services, so reverting Retentioneer to the original calculation method had been on my todo list for a while.

But… it turns out other analytics services do perform the calculations the same way Retentioneer does.

From Mixpanel’s How is retention calculated?:

Likewise, in daily retention, in order to be counted in the bucket marked 1, the customer must send whatever event we are looking for in the retention some time between 24 and 48 hours after he sent the cohortizing event.

And from Amplitude’s Retention: General and Computation Methods:

A user is counted as “next day” retained if they perform any event on at least the 24th-incremented hour. For instance, if a user performs their first event on Dec 1st, 05:59pm, the user is counted as Day 1 retained if they perform an event on Dec 2nd, 05:00pm, and Day 2 retained if they perform an event on Dec 3rd, 05:00pm.

Funny enough, there’s actually a note at the bottom of that page explaining that Amplitude previously used calendar dates to perform the retention calculations:

Note: these computation methods only apply as far back to August 18, 2015. Any retention computations that include dates before August 18, 2015 will be computed by calendar days/weeks/months.

Awesome to see that they realized the issue and made the change.

To sum it up: a user should only count as D1 retained if they were active 24 to 48 hours after performing the initial event. If you make the calculation using calendar days, you’ll inflate the D1 retention rate.
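
Here’s a minimal Ruby sketch of the two methods applied to the example above (not Retentioneer’s actual code):

    require 'time'

    signup = Time.parse('2016-06-07 23:59:00 UTC') # Tuesday, 11:59pm
    event  = Time.parse('2016-06-08 00:01:00 UTC') # Wednesday, 12:01am

    # Calendar-day method: any activity on the next calendar date counts.
    calendar_d1 = (event.to_date - signup.to_date) == 1

    # Windowed method: activity 24-48 hours after the initial event.
    hours_after = (event - signup) / 3600.0
    windowed_d1 = hours_after >= 24 && hours_after < 48

    puts calendar_d1 # true  -- inflates D1 retention
    puts windowed_d1 # false -- only 2 minutes after signup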

Hat tip to my coworker Meredith for the always insightful discussions around this as well as Jan Piotrowski and Hrishi Mittal for their feedback on Twitter.