We recently launched an email campaign for WordPress.com where we sent out an email to a large number of free, active users to promote one of our paid plans. I helped analyze the results of the campaign and wanted to share a few lessons learned.
A first pass at the high-level funnel looks something like this:
- We send the email to a user
- The user opens the email
- The user clicks a link in the email
- The user makes a purchase
However, unlike a website funnel, where it’s relatively straightforward to track users moving from one step to the next, the email medium adds a lot of complexity.
For example, email open rates aren’t reliable. Open rate tracking works by including an image tag in the email (typically a 1 by 1 transparent pixel) whose source is a URL unique to that specific email. When your email client loads that image, the sender sees the request for it and infers that you opened the email. The problem is that some email clients don’t load images automatically, so a user can read the email without the sender ever knowing. Because of this, you shouldn’t make opening the email a step in the funnel: users who read the email without loading the pixel would be incorrectly eliminated from the remaining funnel steps.
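To make the mechanism concrete, here’s a minimal sketch of how a sender might generate a per-recipient tracking pixel. The URL scheme, token format, and `store_token` helper are all hypothetical, just to illustrate the idea:

```python
import uuid

def tracking_pixel_html(recipient_email, base_url="https://example.com/open"):
    """Build a 1x1 transparent tracking pixel unique to this recipient.

    When the email client loads the image, the sender's server sees a
    request for this token and marks the email as opened. If the client
    blocks remote images, no request is made and the open goes unrecorded.
    """
    token = uuid.uuid4().hex  # unique per email sent
    # store_token(token, recipient_email)  # hypothetical: persist the mapping
    return f'<img src="{base_url}/{token}.gif" width="1" height="1" alt="">'

print(tracking_pixel_html("user@example.com"))
```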
We don’t use MailChimp, but they do something neat: if a user clicks a link in the email, they automatically factor that into the open rate. That way, if the user’s email client blocks the tracking pixel from loading, that person still gets counted in the open rate. But there’s still the issue of people who open the email but don’t click a link.
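In other words (a sketch of the idea, not MailChimp’s actual implementation), the adjusted open rate counts anyone who either triggered the pixel or clicked a link:

```python
# Hypothetical sets of user IDs gathered from tracking logs.
pixel_opens = {1, 2, 3}   # users whose email client loaded the pixel
link_clicks = {3, 4}      # users who clicked a link in the email
sends = 10                # total emails sent

# A click implies an open even if the pixel was blocked,
# so the adjusted open rate uses the union of the two sets.
adjusted_opens = pixel_opens | link_clicks
print(f"Adjusted open rate: {len(adjusted_opens) / sends:.0%}")  # 40%
```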
Ok, so let’s remove that step from the funnel and see where that gets us:
- We send the email to a user
- The user clicks a link in the email
- The user makes a purchase
Better, but there are still issues. What happens if a user reads the email, doesn’t click the link, but winds up going to your site on their own and buying whatever you’re promoting? This is not an edge case either: many people read email on their phones, but switch over to their desktop to make a purchase. Maybe they’ll start that process by clicking the link in your email, but maybe they won’t. Therefore, we should remove that step as well.
Now we’re at:
- We send the email to a user
- The user makes a purchase
If you’re promoting a product that is only available to users who were sent the email, this could work. The problem arises when users who don’t receive the email can also purchase the same product.
Let’s say we sent the email to 1,000 users and 50 made a purchase. But how many of those users would have made a purchase anyway? What you really want to know is how many more people made a purchase because of the email.
To do this, you’ll need a holdout group. Before you start, you identify 2,000 users who meet certain criteria, send the email to 1,000 of them, and compare their purchase rate to that of the 1,000 who were not sent the email. Maybe 50 of the users who were sent the email made a purchase, but only 40 of those who weren’t. Ignoring statistical significance for the sake of simplicity, that’s a 5% purchase rate for those who were sent the email vs a 4% purchase rate for those who weren’t, so you could say the email resulted in a 25% lift. Nice.
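The arithmetic, as a quick sketch:

```python
def purchase_lift(sent_purchases, sent_total, holdout_purchases, holdout_total):
    """Relative lift in purchase rate for the emailed group vs. the holdout."""
    sent_rate = sent_purchases / sent_total           # 50 / 1000 = 5%
    holdout_rate = holdout_purchases / holdout_total  # 40 / 1000 = 4%
    return (sent_rate - holdout_rate) / holdout_rate

print(f"{purchase_lift(50, 1000, 40, 1000):.0%}")  # 25%
```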
We ran into another issue on our first pass because we were comparing the purchase rate of users after they were sent the email to the purchase rate of everyone in the holdout group. The problem was that we sent the emails out over the course of several days, so we wound up excluding purchases made by those users before their email went out. This artificially decreased the purchase rate for the users we emailed and made it seem at first like the email resulted in fewer purchases. Once we realized this, we adjusted our queries to look at purchases made over the same date range (including purchases made by some users before they were sent the email) to give us an apples-to-apples comparison.
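We did this in SQL, but here’s the same idea sketched in pandas. The data, column names, and dates are all made up for illustration; the point is that both groups are measured over the same fixed window rather than from each user’s individual send date:

```python
import pandas as pd

# Hypothetical group assignments and purchase log.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "group":   ["sent", "sent", "holdout", "holdout"],
})
purchases = pd.DataFrame({
    "user_id":       [1, 2, 3],
    "purchase_date": pd.to_datetime(["2023-01-02", "2023-01-05", "2023-01-03"]),
})

# Compare both groups over the SAME window, from the start of the
# campaign to the end of the measurement period, regardless of when
# each user's email actually went out. This avoids excluding purchases
# made by "sent" users before their individual send date.
start, end = pd.Timestamp("2023-01-01"), pd.Timestamp("2023-01-31")
in_window = purchases[purchases["purchase_date"].between(start, end)]

buyers = in_window["user_id"].unique()
rates = (
    users.assign(purchased=users["user_id"].isin(buyers))
         .groupby("group")["purchased"]
         .mean()
)
print(rates)
```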
It’s interesting to consider how deep you can go on this type of analysis. For us, the analysis ended around here. But what else makes sense to look at? Should we try to look at the long-term effect of the email on user behavior? What about short term: what did users do on the site immediately after being sent the email? What about the time distribution between being sent the email and making a purchase? This rabbit hole can go pretty deep, but you may reach a point of diminishing returns where additional analysis doesn’t yield additional insights.
If you’ve done this type of analysis before, I’d love to learn about what other metrics you look at.