Analyzing the impact of an email campaign on user behavior

We recently launched an email campaign for WordPress.com where we sent out an email to a large number of free, active users to promote one of our paid plans. I helped analyze the results of the campaign and wanted to share a few lessons learned.

A first pass at the high level funnel looks something like this:

  1. We send the email to a user
  2. The user opens the email
  3. The user clicks a link in the email
  4. The user makes a purchase

However, unlike a website funnel, where it’s straightforward to track users moving from one step to the next, the email medium adds a lot of complexity.

For example, email open rates aren’t reliable. Open rate tracking works by including an image tag in the email (typically a 1 by 1 transparent pixel) whose source is a URL unique to that specific email sent to you. When your email client loads that image, the sender can tell you opened the email because they see a request for that image. The problem is that some email clients don’t automatically load images, so a user can read the email without the sender ever knowing. Because of this, you shouldn’t include opening the email as a step in the funnel: users who read the email without loading the pixel would be incorrectly eliminated from the remaining funnel steps.
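
To make the mechanism concrete, here’s a minimal sketch of the server side of open tracking in Python, using Flask purely for illustration (the route and the log_open helper are hypothetical, not how any particular email provider implements this):

```python
from flask import Flask, Response

app = Flask(__name__)

# A minimal valid 1x1 transparent GIF to serve as the tracking pixel.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

def log_open(email_id):
    # In a real system you'd record the open event in your database.
    print(f"email {email_id} was opened")

@app.route("/o/<email_id>.gif")
def tracking_pixel(email_id):
    # The email body embeds <img src="https://example.com/o/UNIQUE_ID.gif">,
    # so a request for this URL implies the recipient's client loaded images.
    log_open(email_id)
    return Response(PIXEL, mimetype="image/gif")
```

If the client never loads images, this route is never hit and the open goes uncounted, which is exactly the unreliability described above.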

We don’t use MailChimp, but they do something neat where if a user clicks on a link in the email, they’ll automatically factor that into the open rate. That way if the user’s email client blocks the tracking pixel from loading, that person will still get counted in the open rate. But there’s still the issue of people who open the email but don’t click a link.

Ok, so let’s remove that step from the funnel and see where that gets us:

  1. We send the email to a user
  2. The user clicks a link in the email
  3. The user makes a purchase

Better, but there are still issues. What happens if a user reads the email, doesn’t click the link, but winds up going to your site on their own and buying whatever you’re promoting? This is not an edge case either: many people read email on their phones, but switch over to their desktop to make a purchase. Maybe they’ll start that process by clicking the link in your email, but maybe they won’t. Therefore, we should remove that step as well.

Now we’re at:

  1. We send the email to a user
  2. The user makes a purchase

If you’re promoting a product that is only available to users who were sent the email, this could work. The problem arises when users who don’t receive the email can also purchase the same product.

Let’s say we sent the email to 1,000 users and 50 made a purchase. But how many of those users would have made a purchase anyway? What you really want to know is how many more people made a purchase because of the email.

To do this, you’ll need a holdout group. Before you start, you identify 2,000 users who meet certain criteria, send the email to 1,000 of them, and compare their purchase rate to the purchase rate of the 1,000 who were not sent the email. Maybe 50 of the users who were sent the email made a purchase, but only 40 of those who weren’t. Ignoring statistical significance for the sake of simplicity, that’s a 5% purchase rate for those who were sent the email vs a 4% purchase rate for those who weren’t, so you could say the email resulted in a 25% lift. Nice.
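
In code, the lift calculation with the made-up numbers above is just a few lines:

```python
# Purchase rates for the campaign group vs. the holdout group.
sent, sent_purchases = 1000, 50        # users who were sent the email
holdout, holdout_purchases = 1000, 40  # users held back from the campaign

sent_rate = sent_purchases / sent            # 0.05 -> 5%
holdout_rate = holdout_purchases / holdout   # 0.04 -> 4%
lift = sent_rate / holdout_rate - 1          # 0.25 -> 25% lift

print(f"sent: {sent_rate:.0%}, holdout: {holdout_rate:.0%}, lift: {lift:+.0%}")
# sent: 5%, holdout: 4%, lift: +25%
```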

We ran into another issue on our first pass because we were only looking at the purchase rate of users after they were sent the email and comparing it to the purchase rate of everyone in the holdout group. The problem was that we sent the emails over the course of several days, so we wound up excluding purchases made by those users before they were sent the email. This artificially decreased the purchase rate for users we sent the email to and made it seem at first like our email resulted in fewer purchases. Once we realized this, we adjusted our queries to look at purchases made over the same date range for both groups (including purchases made by some users before they were sent the email) to get an apples-to-apples comparison.
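
Here’s a minimal sketch of that fix, assuming a pandas workflow (the file names, column names, and dates are all hypothetical):

```python
import pandas as pd

# Hypothetical inputs: a purchases table, and a users table with a
# `group` column ("sent" or "holdout") assigned before the campaign began.
purchases = pd.read_csv("purchases.csv", parse_dates=["purchased_at"])
users = pd.read_csv("users.csv")

# Count purchases over the same fixed window for both groups, rather than
# counting each sent user's purchases only after their individual send date.
start, end = pd.Timestamp("2016-06-01"), pd.Timestamp("2016-06-30")
window = purchases[purchases["purchased_at"].between(start, end)]
buyers = window[["user_id"]].drop_duplicates()

rates = (users.merge(buyers, on="user_id", how="left", indicator=True)
              .assign(purchased=lambda d: d["_merge"] == "both")
              .groupby("group")["purchased"].mean())
print(rates)  # purchase rate per group over the same date range
```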

It’s interesting to consider how deep you can go on this type of analysis. For us, the analysis ended around here. But what else makes sense to look at? Should we try to look at the long term effect of the email on user behavior? What about short term: what did users do on the site immediately after being sent the email? What about the time distribution between being sent the email and making a purchase? This rabbit hole can go pretty deep, but you may reach a point of diminishing returns where additional analysis doesn’t yield additional insights.

If you’ve done this type of analysis before, I’d love to learn what other metrics you look at.

Podcasts I’m listening to: July 2016 edition

Until recently, the only two podcasts I listened to were the Tim Ferriss podcast and the Zen Founders podcast by Rob and Sherry Walling.

Tim’s interviews are top notch and I’ve learned a lot from them, but I’ve been listening to it less and less as time goes on. I think a big part of that is that I often finish listening to an interview and wind up with this feeling like I’m not doing enough with my life. For example, Tim recommends using the question “Am I working on something that I’ll be remembered for in 200 years?” to guide your efforts – not because being remembered is the goal, but because if you’re working on something at that scale, it’s probably going to be something really important. Maybe it’s because of the arrival of my daughter a few months ago, but I find myself caring less about professional ambition and more about family ambition.

Which brings me to Zen Founders, a podcast about building startups and balancing that with your family. I don’t have any plans to start another startup, but the discussions really resonate with me, and Rob and Sherry are doing a huge service to the founder community through the podcast.

There are two other podcasts I recently started to listen to as well:

Revisionist History by Malcolm Gladwell. I’ve listened to all of Gladwell’s books and am a huge fan of his writing. If you like the kind of insightful storytelling that he’s so well known for, definitely check out this podcast. The episode titled The Big Man Can’t Shoot and the three-part series on higher education in America are great places to start. I learned about this podcast through Tim’s recent interview with Malcolm.

Exponent by Ben Thompson and James Allworth, where the two of them discuss Ben’s writing on Stratechery, a blog about tech, society, and how the internet is fundamentally changing how the world works. The podcast and Stratechery will give you a new lens for understanding what’s happening in the tech world. Highly recommended.

Any recommendations for other podcasts to check out?

In Search of Unhackable North Star Metrics

In my last post, I wrote about why focusing too much on conversion rates alone is a bad idea because conversion rates can easily be manipulated by changing the quality of traffic to your site.

It got me thinking about whether there are metrics that can’t be manipulated. The key for such a metric is that improvements to it always have to be correlated with the health of the site or business. Put another way, if you can artificially inflate a metric, or if an improvement to that metric could actually be a bad thing, then it wouldn’t qualify; it wouldn’t be a so-called north star metric.

For example, a homepage to purchase conversion rate wouldn’t work. You can improve your conversion rate by dropping search traffic, reducing the price of your product, and more – all of which could actually be bad for your business.

It gets interesting when you start making the metric more and more specific to overcome ways it can be manipulated. What if instead of looking at your overall homepage to purchase conversion rate, you just looked at the conversion rate of direct traffic? That’s better, but it can still be manipulated by increasing your offline advertising which could change the quality of the direct traffic and therefore your conversion rates.

I go back and forth about whether it’s even possible. Like, will there always be a way to improve a metric but have it actually be bad for the business? Or is it possible to avoid manipulation by being really, really specific about the metric?

I’d love to hear from folks on this topic, especially if you have a metric in mind for your site/product/business that seems unhackable.

An impractical guide to doubling your conversion rates

Let’s imagine a standard web app with three main steps:

  1. Viewed Homepage
  2. Signed Up
  3. Purchased

Furthermore, let’s say 20% of the visitors to the homepage sign up, and 5% of the users who sign up complete a purchase, giving you a 1% overall conversion rate.

Your boss comes to you and says “You need to double the overall conversion rate from 1% to 2%. Do whatever it takes.”

As a thought experiment, consider at a high level what the numbers have to be for this to work out. Your first thought might be to double the homepage conversion rate from 20% to 40% (giving you a 40% * 5% = 2% overall conversion rate) or double the purchase conversion rate (20% * 10% = 2% overall conversion rate). You could also increase both by some smaller amount to get a similar result: 30% * 6.67% = 2%.

In the real world, this winds up being really hard. If you increase the percentage of people signing up, it will probably decrease the percentage of people who then purchase. Why? Consider what would happen if a site with a pricing section on its homepage removed it completely. More people will sign up, but once they see the pricing, many of those extra sign-ups (people who thought your service was free) will leave once they realize it isn’t. So if you somehow increase the sign up rate to 40%, your sign up to purchase conversion rate might drop in half from 5% to 2.5%, giving you the same 40% * 2.5% = 1% overall conversion rate. If you’re lucky, some of those extra users will convert anyway and maybe you’ll get 40% * 2.8% = 1.12%. Getting closer to that 2%, but still a long way away.
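
Since the overall conversion rate is just the product of the step rates, the trade-off is easy to see in a few lines of Python (numbers from the example above):

```python
# Overall conversion rate = homepage sign-up rate * sign-up purchase rate.
def overall(signup_rate, purchase_rate):
    return signup_rate * purchase_rate

print(f"{overall(0.20, 0.05):.2%}")   # 1.00% -- the starting point
print(f"{overall(0.40, 0.025):.2%}")  # 1.00% -- doubled sign-ups, halved purchases
print(f"{overall(0.40, 0.028):.2%}")  # 1.12% -- the "lucky" case above
```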

The trick to getting to 2% is to improve the quality of the traffic at each step.

Consider that homepage conversion rate of 20%. How could you increase that without making any changes to your website?

That 20% conversion rate is actually a composite of different segments. For example, imagine your website has three traffic sources:

  1. Direct traffic (50% of your traffic) converts at 30%
  2. Search traffic (40% of your traffic) converts at 10%
  3. Social traffic (10% of your traffic) converts at 10%

50% * 30% + 40% * 10% + 10% * 10% = 20% homepage conversion rate.

What would happen to your conversion rate if you delisted your site from search engines completely? The numbers now become:

  1. Direct traffic (83.3% of your traffic) converts at 30%
  2. Social traffic (16.7% of your traffic) converts at 10%

83.3% * 30% + 16.7% * 10% = 26.7% homepage conversion rate.

Without making any changes to your site itself, you increased the homepage conversion rate by 26.7%/20% – 1 ≈ +33%. Because the quality of your traffic has improved, the sign up to purchase conversion rate will likely increase as well. Instead of 5% upgrading, it might wind up being 8% (+60%). What’s your overall conversion rate now? 26.7% * 8% = 2.13%!
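
Here’s a minimal sketch of that composite math, blending the segment rates before and after dropping the search traffic:

```python
# Each segment: (share of total traffic, homepage sign-up conversion rate).
segments = {
    "direct": (0.50, 0.30),
    "search": (0.40, 0.10),
    "social": (0.10, 0.10),
}

def blended_rate(segments):
    total_share = sum(share for share, _ in segments.values())
    # Re-normalize shares so the mix sums to 100% after dropping a segment.
    return sum((share / total_share) * rate for share, rate in segments.values())

print(f"{blended_rate(segments):.1%}")  # 20.0% with all three segments

del segments["search"]                  # delist the site from search engines
print(f"{blended_rate(segments):.1%}")  # 26.7% once search traffic is gone
```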

So in order to double your conversion rate in this made-up example, all you had to do was delist your site from search engines. Mission accomplished 🍻. As an added bonus, many of your other metrics will improve as well. Because your site’s traffic is of a higher quality, your retention rates will go up, your churn will go down, your average revenue per user will go up, your refunds will go down, and more.

You could also pull similar hacks elsewhere: make your website non-mobile-friendly so Google decreases your mobile rankings, which (assuming mobile users convert more poorly than desktop users) would increase your conversion rates. You could block certain poorly converting browsers from visiting your site (I’m looking at you, IE). You could block users from poorly converting countries, or even non-English-speaking users if your site isn’t translated.

Of course you should never do any of these things.

In the process of improving your metrics by hacking off a large portion of your traffic, you’ll also wind up decreasing the number of people who make it all the way through the funnel (purchasing in this case).

It’s tempting to want to focus on a single metric like conversion rate, but it’s also important to remember that individual metrics can almost always be artificially boosted:

  • You can increase your conversion rates by preventing low converting traffic from reaching your site (at the expense of revenue)
  • You can increase your revenue by increasing paid advertising (at the expense of profit)
  • You can increase your profit by laying off a bunch of employees (possibly at the expense of your long term growth and profitability)

Instead, try to identify the set of metrics that are most important to your company and pay attention to the group as a whole. More often than not, when one metric goes up, another will go down, but with solid execution and a little luck, the overall impact of your changes will be a net win for your website or company.


Effectiveness

If you followed me back in the day while I was working on my failed Lean Designs startup, it probably would have been hard to tell that things were headed in the wrong direction. Even for me, it took a long time to realize how many problems there were with the idea and execution.

I’ve thought about that a lot since then. It’s really hard to observe something and evaluate how things are going based on activity alone. Activity is necessary and important, but only if it’s applied effectively.

How can you tell whether activity is effective? Looking ahead, it’s difficult unless you have a deep understanding of the problem and the possible solutions. For example, it might have seemed like setting up an LLC and having a logo designed for Lean Designs were important tasks, but really they weren’t. If you’ve built an online business before you might realize that, but if not it would be tempting to look at the completion of those tasks and think I was making meaningful progress.

Looking backwards is a bit easier. Compare your current results for some goal with where you were a month ago, six months ago, a year ago. Have you made progress in whatever metric that matters? If your goal is to build and grow a SaaS startup, have you launched? What do your growth and retention rates look like? If you’re trying to lose weight, are you? If you’re trying to write more often, have you been?

It’s tempting to want to explain why when the results aren’t there. In the short run, there can be a lot of valid reasons why things aren’t going well: bad luck, illness, and so on. But the further back in time you go, the harder it becomes to attribute the lack of results to these types of factors.

If you look at where you are today with a goal and your results haven’t changed from six months ago despite lots of activity, it’s worth taking a step back and trying to identify the root cause. Ask why a bunch of times. Maybe there is a valid reason, but in my experience it’s often something more fundamental. Maybe I’m not as motivated by the goal as I thought, which is causing me not to put in the effort that’s really required. Or, like with Lean Designs, maybe my entire approach is wrong. Maybe I shouldn’t have been working on Lean Designs in the first place.

And to be clear, I’m not saying the ends justify the means. Just that results are important when goal-setting, and the longer you go without meaningful results, the greater the odds that something is fundamentally wrong with the goals or the execution.

Put another way, if I’ve been working on something for a while and the results aren’t there, I’m probably doing something wrong and need to re-evaluate my approach.

This might be obvious, but it took me a long time to really internalize it so I wanted to share in case it helps anyone.

Would love to hear your thoughts.

A Long Overdue Lean Designs Post Mortem

[Image: lean-designs.png]

In 2010, inspired by the success of the mockup tool Balsamiq, I started working on a competitor called jMockups. My original goal was basically to build a Balsamiq clone, but instead of it being downloadable software it would be web-based. I named it after jQuery.

After a while I realized that competing head-on with Balsamiq was a bad idea so I pivoted to building a high fidelity mockup tool – one that would create mockups that looked like real websites instead of sketches – and I renamed the project Lean Designs. Instead of competing with Balsamiq, I’d be competing with tools like Photoshop that a lot of designers use to design sites before building them. And as an added bonus, Lean Designs would also export your high fidelity mockup to HTML and CSS, saving you from having to manually code up the design when you were finished. I was pretty excited by it.

Here’s an early demo of Lean Designs:

As you might have guessed from the title of this post, things did not go well.

I never wound up writing about what went wrong – mostly out of embarrassment – but reading about my mistakes might save some of you some heartache with your own projects.

First and foremost, Lean Designs didn’t solve a clear pain point for a specific group of users. If you had asked me back then who Lean Designs’s audience was, I would have said something like: it’s for developers who want to design a website without writing HTML/CSS, it’s for designers to create a high fidelity mockup of their websites before coding it, and it’s for everyday people who want to create a simple static website but don’t want to code 😱.

These are three very different use cases, and Lean Designs didn’t solve any of them well. For example, for developers and everyday people who want to create a website, a What You See Is What You Get (WYSIWYG) web design tool where every element is absolutely positioned makes it extremely hard to create a well-designed site. I touted on Lean Designs’s homepage how easy it was to create beautiful websites with it, but almost everything created with it looked terrible.

If you asked me which of those use cases was my focus, I probably would have said Lean Designs was a Photoshop replacement for designers to create high fidelity website mockups. But… I am not a designer. And I’ve never used Photoshop to create high fidelity mockups. I read online about how many designers use it, but I had no expertise in the problem I was trying to solve. And if Lean Designs was truly for designers to create high fidelity mockups, why did I build an export to HTML/CSS feature that would never be used by these designers in a production website?

One way to mitigate this is to do customer development: to get on the phone and meet face to face with people in this market to learn about their real problems. I never spoke with anyone in Lean Designs’s target audience to verify my assumptions. This is ironic given that Lean Designs is named after the Lean Startup movement, a big part of which is to do customer development to avoid making incorrect assumptions.

I also didn’t research any of my competitors. I rationalized it by saying to myself that I didn’t want to be influenced by what was currently on the market. To this day I’m still learning about competitors that existed back then that I had no idea about.

I formed jMockups LLC to add an element of seriousness to the project even before the site had launched. That’s $500/year to operate in Massachusetts.

I paid hundreds of dollars for a logo that I never wound up using because after I bought it I realized it resembled a swastika:

[Image: jmockups-logo.png]

When I later renamed the site to Lean Designs, I filed a trademark for it with the help of a local lawyer. He also helped write its Terms of Service. 💸

With all of these administrative things taken care of, I proceeded to spend months building the first version of the product.

I thought I was being innovative by blindly following my vision for the product, but I was being extremely naive.

When I received feedback about jMockups on Hacker News, I took every bit of praise as validation that the product was on the right track. Same thing when I announced the pivot to Lean Designs.

The first version was completely free. I wanted feedback – any feedback – and thought the best way to get it was to get as many people using the product as possible. Feedback from free users can be very misleading; what you really need is feedback from people who will pay.

Eventually I transitioned to a freemium model where you could create a limited number of mockups for free or unlimited mockups for $9 per month. This was a tool I was aiming at professionals, and I charged the same amount as a burrito at Chipotle.

The only marketing I did for it was to create a blog where I only posted about feature updates. I should have been writing – or paying someone to write – content that would be interesting to my target audience (whoever that was).

And for the few people who did use it and did pay, I eagerly implemented almost every feature request in the hope that each one would turn the tide on my failing product.

I was in a mastermind group at the time, but we spent too much time talking about features and not enough about whether Lean Designs was viable in the first place.

I didn’t pay close attention to metrics nor did I look at what people were actually creating with the site.

Finally, I spent way too much time trying to turn what I knew to be a flop into a successful startup. It’s hard to know sometimes whether an idea is bad or whether you just need to keep pushing, but in Lean Designs’s case I kept at it way too long.

After about a year and a half and hundreds of hours spent toiling away on nights and weekends, I stopped accepting new users, put up a notice that the site would be shut down in a few months, then killed it.

The upside of my experience with Lean Designs is that I picked up a lot of new technical skills and learned a lot about what not to do, which I later applied to Lean Domain Search and other side projects.

Was it worth it? Could I have learned these things by reading blog posts like this one when I was deciding whether to work on Lean Designs? It’s hard to say. I’d like to think so. Maybe some things just have to be learned the hard way.

On that note, if anyone reading this finds themselves working on a project with similar issues and is looking for feedback, don’t hesitate to drop me an email.