Using Data to Forecast the Impact of a Pricing Change

Back in April of this year Help Scout announced we would be raising prices for customers on some of our legacy plans in six months’ time (in October). I recently helped with an analysis to estimate what the impact would be on our monthly recurring revenue (MRR). We performed a similar analysis prior to the announcement, but several months had passed, so it was time for a fresh forecast.

At a high level, we performed the analysis as follows:

1. Identify the variables that impact the forecast

For us, this meant variables such as:

  • Of the customers we announced the price increase to in April, how many have churned between then and now? How many can we expect to churn between now and October? And how many can we expect to churn after the price increase?
  • How many can we expect to upgrade or downgrade after the price increase?
  • How many can we expect to change from monthly payments to discounted annual payments?
  • Because customers pay per user, what % of users can we expect to lose after the price increase?
  • And so on.

2. Create a spreadsheet that lets you adjust the variables to see what impact they have on the forecast

For example (and simplifying a lot), if we had W customers originally, X have churned between the announcement and now, we expect another Y to churn between now and the price increase, and we expect a fraction Z to churn after the price increase, paying $N/month on average, we’ll wind up with a future MRR of (W – X – Y) * (1 – Z) * N.
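If it helps to see the arithmetic in code, here’s a minimal sketch of that simplified model (all names and numbers below are made up for illustration):

```python
def forecast_mrr(w_original, x_churned_so_far, y_churn_before_increase,
                 z_post_increase_churn, n_avg_mrr):
    """Future MRR = (W - X - Y) * (1 - Z) * N, with Z as a fraction."""
    remaining = w_original - x_churned_so_far - y_churn_before_increase
    return remaining * (1 - z_post_increase_churn) * n_avg_mrr

# Illustrative numbers only: 1,000 original customers, 50 churned so far,
# 30 more expected before the increase, 10% churn after, $25/month average
print(forecast_mrr(1000, 50, 30, 0.10, 25))  # (1000 - 50 - 30) * 0.9 * 25 = 20700.0
```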

Our actual spreadsheet wound up becoming quite complex in order to handle all of the variables, but in the end we had something that took into account all of the key factors that impact the outcome.

3. Come up with a range of estimates for each variable

Using historic trends and educated guesses, we estimated the range for each of the key variables we identified. With those in hand, we created several forecasts (pessimistic, realistic, optimistic) showing what the outcome looks like in each of those scenarios:

[Screenshot: spreadsheet showing the pessimistic, realistic, and optimistic forecast scenarios]

My original instinct was to come back with a single number (“The forecast is $X”), but my lead wisely suggested calculating several outcomes to account for the range of possibilities.

This was a fascinating exercise because it forced us to understand on a deep level what the inputs are (churn rate, etc.) and what impact they have on our bottom line (MRR).

If you’re interested in trying this for your business, try to create a simple spreadsheet model that takes into account various metrics (number of unique visitors, trial sign-up rate, trial-to-paid rate, etc.) and comes close to predicting your historic results, then see how well it does going forward. You’ll likely learn a lot in the process about which metrics you need to change, and by how much, to achieve your growth goals.
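As a starting point, the model can be just a few lines. Here’s a minimal Python sketch with hypothetical metric names and values that you’d replace with your own:

```python
# A minimal funnel model. All inputs are hypothetical; plug in your own
# metrics and check the output against your historic results.
def monthly_new_mrr(unique_visitors, trial_signup_rate, trial_to_paid_rate,
                    avg_mrr_per_customer):
    trials = unique_visitors * trial_signup_rate
    new_customers = trials * trial_to_paid_rate
    return new_customers * avg_mrr_per_customer

baseline = monthly_new_mrr(50_000, 0.02, 0.15, 25)   # $3,750/month
# What would doubling the trial sign-up rate do?
improved = monthly_new_mrr(50_000, 0.04, 0.15, 25)   # $7,500/month
print(baseline, improved)
```

Once a model like this roughly matches your past results, you can vary one input at a time to see which metric movements actually matter for your goals.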

Automattic is hiring a Marketing Data Analyst

We’re now accepting applications for a new Marketing Data Analyst position at Automattic that might interest some of you:

https://automattic.com/work-with-us/marketing-data-analyst/

In this role you’d be helping us use data and analytics to guide the direction of our marketing efforts on WordPress.com.

Here’s the official description:

Automattic is looking for a data analyst to join the marketing team. You will distill data into actionable insights to guide our customer marketing and retention strategy as well as inform product development efforts.

Primary responsibilities include:

  • Build and maintain standardized reporting on key metrics across the customer lifecycle.
  • Develop customer segmentation models to inform tailored, multi-channel marketing strategies.
  • Conduct ad hoc analyses to better understand customer behavior, needs, and individual test results.
  • Partner with other analysts and developers to increase data accessibility across the organization.
  • Design a process for prioritizing and communicating data requests and priorities.

You:

  • Are proficient in SQL and Excel.
  • Have experience with web analytics platforms such as Google Analytics, KISSmetrics, or Mixpanel.
  • Have experience working with marketing teams to deliver analyses and answer business questions.
  • Are able to communicate data in a way that is easy to understand and presents clear recommendations.
  • Are highly collaborative and experienced in working with business owners, executives, developers and creatives to discuss data, strategy and tests.
  • Have excellent prioritization and communication skills.
  • Ideally, have web development experience (though not required).

Like all positions at Automattic, you’ll work remotely, and can be based wherever you live. It’s not a requirement that you live in or relocate to the Bay Area to do this job.

If this sounds interesting to you (and how could it not?!?) there are instructions at the bottom of the job description about how to apply.

And if you have any questions about Automattic or this data analyst position, feel free to drop me an email: mhmazur@automattic.com.

 

Visualizing Your SaaS Plan Cancellation Curves

If you work on a SaaS product, you probably have a good idea of what its cancellation rates are, but chances are you don’t know how those rates change over time. For example, what % of users cancel after 1 day? How about after 7 days, 30 days, etc.?

I worked on a project at Automattic this week to help us understand the cancellation curves for WordPress.com’s plans and am open sourcing the R script so anyone can do the same for their service.

Here’s an example of how the cancellation curves might look for a service with a Gold, Silver, and Bronze plan:

[Example chart: cancellation curves for the Gold, Silver, and Bronze plans]

We can see that most users who cancel do so pretty quickly and that, long term, about 30% of Gold plan, 20% of Silver plan, and 10% of Bronze plan subscriptions wind up cancelled.

To generate this data for your own product, you’ll just need three data points for each subscription: when it was purchased, when it was cancelled (if it was), and the name of the subscription. The script will take care of analyzing the data and generating the visualization.
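The script itself is written in R (linked below), but the underlying idea is simple enough to sketch in a few lines of Python. This illustrative version assumes a CSV with purchased_at, cancelled_at (blank if still active), and plan columns:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("subscriptions.csv", parse_dates=["purchased_at", "cancelled_at"])
df["days_to_cancel"] = (df["cancelled_at"] - df["purchased_at"]).dt.days

for plan, group in df.groupby("plan"):
    days = range(366)
    # Fraction of this plan's subscriptions cancelled within d days;
    # active subscriptions (NaT cancelled_at) count as not cancelled.
    curve = [(group["days_to_cancel"] <= d).mean() * 100 for d in days]
    plt.plot(days, curve, label=plan)

plt.xlabel("Days since purchase")
plt.ylabel("% of subscriptions cancelled")
plt.legend()
plt.show()
```

One caveat with a naive version like this: very recent subscriptions haven’t been observed for the full window yet, so you may want to limit the analysis to subscriptions old enough to have had a chance to cancel.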

You can check out the script and additional details on GitHub here: Cancellation Curve Visualizer.

If you have any questions or run into any issues, don’t hesitate to drop me a note.

The impact of GoDaddy’s 5-year default domain registration option

Andrew Allemann has a great post on Domain Name Wire where he tries to estimate the impact of GoDaddy’s five-year default domain name registration option.

GoDaddy’s shopping cart defaults to a five-year registration period when you place a domain name in your cart. Most people switch this back to just one year, but some don’t. Whether they merely overlook this or decide it makes sense to register the domain name for five years, about 3.5% of new .com registrations at GoDaddy each month are for five years.

Here’s a summary of the math (recreated in a short script after the list):

  • In June, the .com registry data shows 26,750 five-year registrations at GoDaddy, which account for 3.48% of all of its new .com registrations, so 26,750 / .0348 = ~769K registrations total.
  • On average across all registrars, only 1.66% of new .com registrations are for five years.
  • Had GoDaddy met the average, it would have registered only 769K * 1.66% = ~12,750 five-year registrations.
  • That works out to a difference of 26,750 – 12,750 = ~14K five-year registrations, or 14K * 5 = 70K years of registrations.
  • Assuming those users would have purchased one-year registrations if five wasn’t the default, that works out to 70K – 14K = 56K extra years of registrations per month thanks to that five-year default.
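Here’s that back-of-the-envelope math in a few lines of Python, in case you want to play with the inputs:

```python
five_year_regs = 26_750    # GoDaddy's five-year .com registrations in June
five_year_share = 0.0348   # ...which were 3.48% of its new .com registrations
industry_rate = 0.0166     # average five-year share across all registrars

total_regs = five_year_regs / five_year_share            # ~769K total
expected = total_regs * industry_rate                    # ~12,750 five-year regs
extra_five_year = five_year_regs - expected              # ~14K extra five-year regs
extra_years = extra_five_year * 5 - extra_five_year * 1  # ~56K extra reg-years/month
print(round(total_regs), round(expected), round(extra_years))
```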

Regardless of how you feel about GoDaddy, you’ve got to admit that they’re really effective at upselling.

Hat tip to my coworker Wendy for sharing the post.

The Meta Funnel: From User Activity to Product Changes

When we think about funnels, we tend to think about how users move through our product: what percentage of people who visit our homepage sign up, what percentage of those users pay, etc.

If we zoom out, there’s another even more important funnel that we can use to measure how sophisticated an organization is with its data:

User Activity → Data → Analysis → Insights → Product Changes

User Activity → Data: When users interact with your product, are you capturing the relevant data about their activity? For example, you might track how many people visit your homepage, but how about what percentage scroll below the fold, whether they’re clicking on parts of the page they shouldn’t be, the bounce rate, how they’re getting to your site and how that’s changing over time, etc.? What’s important to track will vary by product, and not everything you track will be important, but if you’re not recording the data, it will be impossible to analyze it.
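To make that concrete, capturing an event can be as simple as appending a record somewhere queryable. This is a deliberately minimal sketch (the event names and schema are made up), not a substitute for a real analytics pipeline:

```python
import json
import time

def track(event, **properties):
    """Append one event as a JSON line; a stand-in for your analytics tool."""
    record = {"event": event, "ts": time.time(), **properties}
    with open("events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

track("homepage_view", referrer="google", scrolled_below_fold=True)
track("signup", plan="free")
```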

Data → Analysis: You have data, but is anyone looking at that data regularly? All of the data in the world doesn’t matter if no one ever analyzes it. For some types of analysis your analytics tools will make this step easy, but it also might require complex queries or scripts depending on what questions you’re trying to answer.
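If you logged events like the hypothetical JSON lines above, even a first-pass analysis can be a few lines of Python; the hard part is making sure someone actually runs it regularly:

```python
import json
import pandas as pd

# Load the hypothetical event log from the sketch above
events = pd.DataFrame(json.loads(line) for line in open("events.jsonl"))
views = (events["event"] == "homepage_view").sum()
signups = (events["event"] == "signup").sum()
print(f"homepage -> signup conversion: {signups / views:.1%}")
```

In practice this becomes a dashboard or a scheduled report, but the habit of looking matters more than the tooling.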

Analysis → Insights: One of the hardest things about analytics is that it’s often difficult to look at all of the numbers and draw actionable insights from them. You may discover that your conversion rate is 15%, but is that good or bad? If it goes up to 20%, is that because your product has improved or just because the quality of your traffic has changed? If the top term your users are searching for on your support page is “domains”, is that an indication that you need to improve the instructions you provide users in your product, or just an inevitable result of domains being very complex?

Insights → Product Changes: And finally, once you have insights from the analysis you’ve done, are you making any changes to your product as a result? Maybe insights into your product’s support search terms indicate that you do need to improve the guidance you provide to users within your product. Does your team then execute on that by actually improving the guidance within your product?

In my experience, both across many years of side projects and at work, the conversion rate across this entire funnel is typically very low. Part of it is just the nature of the beast: it’s hard to set up tracking to collect everything that’s important, it’s hard to analyze the data you do collect, it’s hard to come up with insights from that analysis, and it’s hard to make changes to your product when you do have those insights.

But just because it’s hard doesn’t mean it’s not worth optimizing. If you can double your organization’s conversion rate between any of these steps, it should double the number of improvements you wind up making to your product as a result.

One thing that can help is to discuss with your team and document your organization’s processes for each of these steps. Things like:

  • Who is responsible for implementing analytics on your team?
  • If they don’t have experience setting it up, where can they go to learn?
  • Where can they go to learn what data is important to collect?
  • How do they analyze the data?
  • How do you ensure people are looking at the data often enough?
  • Can you automate the reporting? Should you?
  • Who on the team needs to be involved to maximize the number of insights you’re discovering from your analysis?
  • What does your process look like for turning those insights into actual product changes?

There’s probably a lot of low-hanging fruit here for your team to work on. The better your team gets at moving down this funnel, the more improvements you’ll make to your product, leading to happier users and more impact on your company’s bottom line.

Luca on Data and Marketing

My friend and coworker Luca Sartoni participated in an AMA on ManageWP where he was asked about using data to guide marketing efforts.

His response explains well why making data-informed decisions will lead to better results than making data-driven decisions:

Interconnecting data science with marketing is one of the keystones of what is commonly called growth hacking. I’m not a fan of growth hacking by itself, though, because business is more complex than just hacking things.

When your decisions are based solely on data science you can call yourself data-driven, but there are tremendous blind spots. Data looks neutral, but in reality it’s not. The way you plan your data sources, collect the data, and process that data to get answers is not neutral and impacts the final outcome; therefore the information you extract cannot be considered neutral. If you drive your decision-making process solely with that information, you think you are considering data, but in reality you are considering the whole non-neutral process.

For this reason you need to keep more elements in mind when it’s time to make a decision, and data is only one of them. There is vision, mission, experience, data, and more. When you mix them, giving data the right amount of weight, you make a data-informed decision, which is usually better than a data-driven one, simply because it’s more informed by those other elements.

The whole AMA is worth checking out.

Understand your metrics better by trying to predict how changes will impact them

A few weeks ago, right before launching a new email marketing campaign at work, I asked folks on our team for any predictions about what the email’s conversion rate would be. It wasn’t intended as a serious exercise – more in line with the game where you guess the date a baby will be born or how many marbles are in a jar.

Someone threw out a guess and then a few more people followed with their own. We launched the email campaign and waited. When the results came in, the actual conversion rate was much lower than any of us had expected. We were all surprised and I think a big part of that was due to the guesses we had made beforehand.

I’ve been thinking a lot about that experience and what we can learn from it.

Trying to predict how your metrics will change is a great way to force you to think deeply about those metrics. It’s easy, after you make a change to your product, to look at a metric and conclude that the change – or lack thereof – makes sense. But if you have to make a prediction beforehand, you’re forced to consider all of the factors. In this example: what do our email conversion rates typically look like? Is it 0.5%, 2%, or 5%? Have we tried something similar in the past? Are there other channels that we can learn from? Do we expect this new campaign to have higher or lower conversion rates?

The more you try to predict how your metrics will change, the better you’ll get at predicting them, and the better you get, the more likely it is you’ll be able to move those metrics. Imagine two teams: one that always tries to predict what the metrics will look like and one that doesn’t. After doing this several times, which team do you think will be better at understanding how to influence those metrics?

Don’t ask people to guess in public, because the subsequent guesses will tend to be anchored to the first one. If no one has an idea initially and someone guesses that a metric will be 5%, then people will tend to anchor their guesses around 5%: 4%, 6%, etc. It’s much less likely someone will say 0.4%, even if they would have guessed that without seeing any other responses. The easiest way to get around this is probably to send out a survey and have people guess there. Afterwards, you can reveal who was closest.

Put your guesses in writing. It may take some time for the results to come in and it’s easy to forget what your original prediction was. Also, it will make you take the exercise more seriously because of the extra accountability that putting it in writing adds.

Try to predict how your metrics will change before embarking on a project. For example, before we set up this email campaign, should we have tried to predict how it would perform? I think so. Not only would it have caused us to think more carefully about the objectives and how to achieve them, but it might have led us not to pursue it in the first place.

Look for other opportunities to make predictions. This doesn’t have to be limited to product changes. For example, let’s say you’re about to publish a big announcement on your blog:

  • How many visitors will that post bring over the first 24 hours? How about a week later?
  • How many people will share it on social media?
  • How many of those people will convert into free/paying customers?

A lot of what we do online can be quantified – keep an eye out for ways to hone your predictive skills.

Do any of you all do this type of thing on a regular basis? I’d love to hear about your experience and what you’ve learned from it.