A New Adventure: I’m Taking the Leap to Focus on Preceden and Analytics Consulting

As many of you know, I’ve had a long-running side project called Preceden, a tool that helps people create professional-looking timelines:

preceden-2018.png

I launched Preceden back in early 2010 while I was still a lieutenant in the Air Force and kept it running as a nights-and-weekends side project throughout my time at Automattic and Help Scout.

Over the years its revenue has slowly grown to the point where it's a healthy little business these days. But as its revenue has grown, the time I have available to put into it has dwindled to an hour or two each week. Between my work at Help Scout and my family (three kids under four now!), I've basically had time for support and not much else, despite there being so much more I want to do.

There was never going to be a perfect time to take the leap to focus on growing Preceden, but with its revenue and growth being what it is, my wife and I have decided now’s the time to do it.

For the foreseeable future I’ll be focused on growing Preceden, but also doing some analytics consulting on the side. Going all-in on Preceden was an option, but I really enjoy analytics and business intelligence work and want to continue leveling up there. I’m thrilled to have both Help Scout and Automattic as my first consulting clients.

With my hours now reduced at Help Scout, we’re looking to hire a new analytics lead. Help Scout is an incredible company and you’ll get to work with an amazing group of people who care deeply about building a business and a product that people love. As the lead analyst, your work will have a huge impact on the direction of the business. If you’re interested in this role, check out the job description here: Data Analyst at Help Scout and feel free to shoot me an email with any questions.

I have no idea how this will all play out long term, but I’m really excited to see how it goes.

Building a Looker-Powered Daily Metrics Email Report

One of the main ways we evangelize metrics at Help Scout is with a daily metrics report that is automatically emailed to the entire company every morning.

In the email we highlight the performance of our key business metrics (New Trials, New Customers, etc) for the day prior and for the month to date. We also include our projection and target for the month to help us understand how we're tracking.

Here’s what it looks like (with a shortened list of metrics and no actual numbers):

Screen Shot 2018-06-26 at 10.50.17 AM.png

Because the report is delivered over email and doesn't require logging into a separate tool, it's easy for everyone to stay up to date on how the business is doing. It's also a frequent cause for celebration in Slack:

daily-metrics-yay.png

An earlier version of this report was generated by a lot of PHP code that was responsible for calculating all of the metrics and delivering the email.

When we adopted Looker as our Business Intelligence tool last year, though, we ran into a problem: we had refined how a lot of our metrics were calculated as we implemented them in Looker. As a result, Looker would sometimes report different values than the daily metrics report. This obviously wasn't ideal because it caused people to mistrust the numbers: if Looker said we made $1,234 yesterday but the metrics email said we made $1,185, which was correct?

Our solution was to rebuild the daily metrics email to use Looker as the single source of truth for our metrics. Rather than calculate the metrics one way in Looker and painstakingly try to keep the PHP logic in sync, we rebuilt the metrics email from scratch in Ruby and used Looker’s API to pull in the values for each of the metrics. This ensured that the numbers in Looker and the daily metrics email always matched since the daily metrics email was actually using the metrics calculated by Looker.

Building a Daily Metrics Report for Your Business

If you use Looker and want to build something similar for your organization (which I highly recommend!), I open sourced a super-simple version of ours to help you get started:

https://github.com/mattm/looker-daily-metrics-email

For this demo, the script assumes you have a Look with a single value representing the number of new customers your business had yesterday:

Screen Shot 2018-06-26 at 10.46.43 AM.png

When you run the script, it will query Looker for this value, throw it into a basic HTML-formatted email, and deliver it:

Screen Shot 2018-06-26 at 10.48.33 AM.png
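Under the hood, the script does roughly the following – a simplified sketch using Looker's official looker-sdk gem and the mail gem, with the Look ID, credentials, and email addresses as placeholders:

```ruby
require 'looker-sdk'
require 'mail'

# Authenticate against the Looker API.
sdk = LookerSDK::Client.new(
  client_id: ENV['LOOKER_CLIENT_ID'],
  client_secret: ENV['LOOKER_CLIENT_SECRET'],
  api_endpoint: 'https://yourcompany.looker.com:19999/api/3.0'
)

# Run the Look (123 is a placeholder for your Look's ID). With the 'json'
# format this returns an array of rows, e.g. [{ "customers.count" => 42 }].
rows = sdk.run_look(123, 'json')
row = rows.first
row = row.to_attrs if row.respond_to?(:to_attrs)
new_customers = row.values.first

# Build and deliver a basic HTML email with the result.
Mail.deliver do
  from    'metrics@yourcompany.com'
  to      'team@yourcompany.com'
  subject 'Daily Metrics Report'

  html_part do
    content_type 'text/html; charset=UTF-8'
    body "New customers yesterday: <b>#{new_customers}</b>"
  end
end
```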

You will of course want to customize this for your business, make the code more robust, style the email, etc – but this should save you some time getting it off the ground.

If you run into any issues feel free to reach out. Good luck!

A Frequent Communication Mistake I’ve Made as a Data Analyst

Looking back at my time so far as a data analyst, some of the biggest mistakes I’ve made were not technical in nature but around how I communicated within the organization.

Two real-world examples to illustrate:

A few years ago at Automattic our ad revenue was way down from what our marketing team expected it to be. For example (and I’m making up numbers here), for every $100 we were spending on ads, we had been making $150 historically, but in recent months we were making $25. Either the performance had gone way down or there was some issue with the tracking and reporting.

I was on the data team at the time and volunteered to work with the marketing team to investigate. As it turned out, there was indeed an issue: AdWords was appending UTM parameters to our URLs in a way that broke our tracking. For example, a visitor would click an ad and land on wordpress.com/business&utm_source=adwords – note that there's an ampersand after the URL path instead of a question mark, so the correct UTM source wouldn't get tracked and the customer wouldn't get attributed to AdWords.

Fortunately, we had some event tracking set up on these pages (Tracks for the win) that recorded the full URL, so I was able to go back and determine which customers came from ads and calculate what our actual return on ad spend was. After figuring out the issue and determining how much unattributed revenue we had, I wrote up a lengthy post about what happened and published it on our internal marketing blog without informing the marketing team about it first.

Second example: a few months ago at Help Scout, we had an ambitious revenue target for Q1. With a few days left in the quarter, we were still projecting to come in short of the target and no one realistically expected us to reach it. Something about the projection seemed off to me, so I dove in and realized there was a mistake in one of the calculations (it was my fault – the projection wasn't counting revenue earned that month from delinquent customers who had started paying again). As a result, our projection was too low and we were likely going to hit our target (and eventually did!). I wrote up a lengthy message about what happened and published it in one of our company Slack channels without informing any of the leadership about it first.

To understand the problem, it's important to note that as a data analyst, I typically haven't been responsible for the performance of our metrics. I help set up tracking and reporting and help ensure accuracy, but someone else in the organization is responsible for how well those metrics are doing.

In both of the cases above, I wasn’t intentionally bypassing people. At the time, it was more like “oh, hey, there’s a bug, now it’s fixed, better let everyone know about it” – and probably an element of wanting credit for figuring out the issue too.

However, not consulting with those responsible for the metrics before reporting the issues was a mistake for several reasons:

  • They didn’t have an opportunity to help me improve how the issue and its impact were communicated to the rest of the company and its leadership.
  • I missed an opportunity to have them double-check the revised calculations, which could have been wrong.
  • Even though in both cases we were doing better than we had been reporting, it may have indirectly made people look bad because they had been reporting performance based on inaccurate data. They shouldn’t have found out about the issue at the same time as the rest of the company.

In neither case was there any big drama about how I went about it, but it was a mistake on my part nonetheless.

Here’s what I’d recommend for anyone in a similar role: if someone else in your organization is responsible for the performance of a metric and you as a data analyst discover some issue with the accuracy of that metric, always discuss it with them first and collaborate with them on how it is communicated to the rest of the company.

It sounds obvious in retrospect, but it’s bitten me a few times so I wanted to share it with the hope that it helps other analysts out there avoid similar issues. Soft skills like this are incredibly important and worth developing in parallel with your technical skills.

If you’ve made any similar mistakes or have any related lessons learned, I’d love to hear about them in the comments or by email. Cheers!

Tracking Daily Unique Visitors to Recently Published Blog Posts with Looker, Fivetran, Mixpanel, and BigQuery

If you work at a company that publishes a lot of content, it’s important to understand how well that content is performing – not just in terms of page views and unique visitors, but also whether it converts visitors into trials and so on.

At Help Scout we have a Looker dashboard to help us track all of these things and more. In this post, I’ll walk you through how we track the daily unique visitors to our recently published blog posts. For example, on May 2nd we published April’s Release Notes – how many people viewed that post on the day it was published? How about the day after? And how does that compare to our other recently published content?

Overview

Big picture, we fire a Viewed Page Mixpanel event on every marketing page. We then use Fivetran to get that event data into BigQuery, where we analyze it in Looker. You can read more about the setup here: Tracking What Pages Your Visitors View Prior to Signing Up Using Mixpanel, Fivetran, BigQuery, and Looker.

Querying for Recently Published Posts

With this data in hand, we need a way to determine which blog posts were published recently so that we can limit our analysis to them.

A few considerations shape the query we use. For Help Scout blog URLs (i.e., URLs that begin with https://www.helpscout.net/blog/), we need to determine when each post was published, which we treat as the first day it was viewed. However, because we launched Mixpanel page view tracking on April 3rd, every pre-existing post would look like it was published on April 3rd or sometime after, so we limit the results to posts first viewed on April 4th or later. We also limit the results to posts that received at least a certain number of visitors on that first day; otherwise they would wind up including low-traffic older posts that happened to be first viewed after April 4th.

Here’s the query that gets us a list of those recently published posts:
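This is a sketch: I’m assuming Fivetran lands Mixpanel events in a mixpanel.event table with name, time, distinct_id, and current_url columns, and the threshold of 25 first-day visitors is illustrative – adjust both to your own schema and traffic.

```sql
WITH first_views AS (
  -- The first day each blog URL was ever viewed, which we
  -- treat as its publish date
  SELECT
    current_url AS url,
    MIN(DATE(time)) AS published_date
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/blog/%'
  GROUP BY 1
)

SELECT
  fv.url,
  fv.published_date,
  COUNT(DISTINCT e.distinct_id) AS first_day_visitors
FROM first_views fv
JOIN mixpanel.event e
  ON e.name = 'Viewed Page'
  AND e.current_url = fv.url
  AND DATE(e.time) = fv.published_date
-- Exclude the April 3rd backlog of older posts
WHERE fv.published_date >= '2018-04-04'
GROUP BY 1, 2
-- Filter out low-traffic older posts first viewed after April 4th
HAVING first_day_visitors >= 25
```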

Modeling the Data in Looker

Over in Looker, we’re going to create a derived table with these results so that we can determine whether a given Mixpanel event URL is a recently published blog post:
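Here’s a sketch of that view – the derived table SQL is a condensed version of the query above (I’ve dropped the first-day visitor threshold for brevity), and the names are illustrative:

```lookml
view: recent_blog_posts {
  derived_table: {
    sql:
      SELECT
        current_url AS url,
        MIN(DATE(time)) AS published_date
      FROM mixpanel.event
      WHERE name = 'Viewed Page'
        AND current_url LIKE 'https://www.helpscout.net/blog/%'
      GROUP BY 1
      HAVING published_date >= '2018-04-04' ;;
  }

  dimension: url {
    primary_key: yes
    sql: ${TABLE}.url ;;
  }

  dimension: published_date {
    type: date
    sql: ${TABLE}.published_date ;;
  }

  # Yes when a joined Mixpanel event's URL matched a recent post
  dimension: is_new_blog_content {
    type: yesno
    sql: ${url} IS NOT NULL ;;
  }
}
```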

The reason we have the is_new_blog_content dimension is that we’re going to LEFT JOIN all Mixpanel events against this derived table by URL. Not all URLs will have a match in this table, so this dimension lets us limit the analysis to just the events for recently published blog posts.

Here’s how we model the relationship between our main Mixpanel events model and this derived table:
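In the model file, the join might look something like this (a sketch; the explore and field names follow the examples above):

```lookml
explore: mixpanel_events {
  join: recent_blog_posts {
    type: left_outer
    relationship: many_to_one
    sql_on: ${mixpanel_events.current_url} = ${recent_blog_posts.url} ;;
  }
}
```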

One other key piece of this is that we model how to calculate unique visitors in the main Mixpanel events view:
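That’s just a count_distinct measure on Mixpanel’s distinct_id – something like:

```lookml
measure: unique_visitors {
  type: count_distinct
  sql: ${distinct_id} ;;
}
```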

Creating the Chart in Looker

With these foundations in place, we can create the chart we set out to build.

We want to use that Is New Blog Content dimension to limit the results to recently published posts, then pivot the daily unique visitor count on the URL:

looker-recent-posts-explore.png

Then it’s just a matter of setting up the chart in Looker and voila, there we have it:

Screen Shot 2018-05-11 at 1.51.34 PM.png

Going forward, without anyone having to log into Google Analytics, we’ll be able to track the popularity of our new blog posts and follow the trends over time.

By the way, that spike on April 19th is from our CEO’s Beacon 2.0 Preview: The User Interface post, all about Help Scout’s soon-to-be-released live chat & improved self-service tool. If you’re interested in getting notified when it launches, you can sign up here.

Happy querying!

Analyzing a Conversion Funnel in BigQuery Using Fivetran Powered Mixpanel Data

mixpanel-funnels_fqmumg.png

In a recent post I outlined how to use Fivetran to sync Mixpanel data to BigQuery for analysis in Looker. Today we’ll walk through how to write a SQL query to analyze a funnel using the Mixpanel data in BigQuery.

For this analysis, we’re going to create a three-step funnel showing how many visitors who start on Help Scout’s pricing page click through to the sign up page and then sign up.

One Step

To begin, let’s just look at visitors who viewed the pricing page:
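Here’s roughly what that looks like. It’s a sketch: I’m assuming Fivetran lands Mixpanel events in a mixpanel.event table with name, time, distinct_id, and current_url columns – your schema may differ.

```sql
SELECT
  distinct_id,
  time AS viewed_pricing_at
FROM mixpanel.event
WHERE name = 'Viewed Page'
  AND current_url LIKE 'https://www.helpscout.net/pricing/%'
```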

You might wonder why we need the % at the end of the URL; that’s simply to make sure the results include pages with URL parameters such as https://www.helpscout.net/pricing/?utm_source=adwords.

Two Steps

Next, we’ll join these results on sign up page data, making sure that the sign up page views occurred after the pricing page views:
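Something like this – the sign up page URL here is a placeholder, so swap in your own:

```sql
SELECT
  pricing.distinct_id,
  pricing.time AS viewed_pricing_at,
  signup.time AS viewed_signup_at
FROM (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/pricing/%'
) pricing
LEFT JOIN (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/signup/%'
) signup
  ON signup.distinct_id = pricing.distinct_id
  -- Only count sign up page views that happened after the pricing view
  AND signup.time > pricing.time
```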

The reason we LEFT JOIN is that not all visitors make it to the next step of the funnel, and we want the data to reflect that.

Note that we join on both the distinct_id (to ensure each result is for a single visitor) and on the time the events occurred.

Three Steps

Extending this to the third and final step of the funnel, the Signed Up event, we get:
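A sketch, building on the previous query:

```sql
SELECT
  pricing.distinct_id,
  pricing.time AS viewed_pricing_at,
  signup.time AS viewed_signup_at,
  signed_up.time AS signed_up_at
FROM (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/pricing/%'
) pricing
LEFT JOIN (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/signup/%'
) signup
  ON signup.distinct_id = pricing.distinct_id
  AND signup.time > pricing.time
LEFT JOIN (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Signed Up'
) signed_up
  ON signed_up.distinct_id = pricing.distinct_id
  -- Only count sign ups that happened after the sign up page view
  AND signed_up.time > signup.time
```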

Determining the First Event Occurrence for Each Step

The query above will return every combination of pricing page views, sign up page views, and sign up events for each visitor. For our funnel, though, we don’t care whether they loaded the pricing page or sign up page multiple times; we only care that they did it at all. So we modify the query to only return the first instance of each event for each visitor:
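Grouping by visitor and taking the earliest time for each step does the trick (same assumed schema and placeholder URLs as above):

```sql
SELECT
  pricing.distinct_id,
  MIN(pricing.time) AS first_viewed_pricing_at,
  MIN(signup.time) AS first_viewed_signup_at,
  MIN(signed_up.time) AS first_signed_up_at
FROM (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/pricing/%'
) pricing
LEFT JOIN (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Viewed Page'
    AND current_url LIKE 'https://www.helpscout.net/signup/%'
) signup
  ON signup.distinct_id = pricing.distinct_id
  AND signup.time > pricing.time
LEFT JOIN (
  SELECT distinct_id, time
  FROM mixpanel.event
  WHERE name = 'Signed Up'
) signed_up
  ON signed_up.distinct_id = pricing.distinct_id
  AND signed_up.time > signup.time
GROUP BY 1
```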

Measuring the Funnel

Finally, we count how many visitors made it to each step to get our funnel:
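Wrapping the previous query in a CTE, we can count the non-null timestamps at each step – each row in the CTE is one visitor, so COUNT(*) is everyone who hit step one:

```sql
WITH funnel AS (
  -- The first-occurrence query from the previous step
  SELECT
    pricing.distinct_id,
    MIN(pricing.time) AS first_viewed_pricing_at,
    MIN(signup.time) AS first_viewed_signup_at,
    MIN(signed_up.time) AS first_signed_up_at
  FROM (
    SELECT distinct_id, time
    FROM mixpanel.event
    WHERE name = 'Viewed Page'
      AND current_url LIKE 'https://www.helpscout.net/pricing/%'
  ) pricing
  LEFT JOIN (
    SELECT distinct_id, time
    FROM mixpanel.event
    WHERE name = 'Viewed Page'
      AND current_url LIKE 'https://www.helpscout.net/signup/%'
  ) signup
    ON signup.distinct_id = pricing.distinct_id
    AND signup.time > pricing.time
  LEFT JOIN (
    SELECT distinct_id, time
    FROM mixpanel.event
    WHERE name = 'Signed Up'
  ) signed_up
    ON signed_up.distinct_id = pricing.distinct_id
    AND signed_up.time > signup.time
  GROUP BY 1
)

SELECT
  COUNT(*) AS viewed_pricing_page,
  COUNT(first_viewed_signup_at) AS viewed_signup_page,
  COUNT(first_signed_up_at) AS signed_up
FROM funnel
```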

Voila!

From here, you can take it a step further and segment the results on a property in the first step of the funnel or add a funnel duration to limit how long visitors have to complete the funnel, etc.

If you have any questions about this, don’t hesitate to reach out.

Tracking What Pages Your Visitors View Prior to Signing Up Using Mixpanel, Fivetran, BigQuery, and Looker

setup.png

One of the things we’ve been analyzing at Help Scout recently is what paths individual companies take before signing up for a trial. For example, looking at a company that signed up last month, did they start their journey with Help Scout on our homepage? Or did they find us via our blog? Or one of our marketing landing pages? And once we know that, what other pages did they visit before signing up? Did that company wind up becoming a customer? How much did we make from them?

This post is about how we’ve wrangled the data using Mixpanel, Fivetran, BigQuery, and Looker to help us answer these questions.

Big picture, we’re tracking page view and sign up events in Mixpanel, syncing that data to BigQuery using Fivetran, then tying it to our internal company data in Looker for easy analysis.

Step 1: Tracking Page Views and Sign Ups in Mixpanel

If you’ve used an event-based analytics service, this step will be pretty straightforward. We load the Mixpanel script on every page, then fire a Viewed Page event:
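The snippet itself is essentially a one-liner (a sketch; it assumes Mixpanel’s standard JavaScript library is already loaded on the page):

```javascript
// Fired on every marketing page. Mixpanel automatically attaches
// properties like the URL, referrer, browser, OS, and country.
mixpanel.track('Viewed Page');
```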

And then if the company signs up for a trial, we fire a Signed Up event with that company’s id as a property:
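Something like the following, where companyId is a placeholder for however your app exposes the new trial’s id to the page:

```javascript
// Hypothetical: companyId is rendered into the page by the app
// once the trial is created.
mixpanel.track('Signed Up', {
  company_id: companyId
});
```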

These are the only two Mixpanel events we track. If we wanted to track actions in-app, we could also fire custom events for those, but we don’t at Help Scout because we tie this Mixpanel data together with our internal data about what companies have done in-app, eliminating the need for additional Mixpanel events.

Here’s what the Viewed Page event looks like in Mixpanel’s Live View:

Screen Shot 2018-04-06 at 1.42.07 PM.png

Also, as we’ll see later, Mixpanel automatically tracks a lot of details about the visitor: things like the browser, country, URL, referrer, OS, etc, all of which we can use in our analyses.

Step 2: Syncing Mixpanel data to BigQuery with Fivetran

Fivetran is this amazing service that specializes in helping you centralize all of your data in a data warehouse. For example, we have Fivetran connectors set up for MySQL (which we use internally at Help Scout), Salesforce, HubSpot, Google Sheets, and now Mixpanel:

Screen Shot 2018-04-06 at 1.46.42 PM.png

Screen Shot 2018-04-06 at 1.51.58 PM.png

Taking Mixpanel as an example, we provide Fivetran our Mixpanel API credentials, then Fivetran queries Mixpanel’s API periodically, cleans up the results, and throws it all into BigQuery:

Screen Shot 2018-04-06 at 1.50.21 PM.png

This lets us analyze our Mixpanel data just like we would other SQL data:
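For example, assuming Fivetran lands the events in a mixpanel.event table with name, time, distinct_id, and current_url columns (your generated schema may differ):

```sql
SELECT
  time,
  distinct_id,
  name,
  current_url
FROM mixpanel.event
WHERE name = 'Viewed Page'
ORDER BY time DESC
LIMIT 100
```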

We can also access the custom event properties that we’re tracking for the Signed Up event using Standard SQL’s json_extract_scalar function:
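If the event’s custom properties land as a JSON string column, pulling out the company id might look like this (the properties column and company_id property name are assumptions):

```sql
SELECT
  distinct_id,
  json_extract_scalar(properties, '$.company_id') AS company_id
FROM mixpanel.event
WHERE name = 'Signed Up'
```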

Once we have the company id, it’s just a matter of querying for a specific company id to view their event history:
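A sketch, reusing the previous query as a CTE (the company id value is a placeholder):

```sql
WITH signups AS (
  SELECT
    distinct_id,
    json_extract_scalar(properties, '$.company_id') AS company_id
  FROM mixpanel.event
  WHERE name = 'Signed Up'
)

SELECT
  e.time,
  e.name,
  e.current_url
FROM mixpanel.event e
JOIN signups s
  ON s.distinct_id = e.distinct_id
WHERE s.company_id = '12345'
ORDER BY e.time
```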

Step 3: Modeling the data in Looker

If you have access to a Business Intelligence tool like Looker, you can model this Mixpanel data and define how it joins with your other data.

First, create a view to model the Mixpanel event data:
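A minimal sketch of that view, using the same assumed table and column names as above:

```lookml
view: mixpanel_events {
  sql_table_name: mixpanel.event ;;

  dimension: distinct_id {
    sql: ${TABLE}.distinct_id ;;
  }

  dimension: name {
    sql: ${TABLE}.name ;;
  }

  dimension: current_url {
    sql: ${TABLE}.current_url ;;
  }

  dimension_group: event {
    type: time
    timeframes: [time, date, week, month]
    sql: ${TABLE}.time ;;
  }

  measure: count {
    type: count
  }

  measure: unique_visitors {
    type: count_distinct
    sql: ${distinct_id} ;;
  }
}
```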

Then create a view with a derived table to model the relationship between a distinct_id in Mixpanel and a company id:
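This is the Signed Up query from earlier wrapped in a derived table (again a sketch with assumed names):

```lookml
view: mixpanel_signups {
  derived_table: {
    sql:
      SELECT
        distinct_id,
        json_extract_scalar(properties, '$.company_id') AS company_id
      FROM mixpanel.event
      WHERE name = 'Signed Up' ;;
  }

  dimension: distinct_id {
    primary_key: yes
    sql: ${TABLE}.distinct_id ;;
  }

  dimension: company_id {
    sql: ${TABLE}.company_id ;;
  }
}
```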

Finally, connect those two views and any others you want to be able to analyze together:
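In the model, the explore ties it all together – here the companies view stands in for whatever internal view holds your company data:

```lookml
explore: mixpanel_events {
  join: mixpanel_signups {
    type: left_outer
    relationship: many_to_one
    sql_on: ${mixpanel_events.distinct_id} = ${mixpanel_signups.distinct_id} ;;
  }

  # Hypothetical internal view keyed by company id
  join: companies {
    type: left_outer
    relationship: many_to_one
    sql_on: ${mixpanel_signups.company_id} = ${companies.id} ;;
  }
}
```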

Step 4: Analyzing the data in Looker

Once you’ve modeled the data, you can analyze the event history for specific companies, track trends like page views and unique visitors, and more:

Screen Shot 2018-04-06 at 2.28.57 PM.png

In future posts I’ll walk through some of the other interesting analyses you can perform with this data.

If you have any questions about this setup, don’t hesitate to reach out.

Prioritization for Data Analysts


next-looker.png

For the past two years I’ve worked as a data scientist, first on the marketing team for WordPress.com and now on the growth team at Help Scout. In both of these roles I’ve been the sole data analyst on my team, so I tend to have more work to do than I have time for. As a result, I spend a lot of time thinking about how to prioritize what’s on my plate. My goal with this post is to share some of what I’ve learned to help other data folks in similar roles.

The prioritization problem

Here’s a made-up scenario:

Your manager asks you for help with a medium priority analysis that will take several days. Shortly after you begin working on it, a coworker pings you for help with an urgent request that will only take a few hours. A little while later, your boss’s boss asks you for help with a low priority one-day project.

You now have three requests from people of differing seniority, each with a different duration and urgency – which do you work on next, and why?

Ad hoc requests vs projects

At a high level, there are two types of data requests:

Ad hoc requests are questions that can be answered fairly quickly. They may take a few minutes or a few hours and the urgency can range from “drop what you’re doing” to “no rush at all”.

Projects take longer and usually have a higher impact on the business. For me, they range from a few days to (rarely) a few weeks.

While it may be tempting to only work on ad hoc requests or only on projects, the reality is that both are important, and if you’re the only analyst, you’ll need to work both into your schedule. For example, even if you’re working on a big project, you can’t simply ignore the urgent ad hoc requests that come up during the course of a week. Telling your CEO that you’ll get to his request in three weeks when you’re done with your project is not recommended.

Therefore my advice is to set aside about two hours each day for ad hoc data requests and spend the rest of your time on your highest priority project. This leaves room for those longer, high-impact initiatives while still making time for the shorter requests.

I wouldn’t loop your manager into the prioritization of ad hoc data requests; he or she probably has better things to do than help you decide to spend 30 minutes on this and an hour on that. Use your judgment: work on whatever is the most urgent or will have the highest impact on the business. When in doubt, base it on the seniority of whoever is asking.

For projects, I definitely would recommend getting your manager’s help prioritizing what to work on. He or she can help you decide where you can be the most impactful, and when questions come up about why you’re not working on some other thing, it’s not just you who made that call.

I would only work on one project at a time. I’ve tried splitting my days – three hours on one project, three hours on another – but all that does is make both take twice as long, if not longer, due to the frequent context switching.

When you meet with your manager, keep him or her up to date about where you’re at with your project. It’s often hard to predict at the start how long a project will take: something you thought might take a week will take a day; something you thought would take three days will take two weeks. That’s just the nature of the work, so keep your lead informed.

Sometimes priorities will change. Your expected three-day project that is now in its second week may get put on the back burner in order to work on something else. It’s easy to get frustrated when this happens, but remember that you’re there to have the biggest impact on the business, not to finish projects just for the sake of finishing them. What’s highest priority will sometimes change, so be flexible.

When priorities change and you have to change projects midway, try to estimate what impact that will have on the completion of the original project. If you estimated you would complete your original project by the end of the week, but by switching projects it will now take an additional week, communicate that to the people waiting on the result of the original analysis.

Get clarity before beginning your analysis

Regardless of whether a request is ad hoc or a project, I’d recommend asking the requestor several questions:

  • How important is this?
  • Is there a deadline?
  • How accurate do you need the answer to be?
  • What type of end result are you looking for?

If you don’t ask about importance, you risk spending a lot of time on an analysis that isn’t that important to the person asking about it. It’s common for someone to ask a quick hey-I-wonder-about-this question where they only care about the answer if you can find it quickly.

Similarly, ask whether it needs to be done by a specific date. If the answer is holding something up, it should get higher priority than something that doesn’t.

Asking about accuracy is something that took me a while to appreciate too. I’ve run into a lot of data requests where I can get to a 95% accurate answer fairly quickly, but where getting to 100% would take much longer. A lot of the time, people are fine with the quick 95% answer. Understanding when 95% is acceptable and when 100% is necessary can save you a lot of time.

Sometimes I also ask what type of end result the person is interested in. It often helps both of you clarify what the analysis is really about.

For longer projects, I’d also recommend periodically updating the requestor on your progress. As the work takes shape, they might want to refine the request or may be happy with the results as-is, freeing you up to work on something else.

Staying organized

I’ve seen some data teams ask people to fill out a little template (usually in a Trello card) to help them ask good data questions. I prefer to discuss the details with the person, then create a Trello card for myself with the relevant information.

I would recommend keeping a list of all potential projects and ad hoc data requests. That is, don’t just try to keep it all in your head. I do this in Trello with four lists:

  • Multi-week Projects
  • Multi-day Projects
  • Multi-hour Projects
  • Quick Requests

Whenever someone asks a question, I get the details from them, then create a new card and add it to the appropriate list.

You could also have a Completed list where you drag cards as you finish them to keep a record of your work. I’ve also seen teams keep a list for each month or quarter (January 2018, February 2018, etc) and drag completed cards to the relevant list. I did that in the past, but now I tend to just archive the cards to avoid cluttering up the Trello board.

Invest in a proper BI tool like Looker

One other big win has been switching from manual data analysis (using MySQL queries and R) to Looker, the premier Business Intelligence (BI) tool on the market today. I could go on for hours about how amazing Looker is, but at a high level it lets you:

  • Create dashboards that are automatically updated as the underlying data changes.
  • Use LookML, Looker’s modeling language, to teach Looker how our data fits together. With that in place, I rarely have to write queries by hand anymore. I can use Looker’s interface to quickly ask and answer questions.
  • Empower other people at the company who may not have querying skills to answer their own questions without asking me every time.

I’d highly recommend checking out Looker if you still find yourself wrangling data in a slow, manual way.

If this post resonates with you, I’d love to connect

If you’re a data analyst, especially if you’re the only one at your company or part of a small team, I’d love to chat to learn more about what you’re working on, how you prioritize, etc. Drop me a note: matthew.h.mazur@gmail.com. Cheers!