JOIN Talk: Fostering a Data Culture with a Daily Metrics Email Report

A few months back I wrote about how Help Scout generates a Daily Metrics Report using Looker. I wound up giving a talk about this topic at JOIN, Looker’s annual user conference, and wanted to share the recording for anyone who wants to learn more.

Here’s the description:

How do you keep everyone in your organization informed about the performance of its key metrics when not everyone uses Looker regularly? In this session, we’ll talk about how to leverage Fivetran and Looker to build a daily metrics email report that will help educate and engage your entire team.

Huge thanks to Fivetran for inviting me to speak during their sponsor session and Looker for putting on the conference.

A Few Thoughts on Image Upload Usage at Preceden

One of Preceden’s most popular feature requests over the years has been the ability to upload images to Preceden and have those images appear on timelines.

A lot of competitors offer that functionality, but I procrastinated for almost 9 years for two reasons:

  1. It’s complex to implement, both in terms of actually handling the uploads and having them appear on the timelines.
  2. Most of the people who requested it were using Preceden for school timelines, and that segment of users tends not to upgrade at a high rate. People using Preceden for work-related project planning timelines didn’t request it much. It was therefore never much of a priority because it likely wouldn’t move the needle on the business.

That said, since I’ve had more time to work on Preceden recently, I decided to finally do it. For handling uploads, I wound up using Filestack.com, which simplified the implementation a lot. Updating Preceden’s rendering logic took time too, but in the end it all worked out.

I recently checked on the usage stats and – not surprisingly – it’s used most heavily by people using Preceden for education:

[Chart: image upload usage by use case]

For users that have signed up since this launched:

  • Teaching: 29% uploaded an image
  • School: 26%
  • Personal Use: 16%
  • Work: 12%

In other words, it’s used very heavily (which is great!), but not by the segment of users with the highest propensity to pay.

This dilemma comes up fairly often: do you build Feature A that will be used heavily by mostly-free users, or Feature B that will be used heavily by mostly-paying customers?

For better or worse, I never wound up focusing on one market or use case with Preceden: it’s a general purpose timeline maker that can be used for any type of timeline. As a result, though, I often get into these situations. If I were just building Preceden for project planners, I’d never implement image uploads. If I were just building it for students creating timelines for school, I’d probably have implemented it years ago.

It also comes down to goals: if my main goal is growing revenue, I probably shouldn’t work on features like this. But if I want Preceden to be the best general purpose timeline maker, then features like this do make sense – though there’s an opportunity cost, because I’m not building features for the folks who will actually pay.

For product development, I operate in the middle: I work mostly on features that will make money, but also spend some percentage of my time on features like this that make Preceden a better general purpose tool.

If I were to start something from scratch today, I’d probably pick a narrow niche and try to nail it. No general-purpose tools. I’d recommend that to others too.

Going broad is fun in its own way too; it just has its challenges :).


Analyzing 89 Responses to a SQL Screener Question for a Senior Data Analyst Position

At Help Scout, we recently went through the process of hiring a new senior data analyst. In order to apply for the position, we asked anyone interested to answer a few short screener questions including one to help evaluate their SQL skills.

Here’s the SQL screener question we asked. If you’re an analyst, you’ll get the most value out of this post if you work through it before reading on:

This question – designed to be answered in 10-15 minutes – proved incredibly valuable because it served as an easy way to evaluate whether applicants had the minimum technical skills to be effective in this role. This position requires working a lot with SQL, so if an applicant struggled with an intermediate SQL challenge, they would likely struggle in the role as well.

What surprised me is that almost none of the answers were identical, except for a few towards the end because someone commented on the Gist with a (slightly buggy) solution :).

For reference, here’s how I would answer the question:

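In rough strokes, it looks something like the sketch below. The table and column names here are hypothetical stand-ins for the real schema, and timestamp_millis is BigQuery’s helper for converting millisecond epochs:

with hubspot_interest as (

  -- HubSpot stores the form submission timestamp in milliseconds
  select
    email,
    timestamp_millis(form_submitted_at) as interested_at
  from hubspot_form_submissions

),

support_interest as (

  select
    conversations.customer_email as email,
    conversations.created_at as interested_at
  from conversations
  inner join conversation_tags
    on conversations.id = conversation_tags.conversation_id
  where conversation_tags.tag = 'beacon-interest'

),

all_interest as (

  select * from hubspot_interest
  union all
  select * from support_interest

)

-- Group down to one row per email, since someone may have expressed interest twice
select
  email,
  min(interested_at) as first_interested_at
from all_interest
group by 1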
There are a lot of variations on this that will still produce the correct answer – and many that won’t. For example, no points were lost for using uppercase SQL instead of lowercase, but if a query didn’t union the two tables together at some point, it probably wouldn’t return the correct results.

If you’re interested in data analysis and analytics, you can subscribe to my Matt on Analytics newsletter to get notified about future posts like this one.

Analyzing the Differences

It would be impossible to list every difference – as you’ll see at the end of this post in the anonymized responses, there are endless ways to format the query.

That said, there are a lot of common differences, some substantial, some not.

SQL Casing

Does the query use uppercase or lowercase SQL or some combination of the two?

Note that in these examples and all that follow, the answers aren’t necessarily correct. They’re just chosen to highlight different ways of approaching the query.

Common Table Expressions (CTEs) vs Subqueries

Common Table Expressions go a long way towards making your query easy to read and debug. Not all SQL dialects support CTEs (MySQL didn’t until 8.0, for example), but using them in the query was almost always an indicator of an experienced analyst.
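For a contrived illustration of the difference (the payoff grows with query complexity, since CTEs keep the logic flat instead of nested):

-- Subquery:
select email
from (
  select email
  from conversations
  where created_at >= '2018-01-01'
) recent_conversations

-- CTE:
with recent_conversations as (
  select email
  from conversations
  where created_at >= '2018-01-01'
)
select email
from recent_conversations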

Meaningful CTE Names

CTEs benefit a lot from meaningful names that make it easy for you and other analysts to interpret.

INNER JOIN vs LEFT JOIN

Either works, but INNER JOIN performs better and is more intuitive here.

Implicit vs Explicit INNER

“INNER” is implied if you just write “JOIN”, so it’s not required, but can make the query easier to read. Either way is fine.

Filtering in the WHERE clause vs JOIN condition

The standard way to filter the conversations is to use a WHERE clause to filter the results to only include those that have a beacon-interest tag. However, because we’re using an INNER JOIN, it’s also possible to add it as a join condition and get the same result. In terms of performance, it doesn’t make a difference which approach you take.

I lean towards a WHERE clause because I think it’s clearer, but including it in the JOIN condition is completely viable as well.
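Side by side, using the same hypothetical tables as the sketch above:

-- Filtering in the WHERE clause:
select conversations.customer_email
from conversations
inner join conversation_tags
  on conversations.id = conversation_tags.conversation_id
where conversation_tags.tag = 'beacon-interest'

-- Filtering in the JOIN condition:
select conversations.customer_email
from conversations
inner join conversation_tags
  on conversations.id = conversation_tags.conversation_id
  and conversation_tags.tag = 'beacon-interest'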

Converting Milliseconds

HubSpot stores the form submission timestamp in milliseconds. Queries that didn’t account for that wouldn’t return the correct results.
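In BigQuery, for example, the conversion is a one-liner (column name hypothetical):

select timestamp_millis(form_submitted_at) as submitted_at
from hubspot_form_submissions

Dividing the raw value by 1,000 and casting works in dialects without a millisecond helper.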

UNION DISTINCT vs UNION ALL

There are two types of UNIONs: UNION DISTINCT and UNION ALL. The former – which is the default when you just write UNION – only returns the unique records in the combined results.

Both result in the correct answer here, but UNION ALL performs better because with UNION DISTINCT the database has to sort the results and remove the duplicate rows.
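Concretely:

-- UNION ALL keeps every row from both sides:
select email from hubspot_interest
union all
select email from support_interest

-- Plain UNION (i.e. UNION DISTINCT) dedupes the combined rows,
-- which costs an extra sort:
select email from hubspot_interest
union
select email from support_interest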

Accounting for Multiple Expressions of Interest

Many applicants didn’t take into account that people could have expressed interest both by filling out the HubSpot form and by contacting support. Neglecting to account for this could result in multiple rows for individual email addresses.

GROUP BY Column Name vs Number

I lean towards using column numbers, but either is fine, and using column names can have benefits. When there are 5+ columns (not an issue with this question), column numbers are a lot saner than typing out all the names (hat-tip to Ray Buhr for this tip).
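That is:

-- Grouping by column name:
select email, min(interested_at)
from all_interest
group by email

-- Grouping by column number:
select email, min(interested_at)
from all_interest
group by 1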

Single vs Multiple Lines When Listing Multiple Columns

Another common style difference is whether people put multiple columns on the same line or not. Either is fine, but I lean towards one column per line because I think it’s easier to read.

Comma First vs Comma Last

While not that common in the responses, it’s perfectly valid when listing columns on multiple lines to put the commas before the column names. The benefit is that you don’t have to add a comma to the previous line when adding a new column, which also simplifies query diffs in your version control tool.
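For example:

select
    email
  , interested_at
from all_interest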

Comma-first folks tended to have software development backgrounds.

PS: For anyone interested in SQL coding conventions, I highly recommend checking out dbt’s coding conventions which have influenced my preferences here.

All Responses

We were fortunate to receive over 100 applications, most of which included an answer to the SQL question. I suspect we would have had twice the number of applicants without this question, but its presence led some underqualified folks not to apply.

You can check out all 89 responses on GitHub.

If you have any suggestions on how to improve my query or feedback on any of this analysis, please drop a comment below. Happy querying!

I’m speaking at JOIN next week!

[Image: JOIN 2018 conference]

Next week I’m excited to be speaking at JOIN, Looker’s annual user conference in San Francisco.

My talk (at 11:15am on Wednesday the 10th) is about how we use Fivetran, Looker, and email to get more people at Help Scout interested in and engaged with our metrics, a topic which I’ve written about previously on this blog. Huge thanks to Fivetran for sponsoring this session.

I’ll also be at the dbt meetup on Tuesday the 9th.

If you happen to be attending either the meetup or the conference, drop me a note – I’d love to say hey 👋.

Automating Facebook Ad Insights Reporting Using Fivetran and Looker

For one of my recent consulting projects, I worked with a client to automate their Facebook Ad Insights reporting in Looker. There are some nuances with the data that made it a little tricky to figure out initially, but in the end we wound up with a pretty elegant model that is going to let them report on all of their key metrics directly from Looker. This post is about how you can do the same.

If you’d like to get notified when I release future tutorials like this, make sure to sign up for my Matt on Analytics newsletter.

The Objective

By the end of this tutorial, you’ll be able to analyze your Facebook Ad Insights data in Looker using the following 15 fields: Account ID, Account Name, Ad ID, Adset ID, Campaign ID, Campaign Name, Country, Report Date, CPM, CTR, ROAS, Total Conversion Value, Total Impressions, Total Link Clicks, and Total Spend.

[Screenshot: exploring the Facebook Ad Insights fields in Looker]

Setting up the Connection in Fivetran

Fivetran makes it incredibly easy to centralize all of your data in a data warehouse for analysis in a BI tool like Looker. This tutorial assumes you’re using Fivetran, but if you’re using another ETL tool like Stitch to grab your Ad Insights data, the modeling should be fairly similar.

There are a lot of ways to slice and dice your Facebook data, but for the key metrics mentioned above, here’s what your setup should look like:

[Screenshot: Facebook Ad Insights connector setup in Fivetran]

  • Breakdown should be country – this means all of the reporting data will be segmented by country. You could segment it in additional ways (by age, gender, etc.) depending on your needs – just make sure to adjust the model accordingly if you do.
  • Action Breakdowns should be action_type.
  • Fields should be account_name, action_values, actions, campaign_id, campaign_name, impressions, inline_link_clicks, and spend.
  • Click Attribution Window for us is 28 days and View Attribution Window is 1 day.

Once connected, Fivetran will pull all of the relevant data from Facebook using the Facebook Ad Insights API and throw it into your data warehouse:

[Screenshot: Facebook Ad Insights tables in BigQuery]

There are two key tables:

  • ad_insights – This table has data related to the spend: campaign_id, country, date, account_id, account_name, ad_id, adset_id, campaign_name, impressions, inline_link_clicks, and spend.
  • ad_insights_action_values – This table has data related to how much revenue was earned as a result of that spend: campaign_id, country, date, _1_d_view, _28_d_view, action_type, and value.

For example, to see spend metrics by campaign for a given day, we can run a query like this:

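Something along these lines works – the schema name depends on your Fivetran destination settings, and the date literal stands in for whatever day you’re interested in:

select
  campaign_name,
  country,
  sum(impressions) as impressions,
  sum(inline_link_clicks) as link_clicks,
  sum(spend) as spend
from fivetran.fb_ad_insights
where date = '2018-10-01'
group by 1, 2
order by 5 desc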
And to see conversions by campaign on a given date:

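And something like this – here the action_type values are left unfiltered so you can see everything Facebook reports:

select
  campaign_id,
  country,
  action_type,
  sum(value) as conversion_value
from fivetran.fb_ad_insights_action_values
where date = '2018-10-01'
group by 1, 2, 3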
One key note about the conversion data that will come into play later: there may be several different values for action_type, but the only one that matters for measuring total conversion value is offsite_conversion.fb_pixel_purchase; everything else can be ignored.

Another important point: conversion data is cohorted by the day of the spend, not the day the conversion happened. That matters because it means there will never be conversions on days without spend. Put another way: every row in the conversion data has a corresponding row in the spend data. As we’ll see, that means we can join the spend data to the conversion data and we’ll capture everything we need.

Modeling the Data in Looker

Identifying the primary keys

Spend data in the ad_insights table can be uniquely identified by the combination of the date, campaign id, and country. We can set up a primary key dimension like so:

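A sketch in LookML (column names per the Fivetran schema above; the concatenated string gives us a unique key):

dimension: primary_key {
  primary_key: yes
  hidden: yes
  sql: CONCAT(${TABLE}.date, '-', ${TABLE}.campaign_id, '-', ${TABLE}.country) ;;
}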
For the conversion data, this comes close, but there can also be many action_type records for each date/campaign/country combination so we can’t just use that as the primary key.

That said, because we only care about the offsite_conversion.fb_pixel_purchase action_type, it simplifies the modeling to create a derived table that consists of only actions of this type; that way we can use date/campaign/country as the primary key.

You can model this in dbt or simply create a derived table in Looker by filtering fb_ad_insights_action_values accordingly (we’ll wind up calling this fb_conversions below):

select * 
from fivetran.fb_ad_insights_action_values
where action_type = "offsite_conversion.fb_pixel_purchase"

By only working with this derived table, there will be a one-to-one relationship between the spend data and the conversion data.

Creating the Model

Here’s what the model winds up looking like:

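A sketch of the explore, with view and field names assumed to match the tables above:

explore: fb_ad_insights {
  join: fb_conversions {
    type: left_outer
    relationship: one_to_one
    sql_on:
      ${fb_ad_insights.report_date} = ${fb_conversions.report_date} AND
      ${fb_ad_insights.campaign_id} = ${fb_conversions.campaign_id} AND
      ${fb_ad_insights.country} = ${fb_conversions.country} ;;
  }
}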
We’re left joining the spend data to the derived conversion table and because the conversion data is already filtered to only include the fb_pixel_purchase action_type, there’s a one-to-one relationship.

Creating the Spend View

Here’s what it looks like:

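A sketch, trimmed to the core fields (the full view would also expose the account, ad, and adset IDs from the objective above):

view: fb_ad_insights {
  sql_table_name: fivetran.fb_ad_insights ;;

  dimension: primary_key {
    primary_key: yes
    hidden: yes
    sql: CONCAT(${TABLE}.date, '-', ${TABLE}.campaign_id, '-', ${TABLE}.country) ;;
  }

  dimension: report_date {
    type: date
    sql: ${TABLE}.date ;;
  }

  dimension: campaign_id { sql: ${TABLE}.campaign_id ;; }
  dimension: campaign_name { sql: ${TABLE}.campaign_name ;; }
  dimension: country { sql: ${TABLE}.country ;; }

  measure: total_impressions {
    type: sum
    sql: ${TABLE}.impressions ;;
  }

  measure: total_link_clicks {
    type: sum
    sql: ${TABLE}.inline_link_clicks ;;
  }

  measure: total_spend {
    type: sum
    sql: ${TABLE}.spend ;;
    value_format_name: usd
  }

  # CPM: spend per thousand impressions
  measure: cpm {
    type: number
    sql: 1000.0 * ${total_spend} / NULLIF(${total_impressions}, 0) ;;
  }

  # CTR: link clicks per impression
  measure: ctr {
    type: number
    sql: 1.0 * ${total_link_clicks} / NULLIF(${total_impressions}, 0) ;;
    value_format_name: percent_2
  }
}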
All pretty straightforward.

Creating the Conversions View

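A sketch, assuming the same naming conventions as the spend view:

view: fb_conversions {
  derived_table: {
    sql:
      select *
      from fivetran.fb_ad_insights_action_values
      where action_type = "offsite_conversion.fb_pixel_purchase" ;;
  }

  dimension: primary_key {
    primary_key: yes
    hidden: yes
    sql: CONCAT(${TABLE}.date, '-', ${TABLE}.campaign_id, '-', ${TABLE}.country) ;;
  }

  dimension: report_date {
    type: date
    sql: ${TABLE}.date ;;
  }

  dimension: campaign_id { sql: ${TABLE}.campaign_id ;; }
  dimension: country { sql: ${TABLE}.country ;; }

  measure: total_conversion_value {
    type: sum
    sql: ${TABLE}.value ;;
    value_format_name: usd
  }

  # ROAS: conversion value divided by spend from the joined spend view
  measure: roas {
    type: number
    sql: ${total_conversion_value} / NULLIF(${fb_ad_insights.total_spend}, 0) ;;
  }
}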
At the top you’ll see that this is a derived table from the original fb_ad_insights_action_values provided by Fivetran.

The only noteworthy metric here is the ROAS measure which takes the total conversion value measure and divides it by the total spend measure from the spend view.

And… drum roll… that’s it. You can now explore your Facebook Ad Insights data in Looker to your heart’s content and create dashboards that your leadership and teammates can consume without logging into Facebook or relying on its limited reporting capabilities.

Feel free to reach out if you run into any issues with any of the above – I’m happy to help.

Lastly, if you found this helpful, I encourage you to join the newsletter as there will be several more posts like this in the coming weeks including how to model AdWords and DCM data as well as how to combine all of these into a single report.

A New Adventure: I’m Taking the Leap to Focus on Preceden and Analytics Consulting

As many of you know, I’ve had a long-running side project called Preceden, a tool that helps people create professional-looking timelines:

[Screenshot: a Preceden timeline]

I launched Preceden back in early 2010 while I was still a lieutenant in the Air Force and kept it running as a nights-and-weekends side project throughout my time at Automattic and Help Scout.

Over the years its revenue has slowly grown to the point where it’s a healthy little business these days. As its revenue has grown, the amount of time I have available to put into it has dwindled to an hour or two each week. Between my work at Help Scout and my family (three kids under four now!), I’ve basically only had time to handle support, despite there being so much more I want to do.

There was never going to be a perfect time to take the leap to focus on growing Preceden, but with its revenue and growth being what it is, my wife and I have decided now’s the time to do it.

For the foreseeable future I’ll be focused on growing Preceden, but also doing some analytics consulting on the side. Going all-in on Preceden was an option, but I really enjoy analytics and business intelligence work and want to continue leveling up there. I’m thrilled to have both Help Scout and Automattic as my first consulting clients.

With my hours now reduced at Help Scout, we’re looking to hire a new analytics lead. Help Scout is an incredible company and you’ll get to work with an amazing group of people who care deeply about building a business and a product that people love. As the lead analyst, your work will have a huge impact on the direction of the business. If you’re interested in this role, check out the job description here: Data Analyst at Help Scout and feel free to shoot me an email with any questions.

I have no idea how this will all play out long term, but I’m really excited to see how it goes.

Building a Looker-Powered Daily Metrics Email Report

One of the main ways we evangelize metrics at Help Scout is with a daily metrics report that is automatically emailed to the entire company every morning.

In the email we highlight the performance of our key business metrics (New Trials, New Customers, etc) for the day prior and for the month to date. We also include our projection and target for the month to help us understand how we’re doing for the month.

Here’s what it looks like (with a shortened list of metrics and no actual numbers):

[Screenshot: the daily metrics email]

Because the report is delivered over email and doesn’t require logging into a separate tool, it’s easy for everyone to keep up to date on how the business is doing. It’s also a frequent cause for celebration in Slack:

[Screenshot: a Slack celebration of the daily metrics]

In years past, a prior version of this report was generated by a lot of PHP code that calculated all of the metrics and delivered the email.

When we adopted Looker as our Business Intelligence tool last year, we ran into a problem: we had refined how a lot of our metrics were calculated as we implemented them in Looker, so Looker would sometimes report different values than the daily metrics report. This obviously wasn’t ideal because it caused people to mistrust the numbers: if Looker said we made $1,234 yesterday but the metrics email said we made $1,185, which was correct?

Our solution was to rebuild the daily metrics email to use Looker as the single source of truth for our metrics. Rather than calculate the metrics one way in Looker and painstakingly try to keep the PHP logic in sync, we rebuilt the metrics email from scratch in Ruby and used Looker’s API to pull in the values for each of the metrics. This ensured that the numbers in Looker and the daily metrics email always matched since the daily metrics email was actually using the metrics calculated by Looker.

Building a Daily Metrics Report for your business

If you use Looker and want to build something similar for your organization (which I highly recommend!), I open sourced a super-simple version of ours to help you get started:

https://github.com/mattm/looker-daily-metrics-email

For this demo, it assumes you have a Look with a single value representing the number of new customers your business had yesterday:

[Screenshot: a Look with a single value showing yesterday’s new customers]

When you run the script, it will query Looker for this value, throw it into a basic HTML-formatted email, and deliver it:

[Screenshot: the resulting metrics email]
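Under the hood, the core of the script looks roughly like this sketch using the looker-sdk and mail gems – the Look ID (42), field name, addresses, and API endpoint are all placeholders you’d swap for your own:

require 'looker-sdk'
require 'mail'

# Authenticate against your Looker instance's API
client = LookerSDK::Client.new(
  client_id: ENV['LOOKER_CLIENT_ID'],
  client_secret: ENV['LOOKER_CLIENT_SECRET'],
  api_endpoint: 'https://yourcompany.looker.com:19999/api/3.0'
)

# Run the Look and pull out its single value;
# rows come back keyed by "view.field"
rows = client.run_look(42, 'json')
new_customers = rows.first['customers.new_customer_count']

# Deliver a basic HTML-formatted email (the mail gem defaults to
# SMTP on localhost; call Mail.defaults to point it at your server)
Mail.deliver do
  from    'metrics@yourcompany.com'
  to      'everyone@yourcompany.com'
  subject 'Daily Metrics Report'
  content_type 'text/html; charset=UTF-8'
  body    "<h2>Yesterday</h2><p>New Customers: #{new_customers}</p>"
end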

You will of course want to customize this for your business, make the code more robust, style the email, etc – but this should save you some time getting it off the ground.

If you run into any issues feel free to reach out. Good luck!

Edit: Here’s a recording of a talk from JOIN 2018 I gave on this topic.