The Meta Funnel: From User Activity to Product Changes

When we think about funnels, we tend to think about how users move through our product: what percentage of people who visit our homepage sign up, what percentage of those users pay, etc.

If we zoom out, there’s another even more important funnel that we can use to measure how sophisticated an organization is with its data:

User Activity → Data → Analysis → Insights → Product Changes

User Activity → Data: When users interact with your product, are you capturing the relevant data about their activity? For example, you might track how many people visit your homepage, but how about what percentage scroll below the fold, where on the page they click (including places they shouldn’t), the bounce rate, or how they’re getting to your site and how that’s changing over time? What’s important to track will vary by product and not everything you track will be important, but if you’re not recording the data, it will be impossible to analyze it.
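
To make this concrete, event capture doesn’t have to be fancy. Here’s a minimal sketch in Python of what recording structured events might look like; the track() helper, file name, and event names are all made up, and a real analytics tool would provide its own API for this:

    import json
    import time

    def track(event_name, properties=None):
        """Append one structured event to a local log.

        A stand-in for whatever your real pipeline is (an analytics
        tool, a data warehouse, etc.)."""
        record = {
            "event": event_name,
            "properties": properties or {},
            "timestamp": time.time(),
        }
        with open("events.log", "a") as f:
            f.write(json.dumps(record) + "\n")

    # Capture more than just the pageview:
    track("homepage_view", {"referrer": "google"})
    track("homepage_scroll", {"depth_percent": 75})
    track("homepage_click", {"element": "pricing_link"})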

Data → Analysis: You have data, but is anyone looking at that data regularly? All of the data in the world doesn’t matter if no one ever analyzes it. For some types of analysis your analytics tools will make this step easy, but it also might require complex queries or scripts depending on what questions you’re trying to answer.
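
Continuing the hypothetical event log from the sketch above, the analysis step might start out as simply as counting events and computing a rate:

    import json
    from collections import Counter

    # Tally events from the log written by the track() sketch above.
    counts = Counter()
    with open("events.log") as f:
        for line in f:
            counts[json.loads(line)["event"]] += 1

    views = counts["homepage_view"]
    signups = counts["signup"]  # assumes a signup event is also tracked
    if views:
        print(f"Homepage -> signup conversion: {signups / views:.1%}")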

Analysis → Insights: One of the hardest things about analytics is that it’s often difficult to look at all of the numbers and draw actionable insights from them. You may discover that your conversion rate is 15%, but is that good or bad? If it goes up to 20%, is that because your product has improved or just because the quality of your traffic has changed? If the top term your users are searching for on your support page is “domains”, is that an indication that you need to improve the instructions you provide users in your product, or just an inevitable result of domains being very complex?
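
To see how traffic quality alone can move a number, here’s a toy calculation (all figures invented). Neither channel converts any better in the “after” scenario, yet the blended rate climbs from 15% to 20% purely because the traffic mix shifted:

    def blended(channels):
        """Overall conversion rate across channels of (visitors, rate)."""
        visitors = sum(n for n, _ in channels.values())
        conversions = sum(n * rate for n, rate in channels.values())
        return conversions / visitors

    before = {"ads": (8000, 0.10), "organic": (2000, 0.35)}
    after = {"ads": (6000, 0.10), "organic": (4000, 0.35)}

    print(f"before: {blended(before):.0%}")  # 15%
    print(f"after:  {blended(after):.0%}")   # 20%, same per-channel rates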

Insights → Product Changes: And finally, once you have insights from the analysis you’ve done, are you making any changes to your product as a result? Maybe insights into your product’s support search terms indicate that you do need to improve the guidance you provide to users within your product. Does your team then execute by actually improving that guidance?

In my experience, both from many years of side projects and from work, the conversion rate across this entire funnel is typically very low. Part of it is just the nature of the beast: it’s hard to set up tracking to collect everything that’s important, it’s hard to analyze the data you do collect, it’s hard to come up with insights from that analysis, and it’s hard to make changes to your product when you do have those insights.

But just because it’s hard doesn’t mean it’s not worth optimizing. If you can double your organization’s conversion rate between any of these steps, it should double the number of improvements you wind up making to your product as a result.

One thing that can help is to discuss with your team and document your organization’s processes for each of these steps. Things like:

  • Who is responsible for implementing analytics on your team?
  • If they don’t have experience setting it up, where can they go to learn?
  • Where can they go to learn what data is important to collect?
  • How do they analyze the data?
  • How do you ensure people are looking at the data often enough?
  • Can you automate the reporting? Should you?
  • Who on the team needs to be involved to maximize the number of insights you’re discovering from your analysis?
  • What does your process look like for turning those insights into actual product changes?

There’s probably a lot of low-hanging fruit here for your team to work on. The better your team gets at moving down this funnel, the more improvements you’ll make to your product, leading to happier users and more impact on your company’s bottom line.

Teach everything you know

When I was a lieutenant in the Air Force I had the privilege of serving as an Executive Officer to Major (now Colonel) Heather Blackwell while she commanded the 87th Communications Squadron at McGuire Air Force Base, New Jersey.

One of my many takeaways from the experience stems from a conversation we had about her taking leave and who would take over her various responsibilities while she was away. She said, “One of the measures of how effective I am as a leader is not how poorly the unit performs while I am away, but how well it performs.”

It’s counterintuitive at first because you might think that if someone who plays an important role within an organization suddenly leaves, the organization would suffer as a result. But her point was that if things fall apart, it means she hasn’t done an effective job teaching us what she does and how to do it. That’s not only important if she goes on vacation, but also because by teaching us she’s helping us become more effective leaders and preparing us for commands of our own one day.

I’ve been thinking about this a lot lately because I realized that in my role at Automattic, there are several things I work on where I’m basically the only person who knows how they work. For example, I built a system for tracking visitors who click on our ads so that we can measure our return on ad spend, but I haven’t done a good job making sure other developers on the team understand how it works. Similarly, I work a lot on building email marketing lists for users who meet certain criteria, but never took the time to document how to do it until recently.

Teaching others has so many advantages:

  • It ensures you’re not a bottleneck for the work that you do
  • If you go on vacation, change roles, or leave the organization, it ensures your team will continue operating smoothly because someone else will be able to carry out the tasks that you previously performed
  • It helps you learn from others because they’ll likely have feedback that will help you improve the way you do things
  • It helps you clarify your own thinking and processes
  • It will help others develop their skills and grow professionally

Also, if you’re an entrepreneur, teaching everything you know has huge advantages, which Nathan Barry has written about at length (there’s even a t-shirt!).

If you find yourself in a position where you’re the only one who knows how certain things work, find ways to involve your coworkers, hold a learn-up, or just write documentation – whatever you do, don’t let yourself continue being the only one who knows how to do those things. Good things will follow.

More hours != more results

I used to play a lot of online poker and, thanks to a combination of third-party analytics tools like PokerTracker and ones I built myself, I was able to gain a lot of insight into my own play.

For example, the generally accepted best time of day to play online is late at night because that’s when people are the most tired and therefore make poor decisions. I once looked at how much money I made per hour broken down by hour of the day. Because I played a lot at night to take advantage of the poor play by others, I expected my highest hourly rate to be in the evenings and early morning. But the numbers told a different story: my hourly rate after 11pm was about break-even. After countless hours of late night play, my overall profit would have been the same had I not played at all after 11pm.
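
That kind of breakdown is straightforward to compute once you have session records. Here’s a minimal sketch with invented data standing in for the PokerTracker exports (a real analysis would also divide by hours played in each slot to get a true hourly rate):

    from collections import defaultdict
    from datetime import datetime

    # (session start, profit in dollars) -- invented records
    sessions = [
        (datetime(2006, 3, 1, 20, 15), 42.50),
        (datetime(2006, 3, 1, 23, 40), -18.25),
        (datetime(2006, 3, 2, 0, 5), -7.00),
    ]

    profit_by_hour = defaultdict(float)
    for start, profit in sessions:
        profit_by_hour[start.hour] += profit

    for hour in sorted(profit_by_hour):
        print(f"{hour:02d}:00  {profit_by_hour[hour]:+8.2f}")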

Turns out that even though others were playing poorly during those hours, I was too, which negated any advantage that I had.

The late night play also took its toll on my wellbeing during the day because I’d inevitably be tired from staying up late to play.

I bring this up because the experience taught me that working harder often isn’t the solution to a problem. It’s tempting to think that you can just put in more hours and achieve more results, but if the quality of your work suffers, you might not make any meaningful progress, or worse, you could undo the progress you’ve already made.

Next time you catch yourself trying to push through on a task even though you’re not feeling 100%, consider taking a break until you are.

Luca on Data and Marketing

My friend and coworker Luca Sartoni participated in an AMA on ManageWP where he was asked about using data to guide marketing efforts.

His response explains well why making data-informed decisions will lead to better results than making data-driven decisions:

Interconnecting data science with marketing is one of the keystones of what is commonly called growth hacking. I’m not a fan of growth hacking by itself though, because business is more complex than just hacking things.

When your decisions are solely based on data science you can call yourself data-driven, but there are tremendous blind spots. Data looks neutral, but in reality it’s not. The way you plan your data sources, collect the data, and process that data to get answers is not neutral and impacts the final outcome, therefore the information you extract cannot be considered neutral. If you drive your decision-making process solely with that information, you think you are considering data, but in reality you are considering the whole not-neutral process.

For this reason you need to keep in mind more elements when it’s time to make a decision, of which data is only one. There is vision, mission, experience, data, and other elements. When you mix them, giving data the right amount of value, you make a data-informed decision, which is usually better than a data-driven one, simply because it’s more informed by the other elements.

The whole AMA is worth checking out.

A 70/20/10 approach to time management

If you have a lot of autonomy in your job to choose what to work on, it’s worth spending some time thinking about how to effectively choose tasks and distribute your time among everything you have to do.

Let’s say you have three tasks:

  1. Task A (High Priority)
  2. Task B (Medium Priority)
  3. Task C (Low Priority)

In this example, it’s clear that you should work on A, then B, then C.

But in the real world, each task takes a certain amount of time and that can complicate things. If you’re lucky, it looks like this:

  1. Task A (High Priority, 1 Day)
  2. Task B (Medium Priority, 1 Day)
  3. Task C (Low Priority, 1 Day)

In which case you should still do A, then B, then C.

This also works when the shortest tasks are also the highest priority tasks:

  1. Task A (High Priority, 1 Day)
  2. Task B (Medium Priority, 3 Days)
  3. Task C (Low Priority, 1 Week)

But what happens when the highest priority tasks also take the most time?

  1. Task A (High Priority, 1 Week)
  2. Task B (Medium Priority, 3 Days)
  3. Task C (Low Priority, 1 Day)

Is it still true that you should work on A, then B, then C?

We can test our principles by looking at how well they hold up under extreme circumstances:

  1. Task A (High Priority, 1 Month)
  2. Task B (Medium Priority, 1 Week)
  3. Task C (Low Priority, 1 Hour)

Should you still work on A then B then C when A will take a month and C will take an hour?

Probably not… but you shouldn’t just work on your shortest tasks first either. You could wind up spending a lot of time on low priority tasks without ever getting to the things that matter.

I’ve found that a 70/20/10 approach works pretty well:

Spend 70% of your time on your high priority tasks, 20% of your time on medium priority tasks, and 10% of your time on low priority tasks. In a standard 5-day work week, that works out to be 3½ days on high priority tasks, 1 day on medium priority tasks, and ½ a day on low priority tasks. And if you have multiple tasks with the same priority, work on the shortest ones first.

That will ensure that you’re spending most of your time on the things that matter while still making progress on the medium and low priority tasks that need to get done as well.
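
If it helps to see the mechanics spelled out, here’s a minimal sketch of the split; the task names, priorities, and estimates are all made up:

    # 70/20/10 split over a 40-hour week, shortest-first within each bucket.
    tasks = [
        ("Task A", "high", 7),    # (name, priority, estimated days)
        ("Task B", "medium", 3),
        ("Task C", "low", 1),
        ("Task D", "high", 2),
    ]
    weights = {"high": 0.70, "medium": 0.20, "low": 0.10}
    week_hours = 40

    for priority, weight in weights.items():
        budget = week_hours * weight
        queue = sorted((t for t in tasks if t[1] == priority),
                       key=lambda t: t[2])
        names = ", ".join(name for name, _, _ in queue) or "(none)"
        print(f"{priority:>6}: {budget:4.1f} hours -> {names}")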

Understand your metrics better by trying to predict how changes will impact them

A few weeks ago, right before launching a new email marketing campaign at work, I asked folks on our team for any predictions about what the email’s conversion rate would be. It wasn’t intended as a serious exercise – more in line with the game where you guess the date a baby will be born or how many marbles are in a jar.

Someone threw out a guess and then a few more people followed with their own. We launched the email campaign and waited. When the results came in, the actual conversion rate was much lower than any of us had expected. We were all surprised and I think a big part of that was due to the guesses we had made beforehand.

I’ve been thinking a lot about that experience and what we can learn from it.

Trying to predict how your metrics will change is a great way to force yourself to think deeply about those metrics. It’s easy, after you make a change to your product, to look at a metric and conclude that the movement – or lack thereof – makes sense. But if you have to make a prediction beforehand, you’re forced to consider all of the factors. In this example: what do our email conversion rates typically look like? Is it 0.5%, 2%, or 5%? Have we tried something similar in the past? Are there other channels that we can learn from? Do we expect this new campaign to have higher or lower conversion rates?

The more you try to predict how your metrics will change, the better you’ll get at predicting them, and the better you get, the more likely it is that you’ll be able to move those metrics. Imagine two teams: one that always tries to predict what the metrics will look like and one that doesn’t. After doing this several times, which team do you think will be better at understanding how to influence those metrics?

Don’t ask people to guess in public, because subsequent guesses will tend to be anchored to the first one. If no one has an idea initially and someone guesses that a metric will be 5%, then people will tend to anchor their guesses around 5%: 4%, 6%, etc. It’s much less likely someone will say 0.4%, even if that’s what they would have guessed without seeing any other responses. The easiest way around this is probably to send out a survey and have people guess there. Afterwards, you can reveal who was closest.

Put your guesses in writing. It may take some time for the results to come in and it’s easy to forget what your original prediction was. Also, it will make you take the exercise more seriously because of the extra accountability that putting it in writing adds.
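
A prediction log can be as simple as a spreadsheet or a CSV file. Here’s one possible sketch; the file name and columns are just one way to do it:

    import csv
    from datetime import date

    def record(metric, who, value, kind="guess"):
        """Append one row: when, which metric, whose number, and whether
        it's a pre-launch guess or the actual measured result."""
        with open("predictions.csv", "a", newline="") as f:
            csv.writer(f).writerow([date.today(), metric, who, value, kind])

    # Before launch, collected privately (e.g. via a survey) to avoid anchoring:
    record("email_conversion_rate", "alice", 0.050)
    record("email_conversion_rate", "bob", 0.020)

    # Once the results are in, log the actual next to the guesses:
    record("email_conversion_rate", "campaign", 0.004, kind="actual")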

Try to predict how your metrics will change before embarking on a project. For example, before we set up this email campaign, should we have tried to predict how it would perform? I think so. Not only would it have forced us to think more carefully about the objectives and how to achieve them, it might have led us not to pursue the campaign in the first place.

Look for other opportunities to make predictions. This doesn’t have to be limited to product changes. For example, let’s say you’re about to publish a big announcement on your blog:

  • How many visitors will that post bring over the first 24 hours? How about a week later?
  • How many people will share it on social media?
  • How many of those people will convert into free/paying customers?

A lot of what we do online can be quantified – keep an eye out for ways to hone your predictive skills.

Do any of you all do this type of thing on a regular basis? I’d love to hear about your experience and what you’ve learned from it.

Precision and metrics

I spend more time than I’d like to admit investigating why metrics that should be identical vary from one tracking source to another. Sometimes the difference is due to bugs, but often it’s just due to the nature of how the tracking systems work. The experience has given me an appreciation for just how difficult it is to get truly precise metrics.

Consider this simple example:

You run a site where people sign up for an account and then create a post. What percentage of accounts create a post?

Sounds pretty simple, right? Here are two possible approaches and where they can lead your metrics astray:

Using your database to figure out the answer

With this approach, we look at how many accounts there are, figure out how many of them have a post, and calculate the percentage that way. If there were 1,000 accounts created in August and 200 of them have a post, 20% have created a post (yay math).
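
As a toy version of this approach (the schema and numbers are invented, and SQLite stands in for your real database):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, created_at TEXT);
        CREATE TABLE posts (id INTEGER PRIMARY KEY, account_id INTEGER);
        INSERT INTO accounts (created_at) VALUES
            ('2015-08-01'), ('2015-08-02'), ('2015-08-03'),
            ('2015-08-04'), ('2015-08-05');
        INSERT INTO posts (account_id) VALUES (1);
    """)

    accounts, with_post = db.execute("""
        SELECT COUNT(DISTINCT a.id), COUNT(DISTINCT p.account_id)
        FROM accounts a
        LEFT JOIN posts p ON p.account_id = a.id
    """).fetchone()
    print(f"{with_post / accounts:.0%} of accounts have a post")  # 20%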

But… what about users who create a post and then delete it? It’s possible that 20 people wound up creating a post and then deleting it, so the “true” number of accounts that created a post is actually 220, but your database only reflects 200, so you’ll wind up reporting 20% instead of 22%.

What about users who deleted their accounts? Instead of 1,000 accounts, there were actually 1,100, but 100 of those users wound up deleting their accounts. Unless you set up some special tracking for this, you’re now in a spot where you don’t know exactly how many accounts were created or how many of them created posts.

If your account records have a unique ID that increments by 1 each time a new account is created, you might be able to estimate the number of accounts created from the ID range. Maybe.
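
Continuing the toy schema above, that estimate is a single query, with the big caveat that it only works if the first and last accounts created in the window still exist:

    # Deleted accounts in the middle of the ID range are still counted;
    # deleted accounts at the edges silently shrink the estimate
    # (hence the "maybe").
    first_id, last_id = db.execute("""
        SELECT MIN(id), MAX(id) FROM accounts
        WHERE created_at LIKE '2015-08-%'
    """).fetchone()
    estimated_created = last_id - first_id + 1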

One way around this is to not let people delete their posts or accounts. Maybe you tell them their post was deleted, but don’t actually delete it from your database. Just set a “deleted” flag on the database record and don’t show it to the user. Same thing for accounts. But what happens if they think they deleted their account then try to sign up again with the same email? Will you tell them that their account already exists even though supposedly you deleted it? There are technical solutions around this, but things are getting pretty complicated already.
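
Continuing the toy schema, a soft delete might look like this (the column name is an assumption):

    # Flag the row instead of removing it.
    db.execute("ALTER TABLE posts ADD COLUMN deleted INTEGER DEFAULT 0")
    db.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

    # The user no longer sees the post...
    visible = db.execute("SELECT * FROM posts WHERE deleted = 0").fetchall()
    # ...but analytics can still count every post that was ever created.
    ever_created = db.execute("SELECT COUNT(*) FROM posts").fetchone()[0]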

Using an analytics tool to figure out the answer

Another approach is to use a tool like Mixpanel or KISSmetrics to try to answer the question. Set up a funnel to measure the conversion rate from your Sign Up event to the Publish Post event.

The good news here is that if users delete a post, it won’t impact the conversion rate that the funnel reports.

The bad news is that if they create additional accounts, they’ll only count once in the funnel because funnels try to measure the actions of people, not accounts.
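
A tiny simulation shows why (person IDs invented; this mimics how funnel tools dedupe, not any specific tool’s implementation):

    # Funnels key events to a person, so two accounts created by the same
    # person collapse into one funnel entry.
    events = [
        ("person-1", "Sign Up"),
        ("person-1", "Publish Post"),
        ("person-2", "Sign Up"),  # same person signs up...
        ("person-2", "Sign Up"),  # ...for a second account
    ]

    signed_up = {p for p, e in events if e == "Sign Up"}
    published = {p for p, e in events if e == "Publish Post"}
    rate = len(published & signed_up) / len(signed_up)
    print(f"{rate:.0%}")  # 50% of people, even though it's 33% of accounts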

One possible solution

If I really, really wanted to answer this question precisely, I’d set up a new database table that keeps track of accounts and whether they’ve created a post. These records wouldn’t be impacted if the post or account gets deleted, so we can trust that the data is accurate.
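
Here’s a sketch of what that table might look like, continuing the SQLite example above (all names assumed). The key property is that rows are written when the events happen and never deleted, so the metric can’t drift after the fact:

    db.executescript("""
        CREATE TABLE account_tracking (
            account_id INTEGER PRIMARY KEY,
            signed_up_at TEXT NOT NULL,
            first_post_at TEXT  -- stays NULL until the account publishes
        );
    """)

    # On signup:
    db.execute("INSERT INTO account_tracking (account_id, signed_up_at) "
               "VALUES (?, ?)", (42, "2015-08-05"))

    # On publish (only the first post matters for this metric):
    db.execute("UPDATE account_tracking SET first_post_at = ? "
               "WHERE account_id = ? AND first_post_at IS NULL",
               ("2015-08-06", 42))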

Alternatively, you can redefine your metric so that it can be measured precisely: instead of “what % of accounts created a post?” you ask “what % of non-deleted accounts have a non-deleted post?” Not very elegant, but far easier to answer than the original question.
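
With the toy schema’s deleted flag, the redefined metric becomes a single query over live rows:

    pct = db.execute("""
        SELECT 100.0 * COUNT(DISTINCT p.account_id) / COUNT(DISTINCT a.id)
        FROM accounts a
        LEFT JOIN posts p ON p.account_id = a.id AND p.deleted = 0
    """).fetchone()[0]
    print(f"{pct:.0f}% of non-deleted accounts have a non-deleted post")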

Does perfect precision matter?

I’d argue that for most metrics (with the exception of revenue metrics), perfect precision is not critical. The metrics are probably fine as long as they’re close to the true value and the way you calculate them is consistent over time.

tl;dr: data analysis is fun :).