Luca on Data and Marketing

My friend and coworker Luca Sartoni participated in an AMA on ManageWP where he was asked about using data to guide marketing efforts.

His response is a good explanation of why making data-informed decisions leads to better results than making data-driven decisions:

Interconnecting data science with marketing is one of the keystones of what is commonly called growth hacking. I’m not a fan of growth hacking by itself though, because business is more complex than just hacking things.

When your decisions are based solely on data science you can call yourself data-driven, but there are tremendous blind spots. Data looks neutral, but in reality it’s not. The way you plan your data sources, collect the data, and process that data to get answers is not neutral and affects the final outcome, so the information you extract cannot be considered neutral either. If you drive your decision-making process solely with that information, you think you are considering data, but in reality you are considering the whole non-neutral process.

For this reason you need to keep more elements in mind when it’s time to make a decision, of which data is only one. There is vision, mission, experience, data, and other elements. When you mix them, giving data the right amount of weight, you make a data-informed decision, which is usually better than a data-driven one, simply because it’s informed by those other elements.

The whole AMA is worth checking out.

A 70/20/10 approach to time management

If you have a lot of autonomy in your job to choose what to work on, it’s worth spending some time thinking about how to choose tasks effectively and distribute your time among everything you have to do.

Let’s say you have three tasks:

  1. Task A (High Priority)
  2. Task B (Medium Priority)
  3. Task C (Low Priority)

In this example, it’s clear that you should work on A, then B, then C.

But in the real world, each task takes a certain amount of time and that can complicate things. If you’re lucky, it looks like this:

  1. Task A (High Priority, 1 Day)
  2. Task B (Medium Priority, 1 Day)
  3. Task C (Low Priority, 1 Day)

In which case you should still do A, then B, then C.

This also works when the shortest tasks are the highest priority ones:

  1. Task A (High Priority, 1 Day)
  2. Task B (Medium Priority, 3 Days)
  3. Task C (Low Priority, 1 Week)

But what happens when the highest priority tasks also take the most time?

  1. Task A (High Priority, 1 Week)
  2. Task B (Medium Priority, 3 Days)
  3. Task C (Low Priority, 1 Day)

Is it still true that you should work on A, then B, then C?

We can test our principles by looking at how well they hold up under extreme circumstances:

  1. Task A (High Priority, 1 Month)
  2. Task B (Medium Priority, 1 Week)
  3. Task C (Low Priority, 1 Hour)

Should you still work on A, then B, then C when A will take a month and C will take an hour?

Probably not… but you shouldn’t just work on your shortest tasks first either. You could wind up spending a lot of time on low priority tasks without ever getting to the things that matter.

I’ve found that a 70/20/10 approach works pretty well:

Spend 70% of your time on your high priority tasks, 20% of your time on medium priority tasks, and 10% of your time on low priority tasks. In a standard 5-day work week, that works out to be 3½ days on high priority tasks, 1 day on medium priority tasks, and ½ a day on low priority tasks. And if you have multiple tasks with the same priority, work on the shortest ones first.

That will ensure that you’re spending most of your time on the things that matter, while still making progress on the medium and low priority tasks that need to get done as well.
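To make the arithmetic concrete, here’s a minimal Python sketch of the split – the task list and time estimates are just illustrations, not from any real tracker:

    # A rough sketch of the 70/20/10 split over a 5-day work week.
    # Same-priority tasks are ordered shortest-first, per the rule above.
    WEEK_DAYS = 5
    SPLIT = {"high": 0.70, "medium": 0.20, "low": 0.10}

    # (name, priority, estimated days) -- illustrative numbers
    tasks = [
        ("Task A", "high", 5),      # 1 week
        ("Task B", "medium", 3),    # 3 days
        ("Task C", "low", 0.125),   # ~1 hour
    ]

    for priority, share in SPLIT.items():
        budget = WEEK_DAYS * share  # high -> 3.5, medium -> 1, low -> 0.5 days
        queue = sorted((t for t in tasks if t[1] == priority), key=lambda t: t[2])
        names = ", ".join(t[0] for t in queue) or "(none)"
        print(f"{priority}: {budget:g} day(s) this week -> {names}")

Sorting within each priority bucket is what keeps a one-hour task from sitting behind a month-long one.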


Understand your metrics better by trying to predict how changes will impact them

A few weeks ago, right before launching a new email marketing campaign at work, I asked folks on our team for any predictions about what the email’s conversion rate would be. It wasn’t intended as a serious exercise – more in line with the game where you guess the date a baby will be born or how many marbles are in a jar.

Someone threw out a guess and then a few more people followed with their own. We launched the email campaign and waited. When the results came in, the actual conversion rate was much lower than any of us had expected. We were all surprised and I think a big part of that was due to the guesses we had made beforehand.

I’ve been thinking a lot about that experience and what we can learn from it.

Trying to predict how your metrics will change is a great way to force yourself to think deeply about those metrics. After you make a change to your product, it’s easy to look at a metric and conclude that its movement – or lack thereof – makes sense. But if you have to make a prediction beforehand, you’re forced to consider all of the factors. In this example: what do our email conversion rates typically look like? Is it 0.5%, 2%, or 5%? Have we tried something similar in the past? Are there other channels that we can learn from? Do we expect this new campaign to have higher or lower conversion rates?

The more you try to predict how your metrics will change, the better you’ll get at predicting them, and the better you get, the more likely it is you’ll be able to move those metrics. Imagine two teams: one that always tries to predict what the metrics will look like and one that doesn’t. After doing this several times, which team do you think will be better at understanding how to influence those metrics?

Don’t ask people to guess in public, because subsequent guesses will tend to be anchored to the first one. If no one has an idea initially and someone guesses that a metric will be 5%, then people will tend to anchor their guesses around 5%: 4%, 6%, etc. It’s much less likely someone will say 0.4%, even if they would have guessed that without seeing any other responses. The easiest way to get around this is probably to send out a survey and have people guess there. Afterwards, you can reveal who was closest.

Put your guesses in writing. It may take some time for the results to come in and it’s easy to forget what your original prediction was. Also, it will make you take the exercise more seriously because of the extra accountability that putting it in writing adds.
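As a minimal sketch of what that written record could look like, here’s a hypothetical Python helper – the filename, column layout, and example numbers are all assumptions for illustration, not a real tool:

    # Hypothetical prediction log: record guesses before launch,
    # then compare them to the actual result once it's in.
    import csv
    from datetime import date

    LOG = "predictions.csv"  # assumed filename

    def log_prediction(metric, guess, who):
        """Append one guess (e.g. a conversion rate in %) to the log."""
        with open(LOG, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), metric, who, guess])

    def score_predictions(metric, actual):
        """Print every logged guess for a metric alongside its error."""
        with open(LOG, newline="") as f:
            for when, name, who, guess in csv.reader(f):
                if name == metric:
                    print(f"{who} guessed {guess}% on {when} "
                          f"(off by {abs(float(guess) - actual):.2f} points)")

    # Before launch:  log_prediction("email_conversion", 2.0, "alice")
    # After results:  score_predictions("email_conversion", 0.4)

Even something this simple preserves the original guesses, so no one can misremember their prediction after the results arrive.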

Try to predict how your metrics will change before embarking on a project. For example, before we set up this email campaign, should we have tried to predict how it would perform? I think so. Not only would it have made us think more carefully about the objectives and how to achieve them, it might have led us not to pursue the campaign in the first place.

Look for other opportunities to make predictions. This doesn’t have to be limited to product changes. For example, let’s say you’re about to publish a big announcement on your blog:

  • How many visitors will that post bring over the first 24 hours? How about a week later?
  • How many people will share it on social media?
  • How many of those people will convert into free/paying customers?

A lot of what we do online can be quantified – keep an eye out for ways to hone your predictive skills.

Do any of you all do this type of thing on a regular basis? I’d love to hear about your experience and what you’ve learned from it.