Friday Updates: Data Retention at Preceden, Closing Down Timeglider, Organizing Looker, the Psychology of Money, and the 3m Mario Speed Run

What I’m Working On

At Preceden, I’ve been thinking a lot about data retention. For example, Preceden has users who signed up a decade ago, used it briefly, and then never returned. Unless they deleted their account, their user and timeline data still exist in Preceden’s database. In the past, I never gave any thought to whether it made sense to keep this data around forever. Maybe the user eventually returns one day, so why not keep it around? Analyzing all of that data has also been tremendously valuable. And for publicly shared timelines, there are SEO and user acquisition advantages to keeping those timelines around. But imagine a person who signed up in 2012 to create a private timeline about a messy divorce and hasn’t used Preceden since. Should Preceden retain that data forever? Probably not.

Addressing this is way easier said than done though. After I tweeted about this dilemma, Emilie Schario pointed me to an insightful Basecamp podcast episode about how they built something called the Data Incinerator to deal with this at their company. It’s worth a listen for anyone thinking about data retention. I’ll probably build something similar to improve Preceden’s data retention practices in the coming months.
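
Nothing is designed yet, but conceptually I imagine it looking something like the hypothetical sketch below. Everything in it (the job name, model scopes, and retention window) is made up for illustration; it is not actual Preceden code.

```ruby
# Hypothetical sketch only -- nothing like this exists in Preceden yet.
# Model names, scopes, and the retention window are all illustrative.
class DataIncineratorJob < ApplicationJob
  RETENTION_PERIOD = 5.years

  def perform
    User.where("last_sign_in_at < ?", RETENTION_PERIOD.ago)
        .where(paying: false)
        .find_each do |user|
      # Keep publicly shared timelines around for their SEO and
      # user acquisition value.
      next if user.timelines.where(public: true).exists?

      user.timelines.destroy_all
      user.destroy
    end
  end
end
```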

Last summer I acquired Timeglider, a competing timeline maker tool. I recouped the acquisition cost in two ways: I kept Timeglider running for its existing customers (who have continued to bring in revenue) and redirected Timeglider’s homepage to Preceden (which brought in a lot of new Preceden customers).

Because Timeglider hasn’t had any new sign ups in the last 14 months, its recurring revenue has gradually dwindled as customers have churned. So the question I’ve been debating for a while now is how long to keep Timeglider running. On one hand, maintenance and costs are minimal, so it’s basically a small stream of completely passive income that would probably continue for years. On the other hand, the remaining recurring revenue isn’t significant, most of the customers are inactive, the codebase and infrastructure are very brittle, and operating it takes up headspace that I’d love to free up. After weighing it all, I decided earlier this week to announce that it will shut down at the end of November.

Executing on this plan took more work than I anticipated though.

  • Many Timeglider customers paid annually. If someone paid $50 for access to Timeglider through August 2021, but I shut down the service in December 2020, it would be unethical to keep their full payment. I wound up issuing a lot of prorated refunds for recent annual payments.
  • Stripe doesn’t provide a way in the UI to cancel multiple subscriptions at once. There were more subscriptions than I wanted to cancel by hand, so I had to write a little Ruby script (sketched below) to iterate over all of them and cancel each one.
  • I added a prominent notice at the top of Timeglider with a link to FAQs about the closure, including instructions on how to move Timeglider timelines over to Preceden.
  • I had to notify all of the customers whose subscriptions I cancelled to make sure they knew the site was shutting down. Timeglider also has a free plan with some active users, so I had to email them as well. Probably not the most efficient approach, but I BCC’d the impacted users 25 at a time in Help Scout (its BCC limit), using the Saved Reply functionality to compose each message.
  • Most of the responses from users were understanding, and people appreciated that I gave them advance notice and provided a way to move their timelines over to Preceden.
  • There were some bugs to deal with though. Cancelling everyone’s subscriptions reverted them all to Timeglider’s free plan, which has feature limits, including the inability to access more than 3 timelines. For users with more than that, this meant they couldn’t export their data. I had to hardcode some logic to give everyone access to Timeglider’s top premium plan until the site shuts down.

Running that Ruby script to cancel all of the subscriptions was a little bit painful, but in the end I think shutting it down will be a good decision.
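
For the curious, here’s roughly what that cancellation script looked like. This is a minimal sketch rather than the exact code I ran, and it assumes every remaining active subscription should be cancelled immediately:

```ruby
require "stripe"

# Minimal sketch of the cancellation script, not the exact code I ran.
Stripe.api_key = ENV["STRIPE_SECRET_KEY"]

# auto_paging_each transparently walks through every page of results
Stripe::Subscription.list(status: "active", limit: 100).auto_paging_each do |subscription|
  Stripe::Subscription.delete(subscription.id) # cancels the subscription immediately
  puts "Cancelled #{subscription.id}"
end
```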

In my ongoing machine learning journey, I finished DataCamp’s Feature Engineering for NLP in Python course and started on its Introduction to TensorFlow in Python course. Eight courses remain, including this one, before I completely finish the ML track.

At Help Scout, I started on a project to clean up our Looker instance. We’ve been using Looker for business intelligence reporting for more than 3 years and over that time we’ve accumulated hundreds of dashboards and more than a thousand individual Looks. We mostly try to organize these by function: a folder for marketing reporting, a folder for sales reporting, a folder for finance reporting, etc. But over time it’s gotten very messy. Lots of unused dashboards and Looks. Lots of duplicate reporting. Even some completely empty folders. For any employee venturing into Looker to try to understand our metrics, figuring out where to go was a challenge to say the least. I’ve been cleaning it up and with just a few hours of work so far it’s way, way better than it was. I regret not spending more time on it earlier.

What I’m Reading

The Psychology of Money by Morgan Housel. Really enjoyed this book. Lots of sage advice on how to think about personal finances, uncertainty, risk, etc. Morgan is worth following on Twitter too. And for you podcast fans, here’s an interview with him from earlier this month (thanks Jason for the recommendation).

What I’m Watching

This deep-dive explanation of a 3-minute Super Mario Bros 3 speed run is mind-blowing:

I thought it was just going to be someone playing through the game perfectly, but it’s so much more than that. The record is made possible by painstaking work reverse engineering the game to figure out and exploit a very subtle glitch. Here’s the HackerNews discussion about it.

What Else

I wrapped up the 28-Day Keto Challenge. Writeup here. After being back on my normal carb-heavy diet now for 5 days and feeling like a glutton, I’m giving serious thought to going back on keto. We’ll see.

Hope everyone is doing well. Thanks for reading.

Friday Updates / 2020-09-11

What I’m working on

At Help Scout, a customer using Beacon wrote in asking about a warning that Chrome was displaying on their site:

The root cause seemed to be some attribution tracking cookies set by a script I worked on, so the support request made its way to me. Digging into it, we can see that a lot of Help Scout cookies (including attribution tracking ones like _A_FirstTouchURL) are being set when Beacon is loaded on the customer’s site:

Despite having worked a lot with cookies and tracking scripts in the past, I was fuzzy on how these cookies were being set. Why, for example, were there Mixpanel and Google Analytics cookies when Beacon wasn’t loading them? Turns out that because the customer loads Beacon from a helpscout.net domain, all of the helpscout.net cookies get passed along with Beacon’s requests from their site unless a cookie is explicitly configured not to be shared.

This can be achieved by setting the SameSite cookie attribute to Strict. In this case, there’s no need for most of these helpscout.net cookies to be passed along to third-party sites loading Beacon, so we’re going through and marking the cookies as Strict where possible. Some cookies on helpscout.net are set by third-party scripts and can’t be changed, so some amount of cookie sharing is inevitable, but at least we can minimize it.
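
As a rough illustration (assuming a recent Rails app on the helpscout.net side, and borrowing one of the cookie names above purely as an example), marking a cookie as SameSite=Strict looks something like this:

```ruby
# Rough illustration, not Help Scout's actual code. With SameSite=Strict,
# the browser won't send this helpscout.net cookie on cross-site requests,
# such as when Beacon loads on a customer's own domain.
cookies[:_A_FirstTouchURL] = {
  value:     request.original_url,
  same_site: :strict,      # only sent on same-site requests
  secure:    true,
  expires:   1.year.from_now
}
```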

This week I knocked out two DataCamp courses in my machine learning adventures: Hyperparameter Tuning in Python and Introduction to Natural Language Processing in Python. Typically I’m only able to finish one course per week, but a lot of the material in these two was covered in earlier courses, so they were fairly quick to work through.

One new thing was the introduction of TPOT, a tool that uses genetic programming to find an optimal classifier and hyperparameters for a given data set.

Long-time readers may remember my Evolution of Color project from 2014. It uses genetic algorithms to evolve a population of colors towards a goal color. Unlike most of the other Emergent Mind projects, where I re-implemented other people’s work in JavaScript, the Evolution of Color was completely original, making it one of the projects there I’m most proud of.

I hope as I continue down this machine learning path I get more opportunities to work with genetic algorithms.

And on that note, I’ve been mulling over where to focus in the coming months to keep leveling up my machine learning skills. Currently my plan is:

  • Finish the 11 remaining machine learning DataCamp courses by end of the year
  • Spend a few months in early 2021 focusing on Kaggle competitions to gain experience applying what I’ve learned
  • Figure out how to integrate machine learning features into a web application, whether it be for Preceden, a new app, or maybe even Help Scout if there’s an opportunity. Apparently deploying ML applications is quite difficult though. We will see.

What I’m watching

My wife and I just finished watching Umbrella Academy on Netflix:

It’s about a dysfunctional family of superheroes who need to work together to save the world. I enjoyed it and would recommend it.

Product Recommendations

I’m in a mastermind group with Tom Davies and Jason Rudolph. Tom has two popular Shopify apps, Best Sellers and Flair, and Jason has BuildPulse, a tool that helps automatically identify flaky tests (automated tests that randomly pass sometimes and fail other times).

If you run a Shopify store or your team is banging their head against the wall dealing with flaky tests, their tools are definitely worth checking out.

What else

I’m on day 26 of a 28-day keto challenge. I’m participating through the Wearable Challenge initiative, which basically means I’ve been wearing a Continuous Glucose Monitor (CGM) on my arm, and for every day that I stay below a certain blood glucose limit I get $25 of my own money back. I’ve lost about 5 pounds and feel pretty healthy, but probably won’t continue keto after the challenge ends in a few days. I’ll write more about this whole thing in a separate post.

Hope everyone’s doing well 👋.

Friday Updates / 2020-09-04

I’m going to try a little experiment for a few weeks and post a short update every Friday about what I’m up to.

I enjoy writing and used to blog here a lot, but family, work, and projects tend to take up most of my time these days, so I haven’t prioritized blogging in a long time. These short updates are a way to get back into writing and to stay in closer touch with you all. I’m time-boxing myself to about an hour every Friday to write these, so we’ll see how they turn out :).

What I’m working on

At Help Scout, I’ve spent a lot of time recently helping with a big Go To Market (GTM) strategy project. A GTM strategy is a comprehensive plan for launching a new business or growing an existing one. Help Scout has been around for more than 9 years, so we have a lot of data we can use to understand which industries our product resonates in, how features are being used, who the buyers are, how our customers grow over time, etc. All of this can help inform our GTM strategy for the coming years.

At Preceden, I addressed an issue this week stemming from a bug in wkhtmltopdf, a tool that converts web pages to PDF files. It’s what Preceden uses to let users export their timelines as PDFs. The problem is that if a webpage contains an image tag pointing to an invalid (404) URL, the PDF conversion completely fails. This happens with Preceden because users can paste an image URL into an event’s notes and Preceden will try to display the image. But if the URL doesn’t point to a valid image, the export fails and I get support emails like this:

The link above refers to a World Civilization Preceden timeline that I want to download, but for some reason each time I try, an error message shows “There was an unknown error while processing the download. Please contact support.” I tried on multiple devices, but the same problem persists.

In the past I solved this by parsing the image URLs from event notes and using the FastImage gem to verify that each one is valid. Preceden then caches the result (valid or not) for each image and doesn’t attempt to render the image if it’s invalid. The problem is that I cached the validation results permanently. For a timeline that’s been around for years, previously valid image URLs sometimes become invalid. As a result, a timeline that once exported successfully might eventually start failing, leading to frustration and support tickets.

I addressed it this week by introducing some code that revalidates all of the image URLs in a timeline if an export fails.
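
The check itself is simple enough that a rough sketch fits in a few lines. This isn’t Preceden’s exact code, and the helper and cache key names are made up, but the FastImage-based validation looks roughly like this:

```ruby
require "fastimage"

# Rough sketch, not Preceden's exact code. FastImage.type returns the image
# format (:png, :jpeg, etc.) or nil if the URL doesn't point to a valid image.
def valid_image_url?(url, revalidate: false)
  # force: true overwrites any stale cached result, which is what we want
  # when re-checking a timeline's images after a failed export.
  Rails.cache.fetch("image_valid/#{url}", force: revalidate) do
    FastImage.type(url).present?
  end
end

# Normal render path:           valid_image_url?(url)
# After a failed export, force: valid_image_url?(url, revalidate: true)
```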

Sometimes building a SaaS product is sexy and exciting, but often it’s fixing random issues like this.

I’m continuing to learn machine learning, probably spending 10-15 hours/week on DataCamp, writing documentation for myself, and building small projects. This week I finished a fantastic course on Dimensionality Reduction in Python. The big idea with dimensionality reduction is that you can often reduce or simplify the inputs to a machine learning model to improve its performance and how well it generalizes to new data.

What I’m reading

Stumbled across Ethereum is a Dark Forest on HackerNews. I own a small amount of Ethereum but honestly have no idea how it all works behind the scenes. This article offers a glimpse of what’s going on under the hood. Eventually I’d love to dive into it more.

In that article the author mentions he enjoyed The Dark Forest, the sequel to Cixin Liu’s popular Three Body Problem novel. I read the latter a few months ago, and seeing the sequel praised so highly in this post made me pick it up and start reading. Been enjoying it so far.

What else

  • My son is wrapping up his third week of virtual kindergarten today. I give the school a ton of credit; they’re really doing the best they can with it. Just sad that he’s not getting to experience kindergarten like he would in a world without Covid.
  • Speaking of virtual, I attended Microconf’s one-day online conference earlier this week. I’ve attended the in-person ones maybe 5 times in the past, but haven’t been for a few years. Kudos to Rob, Mike, Xander, and the rest of the team for organizing the virtual one this year.
  • If you’re trying to come up with a bilingual baby name, check out MixedName.com, a new baby name generator from a buddy of mine, Bemmu Sepponen. The service recently got a ton of attention and praise on Reddit and HackerNews.
  • And last but not least, I want to recommend Hey, the new email service from Basecamp. I was skeptical at first, but having used it for a few weeks now I’m a huge fan. The combination of their screen out feature and the ability to categorize senders as Paper Trail or Feed has drastically reduced the number of emails I’m exposed to each day, leaving me more time to focus on important things.

I hope this email finds you all well. If anyone wants to catch up sometime, I’d love to jump on a call. Drop me a note at mazur@hey.com.

Cheers!