Friday Updates: Jest, Puppeteer, Data Lead, Tabata, Wahl Street, Vacation

Photo by Elizeu Dias

What I’m working on at Preceden

There’s a scene from Malcolm in the Middle that I think of often where Hal sets out to fix a light bulb and gets sucked into fixing a bunch of other things in the process:

That’s largely the story of what I’ve been doing with Preceden.

Preceden lets you drag an event around on your timeline to change its dates. When you move it to a new position, Preceden updates the event’s dates to reflect the new location. Figuring out the format for the new date is tricky and something Preceden doesn’t do well currently. For example, if you originally specified the event ended on “23 Apr 2021” and you move it to the end of April, ideally Preceden would give it a format of “30 Apr 2021” but currently it would set it to “April 30, 2021”.
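The gist of what I want is something like this (a minimal sketch in plain JavaScript; the function names and the set of supported formats are hypothetical, not Preceden’s actual code): detect the format of the original date string, then render the new date in that same format.

```javascript
// Hypothetical sketch: infer the display format from the original date
// string, then render the new date using that same format.
const MONTHS_SHORT = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
const MONTHS_LONG = ['January','February','March','April','May','June','July','August','September','October','November','December'];

function inferFormat(dateStr) {
  if (/^\d{1,2} [A-Z][a-z]{2} \d{4}$/.test(dateStr)) return 'D MMM YYYY';  // "23 Apr 2021"
  if (/^[A-Z][a-z]+ \d{1,2}, \d{4}$/.test(dateStr)) return 'MMMM D, YYYY'; // "April 23, 2021"
  if (/^[A-Z][a-z]{2} \d{4}$/.test(dateStr)) return 'MMM YYYY';            // "Apr 2021"
  return 'YYYY';                                                           // fall back to year only
}

function formatDate(date, format) {
  const d = date.getDate(), m = date.getMonth(), y = date.getFullYear();
  switch (format) {
    case 'D MMM YYYY':   return `${d} ${MONTHS_SHORT[m]} ${y}`;
    case 'MMMM D, YYYY': return `${MONTHS_LONG[m]} ${d}, ${y}`;
    case 'MMM YYYY':     return `${MONTHS_SHORT[m]} ${y}`;
    default:             return String(y);
  }
}

// When a drag changes an event's date, keep the user's original format:
function reformatAfterDrag(originalDateStr, newDate) {
  return formatDate(newDate, inferFormat(originalDateStr));
}
```

With logic like this, moving a "23 Apr 2021" event to the end of April would yield "30 Apr 2021" rather than "April 30, 2021".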

Preceden has a boatload of backend unit tests, but zero frontend unit tests, which would make it much easier to test things like this. I said alright, time to figure out how to do that:

My buddy Hrishi (hey friend!) recommended Jest which proved to be a great suggestion. Within a few hours I had some basic unit tests set up and things were looking promising. But then I realized that integrating Jest made Preceden’s PDF and PNG export features break, ugh.

Behind the scenes, Preceden has used wkhtmltopdf to convert HTML to PDF/PNG, but wkhtmltopdf relies on an old version of the Qt web rendering engine, and Qt wasn’t working with the modern JavaScript that Jest was introducing into the codebase. Working with wkhtmltopdf has been a major hassle over the years and I’ve spent countless hours getting it to work well for Preceden. This was the straw that broke the camel’s back, though, and I decided to research other solutions.

That eventually led me to Puppeteer, a tool that basically lets you use Chrome to generate PDFs/PNGs. There’s a Ruby gem called Grover which makes it possible to integrate Puppeteer into a Ruby on Rails app. And thankfully it wasn’t too hard to make the switch… locally.
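For context, the core of what Puppeteer does here can be sketched in a few lines of Node (the helper name is mine, not Preceden’s actual code; Grover essentially drives a call like this from Ruby):

```javascript
// Minimal sketch of HTML-to-PDF conversion with Puppeteer.
// The helper name is illustrative, not Preceden's actual code.
async function htmlToPdf(html) {
  const puppeteer = require('puppeteer'); // loaded lazily inside the helper
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    // Render the HTML in headless Chrome and wait for network activity to settle
    await page.setContent(html, { waitUntil: 'networkidle0' });
    // Print the rendered page to a PDF buffer
    return await page.pdf({ format: 'A4', printBackground: true });
  } finally {
    await browser.close();
  }
}
```

Because it’s real Chrome doing the rendering, modern JavaScript in the page just works, which is exactly what wkhtmltopdf’s old Qt engine couldn’t handle.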

But then I pushed to staging and got a bunch of cryptic errors. That led me down a 10 hour path of updating webpacker and a myriad of other Node modules that Preceden uses and trying to figure out how to make it all play nicely together on Heroku.

Introducing Puppeteer and these other updates also caused Preceden to blow past Heroku’s 500 MB slug limit (I think it came in at like 700 MB?), so I had to spend a few hours figuring out how to reduce the size.
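If you hit the same limit, one lever is a `.slugignore` file, which tells Heroku which paths to leave out of the slug. These entries are examples rather than my actual file:

```
# .slugignore: paths excluded from the Heroku slug
# (illustrative entries, not Preceden's actual file)
spec/
test/
doc/
*.psd
*.sketch
```

In practice, most of the savings usually come from pruning dev-only dependencies and assets that don’t need to ship to production.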

In the end, I got Puppeteer working on staging and then pushed it to production. All was well. But then a few days later Milan, the front-end developer I work with, reported that Preceden’s responsive navigation wasn’t working in production. Turns out that the various updates made it so Alpine.js wasn’t getting loaded properly. That led to more debugging.

Eventually we got that fixed too and for the moment everything is working AFAIK.

Next week I can finally circle back and add some unit tests and improve the original drag and drop date formatting issue. Hopefully.

What I’m working on at Help Scout

Our hiring funnel for the Data Lead position went like this:

  • 15 applications
  • 7 interviews
  • 3 trial projects

Last week we made an offer to one of the folks who completed the trial project and he accepted! He’ll take over the Data Lead role from me in mid-May and help take Help Scout’s data operations to the next level.

What I’m experimenting with

I had been doing a lot of YouTube HIIT workouts, but my knees were killing me so I stopped and just focused on rowing and yoga for a few months. But I miss the intensity of the HIIT workouts, so I started trying to figure out other ways I could get that intensity without killing my knees. I wound up finding an app called Tabata Trainer which lets you create custom HIIT-style workouts. I’ve been experimenting with different exercises to gauge how they impact my knees and seem to have found a combination that both gives me a good workout and isn’t bad on my knees:

  • Jumping Jacks
  • Push Ups
  • Russian Twists
  • Mountain Climbers
  • Plank
  • Crunches
  • Dip

Tabata involves working through these exercises 20 seconds on then 10 seconds off, then repeating the whole thing 8 times. For these 7 exercises, it works out to 28 minutes. I’ve been alternating each day between this and yoga and not surprisingly am feeling pretty healthy as a result.
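The 28-minute figure is just this arithmetic:

```javascript
// Each exercise is one interval: 20 seconds of work + 10 seconds of rest.
const exercises = 7;
const rounds = 8;                // the full circuit repeats 8 times
const secondsPerInterval = 20 + 10;

const totalMinutes = (exercises * rounds * secondsPerInterval) / 60;
console.log(totalMinutes);       // → 28
```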

What I’m watching

Juggling Preceden, Help Scout, etc has had me feeling spread pretty thin recently. It’s nothing compared to Mark Wahlberg’s life though:

I really enjoyed this series and it got me thinking a lot about what is enough, what are my priorities, etc.


I haven’t been good about taking vacation over the last few years, but I’ve been feeling pretty wiped recently so I took most of this week off. Every time I take a few days off I’m like “Man, I really need to do this more often” and this week was no exception. Now I just need to do it more often.

Hope all is well with you all – thanks for reading.

Friday Updates: Moving Fast and Breaking Preceden, Interviewing, Story of Your Life, the Last Cruise

What I’m working on at Preceden

Upgrading Postgres

I received an automated email from Heroku informing me that Preceden was running Postgres 9.6, which would be deprecated later this year. I read through the documentation detailing the upgrade steps, which mentioned there would be some downtime; I optimistically imagined it would be on the order of ~10 minutes, but it wound up being 45 minutes. I performed the upgrade on a Tuesday morning around 10am EST, which meant it impacted a lot of users.

Improving zooming

Preceden lets you zoom into sections of your timeline, but until recently you couldn’t share a link to that zoomed-in section of your timeline. Instead, viewers would always see the full zoomed-out timeline. Similarly, if you were viewing a zoomed-in portion of the timeline in your browser but then exported it to PDF or PNG, those exports were entirely zoomed out.

I worked on a project to improve this behavior, but rolled it out too quickly and wound up introducing a bunch of bugs. Fortunately Michael (a contractor who helps me with support and QA) wound up catching a lot of them.

Optimizing performance

Every few years I take a few days and try to optimize Preceden’s performance. For small timelines, Preceden is nice and speedy, but for larger ones (with like 50+ events) it can take 5-10 seconds or more to render the timeline. Every time you make a change to the timeline it has to re-render, so this can lead to a lot of wait time for users with large timelines.

I had never used Chrome’s Performance tools before, but watched some videos and tested it out on Preceden. The flamegraph showing the execution time of JavaScript functions wound up being super useful for identifying why large timelines were taking so long to render.

One of the big issues is that behind the scenes Preceden has to figure out how many pixels wide each event name is when it’s displayed on the timeline. The same text with the same font name and size can have different widths on different computers, so it’s not something I can just save with the event and re-use. It turns out though, measuring the width (by creating a hidden DOM element with the text and grabbing its width) takes some time, and when your timeline has hundreds of events it can really add up.

My solution was to save the name widths in Local Storage after they’ve been calculated; that way, Preceden can try grabbing the saved width from there before creating the hidden DOM element. The text is hashed to minimize storage and also for privacy reasons:
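A minimal sketch of the approach (the names and the hash function are illustrative, not Preceden’s actual code):

```javascript
// Tiny non-cryptographic string hash (djb2): enough for a cache key,
// and it keeps raw event names out of Local Storage.
function hashKey(str) {
  let h = 5381;
  for (let i = 0; i < str.length; i++) {
    h = ((h << 5) + h + str.charCodeAt(i)) >>> 0;
  }
  return 'w:' + h.toString(36);
}

// Fall back to an in-memory Map when localStorage isn't available.
const store = typeof localStorage !== 'undefined' ? localStorage : new Map();
const getItem = (k) => (store instanceof Map ? (store.get(k) ?? null) : store.getItem(k));
const setItem = (k, v) => (store instanceof Map ? store.set(k, v) : store.setItem(k, v));

// `measure` does the slow part (e.g. reading a hidden DOM element's
// offsetWidth); we only call it on a cache miss.
function textWidth(text, font, measure) {
  const key = hashKey(font + '|' + text);
  const cached = getItem(key);
  if (cached !== null) return Number(cached);
  const width = measure(text, font);
  setItem(key, String(width));
  return width;
}
```

The cache key includes the font as well as the text, since the same string renders at different widths under different fonts and sizes.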

I wound up optimizing a ton of things like this. During the course of fixing this I realized that some previously-built front-end caching functionality had been broken for years. A few days later I realized that Preceden’s backend caching had also been broken for years. So basically Preceden has had zero caching for a long time and I never realized it.

Needless to say, fixing the caching and optimizing the front-end performance has had a big impact on performance. Timelines that previously took 10+ seconds to load are often now taking less than a second which translates to much less wait time for users.

But once again, I didn’t test these changes thoroughly enough before pushing them to production. Not only that, I pushed them and then took a break for lunch without verifying everything went smoothly. I returned to an overflowing support inbox stemming from various issues:

In the end there wound up being 4 separate bugs. Two I would have caught with more testing locally, and two I would have found if I had tested the changes on my staging server.

Moving fast and breaking things

A recurring theme in these mishaps is an unfounded sense of urgency I have around shipping things. I lean heavily towards the move fast and break things style of entrepreneurship, but lately I think I’m trying to move too fast and it’s bordering on sloppy.

  • I should have given notice and performed the Postgres upgrade early on a Sunday morning to minimize the impact.
  • I shouldn’t have launched an incomplete iteration of the zooming features.
  • I should have tested the caching changes on the staging server before shipping to production.

Some extra testing and QA could go a long way without slowing things down much.

What I’m working on at Help Scout

Last week I led the first round of interviews to hire a new Data Lead. A coworker (KJ) and I interviewed 7 people, several of whom have moved on to additional interviews and a trial project. In the coming weeks we should wind up hiring one of them to take ownership of the data function at Help Scout.

I don’t have a ton of experience interviewing, but thankfully Help Scout provided some helpful interview training beforehand. One thing they didn’t talk about but which I learned over the course of the week is that interviewing is exhausting, at least to me. Probably part of why I lean towards individual contributor/solo entrepreneur roles.

What I’m reading

I finished Ted Chiang’s collection of stories released as Exhalation. I really enjoyed these stories and have moved on to Story of Your Life and Others, another collection of short stories by him.

What I’m watching

The Last Cruise, an HBO documentary about the Diamond Princess:

Adios for now 👋

Friday Updates: Logo Support, Pricing Updates, Complexity Book, Coming 2 America

Photo by Rohan

What I’m working on at Preceden

Upload your own logo

A frequent customer support request is adding a way for users to upload their own logo so it can be included when they share their timeline with others. With some work this week, that’s now possible.

It had been a long time since I did any feature work in Preceden so it felt really good to dive back into it this week.

Updated pricing

What’s a Friday Update without something about me updating Preceden’s pricing?

In February I switched the pricing from $29/$69/$129 per year plans to $29/$99/$199 per year. It’s hard to draw conclusive results from the outcome (I didn’t A/B test it, and even if I did there aren’t enough conversions to reach significance in a reasonable amount of time), but my impression was that the newer prices were too high and overall they were hurting sales. And so I launched a new set of plans with prices in between the prior ones: $29/$79/$149 per year.

The new upload-your-own-logo feature is limited to the Professional plan as a way to make that plan more appealing (and people using Preceden for school or personal use probably don’t need it anyway).

Pricing is hard.

What I’m reading

Complexity: A Guided Tour by Melanie Mitchell:

I’ve been interested in emergence for a long time so this book is right up my alley. It makes me want to resume Emergent Mind projects… but we will see. Not enough time in the day for everything.

What I’m watching

Coming 2 America on Amazon Prime:

Be well my friends.

Friday Updates: Preceden’s Data Retention Dashboard, Hiring a Data Lead at Help Scout, Hanna

Photo by Jason Leung

What I’m working on at Preceden

This week I created a data retention dashboard (using Mode) to help me monitor how everything is progressing.

There are 225k users scheduled for deletion in the next few months and 46k have already been deleted (these are folks who hadn’t been active in years and had little to no content in Preceden). So far the system has sent out 8k “Your account will be cancelled in 60 days” notices, which has led to 63 users logging back into Preceden and a whopping $58 in reactivation revenue. These emails have gone almost entirely to people who never paid in the past, so it’s not surprising that not many have paid.

This week I got former customers added to the schedule and they should reactivate at a much higher rate. Not expecting a ton of reactivation revenue from this project, but if it winds up bringing in a few thousand extra dollars in revenue this year I’ll be happy with it.

Here’s the number of users deleted each day:

We can see some initial experiments in the beginning there, followed by a few days of bug fixes before it ramped up again.

It’s sending about 1,500 notifications per day now informing folks their accounts will be deleted in 60 days (and eventually 7-day reminders too):

And here’s the 225k scheduled for deletion:

These are spread out just to reduce the risk if something goes awry.

The only segment of inactive users remaining to handle are those with public timelines. Some of these timelines have hundreds of thousands of page views, so I don’t want to delete them even if the users who created them haven’t been active in years (since people are still getting value from them, plus some people go on to sign up and pay for Preceden). This week I spent a lot of time building out a system for quickly measuring how many total views a timeline has had. Preceden does have a views table that tracks every view a timeline has gotten, but with tens of millions of records it was impossible to query efficiently to answer questions like “give me a list of timelines with more than 100 views”. It took creating a daily timeline views table to make those queries fast.
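The idea behind the daily rollup, sketched in JavaScript with an illustrative in-memory shape rather than the actual Postgres schema: collapse the raw per-view log into one row per timeline per day, so that totals only have to scan the small table.

```javascript
// Collapse a raw view log into one count per timeline per day.
// views: [{ timelineId, viewedAt: Date }, ...]  (illustrative shape)
function rollupDaily(views) {
  const daily = new Map(); // key: `${timelineId}|${yyyy-mm-dd}` -> count
  for (const v of views) {
    const day = v.viewedAt.toISOString().slice(0, 10);
    const key = `${v.timelineId}|${day}`;
    daily.set(key, (daily.get(key) ?? 0) + 1);
  }
  return daily;
}

// Questions like "timelines with more than 100 views" now only need to
// sum over the much smaller daily table instead of the raw log.
function totalsByTimeline(daily) {
  const totals = new Map();
  for (const [key, count] of daily) {
    const timelineId = key.split('|')[0];
    totals.set(timelineId, (totals.get(timelineId) ?? 0) + count);
  }
  return totals;
}
```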

It’s all coming along. Should wrap up this project in the next 2 weeks or so and move onto something new.

What I’m working on at Help Scout

This week we published a Data Lead job opening that I’ve been working on recently. I’ve been the Data Lead for the past 4 years, but with me being a part-time contractor now and preferring an IC role anyway, it makes sense to bring on a full time person who can take Help Scout’s data initiatives to the next level.

If this role sounds interesting to you, I’d encourage you to apply (and bonus points for being a reader of this blog!). Some highlights:

As our Data Lead, you will
  • Lead, develop, mentor, and manage our data team, currently consisting of an analytics engineer, and oversee the hiring efforts to grow the team to 5 people in the next 12-18 months.
  • Collaborate with leadership from across the company including marketing, sales, finance, support, and product to gain a deep understanding of how data is used at Help Scout today and how we can make it even more useful in years to come.
  • Take ownership of our data stack which currently includes Looker, dbt, BigQuery and Fivetran. You’ll maintain the long-term roadmap for our data infrastructure and make sure it can grow alongside the company. 
  • Evaluate and implement new tools and processes to expand Help Scout’s data capabilities.
  • Prioritize and assist with data initiatives and data requests and communicate analyses to stakeholders.
  • Initially spend roughly 50% of your time on individual contributor/technical/analysis work and 50% on management/strategy work, but gradually spend a larger % of your time on the latter as you grow the team.
In your first 90 days at Help Scout, you will
  • Learn the ins and outs of the business, our data stack, our processes, our key metrics, etc.
  • Gain experience by applying what you learn to solving real-world data requests.
  • Meet with other team leads to understand how they use data currently and brainstorm opportunities for improvement that the data team can tackle in the future.
  • After you get spun up and better understand the state of data and the needs of the business, you’ll decide on what role we need to hire for next and lead the effort to hire that person in Q3.

What I’m watching

Hanna on Prime: