I wouldn’t say I fit the description of an indie hacker, since my formal background is in analytics, and I’ve mainly worked at established publicly traded companies throughout my career. However, I consider myself pretty lucky because I’ve had the opportunity to work on a startup initiative within one of those companies. It was the best of both worlds. I was able to be part of a team that built a SaaS product from zero to over 500K customers, with minimal risk.

A large part of why the team was successful was how we used analytics to guide our decision making. The first part of this article will cover what we did to build, test, and grow the SaaS product. The second part will cover two additional analytics projects I originally created for separate company initiatives, but that were later used to help grow the SaaS product.

Context

Around the start of 2013, the senior leadership team of my company decided to make a big change to our business model. We switched from selling stand-alone software products to selling an all-in-one platform with a good/better/best pricing model. Customers would have access to certain products/features depending on which package they purchased. Overall, we believed this change would be beneficial for both our company and our customers.

In order to bring this idea to life, senior leadership put together an internal startup team of about twenty-five people. There was representation from different departments in the company, including marketing, analytics, billing, sales, support, and a few others. I was one of three analytics members on the team. Although we collaborated on everything analytics-related, each of us had areas of work that we were most involved in. Among other things, we were tasked with providing input into product features, leading quick and iterative A/B testing, building easy-to-consume reporting and analyses, and giving updates to C-level executives. For this post, I’m going to skip the “other things” and “providing input into product features,” mainly because they were pretty basic or were primarily done by the other two analysts. Instead, I’ll give a little more detail on the three areas I was involved in and contributed to the most.

DISCLAIMER: For confidentiality reasons, the actual metrics values included in this post have been adjusted. However, the story is still an accurate portrayal of the project, and what occurred.

A/B Testing

The first thing to figure out for testing was how much of a sample size we would need. We based this on two things: first, getting to statistical significance with our testing, or as close to it as we could, which would give us confidence in the results we were seeing; and second, making sure that if the new experience completely tanked (zero revenue), we wouldn’t harm the overall business. Keep in mind that although we were treating this like a startup, we were still part of a public company with revenue targets.
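To give a sense of the kind of math involved, here’s a minimal sketch of a standard two-proportion sample size calculation. The baseline rate, target rate, and the helper function itself are illustrative assumptions, not the project’s actual numbers or tooling.

```python
import math

def sample_size_per_group(p_baseline, p_expected, z_alpha=1.96, z_beta=0.84):
    """Rough sample size per group to detect a difference between two
    conversion rates with a two-sided z-test (95% confidence, 80% power)."""
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = abs(p_expected - p_baseline)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Placeholder rates: a 2% baseline Visitor:Trial rate and a hoped-for 2.3%.
print(sample_size_per_group(0.02, 0.023))  # visitors needed in each group
```

In practice, you would run a number like this against your actual traffic to see how long a test would take to reach significance, which is roughly the exercise we went through when settling on our slice of traffic.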

Like most SaaS businesses, we had a funnel that narrowed from website visitors → free trial-ers → paying customers → cancelled customers. The metrics we traditionally used for this funnel were Visitor:Trial rate (V:T), Trial:Pay rate (T:P), Average Revenue Per User/Customer (ARPU), and Cancel rate. After playing around with different conservative scenarios based on historical trends, we figured out that if we put 10% of our overall website traffic into this new experience, we should have enough sample to test with. We also worked with the financial analyst on the project to confirm that 10% wouldn’t put our revenue targets in jeopardy.

Along with the 10% of website traffic that we were putting into the new experience, we held out another 10% of traffic in our old experience as a control group. The control group was excluded from any promotions, offers, or other special activity we might have been running at the time, which kept it clean for the best analysis. For this project, we added a new metric that was our primary measure of success: Average Revenue Per Visitor (ARP-V). It combined the four previous metrics into one number that gave a simple view into whether or not the new experience as a whole was working better than our old experience. An example of ARP-V is below.

If you follow the funnel example from start to finish, you’ll see that the control group has a much higher T:P rate than the test group. However, the test group has a higher V:T rate, a higher ARPU, and a lower Cancel rate. When you divide the total revenue by the starting visitor number, you get more revenue from each visitor in the test group than you do in the control group.
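To make the mechanics concrete, here’s a small sketch of how ARP-V rolls the funnel metrics into a single revenue-per-visitor number. The figures are made up purely for illustration and are not the project’s actual results; they just mirror the pattern described above, where the control wins on T:P but the test wins everywhere else.

```python
def arp_v(visitors, vt_rate, tp_rate, arpu, cancel_rate):
    """Average Revenue Per Visitor: revenue kept from a pool of visitors,
    divided by the number of visitors who entered the funnel."""
    trials = visitors * vt_rate            # Visitor:Trial
    payers = trials * tp_rate              # Trial:Pay
    retained = payers * (1 - cancel_rate)  # survivors after cancels
    return retained * arpu / visitors

# Hypothetical illustration only -- not the real metrics.
control = arp_v(visitors=100_000, vt_rate=0.020, tp_rate=0.30, arpu=40, cancel_rate=0.05)
test = arp_v(visitors=100_000, vt_rate=0.030, tp_rate=0.22, arpu=45, cancel_rate=0.04)
print(f"Control ARP-V: ${control:.3f}   Test ARP-V: ${test:.3f}")
```

The appeal of a single blended metric is exactly this: one experience can lose an individual funnel step and still come out ahead once everything is netted out.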

The last part of the testing was to figure out how long we would have to run each test. As I mentioned before, the goal for testing was to test, learn, and iterate quickly. We wanted to make as many tweaks as possible in the shortest amount of time. After playing around with more scenarios of how some of these early tests might go, we came up with about 14 days for each test iteration.

After running a test for 14 days, we would take any learnings we could (click tracking data, feedback from the sales reps, etc.) and make tweaks to our website, the product UI, pricing, and more. In total, we went through about twenty iterations of testing. Originally, we had a goal of a 5% lift in ARP-V. By the end of our testing iterations, we were able to achieve close to a 20% lift in ARP-V.

Reporting and Analysis

Once we had our testing plan in place, we needed to be able to see how our new experience was performing, so we could make appropriate recommendations. To do this, we built some advanced Excel reporting that was updated daily. In addition to the funnel metrics mentioned earlier, this reporting showed product engagement metrics like the % of trial-ers starting our product flow, the % of trial-ers uploading their contacts into our system, and the % of customers logging into the product in the last 90 days. Each of these detailed metrics was an indicator of one or more of the funnel metrics, so if a funnel metric wasn’t performing well in the new experience, we knew which engagement metrics to look at. Based on what we saw, we would know whether it was worth tweaking the product for the next iteration of testing, or whether the sales reps needed to adjust how they pitched the new experience.

Below is a sketch of what some of the visitor and trial reporting looked like in Excel.

That’s still a little vague, so I’ll walk through a specific example of how we used the reporting, along with an ad-hoc analysis we did occasionally, to discover insight about trial performance in the new experience.

One of the biggest metrics we struggled with was Trial:Pay (T:P) rate. Whether it was the new UI, pricing, or sales positioning, we always had a decline, or a gap, between the new experience and the old experience. The reporting showed this was consistent no matter which test iteration we were looking at. We brainstormed with the product team to figure out if there was a change we could make to the UI that would close the T:P gap.

As I mentioned before, we had a few product engagement metrics that we knew were indicators of a higher T:P rate. One of them, which is fairly intuitive, is successfully getting through our product flow. We also looked at T:P rates of trial-ers in other segments, including 1) starting the product flow, 2) clicking a button on the UI homepage, 3) having a sales interaction only, no UI, and 4) no interaction at all. Based on the % of trial-ers falling into each of these segments, combined with the T:P rates of those segments, we were able to figure out which segment was the biggest cause of the overall T:P gap we had (basically a weighted average).
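Below is a rough sketch of that weighted-average idea. The segment shares and T:P rates are invented for illustration; the point is that each segment’s contribution to the overall T:P rate is its share of trial-ers times its T:P rate, so comparing contributions between the old and new experiences points to the segment driving the gap.

```python
# Hypothetical (share of trial-ers, T:P rate) per segment for each experience.
segments = {
    "started product flow": {"old": (0.45, 0.50), "new": (0.35, 0.49)},
    "clicked UI homepage":  {"old": (0.20, 0.25), "new": (0.25, 0.24)},
    "sales interaction":    {"old": (0.15, 0.30), "new": (0.18, 0.31)},
    "no interaction":       {"old": (0.20, 0.05), "new": (0.22, 0.05)},
}

for name, exp in segments.items():
    old_contrib = exp["old"][0] * exp["old"][1]  # share x T:P = contribution to overall T:P
    new_contrib = exp["new"][0] * exp["new"][1]
    print(f"{name:22s} contribution to the gap: {new_contrib - old_contrib:+.3f}")
```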

It was the segment of trial-ers who were starting the flow. The T:P rate for this group was similar between the new experience and the old experience; however, fewer trial-ers were starting the flow in the new experience. If we could get more trial-ers in the new experience into the product flow, our overall T:P gap should close.

After collaborating with the product team on different changes we could make for the next test iteration, we agreed on the change below.

Our hypothesis was that the initial UI homepage that trial-ers saw was too busy with some of the menus and additional features we had added to the new experience, and that trial-ers were getting overwhelmed. We simplified the UI so that the only thing trial-ers could click was the button for the product flow. Once they went into the product flow, they would see the additional menus whenever they came back to the homepage. This change was successful, and it closed the overall T:P gap by a few percentage points.

Updates to Senior Leadership

A very important aspect of this initiative was how we communicated with the rest of the team, our executives, and the company as a whole. We had a different communication based on whether it was a daily, weekly, monthly, or ad-hoc update.

Every day, we’d send out an email to the startup team on the results of the latest test iteration. This mostly included the funnel metrics, and whether or not we were seeing statistically significant differences. If there was anything special to note, we’d include that as well.

Once a week, we would send an email out to the startup team, as well as the executives. This would include the funnel metrics for the latest test iteration, as well as insights we found and updates on the changes we were going to make, like the UI change I highlighted above.

Whenever we were giving updates to the team, we always used a Fact/Meaning/Action structure. This was the simplest and most impactful way we were able to get our message across and make progress.

The **facts** are the actual results of the test; in other words, what the data is telling you. In the UI change, the facts were that we were seeing a 30% increase in trial-ers starting the product flow, a 10% increase in trial-ers finishing the product flow, and the T:P gap closing by 2%. All of these results were statistically significant. This **meant** that trial-ers were more successful in the new experience, and more likely to buy based on ease of use rather than the appearance of many features. The **action** was that we kept the simplified UI as the default in the following test iterations.

When updating your crew, keep this Fact/Meaning/Action idea in mind. If the rest of your startup team doesn’t understand the impact of your analytics, they might as well be a blank page. Break down the analytics in a way that can be easily digested and stay in frequent contact to normalize consuming these updates, if you want the most bang for your analytical buck.

SaaS Product Recap

Due to the 20% lift we saw in ARP-V, our executive team made the decision to launch the new experience to 100% of our web traffic after just seven months of testing.

That wouldn’t have been possible if the twenty-five-person startup team hadn’t worked so well together. I think it was a great example of what can happen when you put great employees together and empower them to do great things.

Additional Project 1: What if we don’t have statistical significance?

In a few cases where A/B test results were close to statistical significance, but not quite there, we supplemented them with an A/B testing forecast that I had previously built. That gave us a little more confidence in the recommendations we were making. This is a great tool for many bootstrapped SaaS products, especially when the user base is still small and sample sizes in tests are an issue. Here’s the how and the why of the testing forecast as it was originally built…

A while back, the conversion team at my company came to our analytics team about doing testing on our website. We used to have a very long trial sign-up form that required eight to ten data points, including name, email, industry, etc. This was overkill compared to a lot of digital products today that want to get people into their UI as fast as possible.

The conversion team wanted to test out a much shorter trial sign-up form and see what that would do to our conversion rate. We were happy to help them out, but there was one big challenge: they were hoping to do a series of five to six tests over the next three months.

That was extremely difficult to do with our business model. Not only did we usually need to let tests run for a few weeks to get enough sample size, but we also offered a 60-day trial period. To get complete, statistically significant test results, it could take us almost three months to get a read on one test, never mind five or six. To solve this problem, I built a testing forecast that would help us make decisions on test results in three to four weeks.

To provide some background on the metrics being measured: the conversion team typically ran tests that affected our website sign-ups, which we measured by Visitor to Trial Rate (V:T), as well as tests that affected our customer conversion rates, like Trial to Pay Rate (T:P). For this forecast, the primary measurement combined the two into Visitor to Pay Rate (V:P), which would capture any offsetting movement between the two metrics. For the shorter trial form test, although our hypothesis was that a shorter form would lead to a higher V:T rate, we were concerned that the T:P rate might decrease because the trial-ers might not be as invested. V:P let us see the net effect of that, and would help us declare a winner.

As far as the forecast went, the V:T part was pretty straightforward. We would let the test run for a few weeks and track V:T rates for the test group(s) and the control. I included inputs that allowed people to add X number of additional days the test could run for, with Y total visitors being added each day. The forecast would then take those input values, split them appropriately between the test/control groups, and apply the historical V:T rates that we had seen so far. This forecasted visitor data would be added to the historical visitor data, giving us a much bigger sample size.
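Here’s a simplified sketch of that visitor-side logic, assuming an even split of the additional traffic between test and control. The function, inputs, and numbers are placeholders for illustration, not the actual forecast.

```python
def extend_visitors(hist_visitors, hist_trials, extra_days, visitors_per_day, split=0.5):
    """Extend observed visitor/trial counts for one group by assuming its
    historical V:T rate holds for the additional forecasted traffic."""
    vt_rate = hist_trials / hist_visitors
    added_visitors = extra_days * visitors_per_day * split  # this group's share of new traffic
    total_visitors = hist_visitors + added_visitors
    total_trials = hist_trials + added_visitors * vt_rate
    return total_visitors, total_trials

# Placeholder inputs: a few weeks observed, then 30 more days at 4,000 total visitors/day.
print(extend_visitors(hist_visitors=42_000, hist_trials=900, extra_days=30, visitors_per_day=4_000))
```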

The T:P part of the model was a little more involved. Now that we had our total visitor and V:T data (historical + forecasted), the model next needed to trend out what the T:P data would look like. To do this, I captured T:P rate snapshots at seven-day increments up to 56 days (not a perfect 60, but close enough for my purposes). Because most of our conversions came in the first few days of the trial period, our cumulative conversion rate followed a logarithmic curve. A logarithmic formula was then fit to the few snapshots we had, so we could predict what our 56-day T:P rate would be with only a few weeks of data. Although this was great for the forecast, we were always happy to see the actual data come in as people aged through their 60-day trials. The more actual data we had, the more accurate the prediction became.
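A rough sketch of that curve fit is below, assuming a simple T:P(day) = a·ln(day) + b shape fit to the weekly snapshots. The snapshot values are placeholders, not real data.

```python
import numpy as np

# Placeholder T:P rate snapshots observed so far, taken every 7 days of trial age.
days_observed = np.array([7, 14, 21])
tp_observed = np.array([0.08, 0.11, 0.13])

# Fit T:P(day) = a * ln(day) + b to the snapshots we have so far.
a, b = np.polyfit(np.log(days_observed), tp_observed, 1)

# Project out to the 56-day snapshot that approximates the full 60-day trial.
tp_56 = a * np.log(56) + b
print(f"Predicted 56-day T:P rate: {tp_56:.3f}")
```

As more weekly snapshots arrive, you simply refit with the extra points and the projected 56-day rate settles down.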

At this point, we had the total visitor data I mentioned before, as well as a predicted 56-day T:P rate. The last piece of the forecast was a set of adjustment inputs for future T:P performance, which let us be either more conservative or more aggressive. If the conversion team thought T:P might be better than what we’d seen so far, they had the option to increase the forecasted T:P by 10%. If they wanted to be more conservative, they could decrease it by 10%.

To recap, the total visitor data (historical + forecasted), the forecasted T:P data, and the optional T:P adjustment were combined to come up with a forecasted V:P rate and a statistical significance reading. Although a forecast like this isn’t always going to be 100% accurate, it definitely saved us time, gave our conversion team more comfort in making decisions on tests, and allowed us to test more in a shorter time span than we previously could.
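Here’s a minimal sketch of how those pieces might roll up into a V:P comparison with a significance read, using a standard two-proportion z-test. The counts and the 10% adjustment knob are placeholders, not the actual model.

```python
import math

def vp_significance(payers_a, visitors_a, payers_b, visitors_b):
    """Two-proportion z-test on Visitor:Pay rates for control (a) vs. test (b)."""
    p_a, p_b = payers_a / visitors_a, payers_b / visitors_b
    p_pool = (payers_a + payers_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return p_a, p_b, z, p_value

# Placeholder counts: forecasted payers (historical + predicted) over total visitors,
# with the optional "aggressive" input bumping the test group's forecasted T:P by 10%.
tp_adjustment = 1.10
print(vp_significance(payers_a=540, visitors_a=102_000,
                      payers_b=int(610 * tp_adjustment), visitors_b=102_000))
```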

Although the testing forecast wasn’t specifically built for the new SaaS product initiative I was working on, we did use it multiple times throughout the testing process, and it helped us a lot as we made decisions on the product.

Additional Project 2: How to Measure Customer Retention Using Cohorts

Analyzing customer cohorts for retention purposes has become very common in SaaS businesses. Many analytics platforms, like Google Analytics, include this type of view in their offering. However, there was a time when this wasn’t as much of a focus for companies. Here’s the backstory on my experience building this cohort analysis…

When I first started working at my company, the business was still in “growth” mode. Our goal was to consistently grow our customer base by bringing in more new customers each month. At a certain point, that growth started to slow down. At the time, one of the leaders of the finance organization recommended that we focus more on customer retention. We had a good amount of new customers coming in each month, but we had barely any strategy for keeping customers successful and satisfied throughout their tenure with us.

The company started to put more focus on customer retention shortly after that. One important part of this was getting an idea of what retention looked like historically. We knew that our average customer tenure was 34 months, but that was it. We didn’t know if there was a certain month of tenure, or a certain customer milestone, where retention would drastically decline. To tell the story of historical retention, I built a retention cohort analysis.

Each row represented a cohort: the customers who were new to our business in a given month. Each column represented a different month of tenure for the customers in that specific cohort (first month, second month, etc.). If you look at the spreadsheet below, you can get a sense of how customers stay over time. Data like this can reveal any glaring declines in retention after a certain tenure, or after a certain point in time.
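Here’s a small sketch of how a table like that can be assembled from a payments log, assuming a simple month-to-month model with one row per customer per paid month. The column names and the tiny sample data are made up for illustration.

```python
import pandas as pd

# Hypothetical payments log: one row per customer per month they paid.
payments = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3],
    "paid_month":  ["2014-01", "2014-02", "2014-03", "2014-01", "2014-02", "2014-02"],
})

# Convert each month to an integer index so tenure is a simple subtraction.
dates = pd.to_datetime(payments["paid_month"])
payments["month_idx"] = dates.dt.year * 12 + dates.dt.month

# Cohort = each customer's first paid month; tenure = months since that cohort month.
payments["cohort_idx"] = payments.groupby("customer_id")["month_idx"].transform("min")
payments["tenure"] = payments["month_idx"] - payments["cohort_idx"]

# Rows: cohorts. Columns: tenure month. Values: % of the cohort still paying.
counts = payments.pivot_table(index="cohort_idx", columns="tenure",
                              values="customer_id", aggfunc="nunique")
retention = counts.div(counts[0], axis=0)
print(retention)
```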

After putting the analysis together, the next step was to find insights in the data. In our case, retention had been very consistent. We never saw any significant declines after a certain tenure or after a certain cohort month. Our retention rate would decline a few percentage points each month, and it was pretty consistent from cohort to cohort. The only times it dipped were in certain cohorts where we had offered a month-end promotion for X number of months; after X months, we’d see a significantly larger decline than normal.

Now that we had our historical story, we wanted to use this to track the new cohorts coming in each month. For the most part, we made a note of changes the company made and would follow retention performance after that. For example, if we made a change to our pricing in October 2014, we would note that, and if performance changed, we would have a good hypothesis as to why. Keep in mind that we would always A/B test something like that, but this cohort analysis is a good complement, and it’s useful for senior leadership who just want the high-level company metrics without going into too much detail. Knowing whether an A/B test is successful is very important, but it doesn’t show the impact on the overall business.

If your business model is a month-to-month subscription, this analysis is pretty simple to build. As long as you can track when customers started paying you and have records of monthly invoices/payments, you can put an analysis like this together. If your subscription model requires payments in six/twelve/etc. month contracts, this may be harder to build, because it’s harder to pinpoint exactly when customers decided to stop paying. My suggestion here would be to figure out which customer engagement metrics are indicators of retention.

Whether it’s customer logins, product engagement, etc., you could build a similar analysis with that data, and use that as a proxy.

Wrapping Up

Just like the A/B testing forecast, this wasn't initially built for the SaaS product I worked on. However, we did start looking at retention cohorts as the customer base for the product grew. It helped us figure out what some of our customer retention efforts would look like, including retention testing, engagement programs, etc. That information, coupled with strong communication and a team dedicated to improving, allowed our project to close gaps and achieve success through analytics.

Have you improved your startup’s success through analytics? Timid to begin your foray into this subject and need some more guidance? Start the conversation below!