Why Your Startup Shouldn't Care About NPS Benchmarks

NPS is hot. 🔥🔥🔥

If you haven't been asked to rate your favorite SaaS businesses recently, chances are you will be before too long — we've been asked about ten times in the past week, courtesy of NPS measurement startups like Promoter.io and Wootric.

Increasingly, our clients are using tools like these to measure how satisfied their customers are with their services, either as a complement to informal feedback conversations or (in some worrying cases) as a replacement for them.

Being so numbers-driven, startups tracking their NPS tend to want a benchmark for it — and that’s perfectly understandable. What’s the point of measuring something if you’re not able to see how you stack up against everyone else, right?

Well actually… There is a killer reason to measure NPS, but it's got nothing to do with your competition.

We don’t pay attention to NPS benchmarks, and in this post we'll discuss why you shouldn't either.

But first, a quick primer.

What is NPS?

The Net Promoter Score (NPS), for the uninitiated, is generally taken to represent the percentage of users who would recommend a brand, product or service to a friend or colleague.

NPS was originally developed in the early noughties and was first introduced to the world by Fred Reichheld of Bain and Company in the Harvard Business Review.

In the article (well worth a read), Reichheld explains that the seed for the concept came from car rental company Enterprise, who distilled hitherto complex customer surveys into two simple questions about the quality of the experience and the likelihood of the customer to rent from the firm again.

Reichheld, surprised by the ability of Enterprise to drive growth from such simple survey questions, began a two-year study into whether short — or even single-question — surveys could serve as a useful predictor of growth.

It turned out that they could.

Across multiple industries, Reichheld discovered that the question “How likely is it that you would recommend [company X] to a friend or colleague?” consistently had the highest statistical correlation with repeat purchases or referrals (the key drivers of profitability).

Reichheld developed a zero-to-ten scale for his question which effectively divided respondents into three groups — 'active promoters', who gave ratings of nine or ten, the 'passively satisfied', who gave seven or eight, and 'detractors', who selected between zero and six.

The key differentiator baked into the NPS calculation is that it only focuses on customers who fall into the ‘active promoter’ bucket — answers from ‘passively satisfied’ customers are written off (as these customers are unlikely to contribute to a company’s word of mouth growth).

The percentage of detractors, who don’t contribute any positive word-of-mouth effect and may actually harm growth by increasing service costs, damaging employee morale or forcing businesses to rely on incentivization to keep their business, is then subtracted from the active promoter percentage, to give the final NPS score.

NPS = % Promoters (9–10) − % Detractors (0–6)
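The calculation above is simple enough to sketch in a few lines of Python (a minimal illustration, not from any particular NPS tool):

```python
def nps(ratings):
    """Compute the Net Promoter Score from a list of 0-10 survey ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count toward
    the total but are otherwise ignored. Returns a value from -100 to 100.
    """
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten responses: 5 promoters, 3 passives, 2 detractors
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 4, 6]))  # → 30.0
```

Note that a company with nothing but passives would score zero — the formula rewards genuine enthusiasm, not mere satisfaction.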

When Reichheld applied this formula across multiple industries, the effect was striking — NPS consistently served as a predictor of companies’ average growth rate, regardless of their size.

NPS has gone on to become one of the most widely used measurements of customer satisfaction and has also been modified to serve as an indicator of employee satisfaction.

Detour over.

What does this mean for my startup?

Fundamentally, NPS can be a useful metric for startups to track — used correctly, it’s a strong indicator of growth potential even in the smallest companies.

Unfortunately, there are some gotchas to NPS measurement which many startups either don't know about or choose to ignore, giving us serious pause for thought whenever we see a high score lauded as a marker of success or of being 'above a B2B benchmark'.

For a start, the simple nature of NPS means that it is subject to a lot of noise — you need a high number of responses to have a reliable statistical base that you can make decisions on.

Secondly, NPS is most useful measured over time — it’s unlikely that you’ll pick up a representative sample with just a week of questioning, unless you can question a huge base in that short time, and even then the result may be clouded.

As a corollary of this point, startups often choose to hit users with an NPS survey just as they are using the product. This is a dangerous trap to fall into, as people using a product are far more likely to be satisfied with it than those who chose not to log in that day. An NPS which is reflective of the total customer base needs to question the users who aren’t logging in — which generally means asking many more people.

Thirdly, NPS ratings are heavily influenced by culture and need to be segmented accordingly — Americans are far more likely to give a positive score than Europeans, for example.

Fourth, NPS is affected by business strategy, with changes in customer composition (for instance, average customer size) or market conditions (for instance, pricing) likely to produce swings in numbers.

And finally, there is one gigantic gotcha for startups, which renders any boast about NPS scores almost meaningless, and it's this — most startups haven't found their market.

NPS benchmarking works fine if you're using it to compare an airline, an energy firm or a car rental company, all of whom serve broadly the same type of customer (and an awful lot of them, at that).

But startups (especially B2B ones) normally have a relatively small number of customers, and there's a good chance they'll be different from even their closest competitor's customers. Even a startup representing a challenger brand in an established space is likely to attract a slightly different audience (normally of early adopters) than the incumbents do.

So with these caveats, is NPS useful for startups?

Actually, we believe it is — and not only because it often comes with qualitative feedback which can be used to understand the problems faced by users.

Key Point: NPS is very useful... if you’re benchmarking against yourself.

While comparing your own NPS to others’ is essentially meaningless for many startups, NPS is useful to startups as a moving metric to be measured across time.

The question NPS benchmarks taken across time can answer is “how far has this company genuinely come in improving the number of active promoters among its customers?”

Knowing the answer to this question gives a view of evolving product-market fit, as it gives a true view of how close a business is to turning its user base into evangelists.

Knowing the answer to this question gives an insight into how the score is changing with customer composition and can provide valuable insights that can feed strategy decisions (for example, if a reliable NPS metric is high in a certain segment, a business may decide to actively pivot to serve only that segment).

And finally, knowing the answer to this question shows that a company is committed to earning the love of its customers both in words and deeds.


Reichheld alludes to the importance of this final point in his article, noting that when the simplified customer satisfaction measurements were first introduced, CEO Andy Taylor felt that scores were not improving quickly enough, and that the company needed a greater sense of urgency around improving scores.

It was only when the management team decided that field managers would be ineligible for promotion unless their regional scores matched or exceeded the company’s average scores that survey numbers really started to rise, with employee rewards firmly linked to customer feedback.

Essentially, this means that Enterprise was able to achieve stellar growth by benchmarking NPS against only itself — ensuring that it set high standards, and then striving to beat them.

For startups, this is the true power of the system — NPS is most effectively used to provide a clear view of the status quo, and to track your journey in improving it.
