
I see proposals against NPS like this from time to time, but they always tend to ignore the underlying science that went into making NPS in the first place, primarily that there was a very strong correlation between the calculated NPS score and business performance.

Also, I find that many of these articles have a somewhat caricaturish view of how NPS is used in the real world. It's not "oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"; instead, NPS can be like a canary in a coal mine - an unexpected drop in NPS is something to investigate and get more information about what went wrong, usually from the more detailed free-form questions in the NPS questionnaire.

As a software engineer I love reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.




> As a software engineer I love reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.

What you love is the customer feedback--this can be accomplished in multiple ways and you don't need NPS for it.


NPS has the advantage of being concrete and simple, producing evenly sampled feedback. This is hard to get any other way.

A proper NPS survey asks only one question and requires only a single click to answer, e.g. "On a scale of 0-10, how likely are you to recommend <product> to a friend?" > click [8] > done.

There's a free-text "Tell us more" field that is totally optional.

This means users actually answer it. NPS feedback is the closest I've ever seen to being a clean sample.

By contrast:

- Everyone hates an n-question "would you like to fill out our survey?" feedback form. Any data from those is going to be skewed, because only a certain kind of person fills them out, and usually only if they're angry enough to want to vent.

- Feedback from a passive "Submit feedback" button in-app is also skewed. Maybe useful as a bug report mechanism, but it won't tell you what's making your happy users happy.

- Proactively reaching out and talking to/observing users is obviously good, and nothing ever substitutes for that. But you want bulk data, not just anecdotal evidence.

- Most importantly, NPS really shines at telling you what's making your almost-happy users almost-happy. When someone clicks [10] and writes "I love it", that feels great. But when someone clicks [7] and tells you what's annoying them, that's often extremely useful.

- NPS does a great job of priming people to give useful feedback. By forcing you to pick a single number out of 10, it makes you do a quick mental accounting: "A 9? Nahh, <reason>." Then you click 8 and write down the reason.

--

NPS was popularized by that famous Harvard Business Review study that found it correlates well with product growth.

I wonder how much of that result is simply because it's one of the least obnoxious ways of sampling user feedback, and therefore produces clean data.


In almost a case of Goodhart's Law, now that I know how NPS is used, I answer these questions differently, and am far more prone to use 6, 7, or 8 to express my opinion.


Eh, this is a niche issue. I bet fewer than 1% of people answering NPS surveys have ever heard of "NPS".

The only place this would be a meaningful effect is a dev-tools or similar product whose audience is the HN audience.


It probably depends on the survey. If I just want to get out the door of the service department at the car dealership, I'll probably give a perfect score unless they really screwed something up. I don't want to explain a non-perfect score, and probably shouldn't give one so long as the experience was "OK."

On the other hand, like many, I probably tend to follow something like the XKCD star rating levels (https://xkcd.com/1098/) for products. And for employee surveys, I generally answer somewhere in the mildly positive range to most questions.

I'm not sure I've ever answered an NPS survey but, for most companies, I'd probably be somewhere in a similar range, even for companies I'm perfectly fine with most of the time.

The more complex the transaction/product, the more reliably you'll have areas of gripe-age. A book or movie is rarely 5 stars for me. A USB cable pretty much either works or it doesn't.


The "science" is quite dubious: https://articles.uie.com/net-promoter-score-considered-harmf...

The reason NPS exists is that it's an easy number to calculate that sounds convincing, and manager types love numbers without having to think too hard about how they were produced.
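
For what it's worth, the arithmetic really is trivial: bucket 0-10 responses into promoters (9-10), passives (7-8), and detractors (0-6), then subtract the detractor share from the promoter share. A minimal sketch in Python (the function name and sample data are illustrative, not from any source):

    def nps(scores):
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), range -100..100."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    print(nps([10, 9, 10, 9, 8, 7, 6, 2]))  # 4 promoters, 2 detractors, n=8 -> 25.0

Note that passives add nothing to the numerator, so the score already discards information about the middle of the distribution.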


That article just links to the Wikipedia page. The main criticism paper [1] fairly successfully debunks the claim that NPS is uniquely valuable, but more or less shows it to be of comparable value to other measures of customer satisfaction.

1: https://web.archive.org/web/20200716065914/https://pdfs.sema...


Yes, that's correct. The broader point is that it's really hard to boil down something as complex as user satisfaction to a number. My time spent with UX professionals has taught me that all of these measures are lossy, and my time with data scientists has taught me that once you start depending too much on lossy measures, you're going to be led astray.

But management types love little numbers like NPS, so it usually gets done anyway, especially in big orgs. And then it goes downhill, because PMs and leads are incentivized to optimize for the particular number their management chain tells them matters, and they game it because that's how people work. Later on, frustrated engineers and PMs who aren't part of that game wonder if they're the crazy ones, because they see very real customer frustrations brushed aside by an org structure that no longer seems to care much about what users actually tell them. Or the org says it cares, but never incentivizes anyone to address issues coming in through verbatim feedback.

Maybe someone with the title of VP eventually wonders why a competitor is doing really well, looks at the verbatim feedback themselves (it was usually given to them already, but they forgot about it), realizes they've been steering a big ship in the wrong direction, and causes all kinds of chaos in the process. The sycophants in the org all line up to agree with them and declare they were right all along (and get rewarded at the end of the year), line managers and their reports are left confused by the jolting priority shift, and the people who might have felt vindicated by the change in direction are left wondering whether their leadership is actually fit to lead.

Is that all the fault of NPS or other numbers? Of course not! But they're an easy and generally accepted way for bad leaders to hide their bad leadership.


Like sibling comment said, I'd love to see actual science behind it.

> a caricaturish view of how NPS is used in the real world. "It's not 'oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"

That sounds exactly like what the response at many companies would be. There's often much nuanced discussion about how/when/why to adopt certain metrics, but once they're adopted they become sacrosanct, and any deviation from "good" results is a crisis needing immediate action.


Agreed. The first line of the article specifically calls for "companies and their investors to stop using Net Promoter Score (NPS) as a KPI", and NPS doesn't provide a measurement that directly supports specific actionable steps; KPIs are supposed to be about planning actionable, measurable steps toward a clear business outcome.

(And I have plenty of opinions on how useless KPIs have been in practice for me and the places/teams I've worked. They're hard to get right unless everyone in management is on the same page about what a clearly defined business goal is, and what an actionable step is, and what a strong metric for determining whether the action met the goal is.)


> they always tend to ignore the underlying science that went into making NPS in the first place, primarily that there was a very strong correlation between the calculated NPS score and business performance.

There is no underlying science behind NPS. It was an arbitrary measure with no research or model behind it. It also doesn't work very well, in large part because there is no strong correlation between NPS and anything else: as this article makes clear, NPS is an extremely noisy number. Look at the graphs - you can have enormous variations in NPS with zero change in underlying customer sentiment. Obviously that can't correlate with anything useful.
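
To put a number on the noise, here's a quick simulation sketch (the 40/40/20 promoter/passive/detractor mix and the sample size of 100 are assumptions of mine, purely for illustration):

    import random

    random.seed(0)
    # Fixed customer sentiment: 40% promoters, 40% passives, 20% detractors (true NPS = 20).
    population = [10] * 40 + [8] * 40 + [3] * 20

    def nps(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # Ten "monthly" surveys of 100 responses, all drawn from the identical population.
    print([nps(random.choices(population, k=100)) for _ in range(10)])

With n = 100 the standard error of the score is roughly 7-8 points on the -100..100 scale, so month-to-month swings of 10-15 points can be pure sampling noise even when sentiment hasn't moved at all.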

I think what you're doing is conflating the idea of "asking how likely people are to recommend your product" with NPS. The former is a decent idea, and it does have some (limited) science behind it! There really was a study done that really did find that question was an excellent predictor of many key results. (...and some others that found it was not, but hey, it's something.)

So yes, asking people the question and reading the results is probably a good idea. ...but that's not what NPS is.

> an unexpected drop in NPS is something to investigate

It really, really isn't. And I would challenge you to point to any evidence to the contrary.


I would think an unexpected and sustained drop in NPS is something to investigate. A single daily measure wiggling low? Of course not.
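
One rough way to encode that distinction, as a sketch (the 14-day window and 10-point threshold here are arbitrary assumptions, not standard practice):

    def sustained_drop(daily_nps, window=14, threshold=10):
        """True if the recent rolling average sits well below the prior baseline."""
        if len(daily_nps) < 2 * window:
            return False
        baseline = sum(daily_nps[-2 * window:-window]) / window
        recent = sum(daily_nps[-window:]) / window
        return baseline - recent >= threshold

A single low day barely moves the window average; two weeks of genuinely lower scores trips it.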


Honest question: what science? Any good papers you could recommend?

After a quick search it seems that NPS originated in a Harvard Business Review article, which I don't consider a credible source of scientific results. The scientific papers I'm seeing mostly seem pretty skeptical, judging from the abstracts.


They're using the popular meaning of "science": A claim from an impressive or impressive-sounding person or institution.


I generally agree, but I think one problem is when improving the NPS score becomes the sole objective, rather than fixing the underlying problems that caused it to go down. In other words, calling a doctor to treat the canary does not improve the conditions in the coal mine that made the canary sick.


IDK, in my experience NPS prioritization trends towards "hiring more people to focus on customer happiness."

Source: worked at multiple startups that IPO'd using this as one of their 'growth strategies'.


> primarily that there was a very strong correlation between the calculated NPS score and business performance.

There is absolutely no science linking performance and NPS. Just pseudoscience.


> As a software engineer I love reading our NPS reports, because they always have great tidbits of info

My only experience of NPS is when I worked retail/sales in my first job, and it absolutely sucked in that context.

Less “great tidbits” and more “I just lost 30% of my commission this month because of something 100% outside of my control”.


> "It's not 'oh no, our NPS score is down, let's hire more people to focus on customer 'happiness'"

This is precisely what happened at my company, hah.



