
> As a software engineer I love reading our NPS reports, because they always have great tidbits of info that are often difficult to deduce just by looking at standard analytics.

What you love is the customer feedback--this can be accomplished in multiple ways and you don't need NPS for it.




NPS has the advantage of being concrete and simple, producing evenly sampled feedback. This is hard to get any other way.

A proper NPS asks only one question and requires only a single click to answer -- e.g. "On a scale of 0-10, how likely are you to recommend <product> to a friend?" > click [8] > done.

There's a free-text "Tell us more" field that is totally optional.

This means users actually answer it. NPS feedback is the closest I've ever seen to being a clean sample.

By contrast:

- Everyone hates an n-question "would you like to fill out our survey" feedback form. Any data from those is going to be skewed, because only a certain kind of person fills those out, and usually only if they're angry enough to want to vent.

- Feedback from a passive "Submit feedback" button in-app is also skewed. Maybe useful as a bug report mechanism, but it won't tell you what's making your happy users happy.

- Proactively reaching out and talking to/observing users is obviously good, and nothing ever substitutes for that. But you want bulk data, not just anecdotal evidence.

- Most importantly, NPS really shines at telling you what's making your almost-happy users almost-happy. When someone clicks [10] and writes "I love it", that feels great. But when someone clicks [7] and tells you what's annoying them, that's often extremely useful.

- NPS does a great job of priming people to give useful feedback. By forcing you to pick a single out-of-10 score, it makes you do a quick mental accounting: "9? Nahh, <reason>." Then you click 8 and write the reason.

--

NPS was popularized by that famous HBR study which found it correlates well with company growth.

I wonder how much of that result is simply because it's one of the least obnoxious ways of sampling user feedback, and therefore produces clean data.


In an almost textbook case of Goodhart's Law: now that I know how NPS is scored, I answer these questions differently, and am far more prone to use 6, 7, or 8 to express my opinion.
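(For anyone unfamiliar with how NPS is scored, here's a minimal Python sketch of the standard bucketing, with made-up example responses:)

    # Standard NPS bucketing: 9-10 = promoter, 7-8 = passive, 0-6 = detractor.
    # The score is % promoters minus % detractors.
    def nps(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    print(nps([10, 9, 8, 7, 6, 9]))  # 33.3 -- the 7 and 8 count for nothing, the 6 counts against
    print(nps([10, 9, 9, 7, 6, 9]))  # 50.0 -- bumping that single 8 to a 9 moves the score a lot

Which is why a shift from 9s to 7s and 8s changes the reported number far more than the underlying sentiment changed.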


Eh, this is a niche issue. I'd bet less than 1% of people answering an NPS survey have ever heard of "NPS".

The only place this would be a meaningful effect is a dev-tools product or similar, where your audience is the HN audience.


It probably depends on the survey. If I just want to get out the door of the service department at the car dealership, I'll probably give a perfect score unless they really screwed something up. I don't want to explain a non-perfect score, and probably shouldn't give one so long as the experience was "OK."

On the other hand, like many, I probably tend to follow something like the XKCD star rating levels (https://xkcd.com/1098/) for products. And for employee surveys, I generally answer somewhere in the mildly positive range to most questions.

I'm not sure I've ever answered an NPS survey but, for most companies, I'd probably be somewhere in a similar range, even for companies I'm perfectly fine with most of the time.

The more complex the transaction or product, the more reliably you'll have areas of gripe-age. A book or movie is rarely 5 stars for me. A USB cable pretty much works or it doesn't.



