Well, this resonates. One of the things I do when scouring online reviews is to seek out the negative ones. I dismiss reviews that are irrelevant to the product itself and do (what I hope is) a reasonable job of filtering out the less-informed negative reviews. When I find a good, informed negative review, I tend to search through that particular reviewer's history to find what they HAVE reviewed positively, because I tend to think that person has more expertise/knowledge.
Until I find a good, informed negative review on something they've rated positively.
An aside: This site has lovely design.
Also, I have seen religious people constantly rally around negativity for decades, and it has always rubbed me the wrong way. This seems to be the same idea: the negative stance is seen as the most intelligent. I have years of Theology behind me and learned three dead languages to better understand Theology and the scriptures, and yet some yahoo with a high school education will have the upper hand whenever we talk about Kindness or Trust or Love. The message of fear is very powerful, and sadly the message of hope and love is seen as weak.
"For I am come to set a man at variance against his father, and the daughter against her mother, and the daughter in law against her mother in law.
And a man's foes shall be they of his own household."
I guess I go with Nietzsche's idea that Christianity is an inherently negative faith. There isn't anything inherently wrong with that, but you're basically trying to live divine law in a material world. That's not exactly going to bring out kindness and trust in people, even if the divine law is love.
I think a positive review could have a similar impact but it would need to be really detailed like say "I did the math and this is the optimal price point for data storage vs cost".
The reputation inflation from unrealistic standards mentioned in the article is probably why. We are used to cheap crap being called awesome and the greatest thing ever, so we filter out the noise.
Negative reviews are anti-marketing; it's no wonder people pay attention.
Saying "XYZ" is great is just an expression of wanting to fit in with the group, and doesn't require explanation.
Saying "XYZ" is awful is probably hurtful to someone nearby, and you're going to have to think about what exactly is bad about it. And not only is it bad, it's so bad you are willing to risk the happy consensus in order to help the group.
Of course things that worked in small, close groups might be gamed in larger, more loose ones...
I remember it also being like that in the small forums I remember fondly. We didn't have upvotes, but I still noticed that agreement is shorter than disagreement. But that was fine; since we kept talking through the months and years, we actually learned a lot. The flamewars were never so bad, because we were few and close enough not to "stop" them, but to simply throw all the fuel into the fire right away and then clear the air after it had all burned. I remember a guy thanking us when he finally got around to not being a bit of a racist anymore. We ribbed him about it occasionally for several years, but never in a mean way, and he ended up writing us a thank-you-letter style post. I forget all the details, but I'll never forget that it actually happened. Still proud of that guy, whatever the nick was ^^ That really can only happen in small, long-term communities, I think. Maybe it's simply not possible to "scale" that indefinitely, because it's about human capacity.
> either we agree and elaborate (which is hard when you simply don't have additional knowledge)
meaning in that case we can only upvote, i.e. just agree and don't elaborate. I more often find myself agreeing with comments than in a position to add anything to them.
The general consensus tends to be that it's easy to just agree with something, but if you disagree with a topic or post, having to explain your stance and why you disagree encourages better conversation.
I've posted on those forums for a long time, and generally I notice this is what tends to happen, as opposed to a place like Reddit, or even here sometimes, where a post just gets downvoted away with no counterpoints or discussion.
That line makes your comment look unintelligent and unconvincing ;-)
The design is very neat, but I found its actual readability on my MacBook Pro to be awful.
I thought it was a callback to praise being dismissed and considered less intelligent.
There's a strange sort of parallax-scrolling effect where some items actually hang off the side distractingly.
It looks more like an example of funky design rather than something designed for readability.
This is all IMHO.
Optimized for mobile maybe?
In other words, it is not negative thought that's praiseworthy, but critical thought—thought that questions—which must begin negatively. Often this questioning begins (and unfortunately ends, for most) by questioning a thing on its aspects that disappoint or dissatisfy, that's after all what urges the questioning. To emerge from that position and to be able to, in turn, acknowledge the positive qualities of the thing is to dwell in that serene and moderate realm called reason, not too negative, not too positive—reasonable.
Sophomoric joke notwithstanding, you've hit on the distinction between a thought out critique and mere criticism.
It looks like Prof Sandage ultimately turned the lecture into a book: http://www.washingtonpost.com/wp-dyn/articles/A43081-2005Jan...
If someone complains all the time about everything, the people around them quickly learn to ignore it. Our systems should too.
In other words, we successfully encode the difference between "wow, juhzy only complains when it's really bad" vs. "ignore juhzy, they complain about everything." This, vs. the more naive and destructive "5 bad reviews and you're out" we seem to do too much of now.
I suppose it should work the other way too. Someone who always "5-stars" everything should be ignored as well.
The point is that the system right now seems to have a negative bias and is able to do so because of a glut of providers which it burns through like an expendable, renewable resource. (Which morally sucks because they are real people with real lives.)
Yelp has no idea how often I eat out at restaurants.*
* just kidding; of course they do; they subscribe to a feed of my location data published by some game on my phone.
When I say "seldom" I of course mean "uses the service often, but seldom complains". The unit of time in this exponential weighting is not chronological, but uses of the service.
This of course is harder with yelp than uber.
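A minimal sketch of that per-use exponential weighting in Python; the decay constant and the boolean-per-use representation are my own assumptions, not anything a real system is known to use:

```python
def complaint_signal(history, decay=0.9):
    """How much weight a new complaint from this user should carry.

    history: one boolean per *use* of the service (True = complained),
    so the decay is indexed by uses of the service, not calendar time.
    """
    grumpiness = 0.0
    for complained in history:
        # exponential moving average of this user's complaint rate
        grumpiness = decay * grumpiness + (1 - decay) * (1.0 if complained else 0.0)
    # a complaint from someone who rarely complains carries more signal
    return 1.0 - grumpiness
```

Someone who complains on every visit ends up near 0, so their next complaint is mostly discounted, while someone who almost never complains stays near 1.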
Some people can complain about anything. (When I'm in a grumpy mood, I certainly will. Puppies? Too wiggly. Sunshine? Too bright!) When I read a review, my goal is to find out what I would think of the place or thing. The reviewers I look for are balanced and thoughtful, able to point out the good points of something they hate and the bad points of the things they love.
I have a very smart friend who apparently has never liked a movie. If one comes up, he will have a complaint about it. If you see one at a theater with him, before the credits have finished rolling, you'll be getting a list of his issues. It's exhausting. And it means I'll never listen to him on the topic of whether a movie is good.
It's the classic stopped clock: it's right twice a day, but you can't know which times, so why look at it?
From a perspective external to your inner monologue, I have no way of knowing. Even if your criticisms are valid, I know too many people who never seem to mention the positives to tell whether you're just perpetually unimpressed. Or, more likely, you're the type who never talks about the nice parts. I'm guessing this because of your comment elsewhere in the tree that specifically asks why one should compliment something for doing what it's supposed to do. (The answer is that people usually look at reviews for confirmation that X is as advertised.)
Or there's actually nothing redeeming about the subject of your review.
So yes, taken in context I would end up throwing your opinion away. In practice, I don't check every reviewer's review history, but maybe that would be a useful signal to see.
A summary of the opinions of many strangers has some signal.
The facts of a stranger have some utility.
For the record, I rarely leave reviews. When I do, though, it's only for the exceptionally bad or the exceptionally good.
It's hard for a product or service to exceed its expected value by more than a fraction or small multiple, while it's easy to cause misery well in excess of a dozen or even a hundred times the expected value. I believe that's more a reflection of it being easier to destroy vs create rather than a reflection of psychology. Combine this distribution with a heuristic to only report deviations from expectation in excess of a minimum threshold and the "average review score" will reliably undershoot the actual quality.
That's only a problem if you want to interpret the average review score as an absolute measure of quality, though, and I don't think anyone really does. Most of us are more interested in communicating and informing our decision processes than in passing moral judgement, and if our goal is to optimize communication then we should expect negative reviews to dominate the discussion because they're inherently capable of more meaningful excursions from the mean.
I agree with your overall point: it would be fantastically useful to be able to contextualize reviews against reviewer psychology. That way I could ignore both shouting from negative-nellys and forced positivity from those who feel compelled to balance the universe :-)
In reality, however, it seems that positive reviews tend to dominate. Using Google Maps reviews as my barometer, I hardly ever see any place rated less than 4.5 stars. So, I tend to think to myself "4.5-5 stars: might be good. 4 stars: probably okay. Less than 4: maybe steer clear."
Though, in practice I disregard reviews, take a plunge, and then decide on my own. Often I find myself in conflict with the average majority opinion.
If I have a specific complaint for a place close to my heart, like a coffee shop or restaurant or local shop, I’ll talk to the manager, privately and calmly, and be on my way.
It's interesting that your main criterion is how you feel you were treated, though. As discussed in the article:
"Restaurant reviews in which people sound traumatized by perceived injustice don’t tend to comment much on the food — it’s usually the perception of being treated rudely or uncaringly that seems to have pushed people into processing by writing out their feelings in a public forum."
I honestly feel the same about poor service. I'm usually accommodating and understanding, but I'm no monk. Of course there are times when I feel either the poor food or poor service merit some mention.
Most of the time, however, I think "they're human, going through human things. No big deal."
Doesn't mean you have to, just means that it'd be more helpful to others if you write "it actually is what it claims to be" reviews.
Do you consider them equally attractive?
> If someone complains all the time about everything, the people around them quickly learn to ignore it. Our systems should too.
That assumes that "too much" negative criticism is false and merely the result of a disagreeable personality. I think that assumption is false.
I think that's false; people who frequently post negative reviews may just be the kind of people who have legitimately higher standards, vs. people who are happy even with objectively crappy things.
I don't think the frequency of negative reviews, by itself, gives you any real information about the quality of the reviewer.
Amazon, as we all know, basically expresses quality as a raw average of stars plus a histogram, with no correction except for removing reviews. Meanwhile, BeerAdvocate is essentially just a labor-of-love beer-tracking site, but it offers multiple secondary stats about ratings. Each beer gets a percentage-deviation stat (pDev) showing how varied its ratings are, each specific review lists its deviation (rDev) from the product's average review, and each reviewer's profile shows how often they review above, below, and inside the average window (|rDev| - pDev > 0, then rDev - pDev).
I don't think anything is actively done with those stats, but even that's enough to spawn forum threads where reviewers discuss which beers are most controversial, which beers they gave outlier reviews to, and whether they're typically harsh or generous reviewers. The site also recommends rating sub-categories to get people thinking about different aspects of a problem, while Amazon is laden with reviews that are either myopic (e.g. that XKCD about a tornado tracker) or completely off-topic (e.g. about the seller instead of the product).
A site that wanted to go a little further could use the same stats I mentioned for spam detection (there are several papers on doing that effectively) and for score correction. (Basically, take a user's average deviation over all reviews and use that to scale or adjust their impact on averages.)
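A rough sketch of that kind of score correction in Python; the function name and the simple subtract-the-bias scheme are assumptions for illustration, not how BeerAdvocate or anyone else actually does it:

```python
from collections import defaultdict
from statistics import mean

def adjusted_averages(reviews):
    """reviews: list of (reviewer, product, stars) tuples.

    Estimate each reviewer's habitual bias (their mean deviation from
    product averages) and re-average with that bias subtracted out.
    """
    by_product = defaultdict(list)
    for _, product, stars in reviews:
        by_product[product].append(stars)
    raw_avg = {p: mean(scores) for p, scores in by_product.items()}

    # per-reviewer bias: roughly an rDev, averaged over all their reviews
    by_reviewer = defaultdict(list)
    for reviewer, product, stars in reviews:
        by_reviewer[reviewer].append(stars - raw_avg[product])
    bias = {r: mean(devs) for r, devs in by_reviewer.items()}

    # second pass: average the bias-corrected scores
    corrected = defaultdict(list)
    for reviewer, product, stars in reviews:
        corrected[product].append(stars - bias[reviewer])
    return {p: mean(scores) for p, scores in corrected.items()}
```

With this, a product reviewed only by a habitually harsh reviewer gets nudged back up toward where a typical reviewer would likely have put it.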
Given how easy all that is, why do Yelp and Amazon consistently have some of the least useful reviews and averages of any site I know? I suspect this article nails it - casual readers appreciate the simplicity more than depth.
The main drawback I see (assuming that you are seeking an honest picture of that provider's quality, and not a tool for punishment) is that you don't meaningfully capture people who only write reviews to flag serious problems.
Yes - reading the comments here, I'm realizing a major problem with any reviewer-centered system: people decide whether to review based on hugely varied criteria.
An always-five-stars reviewer might just be easily impressed (or a fraud), but they equally might subscribe to "if you don't have anything nice to say...". And quite a lot of people write exclusively bad reviews, but it's not obvious how to discern grumpy reviewers from people who only speak up about major issues.
A partial fix might be to analyze how far a given review falls from that user's average difference from the product average, which could discern a five-star bot from a person who only reviews great products. But even that doesn't solve the XKCD problem, where a product has median-case appeal but a high rate of critical failures. In true "what can't meta-analysis fix?" style, this could be improved by looking at a user's average distance outside 1 SD of the mean review, or perhaps by special-casing products with multimodal reviews.
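The bot-vs-genuine-fan distinction can be sketched like this (names and data shapes are made up for illustration): a bot five-starring everything sits far above the product averages on average, while someone who only reviews genuinely great products barely deviates from them even though all their ratings are fives:

```python
from statistics import mean

def mean_rdev(user_ratings, product_averages):
    """Mean deviation of a user's scores from each product's average.

    user_ratings: list of (product, stars); product_averages: dict.
    A five-star bot that also rates mediocre products lands far above
    the averages; a fan of great products stays close to them.
    """
    return mean(stars - product_averages[product]
                for product, stars in user_ratings)
```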
Of course, it's deeply unclear how to convert this into an output. Scaling reviews based on other reviews sounds like a nightmare; reviews shouldn't be differential equations, and no one wants to see 4+ layers of statistics to buy a new lamp. Perhaps all the indirect work could be done on raw ratings behind the scenes to produce a general "adjusted average" for display?
(More realistically, the serious-problem case only seems solvable by reading text reviews, and just devaluing outright fraud and always-angry cranks would be a massive improvement over existing systems.)
Back when I was a kid, the choices available within the budget of a lower-middle-class person (holidays, house location, schooling, medical care, dentistry, food, restaurants, everything) were much narrower.
You were delighted just to be in a restaurant, were on your best behavior, and it would have to be terrible to warrant complaint.
People can now eat and drink pretty much whatever they want from around the world at any time of the year, with service businesses available to cater to every whim, so people of a certain level of affluence can have their lives very tailored to their tastes and expectations of service.
To then not get that, not be treated as the in crowd or perceive being valued, can be very jarring.
Easy to dismiss as first world problems but I think it’s a real point.
Quite. If I'd bought stock in that logical fallacy site that was doing the rounds a while back HN would've made me a billionaire by now.
Also obligatory Black Mirror reference: Nosedive S3E1
1 - Eek! Methinks not.
2 - Meh. I've experienced better.
3 - A-OK.
4 - Yay! I'm a fan.
5 - Woohoo! As good as it gets!
But the subjects of review seem to interpret the ratings like this:
1 - A black hole.
2 - Don't bother.
3 - If you're desperate.
4 - Could be better.
5 - Met all expectations.
To compound the problem, many people actually do rate like this, causing what the article calls "reputation inflation." The result is that I end up having to use this higher scale when looking for a service, because enough people have been rating that way.
This is why I would prefer a binary up-or-down vote, with the option to comment. My 1-2 would translate to No, and 3-5 to Yes.
I've also heard of
A. Something was wrong
B. Good job—met expectations
C. Something was unusually good
Sort of like the ride-share "give a compliment" feature, since they know everyone gives 5 stars.
Another option (that I've never seen) is to tell the rater their response will be weighted based on their previous scores. So if someone who always gives 5 stars gives a 4 star, it's interpreted as a negative response. But if someone who gives majority 3 star reviews gives a 4 it's interpreted as positive.
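That weighting could be as simple as centering each rating on the rater's own average; a toy sketch (assumed names, purely illustrative):

```python
from statistics import mean

def calibrated(rating, prior_ratings):
    """Interpret a rating relative to this rater's own history:
    positive = above their personal norm, negative = below it."""
    if not prior_ratings:
        return 0.0  # no history, so no information about their habits
    return rating - mean(prior_ratings)
```

So a 4 from a habitual five-star rater reads as negative, while the same 4 from a habitual three-star rater reads as positive.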
Basically, you sit down at a restaurant, order a meal, and on the way out the host or hostess asks you to mark off checkboxes rating the experience on a scale of 0 to 11. Eleven being the Spinal Tap interpretation, and Zero being the hospital visit.
We now face three realities at this point, completely disconnected and untethered from one another, doomed to resolve as if it were some Cellular Automaton rule set from hell.
On the customer's side of things, maybe they let their small child mark the checkboxes to satisfy the child's curiosity; maybe it's the waitress's mom dropping by for a visit to take pictures at her first summer job. Maybe it's an actual normal customer, a random person with a pulse, fresh off the street. Anything goes. Straight A's or all balls.
The wait staff have no control and don't actually know how they're being judged, by what criteria or rules. They just know what it means to wait tables. They've been in a restaurant before, and they know what middle of the road is. Unless specifically advised of the expected level of service, they observe their surroundings and follow their instincts, drawing on prior experience and whatever they learn as they go.
The analyst that receives the feedback however, operates according to rules both alien and strange to normal people. Only 9, 10 and 11 are positive ratings. All others are an insult to the business, and all parties associated with an 8 rating or less must be eliminated from the system. They don't want to see paying customers that aren't ecstatic, nor do they want employees delivering service that catches a middling rating. To them, a 9 represents a danger zone, threatening profits with a backslide.
The person who explained this principle to me was my boss's boss, and it was in that moment that I knew the ship I was on was aimed at an iceberg, and that I needed to escape. He was referring to our OKR process while simultaneously explaining this principle in relation to our review process. It was a very "Steve Ballmer/Stack Ranking/Cut the Weakest Link" sort of discussion, and I stuck my thumb out and found another job, for better pay and less demand, within weeks.
Edit: This article has less detail than I remember, but covers the gist (and makes the same Black Mirror reference): https://www.newyorker.com/magazine/2018/02/05/customer-satis...
I guess that makes sense. Individual end-users only ever see the tip of the iceberg when interacting with a large organization, so they are likely to blame the parts of it that they interact with while ignoring the deeper root causes. If you let them blame the customer service rep's attitude or the size that their vegetables are chopped, then you wind up trying to over-optimize around making the reps smile or defining exactly how to chop the veggies. But if all you know is that people aren't happy with the experience, you send a process-minded person who is familiar with the whole stack of how things get done to figure out why the service reps are in a foul mood or how irregularly chopped veggies are winding up on people's plates.