They show a graph of positive/negative reviews over time, making review bombing stand out.
If Valve identifies the anomaly as being off-topic they can even mark it as such and exclude the reviews from affecting the overall review score. Users who don't like this can even disable the exclusion and get the "raw" review score.
The only thing missing is a country filter, so I could completely disable reviews from certain countries prone to excessive national pride and the accompanying "that island is ours, it wasn't on your map, ALL REVIEWS BE BAD". The five good reviews out of those areas never outweigh the rest.
I remember when Company of Heroes 2 was released and a certain cohort believed Russians were portrayed poorly; the game was review bombed to hell by hordes of Russian nationalists. Unfortunately this was before the review reforms, and it severely affected the game's score on the platform.
"there's now a checkbox in your Steam Store options where you can choose to have off-topic review bombs still included in all the Review Scores you see."
Things like this are so nice. If I don't trust Gaben's bullshit review filter, I can see what the result would be without it, just to check.
I too like their transparency, but for the opposite reason: I find review bombings are the only time reviews are actually useful.
Reviews are a dime a dozen, positive or negative; they normally aren't useful for judging whether a product is worth buying.
Review bombings on the other hand only happen when something happened to trigger the internet's sweaty wrath, and that is very useful as a unit of measure. Any particular review in the bombing is worthless like any other review, but the collective act of review bombing is very valuable information.
How is that a useful metric at all? Usually it just means there is some controversy, which implies no value judgement on said controversy if you are unfamiliar with it. So unless you research the controversy behind every review bomb, it's better to just ignore them.
I do look into what controversy triggered the review bombing. If it's a problem I take issue with, I can think twice about hitting that big, green Purchase button.
But I wouldn't have that chance without the review bombing telling me there's something I might want to check first. Developers and publishers are very good at (and incentivized toward) hiding the problems review bombings can otherwise reveal.
It's a great way to weed out woke nonsense, yes. It's also great to weed out draconian DRMs and EULAs, stealthy price hikes immediately prior to a sale, shitty developer behaviour in general, and other such factors I would not want to spend my money on.
Reviews by themselves don't tell me anything important, especially in times of peace. Review bombings on the other hand tell me there is or was a problem I might want to look into before I open my wallet, and that's very valuable information for making informed purchases.
"Woke" started as a term used by young American social progressives as a reference to becoming aware of things which they had previously been unaware. Sort of a "Neo waking up from being in the Matrix" type of waking from a dream, delusion or false reality.
This could cover almost any social, economic or political issue. Generally though it was used around issues related to race, gender or sexuality. People would sometimes use the phrase "Stay Woke" as a shorthand for something like "Stay aware of racial or gender disparities and work to counter them".
The term was subsequently used in a derisive, pejorative or mocking manner by people who perceived the first group (the "woke" people) to be factually incorrect or excessive in their demands.
People using a phrase like "woke nonsense" are expressing disdain for inserting race/gender/sexuality issues where they don't belong or being frustrated by too-extreme measures being advocated to address such issues.
There's my attempt at writing a very neutral etymology of the usage of the term.
I also don't know what woke means, but I recall that Superhot VR got review-bombed after an "update" removed some content because it contained self-harm.
They could use this, along with a bubble on the site explaining that this might be review bombing, and show the score of the book disregarding these events.
I’ve never understood why people have such a negative opinion on goodreads, though. I guess this is the only thing I know of that makes it bad.
If anyone at Goodreads or Amazon cares about the platform at all they have a funny way of showing it. It’s been stagnant and shoddy for years.
Goodreads really has only one thing going for it: a pristine data set of books, authors, cover images, etc. There are so many other reading apps with a better UX but they are frustrating to use because the data is spotty or low quality.
> Goodreads really has only one thing going for it: a pristine data set of books, authors, cover images, etc. There are so many other reading apps with a better UX but they are frustrating to use because the data is spotty or low quality.
And spoilers in book titles (the Goodreads ones, not the physical books)
Just a quick search for "a shocking twist" on goodreads finds many many books:
Great, now I know that the killer (or whatever) they catch right before the end isn't the real one, and that there will be a twist in the last few pages.
It depends on what you use it for. I use it as a database of books I've read, and would like to read, and sort them with tags. I could not give less of a shit about other users on the site.
OpenLibrary offers that kind of dataset as a public service, on a non-profit basis. (Note: this has zilch to do with the whole controversy over them lending in-copyright books. The book metadata is an entirely separate thing and quite legal.)
Maybe start with showing reviews that are verified or somehow legitimate. There is a problem on both ends of inflating and bombing titles.
I personally cannot stand the former where publishing teams will buy or fake mass amounts of reviews to play into the NYT bestseller formulae.
It is fairly easy to tell an ad hominem in a "review bomb", in my experience. Usually it relates to politics or something controversial rather than the actual thing itself.
Not just reviews, I thought it was a well established practice to purchase one's own book to ram it up the rankings.
I have also heard that it is a backdoor way of spending political campaign money. Candidate "writes" a book, the Campaign to Elect X buys a warehouse of the books, candidate then gets the profits.
I had a brief vision of a warehouse filled with e-books.
But seriously, I have a similar feeling about book deals. That is, it's a great - and legal - way to pay (off) someone for "prior service", meanwhile the book's sales are shite. But selling books was never really the point.
Review bombing isn’t just negative. People also review bomb their competitors with positive reviews in order to get their competitors’ listings taken down. How do we stop that?
The harder we police against fake positive reviews, the more we incentivize this sort of anticompetitive sabotage.
Accounts should be old enough before contributing reviews.
Accounts should only be able to review X amount of titles an hour/day/etc.
Moderators should have tools to identify influx of reviews and decide mitigation strategies.
Moderators should be able to put up a banner on a title that may be review bombed so the community can report suspicious reviews.
Etc.
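The first two items on that list are easy to gate at submission time. A minimal sketch, with all thresholds and names invented for illustration:

```python
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(days=90)   # hypothetical threshold
MAX_REVIEWS_PER_DAY = 5                # hypothetical threshold

def may_post_review(account_created: datetime,
                    reviews_last_24h: int,
                    now: datetime) -> bool:
    """Gate a review submission on account age and a per-day rate limit."""
    old_enough = now - account_created >= MIN_ACCOUNT_AGE
    under_limit = reviews_last_24h < MAX_REVIEWS_PER_DAY
    return old_enough and under_limit
```

Cheap to run on every submission, and it raises the cost of throwaway-account bombing without touching legitimate long-lived accounts.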
These honestly aren't policing much, just general common sense for these platforms. They're also heavily needed on many other platforms that get spam in similar ways.
The question is not whether they can detect an influx of fake positive reviews on a product, it’s what to do about it. How do you tell the difference between someone review bombing their own products with fake positive reviews and their competitors review bombing them with fake positive reviews to try to get them banned?
Maybe you decide that sounds too difficult, so they should stop banning listings for that and focus on policing the reviews themselves. But that actually turns out to be a very difficult problem in itself! Sellers of crappy products will often include notes that induce/bribe the customers into leaving fake reviews. A seller of cheap products can rack up thousands of verified purchaser fake reviews this way.
They can also do things like hire a botnet to purchase the product in bulk using thousands of hacked PCs all over the world and get those PCs to review the product, generating more fake verified purchaser reviews.
In the end, if you don’t ban sellers for fake positive reviews then you run into these tactics and it’s very hard to police individual reviews. If you do decide to ban sellers then you have the opposite problem of dealing with sabotage.
As for detecting “influxes”, that problem is far from trivial. If a review bomber is using a botnet of thousands of PCs then it’s quite trivial to stagger the timing of the fake reviews over a period of weeks. It’s not like they’re forced to push a button and have them all hit the review page at the same time.
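For what it's worth, the naive "influx" detector people imagine is something like a rolling z-score over daily review counts, and staggering is exactly what defeats it. A sketch (window and threshold are made up):

```python
import statistics

def influx_alerts(daily_counts: list[int],
                  window: int = 14,
                  z_thresh: float = 3.0) -> list[int]:
    """Flag days whose review count is an outlier vs. the trailing window.

    A bomber who staggers reviews to stay under z_thresh on every
    single day never triggers this detector.
    """
    alerts = []
    for i in range(window, len(daily_counts)):
        hist = daily_counts[i - window:i]
        mean = statistics.mean(hist)
        stdev = statistics.pstdev(hist) or 1.0  # avoid div-by-zero on flat history
        if (daily_counts[i] - mean) / stdev > z_thresh:
            alerts.append(i)
    return alerts
```

A one-day spike lights up immediately; the same total volume drip-fed over weeks stays under the threshold, which is the point being made above.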
Sounds like there is a need for identity-verified reviews, i.e. the account doing the review has provided some government-issued ID during registration. Then add another criterion: they can't post reviews unless their account is, say, 3 months old (or some timeframe).
You could still record unverified reviews, but hide them and their ratings by default until the user explicitly wants to see them.
Who is going to scan in their government ID and send it to a book review site, except people on the payroll of a book publishing company that requires its employees to register an account on their first day of work so they can "review" books in three months?
I'm a legit user of Amazon. My account is probably over 20 years old. I've ordered tons and tons of products. I've never reviewed anything, however I have received products containing "gift certificates" promising me a $10 reward for leaving a fake review (of the product I just purchased). How does Amazon stop me from accepting this bribe?
I'm unsure if I'm playing devil's advocate or not, but should that not be allowed? That seems more about getting over the bias that people who are content with something are generally less inclined to leave a review than somebody who is unhappy with something. If you get something and it's complete crap, I doubt a $10 gift certificate is going to convince you to go write a positive review of it. If anything, that'll probably end up motivating you to write a negative review, with further mention of it.
> If you get something and it's complete crap, I doubt a $10 gift certificate is going to convince you to go write a positive review of it. If anything, that'll probably end up motivating you to write a negative review, with further mention of it.
I think it is if the product itself is cheap enough. A $10 product where you get a free $10 gift certificate is essentially free. I think lots of people would be willing to leave a quick 5 star review in exchange for getting the thing they just bought for free, even if the product is kinda crappy. As long as it’s not totally broken and useless!
False flag operations? Amazon slips their own false review gift certificate in. If you use it, they flag you as an ignorable reviewer (shadow ban or whatever).
What incentive does Amazon have to do this? Best case scenario: they ban and alienate a bunch of customers who bought stuff on the site. Worst case scenario: they spend tons of money on these false flag operations and end up removing a lot of bestselling products from the site, reducing sales overall.
It would be a review shadow ban only. Wouldn’t stop them buying anything or leaving reviews.
I doubt it would cost a lot of money from Amazon's perspective.
But yeah, I agree Amazon's incentives are not the same as their customers'. They obviously have a lot of insight into what drives and lowers sales.
Maybe, although I seriously doubt it, at some point they will see enough reduction in sales to take action against poor-quality products and their misrepresentative reviews.
The overwhelming majority of review bombs are from real people, against which ID would do nothing. Steam has shown a really simple solution to the problem - a mixture of proof of purchase, and then algorithmically walling off 'irregular activity.' Review bombs on Steam remain relatively regular, but now have basically 0 impact - even extremely large-scale ones, like when somebody gains the ire of Chinese gamers.
Proof of purchase would require some modest effort on the part of publishers to create a workable and shared system, but it should otherwise be trivial to get going. Publishers have a strong motivation to opt-in since fake reviews are probably more detrimental than not to their bottom line.
Once someone is identified as a serial review bomber, then you’d just ban them and all their reviews. Or at least flag their reviews so they aren’t shown by default (similar to a shadow ban).
That person could no longer write verified reviews. Maybe it's a permanent ban, maybe it's a 1-5 year ban. They wouldn't be able to verify another account with their identity until the ban is lifted.
Yeah, Steam is good, but mostly because it is a very closed-loop system: they own the entire transaction-review lifecycle. Not so easy with other things like books.
You can’t identify the review bombers. These are 3rd parties hiring the reviewers to leave reviews. That hiring process takes place offline (from the perspective of your site), so you have no way to monitor and detect these directly.
As for the individual reviewers, there are tens of thousands or millions of these, and review bombers can trivially cycle these in and out, so no individual reviewer is a “serial fake reviewer.”
It’s trivial under current systems. But once you start placing higher value on verified identities, those millions of bombers are going to turn out to be a lot smaller.
It doesn't take millions to boost a specific product's sales. As few as a thousand fake reviews can get the job done. And bombers will find ways to corrupt the verified reviewers, since their reviews will go up in value. And the cost for Amazon to verify the reviewers is really important to take into consideration. If verified reviewers end up being untrustworthy, they've put in that effort (time and money) for nothing!
This is really just a special case of the wider issue of fraud. And I'm fairly convinced of the argument that the optimal amount is nonzero [1]. A further wrinkle to this problem is that fraud tends to only get more sophisticated over time, so the costs of fighting it always go up. At some point we may reach a scenario where fraud is so rampant that entire business models become non-viable.
That Steam idea of giving the viewer more options on how to filter the reviews sounds great actually: let reviews come in, but allow the viewer to filter against accounts that burst reviews, or by country, etc. Pretty cool idea (in particular because it allows privacy).
Although really the current concept of "review" is terrible. Few reviews are detailed and specific enough and multi-dimensional enough. They cannot be searched well enough (Although we might be nearly there with LLMs - asking them to critique a book from our point of view and preferences.) It's hard to follow reviewers that you have learned to trust (hard enough to notice them in the first place). Star ratings are not useful in the intended way and only a few sites ask for multi-dimensional star ratings (generally asking to rate irrelevant things :-) Bleh!
So far "verified" has run into the issue of how to verify while maintaining privacy and while allowing "not purchased on this platform". Verified sounds great in theory but it's very hard to imagine how it would work.
Separating account verification and restricting the amount of reviews allowed - separated from privacy-respecting reviews might be a direction. But currently it's hard to trust anyone's "privacy-respecting" features.
(My impression is that in consequence, currently, "verified" means "mostly positive".)
To me this is just one in a continuing series of examples of GR's lack of care for their platform. The issues mentioned that lead to and encourage this behaviour have existed for well over a decade. Their solution is to make it a userland issue.
Name one social platform that doesn't have this issue. It's not a trivial thing to solve. I don't want to have to submit my passport to share a book review.
Although I know you said this in jest, it got me thinking that it would be really fun to have a system where you have to submit answers to a short quiz about the book before you can submit your review. No reviews would be removed, but you would, if you wanted, be able to filter the global average by people who scored over a certain threshold on the quiz.
Probably you could even generate new questions every time with GPT, so that people couldn't cheat. This system would have a million problems, sure, but would it have more problems than the current system?
The next step would be to just ask ChatGPT to write a couple of reviews about the book, and not bother with letting people submit potentially fake reviews at all.
> I don't want to have to submit my passport to share a book review.
Why is it important that you, a person apparently unwilling (hyperbole aside) to prove that you have actually read a particular book, should be able to transmit (ostensibly) your opinions about said book into the heads of other potential readers via a software platform? A small number of verified reviews is almost certainly better than a large number of unverified reviews, for exactly the reason highlighted by OP's article: when there are few or no controls on who or what can post however many reviews, the system is inevitably exploited by bad actors, and said actors often operate at scales that dwarf the impact of a single legitimate reviewer.
Some people are suggesting verification that the reviewer actually bought the book, like what is done on amazon.com itself. It is different, as Goodreads is not amazon.com; the goal (at least what people are using it for) is a social network built around people organizing their reading lists, sharing them with others, and following what their friends are reading, in addition to reviews for knowing more about books and people's reactions.
Some of the problems with this proposal
1- Books borrowed from friends or libraries
2- Book bought from different platforms or stores
3- Books downloaded from the internet legally (or illegally)
Most of my reading is of books Amazon can't verify I actually own, and I usually write reviews when I want to share them with my friends on Goodreads. That would be hard to implement, and if it happened I think I would personally stop using Goodreads, probably along with most of the people I know.
Is that "log-rolling"? In ancient times, Spy Magazine had a "Log-rolling in Our Time" section, where they would quote mutually complimentary reviews by two writers.
It's not super difficult to only show reviews you are highly confident AREN'T completely fake.
Instead they prioritize having as many reviews as possible - when possibly a majority of them are fake.
And the cost is - the reviews are completely meaningless.
All they care about is engagement - and if users have to dig through reviews manually trying to sort out signal from noise - that's engagement to them, not an awful product.
Not all they care about is engagement. They also care about keeping costs down.
If costs were no object they could simply hire well-paid, trustworthy, professional reviewers to review every product and listing. These high quality reviews could establish their site as the gold standard, strengthening the brand (think Apple levels of brand value) and drawing even more users. But all of this could be so expensive they’d actually lose money for doing it. So they don’t.
Meet Kirkus Reviews. They will actually read it and write a (I guess) well-reasoned review of it.
But you, the author, have to pay them. When I checked, it was $450.
If you're an author, you also get a deluge of offers to write a review. I didn't follow up on any of these, but you'd assume that they couldn't get any repeat business if they panned your book.
Maybe the Bureau of Consumer Protection, for fraudulent and deceptive practices? There's plenty of precedent, and Amazon's failure to do anything about fake products and fake reviews, if not active encouragement of fraudulent behavior, is a real economic problem where we the consumers pay the costs.
Reviews are, by their nature, personal opinions. I don't see how you can put a framework of fraud protection around that. Maybe in extreme marginal cases, but for the central idea of reviews, it just doesn't work.
Where there’s a will, there’s a way. This isn’t about policing valid opinions, it’s about identifying what is currently a massive and obvious trend of actual fraud. Requiring authentication for reviews and products is the first step, and isn’t that hard; we know how to do it already. Amazon is choosing to allow reviews and products by people they know are bad actors. They look the other way and allow fraudsters to be anonymous because there are no consequences yet. They’re not imposing consequences on fraudulent sellers or reviewers, even though they could.
I don’t believe the main problem is that they don’t know how to identify fraud, I think the main problem is they don’t have any real incentive to.
Shouldn't they start by fining the reviewers? Have they tried sending a subpoena to Amazon for all the reviewers' IPs and purchase histories? Did Amazon block it or something?
This won't stop people from lending things. It just means borrowers can't leave reviews.
I'd rather have a much smaller set of verified reviews than a bunch of fake ones. Decreasing the quantity in this case often means improving the quality and signal to noise ratio.
There are lots of people who will be receiving books on Christmas. With your scheme, they won’t be able to review them. Or people who buy from other bookstores. Or other countries.
There is a difference between reviews on Amazon, where limiting to purchaser makes sense, and Goodreads that is open to everyone.
Yes, this will absolutely decrease the number of eligible reviewers. That's a given. But the argument is that it would also help combat spam and fake reviews, so while the number of reviews goes down, the signal to noise ratio goes up. People who read the book without buying it wouldn't be able to review it, yes, and that is an unfortunate side effect. However, it still leads to more useful reviews than if those borrowers are also lumped together with spammers and bots, which make the reviews useless for everyone. It's not a perfect solution, just an improvement over the existing mess (on Amazon).
And yes, agreed that Goodreads needs a different system. I was responding to a post about Amazon, and even there mentioned that goodreads would need something different, maybe reputation based or whatever.
If you do that, malicious writers will have bots buy their books, get the precious "verified review" badge (and bump the book's ranking at the same time), and give the book raving reviews. At that point, it just becomes an expense, probably more efficient than buying an ad. In the meantime, legitimate readers who didn't buy the book themselves won't be able to comment on it.
Not sure it would really improve the signal to noise ratio.
In this case, someone who borrowed the book is more likely to be trustworthy (IMO) than someone who verifiably purchased it from the site hosting the reviews (particularly if reviews were limited to the latter).
It would be thought of as an advertising expense to buy books from the platform in order to qualify to write some of the first reviews.
As a hypothetical, sure, I can see how that makes sense. However, verified purchaser reviews have been a helpful reality for the better part of a decade or so, across different media and vendors: games, physical goods, Amazon books, electronics, etc.
Even though it's an imperfect barrier to review quality, at least it is a barrier versus none, and time and time again retailers who've experimented with it have found it a useful signal.
Yes, you'll lose out on the borrowers who fell in love with a title, but they can always spend the $20 to buy that book to review it if they really want to (I've done that for things I loved, just to leave a review). But even if they don't, it's still usually a net win: you lose not just the honest reviewers but the 100x spam that outnumber them.
When shopping for low-cost products these days, it's not the individual review that matters anymore, just the aggregate ratings. So it doesn't really matter if a few borrowers don't get to leave their rating, it's still more useful for buyers to cut out the spam.
----------
Maybe a simple compromise is to just allow filtering (and adjust the rating) by "verified purchasers only", so you can see both the unfiltered and filtered scores. I'd just leave out the unverified ones altogether, myself, but others could see the raw scores and reviews if they wanted to.
But then that means goodreads wouldn't be a platform for readers anymore. It would be a platform for Amazon customers only. We already have amazon.com for that. What would be goodread's added value?
How do you define "fake"? Review bombing is generally real people with real opinions. It takes advantage of the fact that a) it's mostly impossible to verify someone has read a book, and b) most people don't read, and c) most people who read don't leave a review, so anyone motivated to review gets an outsized voice.
Yes, and then Goodreads reviews become the most trusted review source on the internet. Thus making it critical to have good reviews on the site. Thus causing the cheaters to try even harder to post on the site. It's probably not a winnable battle.
Reviews on Amazon are heavily gamed and you should not trust them at all. I regularly get little business cards offering gift cards for 5-star reviews. Product listings get swapped for entirely different products while the reviews stay.
What works for me is: the reviews should fall off like a neat staircase, 5s the most, then 4s, 3s, 2s and 1s. If the 1s are bigger than the 2s by a lot, it's usually the classic "sell 500 good-quality ones, get the reviews, switch to the knock-offs".
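That heuristic is simple enough to automate. A sketch, with the 1s-vs-2s ratio threshold being my own guess:

```python
def looks_like_bait_and_switch(star_counts: dict[int, int],
                               ratio: float = 2.0) -> bool:
    """Flag the classic pattern: counts step down neatly 5 -> 4 -> 3 -> 2,
    but the 1-star bucket dwarfs the 2-star bucket (product got swapped)."""
    ones = star_counts.get(1, 0)
    twos = star_counts.get(2, 0)
    # staircase check over the 5..2 buckets
    descending = all(star_counts.get(s, 0) >= star_counts.get(s - 1, 0)
                     for s in range(5, 2, -1))
    return descending and ones > ratio * max(twos, 1)
```

It only fires when the distribution is otherwise healthy-looking, which is exactly what makes the oversized 1-star bucket suspicious.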
This doesn't seem like an impossible problem to solve for at least 80% of cases. I'm sure there's a long tail of review bombing, but in any given week, how many different titles are being brigaded? They say there are 300 volunteer "librarians" who moderate reviews: a relatively safe plan would be to get another couple dozen of them, give them a dashboard to show clusters of unusual review activity, and see how many wildfires they can put out just by throttling the number of reviews, or temporarily halting reviews for a book.
It’s a problem of Amazon not wanting to spend money. They could significantly reduce the problem by taking Goodreads off back-burner status instead of expecting volunteers to do it for free.
It is important to stress that this is a problem related to the sales of new titles, not established and time-tested ones. There is also huge competition, as more and more books are published each year. And show me at least one marketplace where the reviews are 100% legitimate.
I think publishers and authors want to free-ride on such platforms to reduce marketing spend, but then they face trolling and there is nothing to do but cry (though the monetary effect is unclear).
Instead they could invest more into multichannel ads and build alternative platforms for readers to check what a book has inside. But yeah, let's depend on Amazon and just wait until it starts asking for money to mitigate review bombing ;)
Besides who reads the new titles when there are thousands of amazing time tested books?
> Corrain's downfall came after internet sleuths published a Google document detailing a number of Goodreads accounts praising Crown of Starlight and giving low reviews to works by other writers, many of them people of color.
I had the same reaction, however if you read the linked Google Doc from the article, "Lily" (the author under a pseudonym) even mentions how racist "her" account is.
That does strike me as the author being aware of her own choices and 'getting ahead' of the story.
Seems like a problem that could be solved, in part, by reopening Goodreads’ APIs (so that people could write ring detectors, flag inauthentic behavior, etc.). But Amazon doesn’t appear to be interested in that.
I am asking Amazon to please let go of Goodreads and restore it to its proper state before they ruined it. But neither of us is going to get what we want.
Thanks for the pointer. I've been checking this out, and it looks pretty good.
I can't find any way on the site to ask questions, either of the developers or of the community. What I'm trying to figure out is the semantics of their reading preferences exclusions.
The actual question asks about "types of content that you never want to read about", and that "about" confuses me. Suppose that I choose "racism" there. Does this indicate that I never want to read anything that somebody has tagged as racist (which I think is probably the case, but is less than useful because it seems like darn near everything gets tagged with that one way or another)? Or is the "about" saying that I don't want any meta-discussion of racism, that is, no books that are "about" racism, which is absolutely the case.
> I can't find any way on the site to ask questions, either of the developers or of the community.
I can see a "contact us" button both on mobile (in the hamburger menu, top right) and on desktop (bottom right corner).
Right now it seems to be a bit too sensitive and hides books a bit too eagerly, but the team is aware of that. That's about the extent of the info I have, though.
If people want to look for better books, they're better off just signing up for zlibrary. Free books, people can review if they want, and probably real humans doing so (until bots invade Tor).
I actually get great book recommendations there via "Personally recommended" when I go to fetch something new to read. Far better than anything commercial.
Chess and tennis rating systems work reasonably well. Academic grading does not. Academics will never change; the humanities will never go along.
While sites like Netflix use all the AI they can muster to tune and individualize their recommendation systems, the review systems we know best rely on straight democratic votes that can be gamed. Or worse, they rig the votes for profit.
The AI objective function should be to synthesize advice relevant to each user. As a component of this, treat each reviewer like a player in a tennis rating system, developing a rating that summarizes how reliable their reviews are. One could apply weights, tuned to each user, for the relevance of every other user's reviews. If there were a shadow economy behind these ratings, as if a hedge fund had to pay pennies to include each voter, the "review bomb" voter's inputs would quickly become useless and be disregarded.
What we have instead is singularly dumb. Is there room for a startup here? Google page rank was obvious in hindsight. Won't this too be obvious in hindsight?
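To make the tennis-rating idea concrete: a standard Elo update between two reviewers who rated the same book, where later evidence decides who "called it right", plus an Elo-weighted aggregate. A rough sketch, with all constants and function names invented:

```python
def elo_update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Standard Elo: the reviewer whose verdict matched the outcome gains
    rating from the one who didn't; upsets move ratings more."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner + delta, loser - delta

def weighted_score(reviews: list[tuple[float, float]]) -> float:
    """Aggregate (stars, reviewer_elo) pairs, weighting each rating by the
    reviewer's Elo. A coordinated bomb of fresh low-Elo accounts barely
    moves the result."""
    total_weight = sum(elo for _, elo in reviews)
    return sum(stars * elo for stars, elo in reviews) / total_weight
```

Unlike a straight vote, a bomber here has to earn rating before their vote counts for anything, which is the property being asked for above.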
I thought there were some studies done that showed that 5-star rating systems can give worse results than a simple thumbs up/down rating. IIRC that's why Netflix moved to a simpler rating system (though they added a "two thumbs up" at some point).
While on first glance it might seem useful to have the nuance of being able to award between 1 and 5 stars, those intermediate star rankings tend to mean different things to different people, so a 2-star rating from one person might mean something fairly different coming from someone else.
Also consider how Uber/Lyft ratings are decidedly nonlinear. Anything below 5 stars -- even 4 stars -- essentially means there was a fairly large problem, and drivers will get "fired" by the platform well before their average even drops to 4.0. If any platform would benefit from a simple thumbs up/down rating system, I think it's that one.
> And not subjective at all, like book quality.
Certainly in chess and tennis they're not subjective, but a simple thumbs up/down just means "I liked this" or "I didn't like this". Sure, if you only have a handful of reviews, that maybe doesn't tell you much, but as you accumulate more, the average starts being more useful. And of course this doesn't preclude the option of allowing people to write something in addition to the binary choice.
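On "the average starts being more useful as you accumulate more": one well-known way to rank items by binary thumbs votes without letting 2-out-of-2 beat 95-out-of-100 is the lower bound of the Wilson score interval. This is my addition for illustration, not something the thread mentions.

```python
import math

def wilson_lower_bound(ups: int, downs: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for the true
    thumbs-up fraction; small samples are penalized automatically."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)
```

Here `wilson_lower_bound(2, 0)` is about 0.34 while `wilson_lower_bound(95, 5)` is about 0.89, so the item with more evidence ranks higher even though its raw average (0.95) is below the other's (1.0).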
I was opening up the question of how different models for aggregating reviews would be naturally immune to review bombing, using Chess and Tennis ranking systems as inspiration. There's nothing about that style of aggregation that locks one into a particular kind of base signal.
I’m interested to hear what value HN folks find in book reviews from strangers. I thought the value in something like Goodreads would be to create a community of people you know IRL to share what you read. I don’t see myself having much in common with a bunch of people I don’t know.
That's an interesting perspective. A total stranger has gone through many of the same trials and tribulations as me, simply because they're human and alive at the same time: being born, learning how to eat, how to afford to eat.
If you know someone IRL, you don't magically have something in common with them that you didn't before. A stranger might have been born in a foreign country, so we don't have that in common; after I get to know them and become friends IRL, we're still not going to have that in common. So the idea that you don't have anything in common with people you don't know seems weird to me, because of course you have things in common; you just haven't discovered them yet.
Is a book review by John Updike at all useful to me? I didn't go to Harvard and don't have that in common with him, so why would I value his book review?
For me the main value of reviews, kinda like ChatGPT, is a vague summarization. I read a few high-rated ones and a few low-rated ones, and based on what the reviews focus on, you can kinda tell: is this the kind of book I’m likely to like?
Do the negative reviews focus on SJW topics I don’t care about? Do the negative reviews focus on opinions, or do they focus on evaluating expertise/facts? Do the positive reviews focus on world-building, writing quality, or characters; what do they like about it? What do they compare it to?
Sometimes I do this when I’m 10 or 20% into a book and I’m trying to evaluate if I care enough to continue. I’ll see if other reviewers say the book picks up or slows down etc.
Like many others who recall the early days of the open Internet, I am disgusted by the practices and business model of sites like Goodreads for the last decade+. Now they cry "foul" and ask for help? Cry me a river.. better self-organizing reader circles are the answer.. "winner take all" is working out just as many envisioned.. ensh*tification
It means that Amazon does not seem to invest anything into this acquisition anymore. Even the maintenance is lacking; this year they had an outage measured in hours.
Re-read the article, or at least the second paragraph. They (and some buddies or sock puppet accounts) were down-rating other authors’ books, specifically targeting authors of color.
> “I boosted the rating of my book, bombed the ratings of several fellow debut authors, and left reviews that ranged from kind of mean to downright abusive,” she tweeted.
The sock puppet accounts were given stereotypically of-color names, and targeted authors were disproportionately of-color. There’s pretty clear indication of bias there.
> They posted a 31-page Google document containing screenshots of the activity of a number of accounts, with usernames including “Chantal B” and “Oh Se-Young”, suspected to have been created by Corrain.
> “There’s something extra despicable about using clearly POC [people of colour] names in the fake accounts to upvote every negative review on POC books so the top ones are all 1 star and 2 star. Like what in the yellow face,” Zhao tweeted. Many of the accounts linked on the document Zhao shared have now been deleted.
> The sock puppet accounts were given stereotypically of-color names, and targeted authors were disproportionately of-color. There’s pretty clear indication of bias there.
AIUI, authors in the subgenre are disproportionately non-white. If that's the case, it would be expected for those targeted to be similarly non-white.
None of this is a defense of the offender: she's definitely a big jerk. But it doesn't necessarily mean that we have to make every freakin' issue about race.
Break this down to just the facts presented. There are 31 pages of screenshots. They mention two names that were included that sound like POC. Note that they avoid saying anything beyond that. And they make sure to include the disclaimer that they are "suspected" to be sock puppets. They are clearly grasping at straws to bring race into this.
Is "Chantal" a stereotypical POC name in your opinion? News to me.
You have a lot more confidence in a bunch of people doing OSS "research" than I do. The quote itself says "suspected". I would go so far as to say that believing this was racially motivated (rather than just a scummy move) based on this "evidence" is weird, but not surprising.
>Is "Chantal" a stereotypical POC name in your opinion? News to me.
Interesting. I would have guessed that Chantal is a 90% black name, but I was wrong. It is only slightly more common for black women, and there are many whites with the name too. This may be a regional distinction.
The authors targeted have all been people of color, iirc. Given publishing is like 75+% white, it's very improbable to just accidentally target half a dozen or more debuts who all happen to not be white. Additionally, there is a conversation that the author faked having with a "friend". The "friend" actually admits to targeting POC authors in order to "sabotage" Cait.
Whilst I don't disagree with your conclusion, the sibling comments cover:
- The subgenre this all happened in has a very different makeup than western publishing broadly, and
- The authors targeted were not all 'people of colour'.
I agree she was likely secondarily racially motivated in some way - maybe due to trust in that subgenre, or the view in some area of the internet that 'people of colour' cannot be questioned. Or maybe she's just a racist killing two birds with one stone. She seems a serial liar, so I doubt we'll ever know for sure.
>But now we have the internet. The assholes can see you, touch you, talk to you, even decide what you are allowed to say.
I would say the direction is the opposite: assholes don't seek people out; people seek out the assholes.
In the real world, people don't walk up to strangers and ask them to disclose their most offensive opinion. However, on the internet, people will scroll through pages of agreeable comments looking for something disagreeable.
The same is basically true for reviews too. The physical-world default is that you don't know what people think. The internet allows people to know what others think, including their dumb opinions, swings, and trends.
>on the internet people will scroll through pages of agreeable comments looking for something disagreeable.
I thought that I was the only person who does this.
Therefore, given a forum full of people, the chance that you will offend somebody is increased. It becomes less "say something offensive" and more "say something merely noteworthy".
We get a “The nail that sticks out gets hammered down” situation. On the scale of the whole internet.
The internet becomes a huge engine for imposing conformity.
If you find a critic who usually aligns with your opinions on what's good about something that's a valid approach. The problem is if you don't have one, cause well, it's all subjective in the end, even if it's from a "well known name" like a critic.
They show a graph of positive/negative reviews over time, making review bombing stand out.
If Valve identifies the anomaly as being off-topic, they can even mark it as such and exclude the reviews from affecting the overall review score. Users who don't like this can disable the exclusion and get the "raw" review score.
More info from when this was introduced: https://steamcommunity.com/games/593110/announcements/detail...