How susceptible is HN to ideological meddling, e.g. by governments or "Big xyz"? If susceptible, what would be the signs, how could it be managed, would the HN API aid such meddling, and does it even matter?
Paid by whom? There are companies and employees on HN posting, commenting and promoting their business and services, and there's absolutely nothing wrong with that.
Is someone paying to get upvoted? Almost certainly, but I doubt it's worth a lot here, where the sorting algorithm is much more opaque than Reddit's (a rough community reconstruction of it is sketched at the end of this comment). On Reddit, getting to the top is just a numbers game.
Also, even though there are far too many political posts on HN, they're the minority, and I can't see a reason for some secret government agency to pay staff to change our views about the political world. Reddit is much more politicised, and it'd be naive not to assume every major government has a propaganda team of Reddit posters.
(I'm using Reddit as another example of a popularity-contest forum like HN.)
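For the curious, here is a rough sketch of the community-reconstructed ranking formula. The real algorithm is not public and applies extra penalties, so treat the shape and constants here as assumptions:

    # Community-reconstructed approximation of HN story ranking (not official).
    # The gravity constant and overall shape are assumptions based on publicly
    # discussed reconstructions; the real algorithm adds penalties on top.

    def hn_rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        """Higher score ranks higher; the value decays as the story ages."""
        return (points - 1) / (age_hours + 2) ** gravity

    # A fresh story with few points can outrank an older one with many:
    print(hn_rank_score(points=10, age_hours=1))    # ~1.25
    print(hn_rank_score(points=100, age_hours=12))  # ~0.86

Under a decay like this, raw vote totals matter far less than on Reddit; timing and sustained interest dominate.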
The political posts on here are amazing, though. You get well-thought-out arguments from both sides. The only downside is the downvoting/silencing of people's opinions when the masses at HN don't agree. The downvote feature should be disabled; let the responses/post chain do the judging.
Considering how reddit filters down to twitter and instagram and other local social networks, followed by local tabloids and so on, it's entirely reasonable to assume that.
Any anonymous forum is subject to astroturfing. Given the outsize tech audience on this forum, it would be worth money for some companies to pay someone to suppress unfavourable posts.
The answer is obviously "yes, there are paid commenters on HN".
The only real question is "how many", and "is that enough to render the discourse on HN worthless?".
It's hard to quantify, but in my many years on HN I have not noticed a significant change in the quality of discussions on topics that I follow closely.
I get downvotes for stuff that seems odd to downvote when it comes from someone trying to have an intellectual discussion. I don't know if it's a bot searching for keywords, someone with a grudge and several accounts, or legit downvotes. I don't let it silence me, though. I mean, numbers (karma) are just 1s and 0s. Ideas and honest discussion are much more than that.
Whatever happens, speak your mind; you've earned it by existing. Don't get discouraged by downvotes. The future depends on it.
If people interpret your comments as being on the wrong side of the most in-vogue political ideology of the moment, you will get downvoted to the lowest depths of Tartarus, may Hades have mercy on your account.
For government examples, try starting a discussion about America's torture program, war crimes, or independent anti-war media. Your post will be flagged, and it won't come back up, because such discussion 'goes against the guidelines, which are actually rules'. The moderation will be praised by high-karma accounts, who are quite happy with the status quo, thank you very much.
Check out any thread ever concerning Assange. Those comment sections are masterclasses in the derailment of productive conversation. Since pointing out obvious shills is against site rules, you're more likely to be banned for pointing out meddling than for spreading outrageous smears.
For "Big xyz", try writing a comment about GMOs, or agricultural pollution, monopolies, enviromental destruction, the not too distant horrors perpetrated by pharma co's, or any of a number of other third rail topics like unions or healthcare costs. 'Totally normal accounts' will slither out of the bushes and sealion your thread to death.
Yes, it matters - a lot. "The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum." - Noam Chomsky
I see how manipulating opinion on social networks can yield a gain. But how would entity X profit from manipulating opinion on a small community of tech enthusiasts?
Or maybe the recent resurgence of JavaScript frameworks on web, desktop, and mobile is a Russian plot to slow down Western computing and software engineering by manipulating online tech communities?
> I see how manipulating opinion on social networks can yield a gain. But how would entity X profit from manipulating opinion on a small community of tech enthusiasts?
Having your posts yelled at and downvoted can reduce your willingness to participate in discussion or debate. Especially when there's a stupid "karma" score that subconsciously gamifies our participation here.
I'd be willing to bet that having one's opinions silenced online trains one to stop talking about the same topics offline as well. Like a sort of conditioning.
And there are lots of powerful parties that want to silence certain opinions.
Why on HN? Because of the abundance of smart, wealthy, influential, and well-connected people here. Change these people, and your social engineering ripples throughout society.
One of the reasons I tend to lurk more than post. It’s disappointing to throw out an opinion and get attacked for it. I’m fine if you disagree (and I look forward to it), but it seems like some are conditioned to be offended by others’ opinions… or maybe paid? :)
There has been an obvious influx of paid accounts in my opinion. The quality of discourse has dropped.
I often check an account and notice it was inactive for a substantial period of time before a sudden change in tone.
Seeing the same anti-vax talking points (for instance) spread from Reddit to here has been upsetting. The main difference I have noticed is that the responses here are typically more technically sound, call out the flaws more quickly, and are obviously not being influenced by the astroturfing. It's still tiresome, but here we are.
Internet users tend to interpret comments they don't like as "paid accounts", bots, shills, spies, astroturfers, foreign agents, brigaders, and sundry other sinister manipulators. This is one of the biggest psychological mechanisms on the internet, so commonplace that the site guidelines specifically ask people not to do it: https://news.ycombinator.com/newsguidelines.html. No doubt there's something hard-wired in us that makes us jump from displeasure to suspicion.
That's not, of course, to say that abuse doesn't happen. It's simply to say that when considering abuse, we need to look for actual evidence. It's impossible to evaluate vague general claims. We need to see specific links. For example:
> I often check an account and notice it was not active for a substantial period of time before a sudden change in tone is noticed.
As the site guidelines ask, you should send us specific links so we can look into them. Such perceptions usually turn out to be rife with assumptions and projections, but occasionally we find evidence of abuse, and when we do, we crack down hard on it. The vast majority of it that we've seen is small-scale commercial abuse, though—voting rings, commenting rings, and various types of spam.
When it comes to disagreement on hot political/social topics, we all need to get used to the fact that society at large (indeed the world at large, since HN is a highly international site) is a lot more divided than we would like, and stop resorting to cartoonish Boris-and-Natasha explanations to explain away the unwanted facts that (a) a lot of people sincerely and strongly disagree, and (b) everyone has good reasons for their views, even when they're wrong or we feel they are.
I wrote a long post about this here: https://news.ycombinator.com/item?id=27398725. Note that all these arguments are limited to HN. Larger sites have different problems to deal with.
Even if you assume that there are Sufficiently Smart Manipulators [1] planting carefully crafted comments in undetectable ways, the conclusion that leads to is pretty much the same as what the site guidelines call for anyway: we should respond to bad arguments and false claims by patiently and respectfully providing better arguments and true information. If a community can do that, then hostile pathogens aren't going to kill it—and if it can't do that, then it's going to self-destruct anyhow. In either case, hostile pathogens aren't a fundamental cause, and the solution is to focus on and strengthen our own responses. The only viable long-run answer to the Sufficiently Smart Manipulator is the Healthy-Enough Community.
It’s a full-time job for many people in Eastern Europe and other places. It’s something of a weapon of modern warfare, and really quite effective. A cohesive US response could have saved at least 500k lives. As a military power we’re somewhat undefeatable, so sowing division is an alternative. Who is paying the bills? I’ll leave that exercise for the reader.
I think these are mostly political interests like "weakening the Western economy" or "dividing Western liberal societies", and maybe a few richer people who had a bad experience with vaccines and think they're doing something good.
Actually, comment trolls cost next to nothing when hired in India or Russia. And they scale by infecting other people with the same thoughts, who then comment in similar ways themselves.
Often "they" buy some celebrities that run out of money. In Germany for example this is near ridiculous. Just look at Michael Wendler.
I don't know, but I don't really read the comments here any more.
A while ago the comments were the gold; now I don't really get much out of them. In my subjective experience, comments contradicting the post became the most popular. That doesn't make sense. You can't always contradict everything.
So HN became like any other news site: I skim through the headlines in a minute, and I'm done.
I'm fairly convinced there are. Often comment discussions for newer technical books pop up here on HN and get shilled for weeks like it's some legendary book that has been in print for decades. Then those discussions just die out until the next book enters the shill cycle (I guess?).
I wonder if there is research on whether karma-based moderation systems (HN, Reddit, Stack Overflow) actually solve the problems they are designed to solve.
I've seen what look like classic Russian and Chinese troll-farm talking points and argument tactics. NLP can flag this, but without accompanying metadata for basic digital forensics, it's too hard to confidently distinguish that from a regular troll.
If you're into this sort of analytics, we are always looking for data engineers and data scientists for Project Domino - see our DefCon AIV keynote for more info.
Most HN responses are thoughtful and reflect some background. Timezones vary, but are self-consistent. Similarly, different people gravitate to different topics. But when a reply is on a talking point associated with state misinfo and written in a misinfo/troll style (short, lies, unfalsifiable, red herrings, pushing asymmetric work onto the other person, ...), I check whether it's from an account mostly making non-technical comments, its timezone range, and other surprises. Unfortunately, there's no public way to do IP/browser-level correlation for account reuse and other common forensics.
While such an account is unlikely to be a programmed bot, human troll-farm employees are paid to do exactly this sort of campaign. NLP is great for catching funny behaviors and talking points, but without the forensic info, it's generally too hard to increase confidence. NLP on a message is like an alert, and NLP on the message history can tell you if someone acts like a troll, but metadata forensics are still typically needed to get proof points for saying it's a troll-farm-style troll (a rough sketch of this kind of first-pass scoring follows at the end of this comment). (Then again, if it walks like a duck and quacks like a duck, it's enough of a duck for many community leaders to treat it like a duck.)
If HN's trust and safety team prioritized this stuff, they should have the needed metadata already, and the forensic analytics patterns are pretty established nowadays - you don't need to pay FB's $1B/yr for it. I'm sure they do some of it already, in addition to the human moderation. Project Domino is all about building open AI so others can do the same without the high budget.
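As a minimal sketch of the sort of first-pass heuristic scoring described in the comment above (the features, phrase list, and weights are illustrative assumptions, not Project Domino's actual pipeline):

    # Hypothetical first-pass troll-style scoring. All features, phrases,
    # and weights here are illustrative assumptions, not a real pipeline.
    from dataclasses import dataclass

    TALKING_POINTS = {"false flag", "do your own research"}  # placeholder phrases

    @dataclass
    class Account:
        comments: list[str]        # full comment history, plain text
        technical_fraction: float  # share of comments on technical threads

    def troll_style_score(account: Account) -> float:
        """Return a 0..1 score; a high value only flags the account for human review."""
        n = max(len(account.comments), 1)
        short = sum(len(c.split()) < 25 for c in account.comments) / n
        on_point = sum(any(p in c.lower() for p in TALKING_POINTS)
                       for c in account.comments) / n
        nontechnical = 1.0 - account.technical_fraction
        return (short + on_point + nontechnical) / 3

As the comment notes, a score like this is only an alert; metadata forensics are still needed before calling anything a troll-farm account.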
Unlike chronological internet forums where all opinions must be heard, it's very difficult to successfully astroturf an internet forum that self-polices with upvotes and downvotes.
EDIT: When someone says something against common perception, downvotes in particular deemphasize it (turning the comment gray on HN, collapsing it on Reddit), as this comment unintentionally demonstrates, which is why successfully astroturfing such forums can be hard nowadays. The real astroturfers do Facebook/Twitter instead, where content can't be downvoted.
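To illustrate the mechanism (HN's exact thresholds are not public; the buckets below are assumptions):

    # Sketch of score-based comment deemphasis, as on HN (graying) and
    # Reddit (collapsing). The threshold values are illustrative assumptions.

    def display_treatment(score: int) -> str:
        if score <= -4:
            return "faded heavily"  # barely readable gray
        if score <= -1:
            return "faded"          # noticeably grayed out
        return "normal"

    for s in (5, 0, -1, -4):
        print(s, display_treatment(s))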
The most obvious ones are where meme t-shirts are posted to medium-sized niche subreddits and get mysteriously upvoted into the low hundreds, and then a sockpuppet asks OP for a link to buy it. Goodness knows what the more subtle bot accounts are up to. Eggs, bacon, and spam.
That's not astroturfing/ideological manipulation, and it's not strictly evidence of voting manipulation either: people do legit like meme T-shirts and want to buy them (no, I am not in that category).
Regardless, voting is just one check on bad behavior. The other is moderation, and at least on the subreddits I visit, moderators are strict about financial self-promotion, though enforcement can still vary.
Even in Hacker News comments there are people saying "hey, I'm working on a startup", someone asks "what startup?", and they reply "here it is" with a link. I've rarely seen those exchanges get downvoted.
> it's not strictly evidence of voting manipulation
Oh, it's very, very obvious, and Actual Human commenters all ask "what is this doing here?" And that style of message board spamming is decades old, so it's about as obvious to me as a flashing gif banner ad.
> The other one is moderation
True, once the mods wake up, the posts are always removed. But arguably the damage was already done, and like I said, if this many obviously manipulated posts slip through the cracks, we can make some statistical assumptions about the presence of more evolved predators.
Without anyone watching the Watchmen it's very easy for mods to be financially rewarded for protecting blatant adverts in posts even if it goes against a sub's guidelines.
Anyone that starts pointing this out gets shadowbanned or downvoted by the herd.
The most obvious ones I have seen are submissions that get silently deleted by the moderators within minutes, yet still go on to hundreds of upvotes later. That's usually brigading, but also spam and dog-whistle topics.
Also, some early vote manipulation on a new submission can have a lot of impact using very few accounts.
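Using the ranking approximation sketched earlier in the thread (again, a community reconstruction, not the real algorithm), a handful of early votes can outweigh many later ones because the age penalty is still tiny:

    # Five coordinated votes in the first half hour beat thirty organic
    # votes at hour six, under the approximate formula from above.
    def rank(points, age_hours, gravity=1.8):
        return (points - 1) / (age_hours + 2) ** gravity

    print(rank(6, 0.5))   # ~0.96, new submission with 5 early votes
    print(rank(31, 6.0))  # ~0.71, older submission with 30 votes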