
This comment breaks the site guidelines: "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."

https://news.ycombinator.com/newsguidelines.html

Sinister insinuation about astroturfing is the internet's favorite pastime. The overwhelming majority of this, as far as we can tell from countless hours looking at the data, is pure imagination.

Is it possible that the manipulation is so sinister and so clever that it leaves no traces we can see in the data, and yet thousands of internet commenters see what we don't? Sure it's possible. But following that path means abandoning evidence. That way leads to the wilderness of mirrors. The only sane way to look at this is to require some evidence, some objective peg of some kind (we'll take anything!) to hang your suspicions on. The presence of opposing viewpoints, downvotes, and flags on divisive issues is no evidence at all. It just means that the community is divided.

As far as I can tell, the psychological phenomenon driving this is that people are deeply reluctant to take in how wide the range of legitimately opposing views is. We're probably hard-wired to see the world as much smaller than it is. Bring us all, with that hard-wiring, into a community of millions of people on the internet, and the inevitable result is that people see spies, shills, astroturfers, and foreign agents everywhere. No—what you're seeing is that there are a lot of humans with very different backgrounds from yours. And on any issue with an international dimension, multiply that phenomenon by a hundred.

I've written about this a zillion times: https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme.... See also https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... and https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... for how often I repeat myself.



@dang, I collected 5000 exit points of what seem to be sources of Kremlin-bot activity here: https://gist.github.com/mikhailian/5d65694fdaaf0ccbab4c6cf39... Watch out: these are IPv4- and IPv6-formatted lists of subnets as exported from ipset.

There are some specifics to my use case, so take this with a grain of salt. Hope this helps sort genuine Putin-lovers from Kremlin bots.
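For anyone wanting to try a list like this, here is a minimal sketch of checking an address against CIDR subnets using Python's standard ipaddress module. The sample subnets below are hypothetical placeholders, not the actual contents of the gist.

```python
import ipaddress

# Hypothetical sample lines, stand-ins for an ipset export.
# The real gist's subnets will differ.
subnets_text = """\
5.18.0.0/16
95.161.64.0/20
2a00:1fa0::/32
"""

networks = [
    ipaddress.ip_network(line.strip())
    for line in subnets_text.splitlines()
    if line.strip() and not line.startswith("#")
]

def is_listed(addr: str) -> bool:
    """Return True if addr falls inside any listed subnet."""
    ip = ipaddress.ip_address(addr)
    # Membership is only meaningful within the same IP version.
    return any(ip in net for net in networks if net.version == ip.version)

print(is_listed("5.18.42.7"))  # True: inside 5.18.0.0/16
print(is_listed("8.8.8.8"))    # False: not in any listed range
```

Note that a hit only means the address sits inside a listed range; as the replies below point out, a range can also contain many innocent users.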


I picked a random IPv4 address from this list and looked at it. It's a dynamic mobile IP address from Belgium, used by thousands of customers.

Come on people, apply some common sense and stop the hysteria.


I know, there are quite a few of those. Pick one of the subnets, they are more interesting.


But what is somebody supposed to do with this list, when it contains dynamic IP addresses used by many people? Ban everything on it, based on the assumption that some of them are actually endpoints of suspicious activity, thereby preventing many innocent people from using the Internet?

At my previous company I dealt with all the scraping bots for 15 years; in the end I even banned all of Tor and many of the commercial proxy network providers, with the justification that our site (a CSE) didn't need anonymous posting because there was nothing sensitive and no private information on it. But I couldn't ban dynamic IP addresses for more than a few minutes, since the abusers originating from them happily obtained a new address within seconds and continued scraping, until their provider's entire IP address pool ended up banned from our site.


I should have filtered out individual IP addresses, leaving only ranges, before posting. Mea culpa.

IP addresses are blocked in a different context but land on the same list.

Still, to answer your question: dynamic IP addresses can be sticky. Where I operate, some ISPs lease out the same IP address on each lease renewal. The only way to get a new IP address is to wait until the lease expires, e.g. by switching off the router.


Thanks! I'll take a look. Feel free to ping me at hn@ycombinator.com if you like—it can take time to get to something like this but we're definitely willing.


It would be very helpful if there were a follow-up post to confirm whether, in this specific case, the insinuations were justified.


In which specific case?

Edit: I've now had a chance to examine this data. It turns out that the IP ranges in that list are so broad as to cover more than 50% of the posts on HN.
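As a rough illustration of how coarse such a list can be, here is a sketch of measuring how many IPv4 addresses a set of CIDR ranges covers, again with Python's ipaddress module. The ranges shown are hypothetical examples, not the gist's actual contents.

```python
import ipaddress

# Hypothetical ranges for illustration; even a single /8 spans
# over 16 million addresses.
cidrs = ["5.0.0.0/8", "95.161.64.0/20", "185.0.0.0/8"]

nets = [ipaddress.ip_network(c) for c in cidrs]
# collapse_addresses merges overlapping/adjacent ranges so that
# nothing is double-counted.
merged = ipaddress.collapse_addresses(nets)
total = sum(net.num_addresses for net in merged)

print(f"{total:,} IPv4 addresses covered")
```

A handful of broad ranges like these quickly covers tens of millions of addresses, which is how a block list can end up matching a large fraction of ordinary traffic.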


> countless hours looking at the data

Will you provide that data for independent review?

Edit: it's not that you shouldn't be trusted. The issue is the old “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” For example, what if a large segment of the user base, that regularly contributed extraordinarily positive engagements, existed solely for the opportunities to frame certain conversations, even in the slightest, or even in preparation for something in the distant future.


No, for the privacy reason pvg mentioned (edit: since deleted, but he made the simple point that it would violate users' privacy), and also for a different reason: it would convince no one. The only people with enough time and energy to bother looking at it would be people who already have strong views, and they would inevitably pull their preconceptions back out of the data and claim that they'd proven something.

And there's a third reason: sooner or later (maybe we're already there, maybe not yet) we have to assume a sufficiently smart manipulator (SSM) who's able to do whatever they want and be indistinguishable from a legit user. This is exactly what your phrase alludes to, I think: "regularly contributed extraordinarily positive engagements, existed solely for the opportunities to frame certain conversations". Once we're past that SSM threshold, all we can do is fall back on what the community ought to be doing anyway: answering false information with correct information, answering worse arguments with better ones, preserving community-ness without falling into war.

Since that's the only long term solution and it's what we want anyway, we might as well be practicing it now.


Dang, I think that's a solid response. Thank you.

The theory behind SSM seems well thought out.

I think it would be helpful to enlighten people about the SSM when noting that a response is breaking the rules for an accusation of astroturfing.

For me, I think a brief explanation that well-formed arguments are the only defense against an SSM would have removed the feeling of helplessness I got from reading your initial comment, and instead led me down a path that could really be helpful.


That's great feedback. I only thought of that "SSM" wording just now but will try to weave it into future explanations. You can see that this argument has been cooking for a while: https://news.ycombinator.com/item?id=23839602. There are lots of other instances but I can't think of a search query to dig them up.


> The theory behind SSM seems well thought out.

And it's fun to extend in the obvious ways. For instance, what if a SSM (an 'essessum', as they are often referred to in the technical literature) is orchestrating a campaign to write comments insinuating astroturfery into HN threads? After all, taken as a group such comments look very much like the kind of commentary a nefarious corporate or state actor might promote, as pointed out by the shillologists themselves! A curious and under-explored connection.


As someone looking to learn more, can you suggest any specific technical literature on the topic that digs further into this?


I'm just being overly self-satisfied with the idea that a shadowy army of shillologists and astroturfers is diabolically manipulating HN by darkly warning about shilling and astroturfing, for unknown but definitely nefarious ends.


It sounds like you’re saying you’re fine with users doing things like creating unlimited sock puppet accounts and engaging in vote manipulation (which is what the troll farms you’re describing as SSMs have been accused of doing)? I’m genuinely not trying to be snide here—I’m just reading “we won’t enforce the rules because someone’s probably smart enough to break them (i.e. ‘do whatever they want’) and get away with it”. Is it officially okay for HN users to engage in coordinated inauthentic behavior?


Absolutely not. We're not fine with those things and we ban people for astroturfing and other abuses. I believe that is the one thing that's in both the site guidelines and the FAQ: https://news.ycombinator.com/newsguidelines.html, https://news.ycombinator.com/newsfaq.html. That's how important that is.

It's just that we need to have some evidence, something objective to go on, before we ban people. Absent that (<-- note that I said absent that), what one is encountering on the internet is a gigantic Rorschach diagram, in which people routinely see the opposite of what they like or identify with. This internet experience is like being surrounded by demons of one's own creation: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

Sock puppet accounts and vote manipulation are not what I understood BrianOnHN to be talking about upthread. Maybe one could adapt the point to those cases though. Imagine someone who was coordinating inauthentic behavior so cleverly that we couldn't detect it at all. Fortunately I don't think we're there yet—but of course we wouldn't know it if we were. Either way, just imagine for a moment that this is the case. What are our options? The way I see it, we have two:

1. Go after each other based on subjective interpretations of whatever deviousness we imagine we see;

2. Have a functioning community with a healthy immune system based on clear thinking and good arguments.

Surely we all choose #2, at least in our better moments. The key insight here is that #2 doesn't depend on what manipulators, even the cleverest manipulators, do. It depends on us. I'm not saying that it's the only defense we have against manipulation; I'm saying that if the day ever comes when it is the only defense we have, well, it's what we ought to be doing anyway, so we should work at getting good at it now.


> Sure it's possible.

I don’t think it’s possible. We have a huge amount of evidence. This article alone has 150 references: https://en.wikipedia.org/wiki/Internet_Research_Agency They include analyses of the internet activity of state-sponsored trolls, insider leaks, and even documents from US courts.


[flagged]


I don't think that wording is very accurate, but certainly there is a wide range of geopolitical opinion among real, legitimate HN users. There's no question about it—anyone who wants to pore over just the public history of commenters here can easily establish this. Besides that, it would be bizarre if it weren't the case. Why should the disagreements and conflicts that pervade the world somehow be absent here?

That's my point, in fact. The spectrum of legitimate opinion is much wider than people imagine it is. When they encounter views which don't belong on their much-smaller imaginary spectrum, they don't go "wow, I wonder what your background is that you would see this so differently than I do!", they go "Shill! Manipulator spy! Putin poison puppet! GRU agent!" and similar variations.

Here's another version of this comment from a few months ago: https://news.ycombinator.com/item?id=23867707. That was in the different context of China not Russia, but the internet dynamics in all these cases are exactly the same—which, btw, is strong evidence that they're not really about what people say they are.

I didn't say there was "no consistent astroturfing", but rather that (a) we've found no evidence of it (on geopolitical topics—corporate propaganda is a different issue) and (b) there needs to be evidence of it before taking action against it. I've been posting this for years—alas it's not clear that it has any effect. This is starting to feel a tad Sisyphean.

Are we the best at finding evidence, or even particularly qualified? How would I know? I'd never claim we were. But surely you're not arguing that we should take action without finding evidence, and the argument, "get better at finding evidence until you find the evidence that I'm sure must be there" is pretty questionable too. This way of thinking leads into the wilderness of mirrors. There are much simpler explanations for everything we see in the comments on China, Russia, unions (that one came up today: https://news.ycombinator.com/item?id=24344699) and whatever else people are battling about in HN threads.


Astroturfing doesn't exist?

Are all companies selling such services charlatans?

Not sure I buy it. It's well established that you can buy followers, reviews, positive comments and even stories on all large public platforms, and they are quite public about it.


Sure it exists. We've banned people for it. I've scolded them publicly for it, too, in the hope of discouraging others from doing similar things. So no, what I'm saying is that it's wrong to claim astroturfing without evidence, and the mere fact of someone posting a comment that you feel is wrong—even outrageously, perversely wrong—is not evidence of astroturfing. It's evidence merely that they have a different view than you.

I can't speak about other forums but as far as I'm aware there's one outfit that sells HN votes and comments and stories. What their customers don't seem to realize is that using that service will get them banned here. At this point I figure the spammers know this too, but since people still buy from them, they probably don't care. I doubt they get much repeat business though.



