The mods certainly do a great job of keeping things running smoothly here, but I wouldn't say it's _primarily_ because of them.
I think it's primarily due to the self-moderation of the community itself, whose members flag and downvote posts, follow the community guidelines, and remain, overall, relatively civil compared to other places.
That said, any community can be overrun by an Eternal September event, at which point no moderation or community guidelines can save it. Some veteran members would argue that it's already happened here. I would say we've just been lucky so far that it hasn't. The brutalist UI likely plays a part in that. :)
I think it has happened, actually. Early on, HN was almost purely entrepreneurial, although through a tech POV. These days, it's much more general or broadly tech-related. From the discussion, I gather most people here are tech employees and not necessarily entrepreneurs.
It obviously has not gone to hell like the bot-ridden examples, but it's drastically different IMO.
The bots aren't completely dominating here yet, because the cost/benefit isn't really there.
Twitter is a source of news for some journalists of varying quality, which gives would-be manipulators a motivation to exert influence there.
On HN, who are you going to convince and what for?
The only thing that comes to mind would be convincing venture capital to invest in your startup, but you'd have to keep it up while convincing the owners of the platform that you're not faking it - which is going to be extra hard as they have all usage data available, making it significantly harder to fly under the radar.
Honestly, I just don't see the cost/benefit of spamming HN changing until it gets a lot cheaper, at which point mentally ill people will get it into their heads that they want to "win" a discussion by drowning out everything else.
> On HN, who are you going to convince and what for?
There are plenty of things bots would be useful for here, just as they are on any discussion forum. Mainly, whenever someone wants to steer the discussion away from or towards a certain topic. This could be useful to protect against bad PR, to silence or censor certain topics from the outside by muddying up the discussion, or to influence the general mindset of the community. Many people trust comments that seem to come from an expert, so pretending to be one, or hijacking the account of one, gets your point across much more easily.
I wouldn't be so sure that bots aren't already dominating here. It's just that it's frowned upon to discuss such things in the comments section, and we don't really have a way of verifying it in any case.
> On HN, who are you going to convince and what for?
Eh, following individuals and hitting them with targeted attacks may well be worth it. There are plenty of tech purchasing managers here who are responsible for hundreds of thousands or millions in product buys. If you can follow their accounts and catch posts where they are interested in some particular technology, it's possible you could bid out a reply to it and give a favorable 'native review' for some particular product.
A restatement of the OP's point. A small note of agreement based on widely public information. A last paragraph indicating the future cannot be predicted, couching the entire thing as a guess or self-contradiction.
This is how ChatGPT responds to generic asks about things.
Real question for those convinced HN is awash in bots: what actual value do you believe there is, other than curiosity, that is driving people to build the HN spam bots you think you're seeing?
Karma doesn't help your posts rank higher.
There is no concept of "friends" or "network."
Karma doesn't bring any other value to your account.
My personal read is it's just a small steady influx of clueless folks coming over from Reddit and thinking what works there will work here, but I'm interested in your thoughts.
Hype. HN is _the_ platform to create hype among the early adopter, super-spreader/tech exec kind of people and because of that has an absolutely massive indirect reach.
Just look how often PR reps appear here to reply to accusations - they wouldn't bother at all if this were just some random platform like Reddit.
I'm not convinced HN is awash in bots, but there are certainly some inauthentic accounts here.
What if you want to change public opinion about $evilcorp or $evilleader or $evilpolicy? You could explain to people who love contrarian narratives how $evilcorp, $evilleader and $evilpolicy are actually not as bad as mainstreamers believe, and how their competitors and alternatives are actually more evil than most people understand.
HN is an inexpensive and mostly frictionless way to run an inception campaign on people who are generally better connected and respected than the typical blue check on X.
Their objective probably isn't to accumulate karma because karma is mostly worthless.
They really only need enough karma to flag posts contrary to their interests. Even if the flagged posts aren't flagged to death, it doesn't take much to downrank them off the front page.
Don't underestimate the possibility of this being a bot playground/training ground with no particular purpose beyond getting the bot to produce realistic/interesting replies.
I have zero interest in bots, but if I did, the Hacker News API would be exactly how I would start.
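For the curious: the read side really is that accessible - the whole public firehose is a couple of unauthenticated GETs against the official Firebase endpoints (https://github.com/HackerNews/API). A minimal sketch in Python, stdlib only:

    import json
    import urllib.request

    # Official HN API base URL
    BASE = "https://hacker-news.firebaseio.com/v0"

    def get(path):
        with urllib.request.urlopen(f"{BASE}/{path}.json") as resp:
            return json.load(resp)

    # Print score and title of the current top five stories.
    for story_id in get("topstories")[:5]:
        item = get(f"item/{story_id}")
        print(item.get("score"), item.get("title"))

Posting and voting have no official API, of course, which is presumably where most of the friction (and the detection surface) lies.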
I've hardly seen any proselytizers here from Oracle, Salesforce, or IBM, and they are doing just fine. Ditto for Amazon/Google/Microsoft/Facebook - they used to be better represented here, but their exodus hardly made any difference.
Gartner has more influence on tech than Hacker News.
Vote rings are trivial to detect, though, whether automated or manual. I'd be surprised if HN hasn't figured out countermeasures during the time it's been online.
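To illustrate how low the bar is, here's a toy co-voting heuristic (made-up data and account names; not anything HN actually runs, obviously):

    from collections import defaultdict
    from itertools import combinations

    # Toy vote log of (voter, item_id) pairs; a real system would also
    # weight by timestamps, IP overlap, and account age.
    votes = [
        ("alice", 1), ("bob", 1), ("carol", 1),
        ("alice", 2), ("bob", 2),
        ("alice", 3), ("bob", 3), ("dave", 3),
    ]

    voters_by_item = defaultdict(set)
    for voter, item in votes:
        voters_by_item[item].add(voter)

    # Count how often each pair of accounts votes on the same items.
    co_votes = defaultdict(int)
    for voters in voters_by_item.values():
        for pair in combinations(sorted(voters), 2):
            co_votes[pair] += 1

    # Pairs that co-vote on nearly everything are ring candidates.
    for pair, n in sorted(co_votes.items(), key=lambda kv: -kv[1]):
        if n >= 3:
            print(pair, n)  # ('alice', 'bob') 3

Even this naive counting surfaces suspicious pairs; anyone with the full vote data can do far better.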
True, but on Hacker News you at least have to click through to the member profile to see how many banana stickers and how much external validation they've accrued.
If you turn on show dead, you'll see that some accounts just post spam or weird BS that ends up instantly dead. I think Evon LaTrail is gone now, but for years posted one or more links to his/her/their YouTube videos about personal sanitation and abortion per day.
There is a stream of clueless folks, but there are also hardcore psychos like LaTrail. The Svelte magazine spammer fits in this category.
I've definitely seen comments that feel very authentically posted (not LLM generated) but are a weird mixture of vitriol and spite, and when you list other comments from that user it's 90% marked dead.
I often wonder if the user is even aware that they're just screaming into the void.
> What actual value do you believe there is, other than curiosity, that is driving people to build the HN spam bots you think you're seeing?
Testing.
And as siblings say, karma is more valuable than you might think. If you can amass a bunch of karma via botting, you can then [maybe] use that karma to influence all sorts of things.
I'd like to think we have enough of a proactive community to mitigate this issue for the most part - just set your profile back to Show Dead / etc. if you want to see the amount of chaff that gets discarded.
Also, wasn't the initial goal of lobste.rs to be a sort of even more exclusive, "Mensa-card-carrying members only" version of Hacker News?
In the seven years I've been on HN, it has gone through different phases, each with a noticeable change in the quality of the comments.
One big shift came at the beginning of COVID, when everyone went work-from-home. Another came when Elon Musk bought X. There have been one or two other events I've noticed, but those are the ones I can recall now. For a short while, many of the comments were from low-grade Russian and Chinese trolls, but almost all of those are long gone. I don't know if it was a technical change at HN or a strategy change externally.
I don't know if it's internal or external or just fed by internet trends, but while it is resistant, HN is certainly not immune from the ills affecting the rest of the internet.
This place has both changed a _lot_ and also very little, depending on which axis you want to analyze. One thing that has been pretty consistent, however, is the rather minimal amount of trolls/bots. There are some surges from time to time, but they really don't last that long.
HN has mechanisms to detect upvotes and comments that seem to be promoting a product or coordinated in some other way. I'm not sure what they do behind the scenes or how effective it is but it's something. Also other readers downvote bot spam. Obvious bot/LLM-generated comments seem to be "dead" quite often, as are posts that are clearly just content/ad farm links or product promotions or way off-topic.
How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.
I don't know about anyone else, but to me a lot of bot traffic is very obvious. I don't have the expertise to be able to describe the feeling that low quality bot text gives me, but it sticks out like a sore thumb. It's too verbose, not specific enough to the discussion, and so on.
I'm sure there are real pros who sneak automated propaganda in front of my eyes without my notice, but then again I probably just think they are human trolls.
Could you give some examples of HN comments that "stick out like a sore thumb"?
> It's too verbose, not specific enough to the discussion, and so on.
That to me just sounds like the average person who feels deeply about something, but isn't used to productive arguments/debates. I come across this frequently on HN, Twitter and everywhere else, including real life where I know for a fact the person I'm speaking to is not a robot (I'm 99% sure at least).
Sorry, I didn't mean to give the impression that I was talking about HN comments specifically. I was talking about spotting bot content out on the open Internet.
As for verbosity, I don't mean simply using a lot of text, but rather using a lot of superfluous words and sentences.
People tend not to write in comments the way they would in an article.
Hacker News isn't the place to bring that up, regardless of your opinion, so out-of-context political posts should be viewed with at least some scrutiny.
That's true, but maybe there should be a meta section of the site where these topics can be openly discussed?
While I appreciate dang's perspective[1], and agree that most of these are baseless accusations, I also think it's inevitable that a site with seemingly zero bot-mitigation techniques, where accounts and comments can be easily automated, has some or, I would wager, _a lot_ of bot activity.
I would definitely appreciate some transparency here. E.g. are there any automated or manual bot detection and prevention techniques in place? If so, can these accounts and their comments be flagged as such?
There are a few horsemen of the online community apocalypse:
1) Politics
2) Religion
3) Meta
Fundamentally, productive discussion is problem solving. A high signal-to-noise community is almost always boring; see r/Badeconomics for example.
Politics and religion are low-barrier-to-entry topics, and always result in flame wars, which then proceed to drag all other behavior down.
Meta is similar: To have a high signal community, with a large user base, you filter out thousands of accounts and comments, regularly.
Meta spaces inevitably become the gathering point for these accounts and users, and their sheer volume ends up making public refutations and evidence sharing impossible.
As a result, meta becomes impossible to engage with at the level it was envisioned.
In my experience, all meta areas become staging grounds to target or harass moderation. HN is unique in the level of communication from Dang.
This I agree with, off-topic is off-topic and should be removed/flagged. But I'm guessing we're not talking about simple rule/guidelines-breaking here.
> How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.
If the account is new and promoting the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.
> If the account is new and promoting the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.
I'm not as certain as you about that. Last time the US had a presidential election, it seemed like either almost half the country was absolutely bananas and out of its mind, or half the country were robots.
But reality turns out to be less exciting. People are just dumb, and spew whatever propaganda they happen to come across "at the right time". The same is true for Russians as for Americans.
I think it's mostly a timing thing. It's one thing for someone to say something dumb, but it's another for someone to say it immediately on a new account. That, to me, screams bot behavior. Also if they have a laser focus. Like if I open a twitter account and every single tweet is some closely related propaganda point.
If I am not allowed to share my observations, will I be allowed to just provide some links instead and let the community inform themselves? Or are links to real news events also not allowed?
OK, I will use Wikipedia links. My problem is with Ruzzians (the ZZ refers to the Russians who support the invasion and war crimes) making new accounts and commenting here. We should not let these people spread misinformation here, or bring bullshit like "Russia is as bad/good as the USA". At least they should use a regular, years-old account so they risk a ban, like I am risking my account when debating them.
Anyone who’s spent any amount of time in this space can spot them pretty quickly/easily. They tend to stick to certain scripts and themes and almost never deviate.
In my experience, that's not true. Rather, people are much too quick to jump to the conclusion that so-and-so is a bot (or a troll, a shill, a foreign agent, etc.), when the other's views are outside the range of what feels normal to them.
I've written a lot about this dynamic because it's so fundamental. Here are some of the longer posts (mini essays really):
Since HN has many users with different backgrounds from all over the world, it has a lot of user pairs (A, B) where A's views don't seem normal to B and vice versa. This is why we have the following rule, which has held up well over the years:
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data." - https://news.ycombinator.com/newsguidelines.html
In my research and experience, it is. I’m making no comment about bots/shills on this site, either, I’m responding to the plausibility of the original comment.
> I’m making no comment about bots/shills on this site, either, I’m responding to the plausibility of the original comment.
The original comment:
> I wonder the same about HN. Has anyone done this kind of analysis? Me good LLM
Slightly disingenuous to argue from the standpoint of "I'm talking about the whole internet" when this thread is specifically about HN. But whatever floats your boat.
The claim is not "zero bot activity" - how would one even begin to support that?
Rather, the claim is that accusations about other users being bots/shills/etc. overwhelmingly turn out, when investigated, to have zero evidence in favor of them. And I do mean overwhelmingly. That is perhaps the single most consistent phenomenon we've observed on HN, and it has strong implications.
If you want further explanation of how we approach these issues, the links in my GP comment (https://news.ycombinator.com/item?id=41710142) go into it in depth. If you read those and still have a question that isn't answered there, I can take a crack at it. Since you ask (in your other comment) whether HN has any protections against this kind of thing at all, I think you should look at those past explanations—for example the first paragraph of https://news.ycombinator.com/item?id=27398725.
Alright, thanks. I read your explanations and they do answer some of my questions.
I'm still surprised that the percentage of this activity here is so low, below 0.1%, as you say. Given that the modern internet is flooded by bots—over 60% in the case of ProductHunt as estimated by the article, and a third of global internet traffic[1]—how do you a) know that you're detecting all of them accurately (given that it seems like a manual process that takes a lot of effort), and b) explain that it's so low here compared to most other places?
Most of the bot activity we know about on HN has to do with voting rings and things like that, people trying to promote their commercial content. To the extent that they post things, it's mostly low-quality stuff that either gets killed by software, flagged by users, or eventually reported to us.
When it comes to political, ideological, nationalistic arguments and the like, that's where we see little (if any) evidence. Those are the areas where users are most likely to accuse each other of not being human, or posting in bad faith, etc., so that's what I've written about in the posts that I linked to.
There's still always the possibility that some bad actors are running campaigns too sophisticated for us to detect and crack down on. I call this the Sufficiently Smart Manipulator problem and you can find past takes on it here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
I can't say whether or not this exists (that follows by definition—"sufficiently" means smart enough to evade detection). All I can tell you is that in specific cases people ask us to look into, there are usually obvious reasons not to believe this interpretation. For example, would a sufficiently smart manipulator be smart enough to have been posting about Julia macros back in 2017, or the equivalent? You can always make a case for "yes" but those cases end up having to stretch pretty thin.
Dang, I agree with and appreciate your moderation approach here and I completely agree with most of what you said. IME the last 18 months or so I've been here, this has been a welcome bastion against typical bot/campaign activity. Nowhere on the web has seemed safe the last ~dozen years. Most of what I've written here applies to my research of foreign bot activity on social networks, particularly in election years, in which you can far more easily piece together associations between accounts, narratives, and writing styles, connect a lot more dots than on a site like this, and conclude very definitively that yes, this is a bot.
My original comment was just meant to chime in that, in the wild over the last ten years, I've encountered an extraordinary amount of this kind of activity (which I confirmed - I really do research this stuff on the side and have written quite a lot about it) - which would lend credibility to anyone who felt they experienced bot activity on this site. I haven't done a full test on this site yet, because I don't think it's allowed, but at a glance I suspect particular topics and keywords attract swarms of voting/downvoting activity, which you alluded to in your post. I think the threshold of 500 upvotes to downvote is a bit low, but clearly to me what you are doing is working. I'm only writing all of this out to make it very clear I am not making any criticisms or commentary about this site and how it handles bots/smurfs/etc.
Most of my research centers on the 2016 and 2020 political cycles. Since the invention, release, and mass distribution of LLMs, I personally think this stuff has proliferated far beyond what anyone can imagine right now, and renders most of my old methods worthless, but for now that's just a hypothesis.
Again, I appreciate the moderation of this site, it’s one of the few places left I can converse with reasonably intelligent and curious people compared to the rest of the web. Whatever you are doing, please keep doing it.
I think that HN may in general be an outlier here. Typically outright political content is not allowed, along with religious content, which is quite often intertwined with politics. Because of the higher quality of the first-pass filter here (users flagging this stuff), you don't see the campaigns here that you do on typical social media.
For example, on Reddit you'll see accounts that are primed: that is, they repost older, upvoted, mostly on-topic replies from existing users onto new posts on the same topic to build a natural-looking account. Then at some point they'll switch to their intended purpose.
Thank you. I appreciate your positive outlook on these things. It helps counteract my negative one. :)
For example, when you say "The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community", that sounds reasonable, but I see a few issues with it.
1. These individuals are undetectable by definition. They can infiltrate communities and direct conversations and opinions without raising any alarms. Sometimes these are long-term operations that take years, and involve building trust and relationships. For all intents and purposes, they may seem like just another member of the community, which they partly are. But they have an agenda that masquerades as strong opinions, and are protected by tolerance and inclusivity, i.e. the paradox of tolerance.
2. Because they're difficult to detect, they can easily overrun the community. What happens when they're a substantial percentage of it? The line between fact and fiction becomes blurry, and it's not possible to counteract bad arguments with better ones, simply because they become a matter of opinion. Ultimately those who shout harder, in larger numbers, and are in a better position to, get heard the most.
These are not some conspiracy theories. Psyops and propaganda are very real and happen all around us in ways we often can't detect. We can only see the effects like increased polarization and confusion, but are not able to trace these back to the source.
Moreover, with the recent advent of AI, how long until these operations are fully autonomous? What if they already are? Bots can be deployed by the thousands, and their capabilities improve every day.
So I'm not sure that a Sufficiently Healthy Community alone has a chance of counteracting this. I don't have the answer either, but can't help but see this trend in most online communities. Can we do a better job at detection? What does that even look like?
If you come up with good ideas on this problem you should share them, but the core of this thread is that having commenters on thread calling out other commenters as psyops, propaganda, bots, and shills doesn't work, and gravely harms the community, far more than any psyop could.
Does it, though? The reason why I ask such a loaded question is that I believe this is actually part of the 'healthy community' framework. It can be thought of as the community's immune system responding to what its members perceive as outside threats, and is, in my opinion, one of the most well-known phenomena in internet communities, far predating HN.
The modern analogy of this problem is described as the 'Nazi Bar' problem and is related to the whole Eternal September phenomenon. I think HN does a good enough job of kicking out the really low quality posters, but the culture of a forum will always gradually shift based on the fringes of what is allowed or not.
How is that different from humans? Humans have themes/areas they care more about, and are more likely to discuss with others. It's not hard to imagine there are Russian/Chinese people who care deeply about their country, just like there are Americans who care deeply about the US.
If the comment is off-topic/breaking the guidelines/rules, it should be removed, full stop.
The difference is that the bot's comment should be removed regardless of whether that particular comment breaks the rules, as HN is specifically a forum for humans. The human's comment, granted it doesn't break the rules, shouldn't be, no matter how shitty their opinion/view is.
If posts make HN a less interesting place to converse, I don't see why humans should get a pass, and I don't see anything in the guidelines to support that view either.
C'mon. When you have an account that is less than a year old and has 542 posts, 541 of which repeat very specific Kremlin narratives verbatim, it isn't difficult to make a guess. Is your contention that they are actually difficult to spot, or that they don't exist at all? Because both of those views are hilariously false.
I feel like you're speaking about specific accounts here, since it's so obvious and exact. Care to share the HN accounts you're thinking about here?
My contention is that people jump to "it's just a bot" when someone parrots obvious government propaganda they disagree with, when the average person is just as likely to parrot obvious propaganda without involving computers at all.
People are just generally stupid by themselves, and reducing it to "Robots be robotting" doesn't feel very helpful when there is an actual problem to address.
No, I'm not. And I don't/won't post any specific accounts. I'm speaking more generally - and no one is jumping to anything here; you're projecting an argument that absolutely no one is making. The original claim was that Russian/Chinese bots were on this platform and left. I've only been here about 1.5 years, so I don't know the validity of that claim, but I have a fair amount of experience and research from the last ten years or so on the topic of foreign misinformation campaigns on the web, so it sounds like a very valid claim, given how widespread these campaigns were across the entire web.
It isn't an entirely new or unknown concept, and that isn't what is happening here. You're making a lot of weird assumptions, especially given that the US government wrote several hundred pages about this exact topic years ago.
> and no one is jumping to anything here, you're projecting an argument that absolutely no one is making
You literally claimed "when you have accounts with these stats, and they say these specific things, it isn't difficult to guess..." which ends with "that they're bots" I'm guessing. Read around in this very submission for more examples of people doing "the jump".
I'm not saying there isn't any "foreign misinformation campaigns on the web", so not sure who is projecting here.
Not at all - ten years ago, Russian misinformation campaigns on Twitter and Meta platforms were alive and well. There was an entire several-hundred-page report about it, even.
> For anyone not using `merge.conflictStyle = diff3` I highly recommend trying it. It removes a lot of ambiguity when dealing with conflicting changes.
Yes, and to say that another way, it's literally impossible to resolve merge conflicts correctly with only the standard conflict style. See my post on StackOverflow for more details: https://stackoverflow.com/a/63739655/997606
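For anyone who hasn't seen it, enabling it is one config line, and conflicts then show the common-ancestor version between the two sides (the file contents here are made up for illustration):

    git config --global merge.conflictStyle diff3

    <<<<<<< HEAD
    greeting = "hello, world"
    ||||||| merged common ancestors
    greeting = "hello"
    =======
    greeting = "hi there"
    >>>>>>> feature

Seeing the base makes it obvious what each side actually changed, instead of forcing you to guess.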
I use kdiff3 or bc4. I find those tools more powerful for fixing complicated merges and just as easy for simple ones (e.g. manual alignment for when the merge algorithm can't recognize what is supposed to be aligned).
I’ve tripped over the exact scenario you mention in that post more than once. Usually, it’s not that I end up including an entire function that shouldn’t be there, though. I find it tends to be more like “Oh, here’s an extraneous bit of code that just showed up in the middle of this method. Hmmm....” Or, alternatively, I end up removing too much code, rather than too little.
It displays two diffs: one from the ancestral commit to “ours” and one from the ancestral commit to “theirs”. I find it helpful for understanding conflicts and how to resolve them — any feedback appreciated.
Yessssssssss! zdiff3 was by far the most exciting thing in the whole announcement for me. A terrible feeling comes over me when not only do I get the pleasure of resolving a merge conflict, but I do it incorrectly, leading to a bad merge. Minimizing the number of lines I have to consider seems like the most straightforward way to increase the accuracy of humans resolving merge conflicts.
It's semantically slightly different. With diff3, if you remove everything but the base part, you get exactly the base version, while with zdiff3 you also get the prefix and suffix parts from the two branches that merged cleanly.
I guess the difference is subtle, especially since there can be other hunks within the same file that merge cleanly in both cases.
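A made-up illustration of that subtlety: say both branches added the same log line at the top of a conflicted region. diff3 keeps it inside both halves:

    <<<<<<< HEAD
        log("greet called")
        print("hello")
    ||||||| base
        print("hi")
    =======
        log("greet called")
        print("hey")
    >>>>>>> feature

while zdiff3 hoists the common prefix out of the conflict:

        log("greet called")
    <<<<<<< HEAD
        print("hello")
    ||||||| base
        print("hi")
    =======
        print("hey")
    >>>>>>> feature

So with zdiff3, keeping only the base section no longer reproduces the base exactly - the hoisted line stays.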
This is a standard approach to software development: you introduce a new alternative for something that exists, then mark the existing inferior option as "deprecated". This gives developers a chance to learn about the new feature and move over to it by the time it actually gets removed in a future version. Cutting it immediately would cause a huge amount of frustration since the users of the feature would demand to know where it went or why it suddenly no longer works while the maintainers would have to scramble to find a solution. It would turn people off from using that software in the future.
I would argue that for most cases, that's preferable to simply cutting old functionality. The only exception would be in environments where the upgrade process is more rigid where users/admins know what's changing with an upgrade and have a safe and easy way to roll back in case of any issues.
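As a sketch of what that usually looks like in practice (Python here; the names are made up): the old entry point keeps working, warns, and forwards to its replacement until the removal release:

    import warnings

    def new_fetch(url, timeout=30):
        """The replacement API (hypothetical)."""
        ...  # actual implementation would go here

    def fetch(url, timeout=30):
        """Deprecated: use new_fetch() instead. Kept until v3.0."""
        warnings.warn(
            "fetch() is deprecated and will be removed in v3.0; "
            "use new_fetch() instead",
            DeprecationWarning,
            stacklevel=2,  # point the warning at the caller's line
        )
        return new_fetch(url, timeout=timeout)

Users get a working upgrade path and a clear pointer to the replacement, rather than a sudden breakage.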
Pretty sure the "you're a liar" is just a convenient way to blow it off without going into details.
The buy was probably already questionable even at $50M, a number Steve likely knew before they entered the room. The exchange showed that the founders were going to be stubborn about the price.
So in the end this isn't "trust is everything in business, and Steve was offended" but simply "don't jump into negotiations with your pants down".
What I think is slightly unreasonable, and probably a subtext for a few people here, is that anyone, the CEO of this company or not, thought their value-add to customers was worth $150 million to Apple. It was music suggestion and a like button.
Clearly nothing that Apple couldn't, and didn't, just copy.
$150 million for that? Perhaps another explanation is that Steve Jobs heard that number and realized that, even after enough of these meetings and acquisitions that dollars had lost their meaning, this wasn't worth it.
The solution is the same as in many other programming languages: adopt a coding standard and enforce it in an automated fashion, i.e. in the CI/CD pipeline. Code style then becomes mostly a non-issue and not an object of discussion.
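For instance, a minimal sketch with GitHub Actions and ruff (just one possible combination; any CI system and linter pairing works the same way):

    name: lint
    on: [push, pull_request]
    jobs:
      lint:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.12"
          - run: pip install ruff
          # Fail the build on any style violation, so style never
          # needs to come up in code review.
          - run: ruff check .

Once the gate is red/green, style arguments move out of review threads and into a one-time discussion about the linter config.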
I would just like to point out that even though that is the most sane way, it comes with its own set of problems. One of them is when developers start writing code to cheat the linter, or complicate the code just to "make the linter happy"; another is when a linting rule introduces problems/errors, like https://github.com/rubocop-hq/rubocop-rails/issues/418
Yeah, I would never recommend relying on just a linter. The linter can reduce scut work, but you always want to have at minimum a thorough code review process that's asking things like "OK, we made the linter happy; but are we happy with the result, or should we have just disabled the linter here?" and "the linter didn't catch that we solved this thing with X here but with Y and Z last time. Let's rationalize our approaches and get everything on the same page."