There are a number of problems with this, but the main one is that people's perceptions of dishonesty, disingenuousness, and manipulation in others are terribly exaggerated. Odds are that the other person simply disagrees with you—and if their view seems obviously outrageous, wrong, or stupid, this is because people are much more divided than we realize. Disingenuousness exists, but the assumption of disingenuousness in others is nearly always wrong and comes out of a failure to understand how different someone else's experience is.

Users are much too quick to reach for explanations like "you must be a foreign agent" when even the public record of the other user's comments—let alone the private data we look at—shows that to be trivially unlikely. Foreign agents exist, of course, but foreign agents as an explanatory device for things one finds provocative online are, to a first approximation, a fiction. The same goes for astroturfing, shills, and the other things users accuse each other of in arguments.

That doesn't mean ignoring the possibility of manipulation—it just means that we should look for evidence. I can tell you that when we look for evidence, we basically never find it. Even if we're being fooled by clever manipulation in some cases, it's painfully clear that in the overwhelming majority, there's no there there. What there is, is people reaching for 'disingenuousness' as a simple explanation for what they find painful and offensive. That assumption blocks any solution. There's no way to resolve pain and offense without recognizing the experiences of the other side.

You mention China. I can tell you that all the flamewars I've seen about China since they started blazing in the last year or so have been examples of what I've said here.




China was chosen as the example “agent” in this case because it is a nation emblematic of manipulation (of currency, IP, free markets, etc.). China was also chosen for the historical imagery: forcing the Dalai Lama to flee while seizing the peacefully sovereign territory of Tibet in 1959, killing 87,000 directly in the conflict, with 430,000 dead in the ensuing occupation.

That is to say, China was chosen for reasons beyond the literally obvious example of state agents. I know from experience that state actors are rare, and rarely detected. My rub is that latching onto Chinese or Russian state agents as the only concerning actors spreading misinformation is not accurate. Somewhat more common are paid or unpaid social media shills/trolls who run many generalized accounts to forcefully influence topics.

But in my experience, misinformation is spread most by those with vested interests, deep misunderstandings of the world, strong attachments to personal biases they simply believe and never bother to confirm or disprove empirically, reductionist oversimplifications, un-nuanced tidbits learned from an introductory course on some topic, and so on.

These individuals overwhelmingly choose to ignore facts presented to them and espouse their incorrect views (perhaps hiding behind politeness, qualifications, or institutional authority). The view that such an individual will, with enough patience, eventually see the truth is rather intractable and untenable.

I’m afraid you have latched onto but one example, possibly missing its purpose as imagery in the comment, and neglected to consider the other examples of bad information and bad actors, which are actually fairly common and which spread damaging tropes that, while inaccurate, continue to circulate nonetheless.


You've made this so general that it applies to literally everyone in every contentious argument. We all have vested interests, deep misunderstandings, strong attachments, personal biases, oversimplifications, and everything else you mention. So in that sense, yes: there's falsehood and misinformation all over the place. But I don't think seeing others as the problem is going to get us very far; seeing others as the problem largely is the problem.


That really doesn’t get to the heart of the issue, I’m afraid. It seems to offer a form of cover and protection for views that are known to be problematic and that continue to spread. Partisanship and hyper-focus on views corresponding to your identity are bad; we should not provide cover for them, or for partisan views unsupported by facts, via polite discourse. We should not allow these poorly supported views to spread. It seems HN has no actual stance or response that directly addresses this issue. Some of HN’s guidelines provide shelter that enables unsupported views. Choosing to focus the light inwards on yourself doesn’t really solve or address this issue in the modern age of misinformation, and the Tibetan analogy was chosen to illustrate the dangers of turning inward too much and ignoring the threat. Does that make sense? I think I’m being fairly direct and specific here.


Of course people post "unsupported views". We don't ban users for being wrong—who would be left if we did?—and we don't have a truth machine.

When people argue like this, in my experience, what they mostly want is for us to ban the views they disagree with. We can't do that. Running a complex community like HN is nowhere near that simple.


I agree that being mistaken is not a good reason to ban someone. But I have seen users flag one another for no reason other than that they dislike having their views called incorrect, or dishonest when the behavior is repeated again and again. That is rather absurd and provides shelter for ignorance. I’m sure those people are very intelligent in their actual areas of expertise, but their immaturity shows through in other areas, and it’s rather toxic to witness a mod stepping in to say that their pride takes priority over letting someone directly confront their ignorance with facts and truth. Yes, this is the internet, and we all have better things we could do with our time than debate with strangers. However, this is also one of the most intellectual and influential havens for discussing tech and nearly anything else found to be interesting.

Modern times have also brought about a host of new issues where technology can be both a benefit and a detriment to society. The rapid spread of misinformation is a major technological and social issue. How are we going to navigate a new era in which things are becoming more complex, conflicts are increasing, and people are becoming more partisan and incorrectly reinforced, because technology and modern life make it very easy to filter out inconvenient facts rather than be confronted with them?

My point is that we should not be assisting in the enablement of misinformation. As a hotbed for powerful people and powerful ideas, HN has a certain amount of responsibility to accept in preventing the spread of misinformation. Rules of discourse that prevent resolution are, I believe, harmful rather than helpful at HN, and they enable the spread of misinformation.

Alternatively, those who don’t like an atmosphere where less-than-well-informed views are actually challenged may very well choose to leave of their own accord, and they will no longer spread misinformation here. Bans are probably not needed at all, really; we just shouldn’t be enabling it.


> Rules of discourse that prevent resolution are, I believe, harmful rather than helpful at HN, and they enable the spread of misinformation.

Very well put. This is my biggest concern as well. HN mods prevent resolution by punishing participants in back-and-forth discussion for being part of a “flame war”. It is an incredibly coarse and un-nuanced view of debate.


Most back-and-forth discussions here don't get moderated. The ones that do are not the kind that usually end in resolution.



