India’s millions of new Internet users are falling for fake news (washingtonpost.com)
77 points by JumpCrisscross 11 months ago | 79 comments



The obvious problem is that the dissemination of news these days is determined by how many arbitrary or ignorant individuals have clicked "like" or "upvote" on whatever social network they're using.

It doesn't need to be true, it just needs to be shocking, and people will promote it.

This is only fixed by good moderation, with prominent (and powerful) "flag" buttons like Hacker News has. Hacker News and Reddit don't suffer from nearly as much fake news, because moderation teams tend to do a good job of weeding out garbage, and have an easier time with the prominent report buttons.

Facebook and social networks have absolutely no moderation.


>Facebook and social networks have absolutely no moderation.

Sure they do. They just have zero incentive to remove fake news content. It makes them money, brings in users, everyone is happy! What could be wrong? What's good for Facebook is good for the world!


Oh, you're right. I forgot that they moderate things that are illegal in nature.

But it's still the bare minimum.


You sound like you want them to do more moderation?

It amazes me that we've reached the point where there are commenters on Hacker News arguing in favour of online censorship.


> You sound like you want them to do more moderation?

> It amazes me that we've reached the point where there are commenters on Hacker News arguing in favour of online censorship.

Moderation includes improving links (such as changing the primary link from a PR note to an actual research paper), improving titles (to reflect what is actually said in an article without unrelated hyperbole from a submitter), or even just adding the year of publishing for older articles in parentheses.

I’m just providing examples that few would debate as censorship. There are certainly others. I absolutely support constructive moderation.


But many people on Hacker News don't use Facebook. Don't you wonder why that is?

I'm not really sure that removing blatantly false nonsense would be considered "censorship". It's more about improving the quality of the site, and filtering things that are obviously wrong.


> It amazes me that we've reached the point where there are commenters on Hacker News arguing in favour of online censorship.

Why? Do you think that HN is a moderation-free bastion of unfiltered speech?


Sorry, I emphasised HN because I perceive it to be a place where "enlightened people" (in a non-sarcastic sense) tend to hang out. And therefore, of all the discussion forums I can think of, HN is the one that is most likely to have a view on censorship that prefers freedom of speech rather than suppression of wrong-thinking.


> And therefore, of all the discussion forums I can think of, HN is the one that is most likely to have a view on censorship that prefers freedom of speech rather than suppression of wrong-thinking.

You appear to be suffering from the misapprehension that "freedom of speech" means "the HN community should tolerate bullshit" rather than "you can't be put in jail for bullshit". The difference is extremely important to many of us.


I wasn't talking about moderation on HN at all. I was talking about moderation on Facebook.


> I wasn't talking about moderation on HN at all. I was talking about moderation on Facebook.

So was I. But you're talking about the people here.

I don't see why you think it matters where scam blocking happens. So far in these comments you seem to be saying that scamming needs to be defended as a practice. That's a hugely psychopathic position, so I hope that interpretation is wrong. But when you invoke "freedom of speech" against the moderation of fake news - which is a scam perpetrated on a vulnerable public, not a synonym for "news you don't want to hear" - it becomes hard to interpret it any other way. I think most people here would say that Facebook isn't a place where social responsibility should vanish, and part of that responsibility, part of having a conscience, is protecting people, both individually and collectively, from scammers. Because Facebook is not a copper wire. Facebook is an interconnected network of its participant members, just like HN.


We can’t all be ideologically motivated to the point of societal suicide.


Censoring content you disagree with is societal suicide.


>Censoring content you disagree with is societal suicide.

There's a big difference between censoring truths you don't agree with, and just moderating verifiable outright lies. The problem is that most people are decent human beings, and find it impossible to believe that someone would write something on the internet that is just a completely fabricated lie. Fake news hucksters take advantage of this psychological "hack" and exploit it, which should not be allowed to happen.


One trouble is that it's much faster to create fake news than to shoot them down with evidence.

And as we have seen (for example with conspiracy theories), some people refuse to believe no matter how much facts you try to throw at them.

With centralized platforms we might be able to still block the fake stuff and discredit people who post junk, but in decentralized world this will be impossible. People choose who they follow and to which fact checking service (if any) they subscribe.


Would you clarify what you mean by online censorship? Do you believe that there should be no changing or altering content at all? That people should be able to post whatever they want wherever they want? Are there any limits that are reasonable in your opinion?


Facebook is trying to be the only place people communicate. It's just as unacceptable for Facebook to be removing or hiding information as it is for the email protocols to be removing or hiding information. If people don't want to look at something, they can unfriend the person posting it.

So, yes, I believe there should be no changing or altering content at all. I believe Facebook should act like a dumb utility instead of trying to impose themselves as a moderator over people's private lives.


Only on Facebook? Or anywhere? No limits on posting illegal material?


There should be no limits on anywhere that is trying to be a general-purpose communications utility. I'm fine with people, for example, deleting trolling from the comments on their personal blog or whatever.

FTP doesn't care if you're transferring "illegal material". Neither does TCP, neither does HTTP, neither does email. Why should Facebook be different just because it's implemented as a corporation instead of a protocol?


While a protocol may not care, jurisdictions do have a say as to what is transmitted over them. A mailbox doesn't care what's put into it, but it's illegal to send a bomb through the mail.

I think your argument with respect to Facebook attempting to be a utility is interesting, but I'm not sure how true it is, and whether that actually implies no restrictions. Utilities are generally subject to regulation. Edit to add: One can discuss what form that regulation should take and what its limits should be, but that's different from saying there should be no limits.


If you think people shouldn't be allowed to say certain things, you need to be going after the people saying those things, not the technology they're using to say it.


I think this is where a potential misunderstanding lies. I've said nothing about "people shouldn't be allowed to say certain things"; I've been asking purely about whether or not there should be regulations, or items removed from a site, including illegal material. If I'm understanding you correctly, you're looking at this purely as a censorship issue. There's a difference here, and one that's worth at least acknowledging. For example, if someone posts something illegal, the appropriate authorities will both remove it and attempt to go after the persons responsible for posting it, whether that's online or in the real world.

There's room for discussion, but first we have to figure out exactly what we're discussing; otherwise we're just talking past each other and accomplishing nothing.


Those old enough will remember that fake news used to go around in email chains. Even if we banned Facebook, people would do the same through email; should we ban or censor that too? Don't blame the messenger. People enjoy sharing fake stories that sound too good to be true because it makes them feel good; this is probably how most religions came to exist and propagate, well before Facebook. This problem is as old as humanity, and blaming it on Facebook, social media, or email is a bit short-sighted. We'll get through this :)


To state the obvious, there's a difference between Facebook and email. It takes effort to hit "Forward", select everyone in your address book, and send that email. Meanwhile, a "Like" is a single click of input to Facebook's stupid world-destroying algorithm (they put Trump in power, 'nuff said), which decides "Let's show this content to more people!"

And email is more personal: if I got an email with some stupid content from someone I knew, I would probably take the time to reply and show them that the mail is full of crap.


> It takes effort to hit "Forward", select everyone in your address book, and send that email

This was mostly done by computer illiterate grandpas, so I would say it was pretty easy to do.


>...this is probably how most religions got to exist and propagate

FWD: FWD: FWD: FWD: FWD: FWD: FWD: FWD: FWD: FWD: SHOCKING! Learn this one secret trick a carpenter used to resurrect from the dead!


Indian here.

Just want to say that the majority of this fake news comes from WhatsApp forwards, which spread like wildfire amongst Indians, not from Facebook. Check out the subreddit r/theunkillnetwork, which covers many such forwards.


I really like HN, but I don't find the "flag" link all that obvious (nor is it a button). It's front and center at the top of the comment thread, but there is no obvious way to "flag" a comment within a thread other than to click the comment's "age" link first, in order to load the view with that comment at the top.

If I wanted to spread disinformation that's where I'd do it; I doubt many people would take that extra step unless it was super obvious and offensive to them. Meanwhile everyone else has bought it.

I think HN's relative resilience to fake news comes from the technical nature of its audience (a high expectation of evidence on most topics) and its irrelevance to political manipulation.

It seems to me that


And they will never have good moderation, because once you're past the initial stages of set-up, good moderation is good editorial ability.

And that costs manpower, which would obliterate the business model of Facebook or other social media platforms.


> And that costs manpower, which would obliterate the business model of Facebook

This presents a clear legislative solution. Extend libel liability to social media platforms. Defamation law is well established. Extending it to platforms forces them to moderate, keeps things de-centralised and adds a scaling limit to boot.


I think that'd probably hurt large internet forums and other non/minimal profit communities more than anything. I mean, Facebook wouldn't like being liable, but they do at least have the money to fight any claims and to theoretically pay staff if forced to.

A site that's a fair bit smaller but still popular enough to have trouble moderating things would be utterly screwed.


Forget large internet forums; what am I supposed to do when my small forum with five active members gets hit with one of these claims? It would be a death knell for any new venture that doesn't have several million dollars in funding for legal defenses alone.


I used to be on Flickr, and many communities had pretty good moderation -- there was also a flagging mechanism. It worked pretty well. It seems the issue is scalability: the bigger a group gets, the more unwieldy it becomes. Small ones also get ignored.

Yet, for all the good, there was also the specter of in-crowds and echo chambers. That also manifests on Reddit.

Still, I think group-centered moderation is key. If a group is "censoring" you, go create your own group with its own rules. They might also put a limit on group size and auto-split at a given threshold.


I would assume Facebook and similar social networks would love fake, puffed-up news, since it is more likely to be sensational, raise controversy, and get people talking and sharing. A win-win for them. Who cares if it's right or wrong, true or false? More clicks mean more money.


The problem in India is WhatsApp, which cannot have moderation because it is end-to-end encrypted.


Another problem is that many of us seldom correct or inform them, out of fear of antagonizing them.


> Facebook and social networks have absolutely no moderation.

Do you prefer moderated fake news?


This has nothing to do with poverty. The US population is just as guilty of believing in fake news, they even elected a president based on rumors. And now the president himself fabricates and spreads such fake news with support of his propaganda machine.


> The US population is just as guilty of believing in fake news

Because it’s pushed to the top of algo-driven newsfeeds, and originates with people with blue checkmarks after their names


For anyone else who didn't know what the "blue checkmark" means in this context:

Twitter et al. apparently use the blue checkmark to indicate they have vetted the identity of that social media account, and they vouch for @ev[0] being Mr Williams while @finkd may or may not be Mr Zuckerberg.[1]

At least on Twitter, it has a "first-prize ribbon" look to it, as if being the blue-checked Mr Williams were an achievement superior to that of the second (red) Mr Williams and the others on down.[2]

[0]: https://twitter.com/ev

[1]: https://twitter.com/finkd

[2]: http://www.trophies2go.com/wp/wp-content/uploads/2016/03/RBB...


> Twitter et al. apparently use the blue checkmark to indicate they have vetted the identity of that social media account

Not anymore - now it means Twitter endorses what they say. I mean does someone stop being themselves once they break the ToS? If not, why withdraw it?


Many of those responsible for that code and even directing the developers are reading these very comments.

They are making a £-versus-morals decision.


Is it an education problem? I'd love any links anyone has to studies that examine the relationship between education and skepticism of fake news.

My intuition says yes but to such a minor extent that it cannot explain away the phenomenon.


No, I don't think it's education. I think it's tribal, emotional attachments to groups and the group perspective.

Trying to fix it with critical thinking is difficult because it's a few steps back in the causal chain for agency. People have to want to question their news sources, but that increasingly conflicts with their need to feel part of a group when they're getting their news via gossip.


Simple-minded people - those without a willingness to question and consider - will always be led around by false information. It doesn't matter whether it's rural Texas or India.

Until we instill in people a willingness to think, question, and consider, we will continue to be plagued by ignorant masses. It doesn't help when there exists an active, malicious business model designed to keep ignorant people as willfully ignorant followers.


I've seen so many social media hoaxes lately started and spread by influential journalists who write for respected outlets, by prominent crusaders against fake news, by well-known employees of companies like Microsoft... Sure, all the news articles are about simple people being led astray by fake news, people who are nothing like the intelligent, clued-in readers of those articles - that sells better, after all - but that doesn't mean everyone else is doing any better at it.


You are forgetting there is a real time & cognitive cost associated with performing analysis, especially if you are already biased on the issue. It's the same reason people can't stick to diets. People want to feel good now. So if you can give them another reason to like/hate Trump, you will find little resistance.


Sure but there is a coarser option (especially on social networks) - unsubscribe from bad groups and generally keep yourself away from stuff that has proven itself to be an unworthy source of information. It requires some amount of thought upfront but after that, it improves your experience. This is similar to you subscribing to a good newspaper rather than any tabloid rag.


I'm optimistic this will be solved in the near future -- it is a UX problem. Like the "Not secure" label in browsers, what we need is a "Potentially fake" icon that can be flagged and shown on various platforms.

The advent of the fact-checker platforms is the first bit. Next would be API/automation, followed by the UI/UX changes.
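As a rough sketch of that API/automation step (the endpoint, parameters, and response fields below are purely hypothetical, not any existing fact-checker's API), a platform or browser extension could look up a claim and surface the flag roughly like this:

    import requests

    def looks_fake(claim_text):
        """Ask a hypothetical fact-checking service for verdicts on a claim.

        The URL and JSON fields are made up for illustration; a real
        integration would target whichever fact-checker a platform trusts.
        """
        resp = requests.get(
            "https://factcheck.example.org/v1/claims",
            params={"query": claim_text},
            timeout=5,
        )
        resp.raise_for_status()
        claims = resp.json().get("claims", [])
        # Surface the flag if any reviewer rated the claim false.
        return any(c.get("rating") == "false" for c in claims)

    if looks_fake("Drinking bleach cures the flu"):
        print('Show "Potentially fake" icon next to this post')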


The problem is that the incentives are perverted. With "Not secure", the platform has an incentive to deploy https. With fake news, though, they are potentially making money from explosive stories, so why would they go out of their way to combat it?

Facebook claims to be combating it, but I don't think they care that much. I can't help but feel they are combating it because of the uproar, not out of the goodness of their heart.


> The advent of the fact-checker platforms is the first bit

This just punts the problem to another layer of abstraction. Who chooses the fact checkers? Why wouldn't every "publication", valid or not, spin up a fact checker?


Except that Facebook's "Disputed" flag on news stories led to the stories being read more... https://gizmodo.com/facebook-ditches-disputed-news-tag-after...


So in other words... if your opinion is a minority one that the mainstream media don't agree with, then you're basically stuffed. Fact checkers have biases, and if they're trusted to basically fact check the whole internet for web browsers, they'll have even more of an incentive not to be fair and neutral. What stops them from basically labelling alternative media as false across the board? Or labelling based on the personal opinions of the staff who work there?

Seems like a political minefield with antitrust claims just waiting to happen.


I disagree - there isn't a third party that we can safely trust with the "potentially fake" icon, because they will be a) overwhelmed by how cheap it is to produce content from anywhere and b) a weak point to be compromised, whether suspected or real.


So when I visit Drudge Report I’ll get a red “potentially fake” banner and when I visit CNN I’ll get a green “trusted” banner?

No thanks.


No, the problem is deciding what's fake.

With SSL, all you have to do is check if the certificate comes from a certificate authority you trust.

Do we want certification authorities for news? (Maybe yes?)
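For what it's worth, here's a minimal sketch (Python; the hostname is just an example) of what that SSL-style trust check looks like in practice - the client accepts a certificate only if it chains to an authority already in its local trust store:

    import socket
    import ssl

    # A client-side sketch of the CA trust model: the handshake only succeeds
    # if the server's certificate chains to a CA in the local trust store.
    context = ssl.create_default_context()  # loads the system's trusted CAs

    with socket.create_connection(("example.com", 443)) as sock:
        # wrap_socket raises ssl.SSLCertVerificationError if the certificate
        # is not signed by a trusted CA or doesn't match the hostname.
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            subject = dict(item[0] for item in tls.getpeercert()["subject"])
            print("Verified certificate for:", subject.get("commonName"))

A "certification authority for news" would face the same bootstrapping question the thread raises: whose trust store ships by default, and who decides which authorities go in it.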


There is going to be another Rwanda-scale genocide somewhere because of rumors on social media, eventually. Billions of people lack the intellectual toolchest to distinguish truth from falsehood; eventually it's going to get out of control and a lot of people are going to get killed.

One can argue it's already happening to the Rohingya.

I think we as a species need to make a serious effort to educate people in really basic critical thinking skills and teach people a healthy skepticism, and how to evaluate evidence and chains of reasoning.

It’s counter to everything we’ve been teaching the masses up till now, which is to respect authority, because it’s easy to control them that way.

Without centralized mass media, that’s no longer an option. Manufactured consent will no longer be available as a means to maintain stability in society, and we’ll have to figure out how to do it with everyone able to think and learn for themselves.


> There is going to be another Rwanda-scale genocide somewhere because of rumors on social media

"A couple of hours outside Yangon, the country’s largest city, U Aye Swe, an administrator for Sin Ma Kaw village, said he was proud to oversee one of Myanmar’s 'Muslim-free' villages, which bar Muslims from spending the night, among other restrictions.

'Kalar are not welcome here because they are violent and they multiply like crazy, with so many wives and children,' he said.

Mr. Aye Swe admitted he had never met a Muslim before, adding, 'I have to thank Facebook because it is giving me the true information in Myanmar.'"

https://www.nytimes.com/2017/10/24/world/asia/myanmar-rohing...


Thank you for sharing this excerpt. The subsequent text provides great detail about the information being shared and how it is swaying public opinion and how authorities are “educating” and “guiding” the public.


I agree completely. However, I have a gut intuition that the best way to teach people "critical thinking" is a good, old-fashioned, broad education in the arts and sciences: Math, Literature, Composition, Philosophy, Art, Physics, History, etc. We might not need to do anything new. Just do the old things well.


>It’s counter to everything we’ve been teaching the masses up till now, which is to respect authority, because it’s easy to control them that way.

Wait, what? I see this sort of opinion expressed a lot here and in similar places, but it seems to be entirely the opposite of actual reality, at least in the USA. There has actually been a precipitous drop in respect for experts and authority, and the general teaching has been more towards "trusting one's own feelings", "everyone's opinion is valuable", and other such "self empowerment". There are remaining reservoirs of trust amongst the public for certain classes of experts, but even then it's common that the public doesn't actually understand what the opinion of those experts is (i.e., global warming denialists amongst the public may genuinely think scientific opinion is divided).

In modern society, what I think might be strongly helpful for every citizen - beyond a base level of knowledge (like probability and statistics, economics, and political theory) and their own specific area of expertise - is not general "skepticism" so much as more training in how to evaluate which authorities are trustworthy. It's impossible to move through a world where the total sum of human knowledge is going into, or well into, a J-curve without extensive use of Argument from Authority, which, remember, is not a logical fallacy so much as the weakest but also most scalable form of argument. Any tech person should be very familiar with this because it's ubiquitous in our field: we have to depend on trust up and down the stack, but that doesn't mean there aren't ways to test that trust. We use trees and webs of trust, sample verification, transparency, and so forth. That same sort of thinking applies elsewhere.

It's not about "control", respecting good authority is critically important. What's needed is to help people with techniques to figure out what is a "good authority", how to continually reevaluate that as necessary, and how to hold authority to account and how to swap out mistaken or degraded choices with superior ones.


Be careful what you wish for. Skepticism about official experts/science/the government is helping fake news a lot, e.g. here in Italy.


'healthy skepticism'

My point was that teaching people to just trust what they read has led them astray. They used to trust the 'mass media', which largely hewed to the 'establishment' line. Now that the mass media isn't the only source of information for most people, they'll believe basically any nonsense.

People should be skeptical of the establishment media. They should be skeptical of everything. What you need to teach people is how to properly apply skepticism to figure out what might or might not actually be true, and why.


> Skepticism about official experts/science/the government

They did kind of bring this on themselves, with too much "fake news" of their own. A scientist doesn't know any more about politics than you or I, and an economist knows even less. People should be wary of trying to transfer credibility from one domain to an unrelated one.


> I think we as a species need to make a serious effort to educate people in really basic critical thinking skills and teach people a healthy skepticism, and how to evaluate evidence and chains of reasoning.

So, better education.

Stop using education as a way to cement classes.

Make sure every child goes to school. Pay teachers well. Involve the academic elite in the governance of the education system.


Wasn't the Rwandan genocide pushed through talk radio?


Yep. And the next one will happen because of facebook or whatsapp or twitter.


> Many fake news stories appear to support India’s ruling Bharatiya Janata Party and its right-wing Hindu nationalist agenda, said Jency Jacob, managing editor for boomlive.in, a fact-checking website.

That right there is "fake news" in itself. Both sides of the aisle have to be blamed for the situation. As much as there is an agenda from the right wing, there has also been a concerted effort from the left wing too.


I think that humans are going to find out that algorithmic news feeds are not a good way for people to get information. AI shouldn't decide what knowledge is consumed. RSS and Atom are much better solutions.
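For illustration, a chronological feed reader needs nothing more than a loop like this (a sketch assuming the third-party feedparser package; the feed URL is just a placeholder):

    import feedparser

    # A non-algorithmic feed: fetch the entries the publisher listed and
    # show them in the order given - nothing is reranked for engagement.
    feed = feedparser.parse("https://example.com/rss.xml")

    for entry in feed.entries[:10]:
        print(entry.get("published", "n/a"), "-", entry.title, "-", entry.link)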


Why would they stop using algorithmic news feeds when those bring in way more clicks and ad revenue than RSS style feeds? I just don't see it changing ever now that sites can tailor their pages for each viewer to maximize their profits and keep people on their site.


> Why would they stop using algorithmic news feeds when those bring in way more clicks and ad revenue than RSS style feeds?

They probably won't. They shouldn't kid themselves that they are making the world a better place, though.


Lots of people hate the way China does things (including me) but they have a long history of "harmoniously" controlling large populations and probably have learned a thing or two.

So my guess is the future consists of more "harmonizing" globally. Which I don't like, largely because it's an opportunity for corruption, but the cost of that might ultimately be less than the cost of the current free-for-all.

In some ways I feel like we are living in the twilight of a form of savage freedom. Which I personally relish because it's what I'm used to.

I imagine for some the transition will be similar to what happened to Aboriginal peoples throughout history.

"You mean I have to wear pants and show up at 8AM for school every day? Are you crazy? That's horrid! Why would anyone do that? I certainly won't be!".

"Yes, but then you'll have all the food you can eat and will never be cold!".

"Eh, I don't know. I'd rather do what I want and go hungry sometimes. No deal!"

"Ok, now you are causing problems for others who want to have all the food _they_ can eat. Look here, now see this gun?"


> Last week, newspapers here carried full-page advertisements by Facebook that explained how to spot false news.

I guess that's cheaper than hiring additional moderators, or figuring out how to temper this issue with technological solutions - but shouldn't this be a big flag to Facebook that there is indeed a huge problem?


Basically, in doing this, Facebook is saying, "the problem is you for not being sharp enough to spot a carefully crafted lie."


Do you like end-to-end encryption? Because this is what you get with end-to-end encryption.


Sorry, are you saying that Facebook can't see the content that's stored on their servers? That when my mom posts a picture of some roses, nobody between me and her can see what the image is of?


Not Facebook. Whatsapp.


The majority in this thread are not willing or able to put 'fake news' in correct context with propaganda and censorship. I even see a comment basically saying that freedom of expression in the West will eventually be like it is (not) in China and suggesting that is probably for the best anyway.

So I believe that apathy or this lack of/incorrect context will have very significant consequences.

I just hope that people will research the long and ongoing history of censorship and propaganda and try to see the correct connection with "fake news" and the suggestions for suppressing it.


The bigger threat than Facebook is WhatsApp. India is WhatsApp's biggest market, and while people sometimes point out the fake news on Facebook, the fake WhatsApp forwards go completely unchecked. There is some hope that Facebook will manage to curb fake news in the future, but the senseless rumors and propaganda will continue on WhatsApp.



