We, as a society, across multiple governments and now the UN, with ample public support, are dragging Google and Facebook into the role of judge and jury of what is or isn't acceptable speech.
They may well eventually settle into that role, and come to like it.
I understand that the volume of information grew much faster than the traditional legal avenues for taking down illegal speech. And we're scrambling for a quick fix. But this fix may prove worse in the long run than the problem it fixes. These organizations may be relatively enlightened today, but there's no guarantee they'll continue to be 30 years from now.
On a real-life social network, there are safeguards that keep blatant lies and angry-mob-like behavior contained. In real life you have only one persona, and when you misbehave you risk exclusion and loneliness; on Facebook that's not the case, so a person with an agenda can do very bad things without consequences.
Another example: in a real-life social network it would be very hard for a state to shape society, while on Facebook that's almost the business model, and it's done at scale.
I'm personally quite pessimistic about the long-term future of these social network giants, because I believe their modus operandi will eventually create enough trouble that every country will follow the Chinese route of control.
Sadly, no. It just lacks "reach". It's harder to meet up with like-minded sociopaths and get a good angry mob going when every meeting has to be in person and every lie must be repeated verbally, but it's still very possible.
Facebook is more about shaping non-sociopathic people using corporate tools and money. What Cambridge Analytica did with Facebook is outstanding, they successfully managed to change Europe and America forever.
Sociopaths gathering on Reddit or 4chan are benign compared to the corporate influence of Facebook.
The potential destruction of businesses (like the UK's financial sector), the lives and careers of expats, the fate of dictators (the Arab Spring) and democracies (USA, France) were all determined by the power of giant social media corporations. The harm of occasional mobs on Reddit or 4chan doesn't extend beyond a few individuals.
I couldn't have said it any better. Later, people might take issue with why a "public" company is now moderating free speech.
If you want to participate, you follow some simple guidelines and contribute in a positive way, which can include differences of opinion.
All of the platforms failing to provide productive and civil discourse are those that have no community moderation, or worse, are just personal soapboxes with an effortless virtue signal button.
I think the platforms are broken by design, and humanity figured out a long time ago that there are limits to uncivil discourse and you need to recognise them.
The great thing about the internet is if you feel passionate enough about your idea, you have probably thought hard about it and you can go make your own community.
The issue with effortless virtue signalling and the no-consequences soapbox is that you don't have to think that hard about something so long as it sounds like it's up your alley, and you can be led very far astray before realising you're in a toxic conversation, potentially one with a subtext you didn't recognise.
A malicious and scurrilous countrywide sms (short messaging service) campaign over mobile phones, by unknown unruly elements, led to panic buying of both petrol and diesel and long queues in the Kandy District last night.
Is anybody seriously advocating self regulation? When this comes up in my New York political circles, talked-about solutions have been bouncing between regulation by an independent commission and by a department under the Attorney General.
When they moderate too much, we all shrug, less coverage, growing pains, cannot be expected to be perfect all the time, it's-their-platform-they-have-the-right.
That incentivizes them to moderate more, always more.
Actually I hate how flagged HN items completely disappear. Graying out should be enough here.
- Factual correctness and overall decency lose out to popularity: making a funny, low-effort post will in many communities get you a lot more magical internet approval points than doing the work of engaging with an issue, seeking alternative sources, etc. Yes, there are counter-examples, but the long-term trend is always toward a lower quality of dialogue.
- At a certain scale, posts are simply concealed from a user who is likely to disagree with them— basically pre-moderation. Being challenged on a viewpoint requires energy, and people don't like to do work.
‘Kalar are not welcome here because they are violent and they multiply like crazy, with so many wives and children,’ he said.
Mr. Aye Swe admitted he had never met a Muslim before, adding, ‘I have to thank Facebook because it is giving me the true information in Myanmar.’”
Sure, Facebook could crack down on the way people are using their service/their tool, but I don't think they are responsible for the actions of users.
- Broadband internet is spreading hatred
- Chrome Browser is spreading hatred
- ISPs are spreading hatred
Don't blame a company, or the service they provide. A crackdown on Facebook will force the hating people to another tool or communication method. They'll probably move to a more encrypted method of communication, which will make finding the wrong-doings and the people much harder.
The only thing they could do is ask Facebook to cooperate and provide details to find the haters.
Facebook is making billions by exploiting exactly that kind of behavior. The more one-sided and scandalous a headline is, the more clicks it gets, which leads to it being seen by more people.
Also, showing people only what they want to see leads to echo chambers and is detrimental to society, as those groups tend to develop into extremes that refuse to accept anything outside of their bubble.
Not sure I agree with this. The internet made it possible for the whole world to see information coming from all kinds of sources. Earlier in history, you had to rely on local news/newspapers, or even less information. Much more bubbly than this modern age.
Again: I think the reason those bubbles appear has to do with society and the urge to belong to a group; Facebook is not to blame here. Facebook reflects people and society, not the other way around.
Let’s start with logic: using Facebook is a clear indicator that the platform is preferable for the spreading of hatred, compared to alternatives. Blocking this avenue will therefore harm the propaganda effort. The discussion can only be about the magnitude.
And there is some obvious support for the idea that the magnitude will be significant: no social media platform has made significant inroads against FB’s reach. People aren’t actively searching for sources of hatred; they simply happen upon the content mixed in among their friends’ baby photos.
Just look at the barren cesspool of hatred that is voat: it doesn’t have anywhere close to the reach of the market leader these xenophobes were kicked out of, Reddit.
Regardless of the factual content, politicians and special-interest groups have always been able to control what we were presented with as truths and lies, through the cooperation of the press, which in many nations is not independent. Even where it is, it falls into the trap of wanting access and simply becomes a mouthpiece.
This is exactly the point: to push hate-mongering back into the fringe it belongs to. Law enforcement will still be able to crack down on them, have no fear.
This is a cop-out. Hateful false information, with violent consequences, is being spread by a company profiting from its distribution.
We don’t say, in the event of food poisoning, “we can’t blame the restaurant because if they weren’t here someone else would be poisoning people.” We fine the business and send a message to the industry that said behaviour is unacceptable.
Facebook is contributing to and profiting from atrocities. That rightfully merits scrutiny, of Facebook and ad-driven social media companies generally.
What I mean is that the service provided by Facebook is somewhat trivial - it can easily be replaced by emails, chats, or any of the countless ways to communicate via the internet. Shutting down Facebook will not solve the problems in Myanmar.
I think the only solution to this problem is education.
Facebook isn’t concerned about being “blamed for censorship” when it benefits their bottom line.
> This is a difficult problem to solve
The first step is recognising that ad-driven reaction-oriented social media does not appear to positively self-correct.
The tendency of the news feed to promote attention-grabbing content biases it toward extremes. This is definitely a systemic problem, and it shouldn't be ignored and swept under the rug.
Eventually - decades after the massacres in Rwanda - those responsible for the genocide-promoting radio station were given long prison sentences.
It’s also wrong: Facebook doesn’t have a competitor in its core market, and the idea that people would start thinking “you know, Facebook is great. But I just wish there were a platform just like it with a bit more calls to violence against minority groups” is ludicrous.
You’re just asserting this without any argument, even though it was addressed in the post you’re responding to.
What would be used instead of Facebook? If that other platform is as convenient and effective as FB, why is it not used today?
And since you are agreeing that censorship of hate could help: why do you believe the appropriate distribution of the responsibility for such censorship is more important than stopping a genocide?
The content on these sites is also so far away from common decency that people will not make the mistake of thinking the messages they read there are acceptable in common discourse.
The only way governments tend to control this is by blocking mobile/sms services. In this case unfortunately the only way around is to block Facebook or internet services until things calm down.
What really stood out was how much of the country ran on Facebook. Groups for everything, pages for businesses, Messenger for communication, etc. It resembled being in China and having everything run through WeChat.
I guess this is one of those consequences.
Facebook can/should only be held responsible for amplifying a message and should not be accountable for being a medium.
The information on 'who' paid 'when' and 'how much' to amplify 'what' message/post to 'whom' should be a matter of public record. This model should apply to any ad agency.
Seriously? I know they probably have bigger things to worry about in Myanmar, but that's pretty crazy.
Do they, or are they forced to do so?
And as I said: I don't like Facebook. I think they're shifty and probably do have an agenda. However, it's untenable to attempt to force them to censor anything that a government doesn't approve of, or hell, society doesn't approve of. Facebook has 2.2 billion active users who post 510,000 comments, update 293,000 statuses, and upload 136,000 photos per minute. It is impossible for a company to review all of them in order to censor those that break whatever law of the country they were posted in.
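The scale argument can be made concrete with a back-of-envelope calculation. The per-minute figures come from the comment above; the 10-seconds-per-item review time and the 8-hour shift are assumptions of mine, purely for illustration:

```python
# Back-of-envelope: how many humans would full manual review take?
# Per-minute volumes as quoted above; review speed and shift length are assumed.
comments_per_min = 510_000
statuses_per_min = 293_000
photos_per_min = 136_000

items_per_minute = comments_per_min + statuses_per_min + photos_per_min  # 939,000
items_per_day = items_per_minute * 60 * 24  # ~1.35 billion new items daily

# Assume a moderator spends 10 seconds per item over an 8-hour shift:
items_per_moderator_per_day = (8 * 60 * 60) // 10  # 2,880 items per shift
moderators_needed = items_per_day // items_per_moderator_per_day

print(f"{items_per_day:,} items/day -> {moderators_needed:,} full-time moderators")
# -> 1,352,160,000 items/day -> 469,500 full-time moderators
```

Even under these generous assumptions, per-country legal review of every post would need a workforce of nearly half a million people, which is the crux of the impossibility claim.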
In certain countries, to people below a certain age? Certainly.
South Africa is also a glorious example of reconciliation, overcoming a history of injustice without violence.
But the farming community simply has not used the chance they had in the last 20 years to become part of this historic process: while universities and many other groups and institutions have de-segregated, that conservative community has dug in and defended their privileges at all costs. They invited the government to eventually force their hand.
Expulsion is obviously a bridge too far. And Zimbabwe stands as the obvious example of what not to do. But some sort of legislative land reform is now inevitable.
I'm not sure the truth and reconciliation commission would entirely agree with that.