Do we wait until it can be shown that companies such as Facebook, Twitter, Google, etc. influenced elections because they refused to serve information from a candidate they didn't like? If we aren't already there, it won't be long before we are.
What do you think would happen if these "private companies" with such deep hooks into our communication infrastructure suddenly decided to remove all data associated with the Republican Party? For that matter, the Democratic Party?
It seems to me that Facebook and Twitter are trying to have it both ways. They can choose to police the content provided by their users but can't be held responsible for said content? Are they a publisher or a platform?
I don't think old thinking, based around old methods of communication, applies to what we have today; it requires new thinking. These aren't like newspapers sold by kids on the corner in a city that can have dozens of newspapers countering each other. Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas.
Tim Pool is right: at this rate, sooner or later, the Feds will come knocking and will shut that party down.
They wouldn't do that, because there would be justified public outcry. Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.
edit: Tim Pool for example was already accused. It will clearly be used to shut down unwelcome dissent.
That said, I absolutely agree that leftist positions do get censored. Even if I currently like to underline my dissociation from its authoritarian excesses and general dishonesty on certain subjects, the problem is a general one.
However not everyone thinks that it's so great.
I don't know, and it's probably not something we can just settle easily.
It's usually true that a society cannot simply rely on "laws" to save it, and when the power imbalance gets too extreme, laws won't save anyone anyway. Though it seems having some kind of policy and publicly agreed way to stop serial inciters is not a bad idea. After all, information censorship generally happens through claims of potential national security hazards and other gag orders, not through a too-wide interpretation of hate speech laws. (But of course human creativity is pretty limitless when it comes to suppressing others.)
Which ones, specifically?
There are many, just some off the top of my head.
Who/what de-platformed Jimmy Dore (here's his YouTube channel)? Or the rest?
His channel is not 'deleted', (as is the case with the majority of right-wing commentators as well btw), but it is very much 'shadowbanned'/blacklisted, (economic ruin).
I, for example, regularly notice that the recommendation algorithm actually skews a lot more towards the centre & right, even when I am specifically looking for Jimmy Dore videos.
People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.
On Twitter, a lot of them don't even come up in the search results, effectively shadowbanned.
Of course, none of them would be invited on mainstream TV because their viewpoint is not allowed in 'polite circles'.
This is why I have a huge issue with the right simplistically equating D.C. Democrats with 'the left'. I suppose the division comes down to what you care about: social leftism (identity politics), which is easy & lazy and what many on the right focus on, vs economic leftism, which is what many of those I named discuss and is not really allowed on mainstream media.
Contrast that to "Limbaugh", for which Rush Limbaugh is the (buried) #11 result, despite having 2x the followers of anything else.
In Moments, there are a lot of other people talking about him, so that's what's shown (also a lot more notable Limbaughs, as that's a pretty common surname), whereas Dore is pretty much just him; not that many other people are talking. That's how Moments always worked, nothing shady there. It's also true if you put e.g. 'Taylor Swift' in Moments: her actual Twitter account is fairly buried, because there's lots of other buzz that the algo deemed more relevant.
If anything, it speaks to his popularity.
And that's despite him not having a verified account; unverified accounts are ranked below verified ones for everyone. Type in e.g. 'Sean Hannity' and you'd see his verified account right up top.
Perhaps it's time to admit that the simplistic narrative of 'left censoring the right' is not really true and it's more complex than that. It's really the establishment silencing alternative voices.
The right is more than happy to censor the left on BDS, for example, (with the help of the Democrats even(!)) & cement that into law. I don't see any of the right-wing 'free speech worriers', like Ben Shapiro, talk about how wrong that is. In fact they very much support it.
On the other hand, you have left-wing channels like Dore & Secular Talk constantly bringing up how it's wrong to censor right-wing voices, even doing long rants on specific cases.
1 - https://imgur.com/a/UfCf9vF
And, as we all know, nobody ever made money off of extremist political commentary before YouTube, right kids?
FFS he has his own website and show. Just because YouTube doesn't give him money doesn't mean he's persecuted. He can feel free to host his videos elsewhere.
> (economic ruin).
> People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.
Please cite some sources. I'm not really finding any info.
All I could find was this: Peter Thiel once spoke at a libertarian conference. Libertarians, being libertarians, permit racism (read: not the same as "support racism"). THEREFORE, PETER THIEL IS A WHITE NATIONALIST.
Are you fucking serious? I pushed back on my leftist friends with what I found. "Duhhhhh, errrrrr, ummmmm, well... everyone just knows he is!" Oh, really.
Bullshit, repeated often enough as truth, becomes evidence for itself.
Feel free to do so whenever. Be sure to indicate when that has resulted in serious consequences for the "victim".
> Tim Pool for example was already accused.
TBF Tim Pool pulled the whole "I'm so neutral" act when dealing with a bunch of actual white supremacists. What "dissent" was unwelcome there? That he was pretending like these were totally fine people who just happened to think that genocide was okay?
And here he is with a bunch of others!
> You could be really helpful here, sugar.
And you could tone down the condescension, sweetie.
Good luck, kid.
That's called freedom of association: the right for an organization or individual to not wish to associate with someone.
Are you going to suggest that people start their own ISP?
At some point, private actors can carry so much economic power that their private rules effectively become laws. Much of Jim Crow was implemented in this manner, in addition to actual legislation.
But even then, VPN or Tor? The internet is built to route around damage. Has Mastodon, IRC, or ICQ ever been blocked?
Again, just because white nationalists don't have the platforms they want doesn't mean they can't get their message out. But what they want is mainstream acceptance, and that is most certainly not going to happen.
Somehow I'm skeptical this is a viable approach short of Torrifying the entire internet.
Take it up with New Zealand, which has very strong laws about hate speech and promoting extremist communities.
> whilst Facebook, which also hosted it, was left alone?
That content makes up a microscopic amount of the content on Facebook, and it was removed when reported. That content makes up the vast majority of the traffic on Voat, however.
It seems like you're moving the goalposts here, though. We're not talking about governments banning websites, we're talking about the government forcing websites to host content that they don't want to host. That's what's at play here.
And the fact is that you are free to start a public or Tor-based community of your own, unless you're in countries where Tor is blocked, in which case I think you're in far deeper shit than this discussion is focused on.
Letting paranoid xenophobes roam, recruit and incite violence on Facebook is bad. And its ill effects are already felt, and it has significant potential for doing a lot more harm in the future.
Whereas allowing a quasi-public forum to be controlled and basically censored by an unaccountable entity (FB) is problematic - especially if said control is co-opted by the very ideology that the control mechanism was supposed to, well, control in the first place.
It feels like the problem is that hate speech and other kinds of populist nonsense are currently the tools that easily lead to more and more authoritarianism, more and more xenophobes and isolationists getting into power. But censorship itself is also a great tool for power consolidation.
Where would the outcry happen?
There aren't that many public platforms with any reach, and they're all adopting similar policies.
In [blocked] no one can hear you scream!
In practice, the distinction between banning a subject and banning conversation about that subject's ban is easy to make. For example, Germany bans racist speech, but does not ban speech about whether the racist-speech ban should continue (and this particular debate is definitely alive and well in Germany). Even in the US we've banned certain words from broadcast TV for many years, but this never stopped people from discussing whether the ban should continue!
And equally easy to abuse, something which has happened time and again:
Complain about the "ban of X"? Discussion shut down as supporting X.
At FB, Twitter, Snapchat etc, whoever happens to work in the subject banning division arbitrarily makes these decisions, unless overridden by top management.
Apparently not for Facebook's moderator/censor workforce. Just the other day: https://mobile.twitter.com/OzraeliAvi/status/111040092879067... The day before that they banned a local satirical comic, and this keeps popping up regularly.
Okay? Private organizations are under no obligation to provide you with a platform to spread your message.
If you want, you can start your own social network or host your own blog.
This is an argument for mob rule. I'm not sure that's as comforting as you intended.
Somehow saying "yes, these multinational corporations could exert undue influence over a political system, but they just wouldn't" does not seem sufficient. I feel that such an attitude is like saying "The US government would not spy on its own citizens -- they wouldn't do that, just imagine the public outcry!" Perhaps that is a bad analogy, but the issue here is that we are nearing the point where "oh, they wouldn't do such a thing" becomes untenable.
The CFO of Google said, in the leaked video of the TGIF immediately following Trump's election, that they would use "the great strength and resources and reach we have to continue to advance really important values." Going by the reactions of everyone in that meeting, their efforts are certainly not impartial or apolitical.
If Facebook and Twitter suddenly decided to de-platform and ban any discussion praising, defending, or favoring a political party, where would the outcry happen? On Facebook? Sorry, it’s banned!
Journalists would be absolutely salivating at the thought of writing about Facebook's new policy, especially if it was that blatant.
FB and Twitter are not the entire world. It's good praxis to be involved with people in the real world.
For instance in the UK Facebook banned the pages of a political party and very little was said because the same kinds of people who control Facebook also control the mainstream media, so they're almost always in agreement, except that journalists would really love Facebook to ban even more stuff to achieve the 'right outcomes', as they see it.
There's that conspiracy language again!
And then you go on to link a piece of "mainstream media" that writes extensively about it! Did you mean to prove yourself wrong?
It's hardly a conspiracy - the worldviews of these people are formed in the same crucibles and result in the same outcomes. They want to manipulate the narrative to ensure the right outcomes, in their view.
You're comparing apples to oranges. What's its ranking among tech/PC websites?
How about the Guardian?
How about BBC?
How about Wired?
Is that mainstream enough for you?
> They want to manipulate the narrative to ensure the right outcomes, in their view.
Look, the original comment I was taking issue with said this:
"Journalists would be absolutely salivating at the thought of writing about Facebook's new policy [of de-platforming and banning any discussion praising, defending, or favoring a political party], especially if it was that blatant."
That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.
Really? Because from the citations I've given, it looks like a lot was said.
> That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.
Because it was uneventful. It was a universally reviled thing that got banned, and rightfully so. There doesn't have to be "another side" to a story when that "other side" is filled with nothing but hate.
They're the British equivalent of the NSDAP.
This is highly debatable. There are plenty of examples of benign movements and opinions that have been stifled violently by society and the state.
You can find, with little effort, plenty of people today calling for censoring or even violence towards harmless liberal or conservative people because they are "communists" or "fascists". People are not good at judgement and measured response.
Facebook and Twitter have become a digital public commons for discourse today. The check and balance is "what Facebook decides". That doesn't exactly seem like an adversarial check and balance system to me.
Absolutely correct; however, they aren't the only places for public discourse. People have never been able to demand that a newspaper print their article or that a magazine include their story—people have always had the choice to start their own newspaper or magazine and build their own audience, and this is still true today; in fact it's much easier than it's ever been.
People who’s business is access to human inputs, whether they are newspapers, music venues, theaters, magazines, etc.. have almost always had the freedom to set their own standards and it isn’t clear to me why business owners shouldn’t have this freedom anymore.
Small comfort if they are the main places for public discourse - so cutting people and ideas there essentially means relegating them to far less reach.
Strange how when some foreign state censors FB or Twitter it's an outrage, but when FB or Twitter censor people directly, "there are other places".
Not to mention the monetary deplatforming (e.g. Mastercard, PayPal, Patreon and co not allowing funding), in which case there are no "other places" (or not many, and none reputable enough for someone to go pay through).
>People have never been able to demand a newspaper print their article or that a magazine must include their story
Which is irrelevant, since newspapers and magazines were always top-down affairs, written and curated by a specific team. Social media and platforms were supposed to be open to society (hence "social"), not only for a select team of journalists to have an account there.
Because a government has a monopoly on violence, while a private company has freedom of association. You're conflating two different situations that are only superficially similar.
> Social media and platforms were supposed to be open to society (hence "social")
Yep, and that didn't work out so well. Hence, the bans.
A government choosing what information you have access to IS censorship.
There are many organizations, anyone can start one. There is only one government and you can't escape it.
The same sort of power brokers that would drive censorship in a place like China are the ones who fund political campaigns, found think tanks, control media empires, and choose advertising spend - and they use this leverage to drive censorship on social media.
In the end, if the rich and powerful have effectively squelched dissent, does it matter whether it was through government mandate or through some more complex mechanism of private means?
If a group of users explicitly want access to white nationalist content, they should be able to get it. So I would oppose blogs, webhosts, and cloudflare deplatforming anyone for any reason besides the outright illegal.
Facebook and Twitter are not just about serving content to those who have the intent to view it; in fact, the whole point of these social networks is that they expose content to NEW people who didn't initially have any intent to view it. This is promotion, not access, and I have no problem with private entities choosing what they want to promote.
I would apply this same test to payments. Users who have explicit intent to financially contribute to objectionable content creators such as Alex Jones should still have a way of doing so. When Patreon, Mastercard, etc. deplatform him, it closes the door to those who already have intent. Of course, I'm all for FB and Twitter shutting down the campaign so the word wouldn't spread nearly as far.
As a moderate liberal who finds sexual content over-censored, yet is disgusted by right-wing and anti-vax (anti-vax is often leftist!) conspiracy theorists, I think this "honor their intent" test is a great way to keep the internet relatively sex positive, and extremist content relatively niche.
But at the end of the day, aren't the companies who are providing payment processing or website hosting profiting off of extremism and hate? If you'll recall, the reason why they started deplatforming individuals to begin with was that large swaths of people boycotted their services until they chose to no longer do business with said extremists.
Isn't that voting with our dollars? Isn't that the Free Market of Ideas in action?
From McCarthyism, to J.E. Hoover, to MLK, Gary Webb, to WMD, to the Patriot Act, to Snowden, to the "collusion" BS, to today's de-platforming, the establishment and the media easily stomps on whoever they don't like with impunity.
Or have we already built the pinnacle of society at some past point, and everything we ever do in the future is doomed to be as bad or worse than what we already have?
I'd like to choose optimism here, personally.
Banning ads would also be another good start, but I don't see the idea getting very popular.
Why white? The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.
Not sure if CCP has a FB page, but that far is obvious.
> The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.
I don't see any way to infer this as saying that Facebook has banned the CCP–what am I missing here?
Also, if Facebook has indeed banned the CCP, well, turnabout is just fair play–after all, the CCP has banned Facebook from China.
I don't think they should, but I also know that if they didn't, Trump and others would get banned, and the cries from the fake free speech worriers, (who are quiet as heck on e.g. BDS), would be a lot louder.
Really? How about plain nationalists? How about communists? How about separatists? How about traditionalists? How about "Occupy Wall Street"? How about "nationalists" in e.g Iceland, a country where nationalists would be predominantly if not exclusively white in the first place?
If your idea of "general political party" is Democrats and Republicans and the occasional third candidate, ie. the bland two-party consensus that agrees on almost everything (foreign policy, more money to big money, etc), but disagrees on token issues (and that increasingly less), then sure.
But the movements and parties that change things up historically were never welcomed as "general political parties" by the establishment and the "good people" of the 10%.
I'm from a generally center-left-leaning country, where e.g. Reagan would be considered the epitome of nationalist and/or imperialist. Would it be OK for Facebook to censor Reagan (or some modern politician with the same ideas) on those grounds?
Are they talking about racial genocide? No? Then they're a-okay.
I don't know why this is such a difficult concept for people to understand: If you're advocating genocide or violence against a people, you're gonna get banned. Pure and simple.
We’re also good at being boiled slowly, like frogs.
Doesn't matter what they believe or say:
Sam Harris? Alt-right. Tim Pool? Alt-right. Jordan Peterson? Alt-right. David Rubin? Alt-right. And that's just the leftists!
But even setting aside your specific example, I don't believe that ambiguity should paralyze us into inaction. I think it's fair to say "OK, we'll ban anyone who advocates distributing political power based on race, with the white 'race' getting the most power... when people cross that line won't always be clear, but we'll do our best." For private action especially, we should not let the perfect be the enemy of the good.
Stephen Colbert would disagree: https://www.youtube.com/watch?v=4nk0dUjYUNI
"You know why you're not supposed to use that word [nationalist]? Because it's the second half of 'white nationalist'. Chopping off the first word doesn't change what it means in our minds."
Or is this a deliberately conflated term promoted as an 'official' label in public discourse in order to dissuade association with those favoring the more benign meaning?
Agreed that ambiguity shouldn't create inaction - but it just as well shouldn't promote incorrect action either
Why would you use the term white and nationalist together? Being white has very little to do with being a nationalist unless you believe it has everything to do with it, in which case you would be racist.
Oh her? don't listen to her, shes a 'white nationalist'.
Also, assuming this confusion to be true, with the term 'white nationalist' existing in the discourse as a negative, those who are not aware of the nuance between 'whites who happen to be nationalists' and 'those promoting a white nation' are pre-biased via faulty discourse to discount the words of the former.
Any popular terminology which deliberately overlooks subtlety and dismisses it when it is pointed out in the discourse is problematic. It's effectively a subtle smear campaign against the non-problematic nuances.
See also the strangely similar situation with the term 'skinhead' -
Initially this was a mostly apolitical working-class subculture; most listened to soul and reggae and smoked pot, and many were apolitical or left/socialist leaning. Generally mildly populist, mostly white, but yes somewhat 'dangerous' in that it was a popular social movement of unconventional rowdy people of all stripes (much like the 'disenfranchised trumpians' that the media is happy to highlight as contributing to the rise of the so-called 'white nationalism' we're talking about here).
Cue one politically motivated overtly racist subgroup acting up and stealing all the headlines,
and now the entire term/culture is essentially taboo..
One can argue that this group just got the press and 'messed up the term', but at some point editorial bias is a factor.
For god sakes this is 'hacker news' I shouldn't need to explain this.
Many right-wing people claim that in practice, there's not: they're accused of being "white nationalists" for being "white and a nationalist".
If people downvoting me think I'm wrong, explain why people like Jordan Peterson, who merely espouse non-leftist positions and happen to be white, routinely get accused of supporting the alt-right and neo-Nazis.
For the people who doubt what I'm saying -- here's a video of Jordan Peterson. This is who the leftists routinely call "neo-Nazis" or "alt-right": moderates who refute their positions and calmly assert values like personal responsibility over collectivist victimhood culture.
I'd originally written "conservative"; I'm not sure Mr Peterson would describe himself in such terms. I've made it more neutral.
And this applies to both ends of the political spectrum: https://www.youtube.com/watch?v=nYlZiWK2Iy8
I’d say it’s even closer to “people I disagree with”.
I've just assumed it was a vague insult aimed at non-nominal conservatives.
To quote Merriam Webster:
"one of a group of militant whites who espouse white supremacy and advocate enforced racial segregation"
I am not of the opinion that white nationalism has no real meaning.
Nazis in Greece like their country or what they perceive as “their country”.
Note how you will never hear the term "ethnic supremacists" used by conventional media. It's too accurate and does not push the borderless agenda.
That’s different from saying things like “get that Spanish off the menu, this is America”, “go home foreigners”, and “immigrants are criminals”.
I’m sure you can see the difference.
I don't really care what FB does, I prefer it to have all the rope it needs, but this normalization of taking and modifying the meaning of terms to fit the anti-borders narrative is dishonest and manipulative. Again, it's not even remotely a new thing, the anti-borders crowd has been gunning against nationalism time eternal.
Your comment suggests you have internalized the idea that nationalist and racist are the same thing. Or were you really thinking I wanted to use "white nationalist" (or any color) in some other context?
Let's take your question as true for the sake of discussion. Do you take your language cues from these people? I don't. I don't see why you would let people you strongly disagree with decide what language to use. Are you concerned you might offend them by not using their preferred terms?
Might you be opening yourself up to some rather trivial social engineering opportunities?
Anyway... def don't talk about frames.
What do you think "it's an inherently dishonest frame" means?
Frames are important, it's why I asked why you thought I was bemoaning the loss of a phrase when I was really describing how the phrase itself is dishonest.
We’ve done this kind of thing before, and we’ll do it again. Facebook responding like this IS the marketplace of ideas reacting.
They'll crow and gloat about how the scene isn't a safe space, that it's founded in hate and intolerance. But as soon as well-known nazi/NSBM-related bands or members of the scene are deplatformed and antagonized for their views, the tone goes straight to "get these leftists out, this is a right wing scene! This is censorship!" and so on.
Apparently only right wing radicals should be allowed to have safe spaces... /s
But this isn't what will happen. Expressing skepticism about immigration - a fairly normal and mainstream opinion in the 1990s and early 2000s - can easily be construed today as "white nationalism" by many on the political left. It's not you who gets to define what your views are on these platforms, it's the "moderators".
Welcome to not having a safe space anymore.
This will be an ongoing project and will require constant tweaking and adjustment, but I’m all for their removal from the public sphere.
There are underlying economic and social issues leading to the reemergence of this worldview that need to be dealt with urgently and with peacemaking intention. Still, in the meantime this kind of thought if unchecked leads to genocide and must be stopped.
I don't consider myself Republican, but the slippery slope seems all too real in this case.
Facebook banning outside ads for the Ireland abortion referendum comes to mind, as well as the case where a Christian satire site was threatened with a ban for spreading false information because Snopes had fact-checked one of their articles as false. Susan B. Anthony List had a bunch of pro-life ads banned; I think most of them were restored.
https://stream.org/328684-2/ this is a more recent case that follows a familiar pattern: Flagged, rejected appeal, then mysteriously overturned appeal some days later.
Hopefully, you also don't doubt that there are also many examples of Facebook censoring liberal/left/non-Republican whatever stuff, and then overturning on appeal, many times mysteriously so, after initially rejecting the appeal. (This post presents like a dozen examples in the genre of black activists being banned for discussion of racism, such as uploading screenshots of racist and sexist harassment they received. You might also want a warning that these examples are interspersed with the author's strongly opinionated and bellicose comments. https://medium.com/@thedididelgado/mark-zuckerberg-hates-bla... )
Due to their scale and the shittiness of their algorithms and processes at this kind of thing, surely you're not surprised there are tons of examples on every side. (This article documents more examples on all sides, including exemptions for a prominent Republican: https://www.propublica.org/article/facebook-hate-speech-cens... )
What reason do you have to believe these mistakes always seem to hurt conservative causes, or that you tend to see conservative stuff get caught disproportionately more than non-conservative stuff?
If you want to convince me that it's more of a "establishment/fringe" divide than "left/right", that ProPublica piece goes a long way towards making that case.
What party? The government is gonna come into a private organization and tell them they are obligated to spend money to preserve, host, and broadcast hate speech that doesn't align with the company's values?
Please tell me how this differs from telling small business owners they can't refuse service.
They're welcome to censor and editorialize, but then they lose their protections from copyright suits for relaying copyrighted material uploaded by their users.
For Facebook to be immune from copyright liability for my uploads, when they display them to others publicly for profit, they cannot exercise prior restraint over my upload. Such commercial copyright violations carry hefty penalties, in the thousands of dollars per view: Facebook and Google can't operate in an environment where they're liable to such a degree for uploads.
The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.) They're free to not accept that deal, but they're liable for their commercial copyright infringement in that case.
I read and re-read both the DMCA 17 USC § 512 and Section 230 of the CDA, and as far as I can tell, the DMCA only requires responding "expeditiously" to DMCA takedown notices and court orders, and the CDA has no conditions at all but doesn't protect from copyright liability in the first place.
In fact, the CDA explicitly states that its liability protection DOESN'T require neutrality, and extends to “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. See https://www.eff.org/issues/bloggers/legal/liability/230
An anthology is not exempted under § 512(c).
So we'd have to pull relevant case history. You can't just look at the words in a statute. Each single term of art could have a chain of cases arguing over the specific meaning of that term.
No, it's not. You seem to be mixing an inverted understanding of CDA Section 230 (which was instituted to avoid discouraging host moderation of user content, because prior to 230 exerting any such editorial control risked a host being treated as a publisher rather than distributor, with greater liability exposure for the user content) with the DMCA safe harbor (which, unlike CDA 230, applies to copyright claims.)
Is this an 'A therefore B' thing due to the way the laws are written, or is it explicitly called out someplace?
> The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.)
I did a bit of looking (though not a lot) to try and find some sources on this but I wasn't able to really uncover anything that supports this.
Do you have any source available? I'm interested in reading into this idea further, I find it rather fascinating.
In fact, it links to another page which states:
> Wow, is there anything Section 230 can't do?
> Yes. It does not apply to [...] intellectual property law
Which directly contradicts your implication that Section 230 is the law that grandparent was referring to when they stated:
> The government [...] granted [internet services] immunity to copyright related suits [...] (This is a law.)
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
Sure, but it (1) doesn't apply to copyright—that's the DMCA safe harbor not the CDA one, and (2) was specifically created to eliminate the added liability web hosts were then subject to if they engaged in content moderation, not to require them to abstain from moderation to secure the safe harbor.
I'm pretty libertarian with respect to market regulation, but when it comes to monopolistic industries you need regulation.
That's an interesting question. What if Google and Facebook secretly demoted and hid articles about one political party while promoting articles about another? In theory they're absolutely free to do so, but do you think that would trigger a Congressional investigation? Those people have made noise about much smaller things.
Also, what if Google started demoting its competitors and everyone who deals with those "untouchable" companies? That would be interesting to see.
I think people don't realise what power these new media have. The monopoly of Facebook or Google is a serious issue and we should not believe that they will always stay neutral.
> I received off-the-record information that an order had come from the very top: CNN executive Jeff Zucker didn’t want me on CNN any more. My centrist, nuanced perspective was anathema to CNN’s emerging brand as the anti-Trump network.
That should make it clear that a very small number of media executives ultimately decide what we see on the news.
In theory: All accused get representation regardless of accusation
In practice: Only the rich who are accused of vile things get a proper defense.
Small point, and while I agree overall, I'm not sure the media ‘selected’ Hillary. I think the Obama Admin did, and after all, it was her turn.
> It seems to me that Facebook and Twitter are trying to have it both ways.
Of course Facebook and Twitter are trying to have it both ways, and, indeed, all ways — there are many more ways for people to communicate, or stances to take, than “both”.
Platforms like FB and Twitter hope to be the communications backbone of the world. The problem is, the world has opinions on what kinds of communications are acceptable. These platforms try to stay neutral, but the people are not.
Neutral does not exist. It's all relative.
In a polarized world, a neutral platform will die because neither side will like it. In a more interesting world, it might still die because people don't like people who don't think like them.
The problem with FB is that they have built a system that rewards polarizing opinions. Edward Deming said that your system is perfectly set up to give you the results you are getting, so if you want different ones, you need to change your system. Incentivize quality, disincentivize "viral-ness". Maybe limit viral-ness. Optimize for something besides addictiveness^Wengagement. Admit that people think, say, and do harmful things and build a system that is robust to it. Add some kind of negative feedback for posts.
Essentially, you have no control so the only solution is to not participate or appeal to a higher authority. Both are terrible.
In a weird tangental side-thought: The Internet is to Western Capitalism what Glasnost/Perestroika was to the Soviet Union. The opening of information, while allowing many great things through, also removed the filters that kept harmful content on the margins. In the Soviet state, it was the authority of the State that dictated content. In the Western world, it's mostly those who own/control large media platforms. Since liberalization, each situation found The Authority under acerbic attack from these new wellsprings of content, both legitimate and illegitimate.
Come over to Australia, that is what our traditional media has been like for years!
As for liability, here's why the laws intentionally shield them:
And finally, there's nothing forcing you to use them. Convenience (or lack of it) is not a sufficient argument for regulation.
Could you explain how this is any different than the handful of television companies for the past 70 years?
As best as I can tell, the older demographics seem to be the ones primarily watching the news, and it shows in polling data. Their opinions and narratives are easily manipulated by the "news" networks they are faithful to. Companies run by a "small group of people."
When they censor opinions that the top 10% agrees with, since those are the people who control the media and decide what's acceptable.
You can only remember the lessons of the past if you understood them in the first place. And frankly, arguing for more content controls by authorities is an insult to anyone actually having an understanding of those times.
In fairness to Google and Facebook, etc. TV networks have always been super biased and had far more reach than internet platforms - and it was actually conservatives who shut down (with good reason) the "fairness doctrine" that would have forced them to carry content they didn't necessarily agree with.
Repeating a previous study, which had shown bias in the 2016 election, Dr. Robert Epstein shows that bias in the 2018 election pushed voters towards Democrats. Approximately 4.6 million undecided voters are likely to have flipped. Numerous districts, particularly CA-45, are likely to have been flipped by Google's weaponized bias.
There have also been an awful lot of bans of first-time politicians, all on one side, often reversed after the election (once the damage is done) with a lame excuse about algorithms making mistakes.
For example, he cites the Hillary Clinton instant-search debacle as an example of bias, which was debunked as technical illiteracy:
In practice, how will this actually work out?
True story. I have a friend who got a temporary ban from Facebook (I think 90 days?) for commenting on a story about a Texas billionaire paying to hunt endangered animals, "How much would it cost to hunt Texas billionaires?" That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.
Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity. Will we see discussion of a future such measure banned by Facebook on such grounds? And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?
Facebook is going to have some difficult conversations ahead. And the more of these lines that they draw, the more difficult boundary cases they will run into.
I also oppose it on general grounds in that I'm a big proponent of freedom of expression. Facebook can do what they want, we're not talking about the government here and I understand that, but I would rather they didn't go this route. I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent. I seem to be one of the few that feels that way anymore, though.
There is also the practical concern of what happens if the ideas of what is and isn't acceptable change over time. Who knows which opinions you hold now might become anathema at a later date? For example, the opinion I'm expressing now used to be a lot more common than it is today.
Let's prod at this:
- Should ISIS recruitment videos and propaganda be allowed (nay, encouraged) because deplatforming them will make them put their recruitment pamphlets elsewhere? Where else do they put their recruiting materials?
- Should we publicly and loudly encourage people to self harm, ideate suicide, etc. because if we don't, they'll just find secret places to do it?
- Should we consider white supremacist recruiting materials "dialogue" at all? If we're talking about dialogue, as in a formal debate between Richard Spencer and pretty much anyone else, one on one, that's potentially interesting (also embarrassing for Spencer). On the other hand, a video posted by a white supremacist isn't dialogue. It's even less dialogue when they can delete comments they can't aptly respond to, and when the people there are already interested (Richard Spencer videos were never going to cross my feed). You're calling this dialogue, but it's really a one-sided dog and pony show with maybe some unlucky sacrifices. Perhaps we shouldn't ban dialogue between white supremacists and normal people, but you're not advocating for dialogue; you're advocating for Facebook (and whomever else) to support (distribute, platform, etc.) white supremacist propaganda and theater.
>I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent.
How do you propose to create a society where people aren't allowed to dislike you? That's what you're asking for, essentially. "Freedom from consequences" is the common way of putting this, but really what you're asking for is an infringement on my freedom of association. If you piss enough people off, that'll come back to bite you. What's the alternative? That people can't hold you accountable for your previous words? That quickly devolves into a society of 4chan, which, well, I'm not sure why you'd want to live in that.
>For example, the opinion I'm expressing now used to be a lot more common than it is today.
How certain of this are you? Did black people or women have the freedoms you suggest 100-150 years ago in the US? Was anyone advocating for that?
How do we define "white supremacist recruiting materials"? A significant number of people have told me, un-ironically, that opposition to immigration is an instance of white supremacist speech. Same with opposition to affirmative action (they even called a crowd of mostly Asians opposing affirmative action white supremacy in action). And I live in the Bay Area, the same place where Facebook is headquartered. That's the problem with trying to define ideological blacklists: once you put a category onto the blacklist, everyone will try to push their political opponents into that category. This is how you get things like "learn to code" becoming a ban-worthy offense (but only when directed at journalists).
Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".
Do they define ISIS? That didn't seem to cause as much concern.
As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.
I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.
ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.
> As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.
"Preventing people from trying to kill people" was already against their terms of service. The whole point of this announcement is to announce the fact that Facebook is expanding their prohibited categories beyond "preventing people from trying to kill people".
> I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.
As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist. Others have told me that supporting any expansions of immigration restrictions is white nationalist. A few have even told me that opposition to affirmative action (even among groups in which Asians are the ones primarily opposing it) is white nationalism. I did not get the sense that they were saying these things ironically or in jest. Are these views tantamount to "preventing people from trying to kill people"? I live in San Francisco, which while not exactly the same environment as Menlo Park, is still in the same metro area as Facebook's HQ. There's a significant possibility that folks with similarly liberal definitions of white nationalism exist at Facebook.
Also, the way you say you would "probably try to distance myself from the things that were causing those mistakes" really makes it sound like the chilling effect this has on discussion is a feature, not a bug. A significant number of people suspect that tech companies' expansion of prohibited speech is becoming a means of partisan manipulation. Statements such as yours likely reinforce this belief.
> Your view appears to be that it's better to just prevent people from voicing their confusion.
I am not trying to prevent anyone from voicing anything. The issue is that a significant number of people do confuse (or deliberately conflate) mainstream political views with "white nationalism" and "white separatism". Thus, Facebook's banning of these things is very likely to be seen as - and perhaps actually be implemented as - a means of suppressing legitimate political discussion. It probably would have been better to keep the prohibited categories the same, more aggressively police certain circles, and keep the policy - as you put it - about "preventing people from trying to kill people".
There's already enough suspicion that Facebook is acting in a partisan manner, and more stuff like this is going to inspire ever greater calls to enforce stiffer regulation on tech companies and perhaps even breaking them up. This announcement seems like a shot in the foot for Facebook.
Are you suggesting that groups like the Daily Stormer, the National Policy Institute, etc. are not organized political groups? The National Policy Institute is quite literally a lobbying organization.
>As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist.
Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.
Something about everywhere you walk smelling like shit and all.
Facebook's announcements do not specify these groups in particular, and do not seem to indicate that its definition of "white nationalism" and "white separatism" will be nearly as clearly defined as banning recruitment to the Caliphate.
> Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.
> Something about everywhere you walk smelling like shit and all.
Sorry to burst your bubble, but calling people white nationalists almost certainly isn't going to change their views. Quite the opposite: all it accomplishes is lessening the force of these terms, making people roll their eyes when they see the label thrown around. And it further alienates people like me, who do support immigration and affirmative action etc., but are increasingly turned off by the ever-diminishing threshold at which terms like these get thrown around. Furthermore, it makes it even harder to distinguish between actual white nationalists and legitimate views that people are trying to make socially unacceptable by painting them with the white nationalist brush.
Something about crying wolf and all.
After that, the damage has been done and trust in Facebook will have been diminished. And there is a very big difference between "crying wolf" (as in, actually mislabeling something benign as something dangerous) and voicing concern based on previously observed behavior. Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.
> Sure, but there are few enough white nationalists now that I don't particularly need to worry about them. The issue is if their mindshare grows. And calling them out keeps it that way.
You seem to be missing the core point I've been making. The issue is not with actual white nationalists which, as you point out, are few and far between. The issue is with mainstream views getting consistently labeled as white nationalist which introduces several issues. One, it makes distinguishing between the former and the latter more difficult making it very likely that mainstream views get banned under the label of white nationalist. And two, it makes it so that people are less concerned with claims of white nationalism thus making it more acceptable for the actual white nationalists to operate openly.
> See, I know a lot of people, and the only ones who ever say things like this are ones on the internet. I've never met a real actual human who is so concerned at the thought of someone calling someone else a name that they're going to stop supporting immigration reform or affirmative action.
You're right, but you're refuting a straw man. Alienation doesn't mean ceasing support for particular issues; more often than not it means refusing to identify with political groups. This isn't speculation, it's backed up by evidence: record numbers of people don't identify with either the Democrats or the Republicans.
> Like I said, you seem really, really concerned about not being called a white nationalist. If you really do support things like affirmative action and reasonable immigration policy, I'm not sure why you have such a concern. Either no one is calling you a white nationalist, in which case, again, why do you care about the nonexistent strawperson who might do so? Or the one person who is, is so fringe as to be easily ignored.
Yet again, you're talking about things I never wrote. I do not get called white nationalist and I don't think I ever have been called as such. But I do see co-workers and former classmates call mainstream views white nationalists, and I do see how it makes political discussion toxic and non-productive. These aren't strawmen, these are people I go to work with every day that are adopting stances that make it impossible for them to engage with people with opposing political views beyond hurling insults. Even though they aren't calling me a white nationalist, they're still calling my conservative family members and friends white nationalists when they say things like supporting the border wall makes someone a white nationalist. This isn't healthy for a democracy and it makes me not want to identify with the groups that they are a part of.
And to circle back to the original point that was made, the prevalence of mainstream getting called white nationalist makes it a real possibility that Facebook will start banning mainstream political views. If "build the wall" starts getting banned as white nationalist as many of my co-workers want it to be then Republicans are going to be very eager to bring down the regulatory hammer on Facebook and perhaps big tech companies in general.
So, you have historical evidence of facebook mislabeling things that are clearly not white supremacist as white supremacist?
If not, then yes, you're absolutely crying wolf, because you're ascribing the behavior of some entity to some unrelated entity, apparently based on geographic location (fun fact, the people enforcing FB's policies are probably in Austin or Phoenix, not MPK).
Also, your poll is outdated. The numbers are back up. And looking at broader trends, more people identify as liberal than ever before, so all of this stuff that you think is causing a shift towards conservatism, or at least away from identifying as a Democrat, isn't: people are more willing to identify as Democratic and liberal now than in 2016 or 2012. I don't think these are related, but since apparently you do, I hope this helps you understand that your reaction appears to be the minority reaction, and that this public shaming you so despise is working.
In a village there exists a boy that is tasked with protecting a flock of sheep by shouting "wolf!" if he sees a wolf, to alert the townsfolk to come to his aid. Out of boredom (or self-satisfaction of his ability to get a reaction out of the townsfolk, depending on the variation) he shouts "wolf!" despite not seeing any wolf. After a couple instances of false alarm, the townsfolk no longer heed the boy's alarm and do not come to his aid when a wolf really does attack the flock.
The boy knew that there was no wolf attack, but claimed that a wolf was attacking the sheep anyway. I am doing no such thing.
Facebook has only just announced this policy, so no one has any observation of how it is being enforced. This is obvious; the post itself states that the policy only goes into effect next week. That you are asking me for historical evidence regarding a policy that has yet to take effect suggests the original post was not read in much detail (though it does make your earlier claim that this announcement is about "preventing people from trying to kill people" a lot less surprising).
I am not claiming, and have never claimed, that Facebook is using the guise of white supremacy to ban mainstream politics. Again, this policy isn't even in effect yet. I am, however, highlighting the fact that a significant segment of Facebook's workforce (tech workers in the Bay Area) espouses a view of white supremacy that categorizes things like opposition to affirmative action and immigration as white supremacy. Will this affect how the rules are enforced? I don't know, but my take is that it's a significant risk.
Also, I'm not sure why you're claiming that my data is outdated. My data was from 2018, at which point independents were at 44%. The latest figure on your linked Gallup poll is 42% - not very far off. It's still significantly above the historical average of the mid 30s.
On the other hand, banning white nationalism as a whole would be more like banning Salafi Islam as a whole, instead of specifically ISIS, al-Qaida, al-Shabab, Boko Haram etc.
This is not correct. It is not illegal to be a member of, or recruit for, Atomwaffen. It's illegal to commit crimes. But associating with a group isn't a crime (for good reason!). That doesn't mean we shouldn't take steps to keep people from associating with people who will lead them to commit crimes.
Or, like, should we just go whole hog and encourage MS-13 to post recruitment videos on YouTube too?
As for salafism, no that's akin to something like the WBC, which while reasonably considered a hate group, hasn't been banned from anywhere as far as I know.
\* Which did stir up some controversy, but is not all that surprising. The precedence for this dates back to the US Civil War, it would be ridiculous to claim that the Union Army criminally murdered hundreds of thousands of US citizens on the battlefield without trial when the latter were fighting for the Confederacy.
1. Get exposed to $radical_group on social media
2. Go searching for $radical_group because their ideas are intriguing/edgy/whatever
3. Find the actual site of $radical_group, and now they control the entire process.
Think of it this way: you see a poster for some cool band that says "RadicalPandas's music will change your life. We have concerts, but they're secret and we can't tell you where they are because the man wants to keep us down". If anything, that sounds even more edgy and intriguing. So instead you just have to tear all of the posters down.
The actual planning and such happens on private sites with secret forums.
As a real-world example, a Danish right-wing nationalist network called ORG was exposed in 2011, after having allegedly been active since the mid-80s. Its members counted a number of well-to-do individuals, as well as members of the police and known violent white nationalists. The group ran and had control over several sports clubs, including martial arts and gun clubs, which they used for training members in street fighting and tactics.
They ran a database of basically every politician and public figure who ever espoused left-wing views, and thousands of ostensibly left-wing citizens as well, labeling them as "traitors" to be "dealt with".
At least one of their members was from the armed forces in some security-related capacity, and they had reasonably good opsec procedures in place.
I have no doubt that even after being exposed and having its members publicly named, the group or a new equivalent still exists, with new secret sites and a heightened level of paranoia towards possible infiltrators.
By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.
And somewhat related, does this mean you have as dismal a view of humanity as it seems, that you don't want people exposed to dangerous ideas? This all seems necessarily paternalistic. EDIT I can't imagine, for instance, trusting in democracy with such a view.
In fact, we humans already form a decent system for weeding out bad ideas/opinions, we just don't listen to our own past experiences on the matter. We found out that fascism is putrid and yet now we try to come up with a new system that will weed it out instead of just chucking it into the garbage bin and moving on.
It's not paternalistic or a dismal view of humanity or that "humanity cannot be trusted with such a dangerous idea", it's because the people advocating for it constantly lie about it and intentionally trick/indoctrinate others into following. Under the right circumstances of me growing up, I fully believe someone could have deceived me into believing it, so I don't think I'm better than anybody who got sucked in. We can trust in democracy as long as we take proper precaution against things that prey on the freedom of expression and association in order to remove those rights from others.
Put another way, what real benefit is there to "freedom of speech, except for advocating fascism" as opposed to "freedom of speech, no exceptions"? The US already doesn't have pure unrestricted free speech because there are exceptions made for outlier situations where free speech is not protected in efforts to secure the safety of others. That hasn't led to total collapse or censorship.
Personally, I feel there should be legal protections for all speech, but not protections from social ramifications. By this logic, I am totally fine with the idea of punching fascists. Which itself could be hit with assault charges, but then, how many juries would disagree with the reasoning for the violence? Let the masses decide for themselves, basically. It's what bothered me about the firing of James Gunn and Roseanne both despite their polar opposite politics. The companies that employed them cared so much about what the public thinks or feels that they couldn't be bothered to let the public decide for themselves to support either celeb or not. (And I know Gunn has since been rehired, but this happens so often, my point stands.)
I lean towards anarcho-communism myself, generally close to the original communist definition of libertarianism. So my answer to which authority should run things is "none". Facebook runs their own ship and are free to choose which content they will platform. While I would prefer a complete absence of hierarchy in all aspects of life, I think we can agree that this is probably not going to happen at FB.
I also strongly support anarchistic deplatforming and generally hindering fascists, nazis, racists and other bigots from spreading their noxious views. We have to realize at some point that some views simply aren't worth spreading.
Do you have any evidence to support that statement? Would you expand on why you think so? I would estimate that by driving them more underground, you make them seem more cool in the eyes of angry teenagers. You also make them more radical because underground their opinions are not exposed to contradicting views. In my opinion, we should do exactly the opposite - we should give them platform to speak and oppose them. Isn't that what open society is about?
There are a few studies in this thread that reach similar conclusions.
The whole "sunlight is the best disinfectant" notion is only a trope, not a truth.
See also any conspiracy theory and the anti-vax movement. When groups use fear, not logic, as a recruiting tool, you can't logic people away.
With regards to ISIS, if their recruitment videos are publicly available, then people can contest them publicly too. They can explain why this viewpoint or whatever might be appealing to certain demographics, but look at this evidence for what happens to those people once they join.
The self-harm / suicide question is harder for me. I still think it's a good idea to have this happen in the public sphere so people can expose those who encourage such things in other people. However, if an individual is being targeted specifically then maybe there should be a line there. I don't know.
As for white supremacy, yes, I said "dialogue" when what I really meant was "speech". I don't know who Richard Spencer is, but if he's blocking comments then people can still post rebuttals in other videos.
I guess my main point is that it's better to confront such things openly than to try to sweep them under the rug.
There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something. The danger, in my opinion, lies in trying to bury it. Then when people stumble upon them, or are recruited and given a login somewhere, they have access to only one side of the argument and are effectively in a bubble. In the public sphere where this plays out openly, they have ready access to conflicting information on the same platform.
I don't want a society where people aren't allowed to dislike me. I don't quite get where you're coming from. I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.
I wasn't alive 100 to 150 years ago. However, growing up 40-ish years ago, I remember hearing such things as "I don't agree with what you say but I will defend to the death your right to say it". Not something you hear much, any more.
At scale, this is exactly the case. If the probability of the average person becoming radicalized is greater than the probability of the average radical de-radicalizing, then open discourse will, on average, increase the number of radicals until those probabilities equalize.
So now I ask you: how successful have you seen discourse at deradicalizing Jihadis, white nationalists, westboro baptist church members, anti-vax parents, or flat-earthers? Is it more, or less, successful than those groups at radicalizing new people?
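The equalization dynamic described above can be sketched as a toy model. This is my own illustration with made-up probabilities, not data from any study: non-radicals radicalize with probability `p` per step, radicals de-radicalize with probability `q`.

```python
# Hypothetical toy model of the "probabilities equalize" argument.
# p: chance a non-radical becomes radicalized per step
# q: chance a radical de-radicalizes per step
def radical_fraction(p, q, steps=1000, r=0.0):
    """Iterate the expected radical fraction: r' = r + p*(1-r) - q*r."""
    for _ in range(steps):
        r = r + p * (1 - r) - q * r
    return r

# The fixed point is p / (p + q): whenever radicalization outpaces
# de-radicalization (p > q), more than half the population ends up
# radicalized in the long run, regardless of the starting fraction.
print(round(radical_fraction(0.02, 0.01), 3))  # -> 0.667 (= 0.02 / 0.03)
print(round(radical_fraction(0.01, 0.02), 3))  # -> 0.333
```

Under these assumptions, open discourse only keeps the radical fraction small if de-radicalization is substantially more likely per encounter than radicalization.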
>I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.
Someone says "I want to diddle your kid". You tell them to never speak to you again. They say it to someone else. That person blocks them too. The same person keeps telling everyone that they're interested in child molestation. Free association says that we should all be able to shun that person because yikes.
But you say something different. That we shouldn't shun the potential kid-diddler. We should continue embracing them and their views, because shunning them censors them. And all views should be cherished and protected.
Or maybe you aren't saying that, but then it's very tricky, because if everyone blocks them on facebook, they've been deplatformed. So why is that okay if facebook banning the person isn't? Or maybe it's that proclaiming support for child molestation isn't okay, but proclaiming support for genocide is. In which case we're right back where we are now, just that your definition of "for the most part" is "expression of child molestation is bad, but white supremacy is fine". This brings me to my final point:
>I don't agree with what you say but I will defend to the death your right to say it
This only works when what you're saying isn't a threat to me. See how willing people 30-40 years ago were to defend to the death the right of black power groups to call for black empowerment. (hint: people got so scared the republicans banned guns)
If I start saying "we should kill all free-speech absolutists", how long are you going to defend my ability to say that? Only as long as I don't have power. Once I have the ability to actually carry out my threats, say, if I'm the president, do you really want me saying I'm going to kill some subset of society? Are you going to defend my right to say that? What if you think I might actually carry out what I'm claiming?
As long as the speech isn't a threat, people are okay with defending it. This is why no one really cares about flat-earthers, they aren't killing anyone, they're just the easy butt of jokes. Same with, until very recently, anti-vaxxers. Jenny McCarthy is kooky and their stuff is nonsense, but what's the worry? Well, measles, the loss of herd immunity, etc -> hmm, maybe we should stop this kind of thing.
Sometimes it's possible to make the fix reactively. Kicking unvaccinated children out of school stops much of the harm that unvaccinated children cause, and in many cases forces parents to vaccinate their kids despite their views.
Unfortunately for explicitly violent groups, there isn't an easy reactive solution. You can't un-shoot someone.
As soon as people are threatened, they stop. "Expression of white supremacy is acceptable, but child molestation isn't" is just an expression of what you find threatening. What you're seeing is actually a shift toward more people speaking. Those who were voiceless before are speaking out and saying hey, this was really shitty before, let's not go back, and all the people saying otherwise are actually, truly threatening to me, so perhaps let's have them not say these things.
And on the other side you're seeing people threatened by this change in how people are considering speech, and feeling threatened, and saying "hey actually let's legislate these platforms so that I can keep spewing my garbage". White supremacists feel threatened by censorship, black people don't. That should tell you all you need.
If we are to judge by social media companies' past behavior, it's regular conservative and liberal opinions that will be considered hate speech or white supremacy.
To pre-address your other comment, neither white nationalist propaganda nor ISIS propaganda necessarily includes calls to violence. So the incitement of violence standard applies to neither.
If not, what differentiates ISIS from White nationalists? Both are violent groups that wish to kill people different from them. As far as I can tell, one just looks a lot more like me. They're both dangerous, and we shouldn't let either recruit people on facebook. What is the flaw in that line of reasoning?
As for that case, it was dismissed earlier this month, apparently because the plaintiffs didn't actually have any examples of wrongdoing on the part of the companies, and just filed the suit to raise awareness of Freedom Watch's advocacy.
On the other hand, what is being called alt-right and white supremacist by mainstream organizations is bullshit:
- The economist called Ben Shapiro alt right today https://mobile.twitter.com/TheEconomist/status/1111248348114...
- The guardian says Jordan Peterson is supposedly alt right https://www.google.com/amp/s/amp.theguardian.com/science/201...
- Bret Weinstein is supposedly alt right
And ridiculously enough, Ben Shapiro is a Jew who is the biggest hate target of the alt-right, and Jordan Peterson is hated by alt-right figures such as Richard Spencer and Vox Day.
White supremacists also advocate violence against other races, often to subjugate them. This is highly similar to ISIS.
Or are you claiming that all members of the alt right are white supremacists?
The SPLC also says they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...
The opinions of the SPLC and ADL are especially relevant because mainstream media and social media companies such as Facebook seem to rely on them and similar orgs to define actions against people.
So no, we are talking about the same poorly defined term, because these orgs are an authority to Facebook.
The reason it's bullshit is that these orgs don't tie the label to truth and evidence; they use their power to slap it onto anyone who disagrees with them and punish people for things they didn't do.
Unless you're of the opinion that "racism" and "white supremacy" are synonymous, the ADL does not define them as the same.
>So no, we are talking about the same poorly defined term.
To be clear, the "alt right" isn't white nationalist, it is, however, a white nationalist recruiting tool. Sort of like how you don't just start out as a random person and then the next day you become Jihadi Jane. You're slowly radicalized. The alt right to full-on white supremacist pipeline is the same way.
You start off interested in self help, so you read 12 rules for life; then get caught up in Peterson's weird ideas about western culture; then pretty soon you're seeing not just his youtube lectures, but other lectures about western culture; and then videos about the decline of white/western culture; and then you're watching Richard Spencer; then you shoot up a church.
Now absolutely, granted, not everyone follows the entire path, very few people end up radicalized, but while I'll absolutely agree with you that Peterson isn't a white supremacist, he's absolutely a useful idiot for them.
But again, let's stick to people who have been accused of being "White Supremacists" since you're the one using the term alt-right, not Facebook. Which innocent people are getting accidentally confused for "white supremacists" specifically?
> White supremacist Richard Spencer, who runs the National Policy Institute, a tiny white supremacist think tank, coined the term “Alternative Right”
Or from the SPLC source:
> The racist so-called “alt-right,” which came to prominence in late 2015, is white nationalism’s most recent formulation
Both clearly use them in the same breath.
We can argue semantics about one being the tool for the other etc etc. But that doesn't change the fact that they are being tightly related by organizations that are an authority to Facebook.
The SPLC and ADL are very aware of their power, and have tied many of the prominent dissenters of wildly different ideologies to what they claim is the alt-right: Maajid (an Arab Muslim; they lost a lawsuit on this one and apologized), Jordan Peterson (an individualist liberal), Ben Shapiro (a pretty normal conservative), Dave Rubin (a classical liberal), etc etc.
The SPLC and ADL are, by this evidence, too often bullshitters who don't tie what they say to truth, but instead to an ideology that is not preoccupied with truth-seeking. It is really a shame their views are elevated like this as a justification for Facebook's and other organizations' abuse of power.
Again, Facebook didn't use the word "alt-right". They're talking about white supremacists. You're arguing that these are the same thing. So if that's the case, you should have ample examples of people calling Jordan Peterson a white supremacist also.
If not, then maybe they aren't the same thing, and your equivocation is unwarranted and you should stop attempting to confuse the subject.
I've also shown that even the authoritative sources Facebook trusts use alt-right and white supremacist in the same breath when defining the terms. These articles are no accident; this is what they believe.
This is not fear mongering. This is people that don't mind using their power to accuse viewpoint opponents for things they didn't do and with no evidence claiming they hold reprehensible viewpoints.
Edit: can't reply due to message depth limit, but the debate here is about Facebook suppressing people like Jordan Peterson, Ben Shapiro, and Dave Rubin using arguments made by the SPLC and ADL such as the ones in [1,2]. Why do they have the right to suppress other people's viewpoints on dubious grounds with no recourse?
It is pretty clear from your arguments that you agree that they are related, or as you say "a white nationalist recruiting tool", regardless of how you otherwise view them. The process Facebook institutes will therefore suppress these viewpoints based on no evidence and by subjectively mischaracterizing them, with no recourse for this power abuse.
Once more: please give an example of someone calling Jordan Peterson a white supremacist, or alternatively admit that no one has done so and you were fearmongering.
I'm not disputing that JBP associates with white supremacists, or serves as a useful idiot for white supremacists. That's all well known. You claimed that people called Jordan Peterson a white supremacist. You still haven't justified that claim, and that's because, quite simply, it's false.
You're fearmongering. No one has called Jordan Peterson a white supremacist. That's just a factually incorrect statement. They've said he unintentionally helps white supremacists, he occasionally associates with them, take selfies with them sure, but for all your trying, you still haven't been able to find someone actually call jbp a white supremacist.
So perhaps, just maybe, your worry is misplaced.
I certainly do, but evidence (e.g. historic enforcement patterns) indicates that Facebook does not.
Note that my opinion on this isn't global and without nuance. There are things that Jordan Peterson does and says that aren't, at all, controversial and are even occasionally interesting and thought provoking. Not everything he does radicalizes people. Same with, I assume, Rubin and Shapiro, although I don't pay them enough attention to know or care either way.
However, that's all beside the point. Facebook doesn't appear to think that they're worth censoring. I'm not sure why my opinion is at all relevant.
You still haven't given me that example, by the way.
You really don't need hate speech and white supremacy policies to prohibit speech that is already illegal; you need them to suppress speech that some people might disagree with, when those people want to abuse their power to suppress it.
How do you think about the false positives?
These orgs provide the best definition we can get right now, because Facebook relies on others to define it instead of providing clear definitions, and these orgs are an authority to Facebook. If there was ever an attempt at using power while abdicating responsibility, that is it.
The reason any action built upon this is bullshit is that these orgs don't tie the label to truth and evidence; they use their power to slap it onto anyone who disagrees with them and punish people for things they didn't do.
To figure out if we disagree where it matters most: do you think it is justified to ban or suppress content on social media from people the SPLC/ADL view as part of the radicalization process, e.g. Jordan Peterson, Ben Shapiro, and Dave Rubin?
Shouldn't the same argument apply to the videos that radicalized those murderers?
Does that answer your question?
That's not only wrong it's highly offensive.
I googled for "extremism murder statistics". This is only the US, and I have not looked at the methodology, but it supports GP's claim, not yours. Please discuss, but do so using substance, not master suppression techniques.
Just to give you another source from the same people, which seems to agree with Wildgoose's parent, at least in its reasoning (albeit by making a few qualifiers on the dataset that bend it in favor of that reasoning), covering 2001-2016 in the US.
Unfortunately, trying to control others' viewpoints hurts yourself as much as anyone else, because you are making yourself, and what you can learn from others, less adaptive to reality.
The marketplace of ideas is not about throwing out unfounded claims and expecting others to treat them like propositions founded in truth-seeking. Rather, a claim will be treated as the bullshit it most likely is unless you supply evidence for it.
That's a win in this case. It already festers somewhere else. White nationalists would like to spread their ideology outside of their normal bubble. They often talk about "redpilling normies." It's harder for them to do that if "normal" people have to explicitly seek out that kind of speech.
excluding people from a conversation doesn't help them learn why/how what they think is wrong.. those "hidden" places that let the wrongthink fester only create more actions driven by wrongthink (guess what they talk about? wrongthink).
i dont understand the logic of removing someones ability to converse if they dont understand why they're wrong.. they cant ask questions
someone explain to me how banning content/people = deradicalizing
I've been in fb group chats with holocaust deniers etc. (mans literally inboxed me a youtube video, I never bothered clicking the links he sent but it was funny)
"banning white nationalist content" is a cute headline but doesn't hit the problem of seemingly benign normal users that would never "reveal their power level" slowly radicalizing their friends with content not on their site lol
The Rhetoric Tricks, Traps, and Tactics of White Nationalism
the overt white nationalism content you think of doesn't really exist, i guess maybe if ur a boomer it still lingers? idk
it's less orchestrated, and just friendly up to a point
I don't think I've ever seen someone already radicalized reason their way out of it through conversing on the internet. I'm guessing the calculus is that exposing this content to people makes it easy to be suckered in, but we don't see the opposite effect.
That said, I think I generally agree that conversion via social media is unlikely. Still, I’m also not sure about my position on banning people. But the above article is a great read.
that's how little it takes to slowly expose people to content, and how useless it is to "ban white nationalist content"
v cute headline tho
Is that true? Would we have the anti-vax movement if it weren't for Facebook & co.?
And if it were banned, they'd just spin it as the pharma companies paying to suppress the truth. It would only further confirm their beliefs.
Throughout all of mankind's history we had strong mechanisms to form consensus, including social repercussions. Those don't work anymore, since what would have made someone an outcast in earlier generations now lets them easily (for example on facebook) find like-minded communities and fulfill their social needs/get approval/etc. It'll be interesting to see how different the realities people believe in can get before society breaks apart. I'd prefer we won't let that happen.
The question is what actually works. Censoring them might reinforce their beliefs, but I can accept giving up on some if it effectively stops the spread.
This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology. What kinds of things were suppressed the hardest? Sexual deviancy. Questioning authority. Questioning religion. Do you really want this kind of society? I think maybe we can stand some anti-vaxxers...
Mechanisms to form consensus do not necessarily mean rule by mob; quite the opposite. Positive consensus mechanisms can be trust in the scientific method, institutional credibility, and acceptance of reason. These mechanisms can support an Enlightenment ideology, not prevent it.
"Freedom of speech" is not an accident. Enlightenment thinkers have been pondering this for 200 years and more, and objections have been successfully addressed over and over again. It's as much of the type of consensus you describe as we will ever have. 'But computers' is not sufficient to just do away with it.
Make them pay a material cost: link vaccination to welfare benefits/family tax rebates, works well in Australia (search no jab, no pay). Also make an up to date vaccination record a requirement for enrolment into schools.
They will complain and might keep on spouting shit, but at the cost of ~$10k/child/year they'll change their actions pretty quick.
So, you're saying this mechanism is a good thing? Because mechanisms that reinforce the current social consensus, whatever it might be for that era, tend to maintain the status quo. And a desire to maintain the status quo is a big chunk of the philosophy of, yep, that's right, ... conservatives.
I've often said progressives aren't liberals because they don't agree with liberal philosophical values of the Enlightenment. Their "maintain the current societal consensus" argument is the strongest evidence of that yet.
And throughout history people lived to be 30 and died of plague wallowing in their own excrement thinking the devil did it.
If your ideology needs thought control to work it needs to be taken out at the back of the sheds and shot. The second a society stops discussing ideas openly is the second it starts sliding towards a new dark age.
But what actually happened was that those numbers were inflated by his followers, and without the echo chamber of the larger base he had on YouTube, his numbers eventually plummeted. Instead of being a regular in the news, he occupies a small corner of the internet without access to the larger impressionable, ever-refreshing base he once had.
He has now resorted to trying to sneak videos back onto YouTube.
And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?
Because one was an indispensable paragon of civil rights and an avatar of anti-racism, and the other is a shrieking whackjob entertainer peddling freeze-dried survivalist food while spreading lies, mental illness, and grief to families of murdered kids.
Hey, you're the one who brought up history.
Hey, you're the one who failed to think things through. The tired old "b-but deplatforming will only ever be used against _bad_ people" argument is exactly a failure to appreciate the lessons of history, of which MLK is an excellent example.
No, I'm the one who corrected an utterly insane equivalence drawn between a commercial figure who drives grieving fathers of murdered kids to suicide and a man who was literally killed for his demanding equal treatment for all people. Again: if you bring up history, you are now forced to deal with historical results and categories. Deal with them. Don't just pretend you are.
It's ironic that, on a site where the UNIX philosophy is widely appreciated, that so many fail to appreciate the wisdom of the old quote "UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things.". So too, with Alex Jones, MLK, and freedom of speech.
“Most of your peers consider this post racist neonazi propaganda.”
That isn't so clear: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=139661
It's not clear how to preserve a forum for free expression while spammers or trolls seek to suck up all the available bandwidth for their messages. You can end up with a sort of spectrum allocation problem, where if everyone is allowed to crowd the same frequency then communication on it becomes impossible. But once you start moderating content, it's really hard to find a satisfying balance. And on the margins, some trolls are indistinguishable from some people with really unbelievable opinions.
I don't know what the answer is, it seems like a really complicated problem.
> Our own review of hate figures and organizations – as defined by our Dangerous Individuals & Organizations policy – further revealed the overlap between white nationalism and separatism and white supremacy.
It doesn't appear that this content is being banned because Facebook surrogates find the politics in question abhorrent (though they likely do so find). It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence. They collected evidence of this before acting.
The line is somewhat blurry, but it does exist. On one side, we have a sensible use of editorial discretion; on the other, we have censorship. Is Facebook going to cross the line at some point? Almost certainly. Basically every individual or entity to have editorial discretion has both made mistakes and abused that power.
It remains to be seen to what extent that will happen here. If anything, they've erred on the side of permissiveness so far (e.g. how long it took to ban ISIS). If that changes, we should call them out at that time.
That would be believable if it were applied as such across the board, e.g. also removing Christian Dominionists, Stalinists, Salafi Muslims, NoI, etc.
I doubt they have enough data to show the links for the other groups.
Often it isn't meant as a threat, but instead is a political statement calling for the end to the hegemony of the West. However, sometimes it is meant as hate speech (often when associated with calls for the destruction of Israel).
As for the others--when's the last time a Stalinist drove a car into a crowd or committed a mass shooting?
But it's not at all! Viewing racism as a left/right thing is wrong altogether. While it isn't obvious in US politics today, the left has often had a problem with race too.
Racism should be abhorrent to both left and right, and that should be independent of immigration views.
Which hilariously is a concept born almost exclusively out of modern leftist and current "_blank_ studies" thought/ideology.
Well actually Facebook has data showing this, so I don't think that's correct.
Obviously there are other kinds of content that Facebook could ban, but merely because those things exist doesn't mean this shouldn't also be banned.
I believe private companies are usually better, but still highly dependent on size and a number of other factors.
I definitely don't have faith that FB will wield this well, and the worst part is it will probably be pretty hard to find out from the silenced parties when they f!@# up until much later.
The framers of the constitution could certainly have included a requirement that private institutions follow the same rules as the government. There were certainly big and powerful non-governmental organizations at the time -- the Catholic Church, for example.
Take email for instance: first class mail is protected by the 4th Amendment, and a warrant is required to open and read first class mail. Move communications to a private digital platform and the presumption of privacy is completely flipped into a presumption that all your emails will be read and stored for future reading. When all communications go online, that leads to a very significant change in how the law applies to private communications, and that is exploited by the government just as much as it is exploited by the corporations. After many years of a free-for-all in which law enforcement and intelligence claimed they could view anything online without a warrant, there was finally some push back by courts to reassert the rule of law, but email still has substantially less privacy than first class mail.
Speech on public forums is a similar case. The 1st amendment applies to the town square but if a corporation creates a digital town square then it gets the right of censorship. Then the government applies pressure on the corporation to censor on their behalf and the government begins to exercise a power it did not previously have. That was not a problem when online platforms were small and inconsequential, but when they grow to billions of users globally, then the power to censor speech on those platforms becomes quite influential on the exercise of real world power.
Corporations do not have police powers but they are subjects of governments that do, so once they get power they can always be made tools of the people that hold power over them.
The whole point of defending speech you don't agree with is that it promises that arbitrary rules won't be used when YOUR speech runs afoul of someone else's feelings.
Let me phrase it to you like this... if Mark Zuckerberg, who you may generally agree with, leaves Facebook and is replaced by, let's say, Donald Trump and his assistant Mike Pence, do you still think it should be Facebook as a service provider's policy to gauge what offensive speech is?
Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.
The problem is not that I personally disagree with white supremacy. The problem is that white supremacist organizations are a significant and growing terrorist threat, and they are using Facebook to recruit new members. This is no different from the various platforms that banned ISIS.
3 of the 4 graphs against time showed right-wing extremism increasing, in Western Europe and North America, the UK, and the USA, respectively. The only graph that didn't show that right-wing extremism is a growing problem instead showed that it has been consistently bad in Germany, worse than the other two categories combined (in that graph, the other two categories were left-wing attacks and "Not Identified").
What's arguable about it?
The poster is pretty much correct.
My view on this action is a view on this action, not on every potential future action Facebook might take that is loosely analogous.
> if Mark Zuckerberg who you may generally agree with leaves Facebook and is replaced by let's say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider's policy to gauge what offensive speech is?
I also think that they would do a horrible job of it, and the board of Facebook should not choose them for that job. I also think Donald Trump does a horrible job directing the policy of the executive branch of the US government, but I don't think that fact means the President of the United States should not direct executive policy.
If I thought that no one should have any responsibility or authority if Donald Trump would fail in the responsibility or misuse the authority, then, well, I wouldn't allow anyone to do anything.
> Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.
It's absolutely Facebook's job to decide what messages they are willing to relay on their platform. That's a direct consequence of the First Amendment.
I think it's foolish to wring your hands over the arbitrary machinations of a multibillion dollar corporation's online platform. They don't care about you or me and frankly they have no obligation to do so. This logic is akin to complaining that McDonalds puts too much lard in the fries, maybe that's true, but if that's the case then don't give McDonalds your business; the fact that there is a McDonalds on every street corner doesn't mean they are obligated to modify their business process to be within the parameters of your approval.
> if Mark Zuckerberg who you may generally agree with leaves Facebook and is replaced by let's say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider's policy to gauge what offensive speech is?
That's Facebook's prerogative. I don't really use Facebook, but if I felt like Facebook was unfairly targeting me I'd stop using their platform, if they elected Trump to the board of directors I'd stop using their platform. The solution to all these Facebook problems is very simple: stop using it.
Because I'm not a white nationalist? Facebook already bans child pornography, pro-ISIS content, and doxxing. Does anyone seriously believe that Facebook is slippery sloping to banning all speech?
But let's say that Trump and Pence do indeed take over Facebook and make it so they ban all users that don't loudly praise Trump. If they do, honestly, so what? Facebook can't imprison you or legally remove your property. The worst they can do is ban you from their platform. Which, well, that sucks, but there's lots of sites on the Internet. In fact, you could go ahead and legally make a non-censorship Facebook.
...of course, sites like that already exist, in this world, not in the Trump/Pence Facebook world. And they're dominated by child pornography, white nationalism, and doxxing. No one at large uses them because they're horribly toxic and disgusting. Almost like some rules about content are actually helpful for sites on the Internet.
But I believe my comment said legal things. Having a different opinion is still legal, for now. You jumped to all sorts of illegal things.
And my context was what if “the bad people” are suddenly running these services or in positions of power and relative to them, you are the one with hate speech? Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.
> Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.
You have got to be kidding. If it's on a platform I control I can make any kind of legal judgments about the content I find acceptable. And if you're banned, again, so what? Facebook is not a government or even a government-adjacent entity: I have no right to participate on the site, and they have no obligation to platform my speech.
That line is grey and I'd argue that your friend's comment, while obviously hyperbole, does not respect human life.
There was an article posted about cyclists that points out the danger of dehumanizing others, and that comedy/hyperbole helps groupthink achieve it.
That's what bullies usually say when you confront them.
Even if someone commented that as "sarcasm" it would look pretty bullying to me.
We've had Nazis and Klansmen effectively shut out of public discourse for decades. They aren't on TV. Their ads don't run in newspapers. If you go shopping in a pointy white hood and a swastika armband, the store (or the shoppers, more insistently) will likely ask you to leave.
Facebook (and similar) are the ones with the odd new 'principle' thing they tried which is 'it's ok for the hyper-overt bigots to shit everywhere with no consequences'. In practice, it has not worked out well so they are belatedly changing it.
This is the end of freedom and, along with Europe's recent decisions, the beginning of all the worst dystopias we have already heard, read, and watched about.
In practice, we've seen a lot of examples across YouTube, Facebook, and Twitter where things are blanket deplatformed under the guise of policies like these, and it's concerning.
One thing is for certain - Facebook can't possibly do a worse job than YouTube has at policing content.
I mean, we're discussing this topic on a site that heavily polices content, albeit in different ways, but ways so different that they attract the very commenters in this thread to read and respond to content here rather than visiting other sites, or any at all.
Did she know that the person she directed that comment to did not know her?
On the other hand, there's bound to be people who believe fair-trial-judicially-decided capital punishment would be fair retribution for anyone who intentionally kills endangered animals.
I'm going to say something like: I'm against capital punishment because it tends to kill innocent people at least occasionally, not because it isn't often deserved.
What is the parallel? In both cases, people who did not recognize the author's point took offense. (And yes, a lot of people were seriously horrified by Swift.) And in both cases the author's actual point was close to the direct opposite of the one that those people thought. Some people found it obvious, others didn't. To people who found it obvious, it can be surprising that it wasn't obvious to others. People who didn't find it obvious think it a horrible thing to say.
In this case the comment is directed against the justification that the billionaire offered: that it is OK for him to kill the animal because he paid lots of money to do so. And her point is that paying lots of money to do a wrong thing doesn't make it OK to do that wrong thing. To see it, put the billionaire in the animal's shoes. How much money would it take to make hunting him OK? Obviously no amount of money would suffice! Just as the billionaire's having paid lots of money didn't make his hunting OK either.
That said, this one actually gets complicated. The money from these hunts goes to anti-poaching efforts. So the billionaire kills one animal, and his money saves others. Which still makes the billionaire a shitty person, but there is a utilitarian argument for allowing it.
That said, how would you feel if we were talking about hunting children dying in a famine instead of black rhinos on a preserve? The same utilitarian argument applies, but I think most would be for putting the billionaire in jail. How you feel about that is likely close to how that friend feels about what actually happened.
I am quite sure that Texas billionaire Lacy Harber does not know her.
I am also quite sure that the person who shared the post she replied to about his hunting an endangered black rhino did know her.
It would also be a safe bet that some friends of friends who saw that comment did not know her.
I would say that she "directed that comment to" the second. It is impossible to tell who reported her, or what they thought.
Wouldn't Facebook know these things?
Under what circumstances do we have a right to confront our accusers?
Which means that anyone who has created an anonymous complaint system has traded off your right to confront an accuser with the accuser's right to not be intimidated and decided in favor of the accuser.
However there is usually another counterbalance, such as having the accusation silently disappear unless a neutral third party thinks that there is a point to the accusation.
Ends vs. means, buddy. Ends vs means. Every single political topic on HN seems to be arguments between people who can't separate the two, and those who do.
No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".
> And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?
This is a non-issue and just scaremongering, not all that different from "they'll allow sex with children next!" from the anti-gay idiots (no one is confused between gay rights and "pedo rights").
You clearly haven't been paying attention to modern-day discourse over issues like illegal immigration and such. If you listened to some people (unironically the same people pushing for stuff like this), having an issue with illegal immigration is basically treated like a confession of guilt. And that's even ignoring some of the newer types of arguments they are making: that merely holding an opinion that could be seen (as determined solely by them) as problematic or a "whistle" (in effect anything to the right of left-of-center, or non-PC) is thus a "gateway to extremism", so "hate speech" needs to be banned next.
Furthermore, as soon as the literal neo-Nazis get criticized, the anti-immigration crowd either outright jumps to their defence or shrugs any concerns off. Trump's "very fine people" remark is a good example of that, or a recent article which "proved" anti-conservative bias on Twitter by pointing out that people like David Duke (former KKK grand wizard) or Richard Spencer (literal neo-Nazi who wants to forcibly eject all Jews and Blacks from the US) got banned from Twitter.
So, if you don't want to be treated like a duck, then don't walk and quack like one. How else am I supposed to interpret an article defending literal neo-Nazis as "Conservatives treated harshly on Twitter"? Look at the data that is presented: it literally includes the American Nazi Party. Now, if you want to make the argument that we should allow these people because "muh free peach", then okay, but read that article again; it just talks about "Conservatives" and "Trump supporters" (aside: there are other problems with this "study" as well, such as not including various Liberal accounts that were banned for unstated reasons).
I also agree that sometimes anti-immigration views are brushed aside as "racist" far too quickly, and it annoys me as well. But this kind of confusion is a bed of your own making.
I am not as pro-immigration as you might think based on the above. I think there are some real problems caused by both legal and illegal immigration, and that they should be addressed. But it's plenty evident that the anti-immigration right has long since been infested by some very nasty people, which is doing a great disservice to everyone else, especially those with anti-immigration views.
I suggest you first take some effort to purge the toxicity before complaining about "PC".
Idiots like David Duke and Richard Spencer make weak strawmen here, and I'm not even going to play into that game.
The problem is that paying better attention to whom you... defend(?), works both ways. Certain groups and people are literally believed and "protected" by the mainstream media by default, when they are clearly coming from a pretty strong POV.
>your own making
No it's a silencing tactic, no more and no less.
When one side stops calling everybody else Nazis, then maybe we can all come together and have an adult talk about "toxicity". But that's not going to happen as long as some keep acting like little children on the playground, calling others names to shut down debate.
And "one side calling everybody Nazis"? Really? Did you forget that Dinesh D'Souza wrote an entire book calling the left "Nazis"? That there were ACA protesters with Obama defaced as Hitler? Never mind shining examples of "adult talk" such as "Assume the Left Lies, and You Will Discover the Truth".
I am opposed in principle.
White nationalism (or black nationalism, or yellow nationalism) should be able to speak on a social platform like Facebook.