I agree with the EFF's conclusion for the most part here.
Like, sure, this DDoS provider may not have enough of a presence in Belize anymore to hold those IPs, but it's an IP block; probably more than just Parler is using them. Also, tons of companies open shell offices for domain names and IPs. Heck, there are even services that will provide just that for you. It's pretty common, so this seems like pretty selective enforcement.
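To make the "it's an IP block" point concrete: registries delegate addresses as CIDR prefixes, and reclaiming a prefix affects every host inside it, not just one customer. A minimal sketch with Python's standard `ipaddress` module (the /22 prefix below is illustrative, not necessarily DDoS-Guard's actual allocation):

```python
import ipaddress

# A hypothetical /22 allocation: 2**(32-22) = 1024 addresses.
block = ipaddress.ip_network("190.115.16.0/22")
print(block.num_addresses)  # 1024

# Many unrelated customers can sit inside the same block,
# so revoking the block hits all of them at once.
host_a = ipaddress.ip_address("190.115.16.10")
host_b = ipaddress.ip_address("190.115.19.200")
print(host_a in block, host_b in block)  # True True
```

The membership test is just a prefix comparison, which is why a registry action on one prefix can knock out services that have nothing to do with each other.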
The complaint raised to LACNIC was a real one, and once they investigated, it was no longer about the content but about their policies.
You may disagree with that policy, but I personally believe it makes sense. There's a big shortage of IPv4 addresses, so it makes sense that if you find out they aren't being used in your region, you take them back. Imagine how you would feel to be in the LACNIC region struggling to get a few IPs, only to find out that a Russian company got thousands of them from your region...
The strange thing, though, is that they don't mention anything about the actual infrastructure. Do they hold the IPs but use them in other regions? Or was the mere fact that they weren't used locally the issue? If I were LACNIC, I would have just required them to use local infrastructure, which I'm sure DDoS-Guard can do.
If it turns out that lots of the IP address space is doled out based on semi-fraudulent claims like this (assuming that Guilmette's claim is correct that DDoS-Guard doesn't really have a physical Latin American presence), but the only cases where the corresponding NIC takes action are when someone complains for content-motivated reasons, then... that's biased selective enforcement.
In principle, it would be like police enforcing certain traffic laws (say, about changing lanes or making a turn without signaling) much more strictly against black people than against white people.
And selective enforcement at the infrastructure level is much more serious than it happening at the edge level (see https://www.techdirt.com/articles/20190712/14400642581/failu... for why this distinction is important).
Shady because they're Russians? Or shady because they're using a shell company to achieve their goals?
I don't necessarily see this as a bad thing; it's more or less inevitable, so the focus should be on having the "very objectionable things" actually confined to a set that is sort of agreed upon.
This is easily reconciled if you distinguish between the public officials doing the selective enforcing, and private citizens selectively deciding whom to do research on. Let's say 20% of waiters at a restaurant under-report their tips to the IRS. It would be objectionable if the IRS decided to target one guy in particular because an agent had a grudge against him, but it's far less objectionable if the IRS acted on an anonymous report that the guy was evading taxes. If anything, it would be more objectionable if the IRS didn't act, and said "well, 20% of waiters are evaders too, so it would be unfair to investigate that guy because we don't investigate the rest".
So you're fine with private employers who do criminal background checks on black applicants but not white applicants? Or a third party entity that does background checks on all black people (and only black people) and then submits the results to employers whenever they apply for a job?
So you're fine with employers doing criminal background checks against Democrats but not Republicans? Or just investigating which one you are and then making the hiring decision directly based on it?
We're talking about the principle, not the law. The law can be changed. What constitutes a "protected class" isn't in the constitution.
The fact is that LACNIC, AFRINIC, etc. are actually contractually bound to investigate allegations like these and to follow up on them, no matter where they come from. They're doing the job they are supposed to. To bring racial qualities into a contractual dispute, ouch GP.
If a racist organization in South Africa started doing compliance checks on only black-owned businesses world-wide and providing the results to vendors, would you have companies everywhere discontinue doing business with black-owned businesses because they're "contractually bound to investigate allegations like these and to follow up on them, no matter where they come from"?
In the real example at hand, as long as the US government is happy with the de facto regulatory behavior of private individuals and organizations, they can maintain the status quo, perhaps at some cost of effort.
If this leads to justice at a low political and financial cost, then it is not necessarily a problem that defaults are selectively maintained, of course.
Because the Insurrection Act of 1807 says it is.
Less bluntly, the point of democracy is to give people peaceful means by which to affect their government. Storming the Capitol is not a peaceful means.
Their premise might be completely ridiculous, but the logic that follows is sound.
Then again, that line of thinking means the blame shifts from those doing the assaulting to those fabricating the accusations that justify it, which happens to be mostly users of precisely those platforms that got deleted.
Ironic how the folks involved are all "law and order" when it comes to most matters of policing but now that they're facing charges of sedition it's all "wait is it actually a crime if I believe that my righteous cause justifies violence?"
Can you name some examples of this where someone was fraudulently claiming IPs, but corresponding NICs refused to take action when provided evidence like in this case?
I guess they just can't police 100% of their space; it's just too big. So yeah, it does make sense to only act once you get complaints.
The fact that this time it was content-motivated changes nothing about how it's enforced.
> In principle, it would be like police enforcing certain traffic laws (say, about changing lanes or making a turn without signaling) much more strictly against black people than against white people.
What? How? It's more like the police responding to complaints, which happen to be much more frequent against black people because of racism. Which definitely is the case right now. The solution would be to not respond to complaints? Not really...
A complaint was raised, it was investigated, and they were found to be infringing the policy. There was no selective enforcement.
There's no "deplatforming" happening here. DDoS-Guard is using IPs from Belize (a country in Latin America) via a shell company there. That's against their contractual obligations. So the Latin American IP association (LACNIC) is simply reclaiming the fraudulently assigned IPs.
Some of these IPs were used by Parler.
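Anyone can check which registry an address block is delegated to: the RIRs publish registration data over RDAP, the JSON-over-HTTP successor to whois. A hedged sketch using the public rdap.org bootstrap redirector (a real service, though the exact response fields vary by registry, so treat the field names as illustrative):

```python
import json
import urllib.request


def rdap_url(ip: str) -> str:
    """Build a bootstrap RDAP query URL for an IP address.

    rdap.org redirects to the authoritative registry
    (LACNIC, ARIN, RIPE, etc.) for that address.
    """
    return f"https://rdap.org/ip/{ip}"


def lookup(ip: str) -> dict:
    """Fetch the RDAP record for an address (requires network access)."""
    with urllib.request.urlopen(rdap_url(ip)) as resp:
        return json.load(resp)


# Example (network required); field names vary by registry:
#   record = lookup("190.115.16.10")
#   record.get("name"), record.get("country")  # registered holder, region
```

This is roughly the kind of lookup a researcher like Guilmette would start from when checking whether a block's registered holder matches who is actually announcing and using it.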
Please read the article first ....
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
I hope the island of Tuvalu reclaims all those .tv domains then, not to mention .io.
Whilst true, it depends on a circular argument: that the domain name registrant has a significant business presence in Tuvalu by virtue of paying for the domain, and is thus entitled to a regional registration in the country in the first place (you could substitute Tokelau and a bunch of others that make a business of selling their domains to non-local entities; see it nicely visualized here).
It used to really raise some hackles at ICANN and InterNIC back in the day, but nowadays, after the gTLD floodgates have been opened, no one really cares anymore about the old regional rules.
I know that my country (at least nominally) requires that its TLD only be used by citizens, residents, or businesses with a "significant presence"; but that should be a choice each country makes.
Pre-ICANN, back in 1994 when Postel wrote RFC 1591, there were only five domains for global use (.com, .net, .org, .edu, .int), two special US-only domains (.gov, .mil), and the rest were ccTLDs. InterNIC was the primary registry in those days, and it also operated the gTLDs .com, .net, and .edu directly, so it had a commercial interest in the country-code delegations staying local. This perceived conflict was one of the reasons ICANN was eventually formed to take over general oversight, and the first topic it had to tackle - before even being properly funded! - was the emergence of countries that did sell their ccTLDs to other parties, especially as WIPO had some words of 'concern' over the bidding war around .tv and insisted that ccTLDs that operate as "open" should apply the same global rules as gTLDs, and some purchasers in similar sales proved "not to have local community support" and/or "not [to have] operated [the ccTLD] in the public interest".
Even after ICANN was formed, during the transition period some of the MOU documents that ccTLD registrars sent to ICANN contained a phrase clearly copied from an earlier era (the phrase bears small differences in spelling, but always says something like "meet the needs of the Local Internet Community"):
[ccTLD manager] supports the concept of industry self regulation in the internet and therefore is also supportive of ICANN as the organization that performs the IANA function and consequently assures that this concept can work for benefit of all Internet stakeholders. At the same time to preserve this concept we believe that the policy and structure of the [ccTLD], *as any ccTLD, should be operated at a local level, to meet the needs of the Local Internet Community* within the framework of the relevant local laws.
That being said - all of the above is by now only of historical interest, as it is accepted that it is OK to operate a ccTLD that accepts registrations from anywhere, should your country wish it, as long as global rules are adhered to (to prevent stuff like "cocacola.cx"), and there has been some clarification of the policies to make custom gTLDs a possibility.
Lots of rules are broken all the time and no one cares. Suddenly when people want to get rid of someone, rules are selectively enforced.
So no, it's not a Parler thing.
The message is clear: help them and we're coming for you.
The result: they have a lot of trouble getting online.
> The pending disruption for DDoS-Guard and Parler comes compliments of Ron Guilmette, a researcher who has made it something of a personal mission to de-platform conspiracy theorist and far-right groups.
While this may solve immediate problems (such as the planning of unlawful or otherwise socially harmful events), we should consider the long-term impacts of de-platforming. Does censorship, even if justified, fuel anger and distrust, potentially increasing social conflict in the long-term?
Is it possible to bring fringe groups back into the fold of peaceful civil discourse, or are we simply throwing up our hands and declaring that some percentage of the population must always have their speech regulated? (I suppose this question applies for both social groups and for individuals.)
Anger and distrust are sparked, stoked and fueled far more by personalities in media and politics who actively promote them than by de-platforming.
> Is it possible to bring fringe groups back into the fold of peaceful civil discourse
People organizing and advocating violence are not fringe groups. The paradox of tolerance applies here: You can't extend tolerance to those who advocate for intolerance without destroying tolerance in the process.
I don't understand how this is a paradox. You can't extend tolerance to people who commit arson and murder, or e.g. refuse to hire black people because they're black.
You can let people say whatever they want, because then you can refute their arguments and refuse to implement their proposals.
Indeed, you can let them say whatever they want. But that doesn't mean that anyone else should or should not. It's not like there aren't companies and service providers willing to host the content.
> refuse to implement their proposals
There are real practical limits to free speech.
A "proposal" that is an incitement to organized violence is of the same kind as yelling "fire" in a crowded theater.
That is fundamentally different than a proposal to eliminate the estate tax, provide a universal basic income, or organizing to peacefully protest using civil disobedience.
Isn't the service currently down?
> yelling "fire" in a crowded theater.
> A "proposal" that is an incitement to organized violence
But then you need not worry about what some website or service is doing because that is illegal and the police can arrest them for it.
Basically, a call to violence isn't information, it's action, like ordering a hit. The prohibited act is really the order, i.e. participation in a conspiracy to commit an act of violence, and the communication is only the evidence of it.
Compare mob boss who says "whack that guy" and goes to jail for it vs. news reporter who reports that mob boss ordered a hit, unintentionally informing a hit man who wasn't aware of the order. News reporter doesn't go to jail for that because their intent was only to communicate information, not enable the act of violence, even though they conveyed the same information and it had the same result.
They are working on getting it up and running under a Russian ISP, DDoS-Guard.
If sites like Gab and its ilk can figure it out, so can Parler.
> > yelling "fire" in a crowded theater.
From that article:
"[In Brandenburg v. Ohio] the Court held that inflammatory speech--and even speech advocating violence by members of the Ku Klux Klan--is protected under the First Amendment, unless the speech "is directed to inciting or producing imminent lawless action and is likely to incite or produce such action"."
> But then you need not worry about what some website or service is doing because that is illegal and the police can arrest them for it.
You don't need to worry, but the website is well within their rights to worry about what people are posting to their site and take action to limit that if they see fit.
The article this discussion is attached to is called "DDoS-Guard to forfeit internet space occupied by Parler".
> If sites like Gab and its ilk can figure it out, so cab Parler.
Parler was bigger than Gab. It remains to be seen whether the scale presents an issue.
> "[In Brandenburg v. Ohio] the Court held that inflammatory speech--and even speech advocating violence by members of the Ku Klux Klan--is protected under the First Amendment, unless the speech "is directed to inciting or producing imminent lawless action and is likely to incite or produce such action"."
But that isn't the case where the phrase "fire in a crowded theater" came from and that quote doesn't even appear to cover the phrase itself, since a panic is potentially dangerous but not illegal.
Here's some more on why the phrase should be discontinued:
Bottom line, it's too vague to mean anything because all it implies is that unprotected speech exists without specifying any meaningful boundaries for it, so people regularly quote it in support of arbitrary overreach.
> You don't need to worry, but the website is well within their rights to worry about what people are posting to their site and take action to limit that if they see fit.
But we were talking about tolerance.
Tolerance for incitement to violence? I don't personally support tolerance of that, which is my right, and which I personally feel is better for avoiding the spread of toxic cultures like those found on Parler. Tolerance of diverse and non-violence-provoking political opinions, including those I disagree with? That is something I totally support.
Parler clearly does tolerate incitement to violence, which is their right. Good for them for exercising it, but nobody has to support them, nor should anyone be prevented from supporting them. Likewise, if there are consequences (commercial or legal) for tolerating incitement to violence, Parler owns those too.
You are also free to tolerate it and provide services to support those who do.
That's the thing which is illegal because it isn't speech. It isn't information, it's action. See above.
> Parler clearly does, which is their right.
No, they don't. It violates their policy and they removed all that was reported to them.
It's a distinction without a difference. I don't want to host violence provoking content. I don't think others should either. You think they should, and you have your reasons. Everyone (and every organization) can make up their own minds.
Clearly, we disagree about which approach is better for society, and happily we do so cordially, which is just the sort of conversation we need more of.
> No, they don't. It violates their policy and they removed all that was reported to them.
That's false, and that it is false was upheld by the Federal judge today. Quoting https://www.cbsnews.com/news/amazon-parler-violent-content-w...
"This case is not about suppressing speech or stifling viewpoints," Amazon's lawyers stated in a court filing. "Instead, this case is about Parler's demonstrated unwillingness and inability to remove from the servers of Amazon Web Services ('AWS') content that threatens the public safety, such as by inciting and planning the rape, torture and assassination of named public officials and private citizens."
"Parler's refusal to moderate content resulted in a "steady increase" in violent content on the network, breaching Amazon's terms of service, AWS contended."
Still no. Nobody wants to be hosting it. The question is what happens when perfect moderation is impossible, which it is.
> That's false, and that it is false was upheld by the Federal judge today.
You're quoting Amazon's lawyers, not the judge.
Wanting to do something hard and being less than 100% successful is not the same thing as not wanting or trying to do it.
Show them how the conspiracy-theory building operates so they can recognize it and stop participating in it.
Doing something like this can be great, but it takes time and effort, meanwhile acts of terrorism are being committed now.
Most QAnoners don't really believe it, and if separated from their echo chambers, will deradicalize themselves.
Not only that, for many of them it's just fun - in exactly the same way that some people find supermarket tabloids entertaining. The audience for these conspiracies is quite similar (probably overlaps) with the audience of the National Enquirer.
I know a QAnoner in real life, and when I talked to him about it he denied easily provable facts, shifted goalposts, assumed I made arguments I didn't make, and has now taken to literally yelling over me so he can't hear the words I'm saying.
I never liked that study because people always read it like that.
When people receive new information, they try to make it consistent with what they already believe by making the smallest possible change to the existing belief system to make them consistent.
That could be as simple as just not believing you.
This can strengthen their belief in the existing thing because they just evaluated it against some potentially conflicting new information without rejecting it.
Just not believing you doesn't work for an article like that because it's reasoning rather than facts. They have to find a hole in the logic if they want to keep their existing beliefs. So they'll come up with something like, maybe that's how it works for other conspiracy theories, but this one is real so it doesn't apply.
But now the logic is in their head, so the next time the conspiracy theory has to be reframed to match a changing reality, they notice that what's happening is consistent with the logic. It makes them doubt.
And the more information and reasoning they're exposed to which is inconsistent with the conspiracy theory, the more they doubt. It's just like how the Big Lie works, but in reverse. You expose them to truth and logic over and over until they can no longer make the conspiracy theory consistent with it.
> Doing something like this can be great, but it takes time and effort, meanwhile acts of terrorism are being committed now.
"Acts of terrorism" aren't speech so as soon as they go there they go to jail. I mean they were planning it openly on Facebook, it was kind of a discredit to law enforcement that they weren't arrested for the conspiracy to begin with.
> Most qanoners dont really believe it, and if separated from their echo chambers, will deradicalize themselves
But that's why we need free speech, right? To avoid echo chambers.
Even if private censorship is allowed, that doesn't make it a good idea if it causes people to leave for some Voat-like cesspool where they won't encounter ordinary people anymore.
Absent access to "ordinary people" to recruit to violent causes, the inhabitants of such a "cesspool" will likely bore of their own conversation.
If, as you state, any of them are engaged in illegal provocation to violence, they will be easier to find there, since they lack the shield generated by the noise of "ordinary" conversation.
Besides, nobody is going to be "helped" away from violent provocation by casually interacting with them on social media. The only thing that can help such people is real, in-person, trusted interaction, like at church, with a community help group, or with a therapist.
At best social media can be used to identify candidates to be offered an off-ramp from the road they are on, but that needs to be done by people who do de-radicalization as a full time job or mission, not casual social media acquaintances. Very few people are dedicated to that kind of work today.
So then there is nothing at all to fear from Parler and all of the efforts to keep it offline are misguided?
> If, as you state, any of them are engaged in illegal provocation to violence, they will be easier to find there, since they lack the shield generated by the noise of "ordinary" conversation.
Even in a garbage fire like Voat, the vast majority of posts aren't illegal. You need the ordinary people there because they're the ones willing to report violence to the police when they see it.
> Besides, nobody is going to be "helped" away from violent provocation by interacting with them on social media. The only thing that can help such people is real in-person and community interaction, like at church
Something something COVID.
If you fear Parler being online, you are being tricked by the media.
> "Acts of terrorism" aren't speech so as soon as they go there they go to jail.
preventing terrorism is generally considered better than just arresting people after it happened
> But that's why we need free speech, right? To avoid echo chambers.
Unless you are suggesting we ban all moderation, no. Echo chambers form from people selecting only information sources they agree with, and from those sources pandering to them.
> Even if private censorship is allowed, that doesn't make it a good idea if it causes people to leave for some Voat-like cesspool where they won't encounter ordinary people anymore.
Censorship at least gets rid of the non-true believers. The reason stuff like QAnon spread so quickly is that they used sites like Facebook and Twitter and didn't stay in cesspools like 8chan. We can't get everyone, but most is better than none.
Even people with psychological conditions have the capacity to modify their behavior.
> preventing terrorism is generally considered better than just arresting people after it happened
That premise came out of 9/11 when the terrorists didn't care if they died. In ordinary cases such as these you don't need a precrime unit because catching them after the fact provides a deterrent that prevents them from doing it to begin with.
But also, the police already do that. They send undercover officers into extremist groups and get warrants to conduct surveillance on individuals suspected of plotting violence. Then they get arrested for plotting violence, not for speech.
> Unless you are suggesting we ban all moderation, no echo chambers form due to selecting information sources you only agree with and information sources pandering
The largest sites could exclude only that which is illegal.
> censorship at least gets rid of the non true believers
Until you actually implement it and end up censoring a bunch of stuff which is true, preventing some other lie from being corrected. And then people find out about that happening and lose faith in the censors, start looking for "alternative" media that doesn't do that, and get sucked back into conspiracy theory land.
> the reason stuff like qanon spread so quickly is because they used sites like facebook
The reason it spread so quickly is that Facebook promoted it, because their algorithms reward controversy.
And that's the real problem. If you carry on promoting "engagement" there will just be some new conspiracy theory, which you don't even notice until there are already a million people sucked into it and somebody gets killed. Stop doing that and there is nothing to censor.
This is incorrect.
Not tolerating intolerance is objective.
It's sad that so many people can't wrap their heads around this incredibly simple paradox.
If you don't tolerate "intolerance", you are intolerant, it's as simple as that. And if you make yourself a clause to exclude some beliefs from acceptance and still deem yourself tolerant, you are a hypocrite.
I suspect actually tolerant people wouldn't be invoking the paradox.
There are very few ideas that can be invented and spread by a media personality without that idea already having a foothold. Unless you truly believe that 40% of the voting population are dumb mindless automatons. It's pretty condescending to anyone who considers themselves center right to be generalized like that.
Due to the realities of the two party system most people are going to "fall in line" with the party that is closest to their fundamental principles. For center right this means small government, lower taxes, merit based incentives, law enforcement, etc. You're asking a lot to ask a conservative to switch sides when the other side is so far from these fundamentals.
The same goes for Republicans trying to convert Democrats. Hard-line issues such as anti-abortion and "taxation is theft" are too much for center-left people to handle. They're going to fall in line with whatever the Democrats are into at the time in order to make sure their fundamentals are upheld.
Trump was a symptom. The seeds were already planted. Once that weed began to grow it was inconceivable to the left and center left that anyone who called themselves human could look past the rhetoric and the character defects in order to support their deeper fundamental principles. The left began to see these people as less than human and showed their opinion with every newsreel, tweet, and post. It was, understandably, their moral duty. The only choice for someone on the right is to abandon their principles or dig deeper. He can't be THAT bad right? The conservatives are locked in and the other side isn't offering any olive branches to non-humans.
Both sides are then locked in a moral dilemma and the media is just fanning the flames. And then to top it all off during the reelection cycle we're forced to isolate and have our window to the world altered by bias confirming algorithms. There's no hope for unification under those conditions.
So many differences can be overcome by two people sitting face to face and seeing each other as two humans with different ideas experiencing the exact same stimuli. I just hope we can get back there some day.
It's not really "fringe" groups that are the issue here; it's "extreme" groups - no-one is calling for the flat earthers to be deplatformed, for example.
> Does censorship, even if justified, fuel anger and distrust
Probably, but I'd guess only among people who are already inclined that way.
> the population must always have their speech regulated
If you put "some of" after "have", it's the situation we already have.
Fringe groups have options they've always had: colo and common-carrier lines into their own server/racks/datacenter. Only thing missing is talent, which is nobody's problem but theirs.
Please elaborate on why you think this is not censorship, and what the distinction is. What do you think this is? These are not leading questions; they're genuine, because I don't understand your point the way you're assuming I do.
The First Amendment to the US Constitution applies to the government.
Free speech is the principle that suppressing allegedly false information is more dangerous than not suppressing it: if the information is actually false you can refute it, but if the information is actually true, then suppression allows powerful actors to cover up atrocities and lie with impunity.
Censorship is the suppression or prohibition of speech. Private censorship is not a violation of the First Amendment but it's still censorship.
It has nothing to do with who does it and everything to do with whether the idea is being suppressed in practice.
No - it’s literally the definition of the word censorship: “suppression or prohibition of speech”. It really sounds like you’re trying hard to redefine the word to mean what you want to say, but maybe you should just pick a different word?
Free speech is about being able to express ideas, and these can be suppressed by private entities, communities, or any number of non-government entities like a church.
Should TV Guide have to publish porn because I want it to?
It's possible de-platforming these domestic groups will just further radicalize the existing members. But it should inhibit their growth.
I think a dispassionate analysis would dispel the idea that it’s not effective, and most such arguments are fueled more by emotion than objective facts.
It's just happening against people the reddit admins hate and "good hatred" doesn't count.
“Capitol Hill Attack Video Shows Rioters Pray With Lifted Hands After Violating Senate Chamber”
“How QAnon uses religion to lure unsuspecting Christians”
“How White Evangelical Christians Fused With Trump Extremism”
Edit: Changed “American right-wing and Christian terrorists” to “American right-wing terrorists and Christian terrorists” to remove the ambiguity on whether I mean “American right-wing” as an adjective or a noun (I meant it as an adjective to the noun terrorist.)
I'm open to hearing evidence otherwise.
ISIS being a foreign terrorist group, the U.S. can criminalise membership. Extremists being a domestic group, mere association can’t be criminalised [EDIT: is difficult to criminalise]. Only individual actions can be criminally pursued.
Other than that legal distinction, the two groups (right-wing extremists, not all right wing Americans, and bona fide ISIS members, not everyone in ISIS-occupied territory) are comparable. They spread their misinformation similarly. And could be expected to be similarly curtailed by deplatforming.
Senator Sheldon Whitehouse suggested RICO’ing the militia groups today. The members could be criminally liable even if individual actions can’t be proven (similar to how the mob was taken down.)
The radicalized militias like the Oath Keepers, Three Percenters, Vanguard, Proud Boys, etc. are serious threats.
If I am following this correctly, the American right wing (Republicans) is equivalent to a terrorist organization that beheads people, among other war crimes.
If I may make a recommendation: don't treat half your country as if they are terrorists.
It’s insane to say “because ‘terrorists’ use Twitter, let’s shut down Twitter”; instead we remove them from the platform.
Parler was working with the FBI and was kicking people who broke its terms off the platform.
It makes no sense what’s currently happening.
I'd say yes, but not by shifting the whole movement away from extremism, but by slowly siphoning individuals out of those groups until only a small minority of crazy extremists is left.
The best counter to a massive disinformation campaign is information on the same massive scale. Sadly, it often seems easier to just silence lies instead of charging straight at them.
How many entities did they end up deplatforming? The outgoing president, and a few right-wing forums? Meanwhile all the Republican congressmen and PACs are untouched. If they really want to shut down the opposition, they seem to be doing a pretty bad job.
What's important is being as open as possible and having open structures.
All of these are education problems, not free speech problems. Speech is a signal, not something to be repressed.
To me, Parler has equivalencies:
- AWS linked to calls for violence that were made on Parler
- The platform was actively suppressing non-conservative viewpoints
I would recommend looking at r/ParlerWatch on Reddit
On one hand you had ISIS posting beheading videos. On the other you had grandmas posting selfies in the rotunda.
Yeah, there are some nutters out there. But to claim that everyone caught up in the tech purges is somehow equivalent to ISIS seems a bit rich from my perspective. Even if you accept that some of these people are genuine domestic terrorists, reasonable people should be able to distinguish between rogue elements and ISIS, an organized terror state.
This is a bit like the claim that everyone who entered the Capitol Building was a terrorist. These are wide generalizations that only serve to confirm partisan biases. For me the use of these fallacious comparisons is illustrative of the problem at hand, toxic divisiveness.
Absolute free speech would have to include ISIS, a group that would (and has) killed people for what they say.
If you admit that absolute free speech leads to illogical and impractical consequences, then the only remaining question is where to draw the line.
Reductio ad absurdum is a rhetorical tool, not a logical fallacy. Its goal is to show that your argument is absurd, because the absurd statement follows from the argument you are making.
>I find myself wondering where these people were in 2014 when all the major social media platforms systematically removed ISIS accounts...
Perhaps these people didn't object for the reasons I described above. ISIS is nothing like the Capitol Hill demonstrators.
Sitting in New York, nutters who came within yards of hanging the Vice President, Speaker of the House and potentially several Senators are a far more clear and present danger than mullahs in the Middle East.
Furthermore, if we accept the narrative of the WoT and that there was a legitimate threat to politicians, there's still a wide swath of difference between the two. ISIS is a real threat endangering, killing, maiming and torturing an entire region. They carried out indiscriminate attacks on civilians across the world. A few nutters threatening to lynch some politicians isn't that.
Conflating the two ideas is an example of how toxic rhetoric has become. Yeah, you can turn on MSNBC, CNN or even the BBC right now for validation of those narratives. You could also turn to those same media outlets for justification for the war in Iraq or crackdowns on civil liberties during that same period. The new war on domestic terrorism (read dissent from establishment politics) is similarly worth a critical look. Especially if grandmas with thermoses are to be lumped in with dangerous extremists.
Successfully prosecuting rogue actors for plotting to use violence against politicians isn't the same as establishing equivalence with ISIS. Nor does it allow one to generalize about the content of the entire group of protesters.
There were some protesters who expelled rogue actors while accusing them of being "antifa agitators". It wouldn't be fair to cherry pick the worst actions of the group and generalize. The comparison to ISIS is an extra leap that I hope needs no further elaboration.
Sure, we could argue that "terrorist" is the wrong word to use here, but if we used another word, such as "criminals", I would guess that you would still make pretty much the same argument.
And talking about "toxic divisiveness" while defending people trying to overthrow an election? Are you even listening to yourself?
I notice many comments will start with, "Although I dislike Trump..."
The fact that comments need to begin with such a disclaimer is illustrative of the atmosphere we find ourselves in. A comment should stand on its own. Disclaimers shouldn't be necessary.
Yes, the atmosphere we find ourselves in is horrible. But I would argue that the "post-truth" strategy used by Trump & friends is the main reason for this.
When it is so common to be intellectually dishonest, those disclaimers are needed. And with how easy it is to misunderstand people on the internet, I think those disclaimers are something positive; people really should add "/s" when being sarcastic, for example.
How would you suggest one should handle dishonest liars on the internet? Personally I think one should call them out on their lies, but every time that is done, someone (in this case, you) comes along and calls it "partisan hyperbole"...
From this point it isn't hard to understand how high emotions seem to be running. It isn't uncommon for otherwise rational people to project irrational ideas onto a discussion. This is especially common when emotions are running high. The tribal instinct seems to be one of our most base emotions.
The distinction is that I don't presume malice. Instead, as trite as it may sound, I try to work from a place of compassion.
Simply blaming the other side (RE: Trump & Friends) appears to be part of the problem here. The anti-Trump partisans appear just as emotionally irrational in my view.
Perhaps I am privileged in that I don't identify with either side, or approve of their policies. I will say that it is hard not to have sympathy when I see someone irrationally attacked.
The mob mentality is palpable. I think it was George Carlin who made a famous observation about the intelligence of the 'average' person, then concluded that half of the people are somewhere below that.
But as someone who values democracy, I have to be honest about there being a huge difference between the two American parties, especially since Trump (but arguably since Nixon).
Sure, I agree that there are lots of emotions everywhere, and people are attacking Trump & followers for both valid and invalid reasons, but using the invalid ones to smooth over the valid ones is, in my mind, at least as bad as the irrational attacks.
To go back to an earlier post in this comment chain, this is the reason why I think starting a post with "I dislike Trump, but..." is a good thing, so that one can criticize the irrational attacks without smoothing over the rational ones.
Also, I don't think intelligence has much to do with this, but rather:
People don't have to be less intelligent to be victims of media manipulation. Carlin, the American comedian, was being glib.
I'm something of a news junkie. I observe the drama in the same way others might watch a daytime serial. From this view, I have to admire the craftsmanship that goes into spinning and developing media narratives, the way they expertly play upon the heart-strings of the masses. Often times the viewers are all too willing to indulge their outrage. Putting aside the concept of intelligence entirely, the masses who have little time or curiosity to observe the news in-depth, are poorly matched against the master propagandists. They've had decades to hone their craft.
As you say, you're concerned about the status of democracy in the US. Yet here comes the new narrative of domestic terrorism, buoyed by the same fears expressed in this thread. The masses are outraged. The image of Trump as a Hitler-like figure has been burned into their minds by repeated invocations of Godwin's law. After repeating this ad nauseam, Biden now proclaims that Trump is like Goebbels,
“You say the lie long enough, keep repeating it, repeating it, repeating it, it becomes common knowledge”
Even I have been called a Nazi for challenging what I see as irrational. Posters here have insisted that I check my privilege in regards to race. All of this without knowing my identity, race or experiences.
While I share your concern about civil society devolving, I see it as coming as a reaction to Trumpism. The intolerance is easy to see from where I stand, and I'm no fan of Trump. The comparison to ISIS is part and parcel of this intolerance. I find it somewhat ironic that you appeal to valuing democracy (I read that as civil society) while indulging the assumptions and fears I've outlined above.
I know that you are never ever supposed to compare someone with Nazis, and if you do, you automatically lose. But isn't there a point where someone acts like a Nazi to the degree that it becomes dishonest not to call them a Nazi?
When Trump repeatedly lies about the election being stolen, is it really that far-fetched to make that comparison?
I can fully understand that you dislike that people on the internet compare Trump supporters to ISIS supporters, but using that to make the "both sides are the same" argument is really far-fetched.
America has some serious problems, and its democracy is really flawed, but it is still a democracy. If you want to mend the country, you can't have half the country living in an alternate reality. Truth isn't something you can compromise away just to avoid being "partisan".
Since you used the words watch and viewers in relation to news, I can recommend:
From that point, you can't argue using evidence. Widespread delusions are more powerful... 'Truth' or at least the will to perceive and the perception of it has become highly partisan.
In regards to censorious intolerance, both sides are not the same. We are supposed to value democracy, (tyranny of the masses?) yet we can't trust individuals to consume or digest information without hand-holding from partisan fact checkers. If we are concerned about democracy, this should be problematic. These are themes of technocracy, not democracy.
It isn't important to me to establish equivalence. That's not what I am working towards here. I'm attempting to illustrate what I see as mass hysteria, widely held delusions. The Hitler comparisons, a common form of hyperbole, are part of that.
And there was definitely suppression of non-liberal viewpoints on many sites.
I don't agree with most conservative views, but I am not so tribal and blind to irony that I think it's ok to apply different standards to one side vs another. "But they are different!" - says the blindly loyal team player.
Your argument is extremely disingenuous. You can read through AWS’s complaint against Parler and see some of the shocking content they refused to moderate.
That completely depends.
1. If the stated goal of the platform is onboarding for terrorist organizations, then that's a completely different situation.
2. If the platform is open, but known to be used as a main recruiting platform for such an organization, then I also don't think it should be shut down.
I hold the opinion that the only successful way to navigate to stability is education, open discourse, and constant discourse. The more people in society who ignore other thoughts because they are convinced they are unequivocally correct about what's right or wrong (minority or majority), the more unstable society is going to be, as they are driven to view everyone who thinks differently as the problem instead of searching for better answers.
So if you find things someone or a company has done to justify deplatforming them, it's not deplatforming when they get deplatformed?
Maybe try reading "The Dictator's Handbook"; it might cover what you're asking.
Someone used the mechanisms within LACNIC to deplatform.
> Deplatforming would be LACNIC terminating the addresses
No one is saying LACNIC deplatformed anyone, it was Ron Guilmette.
I guess it's up to you how you want to interpret reality.
Like I suggested, read the book. It gets good reviews in its own right, even if you don't think the processes in the book would apply here.
If only we applied this same type of pressure and energy to politicians who enrich themselves at public expense or drawing down from our endless (but very profitable) wars.
Our ruling class does an excellent job keeping us fighting against each other so we don't focus on them.
I think this is going to lead to a faster restructuring of internet routing. I was going to say “antifragility”, but am I correct that this term has also been co-opted by specifically far-right extremist groups, who also think these attacks make them naturally select into more resilient forces?
It's nowhere near clear-cut, and I'd like to hear some opinions on this.
> It's reasonable to say that Parler should use a mix of automated tools and human judgment to block unlawful speech and/or hate speech - but it's NOT the reason that Appl/Goog have removed it. Both companies have made it clear in their public communications that Parler must install an effective hate-speech filter to qualify for inclusion in their stores.
> I hope we can all agree that "effective hate-speech filter" is a nonexistent technology, and that therefore none of the existing services whose apps appear in either app store have such a technology in place.
It started off as a small irrelevant site, as they all do.
Twitter and Facebook then started labeling content on their sites with "fact check" labels. This is naturally fraught with scandal, because doing it at scale involves making a lot of mistakes, and then you can find arbitrarily many examples of them authoritatively declaring something false or "disputed" when it wasn't. There was a strong perception on the right that these tags were being attached to things on a partisan basis.
A number of people didn't like this and started looking to alternatives, so Parler started gaining users.
Then those jackasses stormed the capitol. Note that this was planned, openly, predominantly on Facebook.
The response is Something Must Be Done, so there is a mass banning of accounts, including the then-President of the United States. Doing this was even more controversial, there were questions about whether all of them deserved it, and a lot of people -- the vast majority them people who hadn't been banned -- started moving to Parler in protest. They hit ten million users.
This was now a moral panic because Parler is owned by right-leaning people instead of left-leaning people and the whole deplatforming system is based on the targets not having anywhere to go. So then regardless of who they really are, they now have to be demonized in order to justify their destruction, because they're getting too popular. Note that popularity and extremism are anti-correlated because extremists are by definition at the fringe, so the more people you have, the more diluted the extremists get. The logic was backwards if the real problem was what they claimed.
But people find a list of malicious posts on Parler and use it to paint the entire site as that. These exist everywhere, so of course they found them. Made all the easier by the fact that Parler was undergoing rapid growth, and at that specific moment in time its moderation system was overwhelmed. They also point to moderation features designed to promote stability and exclude extremism by limiting what new users can do, and claim that these do the opposite, under the unproven assumption that the site is already dominated by extremists (in which case the same features would be excluding non-extremists).
I'm sure 80% of Parler is crap, because 80% of everything is crap. But then how are we distinguishing it from Twitter? All of the evidence used to "prove" that they weren't moderating when they claimed to be could equally be used to prove the same thing of their competitors. It's an isolated demand for rigor.
It's not. And it would be silly if it were, since corporations are exercises of state power ab initio; there is nothing separate to merge.
Parler didn’t run a particularly good service. Some of their technical decisions are baffling and/or painful. Consequently, it was easy to remove them from view.
Donald Trump made it way too easy to de-platform him. Hell, if he had used his Twitter account to spread facts and science, it would have been much harder. But over the last six weeks, his account went from vaguely funny to deeply concerning.
The US narrowly avoided devolving into a permanent fascist state. Now people who at least support human rights and democracy are in power.
This is a huge exaggeration. As looney as things got in early January, I am still certain the FBI or if necessary US military would have removed Trump from office if he refused to leave office or attempted to declare martial law. They are sworn to defend the constitution, not the sitting President.
America is not truly defined by its right-or-left fringe elements, despite the clickbait you see on the web and in the 24/7 TV news cycle. 90% of Americans don’t spew vitriol on Twitter or hate the political opposition. Most people get along. But old-fashioned democracy, compromise and pragmatic centrism doesn’t get clicks or sell ads.
But it absolutely does not mean that other people have to like you, listen to you, sell goods to you, or allow you to use their platform.
Imagine a law waiving criminal and civil liability for lynch mobs, and then government claiming due process isn't violated because it isn't a "government" action, and due process doesn't apply to individuals.
"Get rid of § 230" is a backwards way to solve the problem that they have. They don't actually want publishers to be liable for the content on their sites (which would, if anything, lead to greatly increased censorship and filtering). They want to STOP companies from censoring what the companies deem misinformation or otherwise offensive content.
In other words, they want the law to be "if you control a channel for speech, you are not permitted to censor or filter out what you view as misinformation, incitements to violence, or otherwise offensive content."
Again, this has no support in the free speech rights of the first amendment. In fact, it could be viewed as contrary to free speech, since the government would be forcing people and corporations to carry content on their websites that those people or corporations do not agree with and/or find offensive.
>would be forcing people and corporations to carry content on their websites that those people or corporations do not agree with
If you choose to build/host a public square, you must provide equal opportunity and access to it. Limiting access thereafter would involve police for criminal acts, or court / restraining orders for civil. Both are performed by govt, within the government's constitutional limits. Due process.
If you want to control content and be a publisher, then fine, but now we're talking about assumed liability for content, and an entirely different relationship with content creators. Just as the owners of a private venue are responsible for the actions of entertainers they might hire.
Because that sounds like an overly-complex theory, and doesn't explain any content on Parler that was posted more than two weeks ago, when the idea of getting Parler off the Internet actually became plausible.
Correct The Record used this exact technique to get posts, pages, and groups banned across Facebook and Reddit during the 2016 election: https://www.latimes.com/politics/la-na-clinton-digital-troll...
> The campaign has been given credit by Sanders loyalists, however, for all manner of things that it has had nothing to do with, including posting pornography on pro-Sanders Facebook pages which resulted in them being temporarily taken down. (The pages went down as a result of a Facebook software glitch.)
This "glitch" dismissal given in the article is open to scrutiny when WikiLeaks showed us DKIM-verified messages indicating collaboration between Hillary campaign manager John Podesta and Facebook COO Sheryl Sandberg: http://www.departmentofmemes.com/news/wikileaks-proves-zucke...
I.e. "if that speech became actions would you want those actions to be universally accepted [where you live]" or even just "applicable to everyone indiscriminately including you and your family"?
In that case "line up the firing squads" "Pence goes first" is a clear-cut scenario.
Parler absolutely endorsed and supported all that, and how anyone can defend them at this point is frankly baffling. Unless all those people are OK taking Pence's place in front of the firing squads? Even if they are, it is not sustainable to have that as a universal law, because at some point we will run out of people to make firing squads of (they will all be shot).
It’s incredibly complicated and I have no idea what side I’m on now either. Hell, I don’t even know whether we can call this a win or a loss for free speech.
It should be pretty obvious that this is merely a mitigating factor, and could never outweigh the initial damage done.
If you ever want to question an election result in the future you should be worried sick.
On the other hand I hate all fascists too and they are violent terrorists so there is no point in pondering philosophy for their sake.
Setting the threshold for "problem" at intent to cause physical harm to others seems a pretty low barrier?
Any barking is ok, but swift actions should be taken immediately after the very first bite.
Stepping away from calls to murder people for a moment.
You can deny climate change all you want, but once you put your words into actions that destroy the planet, your speech can now be treated as actions.
Well that's exactly the question: Is free speech more societally useful than disallowing the kinds of speech I disagree with?
The philosophers of the enlightenment didn’t foresee that freedom of speech would be exploited by racists for the purposes of promoting hate in a tolerant society. Freedom of speech is an outdated concept.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Among other pertinent points.
Or, if you're on the other side of the ideological spectrum, an "ABORTION IS MURDER" sign with pictures of bloody fetuses.
Rupert Murdoch won't put Chomsky on prime-time Fox News? Fucking fascist.
Won't force Geology students to listen to flat earthers? Fucking fascist!
Like deplatforming, the goal is to deprive people of their right to speech. That’s fundamentally anti-democracy.
It’s sad that Antifa became the thing they hate; if the big internet providers follow suit, it’ll be a disaster.
I see no evidence of that being true in any substantive way. I’d be surprised if there aren't some people somewhere who fit that description and identify with the antifascist movement, which is a big, diverse, decentralized movement. But I don't see it happening in any significant degree.
> Like deplatforming, the goal is to deprive people of their right to speech.
No, it's to get other people to exercise their right to free speech to not amplify harmful voices.
This is according to the anti-defamation league. The Southern Poverty Law Center (SPLC) has also condemned Antifa in the past:
The people you are describing probably are more closely aligned, ideologically with SPLC and ADL (unless they spend their evenings doxing neo-nazis, and shopping for equipment for their next violent counterprotest).
I have been donating to and referring people to non-violent anti-fascist groups for years. That’s very different than supporting militant extremists.
(Trump designated Antifa a terrorist organization, and I refuse to use that word to describe them.)
> No, it’s to get other people to exercise their right to free speech to not amplify harmful voices.
So, compelling speech is better than silencing people? Where will it end? Surely the criterion isn’t Republican vs. Democrat. Let’s go with “supports violence”.
Violent far left militants definitely exist, and are “harmful”. Shall we silence them? The ADL and SPLC condemn them, but are also clearly sympathetic. Shall we silence them too?
If not, perhaps we should stop them from supporting harmful protests. Antifa could use this map to physically attack lawful groups on the right:
Perhaps it should be taken down, at least? I donated money to help create the map. Should I be put on a list and prevented from getting a job?
It’s a freedom of speech issue: the right to choose what speech one will expend one’s resources promoting.
Is it a good idea for Facebook, Amazon, and random researchers on the web to hand out internet death sentences (as judge, jury, and executioner), without even any kind of due process?
This is setting a potentially dangerous precedent going forward.
Having the FBI take the site down would be one thing, but having Facebook or Amazon or other sites do it without any kind of due process just seems wrong. What if they make a mistake? What if they act on their biases?
This is the first time I have seen Belize referred to as a Latin American country; although I just checked its demographics and a little more than half of its population speaks Spanish.
> In a detailed report released today (PDF), AFRNIC said its investigation revealed more than 2.3 million IPv4 addresses were “without any lawful authority, misappropriated from AFRINIC’s pool of resources and attributed to organizations without any justification.”
I'm not sure this qualifies as a "legal product". Seems to be lacking legality.
It’s not all free speech or none. I don’t see Andy protesting that it is a crime to yell “Fire!” in a crowded theater when there isn’t one. I also don’t see Andy protesting when that teenage girl was sent to prison for verbally abusing her boyfriend and verbally encouraging him to commit suicide - which he eventually did. Where are his protests? Where are the guardians of free speech here?
The thing is, there is speech for the purpose of communicating, speech for the purpose of convincing, and speech for the purpose of manipulating. The first two categories are always protected and should be under most conditions. The last has heavy restrictions. You are not allowed to use the platform someone else built to manipulate the audience they built if they do not want you to. Period.
Trump’s endless lies aren’t speech. They are a blatant attempt at manipulation with a reckless disregard for the truth. That needs to be restricted urgently. We’ve seen too many minds manipulated by this category of non-speech.
That’s also the entire problem with the “conservatives”, especially under Trump. They abused platforms to manipulate people into following them because they didn’t actually have persuasive points. They abandoned their commitments to fidelity, fiscal responsibility, and morality. All they had were lies that they repeated until they became indistinguishable from the truth.
This isn’t about speech. It never was. It is about manipulation and one particular political segment trying to use our core values against us.
Stop trying to muddy the waters. This whole argument is a straw man.
Stop trying to pretend this was ever about speech.
I don't think it's about being in the opposite political party. Facebook's users are more conservative than liberal. Twitter is constant screaming of all political affiliations. YouTube is famous for being the place where our relatives fall down the right-wing conspiracy rabbit hole. The thing that paints a target on your back is your platform being a landing pad for the people banned from other social networks, and basically nothing else. I don't care how conservative you are, Voat was a cesspool. And it wasn't because the people on it were conservative, but because it was where people too vulgar for Reddit went. /r/popular might bleed dark blue, but Reddit hosts plenty of active conservative communities that aren't /r/TD.
The argument "we have law enforcement are courts for that" is disingenuous as best unless you want the gov't to start employing hundreds of thousands of internet cops tied into the moderation tools of every platform. Without platform moderation there are literally zero practical consequences for online death threats, harassment, hate speech. It just doesn't happen. There is no one to call. Your local police won't take you seriously and have zero power to do anything even if they did. In spirit I actually agree with you but saying "throw it to the legal system" without any plan to make it work just feels like you don't actually want any moderation and know that this would accomplish that.
These are obviously left wing groups, but they seem to advocate a lot more than being just the anti- to fascism.
Most mainstream conservatives operate accounts on Twitter without an issue and a fair amount of libertarian and conservative-identifying individuals have used Twitter for years without being banned for their views. Framing this as a problem with political viewpoints doesn't do much besides serve a false narrative.
If you read Twitter's rationale for banning Donald Trump it didn't have much to do with his viewpoint. It had everything to do with implying and promoting violent acts. Putting deplatforming in terms of "censorship of political views" is a seductive argument but it falls apart when you recognize the content of those viewpoints have less to do with a matter of policy opinion and more to do with an unhinged ideology based around waging civil war against your opponents.