Amazon’s main argument seems to be that they’ve been asking Parler to remove overt calls to violence since mid-to-late November, but Parler has refused, while Twitter does moderate this sort of thing, especially when asked to. I believe Twitter has been deleting #KillMikePence Tweets that are actually calling for violence, but hasn’t been moderating Tweets that aren’t overt threats, like “OMG #KillMikePence was trending on Twitter” or “Capitol rioters were chanting #KillMikePence”.
If that’s all true, then to me it seems reasonable. I’d liken it to DMCA takedowns - if you respond to them in a timely manner, and your site’s primary purpose isn’t sharing copyrighted content, you’re fine. You’re in trouble, though, if you refuse to take down the copyrighted content despite receiving DMCA takedown notices.
DMCA takedowns are defined by the legal system. The system you are describing here is defined entirely by these companies. They are within their rights to do so, but we ought to ask whether we should introduce a legal structure like the DMCA to standardize it. This would cut both ways: if they de-platformed people who were complying with the standard, they would be in the wrong. As it is right now, AWS can just arbitrarily decide the standard. The only incentive they have to make it a reasonable one is public outcry, which is not the kind of backstop you want when fundamental freedoms could be harmed.
Yeah, at the moment companies are pretty free to refuse service like this based on their own policies, but I think there’s a decent argument for a legal framework. The biggest tech companies really are a core place where communication happens, so formalizing what types of content should and shouldn’t be moderated could be a good idea.
This would be very contentious, though. Many, many Americans would be strongly against the government telling private companies how to moderate speech on their platforms. It may even be unconstitutional - wouldn’t a formal, legal framework amount to the government restricting free speech?
It really depends on the details. We already have very strong legal standards around what speech is protected under the 1st Amendment. If laws governing corporate moderation of speech effectively extended that framework into the private sector, it could be a fruitful debate - those who most support corporate agency are also the most likely to support free-speech maximalism. (At least, that was probably the case until recently; now I think a lot of people are unfortunately changing their minds about how damaging large monopolistic tech companies are for society, because they’re seeing those companies flex their unprecedented power in a direction they agree with.)
I can immediately find BLM and antifa calls for violence like this on Twitter and FB.
I can also report them, and they might be removed fairly quickly. How? Remember all those articles about the practically slave-labor conditions of content reviewers developing psychological problems, which everybody was horrified by? And how we said we needed to stop doing that? Yeah, Parler doesn’t have those slave camps.
And by the way, do you know how long it took for FB and Twitter to get to semi-decent moderation?
FTA:
“During one of the calls, Parler’s CEO reported that Parler had a backlog of 26,000 reports of content that violated its community standards and remained on its service.”
Parler might have refused on some of them, but the backlog above is obviously a big part of the issue. Plus, if some people are willing to justify Twitter leaving up the “hang Mike Pence” stuff, then surely there will be some disagreement about what exactly should be removed on Parler based on context, yes?