The chilling effect is real. Even if service is reinstated, the precedent is now that major platforms can arbitrarily censor your news stories for weeks, shut down your servers, and remove your apps from their app stores. Why would anyone build anything remotely controversial on these platforms?
Oh please. Laws exist, platforms must ensure their customers follow the laws, and when a customer refuses to remove illegal content or implement a moderation policy, the platform has the choice to accept the liability or remove the customer. Proof is right here: with some moderation changes, they let Parler back in.
When will Google and Apple start moderating Chrome and Safari? Those are their apps. Therefore they should be held accountable for all the unmoderated content you can find on them, right?
Or are some apps/publishers more equal than others? If Brave becomes more popular, will they also get removed for "unmoderated content"?
Some apps and publishers have always been more equal than others.
Look in the source code for both Firefox and Chromium, and you'll find sections that special-case high-value extension binaries: the ones where, if they aren't configured correctly when the browser is installed, the user will assume the browser is broken, not the extension.
Similarly, Microsoft decompiled some of its own APIs when developing Windows XP to account for circumstances where third-party developers (mostly Adobe) had squeezed clock cycles out of their API calls by assuming certain memory footprints in clear violation of the API spec. But if Photoshop didn't work on Windows XP, it was XP that was broken, not Photoshop.
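That kind of shim is easy to sketch. Here is a toy illustration (in Python purely for readability; real Windows shims are native code, and the table and names below are invented for illustration, not actual Windows internals):

```python
# Hypothetical app-compat shim table: callers known to depend on old,
# accidental behavior get routed to a quirk-preserving code path.
# All names here are invented for illustration.
LEGACY_QUIRKS = {
    "photoshop.exe": {"assume_heap_layout"},
}

def api_behavior(caller: str) -> str:
    """Choose the code path the platform uses for this caller."""
    if "assume_heap_layout" in LEGACY_QUIRKS.get(caller.lower(), set()):
        return "legacy"    # emulate the old allocator's observable layout
    return "standard"      # spec-conformant path for everyone else

# api_behavior("Photoshop.exe") -> "legacy"
# api_behavior("notepad.exe")   -> "standard"
```

The point is only that the platform, not the app, absorbs the cost of the app's spec violation.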
There has never, to my knowledge, been a point in the history of the computer industry where "all software is created equal."
Laws exist? How about protecting free speech and other points of view? It's clearly a right-vs-left viewpoint, and so political. We can easily list how much worse Facebook is in terms of supporting dictators, supporting the CCP, censoring stories on Uyghurs, and not allowing political discourse on alternative viewpoints besides what is in their leftist bubble.
The first amendment protects against Congress specifically. But you as a human have the right to free speech, and that right needs to be protected against private individuals/corporations as well. If McDonald's blew up your house, you would sue for property damages instead of shrugging and saying "well it wasn't the government, so I guess they didn't violate my property rights after all."
Whether what Apple did here is actually a violation of one's right to free speech is a pretty tenuous argument. But that is the argument you should be responding to.
"Oh please. Laws exist, platforms must ensure their customers follow the laws, and when a customer refuses to remove illegal content"
Don't make this about 'laws' - this was definitely not about 'laws' or the 'law' would have stepped in - and it's definitely not Amazon's job to enforce the law.
Amazon knowingly sells fake goods and suppresses unions so let's not give them the moral high ground.
I don't think Apple's concern was arbitrary: there was a lot of 'really bad discourse' on Parler, and it was not moderated at all (or it was community-moderated, which had no effect).
I don't like Apple's App Store 'monopoly' however they were within reason to not want Parler in the store.
If Apple allowed non-AppStore downloads, then I think it would have sat much better.
Amazon however, I think did something a few steps more inappropriate and more nakedly political.
Amazon has a zillion customers that do arguably more nefarious things, in the 'big picture' they look hypocritical, though they are surely within legal rights as well.
I do feel however, that Amazon shouldn't be allowed to drop customers without a 3 month notice unless said customers have done something actually illegal, and by that I mean 'illegal as designated by an actual Judge'.
> and it's definitely not Amazon's job to enforce the law.
It's about liability, and big corp's allergy to it.
> > I do feel however, that Amazon shouldn't be allowed to drop customers without a 3 month notice unless said customers have done something actually illegal, and by that I mean 'illegal as designated by an actual Judge'.
> That's not how liability works
The post you're replying to is describing a better process (note the use of the word "shouldn't"), and your reply seems to be saying, "no, because that's not the current process."
The fact that the system we have now exists does not mean it's the best system possible. I could have misunderstood your comment though.
Amazon hosts all sorts of content that could make them otherwise liable, and they are selling billions in knock-off products every month.
They are able to insulate themselves from it quite well - much like 'Avis Car Rental' is not 'liable' for an agency or person that does something bad while renting one of their cars.
Amazon's response was at least about PR, and probably a little bit politics.
There are some parallels as Tumblr similarly could not control its content sufficiently to both allow most of it and also ensure the problematic content did not persist. Just explaining, not defending either social network or Apple.
Presumably advocates of NSFW content on large social networks don’t see a problem with it. Hopefully you see where the other user was coming from now; not looking for you to agree with them.
This is wildly underplaying what actually happened.
Parler wasn't "censored" because it contained "news stories" that were "remotely controversial". Parler was blocked because of rampant unmoderated violent content that correlated with real-world political violence.
Look, when you have major thought leaders on your platform calling for the VP to be executed, you need to expect some repercussions when a mob takes the rotunda screaming "Hang Mike Pence!".
> when you have major thought leaders on your platform calling for the VP to be executed
This doesn’t seem sufficient to me. Given that the US has the death penalty at all, how can we declare that it is not ok for people to call for it when they believe a crime has been committed?
Organizing an extrajudicial mob to carry it out, is another thing entirely, but just calling for it seems like it can’t be enough.
> This doesn’t seem sufficient to me. Given that the US has the death penalty at all, how can we declare that it is not ok for people to call for it when they believe a crime has been committed
Does the crime they believe was committed have the death penalty? If not, then calling for the death of the "criminal" is certainly not ok. (And generally I find calling for death and the death penalty to be morally wrong, but that's another story.)
I think they think the crime was treason, so yes, it has the death penalty.
I don’t agree with your logic that this is necessary though. Someone can reasonably call for a higher penalty than is currently the statute, as long as it is not clearly extrajudicial.
> (And generally I find calling for death and the death penalty to be morally wrong, but that's another story)
Yes - I’m not making a claim about the morality of calling for death or the death penalty, but I am saying that if the death penalty is legal, it must be legal to call for it.
It sure looks censored to me. I see only three tweets at that link, one calling for NOT shooting cops and the other two are just textless retweets of non-violent content that happens to include the hashtag.
If Twitter is allowing a #killcops hashtag to proliferate containing actual threats, then they should absolutely be blocking it. Seriously?
If you missed it, the point under discussion is Apple having removed Parler's access to the app store (and by extension similar action taken against Parler by other providers like Amazon). Apple, Parler and Amazon are all private entities not directly constrained by the first amendment.
Why does that technicality matter? Because it elucidates the hypocrisy of trying to claim that Lin Wood's call for Pence's execution was "technically" a call for a legal trial and execution and not for the mob violence that actually resulted. Which was a ridiculous argument.
> Because it elucidates the hypocrisy of trying to claim that Lin Wood's call for Pence's execution was "technically" a call for a legal trial and execution and not for the mob violence that actually resulted.
Um... you, upthread: "Given that the US has the death penalty at all, how can we declare that it is not ok for people to call for it when they believe a crime has been committed?"
There's nothing really comparable between calling for someone's assassination, and on the other hand, someone to be arrested and brought to justice for a crime.
And there's also the fact that there's no basis in reality for the claims to begin with.
In all of this confusion, it is true that Parler was essentially not moderated - and - there was a lot of really terrible stuff happening there, like 'kill these people'. The cover that Twitter can use is that they actually do moderate people and try to get rid of them. Mostly.
Parler must have had a huge leadership failure to play that game and think they were going to get away with it. There is zero chance that hosting people directly calling for violence, unmoderated, is going to last when associated with other people's brands.
I believe the CEO actually wanted to do something but his backers didn't.
> There's nothing really comparable between calling for someone's assassination, and on the other hand, someone to be arrested and brought to justice for a crime.
Agreed, which is why “Hang Mike Pence” is not sufficient. On its own, it doesn’t make the distinction between the two.
If you want to make the case that assassination was being called for, you need to quote something else.
"Put Mike Pence to Trial and then Hang Him" isn't either.
You have to qualify 'Kill That Guy' with something more in order for it not to be calling for someone's murder.
This activity was out in the open, uncontrolled, on Parler, which is why it was a problem; even if technically it might be legal, Apple doesn't want to deal with that.
I think their response was on point. Amazon's cancellation has more material effect, though; their responsible response should have been 'you can't go live with that while on our servers; you have 3 months to change policy or move out'. Granted, they may have been warned.
Generally hangings have been just as much a judicial punishment as extrajudicial. You simply can’t state that it alone is equivalent to a call for assassination.
The fact that you have to use the phrase ‘kill that guy’ which is a straw man, shows that the logic doesn’t work.
Consider, ‘he should get the electric chair’. It’s pretty clear that this is not a call for assassination.
There is some ambiguity around hanging which is less present with the electric chair, but is not the clear call for assassination you assert it to be.
The chilling effect is real, and after the Parler-fueled insurrection it was entirely necessary. I’m close to a free speech absolutist, but not actually one. Sometimes the barbarians really are at the gate.
I mean Parler hosted credible death threats and actual fascists. Not the hyperbolic "everyone I disagree with is a fascist" kind, but proper worshippers of Adolf Hitler and his agenda.
And - this is the more important detail - they took no effort to moderate this.
This isn't a slippery slope situation, this is the full on iceberg.
Short of child pornography I can't imagine an easier case to justify a ban from the perspective of a private enterprise.
If you are going to make such claims, then at least try and be objective about it. Let's start with the toxic hellfire that is Twitter, I'm sure no credible death threats exist there, right?
You know "Whataboutism" isn't a magic word that lets you ignore an argument, right? If Twitter has extremist content (or death threats, or racism, or whatever) and isn't banned, then that is evidence that that's not the real reason for another app to have been banned. Weak evidence? Maybe. Open to argument about proportions of "bad" content? Absolutely. But still evidence.
Yes, it is. I, and almost certainly a large number of people, just blip right over whataboutism posts. Everyone is fed up with this style of deflection. It's just... old. The discussion is about Parler; if you can't talk about it without the constant barrage of "well what about what Twitter does??" then maybe don't hit reply at all.
> You know "Whataboutism" isn't a magic word that lets you ignore an argument, right? If Twitter has extremist content (or death threats, or racism, or whatever) and isn't banned, then that is evidence that that's not the real reason for another app to have been banned
Parler were banned for refusing to properly moderate (while having no problem with moderating any content they disagreed with), not for having hateful content. When you report similar content on Twitter or Facebook, they usually do something.
So yes, this is classic and very useless whataboutism. Don't you have anything meaningful to add to the conversation?
We appear to have very different ideas of what whataboutism is. If twitter and parler are actually different, then just say that. There's no reason to dismiss the argument without meeting it.
> Don't you have anything meaningful to add to the conversation?
And in any event, there's no reason to resort to personal attacks.
So? Those “fascists” probably use web browsers and email too. Should Chrome/Gmail be pulled from the App Store because Google refuses to moderate their creations?
Neither was it an attempted coup nor much of an assault. The protesters even dispersed voluntarily - if it had been an actual insurrection, the national guard would have arrested the participants that day. As it is, they're still trying to figure out who was even there.
We've banned this account for breaking the site guidelines and posting unsubstantive and flamebait comments. You can't vandalize the site like this, no matter how wrong other people are or you feel they are.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
This has always been the case. Book publishers can refuse to publish your manuscript, newspapers don't have to post your reply. Censorship in one form or another has always been there.
The difference here is social platforms claim they are not publishers under section 230.
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"
Therefore the comparison does not work.
Part of the intent of Section 230 was to allow free, open discussion without the platform being liable. But once they begin to curate the content (a nice way to say censor), they are no longer performing under Section 230.
> In court filings and elsewhere, Parler has said that it had been developing an artificial intelligence-based content moderation system when the larger platforms' crackdown took place.
I can't see this ending well...
...Also, did they finalize a deal for hosting? I recall seeing their absurd hosting spec [2] that would have cost >$250K/mo, and I don't see that working anywhere other than AWS/GCP/Azure or a dedicated datacenter.
Both their recently departed CTO and also their Founder/ Lead Engineer have very little real-world technical experience from what I see on LinkedIn. I can't see how they would be "developing an artificial intelligence-based content moderation system".
Same way as everyone else: Throw statistics at it, pretend that it works, brand it "Artificial Intelligence", use it to excuse all failures in moderation.
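At its crudest, that looks something like a weighted term score with a cutoff. Here's a toy sketch in Python, assuming nothing about Parler's actual system (the terms, weights, and threshold are all invented; a real moderation model would be trained on labeled data):

```python
import re

# Invented weights for illustration; a real system would learn these
# from labeled examples rather than hardcode them.
TERM_WEIGHTS = {"kill": 0.6, "hang": 0.5, "execute": 0.4}
THRESHOLD = 0.5

def flag_for_review(post: str) -> bool:
    """Flag a post when its summed term weights cross the threshold."""
    tokens = re.findall(r"[a-z']+", post.lower())
    score = sum(TERM_WEIGHTS.get(t, 0.0) for t in tokens)
    return score >= THRESHOLD

# flag_for_review("hang them all")       -> True
# flag_for_review("great weather today") -> False
```

Brand the threshold a "model" and you have an "AI moderation system" that fails in exactly the ways described above.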
> "It's not a side business for anything. It's not a shell company for anything," he said.
I don't see how a company with "less than 50 employees" could possibly host something with that spec, reliably and securely. Maybe it's just grifters all the way down.
I wish I had the data to create a Venn diagram of people who believe that bakeries should be able to refuse gay customers, and those who are screaming that big tech can't just arbitrarily decide who it does business with.
I hear a lot of voices in these threads that would occupy the overlap of that Venn Diagram. The cognitive dissonance is really something.
For anyone who thinks that the parent comment refers to the Colorado baker case (https://apnews.com/article/130137ace2e8416aa207456827fae92b), note that the baker did not refuse gay customers per se, but refused to customize cakes that celebrated gay marriage: gay customers could still buy non-customized cakes.
For anyone who thinks the above is too minor of a distinction, a potentially worthwhile question to explore would be whether, for example, an African-American baker (who customizes cakes) should be allowed to deny a customer's request to celebrate religiously motivated white supremacy.
Personally, under current law, I think Apple and Google should be able to determine whether an app like Parler is allowed on their platforms, although I think that concentration of power (whether in a corporation or in a government) is generally not good for society. I also think that there should be some base level of services (e.g. ISP) that should be treated as a utility (e.g. we don't deny someone water service because of his bigoted opinions).
That's a very good point, and a distinction that I had not thought of before. Maybe it comes down to legality. Homosexuality is not illegal but is racism illegal? Even religiously motivated racism?
There is a legal and an ethical argument. What constitutes a protected class should not be limited to legal obligations, but should reflect a liberal-democracy point of view. According to some recent polls, liberals themselves report the most fear of their views being silenced, and these tech platforms should really think about what societal harm they are doing when nearly all dissent can be labeled as hate or racism and silenced. Said another way, I'd like to understand exactly how we're supposed to discuss sensitive issues (for example, immigration policy) when anything one side says is immediately labeled racist.
Isn't the opposite true as well? Lots of people think bakeries should not be able to refuse gay customers, but do think big tech should be able to arbitrarily choose who they do business with.
Nearly everyone sees the world through the lens of their Party, and come up with all sorts of justifications afterwards.
Actions, not beliefs. Bakeries should have the right to ban anybody who has gay sex on their premises. Big tech should not be able to arbitrarily ban somebody based on internal beliefs that they don't act upon on the big tech platform.
Sounds very succinct and simple, but who draws the line? Can a bakery deny a customer who "acts gay"? That's an action, not a belief. Does voicing an opinion count as an action?
You still have to decide which actions are protected and which are ban-worthy.
I think that's disingenuous. All business have rules and regulations that "customers" must follow. No shoes; no shirt; no service. I can't refuse service to you if you are thinking about taking your shirt off, but I can refuse service if you actually take your shirt off.
I don't understand how that point applies here. A bakery could come up with "rules and regulations" which would effectively deny service to any group the baker wished.
The baker could make rules against holding hands with someone of the same sex, or not speaking proper english. There could even be a strict "don't argue with the baker" rule, and the baker could sass and goad customers he didn't want into arguing with him so he can throw the customer out.
The entire controversy around what happened with Parler is the question of whether the rules the big tech platforms have are just, and evenly enforced. I have not seen anyone arguing against the concept of rules itself.
As far as I understand it, yes, the rules the big tech platforms have ARE just. In this case the customer (Parler) didn't follow the rules (ToS) and was given notice as to what rules were broken and how they needed to fix things. They did not correct their actions and so they were removed. When they took corrective action, they were allowed back as customers. Seems pretty simple to me.
Now, the question of "even" enforcement is another matter, and I don't think I have enough knowledge or background to have an opinion on that.
The law creates a binary view of the world. Actions are either legal or not, and we all have to live with the same law.
Complexity makes hypocrisy inevitable in that system, but that doesn't make it desirable. You'd like to think that somebody confronted with objective, straightforward, simple hypocrisies would be willing to seek a compromise. Often, it's the opposite.
Well, I'd like to believe that several reasonable people holding complex ideological positions can come to a compromise in terms of a more or less binary law, and abide by it, despite not being in complete agreement with it.
I would love to believe that, but in my experience, the law is most often set by people holding the strongest positions. That creates enthusiasm and certainty. Discussing complex ideological positions is dull and uncertain. Those strongest positions are usually held by a minority, but often, that minority can forge an alliance with other minorities to become a majority.
Maybe that's too pessimistic, but there it is. Paradoxically, I hope you're right.
While I don't think bakeries should refuse gay customers, you have to admit that there is a difference, in terms of social capital and political power, between a local bakery and a multi-national company with more revenue than entire nation-states' GDPs.
There's healthy competition between local bakeries. If one refuses you, you can just go to another. Try doing that with the App Store. P.S. I don't believe companies should be able to discriminate based on sexual preference, I just dislike this comparison.
I would say that Parler was created to get away from big tech options, and yet Apple, Amazon and Google were overreaching and IMO attacking the platform.
The issue with Big Tech is that:
(1) They can't be sued (read: your case would be dismissed) for any of their decisions, because of Section 230.
(2) They are also not a common carrier, and not subject to those restrictions.
Essentially they get many of the benefits of being a common carrier, but without being a common carrier and having the associated obligations.
That's the difference between Big Tech and bakeries. Big Tech could literally object to all LGBT content on their platform and because of Section 230, your case would be dismissed.
Bakeries don't have anywhere near the amount of freedom as Big Tech enjoys from Section 230.
> Big Tech could literally object to all LGBT content on their platform and because of Section 230, your case would be dismissed.
I don't see how Section 230 is relevant there at all. Section 230 is about content that appears on your site, not content that does not appear on your site.
First off, that argument cuts both ways - you could equally say that the people arguing Twitter or Facebook can deny some customers but a bakery should accept all customers are displaying cognitive dissonance.

But I would argue that the two situations are actually not comparable. There are lots of alternative choices available for a random bakery. But there are no alternatives available for big tech companies. They are protected from competition due to network effects or due to their sheer size. They operate the public digital town square, but are not subject to regulation like utilities are. They carry almost all societal speech today, and their censorship is as powerful as when a government does it, but they are still not required to implement any of the rules (the Constitution) that apply to our government. Big tech is not operating in a healthy environment of market competition.
It’s not dissonance. They believe that things they don’t like shouldn’t happen and things they do like should be allowed. They want to be homophobic and not have their political opinions censored. There’s no inconsistency there.
It bothers me when people bring this up about SCOTUS appointments. “Mitch McConnell said Merrick Garland can’t be voted on because it is an election year then did the same with ACB weeks before an election. What a hypocrite!”
The right has cared about one thing above all else for the past decade: put conservative judges on the bench; as many as they can, as fast as possible. There’s no “dissonance”. They know exactly what they care about, what they’re doing, and why.
It’s the same thing here. People don’t go by frameworks. They backfill justifications for their feelings. If those justifications overlap in weird ways, it’s just that they’re bad at consistently translating those justifications. The feelings are the same and not contradictory.
Is there some reason no digital platform producing company ever washes its hands of all moderation and leaves control of illegal activity to the people actually responsible for it (the FBI, NSA, local law enforcement, etc.)?
Are companies actually liable somehow if they don't take proactive steps to mitigate illegal content, which nowadays also leads into heavy-handed censorship because of what they interpret as being borderline illegal (like "hateful speech")?
In other words, is this a purely risk assessment related action, or an ideological one as someone like Alex Jones would shout into a mic?
This is a reasonable question. I hope you are able to find an answer but I don’t have one for you. I’d start by reading the “you’re wrong about...” pair of posts covering the First Amendment [1] and Section 230 [2] as they are good primers, even if you aren’t “wrong”. I’d also read the text of Section 230 itself [3] and then read up on safe harbor in general [4]. That should get you going in the right direction.
And FOSTA/SESTA cut up some pieces of that for some sites/situations, which could create some of the issues that the OP was suggesting for some parts of the internet and not others (yet).
Of course, there has been talk on both sides of the aisle (especially the past 6 months or so) about removing 230 totally, or updating it so that moderating speech makes you lose 230 protections. The 'dumb pipe' defense - "we don't write the words, our users do, so we aren't responsible" - starts to erode when you censor some things you don't like, especially if you censor them before they are posted, from what I understand.
Of course some portals have already destroyed their dumb-pipe defense all on their own, aka Cloudflare.
This leads to interesting situations where I believe most users do not know that their assumed private speech is being monitored, censored, judged, reported, deplatformed etc..
IANAL, dr., etc - anyhow I think it's a fluid situation currently.
We know what happens to totally unmoderated social platforms: you get 8chan, child porn, and the absolute worst content humanity produces. It's not just about legal liability, it's about what community you curate.
This appears to be a common flaw in human reasoning.
Email has carried untold amounts of child porn. BitTorrent as well. SMS? Check.
Everyone understands you don’t blame AT&T when someone texts CP, but wrap a pretty UI around your protocol and suddenly the unwashed masses expect you to be the sole arbiter of truth and legality for everything that flows over said protocol.
> Is there some reason no digital platform producing company ever washes its hands of all moderation
Perhaps all that is necessary is the empirical observation that every completely unmoderated forum turns into a metaphorical sewage containment vessel once it gets beyond a certain size.
Doesn't matter who's "actually responsible" for hateful and toxic speech if no one - neither users or advertisers - is willing to patronize your business for hosting that speech. Would you be willing to pay to use 4chan? If you ran another business, would you advertise on there? So ask yourself: if you owned a digital platform where users/advertisers were fleeing left & right because they constantly see vile content that they don't want to see or associate with, would you be content to wash your hands of moderation and impotently say "well actually the FBI/NSA is responsible for this, not me" as your platform dies around you?
The goal of most platforms is to get more adoption and more users. People who post hateful, weird, and/or borderline illegal content drive away the vast majority of other "normal" people. When the majority of people view something as tainting, the platform will remove it, because hosting it will taint the platform and the majority of people will leave it, or not join it in the first place.
Remember Voat. They wanted to be "free speech Reddit" but predictably they turned into a cesspool of degenerate users, even for a time hosting the r/jailbait folks who had been kicked off Reddit a long time ago. And unsurprisingly Voat never got mainstream and now it is dead, while Reddit is still online and growing more and more all the time.
If you have one platform which bans, say, racists, and another platform which doesn't, then most normal people will gravitate to the sans-racist platform. The sales team of the latter platform then has to go out and convince people that they want to advertise to a small market of racists and racism appreciators. This, fairly obviously, isn't as easy as selling ad-space on the less racism-y platform.
People have _tried_ unmoderated forums and other social media, again and again. It doesn't work. Being unappealing to advertisers is actually the _good_ case; the bad case is that people use it to orchestrate serious crimes.
Whether you agree with the censorship of Parler or not, Big Tech showed its hand, and how willing it is to organize to ban competing services.
I am getting weekly calls about how to move off of AWS services. These are normal businesses, not social media or news organizations. Decentralization is going to happen.
I don't see any evidence of coordination or organization. Rather they all saw a dangerously out-of-control situation where they could be found liable, or at least involved (in the court of public opinion) in armed insurrection. They all demanded some form of the same thing: moderate the dangerous conversations that were calling for rebellion and violence. But they did not demand a particular methodology (which you would have seen in an organized endeavor).
If your service suddenly becomes synonymous with an armed uprising, and your response is to talk about free speech and how it is not your fault... then ya, you are going to lose business partners left and right.
Armed does not just mean firearms. You can be armed with, let's say, a flagpole or a fire extinguisher (both were used as weapons against the police by the insurrectionists).
> The autopsy found no blunt trauma to the head. Sicknick's own family kept urging the press to stop spreading this story because he called them the night of January 6 and told them he was fine — obviously inconsistent with the media's claim that he died by having his skull bashed in — and his own mother kept saying that she believed he died of a stroke.
> Authorities have reviewed video and photographs that show Sicknick engaging with rioters amid the siege but have yet to identify a moment in which he suffered his [supposedly] fatal injuries, law enforcement officials familiar with the matter said.
> According to one law enforcement official, medical examiners did not find signs that the officer sustained any blunt force trauma, so investigators believe that early reports that he was fatally struck by a fire extinguisher are not true.
My prediction (extrapolating from other large human-built systems) is that the infrastructure of the web will evolve into a soft bifurcation of companies that can leverage centralized Cloud infrastructure and companies that can't (for whatever reason... Infrastructural load too weirdly shaped, application too risky for service providers to want to sign off on it, pissed off the wrong people, etc.). People in the out-group will build more ad-hoc infrastructure for serving their needs, and we'll have a bifurcation into basically the "cleanweb" and the "trashweb" that will somehow (probably by URL shape) bubble up into the view of the end-user.
Things on the cleanweb will be average, safe, and predictable. Various deviations from user expectation will be punished (up to and including having your service provider yank a piece of your service Jenga tower out from under you if they're irritated enough about the offense).
The trashweb will spawn some serious beauty and truly novel things and an awful, awful lot of scams, hatred, illegal transactions, and garbage.
And most people will constrain themselves to the cleanweb but the trashweb will see a robust userbase.
I wonder about that. If you think about it, Parler isn't particularly "decentralized", either. It isn't Twitter, but it is controlled and curated by a fairly arbitrary authority. In theory, if you didn't like Twitter's rules, and you didn't like Parler's rules, you could just move off to another competing service - if you or somebody else started one up (easy) and did the legwork of advertising it far enough that enough people used it to make it useful (hard). True decentralization would be something like Tor, I2P or Freenet, where all the users cooperate to share bandwidth and there's no plug to pull and no central dictator to ban content. However, truly decentralized services have been around for a long time now (I seem to recall installing Freenet once in the late 90's), and they have yet to catch on. As much as some of us fear the power of central planning, it seems as though the majority of people like knowing that there's nuclear kill switch that can be pulled if the communication platform gets to be too contentious.
Bitclout is actually quite promising as a decentralized service, but they do require a phone number at signup (or alternatively, a payment at signup) as an anti-spam measure.
I really don't understand this perspective. There was a severe emergency situation which prompted this response! It's like watching the police lock down an apartment building in response to an active shooter, and saying "they've shown their hand, cops just want to keep everyone locked inside their home".
"All" isn't an accurate description. Much of it did happen on Facebook, and in response Facebook swiftly put in place new policies to prevent the situation from escalating. Unlike Facebook, Parler was unwilling to enact emergency measures, so instead emergency measures were enacted on them.
FWIW, while I appreciate that you can technically pick at the word "all", if--as many people (I think correctly) believe--Parler was ancillary to the coordination (as in, without it existing the riot would certainly still have happened) but Facebook was key to it (as in, without it existing the riot might not have happened), I think the word "all" is appropriate, as that is how the word is used by actual English-speakers... think of the phrase "all of the nutrition in that meal is coming from the side of broccoli".
Regardless, it doesn't seem like Facebook put in these supposed emergency policies anywhere near quickly enough, right? Here is an entire article talking about the nuances of blame here... I just really get the feeling like a big part of what happened with Parler was "big tech" realizing their own culpability and then having to organize around a scapegoat, lest anyone decided to come after them instead.
I don't think that's an entirely false description, and I'm certainly not trying to claim that Facebook (or Apple, to be more on point) is blameless. But the reason Parler was an effective scapegoat was their unwillingness to acknowledge a problem and work to fix it. So I'm not convinced that this specific sanction of this specific company is relevant to the larger trends.
What a joke, considering the damage is already done. Big tech coordinated to censor a major communication channel based on politics they didn't like, during a presidential election season. This should be used as ammunition to break up Apple's monopoly on the App Store.
The reason for the ban was for failing to have a content moderation system at all. I think that was a fair thing to require.
And now that Parler has proven it has a system in place, Apple has allowed them back on to their storefront.
There are a lot of potential issues with the Parler ban -- namely the revelation of how monopolized online distribution is. But the ban had legitimate reasons behind it.
I feel like "the damage is done" so to speak and Parler will forever be associated with negative connotations. Not just the associations with January 6th, but also with security breaches and having such detailed personal information leak from it.
It missed its shot at becoming a social media platform, and I don't see why people would come back to it now that it is on the App Store again. The free speech crowd has moved on to other platforms, and the Trump people are awaiting Mike Lindell's new platform and/or the Donald Trump social media site that's rumored to be in the works.
> Parler will forever be associated with negative connotations.
Parler was created so people would be free of Twitter "Censorship". About the only thing Twitter censored was threats of violence or rape, election misinformation, or extremely blatant racism.
Their only appeal was to people who found that kind of speech acceptable. If you market primarily to violent and racist people, you are absolutely going to be associated with negative connotations.
They also block any skepticism around COVID-19: origins, transmissibility, whether toddlers need it, efficacy, side effects, etc. I'm against anti-vaxxers, as is any rational person, but you ought not to censor discussion around it, particularly when we're still learning about the disease, treatments, countermeasures, etc.
They allow doxxing of some people but then have policies against doxxing of other people.
So there is good reason for an alternative if we want dialogue rather than echo chambers.
Twitter is not official news (Granma, Pravda), nor is it the Ministry of Truth (though it tries). Even when we read official statements and accept Twitter's censorship, we don't actually know if Twitter is making the right or wrong call. WHO and CDC officials have contradicted themselves multiple times, undercutting their own earlier arguments.
Debate is healthy. Suppression is not healthy. It’s getting to be Orwellian and Dystopian. They are allowed to be wrong but not others. They are allowed to re-write history.
China has a system where you report fellow citizens for "mistaken beliefs". I fear we're going down the same path, and people are doing it happily. It's concerning.
While I certainly believe there is an objective truth that can be divined by patient review of the facts, the Twitter, Inc. HQ is a long way down the list of places I would look for guidance.
Twitter can certainly kick whoever they want off their platform for whatever reason. However, they don't know anything more about breaking news than the rest of us. They cannot have banned a breaking story because it wasn't true; they had no way of knowing.
If they banned a NYPost article based on the newspaper's reputation, that is a ban on strictly political grounds.
How was Twitter supposed to know that unless they independently investigated the story at the same time as NYPost before NYPost broke it?
This is the danger with censoring "fake news". Obviously there's a huge grey area, since I don't think many people would object to censoring falsehoods that are agreed upon to be harmful to the public, like claiming that 5G spreads COVID-19.
But with something like the Hunter Biden story, who exactly are these companies like Twitter to almost immediately censor such stories, whether they seem specious or not? Do they hire people specifically for their background in journalistic ethics?
Yes, they are private companies. We are also treating them as arbiters of truth and have handed a massive amount of power to them. They can absolutely censor what they want to censor, but that doesn't mean people are wrong for pointing out the bias and the danger present in exercising such strong armed control over what information people want access to.
By the way, telling people they can't see something only makes them want to see it more. We can hardly blame people for distrusting Twitter when, for some reason, Twitter had an interest in hiding a story of political interest contrary to its own.
If Twitter is only to sanction stories that they believe are true, then they are completely falling flat when it comes to the sheer amount of specious tripe that comes out of the mainstream media. I can't count on both hands the number of times I've read articles making claims based on a study, only to use Sci-hub to look up the actual study and find that it completely contradicts the claim some journalist was making.
They were blocked for posting a specific article about the Biden family. The article was factual and was of public interest.
The fig leaf they used to justify it was that it exposed an email from a public official (a Ukrainian gas executive) and was "hacked/unauthorized" material. Of course, journalists use unauthorized information all the time (Snowden, Trump's tax returns, etc.) without getting banned from Twitter.
The article was 100% fact-based. The facts laid out in it just happened not to line up with Twitter's preferred election outcome, so they silenced it.
There most definitely has not. It remains entirely unsubstantiated, while several other claims in the Post story are directly contradicted by ample public records.
In an April 2 (?) interview (20/20?), he says it could be his laptop, or it could be fake; he doesn't know. I thought something came out recently establishing that, yeah, it most likely is his laptop, and that he's pulling an "I can't recall" and then leaning on sympathy for his past drug addiction.
( https://www.startpage.com/do/dsearch?query=20%2F20+hunter+sa... )
So I would not say "entirely unsubstantiated" -
In fact, I would read between the lines and be real: he's certainly seen the articles quoting emails and pics and such. If this was a setup or fake plant, he could have said, "Hey, that could be my laptop, but I never emailed X and never had a pic of Y, so the thing is a false flag." Instead he did deny the content but owned up to the possibility that it was his hardware. So...
If I'm wrong and don't know the whole context, I am open to considering the whole truth. Admittedly I have not watched the whole interview (yet, and don't want to, but will if people say this or that matters).
At this point it certainly seems that Twitter censored something as voting disinformation or election manipulation, and that by censoring it, Twitter actually created disinformation and election manipulation.
With this, being such an important thing - I am okay if corrected and find out my current understanding is wrong.
The DKIM signature of the "bombshell email" (Vadym Pozharskyi thanking Hunter for giving him the opportunity to meet and spend time with Joe Biden) is publicly available and can be verified. Other people involved in the emails have also corroborated their legitimacy.
The laptop story seems like the most probable explanation of this leak, given the documents and FBI correspondence provided. Most attempts I've seen to debunk it seem dishonest or misleading, like "the PDFs' metadata showed they had been created in the fall of 2019, though the emails were supposedly from 2014 and 2015" as if anyone expected the PDF compilation of incriminating emails to be pre-made by Hunter.
While the email is real and the laptop story seems the most probable explanation, the email doesn't necessarily imply corruption on Joe Biden's part or even that he discussed anything business-related with Pozharskyi.
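For readers unfamiliar with how such a check works: a DKIM verifier parses the `DKIM-Signature` header, fetches the signing domain's public key from a DNS TXT record, and checks the signature over the listed headers. Here is a minimal sketch of just the parsing step, using only Python's standard library; the message and all header values below are hypothetical placeholders, not the actual email in question.

```python
# Sketch of parsing a DKIM-Signature header to find what a verifier checks.
# The message below is a hypothetical placeholder, not the real email.
from email import message_from_string

raw = """\
DKIM-Signature: v=1; a=rsa-sha256; d=gmail.com; s=20120113;
 h=from:to:subject:date; bh=BODYHASH=; b=SIGNATURE=
From: sender@example.com
Subject: example

body
"""

msg = message_from_string(raw)

# Unfold the header and split its tag=value pairs (v, a, d, s, h, bh, b).
sig = msg["DKIM-Signature"].replace("\r", "").replace("\n", " ")
tags = dict(
    part.strip().split("=", 1)
    for part in sig.split(";")
    if "=" in part
)

# The verifier looks up the RSA public key at "<selector>._domainkey.<domain>"
# and checks the b= signature over the headers listed in h=.
dns_record = f"{tags['s']}._domainkey.{tags['d']}"
print(dns_record)  # -> 20120113._domainkey.gmail.com
```

Actual cryptographic verification would additionally fetch the public key from that DNS record and verify the `b=` signature over the canonicalized headers, which is what third parties reportedly did with the published email; as long as the domain's old key is still published, the check works for anyone.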
It's not accurate to say it was "entirely unsubstantiated". Other news sources besides the NY Post (most recently the Daily Mail) have claimed it was authentic, and Biden himself hasn't disputed its authenticity.
Regardless of the laptop though, there's clearly a double standard in what Twitter will block with or without proof. The Russian bounty program that was widely reported by the Washington Post and others during the election cycle was unsubstantiated, but no effort was made to block it.
That wasn’t the basis on which Twitter suppressed the Hunter Biden story. Overall I think Twitter’s heart was in the right place as they know that the average voter can’t be trusted with the future of the country, but I can see why people are upset.
"Twitter’s heart was in the right place as they know that the average voter can’t be trusted with the future of the country" -- that's an extraordinary statement. Who do you believe should be trusted with the future of the country? Social media platforms?
There's a spectrum, and all news outlets will have incorrect information published from time to time, be it bad sources or bad reporting. Humans are involved, after all.
You'd also have to decide whether you include Fox/CNN/MSNBC's opinion segments in the evaluation, or just the news sections. (Fox and MSNBC are especially screwed in this evaluation if you include the opinion hours.) After that, decide if "technically true but highly misleading" counts as factual or not.
The NY Post has a long history of being sensationalist and misleading - there's a reason it's considered a tabloid.
So is the NY Post the only tabloid in the world? What about the Sun in the UK? Daily mirror? I think based on your requirements all those should be fact checked 100% of the time and blocked when incorrect information is posted right? Or should it only be when it damages the preferred political candidate?
They also, on January 6th, deleted Trump's posts and a video telling people not to be violent, unlawful, and disorderly, and saying anyone doing otherwise would be brought to swift justice, right before banning him, all so they could feed the narrative that Trump wanted people to invade the Capitol.
That video included the statements "We had an election that was stolen from us. It was a landslide election and everyone knows it, especially the other side." and "This was a fraudulent election..."
They were removed for being a repeat violation of the conduct he'd already been warned to avoid.
I share your view of the situation, but it is fucking frustrating that "the other side", including the person you replied to, will probably keep seeing it the way he described, even after you write this explanation...
There is a lot of whataboutism in society lately. The what about in this case are the many posts claiming the 2016 election was fraudulent, stolen, etc... While I personally understand the differences (the 2016 election really was stolen, but the 2020 election was not), I also understand why others do not agree and instead see both claims of election fraud as sour grapes, consistent with similar claims of election fraud from every close election in American politics in the past century. Dangling chads?
The claims about the 2016 election being stolen had no more factual basis than the claims about the 2020 election. Amazingly, there was even a supposed proof that the 2016 election was stolen circulating where, if you understood why the supposed election theft couldn't have worked as advertised, it was really obvious that the author knew this too and was pushing a narrative he knew was bullshit. The real difference is that the mainstream press and big tech supports the party that won in 2020 but opposes the one that won in 2016, so the idea of election theft was lent credibility by them for one and discredited for the other in ways completely unfounded in actual evidence.
I think some of what you say is widely debated, but it is disappointing that those who advocated better election security after 2016 did a 180 in 2020 and decided the same systems are now infallible. Election security shouldn't be a partisan issue. Wholly suboptimal.
I’m the person, and I don’t see these things as one or the other.
The election results questioning thing is a totally different topic but it doesn’t change the fact that people literally attempted to claim Trump wanted an insurrection and Twitter could have continued to flag things with their “election results” blurb rather than pulling the entire post.
I think we all can see what the common denominator is here... why not allow both the condemning of violence and lawlessness as well as the ability to “trust but verify” on election results?
Mitt Romney said “an audit won’t satisfy them only telling them the truth will” only nobody to this day even knows the truth yet, the best we can say is that we aren’t sure, and that leaves it open for either side to say they are sure about their angle.
He spent months telling people that their country was being stolen from them when what's really happening is the country is moving on. He lied and he lied and he lied some more. He told them that if they didn't fight for him they wouldn't have a country and then he told them we need to go together to intimidate our lawmakers. Of course he didn't have the balls to march alongside them but he literally spent months feeding the anger, formed that particular crowd, and pointed them towards congress.
I was watching the entirety of the events through streamers cameras from start to finish. Were you?
Yep, I even went down and checked it out because I live near DC. I think lying is a strong word; saying that "you won't have a country" is true, but I think he's talking about the long-term effects of bad policy, not literal country stealing (which makes no sense).
We also know that people have cheated in elections since the dawn of time. In the US it's typical for each party to just ignore it for the sake of moving along; in this case we found a guy who didn't care about toeing party lines, so he decided it was a hill to die on.
Right or wrong if anything he revealed some massive flaws in the system and if voting is as important as people say then it’s probably a good thing to be aware of and scrutinize.
Remember, he could be right, it came down to a mere handful of counties where the results were never able to be verified, and to this day the local officials have been blocking attempts to audit, even after judges ordered it or subpoenas were issued. Maricopa county audit is happening soon I think.
We have no reason to suspect the election was anything but generally fair outside of small scale issues like the fellow who tried to vote for trump with a ballot obtained for his dead mother in law. Trump told tens of thousands of verifiable lies. Lying is exactly the correct word for what he did throughout his presidency and beyond. Almost every election could come down to a "handful" of counties if you selectively pick the most populous counties you lost, make up a tally that would result in victory, and then declare fraud must have taken place to achieve that fictional number.
By that logic, any tennis match between an excellent player and a terrible one could be said to have been decided by a handful of mistakes, if you declare without evidence that those calls were made badly.
The election was fairly lost by a huge margin of the electorate and a substantial EC vote margin. Trump is both a traitor to our country and a liar and you should stop apologizing for him.
Are you joking? There’s a shit load of reasons to question the election results. We have just as much reason to trust them as we have to doubt them because they went entirely unverified in the counties that had irregularities. And no, it wasn’t lost by a large margin, it came down to questionably small margins in only a handful of counties.
All they needed to do was audit the ballots, it wouldn’t have even taken long. The reason lawyers did their best to block these audits was because they knew it was possible for them to lose if the audits went forward.
Maricopa AZ, Fulton GA, Detroit... did you look at any of the links I posted?
This is pretty pointless because people have already made up their mind that there’s nothing there and nobody would ever lie or manipulate a count to “save the country from Hitler”, when in reality we still don’t know if that happened or not because everyone with the ability to check agreed not to check. I am not claiming to know there was enough fraud to swing it, just that they never confirmed there wasn’t, they just took the word of the local officials that the counts were correct. To me, that’s absolute insanity, and how anyone who is pro-democracy would support that is beyond me.
The Maricopa audit is happening soon, so that should be interesting. And the Fulton ballots were ordered by a Judge to be turned over a few weeks ago, but the local officials are appealing to block the audit!
Presumably they are trying to block further meandering political theatre; there has been sufficient process already, and it's abnormal to attempt to relitigate an election that wasn't even close almost six months after it was lost.
I mean, he's just saying what he believes; he could very well be right, we don't know.
In my opinion a complete lack of trust should be the default stance when it comes to voting, we should ideally not have to trust anyone or anything for this.
It is completely legitimate to challenge election results. It is legal, and a part of our political process. It shouldn't be surprising either - it's hard to trust an election process that made a hasty and unplanned switch to mail-in voting, that altered the rules of the election arbitrarily (example https://www.npr.org/2020/10/19/922411176/supreme-court-rules...), etc. And with Democrats repeatedly pushing back on very reasonable voter ID laws, it seems that one side really does not want to make elections secure and instead leave security gaps open, which fuels those concerns further. We should all want a maximally-secure election process because it is the most fundamental piece of our political system.
Keep in mind, I am saying this as someone who believes that the amount of fraudulence alleged would not alter election outcomes. Regardless, it's important for our political process to play out in a trustworthy, transparent, and censorship-free way. Your phrasing of "violation of the conduct" tries to lend legitimacy to Twitter's actions. In reality Twitter simply engaged in one-sided censorship, basically providing election and campaign help to the Democrats because they (their executives, their employees) are politically biased.
Twitter rarely “censors” anything. There are countless examples of people getting multiple death threats and Twitter just blowing it off. Female journalists are frequently targeted with rape and death threats for posting things which are only moderately controversial and Twitter waves it off “this isn’t a violation of Twitter policy”.
The people who were blocked at all were generally repeat offenders.
> About the only thing Twitter censored was threats of violence or rape, election misinformation, or extremely blatant racism.
One of these things is not like the others. Equating rape threats with what Twitter unilaterally decided was “election misinformation” and using that as a basis to silence a sitting President of the United States is an extreme case of censorship that justifiably scared anyone with views that divert even slightly from Silicon Valley’s brand of extreme leftism.
Even Jack Dorsey acknowledged that Twitter had entered into dangerous territory with this move [1]. Specifically, he said it ”sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”
I agree with you that Silicon Valley is an odd mixture of socialist ideals and unbridled capitalism. For example, we have tax dodging corporations like Apple pushing to elect people that have a stated goal of implementing massive tax-and-spend initiatives. Somehow it all makes sense in their minds. A cynical person would say that they are just siding with whatever side happens to be popular at the time in order to make more money, knowing that they will be able to avoid the effects of such policies through international tax avoidance schemes. But I think that many of these guys actually believe the things they say, which is an interesting dichotomy.
| Silicon Valley is an odd mixture of socialist ideals
There is literally nothing socialist about Silicon Valley. Government spending is not socialism and never has been.
If Silicon Valley supported nationalization, collectivization or unionization, you would have a case. But they don't, in fact they're pretty obviously against all of them.
Many in Silicon Valley advocate for things like basic income and socialized medicine. I’m not saying that these are good or bad, but they are socialist ideals.
| Many in Silicon Valley advocate for things like basic income and socialized medicine.
UBI is not socialist, it has roots in Austrian economics. Most socialists are opposed to it, although some might accept it as a short-term compromise. Like I said, socialism is nationalization, collectivization or unionization. It is not the negative income tax proposed by Hayek which maintains market relations and alienation of labor.
As for universal healthcare/single payer, can you show me people representing Silicon Valley that support Medicare For All? Not just employees, not the proles, but actual power brokers?
This is easier than you are making it. The right majority has gone off the deep end. The farthest left minority would jack up taxes substantially. The left moderate majority would increase taxes somewhat in a way that leads to long term prosperity.
Allowing the right to take the country on the crazy train to moral and economic destitution is bad for business and the furthest left are a manageable risk. The smart money is in on supporting moderate right and blue majority.
> About the only thing Twitter censored was threats of violence or rape, election misinformation, or extremely blatant racism.
> Their only appeal was to people who found that kind of speech acceptable. If you market primarily to violent and racist people, you are absolutely going to be associated with negative connotations.
This is not just explicitly false, it is a gish gallop of unsubstantiated insinuations about Parler and/or their customers. I spent some time on Parler, and I was expecting a hellscape but instead literally did not see even one example of the type of content you are listing out here. If it exists, it is exceedingly rare, which makes sense considering Parler had 20 million users. The vast majority of the content was people posting the same content they post on other platforms, to build up a presence on Parler.
As for your claims about what Twitter censors - it is certainly not as limited as you claim. Some examples: 1) They censored Zero Hedge from talking about the Wuhan lab leak theory and banned them for months before reinstating them, under the very dubious claim that they "doxxed" the head of the lab, whose contact information was provided on the Wuhan lab website as the public contact for the lab. More recently, once other people started talking about the Wuhan lab leak theory, it was suddenly okay. 2) Twitter blocked NY Post's story about Hunter Biden under a vague "hacked materials" policy (https://nypost.com/2020/10/15/twitter-changes-hacked-materia...) - meanwhile, they allowed the NY Times and others to post about Trump's private financial data, and allow "hacktivists" to share illegally obtained materials with no consequence. 3) Twitter censored posts about BLM's founder spending millions on real estate (https://nypost.com/2021/04/16/social-media-again-silences-th...)
There's a more extensive list of inconsistent, one-sided application of censorship in social media at https://thefederalist.com/2020/10/15/11-hacks-leaks-and-hoax... and https://www.newsbusters.org/blogs/free-speech/nb-staff/2020/.... Other articles compiling censorship actions also appear in web searches, and they too go far beyond the types of censorship you mentioned. It is clear that US tech giants - who are more influential than most nations - are not in the business of facilitating open discourse and a diversity of ideas. They are simply ideologically biased in their rampant censorship, which is why alternative platforms, heavier regulation, and a renewal of antitrust law are needed.
Who told you that? I know of many cases where accounts were silenced for none of these reasons. Most recently, the Project Veritas guy was banned from Twitter for his "fake account" after posting a video of a CNN technical director describing CNN's role in ousting Trump[0].
AFAIK the people that hold the negative connotations are not the people that were on the platform in the first place. I think people who were will likely flock back to it.
Apple has always been quite heavy-handed with the developers on its platforms regarding user experiences.
Its users generally appear to consider that a feature, not a bug, so I don't anticipate any changes. People have been predicting users walking away from Apple's version of Omelas for over a quarter-century; hasn't happened yet.
Apple is making the right call here. They are framing their actions as something they had to do in the moment due to the potential for violence, but don't want it to be permanent.
I hope Facebook, YouTube, and Twitter unban Donald Trump next. None of the justifications they provided in January exist any longer. If they don't, I hope decentralized alternatives like BitClout start to take off.
Would be great if the internet and social media were treated like utilities. You could then prosecute people legally according to state laws for doing things like inciting violence.
Completely agree. Simply pointing to someone actually tracking censorship activities. It would be nice to see an inclusive set so we the humans could make our own decisions.
It is a problem for everyone. Unfortunately just like censorship and political repression was used by the right in the past when they were dominant (https://en.wikipedia.org/wiki/McCarthyism) it is now used by the left. There are almost no organizations standing up for those civil liberties now. The ACLU is abandoning this mission (https://reason.com/2018/06/21/aclu-leaked-memo-free-speech/) as they have become explicitly leftist rather than neutrally focused on fundamental civil liberties, so new institutions are needed. Right now, the side with the most motivation to champion this cause is the American right, which I think is why we're seeing sites like this.