I'm honestly convinced that at some point this kind of big-tech social media witch burning will eventually lead to private lawsuits aimed at non-combatants like Signal and Wickr. Please leave a comment if you disagree; downvotes don't foster discussion.
Nope. Huge swaths of the Democratic Party are opposed to these SV companies because they see Twitter, FB, etc. as having played a huge part both in Trump gaining the influence he did and in conspiracy nutjobs evangelizing their craziness.
I expect pushes for regulation, just a different sort than Trump would've pushed for.
Here we have people dying because a President who was afraid of losing spent months preemptively setting up a fraud excuse. An excuse that was enabled and amplified by social media platforms, which also helped people spread conspiracies claiming the global pandemic doesn't exist and thus that no change in behavior is needed.
Even if you end up saying Trump himself shouldn't have been banned, you have a huge echo-chamber problem that has now passed the point where we can dismiss it with stuff like "they just need to feel heard." Lots of people listened, it led to lots of violence, and we have to ask: is the way we're currently (not) regulating social media sufficient?
All of these nutjob theories have gotten MORE prevalent in the past 4 years DESPITE their candidate winning the 2016 election and DESPITE their candidate completely failing to expose any national pedophile conspiracy or any of the other nonsense they spout... The ultimate influence of the big social media platforms has been anything but left-leaning.
You packed a lot into this one sentence.
Please share your evidence.
It depends on which type of Republicans; right now the party looks de facto split in two. Today's vote suggests they still think the Trump side will win out in the medium to long term; otherwise I can't explain why they chose to face the media storm directly and voted against impeachment.
If a military officer commits a felony offense the week before they're discharged, I don't think the military would (or should) say, "Well, you're so close to leaving, so we'll still give you an honorable discharge and a pension."
I feel like the same concept applies here, otherwise we're opening the door to the president being able to commit impeachable offenses with impunity as long as they're close to the finish line.
Also, not a lawyer, but I feel like a conviction after impeachment would make winning subsequent civil suits a slam dunk.
I doubt it. Congress can impeach for almost anything - it doesn't have to be anything illegal - they just need to have enough votes for it.
I know that part is true. However, and again I'm not a lawyer, I think inciting an insurrection is actually illegal?
It is, but even if they use the name of a crime or tort, Congress isn’t bound to apply the same definitions or rules of evidence as a court of law, and the decisions of the Senate in an impeachment trial can’t be entered as evidence of the facts found by the Senate in a civil case the way a criminal conviction can.
So, while the trial (regardless of outcome) may generate a potential trove of free-for-others-to-use evidence, I don’t think a Senate conviction (or acquittal, for that matter) has much direct bearing on future civil litigation.
No, unlike a criminal conviction, I don’t think it would.
OTOH, any evidence gathered and publicly presented against Trump in the impeachment trial could be a valuable gift to those with substantially-related civil claims.
Impeachment may prevent him from running again: https://www.reuters.com/article/us-usa-trump-impeachment-exp...
Doing so bans the person from holding federal office forever. In other words he couldn't run again in 2024.
Not inherently, but presumably (as it is the only remedy left, and apparently requires only a majority vote after conviction) the Senate would adopt the forward ban allowed by the Constitution if it convicted.
Trump has already been impeached the second time (it didn’t happen after he left office.)
Conviction after he left office would be limited to a lifetime bar on future office (assuming it is legal at all, but I suspect that were it challenged it would be found to be), but the main purpose would arguably not be the actual disability to Trump, but drawing a line of accountability for Presidents in general. (Politically, it would also be an opportunity for some Republicans to concretely distance themselves from Trump, which they may see as beneficial for future elections.)
Personal fear of the people whose affection for Trump led them to erect a gallows and storm the Capitol with weapons, restraints, etc., screaming “Hang Mike Pence!” when Pence merely wouldn’t steal an election for him could also be a factor.
While I think it would be cowardly to withhold action on that basis, I don’t think anyone in their position could fail to have significant fear that betraying the GOP base by supporting impeachment would be painting a giant target on themselves for violent domestic extremists, who are often even more hostile to betrayal from “their side” than they are to the other side (to whom they are plenty hostile to start with.)
There are really only two things that have reliably gotten people booted from any of these platforms:
* Appeals to violence
* Overt KKK-grade bigotry
I've been watching the "alternative" platforms for the last couple years. It got boring last year when it was looking increasingly like one of the major two, the one accurately described as "one of the less popular Mastodon instances", was simply going to fold --- the bot posts were outnumbering the real ones. That's because all the conservatives were on Twitter and Facebook, and they still are.
Even though I find that it's rare for folks on the left to understand what's happening on the right, I'm surprised that even you are so misinformed about what gets people booted. Appeals to violence and "KKK-grade bigotry" are not what these complaints are about, and don't even approach the correct universe in describing the types of folks getting moderated (although hate speech should be allowed IMO). Not only that, but twitter is chock full of appeals to violence, which, as long as it's directed at conservatives, has remarkable staying power compared to anything directed elsewhere.
It's worth watching some of the opposing folks that you've been taught to hate once in a while, even if just to keep abreast of the concerns being discussed. Otherwise, all you get are the distorted caricatures of the fringes.
Mind you, historically most people didn't have access to a platform anyway. And the record of at least some players, like AWS, is that their bar for a ban is pretty high. But it certainly suggests that big tech can decide to cut off some slices of discourse if they feel it's convenient.
And so on.
This is not about disappearing people. Or businesses. This is about disappearing politicians.
The real game here is that the mainstream media - especially on the far right - have suddenly discovered they're no longer owners of the narrative space, and their game tokens can be removed without notice.
Of course they're unhappy about this.
So if we're going to talk about concentrated media influence, we should be questioning the international reach and centralised narrative control of the Fox network (and others) at least as much as Twitter's or Amazon's ability to ban someone who was literally planning an insurrection in the US.
There is no free speech issue here, because the kind of "free" political speech we're talking about is very, very expensive, and only available to a tiny selection of headline operators.
Everything else in the social media space is noise farmed for engagement, and is unlikely to unduly trouble anyone who matters.
Yes, we cannot ignore the system effects of shared ownership and control. Many economists understand the concept of market power. I'm not so sure that laws and cases around regulating speech have adequately explored the analogous ideas.
I'm not following. Could you say it in a different way or perhaps with more words?
What far-right mainstream media? In any case, why should any media own the narrative?
Your decision that Trump "Plan[ned] an insurrection" isn't well supported - he's accused of inciting a crowd. But why couldn't any similar narrative escalate (or be escalated) when all narratives are monopolised by left-associated media?
But now the companies that make printing presses won't sell to you. The machine shops that make the parts to build printing presses also won't work with you. Maybe you're supposed to mine your own ore and refine your own metal to build your own printing press?
In the long run, we're going to have to find a better way to deal with odious speech than by simply cutting people off and shutting them completely out of the public sphere. For one thing, the current approach requires us to trust that the gatekeepers who control access will always be fair and just and only shut out genuinely bad actors. There needs to be a process where anyone who wants to can still see and inspect the banned speech, to audit whether it was really worth banning or not.
How is this different from what we have today? Who decides what's really worth banning? Why should that person/entity and not another one?
There's no end to this; there's always going to be something that is just completely arbitrary. IMO the only "solution" is to have a truly decentralized platform.
I'm more than happy for us to decide that some parts of the internet should be public spaces, paid for out of general revenue and provided for the common good by the government. Internet access should be a basic human right.
But frankly, once you have IP-packet routing functionality, that's as much as I think you should be offered. No one else has any obligation to associate with you.
It’s especially problematic if these de facto monopolies take sides in elections.
These "platforms" are just large message boards.
I see it as more like the telephone: customers that want to receive calls (i.e. TCP connections) sign up for the service and receive a number (i.e. an IP address) that others can then use to contact them. And some of these customers choose to have their number published in a public directory of subscribers (i.e. DNS) to make it more convenient for others to find them.
At the most fundamental level, the Internet is nothing more than a vast and automated network for establishing point-to-point (i.e. private) calls between two parties.
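The phone-directory analogy above can be sketched in a few lines of Python. The name "alice.example" and the in-memory `directory` dict are made up for illustration; a real lookup would go through DNS (e.g. `socket.getaddrinfo`), but the shape of the interaction is the same: sign up for a number, publish it, and others dial it to open a point-to-point connection.

```python
import socket
import threading

# "Sign up for service": bind a listening socket. The OS-assigned
# (host, port) pair plays the role of a phone number.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 -> OS picks a free port
srv.listen(1)

# "Publish your number in the directory": a stand-in for DNS.
directory = {"alice.example": srv.getsockname()}

received = []

def answer():
    # The subscriber picks up the incoming call and listens.
    conn, _ = srv.accept()
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=answer)
t.start()

# "Place a call": look up the name, then open a private,
# point-to-point TCP connection to the published number.
caller = socket.create_connection(directory["alice.example"])
caller.sendall(b"hello")
caller.close()
t.join()
srv.close()
```

After this runs, `received` holds the bytes the caller sent; nothing in between is anything more than automated switching, which is the point of the analogy.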
I am more worried about where we are heading as a society. Why the heck is a regular joe (me) lying awake at night over armed unrest in the most powerful country in the world? Just beyond depressing.
What erosion of civil discourse are you talking about? 10 years ago, half or more of Americans were against gay marriage; today 30% of the US is against it, except nowadays they are also called neo-Nazis and are being deplatformed from major platforms. Do you think 50 years ago there could be civil discourse on legalizing rape or celebrating the Holocaust? No, these people would get the same treatment. Or, for a non-imaginary example, just remember McCarthyism.
What changed is the level of discrepancy of Overton windows between rural and urban America. That’s all.
> platforms being used to organize illegal activities
Hah, but when they were used to organize illegal activities in countries like Egypt or Belarus, it was not an issue.
> proliferation of misinformation (during a pandemic)
If you think that an average citizen is too stupid to think for themselves, we might as well just abolish voting rights. And surely there is a better way to combat misinformation than giving censorship powers to private technocrats.
I sincerely have no such intent. Apologies if it came out that way.
>> erosion of civil discourse
Maybe it's my naivete, but it seems that online discussions are much less nuanced compared to the face-to-face discussions we had when growing up. Sure, I disagreed with relatives, but it certainly feels a lot more intense nowadays (quite possibly it's just the nature of the medium).
>>level of discrepancy of Overton windows between rural and urban America
Do you have any recommendation on how to lessen this gap? I live in a major metropolis. But I keep getting a feeling that such a widening chasm can not be good for the society.
How often did you talk with complete strangers about politics when you were growing up? Are you now having more intense discussions with your relatives? Do you think it can be attributed to Facebook?
> Do you have any recommendation on how to lessen this gap? I live in a major metropolis. But I keep getting a feeling that such a widening chasm can not be good for the society.
Uniform education with officially supported views on the main topics would bridge the gap, and so would anti-free-speech laws. Neither European countries nor China have such divergence of opinions. All of them police speech and ban certain viewpoints. Of course people from different strata of society will always have different opinions: small business owners, industrial workers, intelligentsia, government workers, and non-working people all have different interests. But common national viewpoints keep the fabric of society intact.
Does it really matter whether it was Twitter, Facebook, a newspaper, a radio station, a TV station, or really anything that can communicate?
If the same events (invasion of the Capitol building, 5 people dead) had happened 100 years ago because of an op-ed piece in the NYTimes (printed on paper, of course), would they have caused the same "deplatforming"?
I can foresee a lot of the people who were on Parler before going to Signal. I am sure there is close to a 100% chance that there will be messages advocating violence, like death threats, on there. Some enterprising investigative journalist will gain access to a private group and publish screenshots with vile threats. Of course, Signal is unable to do any moderation, and the calls will come for it to be de-platformed as well: its hosting cut, its domain registration removed, its GitHub account locked, etc.
Greenwald might be a disillusioned Leftist (not liberal), but then, the original neoconservatives were disillusioned Trotskyites.
Disillusionment is what spawns radical conversion, and radical converts are often the least open-minded, most fact-averse ideological crusaders.
> if we’re going to, in the secret little court in each of our minds, “hold Parler liable for its moderation decisions” despite the fact that Section 230 won’t allow that in a real court,
Other than civil liability of the types which are specifically dependent on status as a “publisher or speaker”, Section 230 doesn’t prevent them from being held liable.
Criminal liability, or civil liability not dependent on status as a publisher (though some courts have, controversially, extended Section 230 to liability as a “distributor”) is generally outside of the scope of Section 230 protection.
You're right about the scope of section 230, and given that Parler is not considered a publisher or speaker (which I think is appropriate), there is currently nothing else to consider holding them liable for, so I was using shorthand. I'm actually mostly talking about each person's individual opinion. Personally "holding them liable" for anything at this point depends on a guilt-by-association argument.
Generally, relieving them of being found to be the “publisher or speaker” of content they distribute would leave them as a distributor, having civil liability for content where they have actual (or constructive) knowledge of unlawful content that they distribute, rather than liability independent of their knowledge of the content based on the presumption of control that goes with status as a publisher.
But the free browsers will be gone.
Weirdly, Fox News recently exited primetime news. Their 7 PM news hour is being replaced with another opinion show. This was before the riot; it was a decision based on ratings. It's a concern in that it disconnects a sizable fraction of the US population from any source of information about real-world events. Fox will still have a hard news show, but at 3 PM.
I'd argue that NYT has fallen into the latter, with dissenting voices stifled internally, example Bari Weiss:
If that was the “one time there was a dissenting opinion” in the NY Times, what do you call Ross Douthat’s and Bret Stephens’s columns about that incident that the NY Times also ran?
The NYTimes op-ed section has plenty of dissenting opinions and/or conservatives... just not ones calling for violence.
That aside, he's a public representative of the highest order (a Senator). He needs to be heard whether we like it or not. I've not seen any public representative being censored in the country I'm from, including opinion pieces from extremist parliament members on both sides. This is clearly a US thing.
And to achieve that you're going to force businesses to host content that hurts their bottom line (the loss of income from the advertisers that will pull out from working with a company that hosts such content).
> This is clearly a US thing.
In the country where you live all news organizations are forced to publish everything a politician says?
NYT makes more money from subscriptions than from ads (60% vs 30%), and as a (formerly) paying subscriber I'd like to think it lends them some independence. That aside, many other news sources would find Sen. Cotton's opinions pretty mainstream (certainly these opinions may have even had majority across the general public) and publish such views just fine, without advertisers pulling out.
>> In the country where you live all news organizations are forced to publish everything a politician says?
Of course not. They publish of their own accord, and extremists get outsized exposure because they are interesting to read (just like clickbait, right?). It's how I'd expect most news organizations to work, you know: reporting on the unusual and unexpected.
For me, NYT stopped being a news organization the moment it decided some opinions are forbidden from being published: not because they are fringe (a significant part of the American public agreed with Cotton), not because they come from fringe sources (he's a Senator after all), but because they are contrarian.
Cotton was just 7 months too early.
> an overwhelming show of force to disperse, detain and ultimately deter lawbreakers
Also, Cotton pre-emptively disagrees with your characterisation:
> This [law] doesn’t amount to “martial law” or the end of democracy, as some excitable critics, ignorant of both the law and our history, have comically suggested
> In the United States, online speech is governed by Section 230 of the Communications Decency Act, a piece of legislation passed in 1996 that grants Internet companies immunity from liability for user-generated content. Most public argument about moderation elides the fact that Section 230 was intended to encourage tech companies to cull and restrict content. But moderation is complex and costly, and it is inherently political. Most companies have developed policies that are reactive rather than proactive. Many of the largest digital platforms have terms-of-service agreements that are constantly evolving and content policies that are enforced unevenly and in self-contradictory ways.
I highly recommend reading from this point forward.
Actually, any digital platform with user-generated content should be regulated in a way that frees platforms both from pressure from the government (Section 230) and from social pressure to ban a "bad guy" who hasn't been convicted of anything and will have no recourse or penalty expiry.
Section 230 does free platforms from pressure from the government. A service provider that is responsible for the content posted by users would have much greater exposure and would be far more likely to censor. People don't understand that 230 is mostly enabling free speech, not stifling it.
The anti-230 crowd is like a dog chasing a car. It's all fun and games to chase that car, but what do you think is going to happen when you get what you're chasing?
I think what should be sought is another "Section 231" that requires banning users from platforms to go through the justice system, just as if you wanted to ban someone from the town square.
This gives platforms a scapegoat ("the government won't let me") while still allowing the platform to do things like not listing "bad" people in the "discover" feed and building better mute/ignore features.
Unilateral and permanent banning just creates more problems
These platforms aren't remotely the town square. The comparison has been made to them being the shopping mall, and I think it's apt.
> Unilateral and permanent banning just creates more problems
Creating government red tape when swift action is needed to remove users who violate the rights of others or harm others is not the answer. The latitude service providers are given to remove objectionable content enables the internet to work quite well. From what I've seen, the biggest complaints of censorship are defending the indefensible, and for support of that claim I direct your attention to the evidence Amazon gathered in response to Parler's lawsuit.
> This gives platforms a scapegoat, "the government won't let me", but still allows the platform to do things like not list "bad" people in the "discover" feed and make better mute/ignore features
If a shop that you're forced to allow in your shopping mall is doing money laundering and arms trafficking, I don't think the answer is 'just remove them from your mall's map while you file a lawsuit against them'.
I’m slowly coming around to the idea that getting rid of section 230 would be a good thing overall. Social media platforms shouldn’t be amplifying ideas they think are odious. Why the heck would anybody do that? For money? Would you be willing to spread propaganda that you thought was terrible for money? I sure wouldn’t.
I like to call them Little Sister because of how they name their seemingly innocuous digital assistants.
The rhetoric is really overblown... these companies waited years, until the heads of state were fully under attack, before they did much of anything.
This isn’t even about free speech. People can still say what they want... they just don’t have an absolute right to broadcast it on every major platform.
If I had to pick though, yes I would say I'm more worried about people who think the state of affairs this article is discussing is completely fine, as opposed to idiots acting out after being cooped up at home for a year. The latter is a concerning problem but one much more easily dealt with, if for no other reason than that a large majority of people agree that it is actually a problem.
Cory Doctorow? Ridley Scott, with Blade Runner and the Alien series? Paul Verhoeven, with RoboCop? Frank Herbert, with CHOAM?
However, there are over 100 countries with democracies, or at least elections, in other languages and with different cultures. How can social media platforms police content globally? It's a mess.
> To cut a politician and affiliated organizations off from payment processors, Web-hosting services, and e-mail providers is to halt them at the level of infrastructure. It is a different game than the moderation of user-generated content, and raises different questions. Advocates for net neutrality have long argued that Internet service providers should not determine who their users are, within the bounds of legality; in this view, Internet service providers are akin to public utilities. For Trump, arguably, it’s as though the electric company has permanently turned off the lights.
I was concerned when Twitter and Facebook and other social media platforms started the mass deactivation of accounts they no longer wanted to host, but in the end I don't think these actions are wholly unjustified: nobody should be entitled to a megaphone for their speech. And as many have said, "if you don't like it go start your own social network" is an acceptable response.
The real issue is at the next layer down, the service providers which collectively operate what we know as "the Internet": what if you can't go start your own social network because none of the server hosts or domain registrars will provide their services to you?
But do we really want to mandate (and regulate) that hosting providers must provide service to anyone that asks, no matter how undesirable or troublesome they might be? Should there be a "provider of last resort" for those that are unable to find a willing host? I'd like to see more discussion and debate on this topic.
If you say something on the Internet that angers a mob, you are no more directly harming your ISP than you are directly harming your electric company. Why should one of them be able to cut you off on a whim, and the other cannot? If it is because one of them is more indirectly harmed, such as by the mob attacking them, then why do we tolerate such attacks on one but not the other?
Fundamentally, I think that a service provider, of any service, should be expected to provide their service to anyone willing to pay for it, as long as that customer does not actively and directly harm the service provider. Mob actions against the provider are a separate matter, and should be opposed, whether they are calling for all ISPs to disconnect the offender or for all grocers to stop selling them food.
Further, I think that even at the platform level, permabans should not be allowed (unless perhaps with a court judgement after multiple offenses). Temporary suspension for TOS violations? Sure! But there should always be a clear and accessible process for appeal, and for the offender to redeem themselves. Considering what a large and important part of society communication platforms have become, a lifetime ban without trial or recourse is unacceptable and unconscionable.
At the end of the day these companies are businesses that cater to their customers. For adtech that's advertisers. The reason a lot of non-illegal content is removed from platforms is because it can't be monetized. When someone incites violence no advertiser wants to have their ads show up along side their clip, or even simply on the platform that hosts such content. So the platforms will listen to their customers and make sure to remove such content, simple.
If you want to force platforms to host content that hurts their business you should also propose how they will be compensated for the loss of revenue from doing so, especially if large advertisers start pulling out from the entire platform. Are you going to use tax money to compensate these businesses so that they can't remove that content?
The people who marched on the capitol did it because their reality tells them that their country is being stolen, by Democrats. People will actively oppose the current administration, because they believe this is illegitimate.
Trump, and the info bubble he inflates, collectively believe that American elections were a farce. Trump holds a 40% approval rating, down from 42% on Jan 1st. His party and media enablers repeat and reinforce this idea. If the coup attempt hadn't happened, they would have continued to argue the point.
Parler? For all its claims to free speech, it enforced ideological conformity, as do a lot of conservative boards. Their narrative is that you get banned for being a conservative. What they don't mention is that people get banned for saying "COVID is a hoax", "Trump won", and the perennial "some communities just have problems". Death threats and hate speech are all too common.
That is how far things had gotten before tech firms acted. If anything, they acted late, after the costs of inaction were obvious and Trump was already leaving the stage.
If you are uncomfortable with their power, know that research from 2016 shows that conspiracy content gets stronger the longer it stays available and gets repeated. A fact absolutely not lost on tech firms.
Tech firms kept material they knew was dangerous up longer because they did not know how to handle the repercussions of doing the scientifically sound thing when people think it's the wrong action.
As an international spectator of USA politics, it was worrying how easily corporations took a side and disappeared a couple of applications and banned a group of people.
Imagine if this had happened during, say, Chile's removal of Pinochet, or other South American revolutions where the people tried to overthrow dictatorships. Now imagine that US government interests, or the interests of any large nation (China, or even Russia), could flex their muscle to entice Amazon, Twitter, Facebook, and other large companies to block any kind of dissident interaction.
Edit to add: anyone going to answer or just downvote and ignore the question?
Don't get me wrong, I do want to be defended from violence! But not at all costs. I don't want people to be tortured in the name of defending me, and I don't want every conversation on the internet to be monitored and censored in the name of defending me. That's too much.
I'd like to hear about some testable predictions from the HN community. What do you think will happen and with what odds? And, to put a finer point on it, how much would you be willing to wager on your prediction if you were given $1,000 in credits for a prediction market (this is hypothetical, but pretend like it is real money if you can).
I don't see this happening anytime soon; worker-related collective action has been pretty much dead in the West for at least the past 15-20 years, even more so in an industry like ours with deep ideological roots in libertarianism. Even if we were to get past the ingrained ideological biases, the money is too good for us to risk it.
> the emergence of new business structures for communications and media platforms,
Not sure what the author means by that. Will there be a new Trump-loving Twitter or FB? Probably not, at least not at that scale, and it will be all for the best, because we need to see behemoths like Twitter and FB perish (in Schumpeter's "creative destruction"-sense, before anyone starts flagging this post for inciting violence) and it's time we went back to the forum-like internet communities popular until the late 2000s (of which this website is one of the last active survivors).
* Companies that could be held liable enforce Terms of Service after warning offenders
* Senators draft legislation to "prohibit social media censorship." (https://www.miamiherald.com/news/politics-government/state-p...)
Ironically, if legislation like that was passed, Parler would be facing the same issues as any online forum that moderates content: https://www.techdirt.com/articles/20200627/23551144803/as-pr...
But they did make sure only the communist-aligned outlets had unlimited access to paper.
Like so many censorship-happy people today, the USSR could also say that the government wasn't censoring speech. They were only controlling paper.
Had it gone worse, someone would have been killed or hanged on live TV. All of that is on him.
Maybe all the platforms did all call each other, but there is a moment where you just have to take action. And that action was to stop him from killing more people. Stop him from destroying more things.
It is not free speech ... it is at a cost. Those lives. He is a monster and does not need a platform. This is not a slippery slope. He had to be stopped.
Maybe it leads to more choices and actions and discussions as to why he had a voice on those platforms at all. In either case the right choice was made. Stop him.