It’s too easy to walk down a slippery slope where suddenly these sites have to regulate in accordance with whatever is the popular opinion of the day.
What about Paul Nehlen, a political candidate who (before he was permabanned) frequently railed on Twitter against the "Jewish media" and mused about the "JQ", which stands for Jewish Question? Who called Jewish critics "shekels-for-hire" and used the phrase "The goyim know!" as an insinuation of their involvement in a supposed conspiracy? Who said he wouldn't mind "leading a million Robert Bowers [the Pittsburgh synagogue shooter] to the promised land"?
Is that something Twitter is within their rights to ban for, in your view? If so, would that change if he were elected?
Social media is not a public square where people can plant their soapbox; it is a set of privately owned servers where people can persist small amounts of data that comply with a set of rules. You break the rules, you get kicked off the private property.
Except there isn't really an alternative to Twitter - they have monopolised that "short catchy text stream" market. Other services, like Gab, exist - but when services like Cloudflare readily pull support for less mainstream alternatives, it's hard to say that if you don't like Twitter you can go somewhere else.
For better or worse, Facebook and Twitter consumed blogs, and there are no real alternatives.
If they were a TV channel or programme then they'd have no bother refusing you. And that's the nub: are they a platform or are they a publisher? If they're the former and you allow them to discriminate, then what's your argument against other businesses discriminating? Can't see that ending well.
If, however, we accept that they're publishers - which in my view they are, as they edit what can be seen in several ways - then by all means let them discriminate but they must follow the rules of other publishing media companies.
Would retroactive application of rules be fine, or new rules created and then retroactively applied?
Would it be okay if they refused simply because of who you were? Or what you did elsewhere? These kind of questions seem to be more applicable if we're comparing to Twitter et al.
But if I'm going to answer: I think legal issues are grey, and that can be tough to accept as a programmer dealing in binary. There aren't many absolute rules, and sometimes a rule works well at one scale but not another.
Part of the reason we should avoid monopolies is so people and governments can leave companies alone and those who disagree with their rules can go elsewhere because they have options. But this is in complete opposition to products and platforms that flourish because of network effects. By its very nature, Twitter wants to be the only game in town. I think that changes things and opens them up to scrutiny because we have fewer realistic options.
So as a non-answer, if we had many viable social networks or TV channels to choose from I'd be happy with them arbitrarily deciding who gets to play in their playground for any or no reason (as long as it's only in that one place, and there's no secret cabal or cartel). Protected reasons and classes aside. Keeping in mind, many times rules aren't rules, they're flagged as guidelines and subject to change for subjective reasons.
But I don't think that really reflects what is going on in modern social media networks. They are similar but distinct, each is used for different types of speech, and the key players dominate their niche. Users can choose to some degree what they prefer to receive. Networks also choose for the users via their feed algorithms.
So they are already taking on the role of arbiter and have been for some time. That sounds more like a publisher to me, as you say. They don't change the content of each post - but they do change the collection of content you get presented, just like a magazine editor rejecting articles and putting together this month's issue. Except every single article is tagged opinion.
On top of that, I think we are just running up against another "too big to fail" situation. They're our only platform to communicate this way, so we don't like the idea of them suppressing speech, and speech can be inflammatory, inciting, or libelous. That's not really incompatible with free speech in other areas, and I think once a platform reaches a certain size we should treat it as a public arena. This would be consistent with not allowing them to remove posts for arbitrary reasons, but allowing them to do it if they think it falls under one of those categories.
Any insightful comment, even "just" descriptive, deserves a response, nay, a challenge to force the speaker to tease out more! :)
And I'm glad I did because your opinion is spot on, in my opinion.
But I don't think it's needed in the argument. As you say, the interesting distinction is around platform versus publisher and any legal protections you gain from posing as the former.
What rules are those? Can you give an example of such a rule and what it would look like if it were applied to twitter?
> Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do.
In other words, without that protection they'd be liable for what is published on their site, whether by them or a third party.
Platform is not a legal term, it's a colloquial distinction from publisher, which in this case refers to either the "interactive computer service" (e.g. Twitter) or the third party (e.g. the tweeter). Interactive computer service isn't as catchy.
> Can you give an example of such a rule and what it would look like if it were applied to twitter?
My first guess would be a slew of defamation cases, probably by touchy celebrities, just as with other publishers like tabloid newspapers.
Ok but clearly that is an impossible standard for a website where users are able to post content. In effect, it seems you're suggesting Twitter must choose between moderating their platform and being shut down. Would you say that fairly describes your position or am I misinterpreting you?
I'm able to post comments on many websites that are counted as publishers - try any online newspaper. I was able to post on countless forums long before Twitter appeared and still can.
> it seems you're suggesting Twitter must choose between moderating their platform and being shut down
That is technically known as a false dilemma.
- They can remove shadowbanning
- They can remove algorithmic manipulation of any and all kinds not instituted by the user
- They can adhere to the rules they've set up, no retroactive action, no action for things off of the "platform", no new rules that just happen to affect their political opponents
- They can refrain from reframing content
- They can decide to become a smaller company (maybe more profitable, at last; focus can help, niches help, size isn't everything)
- They can hire more moderators
- They can make the algorithmic tools they use available to users - if you don't want to read "deplorable" content then why not give you the power to make that choice, and others the power to continue to read it?
- They can make the rules clearer, tighter, more accessible.
My Lord, there are so many choices available and they're not all mutually exclusive, and this isn't even an exhaustive list.
Right... Because they have section 230 protection. Forums and newspaper websites are not "counted as publishers" with respect to comments you make on the site exactly the same way twitter isn't "counted as a publisher" for your tweets.
> They can remove shadowbanning - They can remove algorithmic manipulation, of all and any kinds not instituted by the user - They can adhere to the rules they've set up, no retroactive action...
Why is it a false dilemma? You've offered up an arbitrary list of product and business suggestions based on your own opinions about how twitter should be run, but nothing you've said seems to contradict the conclusion that twitter should lose the legal protection that allows them to keep the site open if they moderate the site.
Let's use a clear example. Do you think twitter should be shut down because of decisions like hiding Trump's tweet?
That's a good point. You're still begging the question with regards to an "impossible standard" so… I'm going to shrug until you come up with your own evidence for that.
> Why is it a false dilemma?
You (repeatedly) give two options when there are more. "We must preserve the status quo or we die!" is not a compelling argument to anyone with an ounce of imagination.
> arbitrary list
No, they're not "arbitrary", and I'm beginning to lose my patience with you.
> based on your own opinions
This is my account, I write my own opinions using it.
> that allows them to keep the site open
Begging the question. The site could be kept open by following my "arbitrary list" because they would then retain protection even under a narrower interpretation of the law. Hence, not arbitrary.
> Let's use a clear example. Do you think twitter should be shut down because of decisions like hiding Trump's tweet?
I don't think Twitter should be shut down or would be shut down, regardless of whether they retained protection, and loaded questions that are entirely facile are where I draw the line.
If twitter becomes legally responsible for anything posted by the millions of users that publish content to the site then it's obviously impossible for them to keep the site running, the logic is very clear.
> You (repeatedly) give two options when there are more
No, there are only two, either the site has section 230 protection or it doesn't, there is no in-between state.
> We must preserve the status quo or we die!
An impressive strawman for someone with such an obsession with formal fallacy labels.
> The site could be kept open by following my "arbitrary list" because they would then retain protection even under a narrower interpetation of the law. Hence, not arbitrary.
Your list of business suggestions is just ideas you made up; they have no legal meaning, hence arbitrary. Business decisions like "shadowbanning", "retroactive action", "reframing content" and even explicit partisan bias are 100% legal, and Twitter is within their rights to operate their business in such a fashion.
> I don't think Twitter should be shut down or would be shut down
Yet in your own words:
> The site could be kept open by following my "arbitrary list"
So in other words, the site shouldn't be kept open if they don't follow your legally meaningless suggestions.
> and loaded questions that are entirely facile are where I draw the line.
lol whatever, if you're so intellectually dishonest that you won't admit to the implied conclusions of your own argument then I'm wasting my time anyway.
Does that describe your position?
You or I should be afraid of ruinous lawsuits because even one, one without any merit, can cost us a lot. We do not have the resources of Twitter, we do not have a permanent legal staff, we do not have a pit of money, we do not have wealthy backers, we do not have the ear of powerful people. They can fight a suit as far as it can go and actually create precedent in higher courts that you or I could never afford to reach. They can even face down a government lawsuit.
If lawsuits were spurious they'd soon put a stop to them.
If you're unable to maintain a respectful conversation then perhaps Twitter is a better place for you to spend your time.
> If they weren't immune from lawsuits for things as simple as libel by their users there would be a 1000 meritorious lawsuits per hour
They could and would be immune if they did not editorialise. That's the whole point.
> What is the desired end result.
I don't have a desired end result because I'm not planning some utopian outcome. Let people be free to express themselves unencumbered and without meddling for overt political outcomes or some form of misplaced paternalism, that's it.
If that's not illuminating enough for you then you have my permission to reply to someone else.
> To be clear, what most people pushing this position want is for Twitter to be so afraid of ruinous lawsuits that they are afraid to ban people who the rest of us find deplorable.
And this absolutely IS your position, albeit you would say for more nuanced moral reasons.
From my perspective it's a win-win. On the one hand I get to see deplorables banned from Twitter; on the other, this ought to be great motivation for decentralized tools, which are the only thing that can possibly be actually censorship-proof.
I fully believe that your Facebook and Twitter ought to be running on a $100 box on your desk where nobody can censor it with a click. Trying to fix, after the fact, a situation where your ability to communicate politically depends on the courtesy of another is a losing proposition.
Nobody is going to apply the CDA in the way that the president wants, especially not before November, and the will to protect conservative voices will disintegrate once they have lost the presidency and the Senate as well.
If you care about truly open communication donate money to people making decentralized tools instead of waiting for the injustice department to do anything useful.
EDIT: What qualified as "equal" and "all sides of a major issue" were up to the discretion of the FCC, so enforcement was fairly ad hoc
Broadcast spectrum is a limited resource. Said resource is owned by the public but since it is limited, you need a license to make use of it. In order to retain a license, you need to operate for the public's interest, convenience, and necessity.
It's also the basic reasoning behind such things as content restrictions and the system by which you can complain about something that is broadcast. If enough people complain about something you broadcast, it can be argued that you are not serving the public's interest and so be fined or lose your license to broadcast.
It's why cable programming is more restricted by a network's desire to avoid pushback from advertisers or cable carriers due to complaints (rather than anything under the jurisdiction of the FCC). They aren't required to serve the public interest in the same way, but market forces apply some of the same pressures.
In this case, I see social media platforms as being more like cable networks than broadcast networks. You won't see specific government content restrictions on most (legal) content hosted on these services because they don't require a license. However, they still face backlash if they piss off enough customers and/or advertisers.
Many tech companies are simply utilities now. They should not be allowed to refuse service in the same way as electric companies are not allowed to refuse you service - except in extreme circumstances.
> The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses to both present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced. The FCC eliminated the policy in 1987 and removed the rule that implemented the policy from the Federal Register in August 2011.
Wasn't that the idea of Public Access Television?
I could totally imagine a Monty Python sketch depicting Hitler in an absurd/comedic situation being rebroadcast on US public TV without issue. I think the previous comment was referring to a hypothetical pro-Nazi program.
We do that, and have since 1976. Any cable provider in a city with more than 3500 residents is required to put up 4 public access channels and studios and allow the public to use them and broadcast on them.
> Social networks are private companies built for profit.
To what extent do current laws provide exclusions for them and their business model? Do we deserve something in return for that?
> Hell, you can't even post a nipple on Facebook.
Precedent exists here as well. Free public speech does not include broadcasting of materials of a prurient interest at any time. This is separate from the "seven dirty words" which are generally excluded outside of the 'watershed' which is typically 10pm to 6am in most cities.
I'm fairly libertarian, so I'm uncomfortable with the question, but there has to be some objective method of determining the extent to which "internet domains" are to be considered "leased from the public" the way we do broadcast licenses.
We've already seen certain websites with particularly vile, but legally protected, points of view have their domains taken away from them. If speech can be so easily marginalized, doesn't the public have some interest in claiming domain over some of these seemingly "private" systems?
Well, by that logic, popular politicians are good for FB's business and hence must not be censored.
How did business fare after Germany lost ww2?
Of course I could go cut the data lines running through the right-of-way on my property. I don’t agree with how these big data companies are censoring, so maybe I will censor them.
If you don't support lowering Walmart's taxes, along with all other grocery stores, then why shouldn't they be allowed to permanently ban you from all places to buy food, forever?
They can't discriminate based on protected traits, but political opinions aren't really protected on a federal level.
It might not be a good business idea, but I think it's legal (correct me if I'm off base on my legal positions).
For every single grocery store in the world to starve the population of people who do not vote for decreasing their taxes?
No, I am pretty sure that society would do something about that, if this happened.
Not necessarily, no. Many grocery stores have a bit of a monopoly in certain areas. It may not be possible for competitors to replace them instantly.
And a lot of people would starve before perfect competition, which doesn't exist, came around to replace them.
Tax policy is not anywhere close to the tweets at issue, which often amount to terroristic threatening. If you started verbally abusing other customers in a grocery store, perhaps even implying you'll murder them, then yes, you'll be banned.
To the rest of your comment, are you seriously comparing tweets to food? You have a fundamental right to food, not to have your opinions transmitted by others.
At least in the US, it is the opposite.
There isn't a constitutional amendment protecting your rights regarding food. But there is a constitutional right related to your rights regarding speech.
Many people think that these rights regarding speech are pretty darn important. Possibly even the most important rights that a person can have.
So I would absolutely say that they are comparable, and that people's right regarding transmitting speech are possibly even more important than any other rights out there.
We cannot and shouldn't contemplate a solution to every feasible problem, because time is finite, and a solution that in theory results in the best outcome for problems a..z (where h..z are hypothetical and a..g are actual) may actually be a worse solution for a..g.
The analogy is ridiculous on its face. It will never become actual. In such a situation we would trivially resolve it by passing a law that food stores can't blacklist groups of people, while retaining the right to blacklist individuals for actual misbehavior. This would be somewhat limiting on the freedom of small shopkeepers to, say, throw out Nazis, but it would break the back of the "big food for lower taxes" coalition. Since no such coalition exists, this solution is, as described above, worse for the set of problems we actually have, all to solve a problem that is only hypothetical.
It would be more productive to discuss the ACTUAL issue by addressing it directly instead of trying to reason by analogy. It's not like it's a topic people on this site have trouble understanding. Going off into the weeds like this just wastes time.
Finally, as far as I'm concerned, they are welcome to ban me for my voting pattern. Within a couple of weeks I would open up a new grocery store, marketed as not caring what your political beliefs are. The thing is, in a free market (or pseudo-free market like we have) you only succeed with buy-in. Once you alienate your customers, you fail.
Well, they could just look up your social media accounts to see if you've made any comments they don't like.
Sure, that wouldn't catch everyone, but it would certainly find some, and cause a chilling effect.
> anti-trust actions.
Ahh, no it wouldn't though! Antitrust is about competition. It would not be a violation to collude for non-price-related, non-competition reasons.
> I would open up a new grocery store
Even if you happen to be a multi millionaire capable of doing so, most people aren't capable of doing this. Many people might also starve before you get around to doing this.
Also, there are many towns where such competition is not really possible, because there are only a few stores in the area.
> you fail.
You are misunderstanding how long it can take for the market to react. Market power is real, and established entities cannot be overthrown overnight.
What has Cloudflare pulled support for, except the one neonazi site that made them vow never to do it again?
I'm curious, what's the problem with that?
But Cloudflare isn't a monopoly.
Botnet operators have no trouble finding crime-friendly DDoS protected hosting, the only reason the Nazis struggle with this is lack of experience.
Changing fads tend to drift the masses to a new platform for communication every decade or two. Personally I barely use Twitter or Facebook. I much prefer forums like this one or StackOverflow, which do a better job of encouraging quality contributions.
Trump or the White House could conceivably come out with their own app as an alternative means to directly reach followers and avoid censorship. If the people who want to hear him aren't engaged enough to tap a few buttons to install it, I have trouble mustering much sympathy.
I'm an avid supporter of free speech, but I think people who place their faith in a small number of private companies to provide that are naive.
As long as it's not anonymous, you can see who is who (even under screen names) and value other speech and actions of these people accordingly. For instance, it would be easy to let anyone know that Paul Nehlen is a racist by pointing at these posts.
As a service owner, you could collapse and mark certain posts so that content offensive to most people won't jump out at them. Twitter does this.
Unfortunately, this is not always possible. First, there are legal limitations (hate speech, libel, etc). Second, the public outrage: people actively want to destroy things they find offensive and wrong, and put pressure on the service's owners. This includes its employees and shareholders.
Most people only tolerate free speech as long as it fits the confines of the Overton window. You have to have balls of steel and really good sources of money to explicitly and publicly tolerate anything beyond. Common carrier protections help here, too.
Now, the "commons" is controlled largely by private corporations, so while the government couldn't legally say "You cannot utter this speech in Times Square, go utter it in an abandoned piece of farmland in Kansas," private companies can boot you from their platform until you're basically required to set up the entire infrastructure of an ISP to have any access to the Internet, and zero access to the public conversation.
If that despicable moron somehow got elected, I think there would be a public interest in keeping his statements available to the public. If for no other reason than to maintain the public record of him making those statements as an elected official.
If President Trump told a journalist “when the looting starts, the shooting starts” and the journalist reported it, should social media censor that journalist? Of course not. Well in this case we don’t need the journalist because we can just see it ourselves. Social media can be, among many other things, a window into thoughts that public figures would not otherwise share without a significant filter in place. If elected officials actually want to live their private mental lives in that particular fishbowl, I sure as hell want to be able to keep an eye on them.
> Social media is a not a public square where people can plant their soapbox
Under California law there is an affirmative right to free speech even inside private shopping malls. So if your social media company is subject to California civil rights laws, the “private property” argument doesn’t hold up.
It held that parties could collect signatures for their political activity on mall property due to a provision in the CA state constitution; more generally, that a state constitution could offer more protections than federal law or the US Constitution provides.
It found that it didn't represent a taking because passing out leaflets didn't substantially harm the mall.
It found that the political speech was unlikely to be confused with the malls and that they could publicly disclaim such association thus it wasn't compelled speech.
We can logically assume both of the above points are congruent with Twitter, so we can conclude that if CA or any other state provides a similar protection, it ought to be applicable. What it notably did NOT say is that such speech on someone else's property was generally federally protected; nor, if I understand correctly, did it disclaim that.
The next question is: does CA law make banning people from your social platform an illegal infringement of citizens' rights? So let's follow the same case back to CA.
"The Agricultural Labor Relations Board opinion further observes that the power to regulate property is not static; rather it is capable of expansion to meet new conditions of modern life. Property rights must be "'redefined in response to a swelling demand that ownership be [23 Cal. 3d 907] responsible and responsive to the needs of the social whole. Property rights cannot be used as a shibboleth to cloak conduct which adversely affects the health, the safety, the morals, or the welfare of others.'"
In context it is talking about the worthiness of restricting the property rights of owners by forcing them to allow speech on their property.
In fact it calls out that the increase in importance of places like malls makes the cause of allowing free speech there more worthy, which is in line with the rise in importance of social media.
Seems pretty clear that there isn't much daylight between allowing people to petition for an offensive cause in a mall vs on Twitter, at least in California. But while I reluctantly agree with you, I have a few concerns.
I'm concerned that in the short term our Orange dictator will end democracy as we know it and would prefer the matter of free speech to be litigated after we have averted the immediate threat and had the opportunity to decrease the power of the executive to decrease the chance of a similar situation. Given the time required for any court decision this seems inevitable.
The court it seems would have found it an infringement on the property owners rights if the activity made say commercial activity at the property impossible. What kind of activity on a platform would be regarded as a taking? Electronic properties are a lot different from physical ones in this regard.
- Is speech from a bot protected? Is it still protected if it pretends to be a bunch of fake people?
- Is using someone else's platform to distribute ads protected? Does it matter if they are political ads?
- Is slander protected speech? Do we get to remove slander that is obvious to us, or does it require a court to decide?
- Is direct incitement to illegal activity allowed?
- What about indirect incitement? Can I run ads for the purple haters club which aren't themselves directly offensive but promote a group that everyone knows is all about promoting the murder of purple people?
It seems like disclaimers on posts are the safest legal choice as they represent the owners own free speech rights. From the Supreme Court opinion
"appellants are free to publicly dissociate themselves from the views of the speakers"
It seems logical that other limits to your free speech on others' property will be established to protect the integrity of the platform in the future. It would be better if the limits in question could meet one standard under law.
They really have unfathomable potential to be insidious thanks to algorithmic selection and presentation of information and entertainment.
"Social media is a not a public square where people can plant their soapbox, it is a set of privately owned servers where people can persist small amounts of data that comply with a set of rules."
Certainly that is how it started.
But the de facto power of social media's censorship and dominance in discourse will eventually bring government to regulate social media so as to preserve free speech.
Except, @jack has said publicly that is what Twitter wants to be.
So incumbents get a material campaigning advantage?
I don't support Paul Nehlen, I just don't think it's that cut and dry.
It's explicitly not the job of Congress. That's the point of the First Amendment.
Regulating free speech is up to private enterprise, using private enforcement methods like de-platforming (old school: you can't spew your drivel inside my bar) and shaming.
That has proven not to be the case. Bringing extremist views into the mainstream by way of social media (and politics) has only strengthened their influence.
I don't know the answer to this problem, but it seems our education system needs dramatic improvement.
Imagine tut-tutting if the Rwandan government had cracked down on radio broadcasts calling Tutsi people cockroaches and encouraging Hutus to chop them down.
If anything they would have cracked down on opposing broadcasts by claiming that, since Tutsis are the historically privileged beneficiaries of colonialism who were collectively responsible for the violent oppression of Hutus, any pro-Tutsi or anti-anti-Tutsi sentiment was unconscionable hate speech while any pro-Hutu or anti-Tutsi bigotry was not technically racism. The power to ban “hate speech” is the power to define “hate speech” and hateful people will use against you the very same weapons you propose to fight them with.
Interesting that you picked Rwanda. The go-to example for the dangers of free speech is normally the Nazis. For instance, I'm thinking of a study that attempted to correlate growth in support for Hitler with the availability of his speeches on radio or via rallies. It found, IIRC, no correlation at all.
In fact Hitler was regularly censored, deplatformed and so on yet it didn't stop him, only perhaps slow him down a bit. On the other hand he spent vast efforts on suppressing, censoring, vote stuffing and de-platforming in various nasty ways anyone who spoke out against him - all managed entirely privately via his various organisations, whilst the weak Weimar Republic were unable to stop him.
Dictators are invariably big fans of suppressing free speech, because they know censorship works in favour of whoever is in power. The reason democracy and free speech are so tightly connected is because in a democracy the people in power aren't meant to be able to do anything to stay in power beyond winning the approval of their citizens.
Also, what that poster did say (and I agree) is that people lack critical thinking skills and have not developed immunity to flat-out bullshit. There's a huge education pipeline problem, and our society is at risk because its normal defence would be voters who can think for themselves. Things like flat earth and moon landing conspiracy theories don't need to be "not allowed"; they just wouldn't take hold if people could think through basic logic.
There's literally no way you can read the 1st Amendment and think it should apply to non-government entities. It specifically mentions Congress only.
>and amend accordingly.
The chances of getting such an amendment passed are approximately zero.
It already applies to non-government entities. Telephone companies can't deny you service because they don't like you or what you say. Your power and gas company can't deny you service because they don't like you or what you say.
Trump can't even block people on Twitter (a private company).
This has nothing to do with the Constitution and everything to do with their licenses for public airwaves.
Public airwaves? This applied during the landline days, and it had more to do with the idea of monopolies/utilities/trusts.
Either way, it debunks your point that non-government entities are all allowed to censor...
It's not that they can't censor; it's that if they do, their legal status changes in a way they do not wish it to.
In other words, they can't censor as they are. A private company cannot censor.
>A private company cannot censor
No. They are common carriers. Twitter and the vast majority of other companies are not.
But you said "Yes" two lines above.
> They are common carriers.
These common carriers are private companies, not public companies. They aren't owned by the government. They are owned by private shareholders. AT&T, Verizon, T-Mobile, etc are private companies or non government entities.
> Twitter and the vast majority of other companies are not.
I never claimed twitter and the vast majority of companies are. I only responded to the assertion that all "non government entities" can censor. I was providing proof that some non government entities cannot censor.
If AT&T wants to censor, they have to go find other business. They can't be in the telephone business and censor. It's a lawsuit and possible jail for the execs involved.
People are so desperate to justify censorship that they'll make up nonsense. I can't believe anyone is dumb enough to believe all the pro-censorship nonsense in my thread. The only explanation is bias once again forcing people to make up and accept nonsense.
There is an argument to be had there.
That's not quite accurate. There is precedent for regulating speech in private media, as in the case of broadcast networks. Broadcast TV networks are still restricted on speech/expression to this day, as are radio stations.
There's nothing to say that social networks can't also be brought under the regulation of the FCC and have their speech and terms of service dictated to them. This turns social media platforms into something more like a utility. All users must be treated equally or the platforms get massive fines.

If I were attempting to get this through, I'd argue the major social media platforms are natural monopolies due to reinforcing network effects and the finite nature of user time (there can only reasonably be so many large social networks, and the market has already strongly demonstrated globally that that is a very small number). You can replace a natural monopoly with another (an independent Instagram, allowed to evolve as a threat, possibly could have killed Facebook and become the new monopoly), but you can't have 100 Facebooks that all perfectly compete with each other (some people will argue that in theory you could, but that would never happen in reality).

For whatever niche / category / concept they target, social networks inherently trend toward either consolidation of users or death; it's why the Twitter clones were all killed off in the early days when people were still experimenting with that social concept. It's why there are not dozens of large-scale YouTube or Reddit or even Stack Overflow clones (the rather brutal combination of network effects and finite user time wins in all cases, and you end up with consolidation of users onto just a few viable platforms). The market, largely left to sort things out on its own in the social media sphere, has provided a lot of proof that large social media platforms are natural monopolies.
Also, the FCC no longer regulates broadcast content. It publicly stated a number of years ago that it does not have the desire to do so. The stations police themselves in order to stay within their perceived community standards to prevent external regulation.
Today, unless a station does something really egregious and congresscritters get involved, there is effectively no content regulation.
In the mid-90's, I was part of a morning radio show that was #2 in a medium-sized East coast market. One day we said "shit" on the air three times to see if this was true. Nothing happened. We didn't hear word one from any listener, manager, or government agency.
Watch how fast Facebook would adapt and be monitoring news and trying to conform to the public opinion if they felt they needed to compete.
The only reason we're in this position is a reduction in real competition in the market, and the only power they fear is that of democratically elected representatives.
Elizabeth Warren had this right. Break up the platforms, and force social networks to allow a download of your data in a JSON format and a move to a separate service. Easily. That is the ONLY regulation we need.
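To make the portability idea concrete, here is a minimal sketch of what such a JSON export might look like. The schema (`schema_version`, `posts`, `follows`) is entirely hypothetical and does not reflect any real platform's export format; it just shows that an account is, at bottom, a small structured document another service could re-import.

```python
import json

def export_account(user_id, posts, follows):
    """Serialize an account to a portable JSON document (hypothetical schema)."""
    return json.dumps({
        "schema_version": 1,
        "user_id": user_id,
        # posts as (timestamp, text) pairs, flattened to objects
        "posts": [{"created_at": p[0], "text": p[1]} for p in posts],
        # sorted for a deterministic, diff-friendly export
        "follows": sorted(follows),
    }, indent=2)

doc = export_account(
    "alice",
    posts=[("2020-06-01T12:00:00Z", "hello world")],
    follows={"bob", "carol"},
)
parsed = json.loads(doc)  # a receiving service would parse and re-import this
```

The hard part of the proposal isn't the serialization, of course - it's getting platforms to agree on a common schema and to honor imports from competitors.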
It's interesting how they also collude with each other, defaming competitors and circularly maintaining this monopoly. A good example is the Apple/Google process for accepting new social media companies. It's nearly impossible, and held to a much higher standard than they hold themselves.
I feel old writing this, but once upon a time, phone-book companies (often the phone company, but not always) would deliver a phone book to every address with a phone. This was paid for almost entirely by the advertising sold by the phone-book company.
If a company or person wasn't in the phone book, they pretty much didn't exist.
All of this has happened before, all of it will happen again.
Twitter should be a protocol with many providers and many clients, not a restrictive service from a single provider. Ditto most of the rest.
People want what was best for a small group of hunter-gatherers in hostile territory.
Market consolidation (regional, national, global) reduced the number of markets, reducing the number of players.
One very illiberal idea is that we also need to break up markets, erect some barriers. Prevent winners in one market from usurping other markets.
In markets concerned with serving European users, GDPR already allows you to download your data. Facebook has allowed one to download all of one’s own messages, pictures, etc. for many years now.
The problem is that, in order to avoid infringing on other people’s privacy rights, downloaded data has to be anonymized: you can see your own messages as your own, but anything from another user will just be identified with a random identifier instead of their real name. This then makes it a challenge, when you want to import the data into a new service, to have all those messages associated with your friends’ accounts on the new service.
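The pseudonymization described above can be sketched in a few lines. This is illustrative only - not GDPR guidance and not any platform's actual mechanism; the salting scheme and identifier length are assumptions. The point is that each friend's name becomes a stable but opaque identifier, which is precisely what makes re-linking those friends on a new service hard.

```python
import hashlib

def pseudonym(name, export_salt):
    # A per-export salt prevents trivially reversing common names
    # by hashing a dictionary of likely values.
    return hashlib.sha256((export_salt + name).encode()).hexdigest()[:12]

def anonymize_thread(messages, owner, salt):
    """Replace every sender except the exporting user with an opaque id."""
    out = []
    for sender, text in messages:
        label = "me" if sender == owner else pseudonym(sender, salt)
        out.append({"sender": label, "text": text})
    return out

thread = [("alice", "hi"), ("bob", "hello"), ("alice", "bye")]
anon = anonymize_thread(thread, owner="alice", salt="export-123")
# "alice" sees her own messages as "me"; "bob" is reduced to an
# opaque id with no obvious mapping to an account elsewhere.
```

An importing service would need either the mapping table (which the exporter deliberately withholds) or the friends' consent to re-associate those identifiers, which is the challenge the comment above describes.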
You're talking about censorship as if it's something civil institutions have responsibility for, but private institutions shouldn't be doing?
Your comment is concerned about slippery slopes, but doesn't consider the one we're already skiing down where we discourage institutions from taking responsibility for accuracy?
Facts are often debatable; the set of facts that need to be accurate is also debatable; and you are suggesting that both of those debates be settled unilaterally by expecting (requiring?) the institution to "take responsibility for accuracy." Note that public pressure (read: popular opinion of the day) will almost certainly be the deciding factor in how those debates are settled by those institutions.
Are you suggesting that it is wrong for institutions to abstain from "bowing to the popular opinion of the day" in how they censor speech?
Where there's less room for debate, responsibility includes an obligation to avoid "shape of the earth: views differ" approaches.
If individuals and institutions don't take up this kind of responsibility, then there's no such thing as discourse, only speech as indulgence.
And the value of popular opinion is directly related to which one of those is used in constructing it. Individual and institutional responsibility, however, will remain orthogonal to that.
For anyone else to insert themselves into the conversation, public or not, and claim to have The Real Facts is arrogant. For an institution to attempt to referee the conversations it is privy to, prevents actual truth from being discovered. To require institutions to referee in that way? Is another act of arrogance: placing oneself above 'the other people who have to be kept in line'. To leave those institutions at the mercy of popular opinion, to referee as the mob sees fit, is insanity.
At least, that's the way I see it. Thank goodness we're still allowed to disagree on this.
Do you honestly not realize that this is the exact issue being debated? Whether the "standards" are being applied fairly and consistently irrespective of political affiliations? And whether the "standard" itself is loaded to censor certain relatively mainstream but not Silicon Valley viewpoints?
What you're saying reminds me of an anti-gay marriage argument I heard: "Hey, I'm straight and I don't have the right to marry a man. A gay guy doesn't have the right to marry a man either. Look at that: the same standard applies to all. Everything is fair!"
I'm not particularly aligned to either democrats or republicans, but I've looked at "fact-checks" by supposedly neutral parties and they're often absolute garbage. Not only is the "research" superficial, what is chosen to be fact-checked vs. what isn't fact-checked is transparently biased. The "same standard" does not appear to be applied to all.
Given the debatable nature of these questions, I think common sense would say it's better to let voters and the marketplace of ideas arbitrate rather than social media companies. And, as a general matter, I'm shocked people are still so into censoring in 2020. Isn't censoring books/ideas/speech something we laugh at, given how consistently wrong it tends to be? I'd say censoring is silly at best and evil at worst.
Got some numbers behind this?
Another bias of yours is assuming that I was for censorship. I argued only that the same standard be applied. That can mean no censorship too.
The views expressed in this case were
"When the shooting starts, the looting starts". Which, while perhaps not Trump's intent, can reasonably read as encouragement (or incitement, or glorification) of violence toward people.
> the marketplace of ideas
Twitter is a part of the marketplace of ideas, is it not? Their actions are simply an action taken as part of the marketplace of ideas.
> What you're saying reminds me of an anti-gay marriage argument I heard.
To the unoppressed, equality looks like oppression. That's true whether the unoppressed is a straight man or a person who can threaten others with violence without consequence. So yes, the two situations are very similar.
Do you not see how I could just as easily use your empty standard to ban tweets saying “A riot is the language of the unheard.” -- a quote I largely agree with? This quote explains rioting as a consequence of oppression. But if I was as easily triggered as you, I'd have it banned since it "can reasonably [be] read as encouragement (or incitement, or glorification) of [rioting]" a definitionally illegal act.
And frankly, your interpretation of the Trump tweet is not reasonable. The most reasonable, common-sense interpretation is one linking severe consequences to severe actions. Does the phrase "When you play with fire, you're going to get burned" glorify burning people?
> Twitter is a part of the marketplace of ideas, is it not?
In the standard notion of a marketplace, you let decentralized end-users gravitate toward things they like and avoid things they don't like. If Trump's speech is so offensive, then let END-USERS decide to block his content or barrage his comments with condemnation or vote him out of office. Instead, you're advocating for a major centralized "authority" to make the decisions about tweets that should and shouldn't be seen, whether or not they're popular with the end-users. Yes, if they disagree that strongly with the censorship, those end-users can leave the platform, but that's analogous to "well, if you don't like it, then you can just leave". If people like something, they'd rather fight to keep it good than let other people "ruin" it.
> ...whether the unoppressed is a straight man or a person who can threaten others with violence without consequence...
Mobs breaking windows, lighting things on fire, and stealing is violence. It exposes people to tremendous danger and tremendous injustice. Saying that violence begets violence should hardly be a controversial idea. I certainly don't think Trump's tweets are helpful in anyway. I think they're incredibly tone deaf and unhelpful. But I'm not going to cry to Twitter or Facebook and demand it be blocked.
I'd ask that you respect the guidelines of HN and refrain from using needlessly charged language like "triggered". I'm not "triggered". I have views that differ from your own. Triggered has a specific meaning, either implying a traumatic response (as in "The loud noise triggered my PTSD") or being used to mean something like "offended". I, personally, am not "offended" by Trump's statement, at least not in the colloquial sense. I'm disappointed by it, sure, but not offended. He didn't insult me. Since we're discussing language and semantics, let's be precise, yes?
> "When you play with fire, you're going to get burned" glorify burning people?
Fires are not intelligent beings with free will. You cannot, by definition, encourage a fire to burn someone. So while shouting "yeah, burn him" at a fire would not be inciting violence, shouting "yeah, shoot him" would be.
I'll also note that context plays an important role in how we understand statements. For example, if I, an individual sitting at my computer, say "protestors deserve to be shot", while that's a rather crappy opinion for an individual to hold, I cannot take action on it. It isn't threatening. On the other hand, when a representative of the government says the same thing, we must take that same statement much more seriously because the government does have the ability to shoot protestors. The same thing applies if I make the same statement while standing with a firearm across the street from a group of people protesting. In context, the same statement can have very different connotations.
Further, Trump specifically has made no effort to differentiate between peaceful and violent protestors. As an example, he retweeted an article claiming that park police cleared peaceful protestors out of Lafayette Park. That article was based on a series of tweets that at the time concluded that park police didn't use tear gas and were unaware of the president's movements, but now include more context, that Secret Service agents may have used tear gas, and that the clearing of the area was ordered directly by AG Barr.
So we have a President who has the power to fire on people, and has used that authority to use weapons of war on peaceful, non-looting, non-violent demonstrators. And who actively tries to blur the line between peaceful and violent protest (as do police forces when they escalate at demonstrations).
So the statement has to be taken in that context.
> And frankly, your interpretation of the Trump tweet is not reasonable. The most reasonable, common-sense interpretation is one linking severe consequences to severe actions.
So let's dive into this. I'd agree, if I were to make the same statement sitting here in front of my screen, it would be mostly an observational take. A terribly worded observation, but an observation. However, I do not have the ability to order people to shoot protestors. Trump does (and in a sense he did: Trump's AG ordered police and soldiers to fire tear gas at peaceful protestors to clear the area for a photo op).
In other words, an observation made by a person without power to follow through is a threat when made by a person who can. And Trump does have the power to follow through on his threat.
> In the standard notion of a marketplace, you let decentralized end-users gravitate toward things they like and avoid things they don't like.
Indeed, and twitter is a participant in the marketplace. Twitter itself is not "the marketplace of ideas". There are other places to speak. The idea that every participant in the marketplace must themselves also be a marketplace is antithetical to the idea of a marketplace. You can argue that twitter or facebook has positioned themselves as a marketplace, but they've clearly never really done that. Both have always had moderation and speech policies. They were just speech policies you agreed with (for example: certain uses of certain words have always been banned on both platforms).
> If people like something, they're rather fight to keep it good than let other people "ruin" it.
And you're welcome to do that. And others in the marketplace can listen, or not. Don't praise the marketplace of ideas on one line, and lament it the next when things don't go your way.
> Mobs breaking windows, lighting things on fire, and stealing is violence.
And I didn't say otherwise, but I think you've missed the point. I'll clarify:
The number of people who say that citizens on the streets who are acting violently should face no consequences is very limited. It's isolated mostly to anarchist and anarcho-marxist circles. People who commit violent crimes, indeed, deserve to face consequences.
But that goes both ways. Officers and forces who shoot unarmed, peaceful protestors should face consequences. There are hundreds of incidents of that happening in the past week.
If you have two groups, one who commits violence and is punished, and one who commits violence and is not, the one that isn't getting punished probably isn't oppressed, but the one getting punished might be. If you have two groups, one who commits violence and isn't punished, and one who commits no violence and is punished, well then something is very wrong. And one of those groups probably is oppressed. When the peaceful people are speaking out about being oppressed, and the response is to punish them, well the irony is palpable at that point.
So yes, a president who can threaten violence and face no consequences is not oppressed. While the citizens on the streets, citizens whom he is supposed to represent but is instead threatening with violence, they might just be oppressed.
> Apply the same standards to all.
I find these statements at odds. People can say what they want and it may or may not be inaccurate or misleading or accurate or truthful. Giving elected officials "a free pass" to do just that _is_ applying the same standard to all.
Elected officials have a microphone because of the political system and the endemic elevation of these regular, flawed, people to a status above the common man (same with police and a number of other power hierarchies in the USA).
Here’s an idea... if you don’t like what somebody writes online, don’t read it.
Do people really think that censoring, or even annotating, the President’s words will change how people feel about them? Like someone is going to read Trump’s message, think “oh right, mail-in ballots could lead to voter fraud,” then read an annotation from CNN and conclude “oh never mind, Trump lied.” On the contrary, it will lead to further entrenchment of division as they retreat to the comfort of their pre-existing views.
Are people serious about this?? It seems obvious that the most likely outcome of censorship or editorializing is more entrenchment of views and pre-existing beliefs, not less. Nobody who is already listening to Trump is going to suddenly stop because Twitter or Facebook shows them a CNN article next to a “fact check” label.
It’s just so ridiculous, I can’t believe we are actually having this conversation. What happened to teaching people critical thinking and not to believe everything they read on the internet? And what the heck is “violent speech?” Just close the page if you don’t like it. My goodness.
As far as changing minds goes, I think there's a swath of voters who sit in the political middle and they don't fact check. The swath is large enough that they can influence elections. It's critical to make sure they don't get influenced by politicians who make up facts on the spot.
A friend of mine and I commiserated last week about the difficulty of debating with people on political issues. We noticed that people can make up "facts" quicker than we can fact check. You would think that the onus would be on the person making an argument to provide proof. But that standard of argument is long gone. The proof seems to always be on the person who says, "No, I don't think that's right."
Herein lies the rub. It’s extremely rare that a politician is tweeting about something so black-and-white that the tweet can be provably true or false. There is almost always a gray area between fact and opinion. In fact, people tend to vote for politicians because they agree with their interpretation and prioritization of a set of facts. So I think you’ll find the vast majority of political tweets reside in this gray area, because otherwise they wouldn’t be political in the first place.
And if “provably false” is the standard for editorializing, then Twitter picked a terrible example to set as the precedent. The tweet they “fact checked” was Trump making a prediction about the future. Namely, he was suggesting that mail-in ballots could lead to increased voter fraud. Not only is this an opinion, but it’s a projection about something that has not happened yet. By definition, it cannot be provably false.
I can't believe that people are criticizing an organization for exercising its right to free speech. This wasn't even a case of removal of content. It was literally taking a sign and sticking another sign below it - something the government would be allowed to do to a private citizen. That's how far away from censorship it is. And yet.
And most importantly, who decides what is a lie?
Either take 230 away from the company and no-one can lie on the platform, or stop censoring/altering users and allow everyone to say what's in their legal right to say.
There's a meme about nothing being true on the internet, why do we need a Ministry of Truth now?
Surely that will lead to much more content deleted to avoid lawsuits? Why is that a desirable outcome?
Can’t you imagine Mark and Jack telling Republican senators: “We wanted to enable free speech, we told you so, but you took away the protections around that so we had no choice but to turn the moderation dial to 11.”
I wonder about those different platforms. They already exist — Gab simply isn’t that popular. Revoking Section 230 would expose Gab to a “Thiel vs Gawker” situation: a billionaire could sue them into extinction. That doesn’t seem desirable at all.
If FB's status changed, Gab could theoretically become more popular. Not all new platform/publishers would be the equivalent of a Gawker and thus raise the ire of a billionaire. There are plenty of independent journals, magazines, blogging platforms, etc that don't get sued out of existence.
Please don’t make accusations of astroturfing. It’s rude and against the HN guidelines. Why? Because the majority of the time, it’s not true. I don’t work at or have any connection to Microsoft, but a week ago, I was accused of astroturfing for them because I did not agree that them open sourcing stuff is EEE.
The problem isn't that people "don't like" what elected officials say. It's that people often take what elected officials say as truth without even considering that it might be false (they must be Smart and Good if they're in office, right?)
Fact checking and censoring prominent figures isn't going to affect people who already have strongly held beliefs. But it will make a difference to people who are impressionable or vulnerable or don't know any better.
Well, educate them on that instead of trying to control the information flow. History has taught us that it will always backfire.
The US is a rich country. If anything, using percentage of GDP understates just how much we spend on education. For primary & secondary education (everything before college), there are only four countries in the world that spend more per student than the US: Luxembourg, Switzerland, Austria, and Norway. For post-secondary education (college and grad school), the US spends more per student than any other country.
If you look at other metrics such as the percentage of adults with college degrees, or the percentage of adults who graduate high school, the US has never done better. If you look at PISA scores, we're very close to the OECD average and that number hasn't changed much over time. (Like most developed countries, absolute scores have slowly decreased over time. It's not clear why this is happening.)
If people have been working for decades to destroy public education, they're doing a very bad job of it.
I teach in college. I was born in the early sixties. My daughter teaches math in high school. Recently I showed her my math books from my old high school. Her response: No way that my students could follow this.
Where I teach we have to give extra classes in basic math (I'm in STEM).
We're definitely sliding. Reading seems to be sliding even worse.
By the way, I don't think there is much of a correlation between how much a country spends on education and the quality of it.
I'll defend their right to be stupid.
I won't prevent people from learning; rather, I won't force them to learn if that's what they prefer.
>A democracy requires an educated electorate, so you are anti-democracy?
Democracy doesn't require an educated electorate; it only requires a majority. If the majority is 'stupid' (stupid is relative), then stupid it is.
A democracy ruled by an ignorant majority winds up as authoritarian, but you already knew what that quote I paraphrased means. Keep deflecting.
No, unless you force somebody to put that link.
>A democracy ruled by an ignorant majority winds up as authoritarian
If the majority wants authoritarianism, then authoritarianism it is; that's democracy.
Also: a quite effective assault on climate action using misinformation, violent riots, a movement campaigning for civil war, ridiculous medical advice by the US president leading to deaths during a pandemic, etc.
This all happened and permanently changed the debate. Free speech is poisoned these days and we can't simply ignore that.
Even if that is the case, it shouldn’t mean we give up on free speech and allow it to be effectively eliminated “so it can be saved”. Free speech with some censorship is just censorship.
A robust public debate is how ideas are fairly brought out and given an opportunity to thrive or die. Having Twitter or a government or “fact checker” or any other person/organization in the middle adding their outsized influence to the picture brings the whole debate out of balance. Why is Jack’s voice worth 1000x more than yours? Because he’s more “enlightened”?
If I let someone into my living room, and that person makes political statements I don't like, I'm free to ask them to leave and not come back, or tell other people in my living room that I think the statements are factually incorrect.
If Facebook lets someone onto their website, and that person makes political statements Facebook doesn't like, why can't Facebook ask them to leave and not come back, or tell other people on Facebook that Facebook thinks the statements are factually incorrect?
Where do you draw the line between my living room, where I'm free to regulate who my visitors are and what those visitors can say, and Facebook?
Unless you have 22% of the world's population passing through your living room, this is a pretty poor analogy.
Maybe the town square is more apt.
As I've written in another recent comment, the problem is when Facebook or other networks become quasi-official sources for public communication. If the primary means by which the government thinks I will find out about public health orders, mandatory business shutdowns, etc. becomes social media, I'm very concerned about the messy mix of "public space" and "private space" that Facebook will become.
But I'm actually more concerned about a state where overnight the government can ban normal day-to-day activities, and if you're on a 30 day ban from Facebook you don't know about it until they're arresting people for surfing. Or, in my town's case, where they've stopped notifying residents of construction-related closures with written notice to each house days ahead of time, and have just started putting it on NextDoor.
There are two principles here: 1) it's a problem if you have enough power to force the weak to cover the expenses of your failures and 2) if you have a lot of power you also have the ability to defend yourself from assholes and malicious actors.
So, for point 1, if facebook decides that they want to dump trash into residential areas, they could silence everyone on facebook who complains. They could also call up their friends at twitter et al and convince them to do the same in exchange for whatever massive corporations like these days. Because of this possibility, I want facebook's freedom to curtail freedom of speech on their private platform to be extremely limited.
For point 2, if facebook becomes inundated with assholes or (heaven forbid) malicious trolls who make relentless fun of the font that facebook uses for posts and every post that everyone sees is just complaints about fonts, then facebook has the capability to create a special asshole facebook website and marketing campaign to convince the assholes to go somewhere else. They have the ability to switch out the font. They can fund research into powerful AI techniques that can group all the font complaints together so that only people who want to see that crap will see that and everyone else can have a good experience. Because of facebook's ability to deal with malicious (or social incompetence that is indistinguishable from) behavior, I want facebook's freedom to curtail freedom of speech on their private platform to be extremely limited.
On the other hand, if we're talking about an individual (or otherwise small and powerless organization) running a personal blog (or your living room), they have extremely limited ability to force other people to deal with their platform. And they also have extremely limited ability to deal with malicious behavior if they didn't otherwise have the right to curtail freedom of speech. In these cases, I want them to have greater rights, because the likelihood of them being able to use those rights to oppress others is minimal AND their need for those rights to defend themselves is greater.
Now, given that major social networks advertise pretty much everywhere, this might not be an entirely fair analogy. Some parking lots have ads for them posted in every other parking lot. But it's a point worth making that access is relatively egalitarian on the internet.
If a tree falls in the forest, and no one is there to hear it, did it even make a noise?
Also relevant is this very interesting research experiment conducted ~5 years ago on social media manipulation, in which they coined the term "Censorship 2.0".
You can't decide Facebook is subject to a different set of laws or has fewer rights simply because people have arbitrarily decided to communicate on its platform more than another platform. Every private entity should be afforded the same rights to regulate their private property or services unless there is legislation specifically saying otherwise.
If you or any other person do not like the way Facebook is regulating usage of their platform, you are free to go start up your own social media platform with a different ToS (and many people do).
>You can't decide Facebook is subject to a different set of laws or has fewer rights
Oh but we can, that's what democracy and lawmakers are for, and if you've been keeping up with the news at all for the past few years, you'll know that's where we are increasingly likely to be heading.
And you're free to not ask them to leave, also... which is what we're arguing Facebook should be.
Second, some entities are so big they constitute a public good, even if they are not recognized as such. If Google Search started to openly promote one side of politics and not the other, the regulations would hit hard and fast.
Third, this idea of 'populism' is a little warped. People agree or disagree with the nature of the protests to varying extents. Don't assume that the press, those writing letters to FB, or especially FB employees themselves, are in any way representative of anything.
Fourth, it's in the company's interest to try to remain 'neutral', whatever that might mean.
Finally - it's in everyone's interest for said platforms to have consistent, objectively applied, and hopefully independently monitored criteria and operations for managing what they take down and not. Zuck should not be involved in any of the day to day operations, and should not be commenting on any specific situation or post.
Far from both your living room and Facebook, so its exact location is merely of theoretical interest here.
This distinguishes them from libraries, the only other similar platform for unmediated content. Libraries treat all information as equal and curate it as such. Facebook does not treat information equally, which means it is already moderating content.
Facebook already censors content that politicians deem unsavoury, so why should one political message get a free pass while another is removed?
Even access is limited in some libraries for some information like technical libraries and books describing the manufacturing or design of dangerous materials.
The point still stands, the information that libraries provide is organised to an open standard defined by information science, not an opaque algorithm that is subject to the whims of its creators.
Employees I spoke with did not seem particularly moved by these answers. "Everyone's grateful we have a chance to address these things directly with him," one told me. "At the same time, no one thinks he gave a single real answer."
Another said Zuckerberg appeared "really scared" on the call. "I think he fears his employees turning on him," the employee said. "At least that's what I got from facial expressions and tone."
At the same time, another employee told me that Zuckerberg's decision was supported by the majority of the company, but that people who agreed with it were afraid to speak out for fear of appearing insensitive. (An employee who spoke on the call echoed this point.)
So much for "free speech" within Facebook, the company. In this case, "popular opinion" cannot be shared openly.
It’s a scary thing to get labeled as a racist because you defend free speech. I think it illustrates why we should shy away from censorship as much as possible.
People here saying that social media companies should be considered a utility once they reach a certain size are very on point.
I have zero doubts that the anti-racists would happily advocate for Trump to be cut off from electricity/internet/sewage/water.
In some way I agree with them but in another there are healthy methods of activism and unhealthy methods of activism. We shouldn’t be encouraging companies with billions of users to weaponize the unhealthy forms.
I am interested to see how many people who want Trump’s post labeled have read about the history of censorship in Europe and China.
Christianity used censorship a long time ago, and what were the primary principles they believed in? Do unto others as you would have them do unto you, love other people, don’t do drugs or commit adultery. Then when you look into the cracks you can see the crusades they went on and all the horrible things that occurred as a result.
I hope that isn’t the world we recreate.
Which, as you started to say it, is the slippery slope fallacy.
I don't see what the problem is with embedding some factchecker work beneath content (NOT "censoring" it, which is a ridiculous statement since I haven't seen this at all) that is making claims that don't survive even a single Google search against reputable sources. Bad information is harmful, and the vast majority of social media consumers are lazy and reactive and not very good critical thinkers. This wouldn't even be necessary if any sort of critical thinking was taught before college, but it isn't. (And that might not even help, according to some other claims, which again, necessitates the people with the voice being heard giving people good information.)
Do you think people have the right to convince other people of something that would be harmful to them, like a charlatan or snake-oil salesman? Ethically, I don't think so, and I don't think it's a "slippery slope" to anything that is an overall bad.
Plus, private companies can do whatever the fuck they want. If they want their content to be factual, they have the right to enforce that according to their criteria. Anyone is welcome to create a competing site that allows any and all harmful bullshit to flourish.
Even if for some reason you don't buy that, social media could easily justify a ban by differentiating between his personal and official government accounts.
The whole 'the only good democrat is a dead democrat' and glorifying violence would have been much harder to do with traditional potus press statements.
Twitter and Facebook allow him to use these new tools, despite his tendency to incite violence. They help to enable him. And it's their choice. And they thus make his words much more effective.
There is a reason yelling fire in a crowded theater is not protected speech, and companies should be in the business of censoring speech that incites violence.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Furthermore section 230 (c)(2) states:
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
So to be clear, it's not that the law is unclear on this, it's very clear on this: the platform has no responsibility to consider whether speech is constitutionally protected at all. Not only is there no qualification that these platforms need to meet in terms of moderation, but the law actively states that the platform can moderate speech.
Yes, this is the current state, but the question is whether it's morally right, and whether we should update it to reflect the current situation of the social media behemoths; this law was designed for small bulletin boards at the time.
Twitter and Facebook are publishers. Publishers still get protections afforded to them under 230(c).
It's a discussion we should all want to have to preserve our freedom of speech from being infringed on by these (very very few) social media companies.
I don't think anyone is arguing they don't currently have these powers, it's whether they should and whether they've been abusing them.
Why? Twitter is not infringing on people's freedom of speech. Trump's comments are proof of that, because he continues to have a platform on the site. People keep saying that they're being oppressed by the very site they're using to say they're oppressed. No one is being oppressed by Twitter.
You take away 230(c) from Twitter and one of two things will happen:
- they'll cease to exist because they will become legally liable for all content their users post
- they'll automate moderation much more aggressively and "censor" 10x more than they do currently.
Also, it'll make it harder for competition to exist because the government will increase the burden on creating a site that hosts user generated content. It will be a contraction of speech online.
Really, if you're not happy with the moves these companies are making, you should be advocating for more competition so you can switch to an alternative.
Facebook has for the most part stayed out of it and they are being attacked for not providing a Ministry of Truth like Twitter does.
So the little bit of competition that Twitter has is being coerced to do the same.
If Twitter continues down the route it's going and continues to publish its thoughts and link them to others' thoughts, then I think they should have their 230 protections revoked, and that will probably shutter them. Good.
Hacker News should be all about breaking up monopolies and regulating companies, what gives? Ah yeah, the elections.
Similar alternatives for those interested:
> Ah yeah, the elections.
I'm not american, so i have no horse in the elections, apart from wanting twitter to continue to exist.
Twitter is a private company; it should be able to moderate its platform as it sees fit. If it sees content that it feels is dangerous, it should be completely within its rights to remove it from its site. Preventing it from being able to do that is the government restricting Twitter's speech.
If you don't like how Twitter moderates its platform, you should have alternatives to go to that moderate differently. That's basically the whole premise of subreddits and, well, a "free and open market", rather than the government forcing companies to publish content they don't want on their sites.
This is a popular belief among people who feel some viewpoints are being "censored" but it has no basis in law or reality. My cynical perspective is that if enough people spread this lie, it will become true in the court of public opinion, which is what the people being "censored" are hoping for. (the scare quotes are because "censorship" is a scary word used in place of "moderation" or "community management" or "spam removal")
Platforms are 100% allowed to take down material that they find objectionable for whatever reason.
The other reply to your comment has posted a link to the actual law: https://news.ycombinator.com/item?id=23408218. I'd encourage you to take a look.
Please do not conflate civil torts with criminal hate speech
[Warning this message is inaccurate]
Now they've created the expectation that they will choose, and they said no.
Going to be funny when the Public Health Orders all start targeting social media.
Being the antenna for Donald Trump to broadcast constantly to the internet was going to be a quagmire no matter what.
If Facebook or any other social media site wants to do that, they should be classified as a common carrier and be regulated by the FCC.
"One reason common carriers are licensed is to ensure they provide indiscriminate public access and protect the privacy of the general public."
It would help to make things clear if FB, in their terms of service, specifically and explicitly excluded public officials from the limitations on hate speech, incitement to violence, and other speech otherwise prohibited on the platform, and marked that speech accordingly.
Or let an official explicitly check such an exception in his/her profile, or maybe at the individual post level, à la carte style, with separate checkboxes for hate speech, racism, etc., and mark such posts as made under that exception: "This post is published under the hate speech, [comma-separated list of the checked checkboxes' values] exception requested by this public official".
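To make the idea concrete, here is a minimal sketch of how that "à la carte" disclosure label could be generated from the checked boxes. All names and the label wording are hypothetical, invented purely to illustrate the suggestion; no real Facebook API is involved.

```python
# Hypothetical sketch: a public official checks boxes for the policy
# exceptions a post is published under, and the platform renders a
# disclosure label from those checked values. Names are illustrative only.

# The set of exceptions an official could opt into (assumed for this sketch).
POLICY_EXCEPTIONS = ["hate speech", "incitement to violence", "racism"]

def exception_label(checked):
    """Build the disclosure string from the checked exception boxes."""
    # Keep only recognized exceptions, preserving the canonical order.
    valid = [e for e in POLICY_EXCEPTIONS if e in checked]
    if not valid:
        return ""  # no exception claimed; normal moderation rules apply
    return ("This post is published under the " + ", ".join(valid) +
            " exception requested by this public official")

# Example: an official checks "hate speech" and "racism" on one post.
print(exception_label(["hate speech", "racism"]))
```

The point of the design is transparency: the post stays up, but every reader sees exactly which rules the official claimed an exemption from.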
* Is the way that people view large public figures on social media all that different than how they did with newspapers?
I'm not sure if these questions imply anything about how social media sites should deal with regular small-audience / low-influence users, but I think any platform sharing influential posts to a large audience has similar responsibilities to a newspaper.
Ultimately, the comparison between government censorship and social networks' decision to spread somebody's message falls flat because of the different baseline:
If governments do nothing, there is no censorship. Governments have to decide to take action for there to be censorship.
If social networks do nothing, nobody gets to spread their message through them. Social networks have to take action (write software, run servers) for messages to be spread.
This difference changes the calculus of moral and ethical responsibility.
The ancient Romans didn't allow prosecution of some public officials while in office for a similar reason. Their courts could too easily be abused as a political tool.
It's not unreasonable to leave public officials unmoderated while they're in office.
The real problem here is that we're letting private companies make the decision. We should pass a law that requires unrestricted free speech on any platform of a certain size. And then give users optional moderation tools. Users could toggle a button that says "Show me fact-checking along side controversial topics" but it would all have to be opt-in.
It could lead to censorship, which would be terrible, and we should let people post whatever is legal. A way around this would be to combat the misinformation by making sure the reader has quick access to the facts like Twitter's fact checking.
I grew up in a rural city and was taught to believe that jet fuel doesn’t melt steel beams, aliens were real and the government was hiding them, farms were being oppressed by liberal government in favor of factory farms, ghosts were real, cellphones cause cancer, vaccines cause autism.
This shit was always here, but now people who were completely ignorant of it know that it's spreading.
They’re all for censoring or not censoring if it aligns with their agenda. And they’ll do a 180 when it benefits them.
I believe the Gores’ dream of the PMRC may yet come to fruition for different reasons...
What is the free speech argument for automatic algorithmic recommenders?
With all due respect, I disagree entirely, and elected officials censor each other as a matter of doing business. Social media sites should be designated as journals and subject to the same ethics as journalists. I think the novelty of harvesting personal data (as in, "Oh, you mean someone cares about my favorite color? Neat!") can be tempered by holding users to the same standard as journalists.
Social media sites are just journals. I am not going to allow Tinker Bell to sprinkle her magic pixie dust over Facebook and Twitter to give them some "special" status that allows them to avoid their journalistic responsibilities. They are in the journalism industry; they should lead, not beg for special consideration.
BTW is threatening violence against fellow citizens even legal?
Our elected officials have their own electronic resources. Every senator and congress person has a website. Trump has a website. Anyone with $100 can have a website.
Let them distribute hate and lies on their own site.
He incites violence.
Facebook and Twitter's tools give him a fuller engagement with the population.
Facebook banned me for 3 days for commenting on a previous ban where I had said "white people suck" and "white people are savages", both for pictures contrasting white people protesting vs PoC and First Nation people protesting.
Disclaimer: I am very white. I was not promoting hate.
Out of interest I searched for "alt right" on Facebook, and it took about a minute to find a post that said "Social media has made too many of you comfortable with disrespecting people and not getting punched in the mouth for it."
The post is dated Feb 9, so it's been there a while. It's not unusual for that group.
And so on. It's one thing to have community standards, it's another to prove you're applying them fairly.
I see no evidence that Facebook is interested in doing that.
Everyone who promotes hate speech thinks they're just being misunderstood. You're not the first.
I don't hate white people. Saying they "suck" is not promoting hate.
I think you suck at understanding what I'm saying, but I don't hate you.
Yeah, I did. Please connect the dots from that comment in that specific context to where actual hate comes into play, and more importantly, where that supposed hate translates into a pathway to discrimination, violence, or other ramification of real hate speech.
I'm not trying to be innocent. I make mistakes all the time. If I do an edit, I append and keep the original content intact (save for typos) because I try my damndest to own my words.
My accusations against you are my guesses about you. You are very comfortable in judging me, so I'm not sure why you feel so hurt when I share my tentative assessment of you.
One more datapoint: that within the private group that the original comments were made, the feedback I got on this issue was aligned with my perspective. Something tells me that in your social circle it would never happen.
Your one data point is pretty funny. I bet within any racist group, hate against another race would also get postive feedback.
I believe that you intend to promote white nationalism by your disingenuous engagement.
Do you find that agreeable? If not, why?
edit: It looks like you don't find it agreeable but can't articulate why. Shall I ascribe intent there as well?
I think that the standard I broke was attacking a protected class: "white people".
I posted this scenario (the background of my original post, as well as the response here) to FB to ask for feedback. I specifically asked for criticism if they saw it (not just for echochamber validation).
All 10 respondents thus far saw no issue with what I said, with the most telling comment coming when I asked again:
Q: Do you think I was promoting hate speech in the given contexts?
A: Possibly, in the eyes of someone threatened by your opinion.
I stand by my words. I will correct them when the correction is valid, however, I've not seen that here in this dialog.