Unfortunately there's a lot of truth in this. It is rather fashionable to be anti-GOP. As much as I loathe the current state of the party, they are not always wrong, and on most traditional stances, especially the economic and foreign policy ones, I tend to agree. That is to say, you have company here in seeing a countering point of view get crapped on just for being a countering point of view.
But really it's bigger than that -- they need a new bogeyman. Big tech fits because most of the winners from tech aren't the GOP base, so it's easy to bash. And banning Trump has got them all riled up, even though there's no evidence of systematic bias beyond him specifically. In fact, social media largely propelled Trump and others into the spotlight in the first place.
> I think this is similar to much of the censorship we've seen.
That's... not a particularly convincing argument though, if you are really saying that the two situations are similar. Because the downvotes this comment got came from users, not big tech. The community decided for whatever reason, good or bad, fair or biased, that they didn't like the post.
So what's your alternative to that? If I tell my friends not to eat at a restaurant and their Yelp rating plummets and they're not the first result anymore, is that censorship?
Is the implication here that communities shouldn't be able to organize their own content? Because if that's really what you mean by big tech censorship, then I'm going to have a really hard time getting on board with that. The whole point of the marketplace of ideas is that some ideas in a community win, and that other ideas become unpopular.
I would think HN would be exactly what Conservatives want. Your post eventually gets hidden if enough people in the community dislike it, but people can still get to your ideas and upvote them or vouch for them if you genuinely have community support. And doing things like scraping the site or building a competitor is much easier than with larger tech sites, HN is big but it doesn't encompass the entire tech industry. It's a little difficult for me to imagine an alternative system. Is it censorship if somebody dislikes a Youtube video, or subscribes to a channel that isn't mine? Is it censorship if I search for a video and Youtube shows me the video with the most likes first?
I'm all on board for increasing market competition and making safe spaces for communities so they aren't drowned out, I can get behind a criticism that consolidation and having one all-encompassing space where a certain group is constantly blocked out is bad. But that's a very different proposal than "it's censorship that I got downvoted."
I agree that downvoting someone on HN is not censorship.
What I meant is that the biases of the people writing and training the ranking / flagging algorithms bleed through into the implementation. Similar to how there have been stories that facial recognition tools only recognize white people.
I don't know that it's a great argument or analogy. I mainly was trying to point out that when you escalate from individuals downranking items to individuals writing algorithms that have similar built in biases, then you end up with something very near censorship, even if it isn't intentional.
I believe that's really the whole argument behind "systemic discrimination": maybe individuals aren't overtly discriminating, but the system is built up in such a way that the result is discrimination.
You can agree or not, but it's worth considering, and I'd guess most people could be convinced of it in one context or another.
But the idea of "communities shouldn't moderate themselves", or what the GOP is proposing, which is that the government should decide how communities are allowed to moderate themselves -- that, in my opinion, is just really contrary to the ideal of free speech. I think that there's a risk of confusing the dangers of disproportionate and centralized power (which is a real problem) with the idea of moderation or criticism or freedom of association in general, which are themselves fundamental parts of free speech.
So I just think that we need to be very careful and precise about exactly what is meant by a claim that companies like Facebook or Visa are threats to free speech. I do encourage people and communities to try and examine their unconscious bias and how that affects the systems that they build, I think that's important for everyone to do. But the proposals that are coming out of the GOP are doing a lot more than that, there's a difference between examining bias and direct intervention into how platforms are allowed to moderate.
In fact, it's better than the Discord channel situation: not only are HN rankings not determined by a single moderator (votes reflect the people who are on HN and voting at any given moment), you can also still get to the content and post; it's just not visible or promoted by default.
It's censorship in the most technical sense of the word, we are technically repressing someone's voice in a community, but the 1st Amendment guarantees communities and individuals a right to censor through their freedom of association, and downvoting and hiding comments is probably one of the least extreme examples of that I can think of.
This only becomes a problem with market domination.
It's not a problem that Google puts some search results on the 1st page and some on the 2nd, even though most people don't click through to the 2nd page or even think about the fact that it exists. There's literally no other way to do search. It's only a problem because Google controls 90% of the search market.
And the English Internet includes a large number of users from Europe/Canada in addition to the users from liberal US cities. Sites like HN with a long-form discussion format are also going to skew the demographics further along those lines. It's not a marketplace that's going to value right-wing ideas; a majority of users will have a cultural background that finds them distasteful.
I don't think there's anything about free speech that says that only Americans should be able to form communities with each other, or that communities should be restricted to a single continent.
This would be a problem if HN dominated the entire industry and it was the only place to talk about tech, but... it's not. It's just not the same concern as talking about something like Youtube, where not only is Youtube extremely dominant, it's also hard to integrate with and hard to replicate because of high hosting costs. Heck, HN is one of the best places to build organic direct contacts with people. It's one of the few places online where if you have a popular site, you will get asked about direct RSS feeds, you will occasionally get direct emails from people. So I just don't get the analogy people draw between HN and, say, Facebook. I can't even read most business listings on FB without an account, it's entirely different beast to compete with.
I don't see any problem with a community of tech people worldwide having a site where they talk about tech, and part of that is always going to be that the people who use HN end up skewing in certain directions. There's no such thing as a neutral community, there are always predominant ideas, and there are always ideas that the community will find distasteful.
You can argue whether HN's specific biases are good, I think some of them are quite harmful, and I'm sometimes upset at the way this community skews on multiple issues. But I don't think it's a risk to free speech that those biases exist.
There are spaces I'm a part of that consider HN to be basically Right-wing, and there are a few spaces I encounter that consider it to be Left-wing. It just comes down to what those communities consider to be normal, it's expected that different communities have different standards of what is and isn't objectionable to say.
Users on a website meant specifically for tech workers?
Isn't that a bit like saying tech censorship doesn't come from big tech, just from the individual feelings of its employees?
Is the implication here that communities shouldn't be able to organize their own content?
Deleting and hiding any non-left wing view is not "organizing".
I'm not entirely sure. Seen from a distance, it looks like exactly what happened to Muslims in ~2016 happened to Republicans in 2021: social media companies finally decided to take action against the violent factions that used the rest of the users to hide in plain sight.
According to a paper I read a while ago, it worked against Daesh, Al-Qaeda, etc. without harming regular Muslim users. I hope that it can work as well against violent alt-right movements without harming regular Conservatives.
Now, do I believe that social networks should have that much power? Not at all. But I believe that framing these actions as aimed against all Conservatives is counterproductive. We should instead aim at breaking up these ~monopolies.
> Add to that the fact that Republicans are in fact also consumers, and I don't think it's fair to say they are ignoring consumers.
Well, they seem to be ignoring most consumers to focus on a single group. Is that a better formulation?
Great. I'd like to see them do the same with Antifa and co.
I don't think it is that obvious because, at least me personally, I haven't seen any evidence of this. Now there are websites that are built around reducing visibility of unpopular opinions (like this one and Reddit), and elevating popular opinions (like Twitter), but I don't think it's fair to call this censorship. You can't join websites where opinions are judged and then complain when they're not judged your way. By that logic, every time a candidate loses office, they've been "censored", which isn't an honest application of that term.
The only times I've seen posts stricken is when they're damaging in some way, like vaccine misinformation or voter misinformation. It just so happens that one side seems to indulge in a couple of those categories.
> I don't think it is that obvious because, at least me personally, I haven't seen any evidence of this.
Not going to argue one way or another on this, but I do want to point out that you can change some words around and end up with:
"I think the downvotes are due to your claim that it's obvious there is systemic racism. I don't think it's obvious because I haven't seen evidence of it. Black people just happen to commit more crimes"
Where your analogy falls apart is that I can observe systemic racism using impartial statistics. When controlling for variables, there is imbalance that cannot be explained by anything else.
When we look at statistics on right-wing censorship on platforms, they show that this isn't happening.
I can sympathize with an emotional argument that it feels like it's happening, but when I look at the facts, I can't see any evidence. I can't say that about systemic racism.
I am willing to be convinced though.
We have been in this far from equilibrium state of individual liberty and we are drifting back towards equilibrium.
Because of the WW2 generation our education system has probably focused a bit too much on the dangers of Hitler and not enough on the dangers of Maximilien Robespierre.
So you get people out to crush and down vote Republican "Nazis" while chanting Vive la Liberte!
Not to mention, up- and down-voting is just a terrible system, especially in a place like this with people smart enough to use their brains.
Clearly there was a time in the past that the smartest people in the world would have upvoted phlogiston theory at the expense of reality.
I think it's worthwhile and meaningful to point out that their strategy of tackling big tech does not align with that ideal. It is meaningful to point out that the GOP is directly trying to legislate speech for large companies rather than address market forces that consolidate power in the hands of those companies.
Looking at the attitudes behind efforts like Florida's tech law -- I don't think that's dehumanizing, it's just interesting. We're seeing a massive flip in the "conventional" wisdom about GOP's market philosophy:
> "Platforms like Twitter, Facebook, and YouTube are functionally the public square of the digital age. It is wrong that these platforms control and censor speech with impunity."
That's a surprising thing to hear from a supposedly free-market Conservative political party. I think it's interesting that the GOP's official statements focus almost entirely on censorship and on government regulation of hosted speech, instead of on the market itself and on consumer-driven outcomes. If that comes across as dehumanizing... :shrug:, I don't know, my opinion is that it's healthy to critically examine party lines and think about the motivations behind them.
And it's not like the title is wrong: the GOP genuinely isn't targeting consumer choice or general consumers right now, they're specifically and very narrowly targeting a subset of consumers, they specifically care about Republican censorship. They're not even targeting broad censorship -- they're not talking about sex workers or LGBTQ+ demonetization. There's a little bit of an implication I hear behind the GOP's official statements, to me it sounds like they're saying that they are fine with big tech as it is, just as long as it stops censoring Conservative voices. Which, yeah, I think that's meaningful to talk about.
> That's a surprising thing to hear from a supposedly free-market Conservative political party.
No, it's not really. The GOP is not in favor of a perfectly free market and never has been. Other than libertarian extremists who may find their home in the party, you'll be hard-pressed to find a GOP politician who opposes things like public rights of way on private property.
Also, I'll point out that liberals attempting to parse out GOP 'attitudes' are almost always wrong. I've been pointing out the flip in GOP attitudes away from free market absolutism for many years now. Popular commentators like Michael Knowles and Matt Walsh have as well. Suddenly, a liberal publication discovers it and tries to point out 'hypocrisy' in thinking that's been changing for the better part of the last decade. Keep up.
I don't agree that this is just Liberals reading things into GOP positions, the GOP does position itself publicly as a free market party.
Whether or not their actions line up with that philosophy isn't really the point. I do agree that under Trump people may have gotten a bit more open about the fact that they reject liberalism and/or Libertarian philosophies. I have seen a relatively large increase in Conservatives telling me that they don't trust the free market to solve problems. But in general, the official party messaging hasn't changed just because the actions don't line up with it.
Of course, there's a lot of different Conservatives, and different areas/groups have different beliefs. But in my experience, if you step into mainline Conservative spaces and talk to (especially older) Conservatives, they will still tell you they believe in using the free market to solve problems. They will tell you that the government shouldn't decide outcomes. They will tell you that the free market solves discrimination. They will usually tell you that mandating equal access is government overreach. And these people are not Libertarians. They're not arguing for completely unregulated markets; but they are arguing for free market solutions for problems like discrimination or bias. They generally tell me that legislating for equitable outcomes in representation instead of processes is incorrect. The fact that people can find hypocrisies or disagreements or differing opinions about to what degree the GOP favors market solutions to problems -- that doesn't mean that the GOP is not cultivating a very specific view of itself and a set of narratives that it likes to tell about itself.
It is, of course, a mistake to assume that genuinely held political ideals and actions from party think-tanks are always going to match a party's PR, but it is equally a mistake to assume that differing opinions or actions from party members mean that the PR is irrelevant or that it shouldn't be discussed. Most ordinary Conservatives I personally know outside of think-tanks or extremely engaged groups would disagree with your assessment of their own party. They believe (right or wrong) that they are free market Conservatives. When I talk about their beliefs, I'm not parsing out, or reading into, or guessing anything about their internal representation of their motivations. This is what they tell me, this is what they think their party stands for.
So I think whether they're right or wrong about what their party believes is worth talking about and reflecting on. If younger or more involved members of the party have dropped those positions, or if the demographics have shifted -- the party doesn't seem to feel a lot of urgency to reach out to older/rural Republicans to let them know about that change -- at least not in my experience in the Conservative communities I encounter.
Older conservatives are weird and will be gone in a few decades.
As a republican for a very long time, I personally am on the 'free market is a tool, not an end' side of the policy platform.
To be fair, if you ask most normal democrats, they'll tell you they're for free markets too.
All Americans seem to think they are. It's part of our cultural mythos. If a democrat says they're not, it's usually because they don't want to be perceived as a republican, rather than an actual belief.
Regulations and criminal laws change the terrain. We could make them flow where we want them to, but we don't.
Regardless, it's still water and will still flow downstream. No dam or levee is going to change that, just as no regulation or criminal law will change the basic motivations facing corporate entities at any scale (i.e. min-maxing self-interest).
People say this a lot but it's not the case when the CEO is an ideologue. A few weeks ago I was at a meeting where our CEO said customers were unhappy about the company's choice to insert politicized messages into its products. He said that they could potentially lose as much as 50% of their revenue because of it. And he still felt that he was doing the right thing.
In the current cultural climate, putting political messaging in your products creates an immediate double-bind: you immediately anger part of the total market, but if the messaging is then withdrawn, you anger another part that did want it, with residual anger from the original anti-part of the market (some of which won't return in any case). In this situation, if the pro-political market is bigger, then it's just more rational to stay the course and claim the high ground.
Hays was chairman of the RNC, campaign manager for republican president Warren G. Harding, and postmaster general under Harding. Not just any Republican politician, but a leader in the GOP.
Personally, it seems to me to be a code developed at a time of differing cultural values, that likely would have been supported by any mainstream politician.
There's no way you can meaningfully associate the modern political parties with pretty much anything of the 1930s, because aside from the names, little is the same about the parties. You could blame the Hays Code on pressure from Catholic social conservatives, and observe that that is now a GOP constituency, but how meaningful that is depends very much on what you are trying to get to from it.
> Catholics have been solidly democrat until 2016.
No, they haven't; they've been split roughly evenly between the parties for much longer (this data only goes back to 1992, and shows it true the whole time).
There are at least two fronts: having good enough software to be compelling, and making the ops situation radically easier. Honestly I think the Social Web (a general idea, not specific tech) is kind of in the doldrums; it has some interesting projects and a lot of enthusiasm, but there's a lack of key innovators. I'd probably get outright spat on by some, but I truly think Kubernetes is the first diy-able enough platform to be interesting for ops, and I do think we're not far off from having some fairly good out-of-box autonomic systems, & that would change the game. Alas, multi-tenancy is still extremely early days for nearly everyone, & crude, & needs a lot of work; it's kind of optional, but in the end it's a key enabler for interesting home computing. Reciprocally, Kubernetes really requires multi-cluster capabilities, clusters as cattle, & right now that's a cowpath we need to pave.
I totally agree that many members of the Republican Party are criminals and should be in jail, but we need to prosecute the crime, not throw away free speech and start surveilling everyone. When people coordinate violent anti-democratic insurrections, we should throw them in jail along with the other terrorists. When people dissuade people from getting vaccinated, we should take away their medical licenses when they have them, and hold them responsible for manslaughter when people die from Covid based on their lies. More laws aren't going to help when Democrats don't have the spine to enforce the laws we already have.
I'm still waiting for the Republican (formerly alt-right) alternatives to YouTube, Facebook, and Twitter to get anywhere near the formers' popularity. Also, a lot of tech folks won't work for these companies in the first place. It takes more than money to build a FAANG company.
> More laws aren't going to help when Democrats don't have the spine to enforce the laws we already have.
I didn't say anything about creating or enforcing any laws. All I said is that Big Tech is well within its right to kick off unscrupulous users, even if they happen to be the ruling party. The alternative -- passing laws that unconditionally force private entities to host the ruling party's content, including propaganda, calls for insurrection, and vaccine misinformation -- is far more objectionable and authoritarian. Especially so if you believe that building your own infrastructure and distribution networks isn't an unreasonable barrier to entry in the first place.
Of course, I'm all for prosecuting the criminals who instigated the insurrection and botched the COVID response to the fullest extent of the law (among other high crimes and misdemeanors). I agree with you that the Democratic party needs to grow a spine and take a stand on this.
But, why not both? Why not retain Big Tech's power to decide who can use their platform and who cannot? Last I checked, FAANG isn't infrastructure. Like, what would a reasonable regulatory alternative be, that doesn't just devolve into regulatory capture and erect lobbyist-designed artificial barriers to entry that nominally restrain FAANG's power, but practically impose too high a burden for anyone to challenge their market position?
"In 2019, Fox News was the top-rated cable network, averaging 2.5 million viewers."
They've not only become as popular, but the most popular with TV. Why are you confident that this can't happen with social media?
And the metrics of money and users don't really capture the whole problem. You're right that Parler never reached the popularity of similar non-insane services, but they didn't need to have the majority of money and users for a mob to come less than 100 yards from lynching members of congress. Reaching a majority is a barrier for peaceful democratic movements, not violent totalitarian movements.
Be that as it may, cable is also on the way out, and 2.5 million viewers is just over 1% of the voting population in the US. By contrast, Facebook has 258 million MAU in the US and Canada.
> They've not only become as popular, but the most popular with TV. Why are you confident that this can't happen with social media?
It totally can. However, as awful as Fox "News" programming can be, they neither call for nor coordinate violent insurrection. The minute they do is the minute they lose their FCC license.
> You're right that Parler never reached the popularity of similar non-insane services, but they didn't need to have the majority of money and users for a mob to come less than 100 yards from lynching members of congress.
Parler has since been forced off of AWS, and has been forced to block hate speech as a condition of being added back to the Apple App Store. I don't think Google has added it back to the Play Store. This was possible because there are no laws on the books requiring AWS, Apple, and Google to unconditionally do business with Parler, as long as their decision isn't based on discriminating against a person in one or more protected classes (and political affiliation is not a protected class).
I'm not saying that this is an ideal solution -- I don't think Big Tech acted swiftly enough, and I'm not convinced yet that they've learned their lesson. Ideally, platforms beyond a certain reach would be required to have a fast and effective content moderation mechanism to prevent their distribution channels from being weaponized as they were. But, I don't know how to codify this into a law that is both constitutional and not also weaponizable by big players to prevent competition from arising. Maybe the law could require a ratio of human moderators to human users, or something?
1. So how sure are you that the harmful conversations haven't moved on from Parler to another venue you don't know about? I don't know the direction things have gone, because these things move fast and I don't have time to keep track of them. But I do know that every time people tout a deplatforming success, I find out about a new cesspool a few months after the fact. Twitter -> Mastodon. Reddit -> Voat. Facebook... is anyone even claiming that Facebook has deplatformed hate speech when every few months there's another private hate group being doxxed? And even on the supposedly "cleaned up" platforms, the hate is still there, with better branding--if Jim Crow could become the War on Drugs and the KKK could become the Proud Boys decades ago, what makes you think that racism can't rebrand today? The primary success of deplatforming is that "woke" white folks don't think about racism because they don't see the N-word on social media any more--actual racists still have very little difficulty organizing.
2. Could you point out where I proposed laws that force companies to host content? I don't think I did that, and if I did, I'd like to retract it. My ideal solution would be for people to stop using corporate social media entirely, because it's nothing but toxic, but that's seemingly not realistic. A possibly more realistic solution would be for people to learn to talk to racists instead of trying to sweep them under the rug. Racism is a disease but it's curable in many cases. We need to stop just canceling people the minute they offend us, and learn to engage in a dialogue that changes minds.
Good, keep them scrambling. Time spent scrambling to re-establish your online presence is time not spent trying to get people lynched.
> The primary success of deplatforming is that "woke" white folks don't think about racism because they don't see the N-word on social media any more--actual racists still have very little difficulty organizing.
The purpose of deplatforming is to stop the racists from recruiting on that platform and harassing that platform's users. I don't think most people believe that deplatforming ends racism itself, anymore than deplatforming terrorists ends terrorism. But, making it hard for racists to grow their numbers is still a morally-worthy action, no?
> My ideal solution would be for people to stop using corporate social media entirely, because it's nothing but toxic, but that's seemingly not realistic.
Why would "non-corporate" social media be any better or worse than "corporate" social media? Honest question.
I think the fact that social media in which bad behavior (racism, trolling, or otherwise) doesn't carry the social cost that it would in real life is precisely what enables it online. The fact that it's corporate or non-corporate is orthogonal to this.
> A possibly more realistic solution would be for people to learn to talk to racists instead of trying to sweep them under the rug.
Why should I believe that "engag[ing] in a dialog that changes minds" will "cure" racism online? Have you successfully convinced online racists to stop being racists? Do you have a methodology you can share? Again, honest questions.
Asking because what has stopped racism in the real world is that it ceased to be socially acceptable behavior. Acting like a klansman in public usually gets someone banned from mainstream establishments (social cost), loses them non-klansmen friends (social cost), and if their employer, colleagues, and business clients find out, they lose their respect and business as well (more social cost). Effectively, being an overt racist gets you shunned -- something societies have been doing to enforce social norms for all of history.
I don't think we as a society have yet developed a healthy and effective shunning protocol for ensuring that online racists receive offline social consequences, but I think that this is a matter of establishing better social norms about it. "Cancelling" is the latest take on shunning, except that relative to past forms of shunning, it's (1) not that effective of a deterrent for actual racists, (2) it's currently far less forgiving for putting your foot in your mouth (since the Internet doesn't forget), and (3) it offers very little room for expressing sincere repentance. I'd be interested in figuring out ways to implement item (3), since I think that will precipitate a way for society to figure out how to mitigate item (2). I think the way to deal with item (1) is to make it possible to identify the actual racists to the communities they live in, so that offline shunning can be effectively used to deter bad online behavior.
You're not doing a full cost/benefit analysis here: you're looking at the benefit without looking at the cost.
The cost is that you're helping to radicalize these people, feeding into their persecution complex, and placing them in echo chambers where they never get to hear non-racist points of view.
And I'm also not actually convinced that the benefit really exists: if deplatforming actually prevents recruitment, where are all the people flocking into QAnon coming from? The fact is, racism is often learned in person from your parents and the people in your immediate social circle, without social media at all. A person in rural Tennessee may never meet a person of color or an out gay person, and if you ban them from social media the first time they use the N-word, you've confirmed what they've heard about the "liberal media" while ensuring that they never have the bigoted beliefs they grew up with challenged.
All the successful civil rights movements of the past worked by exposing bigoted people to the truth so that they change their minds. MLK positioned peaceful protestors dressed in their Sunday best where they would be beaten by police on television. Harvey Milk showed Americans that gay people were their neighbors, friends, coworkers, and family members, with coming out days. The modern left, on the other hand, doesn't seem to think that the truth is worth telling--we've given up the power of the truth for trying to silence our opponents. It's bad ethically and it's bad strategically.
> Why would "non-corporate" social media be any better or worse than "corporate" social media?
Because corporate social media is designed to addict you to it so that they can increase engagement and serve up more ads. Decentralized social media which isn't driven by profits instead exists to meet users' needs for communication.
> I think the fact that social media in which bad behavior (racism, trolling, or otherwise) doesn't carry the social cost that it would in real life is precisely what enables it online. The fact that it's corporate or non-corporate is orthogonal to this.
I agree that on social media, bad behavior doesn't carry the social cost it would in real life, but it's worse than that: bad behavior is actually rewarded because it creates engagement. Flame wars sell ads.
> Why should I believe that "engag[ing] in a dialog that changes minds" will "cure" racism online? Have you successfully convinced online racists to stop being racists? Do you have a methodology you can share?
"Cure" is maybe a strong word, and the racist/not racist dichotomy is also too black-and-white. Progress is slow, people rarely admit when they're wrong, and sometimes the person you convince is a lurker rather than the person you're talking to. For my part, I do know that I've convinced people here and there of some things, but the real compelling example is myself and my family. I was brought up pretty conservative; my parents were anti-racist, but while I was growing up I almost never came across any viewpoint in my life that was accepting of homosexuality, for example. My views on a lot of issues changed because people talked to me respectfully, and showed me gently where I was wrong. I'm grateful that people didn't cancel me or give up on me, and I think that other people deserve the chance I was given to change.
Strategically: I think we need to recognize that bigots are people too, and that bigotry grows out of hopes and fears that are very reasonable. Bigots want to protect their families, to work profitable jobs, to do the things they enjoy, and to hold their own beliefs. These are good things to want! But they curdle into bigotry through ignorance. The solution isn't to write these people off; it's to correct the ignorance. Usually, if you're debating someone online, they're too invested in their own view to be convinced, so it's more important to convince the other people reading the discussion. Hearing the truth in isolation doesn't equip people with responses to the lies spread by bigots; wherever lies are presented, we need to counter them with the truth.
At a more fundamental level it's just who we're supposed to be. Aren't we supposed to be on the side of love? If we're just giving up on our fellow humans and treating them as garbage to be discarded, isn't that just hate? If we only tolerate people who we agree with, how can we claim to be tolerant? If people are written off at the first mistake they make, who among us actually meets that standard?
> Asking because what has stopped racism in the real world was that it ceased to be socially acceptable behavior. Acting like a klansman in public usually gets someone banned from mainstream establishments (social cost), loses them non-klansmen friends (social cost), and if their employer, colleagues, and business clients find out, they lose their respect and business as well (more social cost). Effectively, being an overt racist gets you shunned -- something societies have been doing to enforce social norms for all of history.
The idea that racists need your acceptance is incorrect, and highly reflective of the urban-rural divide. I have family in rural areas, and I've spent a good amount of my time in the rural south, and in those places, being openly racist has little social cost--on the contrary, if you decide to mention that you think black lives matter, that has a social cost. The "social norms" you're talking about aren't social norms outside of your bubble. You think we don't need to convert racists because you look around you and think that antiracism has already won and we just have to maintain the lead, when in fact, the result is still very much in question.
You're also choosing a pretty old-fashioned and extreme example: sure, literally wearing white robes will get you shunned just about anywhere. But that's just because the face of racism looks different today.
And finally: if you're going to talk about how people enforced social norms for all of history: realize that for all of history, racism has been the social norm that was being enforced. Antiracism is a new thing. Only a few generations ago, the social norm was that humans legally owned other humans because of their race in the supposed "land of the free". If we are going to change the social norms, I'm highly skeptical that we can enforce the new social norms with the same tools that enforced the old ones.
> Big Tech could stop being Big Brother and then both consumers and Republicans would be happy.
Emphasis mine. Consumers and Republicans would be happy until Republicans start weaponizing Big Tech to overthrow democracy and prolong COVID. Then consumers, at least, would cease to be happy.
This opinion piece forgets, or (more likely) completely ignores, that these companies have in fact censored people, including a sitting president. People, including some doctors, were censored for their thoughts on the COVID-19 lab-leak theory (which seems to have some meat to it), on hydroxychloroquine and ivermectin (both of which have been proven safe and moderately effective, though the jury is still out on long-term effectiveness), and on vaccine hesitancy (mRNA vaccines are still new, and the public has a right to be a little skeptical). Right or wrong, that is what happened.
This action is investigating the perceived coordination among the major social media networks that engaged in such censorship. This coordination, if real, would be an antitrust issue because it is anti-competitive, much in the same way OPEC is an anti-competitive organization. I don't think, however, that attacking Section 230 is going to solve this. I also think that those claiming censorship are going to have a hard fight in court to prove their case, because tbh they don't have much of one -- there are other outlets and other ways of communicating. And frankly, I don't like the thought of the government pressuring companies on matters of speech or press.
However, this action is _not_ claiming that traditional antitrust rules have been violated, as the author tries to suggest. The Democrat plan is more traditional antitrust in that it wants to break up companies rather than prevent coordinated censorship.
> The Democrat plan is more traditional antitrust in that it wants to break up companies rather than prevent coordinated censorship.
Well, of course Republicans are going to be against this. We already have antitrust legislation; the DOJ just needs to actually enforce it, as the Trump DOJ did with Facebook (though that case got thrown out for some reason).
Passing a bill with the express purpose of targeting certain corporations comes too close to a bill of attainder for me.
And I support breaking up the big tech companies.