It basically is a fundamental part of our psyche. It's never going away.
What a piss-poor cop out argument this is. There are lots of impulsive, animalistic features still baked into our slightly more evolved reptilian brains, and it seems we haven't had much problem ironing out some of the worst of those impulses for net positive outcomes.
Clubbing people from different tribes over the head with tree branches being one of them, for example.
Why, pray tell, is this (or should this be) any different?
I think the cases for which that's true don't tend to correlate with the sort of content we're discussing. I don't think "outrage culture" is the reason people object to Alex Jones, as an example.
He wouldn't necessarily, because the people who don't like him aren't the ones making him popular. And his message has real-world consequences which sometimes can't simply be ignored. Harassing the victims of mass shootings is now a thing because he's convinced people that all of them are staged and the victims are crisis actors.
>But he's a handy foil for the outrage culture to get their outrage on.
We may simply be disagreeing on what constitutes outrage culture. To me, it's outrage for its own sake, outrage as identity politics and virtue signaling ... and there is certainly a lot of that on social media.
But people also have legitimate and understandable reasons to be outraged at Alex Jones and people like him, which to me makes it no longer outrage culture, but just outrage.
He claims he's received over 5 million new subscriptions to his paid service since he was kicked off multiple social media sites. And even if he's lying about that, the Streisand effect is real.
"But people also have legitimate and understandable reasons..."
This is a guy who says lizard people from Mars are deeply involved in the goings-on of our politics. Being outraged that a (pretend) crazy person says something crazy seems like an incredibly useless and unproductive emotional response.
If the only people he demonized were "lizard people from Mars," then yes, it would be silly to take him seriously, because there aren't any actual lizard people from Mars who could be victimized by that, but that isn't the case.
I didn't want to make assumptions though so I asked.
But my thesis is literally that once people take it seriously, it is no longer outrage culture, which is a politically neutral statement. Notwithstanding the implicit political undertones that make "unspoken implications" difficult to avoid.
That is some top notch rationalization of hateful and racist speech and such a reductionist view of the issue. Let’s maybe take a step back and consider the reasons -why- someone may find a belief offensive rather than making some blanket statement about the issue. Let’s also consider that being offended by racism is not the same thing as being offended by someone calling you out for being racist.
You've also described the quote as a 'reductionist view of the issue'. What issue? The decentralized archiving of content that a certain segment of online denizens has deemed objectionable? Isn't this just proving exactly the point of the quote you're taking issue with? Where is the rationalization?
How did you come to this conclusion? I'm curious what that logic ladder may look like.
The racism comment is an example and not necessarily directly targeted at the article, but more at what’s implied given the context (e.g. the comment about The NY Times article having the race changed from “white” to “black”).
And so your first response to engage this is to immediately play political-word-association and tie this service up with the worst elements of the "PC" divide, because of how some other groups, not even involved in the discussion right now, deploy bad-faith debate tactics?
That doesn't seem at all more helpful than whatever it is you're purporting to have an issue with at the core of a service like this.
Obviously, the list of banned content will contain a complex spectrum of offensiveness, including some really blatant stuff but also cases where the banning is very questionable or inconsistent with respect to what was not banned. It is the latter which is most interesting to preserve and study, but it cannot be mechanically separated from the former, which brings us back to the original problem of deciding whom to ban in the first place.
I do agree with that. It’s not so much a commentary on my part about banning, but moreso about how the service is being presented given that sentence and the context around it.
People shouldn’t be banned for just any reason, but that doesn’t mean banning should never happen because “people will always find something offensive” (which is what seems to be implied by that particular footnote - though I could be wrong). The total relativism about speech and offensiveness is the thing with which I take issue.
The primary implication is that the corpus of banned content includes generally agreeably banned content, arguably banned content, and a record of historical shifts in bannability.
That's fair. I absolutely acknowledge that my emotional response is making me get ahead of myself (I have an axe to grind with linguistic and moral relativism in general) and my apologies for not crafting the most reasonable arguments, here.
And don't get me wrong, I don't think the service is at all a bad idea. I just think that it could have been presented without falling back on the "Someone will always find something offensive and hate your label" comment.
I think it's obvious that everyone is offensive to some non-zero count of people. Even if that is challenged, there is strong precedent that your current words can be read as ruinously offensive under currently-unknown future social sensitivities, which could retroactively punish you (assuming a continued extrapolation of punishable offensiveness from the current state).
In that sense, "being offensive" in and of itself is specifically not a valid distinguisher, because it's always true. It's only when the level of offensiveness reaches some flash point at which society agrees that something is "too offensive" (as opposed to simply a binary "offensive") that it matters. But since everybody is "offensive" to some non-zero degree as a binary predicate, that word by itself becomes purely selective enforcement, with everybody guilty by default.
The statement "something is always offensive" is well founded and speaks to a real problem, but yes, it is unfortunate that its wording is very similar to defensive, reactionary, relativistic dismissal. In practice, "being offensive" is not actually what is railed against; it is "agreeably too offensive" (or simply "antagonistic", which is a separate concept) that is the actual pursued distinguisher, while the former phrase is overly broad to a fault and is used in that way to cause real damage.
Having said all that, I wholeheartedly agree that all of the above can come across as stuffy, pedantic, and disconnected; and that people obviously tend to use "offensive" to mean "agreeably too offensive". HOWEVER, all of this is critically important when dealing with situations that have high stakes real world implications, especially when specifically codified terms of service, policy, or legislation are involved where the "letter of the law" rules.
Is this hate speech? Some would say that it oppresses people who are just trying to raise their children how they see fit. It’s certainly ignorant, since I haven’t done the research to know how to feel. But that’s partly the point.
It’s a tricky issue. But I’ve seen people get banned for less.
A few decades ago, it wasn’t ok to say that gay people shouldn’t be shunned.
> oppresses people who are just trying to raise their children how they see fit.
I usually see this phrase in the context of people opposed to bans on smacking children...
(I don't think either counts as hate speech as written, BTW, but similar things phrased more vehemently in a different context might.)
"Deemed offensive" by the service gets us back to exactly the same situation as with the major social platforms now, so it's a legitimate concern.
If this gets traction it will be fascinating to see whose claims of systematic oppression are validated.
The problem isn’t that the voices of hate are being banned. The problem is that the mechanisms used to lower the volume on or silence hate speech can lower the volume on or silence any other form of speech. This is why the US enshrined free speech as the First Amendment of the Constitution.
Technical means of defying censorship, like this one, help to preserve that right.
What is concerning is that these particular companies are the main platforms used nowadays for conveying ideas and expressing thought, and there are no alternatives that offer the same level of reach. Sure, you can put your video up on another hosting platform. Nobody is going to find it like they would on YouTube.
In order to disseminate ideas widely authors have always been at the mercy of editors, publishers and distributors.
It's an impossible situation because if you force someone to publish something they find objectionable then you've stripped them of their own rights just so some arsehole can write shit about Jews/Muslims/Gays/Blacks/insert minority here.
You're right, it wasn't very charitable; I took it quite literally, and that means the 1st Amendment has absolutely no bearing on preserving the rights of one person when doing so tramples on the publisher's rights.
> Technical means of defying censorship, like this one, help to preserve that right.
I did miss the nuance there, because I get angry that there appears to be an impression among the general populace that the 1st Amendment applies everywhere and gives people carte blanche to say whatever they please without consequence.
"This is objectionable to me, therefore, no one should be allowed to consume or be exposed to it."
That's what bothers this particular commenter (me).
My own access habits are in practice similar to one who is banned or suspended regularly.
I have never felt like I essentially have no voice on the internet.
I understand that if some popular person with lots of followers gets banned, it seems like they would risk losing a lot of the audience they may rely on for income etc etc etc, but that's hardly the reality of the average 10/100/1000-follower user, is it?
Just because the big social networks are really big doesn't mean that they are the only places you can find a way to have a voice on the web. I don't mean to diminish your feelings if you feel that way. If you love twitter or fb or whatever and you get kicked out, then I certainly get that would be hard. But there are other places on the web. Lots of other places. Maybe some alternative places let you have even more of a voice precisely because they're smaller and it takes less to cut through the noise.
This is completely different than, and carries higher stakes than the role that telecoms, hosting companies, and publishers have played in the past.
At core though, for me this is about protecting access to information and the importance of decentralized (uncensorable) identity, not about who private companies should or shouldn't allow on their platforms.
This is a discussion that has some serious points on both sides. I don't believe throwing judgments on the people who choose a side that differs from your own will be helpful.
But normal/regular person can be used as opposed to celebrity, or billionaire. Both Jeff Bezos and myself are natural persons. Only one is a normal/regular person :).
Shaming people sometimes makes them rebel, and sometimes makes them reform.
Maybe you drive more people into radicalization than self-reflection when you shut them down. I'm not sure I've ever seen evidence of that either way, have you?
However, it seems almost axiomatic that if a person cannot be heard (as easily) then that person cannot recruit (as easily). I would guess that this is at least part of the purpose behind de-platforming tactics. Maybe. Maybe de-platformers just want to stop hearing from their opponents.
I'm wary of de-platforming as a universal tactic though I might be convinced that it is sometimes the right approach. At any rate, if it's effective, then people will keep doing it. And it seems to be pretty effective.
I think that someone isn't hateful because they're an inherently hateful person. People are a product of their environment. If they only communicate with hateful people, it shapes their worldview.
Everyone has flaws, but somehow hateful speech is that One Thing we cannot tolerate in a person. Bob might be a racist, but he's also a good carpenter, and there's some common ground there. If you just want to get hateful people out of your feed, then ban away. If you want to reduce the number of hateful people in the world, then treat them with respect and gradually show them a better way.
I'm not a good data point, because I have never been a target of that kind of resentment/anger/hatred/etc. At the same time, I can imagine the quiet racist is easier to tolerate than the inflammatory provocateur.
I probably agree with your solution in the long term. I just don't know who is supposed to do the tolerating and way-showing. Certainly I don't think that social media/tech/publishing platforms are required to tolerate viewpoints they think could hurt their business.
For what it's worth, I can tolerate hate speech. I'm not quite a free speech absolutist maybe, but pretty close. But it's easy to tolerate it when I'm rarely if ever a real target of it.
Edit: If you feel I am unfairly calling you out by asking if you have evidence for your previous claim about banning users for content causing radicalization, then I'm sorry. That's not my intent. I really am just curious if that's been studied. I've seen similar claims but not any evidence.
Candace Owens, a black conservative woman, was temporarily banned from Twitter for making the same comments the NYT’s new editor made, but with “white” replaced by “Jewish” - and all of a sudden Twitter decided those were racist. She did it to prove a point, and Twitter played right along.
Those are two examples.
"Judicial Watch has made numerous false and unsubstantiated claims, with a “vast majority” of their lawsuits dismissed"
JW is one of the only groups out there actually winning lawsuits to disclose information on the 2016 election and FBI mishandling of basically everything.
Sorry, they're real lawyers, in real cases, using actual facts that stand up in a court of law. I can see why your favorite website there doesn't like that.
ALSO... FWIW... My local news paper is EXTREMELY BIASED, and your site lists them as Neutral.
I think this is performance.
The short version is unless you follow that person already, you won't see them organically in any feeds. They could #JonathonKoren and you won't see it because they are effectively hidden unless you already followed them.
What Twitter is doing is absolutely political censorship.
EDIT: There are tools to detect the "Quality Filter", and there is a hidden-cam interview with an engineer from Twitter explaining it. Hardly a conspiracy theory. I don't know why it doesn't seem to be an issue for you, or what to make of your anecdote about not seeing it.
I’d take this a lot more seriously if this wasn’t being peddled by same folks that complain about “conservative purges” when Twitter bans a bunch of obvious bot accounts.
What makes this a conspiracy theory is the assumption of some grand covert political agenda. That’s really really hard to believe. Twitter is infamous for not enforcing their own TOS when it comes to hate speech or even calls for violence.
While I’ve never worked at Twitter, I’ve seen how fringe groups start spreading rumors of dark political agendas when it comes to these sorts of algorithms. It just doesn’t happen. It’s just spreading false outrage for clicks. It’s bullshit sold to people desperate for validation of their unpopular ideas.
I can't help but think the big platforms are shooting themselves in the foot by acting as censors and opening the door up to whole new categories of services (like this) that will ultimately replace them. Betting against decentralization seems like a long term losing strategy. Censoring content and promoting particular political viewpoints will accelerate the move away from centralization.
He had literally hundreds. For reference, pewdiepie has around 2800, I think.
I believe you can still access them through archive.org. I’ve thought it might be worthwhile creating a new YouTube account for Terry, re-upping all his videos, then giving him the keys. But I bet that’d run afoul of YouTube’s ToS.