
Online hate speech could be contained like a computer virus, say researchers
https://www.cam.ac.uk/research/news/online-hate-speech-could-be-contained-like-a-computer-virus-say-researchers
======
javajosh
As long as the user has complete control over the agent, I am all in favor of
screening inbound messages and dropping anything I don't like (hate speech,
but also paid speech). This is consistent with my view on "consensual
communication" - it is startling that we allow ourselves to force information
into each other's minds, but we don't allow ourselves to shove food into each
other's mouths without permission!

And yes, if this function came under central control, you might have yourself
a very merry dystopia. You might have a dystopia anyway if people can set
their filter not just on hate speech and paid speech, but on any predicate
they define, including anything that disagrees with them. It's the ultimate
information silo, and would no doubt be deadly to any democracy. It might even
be deadly to any form of government, since it would tend to seriously weaken
the intellect of the population.

~~~
imgabe
> it is startling that we allow ourselves to force information into each
> other's minds, but we don't allow ourselves to shove food into each other's
> mouths without permission!

I'm sorry, this whole concept irks me in a "not even wrong" way. It's
difficult to even begin to explain how wrong I feel this sentiment is. Let me
try.

First, it makes no sense to draw an equivalence between taking in information
and physically assaulting someone with food. Eating is an active thing. It's
something you do. You can be in the same room as a piece of cake and not eat
it. You can't choose not to sense something. There is not even an option to
consent to communication with someone. If you're near someone and they talk,
they've communicated with you, whether you wanted them to or not. (Assuming
you speak the same language).

It's the natural state of a person (all living organisms, actually) to be
passively receiving as much information as possible all the time. This is
because survival is dependent on being as aware as possible of your
surroundings so you can detect both threats and opportunities.

Deliberately choosing to limit the type of information you're able to perceive
makes as little sense to me as deliberately trying to reduce the number of
colors you can see. Or making it so you can feel only pleasure but not pain.
There are people who can't feel pain; it's a debilitating disorder that makes
it very difficult to navigate the world.

We need to be exposed to uncomfortable ideas just as much as we need to be
able to feel pain. Removing any information you "don't like" from your
awareness would be about as healthy for your mind as eating only cake and
potato chips would be for your body.

~~~
kgwxd
So you don't have a spam filter on your email?

~~~
imgabe
What's being proposed is more like that episode of Black Mirror where the
mother gets her daughter a brain implant that makes her unable to perceive
anything "objectionable" (anything that raises heart rate or upsets her in any
way).

Spam/Not Spam is a very limited category to filter. "Things I Like / Things I
Don't Like" is very different. Sometimes you need to know about things you
don't like.

Suppose everyone who doesn't want to act on climate change just filters out
any information related to climate change. Do you think that is helpful for
the world?

~~~
bmarquez
> Suppose everyone who doesn't want to act on climate change just filters out
> any information related to climate change. Do you think that is helpful for
> the world?

If they've already made the conscious decision to filter out that information,
forcefully shoving it down their throat isn't going to change their mind, and
may even reinforce their beliefs.

~~~
imgabe
But existing in a bubble where the only information they get is "Everything is
fine, climate change is a hoax" ... _is_ going to change their beliefs? How?

The mere existence of information within your realm of perception doesn't mean
it's "shoved down your throat". That is an unnecessarily hostile view to have
towards information. I assume you're in a room somewhere with some objects
around you. Maybe there's a painting on the wall. Is the painting being shoved
down your throat or is it just a thing that you might notice that happens to
exist?

Yes, there are definitely ways of presenting information that are counter-
productive to changing people's minds, but quarantining them off from
information doesn't seem likely to do any better. People can learn and grow
and change their views, but they need access to information to do it.

~~~
bmarquez
To use your analogy, if I chose to remove all paintings from my wall, you
would break into my house to install a painting on my wall to force it into my
peripheral vision.

Why should someone be obliged to consume information that they've deliberately
and consciously chosen to ignore? Do climate-change deniers, anti-vaxxers, and
flat earth supporters also have the same right to present their side of the
story too? And if not, who decides what information people are being forced to
consume?

Why can't the user decide for himself?

~~~
imgabe
But you're not consciously choosing to ignore it. You're handing your
conscious choice over to an opaque algorithm and telling it to ignore anything
it deems "similar" to some content, via however the person who programmed the
algorithm decided to define "similar". That is the opposite of a conscious
choice to examine and reject information.

> Do climate-change deniers, anti-vaxxers, and flat earth supporters also have
> the same right to present their side of the story too?

Yes! They do! You don't have to read it or agree with it, but of course they
have the right to express it. Freedom includes the freedom to be wrong.

The Internet is not your home. Walking around the Internet with filters hiding
information from you is like walking around in public with blinders on and
headphones in. Sure, maybe it prevents you from feeling uncomfortable
sometimes, but sooner or later you're going to get blindsided by a speeding
car you didn't see.

------
OneGuy123
Isn't it interesting that people do not understand intuitively that this can
only lead to 1984 sooner or later?

They will understand it only when it's too late.

~~~
commandlinefan
The only people who do understand have already been labeled "hateful".

~~~
metamet
That's a pretty absurd thing to say.

I don't have to post thinly veiled pepe "memes" in order to see how real this
slippery slope is.

The fact is that there is a lot of hate speech online, and it mostly festers
in unmoderated forums and channels. Finding an extremist group (of any kind,
really) is as easy as a handful of clicks.

Fringe ideas become normalized and accelerate the extremism due to the ability
of these long tail groups to find each other. Flat Earthers have reinforced
each other this way, and they continue to attract new people due to YouTube
rabbit holes.

Not sure there is a good solution to it, though. You can rely on private
companies to moderate and choose what they allow on their platform, but that's
proven to be highly ineffective. Then you face the problem where a platform
opens up in order to foster these specific communities (8chan/8kun being the
primary example right now), which then forces the companies whose services
they pay for to react to market pressure to either support or deny them.

They'll always exist, somewhere.

~~~
banads
>Fringe ideas become normalized and accelerate the extremism due to the
ability of these long tail groups to find each other.

The same is true for _good_ ideas. For instance, suggesting that doctors
should wash their hands was originally considered an offensive fringe idea
which could get you locked up in an insane asylum.

>Not sure there is a good solution to it, though.

The only solution to bad ideas is to allow them to be freely expressed so that
they can be dealt with publicly and transparently. Hiding bad ideas through
censorship is like keeping a boil covered with a band-aid -- it's only going
to fester.

"Like a boil that must be opened with all its ugliness to the natural
medicines of air and light, injustice must be exposed to the light of human
conscience before it can be cured."

~~~
metamet
> The same is true for good ideas.

Absolutely. This is precisely why I don't think there is any form of feasible
solution out there. Things get weird when forum moderation is used to snuff
out dissenting opinions. You see this a lot on certain forums on Reddit in the
last few years. r/Conspiracy, for example, was initially overtaken by right-
wing conspiracies peddled by Alex Jones (Pizzagate being the most obvious),
with certain mods banning anyone critical of them or of the Trump
administration.

Not entirely sure why my post above is receiving downvotes, though, as there
really isn't a realistic approach to suppressing hate speech outside of the
free market that doesn't teeter on opening the door to totalitarianism.

The spread and incubation of hate speech, debunked conspiracies and just
general misinformation is, unfortunately, just a byproduct of the existence of
the internet. Not much room for inoculation outside of education.

~~~
banads
>general misinformation is, unfortunately, just a byproduct of the existence
of the internet. Not much room for inoculation outside of education.

The scale of it is greatly enhanced by the internet, but such problems have
existed since language itself was invented.

------
Mikeb85
And who gets to decide what's 'hate speech'? Censorship is basically the same
as propaganda - changing the narrative to influence opinions. One day obvious
hate speech is banned, the next day any dissenting opinions are 'hate speech'.

~~~
downerending
Ideally, _I do_.

The tools that are needed here would provide the ability for each of us to
screen out whatever we don't care to see, whether it be racist speech or just
people that won't shut up about crossfit.

And since it's just me controlling my filter, I could easily adjust the level
of filtering. Sometimes it's _useful_ to know what people outside my Overton
Window are talking about.

~~~
DaiPlusPlus
That doesn't help the situation: people who already have extremist opinions
would then simply filter-out the mainstream and/or opposition content and be
content with their own echo-chamber, which would only further their extremism
and end with nasty results.

> Sometimes it's useful to know what people outside my Overton Window are
> talking about.

People don't have a personal Overton window: the Overton window describes
society's range of acceptable opinions.

And I guarantee that the vast majority of extremists - or even people already
on the fringes - are not interested in reading or hearing about the
opposition.

~~~
ratsmack
>... extremist opinions ...

Who determines what is an "extremist opinion"?

~~~
DaiPlusPlus
That's (tautologically, I admit) defined implicitly by my answer.

I define it (for the purposes of this conversation) as an opinion that its
holders buy into so much that they outright refuse to acknowledge anything in
opposition. After all, if you believe you've bought into " _the truth_ " (and
lack critical-thinking skills, and/or have been propagandized to this point
anyway), then it has to be an extreme position (by the standards of the
contemporary Overton window), because otherwise it wouldn't be necessary to
propagandize or rile similarly-minded people against your opposition.

------
cal5k
"Hate speech" seems to be imbued with somewhat religious meaning, since most
conversations about it make huge assumptions about its impact, frequency, and
scope.

~~~
homonculus1
I noticed several years ago that you can mentally replace terms like
"problematic" with "sinful" and "hate speech" with "blasphemy" without losing
any meaning in most instances. They basically convey nothing more than a
subjective moral judgment informed by group affiliation and puritan ideology.

"Racist/sexist/homophobic" still convey a little bit of information since they
hint at a way in which something might be genuinely bad. But their definitions
have been so irresponsibly broadened that you have to take it on a case-by-
case basis whether the label even means anything let alone whether it's
accurate.

This is a bad problem and everyone should worry about it, because a meaning
vacuum is like a power vacuum. Even people who don't care about cis white men
getting verbally abused need to realize that when you publish insane
histrionics about OK signs and cartoon frogs, you generate confusion and doubt
that actual white supremacists can use for cover. Extremism begets extremism.

~~~
kitsuac
The problem is that a tribe and all its associated organizations become
dependent on a fresh supply of racists and sexists in order to justify their
own continued existence. Evolutionarily, the tribe must continue to find more
and more of them to survive, even if they are mostly imaginary.

------
scarmig
Communication should be consensual. If I want to pre-emptively block some kind
of content, or delegate such authority to some trusted third party, that's
great. It's what we do with ad blockers all the time, which are less
controversial here for some reason than self-defined "safe spaces."

If anything, the ability to block content as opposed to individual users would
lead to more exposure to new viewpoints. For instance, I think Rod Dreher has
lots of interesting things to say. But it seems like a quarter of his posts
are rants about trans people using bathrooms, which I just don't have time
for. And as a result I mostly skip everything he writes, including the good
stuff. But if we built tools that emphasized filtering on content instead of
people, it would simultaneously let us get presented with arguments we're
actually interested in having and also not prejudge content by association
with other content that shares the same author.

~~~
antepodius
We accept ad blockers more readily because it's commonly accepted that ads are
a different sort of speech, in an ethical way. It's not somebody telling you
something; it's a machine trying to sell you something.

------
LinuxBender
This is already done on some sites through shadow-banning.

On big social media sites, people will likely work around the restrictions. I
can think of hundreds of ways to bypass such bans: GIFs, different languages
and character sets, slang, linking to blogs with rotating domain names, rot13,
base64, a Morse code browser plugin, to name a few. Those that don't bother
may find themselves in echo chambers.

Do people still create new slang or code speak these days?

One of my favorites, but probably not useful on facebook:

    perl -le '$_="6110>374086;2064208213:90<307;55";tr[0->][ LEOR!AUBGNSTY];print'
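The point about trivial encodings can be made concrete. A minimal Python
sketch (the blocked word and the filter itself are made up for illustration,
not from any real moderation system) shows why substring matching fails: the
literal phrase is caught, but a rot13 or base64 re-encoding of the exact same
message sails straight through.

```python
import base64
import codecs

# Hypothetical blocklist for a naive keyword filter of the kind that is
# easy to bypass with any re-encoding.
BLOCKED = {"badword"}

def is_flagged(text: str) -> bool:
    """Return True if any blocked keyword appears verbatim in the text."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED)

# The plain message is caught...
assert is_flagged("this contains badword here")

# ...but trivial re-encodings slip past, because the filter only
# matches literal substrings of the original alphabet.
rot13 = codecs.encode("this contains badword here", "rot13")
b64 = base64.b64encode(b"this contains badword here").decode("ascii")
assert not is_flagged(rot13)
assert not is_flagged(b64)
```

The same gap applies to slang, foreign character sets, or images of text:
anything that changes the surface form defeats a filter keyed to it.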

------
mikedilger
I'm glad they are suggesting something short of censorship.

I've always supported the idea of letting people choose what they want to see.
If moderators could tag posts with tags like 'spam', 'nsfw',
'graphicviolence', 'racistagainstraceX', etc... and if people could then
choose what kinds of posts they want to see... and if anyone could be a
moderator... and if people could then subscribe to only the moderators they
trust... problem solved.

I just can't figure out how to make such a system scale. I've tried a few
architectures but I'm still looking for the breakthrough to make it scale.

The problem today is if you don't like Twitter moderators you go to Mastodon
or to Gab, and the community fractures into bubbles, and that drives people
further apart. I think fixing moderation is going to be the next big thing.

------
novia
A lot of the comments here seem to be missing the point of the article. Hate
speech wouldn't be removed, it would just require an additional click or two
to take a look at it. I like this because it gives the user a choice.

Let's say you are a black man who just wants to scroll through Facebook at the
end of the day to see what your friends are up to, without having to see
racist comments that friends of friends have left. You would see the warnings
that hate speech might exist behind the filter, and, since you are trying to
relax at the end of the day, you could make the conscious decision that
looking at that stuff wouldn't be good for you.

But then, some other time, like maybe early the next day, you could make the
conscious decision to click through to see what the person said, and tell them
off, or, if the content was misflagged, let the algorithm know.

The current state is that we remove the content completely and pretend like
people aren't racist/sexist/phobic, or we leave the content up and allow
people to get dragged into flamewars at all hours of the day. This new
proposed tech would be akin to HN's option to "showdead," except with more
context about what you're opting into potentially seeing.

------
misterspaceman
> This approach is akin to spam and malware filters

I think "spam" is a better analogy than a computer virus.

------
trhway
>Definitions of hate speech vary depending on nation, law and platform

Under Russian law and its application, speech critical of the government falls
under "extremist speech spreading/inciting hate toward a specific social
group" (in this case, the social group is the government).

------
tus88
That's weird... I thought it would be a link to the Chinese military. Oh,
just the UK... close enough.

------
bryanrasmussen
So, contained like a computer virus; sounds good. I suppose whatever these
computer virus things are, they're not a big problem for people or anything.

Anyway, one obvious way this is different: when the popup says "this might be
a dangerous file, continue?" users click yes without thinking about it,
because they want to see the funny thing that Bob sent them. When the popup
says "this might be homophobic hate speech," they will click no if they don't
want homophobic hate speech and yes if they do - because some people do want
it, which is part of the problem.

The problem is only partially that gay people have to read homophobic hate
speech; the other, insoluble part of the problem is that there are non-gay
people who are interested in reading it, and reading more of it, and more
virulent hate speech, and then going out and beating up a gay person.

Anyway, researchers say that two totally unlike things can be handled in the
same way because they're here to help:
[https://xkcd.com/1831/](https://xkcd.com/1831/)

------
rudolfwinestock
If “hate speech” has a computable signature, then _not_ -“hate speech” also
has a computable signature.

Therefore, we can expect all kinds of bad actors to take advantage of that.

Sooner or later, Internet outrage mobs will form around vulnerable people
whose speech wasn't _not_ -“hate speech” _enough_.

I'm sure that plenty of people, around here, can dream up some more nightmare
scenarios.

