

Abuse of the abuse button - steveklabnik
http://rarlindseysmash.com/posts/2013-07-29-abuse-of-the-abuse-button

======
Karunamon
> _I honestly do not expect a privileged homogenous team to actually be able
> to come up with a solution, because privileged groups tend to come up with
> solutions that are best for privileged groups._

Leaving aside the issue of whether anyone buys into third wave feminism's
definition of "privilege", this comes across as an indictment of every abuse
reporting system ever made. It adds nothing and suggests nothing, only says
"this sucks".

Actually, I take that back. It's impossible to leave that issue aside since
you implicitly accept that definition to even make sense of this article.

Thing is, "abuse" is determined by the service provider, not the users. (And
rightly so - or else you run into the problem mentioned where people flag
something off and ruin someone else's day because their delicate sensibilities
were offended.) Twitter and every other social communication site has a list
of "thou shalt not's" which you are reporting when you click on the "flag"
button.

~~~
makomk
But, but, apparently it's important that non-privileged people, as defined by
feminists, be allowed to tell privileged people to kill themselves or go choke
on their own vomit: [http://stavvers.wordpress.com/2013/07/27/against-a-
twitter-r...](http://stavvers.wordpress.com/2013/07/27/against-a-twitter-
report-abuse-button/)

To be honest, at this point I wish there was some way every major side of the
argument would lose, because they're all pretty unpalatable.

~~~
Karunamon
Not really. I see three sides in this argument.

* The "deal with it" crowd that doesn't care who they offend or see why tone is important. (i.e. 4chan users)

* The radfem "offenders are literally hitler" types who go out of their way to give people a hard time over their choice of words, putting zero thought into intent (i.e. r/SRS users)

* The people who want to have informative and interesting debate without being hassled by group 2 or trolled by group 1 (i.e. HN users).

The first two groups can fuck right off; they are both a plague on
constructive discourse and on the sites where they congregate. They also both
tend to polarize the userbase and cause more problems by fighting with each
other and dragging other boards into it (one can see this in action by the pro
and anti SRS factions on Reddit).

~~~
makomk
I've been seeing a different three sides:

* the 4chan /b/ "deal with it" crowd

* a group of radfems like Cathy Brennan who think they should be able to insult and dox trans women with impunity (names, addresses, photos, jobs, the works) because they're fighting to protect feminism but consider polite complaints about this to be harassment. They support the Twitter abuse button; if they ever get suspended, they just organise large groups of fellow activists to pressure the service provider into reinstating their accounts.

* people like the blogger I've linked who object to this because it isn't strict enough on their enemies and won't allow them to insult and belittle people _they_ believe to be privileged, meaning the above-mentioned radfems, men who've been raped, and various other threats to _their_ form of feminism. Most of the influential Twitter users I follow are in group three. They're opposed to the new abuse button.

All three of these groups use near identical "if you're not happy with _our_
unproductive, abusive language, just unfollow/block and stop whinging"
arguments. Your group 3 is, so far as I can tell, nowhere to be seen.

~~~
Karunamon
>Your group 3 is, so far as I can tell, nowhere to be seen.

So where would you say HN sits?

~~~
makomk
I'd say HN sits several hundred miles away, watching the battle on the TV. I
hadn't seen HN express any opinion on the topic at all up until now, and it
doesn't look like we're likely to become a major player in this.

------
girvo
Leaving aside the Us vs Them mentality that has pervaded every side of the
current Twitter controversy, I see her point (and am trying really hard not to
take the comment about privileged homogeneous groups personally as a young
white male in tech. I'm an ex-heroin addict who lived on the streets in my
teens... but I still fall into this group, and I know that).

Automated systems are the only way to scale something like this to Twitter or
Facebook or YouTube's size... but an automated system can't ever work and
please all groups. I don't have a solution either. In response to a quote in
the article about the Twitter staff not being pro trans* and the like, my
initial thought was: should they be?

I need to explain that, lest someone misread my intent with that question.
Personally, I am very pro-tolerance, regardless of who you are or where you
came from; my past sort of made me stop judging others, to be honest. But
then, should Twitter staff be expected to be on the bleeding-edge of social
issues and tolerance advancement? How do they balance that with the fact that
(as pointed out in the article) the masses are the ones with the numbers, and
the masses might (and sometimes do) disagree with the more (in my opinion,
humane) liberal stance?

This is a very hard problem... and I can't think of any simple solutions,
short of hiring thousands of people to go through each and every report.

~~~
patmcguire
Anil Dash makes a pretty good case for doing just that. (It's blog-specific,
but close enough) [http://dashes.com/anil/2011/07/if-your-websites-full-of-
assh...](http://dashes.com/anil/2011/07/if-your-websites-full-of-assholes-its-
your-fault.html)

------
richardv
I've stopped reporting anything these days. The report system isn't there to
actually do anything... at least not in the disciplinary sense.

It's there exclusively to make the people who are reporting things feel like
they have some solution to things they disagree with.

\----

I recently reported two separate comments on Facebook.

The first comment was on a startup investor's profile picture after he
announced that he was getting married to his long-term boyfriend/partner
(about two weeks ago). The polite version of the comment basically went along
the lines
of saying that he would be "judged by allah" for his sins and should die.

The second person was a friend who commented on a picture of several football
supporters and one of the guys had a turban on. The comment said "I see a
dirty terrorist".

So I kicked my 'friend' (who was really just an associate anyway) from my list
and reported both comments to Facebook...

Both of these reports came back saying that, after review, the comments
hadn't broken any rules...

Apparently the Facebook staff who run the review procedures are the same
folks who leave comments on the cesspool over at YouTube...

~~~
GauntletWizard
And if they were to enforce your rules, next week we'd have a fantastic
article blasting Facebook for "Suppression of an Open Forum". We, as a
society, took a hard stance long ago: People can say what they want,
regardless of how much you whine and complain. Nobody has to listen, and
Twitter still has a "block" button, but when you get your knickers in a twist
you're usually giving the trolls what they wanted.

The other option is for those marginalized groups to be completely silenced.
You can't have it both ways. "Community Standards" that are just your
standards lead to tyranny.

From a profit standpoint: Is Facebook really going to give up the 17% of US
voters who think Obama is a dirty muslim just because it offends you?

~~~
steveklabnik
> People can say what they want, regardless of how much you whine and
> complain.

This is not true. "Fire in a movie theatre," etc.

~~~
hfsktr
People like to think that you can say whatever you want online but even there
you can't.

Any search combination of Facebook + teen/jail/sentence gives a pretty good
idea. I'd only seen a couple on the HN front page, but apparently there are a
lot of examples that don't make it that far (1).

For your fire example there are others that have to do with inciting violence
or putting others in danger. I am sure there are more examples.

(1) without reading them all it could be just linkbait or misinformation

~~~
true_religion
Whilst it's true that you can't say anything you want, should it really be up
to any corporation to determine whether saying X or Y is immoral?

If someone has said something that's illegal (e.g. threatening), then there
are legal consequences for that. If anything, we should beg the police to
become more proactive. Currently, the stance on most crime involving computers
is 'meh, it's too hard to catch them unless they're on Facebook using their
real names'.

~~~
hfsktr
I think that unless it's actually illegal then it is up to the corporation.
Corporations express opinions (morality) all the time (eg, Chick-fil-A/gay
marriage, Abercrombie & Fitch/fat people).

So if FB/Reddit/Twitter/$NextThing decided that anyone who ever posted
anything about $topic gets banned, people who wanted to use the site would
understand that and could decide not to.

If my ISP started filtering things that I could access though that would
probably get a different reaction from me.

As to the policing...some of the articles I pulled up when I did those
searches were ridiculous and I'd prefer if police didn't waste time and
resources on them. Of course there will be someone who thinks it's completely
within reason.

Short version: I don't care if $corp censors its platform based on whatever
it wants. I do care if ISP/government censor. Personally I find so little
offensive I have to wonder if I'm broken, normal, or just incredibly tolerant.

(sorry feels like I just said a lot of nothing)

------
speeder
Report buttons never work perfectly on large sites.

I know a certain large social network that is mostly dominated by feminists;
there the issue is exactly the reverse of what the author complains about:
complaints against feminists are ignored, while conservatives frequently get
banned outright.

------
detcader
People like EFF's Director for International Freedom of Expression Jillian C.
York hold the view that Facebook (and likely Twitter too) "should not be in
the business of censoring speech, even hate speech." [1] I tend to sympathize
-- if you set guidelines for speech on your website with any degree of
vagueness, you end up privileging [2] certain types of speech based on who the
moderators are. With platforms as populated as Twitter and Facebook, censoring
speech doesn't make sense unless it actually violates the law. Ideally a social
media website should emulate the real world in its governance of expression,
or else any criticism of Israeli government policy is anti-Semitic, any
criticism of Obama is racist, etc etc

[1]
[http://www.slate.com/blogs/xx_factor/2013/05/30/facebook_and...](http://www.slate.com/blogs/xx_factor/2013/05/30/facebook_and_hate_speech_the_company_should_not_be_in_the_business_of_censorship.html)

[2] apologies if you can't read the word without feeling threatened or
offended; it just fit there

~~~
pgeorgi
This sounds like a strong argument for the filter bubble to me: Keep those
people around but hellban them for individual potential readers (or even along
the social graph)

