Abuse of the abuse button (rarlindseysmash.com)
22 points by steveklabnik on July 30, 2013 | hide | past | favorite | 24 comments


>I honestly do not expect a privileged homogenous team to actually be able to come up with a solution, because privileged groups tend come up with solutions that are best for privileged groups.

Leaving aside the issue of whether anyone buys into third wave feminism's definition of "privilege", this comes across as an indictment of every abuse-reporting system ever made. It adds nothing and suggests nothing; it only says "this sucks".

Actually, I take that back. It's impossible to leave that issue aside since you implicitly accept that definition to even make sense of this article.

Thing is, "abuse" is determined by the service provider, not the users. (And rightly so - or else you run into the problem mentioned where people flag something off and ruin someone else's day because their delicate sensibilities were offended.) Twitter and every other social communication site has a list of "thou shalt not's" which you are reporting against when you click on the "flag" button.


But apparently it's important that non-privileged people, as defined by feminists, be allowed to tell privileged people to kill themselves or go choke on their own vomit: http://stavvers.wordpress.com/2013/07/27/against-a-twitter-r...

To be honest, at this point I wish there was some way every major side of the argument would lose, because they're all pretty unpalatable.


Not really. I see three sides in this argument.

* The "deal with it" crowd that doesn't care who they offend or see why tone is important. (i.e. 4chan users)

* The radfem "offenders are literally hitler" types who go out of their way to give people a hard time over their choice of words, putting zero thought into intent (i.e. r/SRS users)

* The people who want to have informative and interesting debate without being hassled by group 2 or trolled by group 1 (i.e. HN users).

The first two groups can fuck right off, they are both a plague on constructive discourse and on the sites where they congregate. They also both tend to polarize the userbase and cause more problems by fighting with each other and dragging other boards into it (one can see this in action by the pro and anti SRS factions on Reddit).


I've been seeing a different three sides:

* the 4chan /b/ "deal with it" crowd

* a group of radfems like Cathy Brennan who think they should be able to insult and dox trans women with impunity (names, addresses, photos, jobs, the works) because they're fighting to protect feminism, but consider polite complaints about this to be harassment. They support the Twitter abuse button; if they ever get suspended, they just organise large groups of fellow activists to pressure the service provider into reinstating their accounts.

* people like the blogger I've linked who object to this because it isn't strict enough on their enemies and won't allow them to insult and belittle people they believe to be privileged, meaning the above-mentioned radfems, men who've been raped, and various other threats to their form of feminism. Most of the influential Twitter users I follow are in group three. They're opposed to the new abuse button.

All three of these groups use near identical "if you're not happy with our unproductive, abusive language, just unfollow/block and stop whinging" arguments. Your group 3 is, so far as I can tell, nowhere to be seen.


>Your group 3 is, so far as I can tell, nowhere to be seen.

So where would you say HN sits?


I'd say HN sits several hundred miles away, watching the battle on the TV. I hadn't seen HN express any opinion on the topic at all up until now, and it doesn't look like we're likely to become a major player in this.


> Actually, I take that back. It's impossible to leave that issue aside since you implicitly accept that definition to even make sense of this article.

I don't think this is true. The main thrust of the article is this:

You have two options for reporting abuse: automatic and not automatic. The automatic option is flawed because it makes the assumption that the feature itself won't be abused. The non-automatic option won't work because it doesn't scale and humans interpret rules in different ways.

No specific thoughts on privilege needed to make sense of that.


Part of the problem is a lot of abuse reporting systems are automated, so like in the case of YouTube videos, if you want something taken down you just get people to spam on the button. That's actually worse than no system at all, because the abuse reporting system is a vehicle for more powerful abuse. I think this is part of why you see posts like the OP about abuse report systems.

The abusers in this scenario are larger in number and have more access to resources, so it's only natural that any automated abuse report system will get turned against their victims instead of against them.
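The failure mode described above can be made concrete. Here is a toy sketch in Python (purely illustrative; this is not any real site's actual rule, and `is_hidden` and `REPORT_THRESHOLD` are hypothetical names) of why a bare report-count threshold gets turned against victims:

```python
# Hypothetical auto-moderation rule: hide any post once it collects
# REPORT_THRESHOLD reports. Nothing checks whether the reports are
# legitimate, so a coordinated group can take down anything simply
# by outnumbering the target.

REPORT_THRESHOLD = 5

def is_hidden(report_count: int) -> bool:
    """Naive rule: visibility depends only on raw report volume."""
    return report_count >= REPORT_THRESHOLD

# A lone harasser's post with 2 genuine reports stays up...
print(is_hidden(2))   # False
# ...while a brigade of 50 accounts can hide any victim's post.
print(is_hidden(50))  # True
```

Once removal is a pure function of report count, whoever can muster the most clicks wins, which is exactly the dynamic the parent comment describes.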


In my experience it's less a question of whether one buys into the idea of privilege and more one of whether one understands it or (perhaps willfully) misunderstands or only bothers to gain a partial grasp of the concept. It's hardly a tool native to third wave feminism; the most famous essay on the idea is chiefly about white privilege. [1] Anyways, why must one "implicitly accept" the idea to "make sense" of the article? Do we need to accept Marxism-Leninism to make sense of Das Kapital? I find it hard to believe that you've never understood any piece of writing that you don't already agree with.

[1] http://www.library.wisc.edu/EDVRC/docs/public/pdfs/LIReading...


Leaving aside the Us vs Them mentality that has pervaded every side of the current Twitter controversy, I see her point (and am trying really hard not to take the comment about privileged homogeneous groups personally as a young white male in tech. I'm an ex-heroin addict who lived on the streets in my teens... but I still fall into this group, and I know that).

Automated systems are the only way to scale something like this to Twitter's, Facebook's, or YouTube's size... but an automated system can't ever work and please all groups. I don't have a solution either. In response to a quote in the article about the Twitter staff not being pro trans* and the like, my initial thought was: should they be?

I need to explain that, lest someone misread my intent with that question. Personally, I am very pro-tolerance, regardless of who you are or where you came from; my past sort of makes you stop judging others, to be honest. But then, should Twitter staff be expected to be on the bleeding-edge of social issues and tolerance advancement? How do they balance that with the fact that (as pointed out in the article) the masses are the ones with the numbers, and the masses might (and sometimes do) disagree with the more (in my opinion, humane) liberal stance?

This is a very hard problem... and I can't think of any simple solutions, short of hiring thousands of people to go through each and every report.


Anil Dash makes a pretty good case for doing just that. (It's blog-specific, but close enough) http://dashes.com/anil/2011/07/if-your-websites-full-of-assh...


I've stopped reporting anything these days. The report system isn't there to actually do anything... at least not in the disciplinary sense.

It's there exclusively to make the people who are reporting things feel like they have some solution to things they disagree with.

----

I recently reported two separate comments on Facebook.

The first comment was on a startup investor's profile picture after he announced that he was getting married to his long-term boyfriend/partner (about two weeks ago). The polite version of the comment basically went along the lines of saying that he would be "judged by allah" for his sins and should die.

The second person was a friend who commented on a picture of several football supporters and one of the guys had a turban on. The comment said "I see a dirty terrorist".

So I kicked my 'friend' (who was really just an associate anyway) from my list and reported to Facebook...

Both of these reports came back saying that, after review, the comments hadn't broken any rules...

Apparently the Facebook staff who run the review procedures are the same folks who leave comments on the cesspool over at YouTube...


First world problems. I agree it isn't always fun to share the world with racists, bigots, and religious fanatics; but I am skeptical of the utility of hiding offensive speech. It might be better to let everyone see what they are.

Also, making trivial moderation requests takes time away from moderators who might be dealing with the really terrible things happening on the internet: https://theinternetoffendsme.wordpress.com/2013/04/09/the-re...


> I am skeptical of the utility of hiding offensive speech

I share that, but I am also really seduced by the idea of interacting in a space free of that kind of toxic garbage. The same way I prefer to go to restaurants that don't allow smoking.

If there's an appropriate place for offensive speech, I'll prefer to not be there. Maybe instead of "flag as abusive" the interaction should be nudging people into their appropriate cesspools.


This is actually one of the reasons /b/ still exists: if /b/ went away, they'd just post all over the rest of 4chan, and at least all the toxic waste can just be concentrated in one place.


That's exactly what I was thinking. "appropriate cesspool" is a pretty transparent /b/ reference.


And if they were to enforce your rules, next week we'd have a fantastic article blasting Facebook for "Suppression of an Open Forum". We, as a society, took a hard stance long ago: People can say what they want, regardless of how much you whine and complain. Nobody has to listen, and Twitter still has a "block" button, but when you get your knickers in a twist you're usually giving the trolls what they wanted.

The other option is for those marginalized groups to be completely silenced. You can't have it both ways. "Community Standards" that are just your standards lead to tyranny.

From a profit standpoint: Is Facebook really going to give up the 17% of US voters who think Obama is a dirty Muslim just because it offends you?


> People can say what they want, regardless of how much you whine and complain.

This is not true. "Fire in a crowded theatre," etc.


People like to think that you can say whatever you want online but even there you can't.

Any search combination of Facebook + teen/jail/sentence gives a pretty good idea. I'd only seen a couple on the HN front page, but apparently there are a lot of examples that don't make it that far (1).

For your fire example there are others that have to do with inciting violence or putting others in danger. I am sure there are more examples.

(1) without reading them all it could be just linkbait or misinformation


Whilst it's true that you can't say anything you want, should it really be up to any corporation to determine whether saying X or Y is immoral?

If someone has said something that's illegal (e.g. threatening), then there are legal consequences for that. If anything, we should beg the police to become more proactive. Currently, the stance on most crime involving computers is 'meh, it's too hard to catch them unless they're on Facebook using their real names'.


I think that unless it's actually illegal then it is up to the corporation. Corporations express opinions (morality) all the time (eg, Chick-fil-A/gay marriage, Abercrombie & Fitch/fat people).

So if FB/Reddit/Twitter/$NextThing decided that anyone who ever posted anything about $topic gets banned, people who wanted to use the site would understand that and could decide not to.

If my ISP started filtering things that I could access though that would probably get a different reaction from me.

As to the policing...some of the articles I pulled up when I did those searches were ridiculous and I'd prefer if police didn't waste time and resources on them. Of course there will be someone who thinks it's completely within reason.

Short version: I don't care if $corp censors its own platform based on whatever it wants. I do care if an ISP or government censors. Personally I find so little offensive I have to wonder if I'm broken, normal, or just incredibly tolerant.

(sorry feels like I just said a lot of nothing)


Report buttons never work perfectly in large sites.

I know a certain large social network that is mostly dominated by feminists, where the issue is exactly the reverse of what the author here complains about: complaints against feminists are ignored, while conservatives frequently get banned outright.


People like EFF's Director for International Freedom of Expression Jillian C. York hold the view that Facebook (and likely Twitter too) "should not be in the business of censoring speech, even hate speech." [1] I tend to sympathize -- if you set guidelines for speech on your website with any degree of vagueness, you end up privileging [2] certain types of speech based on who the moderators are. With platforms as populated as Twitter and Facebook, censoring speech doesn't make sense unless it actually violates the law. Ideally a social media website should emulate the real world in its governance of expression, or else any criticism of Israeli government policy is anti-Semitic, any criticism of Obama is racist, etc etc.

[1] http://www.slate.com/blogs/xx_factor/2013/05/30/facebook_and...

[2] apologies if you can't read the word without feeling threatened or Offended, it just fit there


This sounds like a strong argument for the filter bubble to me: Keep those people around but hellban them for individual potential readers (or even along the social graph)



