Hacker News
Our Thoughts on the New Zealand Massacre (eff.org)
52 points by DiabloD3 39 days ago | 25 comments

I am from New Zealand. This was a terrible tragedy, and the appropriate response to this horrible video is to not watch, and not share. It's not only glorifying an awful act, but it's potentially traumatic.

However, I'm very uncomfortable with sites like LiveLeak being fully censored on NZ ISPs. I was unable to access liveleak.com at all from a few hours after the event up to and including now.

It sets a bad precedent, and while I hope no one sees that video, the content itself is not illegal and we should not censor it as a knee-jerk reaction. People who go to LiveLeak know what they're going to get.

Are you sure LiveLeak was being blocked? Since a few days before this crime, and especially since then, I have been getting very frequent 503 errors from the site. I can't watch many videos, scroll far through videos, perform searches, etc. It seems to be an issue with LiveLeak, not any blocking on the part of my ISP. I'm in Hungary and have not heard anything about any videos being blocked here.

> I am from New Zealand. This was a terrible tragedy, and the appropriate response to this horrible video is to not watch, and not share. It's not only glorifying an awful act, but it's potentially traumatic.

The correct response is for the government to treat domestic terrorists and preachers of hate the same way it treats foreign ones.

Anwar Al-Awlaki and his son were the targets of US drone strikes for the same thing: radicalizing speech and inciting terrorist attacks. Yet here we have an entire ecosystem of alt-right radicalization inciting terrorist attacks, and the government doesn't lift a single finger to deal with it.

Right wing white supremacist terrorism is clearly international. The New Zealand shooter's manifesto contains the same grievance, the fear of invasion and replacement of white people, expressed by Anders Breivik and the Squirrel Hill shooter. The POTUS likewise uses the same invader grievance, playing down right wing white supremacist terrorism as a "small number of people" when asked if it's a growing problem, even though the FBI has considered it a growing problem for over a decade. In the same statement he repeated his hyping of problems on the southern border, calling it an invasion, despite a roughly 40-year low in southern-border interdictions and crime, including illegal border crossings. The invader grievance from POTUS is not lost on people who study white supremacy in the U.S., nor is it lost on actual white supremacists. Nor was his five-year racist lie of birtherism. And even while ranting about violence committed by Muslims, POTUS cut Countering Violent Extremism funds that focused on mitigating white supremacists and neo-Nazis through de-radicalization therapy.

So the right wing white supremacist problem, at least in the U.S., has been taken seriously in the past, but with political change it has been taken less seriously by the current administration, for entirely predictable reasons. Whether the FBI takes it seriously enough is an open question, but when intelligence agencies publicly list threats to the U.S., the southern border is not on the list; right wing white supremacist hate groups are.

There is a very real issue that until there is planning or solicitation to commit violence, right wing white supremacy is legal freedom of association, and hate speech is still protected speech so long as it's not in the gray area of "fighting words". I think it's completely reasonable for the FBI to infiltrate violence-prone hate groups by paying informants and putting agents on the inside. I also have no problem with enduring wiretap warrants on the leaders and organizers of such groups. What they want is a race war. They want to destabilize the country. It's not a new thing.

The ADL and the Southern Poverty Law Center have been tracking hate groups for a long time, both left wing and right wing. It just so happens that right now it's overwhelmingly right wing hate groups out there. And it would not surprise me if the more violence-prone ones would love to see successful recruitment by left wing groups, because they can't really have a race war, or make real their romantic notions of a civil war 2.0, with just a bunch of right wing groups. They have to incite the left to attack, or they've got nothing.

> and the appropriate response to this horrible video is to not watch.

Watch it. It represents the horrible truth of the elevated sense of Islamophobia and its tragic consequences. People shouldn't look away from this. Many of us have to deal with this on a daily basis. When it gets so out of control that it creeps up into the privileged world, people want to look away. Don't forget that today's global structure is based on widespread massacres and genocide. This extreme comes from overlooking a racist, bigoted status quo, much like an infestation comes from overlooking the first signs of insects. We may be looking away from your history too. Looking away is not going to change things. I don't care if it makes us feel bad. It's the truth. We need to deal with it because others have to.

Basically, most of us sit there and tolerate minor (and sometimes major) forms of racism and do nothing, only to act shocked when something like this happens. It's a psychological game played to avoid guilt and responsibility.


Please keep grandiose ideological rhetoric off this site. It drowns out intellectual curiosity, which is what HN exists for.


While I don't disagree with your sentiment, I would love for you to elaborate a bit and be more specific instead of painting such broad strokes without breaking down examples. It will help the next reader, particularly readers with little context, parse the message. Otherwise this comment will be read, nodded at, and forgotten.


> I'd say the proponents of multiculturalism are the ones looking away from history...

There's a difference between looking at it and saying we must do better and looking away.


Please don't repost flagkilled comments. Maybe not all of them have been flagged for legit reasons, but most have. If a comment is dead that you believe shouldn't be dead, you can vouch for it by clicking on its timestamp, then clicking 'vouch' at the top of its page. Or you can email us at hn@ycombinator.com.

Christchurch +1

So thankful organizations like the EFF exist. If you work for a FAANG please take the Santa Clara Principles to heart and push them up through your ranks.

> give notice to users who’ve had something removed about what was removed, under what rules; and

The reason nobody actually does this is that bad actors will use it as a unit test to figure out how to get bad content onto your system.

When you are trying to build a secure web system, the common advice from the tech community is to deny all invalid, unauthorized, or malformed requests with a generic "Request failed" page. Don't give the attacker any information they can use to understand your system.

In the same breath, that same community completely disregards this best practice when it comes to the security of social platforms.
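To make the "generic failure" practice concrete, here is a minimal sketch (the function and response fields are hypothetical, not from any particular framework): every failure mode, whatever its internal cause, collapses into the same opaque response, so a probing attacker learns nothing about which rule they tripped.

```python
# Hypothetical sketch of the "deny with a generic page" practice:
# log the specific reason server-side, but never leak it to the caller.

GENERIC_ERROR = {"status": 400, "body": "Request failed"}

def handle_request(authorized: bool, well_formed: bool, allowed: bool) -> dict:
    """Collapse all failure modes into one indistinguishable response."""
    if not (authorized and well_formed and allowed):
        # An operator-facing log could record which check failed here;
        # the client-facing response deliberately does not say.
        return GENERIC_ERROR
    return {"status": 200, "body": "OK"}
```

An unauthorized request and a blocked-content request produce byte-identical responses, which is exactly why it frustrates the "unit test" probing described above.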

> The reason nobody actually does this is that bad actors will use it as a unit test to figure out how to get bad content onto your system.

The real reason nobody does this is because it requires a human in the system. And humans cost money. And, if your transaction cost isn't high enough, you can't pay for that human.

This is the real reason why none of these services will do anything about this.

I disagree. There's no technical reason why an automated system couldn't be fully transparent to the end user about the criteria on which it is acting. That's not the reason why.

I mean, you're correct that putting a human in the loop costs too much at scale, which is why it doesn't happen, and that sucks when you're an edge case that gets shafted. But that's tangential to the question of transparency. Even if there were people in the loop, they wouldn't be fully transparent about the actions they take, for exactly the reason the parent comment said: it would make it easier for the bad actors trying to exploit the system.

If, for example, I post a picture of myself in a skin-coloured top, it gets automatically flagged as nudity, and I'm notified as such, is that really transparency?

Yes, I know why the post was removed, but based on the stated no-nudity rule, the post should have been OK.

If we go one stage further and appeal (to the same algorithm), that too will presumably fail.

So what we are left with is a set of stated rules and a set of de facto rules that don't match up. That I wouldn't call transparency, except in a limited and meaningless sense.

What's opaque about the situation you're describing? Everyone understands what happened and why. What other definition of transparent is there?

The rules as described and as implemented don't match. Saying you're doing something, when that something makes no sense and is unreasonable isn't transparency.

Wikipedia's opening paragraph on transparency:

"Transparency, as used in science, engineering, business, the humanities and in other social contexts, is operating in such a way that it is easy for others to see what actions are performed. Transparency implies openness, communication, and accountability. "[1]

There is neither openness nor accountability in my example.

Yes, in a narrow sense it is transparent; in the wider (and, I would say, most important) sense, it isn't.

[1] https://en.m.wikipedia.org/wiki/Transparency_(behavior)

The human is not required for the transparency but for the possibility of appeal (the last of the three bullet points).

That's not the bullet point we were talking about, though.

I recently saw the head of Google's Nordic cloud division give a keynote speech at an event for public digitisation in Denmark called Offentlig Digitalisering.

She talked about Google's mission to democratise data. It would've been interesting to ask her what they planned to do to make sure their censorship filters knew the difference between hateful terrorists and human rights groups.

Because if it can’t tell the difference between those two, then I don’t think it’s good enough to be allowed an editorial role on YouTube.

My 0.02 c

YouTube has (imo) two functions: an independent hosting platform (which can argue it is not appropriate for it to censor content) and a search engine, which implies a recommendation algorithm. This second part is inescapably making editorial decisions.

I don't think the Santa Clara principles are sufficient at this point; they basically assume Facebook et al. run a "town square".

I am not sure the "town square" defence is useful if you cannot exclude certain content reliably and effectively (definitely an unproven point).

I know this is heading down the Warren route, but I would be grateful for other thoughts.

Final note: I cannot conceive what twisted neurons make gunning down unarmed people seem like a good idea, let alone livestreaming it, but I will be unsurprised to find that the killer has live-streamed previous sick rubbish and got lots of likes and comments.

Doing that in a normal town square would not have built a bubble of morons who laugh at a guy pretend-killing Muslims until the guy actually goes and does it; it would have drawn the horrified opprobrium of a normal town. It's not a town square if the rest of the town does not see you.

What's the 'Warren' route?

> I am not sure the "town square" defence is useful if you cannot exclude certain content reliably and effectively

Beyond speech that is already illegal, such as directing someone to commit a specific crime, the whole point of this EFF article is that 'excluding certain content' does more harm than good.

If you are providing webspace, then not knowing or caring what is stored there is a defence (but that defence is limited the minute we say "but if it's illegal, you should have known").

But by indexing and then returning search results you are choosing and selecting. Maybe being able to find "grot porn" or "that video of Muslims being killed" is a public good (!), but if it is, then I struggle to see how adverts and other monetisation schemes will not influence your decision to return it. The very act of making something findable is a curation decision, and curation implies responsibility.

Perhaps we will work around it by having Google publish its algorithm for search results, but that gets gnarly fast.

In short, I think webspace is like selling white banners and pens to a political rally: what people write on them is their business. But once you look at the banners and start to hand them out on street corners, you are engaging in political speech, and worse, speech someone else has written.

I used to work for Demon Internet, and our big USP (so we thought) was free webspace (10MB!!). If you published your work there, you then needed to go through various cycles to get someone to come and read it. But Google and Facebook have domain-name-based leverage going on, which means putting my stuff on Facebook makes it orders of magnitude easier to be found, and that implies curation and promotion of speech.

In short: there is no "hands off" approach to free speech that is anything more than hard disk space. And we and the FAANGs need to deal with that responsibility.

Edit: if you have the right to say awful things, you don't necessarily have the right to have those things promoted and published by some of the largest companies on the planet. But if they choose to edit your speech, they need to start obeying everyone's speech laws (globally impossible), so perhaps they should be split into Elizabeth Warren's "platforms", and that would presumably allow DuckDuckGo to run its requests across Google hardware and indexes on a free-competition basis...

Just looked at 8chan's Twitter account. They've still got the shooter's preferred account image in their header (the crass Australian wearing a hat).


8chan is now blocked in NZ.

I just checked and it’s still there.
