However, I'm very uncomfortable with sites like LiveLeak being fully censored on NZ ISPs. I was unable to access liveleak.com at all from a few hours after the event up to and including now.
It sets a bad precedent, and while I hope no one sees that video, the content itself is not illegal and we should not be censoring it as a kneejerk reaction. People who go to LiveLeak know what they're going to get.
The correct response is for the government to treat domestic terrorists and preachers of hate the same way that it treats foreign ones.
Anwar al-Awlaki and his son were the targets of US drone strikes for the same thing - radicalizing speech and inciting terrorist attacks. Yet here we have an entire ecosystem of alt-right radicalization inciting terrorist attacks, and the government doesn't lift a single finger to deal with it.
The right wing white supremacist problem, at least in the U.S., had been taken seriously; with the political change it has been taken less seriously by the current administration, for entirely predictable reasons. Whether the FBI takes it seriously enough is an open question, but when intelligence agencies publicly list threats to the U.S., the southern border is not on the list - right wing white supremacist hate groups are.
There is a very real issue that until there is planning or solicitation to commit violence, right wing white supremacy is legal freedom of association, and hate speech is still protected speech, so long as it's not in the gray area of "fighting words". I think it's completely reasonable for the FBI to infiltrate violence-prone hate groups by paying informants and putting agents on the inside. I also have no problem with standing warrants to wiretap the leaders and organizers of such groups. What they want is a race war. They want to destabilize the country. It's not a new thing.
The ADL and the Southern Poverty Law Center have been tracking hate groups for a long time, both left wing and right wing. It just so happens that right now it's overwhelmingly right wing hate groups out there. And it would not surprise me if the more violence-prone ones would love to see successful recruitment into left wing groups, because they can't really have a race war, or make real their romantic notions of a civil war 2.0, with just a bunch of right wing groups. They have to incite the left to attack, or they've got nothing.
Watch it. It represents the horrible truth of the elevated sense of Islamophobia and its tragic consequences. People shouldn't look away from this. Many of us have to deal with this on a daily basis. When it gets so out of control that it creeps up into the privileged world, people want to look away. Don't forget that today's global structure is based on widespread massacres and genocide. This extreme comes from overlooking a racist, bigoted status quo, much like an infestation comes from overlooking the first signs of insects. We may be looking away from your history too. Looking away is not going to change things. I don't care if it makes us feel bad. It's the truth. We need to deal with it because others have to.
Basically, most of us sit there and tolerate minor (and sometimes major) forms of racism and do nothing, only to act shocked when something like this happens. It's a psychological game played to avoid guilt and responsibility.
There's a difference between looking at it and saying we must do better and looking away.
The reason nobody actually does this is that bad actors will use it as a unit test to figure out how to get bad content onto your system.
When you are trying to build a secure websystem, the common advice from the tech community is to deny all invalid, unauthorized, malformed requests - with a generic "Request failed" page. Don't give the attacker any information that they can use to understand your system.
In the same breath, that same community completely disregards this best practice when it comes to securing social platforms.
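The "generic failure" practice above can be sketched in a few lines. This is a hypothetical illustration (the function and field names are mine, not from any real system): every failure path - unknown user, wrong password, locked account, malformed input - collapses into one indistinguishable response, so a probing attacker learns nothing about which check failed.

```python
def handle_login(username: str, password: str, users: dict) -> str:
    """Return a page body; every failure path yields the same message."""
    GENERIC_ERROR = "Request failed"  # no hint about *why* it failed
    try:
        record = users.get(username)
        if record is None:
            return GENERIC_ERROR  # unknown user: same message
        if record["password"] != password:
            return GENERIC_ERROR  # wrong password: same message
        if record.get("locked"):
            return GENERIC_ERROR  # locked account: same message
        return f"Welcome, {username}"
    except Exception:
        return GENERIC_ERROR  # malformed request: same message
```

The point of the design is that an attacker enumerating usernames or passwords gets the same "Request failed" either way - exactly the property that generic moderation responses ("your post was removed") share.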
The real reason nobody does this is because it requires a human in the system. And humans cost money. And, if your transaction cost isn't high enough, you can't pay for that human.
This is the real reason why none of these services will do anything about this.
I mean, you're correct that putting a human in the loop costs too much at scale, which is why it doesn't happen, and that sucks when you're an edge case that gets shafted. But that's tangential to the question of transparency. Even if there were people in the loop, they wouldn't be fully transparent about the actions they take, for exactly the reason the parent comment said: it would make it easier for the bad actors trying to exploit the system.
Yes, I know why the post was removed, but based on the stated no-nudity rule, the post should have been OK.
If we go one stage further and appeal (to the same algorithm), that too will presumably fail.
So what we are left with is a set of stated rules and a set of de facto rules that don't match up. I wouldn't call that transparency, except in a limited and meaningless sense.
Wikipedia's opening paragraph on transparency:
"Transparency, as used in science, engineering, business, the humanities and in other social contexts, is operating in such a way that it is easy for others to see what actions are performed. Transparency implies openness, communication, and accountability."
There is neither openness nor accountability in my example.
Yes, in a narrow sense it is transparent; in the wider (I would say most important) sense, it isn't.
She talked about Google’s mission to democratise data. It probably would’ve been interesting to ask her what they planned to do to make sure their censorship filters knew the difference between hateful terrorists and human rights groups.
Because if it can’t tell the difference between those two, then I don’t think it’s good enough to be allowed an editorial role on YouTube.
YouTube has (imo) two functions: an independent hosting platform (which can argue it is not appropriate for it to censor content) and a search engine, which implies a recommendation algorithm. This second part is inescapably making editorial decisions.
I don't think the Santa Clara Principles are sufficient at this point - they basically assume Facebook et al run a "town square".
I am not sure the "town square" defence is useful if you cannot exclude certain content reliably and effectively (definitely an unproven point)
I know this is heading down the Warren route - but I would be grateful for other thoughts.
Final note: I cannot conceive what twisted neurons make gunning down unarmed people seem like a good idea, let alone livestreaming it, but I will be unsurprised to find that the killer has live-streamed previous sick rubbish and got lots of likes and comments.
Doing that in a normal town square would not have built a bubble of morons who laugh at a guy pretending to kill Muslims until the guy actually goes and does it - it would have drawn the horrified opprobrium of a normal town. It's not a town square if the rest of the town does not see you.
I am not sure the "town square" defence is useful if you cannot exclude certain content reliably and effectively
Beyond what is already illegal speech, such as telling someone to commit a specific crime, the whole point of this EFF article is that "excluding certain content" does more harm than good.
But by indexing and then returning search results, you are choosing and selecting. Maybe being able to find "grot porn" or "that video of Muslims being killed" is a public good (!), but if it is, then I struggle to see how adverts and other monetisation schemes will not influence your decision to return it. The very act of making something findable is a curation decision - and curation implies responsibility.
Perhaps we will work around it by having Google publish its algorithm for search results - but that gets gnarly fast.
In short, I think webspace is like selling white banners and pens to a political rally - what people write on them is their business. But once you look at the banners and start to hand them out on street corners, you are engaging in political speech - and worse, speech someone else has written.
I used to work for Demon Internet, and our big USP (so we thought) was free webspace (10MB!!). If you published your work on there, you then needed to go through various cycles to get someone to come and read it - but Google and Facebook have domain-name-based leverage going on that means putting my stuff on Facebook makes it easier to be found by orders of magnitude - and that implies curation and promotion of speech.
In short - there is no "hands-off approach" to free speech that is not just hard disk space. And we and FAANG need to deal with that responsibility.
Edit: if you have the right to say awful things, you don't necessarily have the right to have those things promoted and published by some of the largest companies on the planet. But if they choose to edit your speech, they need to start obeying everyone's speech laws (globally impossible) - so perhaps they should be split into Elizabeth Warren's platforms, and that would presumably allow DuckDuckGo to run its requests across Google hardware and indexes on a free-competition basis...
8Chan is now blocked in NZ.