
> give notice to users who’ve had something removed about what was removed, under what rules; and

The reason nobody actually does this is that bad actors will use it as a unit test to figure out how to get bad content onto your system.

When you are trying to build a secure web system, the common advice from the tech community is to deny all invalid, unauthorized, or malformed requests with a generic "Request failed" page. Don't give the attacker any information they can use to understand your system.

In the same breath, that same community completely disregards this best practice when it comes to the security of social platforms.
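
To make that concrete, here's a minimal sketch of the "fail generically" practice in Python, using Flask. The route and the auth check are hypothetical placeholders for illustration, not anyone's real system:

    from flask import Flask, abort, request

    app = Flask(__name__)

    def is_authorized(req):
        # Hypothetical auth check, for illustration only.
        return req.headers.get("X-Api-Key") == "expected-key"

    @app.route("/submit", methods=["POST"])
    def submit():
        # Internally we know exactly which check failed...
        if not is_authorized(request) or not request.is_json:
            abort(400)
        return "ok"

    @app.errorhandler(400)
    @app.errorhandler(401)
    @app.errorhandler(404)
    def generic_failure(err):
        # ...but every failure mode gets one indistinguishable
        # response, so a probing attacker can't tell which check
        # tripped.
        return "Request failed", 400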




> The reason nobody actually does this is that bad actors will use it as a unit test to figure out how to get bad content onto your system.

The real reason nobody does this is that it requires a human in the system. And humans cost money. And if the value of each transaction isn't high enough, you can't pay for that human.

This is the real reason none of these services will do anything about it.


I disagree. There's no technical reason why an automated system couldn't be fully transparent to the end user about the criteria on which it is acting. That's not the reason why.

I mean, you're correct that putting a human in the loop costs too much at scale, which is why it doesn't happen, and that sucks when you're an edge case that gets shafted. But that's tangential to the question of transparency. Even if there were people in the loop, they wouldn't be fully transparent about the actions they take, for exactly the reason the parent comment said: it would make it easier for the bad actors trying to exploit the system.
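
That said, there's nothing technically hard about an automated action carrying its own justification. A minimal sketch of what a transparent removal notice could look like (all names here are hypothetical, not any real platform's API):

    from dataclasses import dataclass

    @dataclass
    class RemovalNotice:
        content_id: str
        rule_id: str      # e.g. "no-nudity"
        rule_text: str    # the published rule, verbatim
        classifier: str   # which model or heuristic fired
        score: float      # its confidence

    def notify_user(n: RemovalNotice) -> str:
        # Tell the user what was removed, under which rule,
        # and what flagged it.
        return (f"Your post {n.content_id} was removed under rule "
                f"'{n.rule_id}' ({n.rule_text}), flagged by "
                f"{n.classifier} with confidence {n.score:.2f}.")

Whether platforms *want* to expose that much is the parent's point about handing bad actors a test oracle; the cost argument is separate.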


If, for example, I post a picture of myself in a skin-coloured top that gets automatically flagged as nudity, and I'm notified as such, is that really transparency?

Yes, I know why the post was removed, but based on the stated no-nudity rule, the post should have been OK.

If we go one stage further and appeal (to the same algorithm), that too will presumably fail.

So what we are left with is a set of stated rules and a set of de facto rules that don't match up. I wouldn't call that transparency, except in a limited and meaningless sense.


What's opaque about the situation you're describing? Everyone understands what happened and why. What other definition of transparent is there?


The rules as described and the rules as implemented don't match. Saying you're doing something isn't transparency when that something makes no sense and is unreasonable.

Wikipedia's opening paragraph on transparency:

"Transparency, as used in science, engineering, business, the humanities and in other social contexts, is operating in such a way that it is easy for others to see what actions are performed. Transparency implies openness, communication, and accountability. "[1]

There is neither openness nor accountability in my example.

Yes, in a narrow sense it is transparent; in the wider (and, I would say, more important) sense, it isn't.

[1] https://en.m.wikipedia.org/wiki/Transparency_(behavior)


The human is not required for the transparency, but for the possibility of appeal (the last of the three bullet points).


That's not the bullet point we were talking about, though.



