> What happens in the case that a particular culture is more hateful? Do we just disregard any data that indicates socially unacceptable bias?

That's not what was happening. If you read the link, you'll see the problem is that the AI/ML system was mis-classifying non-hateful speech as hateful, just because of the dialect being used.

If it were the case that the culture was more hateful, then it wouldn't have been considered "mis-classification."

> You're completely missing my point.

I'm not missing your point; it's just not a well-reasoned or substantiated point. Here were your points:

> There's simply no indication that these aren't statistically valid priors.

Every single example I posted indicates that this is not what was happening. You just have to read them.

> And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned.

You say that, and yet you keep posting your point without any evidence whatsoever. Meanwhile, every single example I posted cited peer-reviewed, published scientific evidence.

> This is all based on the unfounded conflation between equality of outcome and equality of opportunity, and the erasure of evidence of genes and culture playing a role in behavior and life outcomes.

Again, the peer-reviewed, published literature disagrees. Reading it explains why calling this an unfounded conflation is incorrect.
