Hacker News | mspeedy's comments

Stickers are more than something to sell to consumers: they can be advertisements used to target a gaming audience. Skype and Facebook have done this, offering free sticker packs for upcoming movies (e.g. Inside Out). A pack of stickers or a theme featuring characters from a game like Overwatch could potentially bring in revenue from both Blizzard and players.


According to the wiki, it was discriminatory at the time because it generally meant that only union workers would get federal construction jobs, and unions did not admit African-Americans. As this is no longer the case with unions, it should in theory no longer be discriminatory - just wasteful.


I received the same error, but refreshing the page took me straight there.


> if you took into account enough circumstances (e.g. single parent, school district, income level, parents' wealth) you'd be able to remove race from your model and still arrive to the "equal opportunity" result.

The problem is, when given access to a large number of classifiers, some of which have inevitably been affected by a pre-existing racial bias, a black box machine learning algorithm will likely become discriminatory as well if race is not in some way represented and equalized.

For instance, many justice systems in the U.S. use machine learning software to estimate the likelihood that a criminal will reoffend, and use that prediction to determine sentencing. Race is never used explicitly as a classifier, yet the software ended up significantly more likely to rate black defendants as high risk for reoffending [1]. Classifiers like "had parents with previous criminal convictions" can be misleading when black people are more likely than white people to be convicted for the same crime. It doesn't mean that the white person's parents didn't engage in criminal activity or other reprehensible behavior that might cause their child to become a violent repeat offender - just that they were able to get away with it more easily because of a biased system.

Machines end up just as biased as the data they've been trained on, so if we are going to use computers to judge things that have such a significant impact on people's lives, we can't risk racism slipping through the cracks.
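As a toy illustration of the proxy effect described above (entirely made-up numbers and feature names, not the actual COMPAS data): suppose enforcement is biased, so a "parent has a conviction" feature is recorded more often for one group even when underlying behavior rates are identical. A model that never sees race still inherits the gap:

```python
# Toy sketch (invented rates, not real data): a model that never sees race can
# still produce racially skewed predictions when a proxy feature carries the
# bias for it.
import random

random.seed(0)

BASE_RATE = 0.30                     # same underlying parental-crime rate in both groups
CONVICTION_RATE = {0: 0.4, 1: 0.9}   # biased enforcement: group 1 prosecuted more often

def parent_has_conviction(group):
    """Proxy feature: a conviction is recorded only if the crime occurred AND was prosecuted."""
    committed = random.random() < BASE_RATE
    return committed and random.random() < CONVICTION_RATE[group]

# A trivial "model" that flags anyone whose parent has a conviction as high risk.
# Any learner handed this feature would pick up the same signal.
def flag_rate(group, n=10_000):
    return sum(parent_has_conviction(group) for _ in range(n)) / n

print(f"group 0 flagged high-risk: {flag_rate(0):.1%}")  # analytically ~ 30% * 0.4 = 12%
print(f"group 1 flagged high-risk: {flag_rate(1):.1%}")  # analytically ~ 30% * 0.9 = 27%
```

The behavior rates are identical by construction; only the recording of the proxy differs, yet the flag rates diverge by more than a factor of two.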

[1] https://www.propublica.org/article/machine-bias-risk-assessm...


> The problem is, when given access to a large number of classifiers, some of which have inevitably been affected by a pre-existing racial bias, a black box machine learning algorithm will likely become discriminatory as well if race is not in some way represented and equalized.

This is simply not true. Black box machine learning algorithms tend to correct for bias in their inputs. Insofar as they do systematically deliver wrong answers, that is actually called "variance," and it has no particular sign: it's just as likely to be biased in favor of $protected_class as against that class.

https://www.chrisstucchio.com/blog/2016/alien_intelligences_...

Also, you do know that Pro Publica's R script actually found no bias, right? The bias was actually in the selection of anecdotes in their article, which obscured the fact that their statistical analysis could not reject the null hypothesis.

https://www.chrisstucchio.com/blog/2016/propublica_is_lying....
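Whether the groups' error rates differ by more than chance is ultimately a statistical question. A minimal sketch of the standard two-proportion z-test one would run on false-positive rates (the counts below are invented for illustration, not ProPublica's figures):

```python
# Toy sketch of the kind of check at issue: compare false-positive rates
# (non-reoffenders flagged high-risk) between two groups and test whether the
# difference is statistically significant. Counts are made up for illustration.
from math import sqrt

# Hypothetical counts: (flagged high-risk, total) among people who did NOT reoffend.
group_a = (120, 1000)   # 12% false-positive rate
group_b = (180, 1000)   # 18% false-positive rate

def two_proportion_z(a, b):
    """Standard two-proportion z-test statistic (pooled variance)."""
    (xa, na), (xb, nb) = a, b
    pa, pb = xa / na, xb / nb
    pooled = (xa + xb) / (na + nb)
    se = sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))
    return (pb - pa) / se

z = two_proportion_z(group_a, group_b)
print(f"z = {z:.2f}")   # z = 3.76; |z| > 1.96 rejects equal rates at the 5% level
```

With small or noisy samples the same observed gap can fail to reach significance, which is exactly the "could not reject the null hypothesis" point at issue.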

