Hacker News
[flagged]
denzil_correa on June 26, 2016



Biases will definitely exist in algorithms of this sort, at least in part as a result of the biases, intentional or not, in the minds of those writing the code and gathering the data to train it on. It's not unlike using the world around you to teach your kids what's good or bad, or whom to trust and not trust. They will infer patterns you may not have intended. But your selection of data to use as exemplars communicates information that will be assimilated regardless of your intentions.

But.

The algorithms are, potentially, a great source of pattern inference free of biological bias. That is, if all care has been taken to ensure that the algorithms we create, and the data they are trained on, are as free of human bias as possible, what do we do, and what should we think, when that code reveals patterns about some among us, or even all of us as a whole, that we do not find flattering? Will we still blame the makers, or will we finally take a hard look at who we are?


My AI and ML classes were majority Asian. But I guess they don't count since that doesn't fit the narrative for some people.


What an offensive smear piece, imho! Though I guess it resonates with a hyper racial-identity political worldview, which I reject largely because the Nazis, et al., were the originators of this "born with an invariant identity due to your 'race'" view.





