Algorithms of this sort will inevitably carry biases that stem, at least in part, from the biases, intentional or not, in the minds of those who build the code and gather the data to train it on. It's not unlike the real-life example of using the world around you to teach your kids what's good or bad, or who to trust and not trust. They will infer patterns you never intended, but your selection of exemplars communicates information that will be assimilated regardless of your intentions.


Potentially, though, these algorithms are a great source of pattern inference free of biological bias. That is, suppose every care has been taken to ensure that the algorithms, and the data they are trained on, are as free of human bias as possible. What do we do, or what should we think, when that code reveals patterns about some among us, or even all of us as a whole, that we do not find flattering? Will we still blame the makers, or will we finally take a hard look at who we are?
