Hacker News

"People are very overconfident in human ability despite overwhelming evidence we suck at predicting things and doing anything statistics related. Human error is just ignored or seen as an inevitable fact of life."

Programs don't program themselves. Algorithmic biases often reflect human biases. If we want people to accept technology and give us opportunities to pursue our visions of what technology can offer society, we need to be cognizant of ethical and moral challenges, especially when there is so much at stake. Yes, there are fields with regulatory and liability constraints, but I'm more worried about the fields where there isn't as much oversight and transparency.




> Algorithmic biases often reflect human biases

I've been doing this for a while, and I've literally never met a human who told an algorithm to overweight x[23] ("good looking"), x[48] ("is white"), and x[873] ("is wealthy"), where x is a 1,100-dimensional feature vector.
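To make the point concrete: in a fitted model the weights fall out of the data; nobody assigns them by hand. A minimal sketch with synthetic features (all dimensions, correlations, and numbers invented for illustration) shows how a weight on a proxy column can arise purely from correlations in the training data, without anyone choosing it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5  # hypothetical: 5 features, column 3 is a proxy for column 0

X = rng.normal(size=(n, d))
# The label depends only on feature 0, but feature 3 is correlated with it:
X[:, 3] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)
y = X[:, 0] + 0.1 * rng.normal(size=n)

# Ordinary least squares: no human picks a weight for column 3, and while
# the true driver (column 0) is present, column 3 gets roughly zero weight.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)

# Remove column 0 from the data and refit: the correlated proxy silently
# absorbs its signal, again without anyone instructing the model to use it.
w2, *_ = np.linalg.lstsq(np.delete(X, 0, axis=1), y, rcond=None)
print(w2)
```

The weight appears or disappears depending on what else is in the dataset, which is exactly why "who told the algorithm to do that" is the wrong framing on both sides of this argument.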

Algorithms do have biases, but they are almost always orthogonal to the human ones. Witness, for example, all the recent "we can fool deep learning image recognition systems" papers.

http://arxiv.org/abs/1412.1897

http://arxiv.org/abs/1312.6199
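The papers above fool networks by perturbing inputs along the model's own gradient. A toy sketch of the mechanism on a linear classifier (the weights and input here are made up, not taken from either paper; for a linear score w·x the input gradient is just w):

```python
import numpy as np

# Hypothetical linear classifier: sign(w @ x) is the predicted class.
w = np.array([1.0, -2.0, 0.5, 3.0])
x = np.array([0.2, 0.1, -0.3, 0.1])

score = w @ x  # positive: classified as class +1

# FGSM-style step: nudge every coordinate a tiny amount against the
# gradient's sign. The per-feature change is at most eps, yet the
# score moves by eps * sum(|w|), easily flipping the prediction.
eps = 0.2
x_adv = x - eps * np.sign(w)
print(score, w @ x_adv)  # small L-infinity change, opposite sign
```

The perturbation direction is a property of the learned weights, not of anything a human would recognize as a meaningful feature, which is the sense in which these failure modes are orthogonal to human biases.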

At this point I'm 90% sure you are a layperson who's never actually programmed such a system.


Not to mention biases in data collection. "Garbage in, garbage out" certainly applies, and the problem likely worsens as datasets grow, since collection biases don't average out with more data.
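A small sketch of that last point, with an invented population and an invented selection rule: when the collection process is biased, the estimator converges to the wrong value, so more data makes you confidently wrong rather than correct.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical population with true mean 50.
pop = rng.normal(50, 10, size=100_000)

# Selection bias in collection: the process only ever records values > 45
# (e.g. a survey that systematically misses part of the population).
biased = pop[pop > 45]

# The biased sample is large, but its mean is shifted well above the truth.
print(pop.mean(), biased.mean())
```

No amount of additional collection under the same biased process closes the gap; only fixing the sampling does.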




