"People are very overconfident in human ability despite overwhelming evidence we suck at predicting things and doing anything statistics related. Human error is just ignored or seen as an inevitable fact of life."
Programs don't program themselves, and algorithmic biases often reflect human biases. If we want people to accept technology and give us opportunities to pursue our visions of what technology can offer society, we need to be cognizant of ethical and moral challenges, especially when there is so much at stake. Yes, there are fields with regulatory and liability oversight, but I'm more worried about the fields where there isn't as much oversight and transparency.
I've been doing this for a while and I've literally never met a human who told an algorithm to overweight x[23] ("good looking"), x[48] ("is white") and x[873] ("is wealthy"), for x a 1,100-dimensional feature vector.
Algorithms do have biases, but they are almost always orthogonal to the human ones. Witness, for example, all the recent "we can fool deep learning image recognition systems" papers.
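To make the disagreement concrete: both sides can be right at once. Nobody hand-sets a weight on x[48] ("is white"), yet a model trained on historically biased labels will learn such a weight on its own. The following is a minimal, hypothetical sketch (synthetic data and feature names are invented, not from this thread): a plain logistic regression fit by gradient descent picks up a nonzero coefficient on a protected attribute simply because the training labels were biased.

```python
import numpy as np

# Hypothetical setup: no human tells the model to weight `protected`,
# but the historical labels it learns from were themselves biased.
rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                        # legitimate signal
protected = (rng.random(n) < 0.5).astype(float)   # e.g. group membership

# Biased historical outcomes: depend on skill AND on group membership.
logits = 1.5 * skill + 1.0 * protected - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Fit logistic regression with batch gradient descent (no sklearn needed).
X = np.column_stack([np.ones(n), skill, protected])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# The learned weight on `protected` is clearly nonzero, even though
# no one ever told the algorithm to use it.
print(w)
```

The point of the sketch: the human bias enters through the training labels, not through anyone typing a weight into the code, which is consistent with the "never met a human who told an algorithm to overweight x[48]" observation above.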