OK, I see where the disconnect is. I think the best way to address it is with an example.

Many people today would object a priori to businesses using race as a factor to predict loan default risk, regardless of whether doing so makes the predictions more accurate. In many cases, using race as a factor WILL get you in trouble with the law (e.g., redlining is illegal in the US).

Please tell me, how would you predict what factors society will find objectionable in the future (like race today)?




My claim is very specific. If you tell an algorithm to predict loan default probabilities, and you give it inputs (race, other_factor), the algorithm will usually correct for the bias in other_factor.

I claimed a paperclip maximizer will maximize paperclips; I didn't claim it will somehow work out that the descendants of its creators really wanted it to maximize sticky tape instead.

Now, if you want an algorithm not to use race as a factor, that's also a math problem: just don't feed it race as an input and you've solved it. But if you refuse to use race and race is predictive, you can't get an optimal outcome. The world simply won't allow you to have everything you want.
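
To make both points concrete, here's a toy sketch (synthetic data, invented variable names like score and group; not a claim about any real lending model). A regression given both the biased proxy and the group label learns an offset that cancels the proxy's bias; the same regression blinded to the group label systematically over-predicts risk for the disadvantaged group.

  # Toy data: true risk depends on an unobserved "ability"; the observed
  # score is a proxy that is shifted down by 1.0 for group 1.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  rng = np.random.default_rng(0)
  n = 10_000
  group = rng.integers(0, 2, n)              # hypothetical protected attribute
  ability = rng.normal(0, 1, n)              # unobserved driver of risk
  risk = -ability + rng.normal(0, 0.1, n)    # true default risk
  score = ability - 1.0 * group              # biased proxy for ability

  full = LinearRegression().fit(np.column_stack([score, group]), risk)
  blind = LinearRegression().fit(score.reshape(-1, 1), risk)

  # With the group column the model learns coefficients near [-1, -1]:
  # the group term undoes the bias baked into the score.
  print("coefs with group:", full.coef_)

  # Without it, group-1 applicants look riskier than they really are
  # (mean over-prediction around +0.4 in this setup).
  err = blind.predict(score.reshape(-1, 1)) - risk
  print("mean over-prediction for group 1:", err[group == 1].mean())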

A fundamental flaw in modern left-wing thought is that it rejects analytical philosophy. Analytical philosophy requires us to think about our tradeoffs carefully, e.g.: how many unqualified employees is racial diversity worth? How many bad loans should we make in order to have racial equity?

These are uncomfortable questions; google how angry the phrase "lowering the bar" makes left-wing types. If you have an answer to these questions, you can simply encode it into the objective function of your ML system and get what you want.
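
For instance, a minimal sketch of what "encode it into the objective function" could look like, using synthetic data, a made-up penalty weight lam, and a demographic-parity gap as one possible stand-in for the equity term:

  import numpy as np
  from scipy.optimize import minimize

  rng = np.random.default_rng(1)
  n = 5_000
  group = rng.integers(0, 2, n)
  x = rng.normal(0, 1, n) + 0.8 * group                      # feature correlated with group
  y = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)   # synthetic defaults

  def loss(w, lam):
      p = 1 / (1 + np.exp(-(w[0] + w[1] * x)))               # predicted default prob
      bce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
      gap = p[group == 1].mean() - p[group == 0].mean()      # demographic-parity gap
      return bce + lam * gap ** 2                            # lam encodes your answer

  for lam in (0.0, 1.0, 10.0):
      w = minimize(loss, x0=np.zeros(2), args=(lam,)).x
      p = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
      print(f"lam={lam:5.1f}  accuracy={((p > .5) == y).mean():.3f}  "
            f"gap={p[group == 1].mean() - p[group == 0].mean():+.3f}")

At lam=0 you get the most accurate model and the largest gap; as lam grows the gap shrinks and accuracy falls. The exchange rate is an explicit number you chose, not something the algorithm can choose for you.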

Modern left-wing thought refuses to answer these questions and instead takes it on faith that multiple different objective functions can all be maximized simultaneously. But then machine learning systems come along, maximize one objective, and the others aren't maximized, in much the same way that faith healing doesn't work.

The solution here is to actually answer the uncomfortable questions and come up with a coherent ideology, not to double down on faith and declare reality to be "biased".


My claim was specific too: if a corpus is biased, as judged by evolving societal values, then the word embeddings learned from that corpus will necessarily be biased by those same standards, regardless of whether you think those values are rational or coherent.
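
One common way to see this, as a toy sketch (the four-dimensional vectors below are invented placeholders, not real embeddings): project word vectors onto a direction like he-she and look at where occupation words land. With real embeddings trained on a corpus whose co-occurrence statistics reflect societal biases, that projection is exactly where the skew shows up.

  import numpy as np

  emb = {                                       # hypothetical toy embeddings
      "he":       np.array([ 1.0, 0.1, 0.3, 0.0]),
      "she":      np.array([-1.0, 0.1, 0.3, 0.0]),
      "engineer": np.array([ 0.6, 0.8, 0.1, 0.2]),
      "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),
  }

  def cos(a, b):
      return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

  axis = emb["he"] - emb["she"]                 # a "he vs. she" direction
  for word in ("engineer", "nurse"):
      print(word, round(float(cos(emb[word], axis)), 3))
  # Positive = closer to "he", negative = closer to "she". If the training
  # corpus pairs occupations with one gender more often, these projections
  # shift accordingly, whether or not anyone endorses that association.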



