
The main problem is that most other systems have their own biases. A typical suggestion I hear a lot goes along the lines of "if in doubt, let a human make the decision". If you think about ML in, say, a school context, it's likely that certain biases are baked in, but I'd argue that these biases tend to be more extreme when humans make decisions because they take fewer factors into account (this may not be true, but it's my working hypothesis). I think a decent example is "first names". There are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

It's a tough problem. I think being aware that biases exist in ML is a good first step.



> I think a decent example is "first names". There are certain names that result in worse overall grades during a normal school career, even if the teachers are aware of this bias (in Germany the poster child for this is "Kevin").

There is a possible causal link with names which goes beyond "children are being treated worse because of their name".



