
An AI doesn't do "its own thing": it learns with the biases the researchers encoded in the model and, most importantly in this case, with the massive biases of the datasets.

Correcting is just steering a bias from one direction to another.




Bias is relative to a null hypothesis; you are just begging the question. Predictive power is the final arbiter.


> Predictive power is the final arbiter

But how do you measure that predictive power? Humans still have to build an evaluation set, and that evaluation set will be biased one way or the other; you cannot just pretend bias does not exist and hope for the best.
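
To make that concrete, here is a minimal sketch with a toy model and made-up data (nothing here is from a real benchmark): the same classifier gets a very different measured "predictive power" depending on how the evaluator chose to compose the eval set.

    # Toy illustration: eval-set composition determines the score you report.
    import random

    random.seed(0)

    def model(x):
        # Hypothetical classifier that predicts "A" almost all the time.
        return "A" if random.random() < 0.9 else "B"

    def accuracy(eval_set):
        return sum(model(x) == y for x, y in eval_set) / len(eval_set)

    # Two hand-built eval sets with different class balances -- the builder's choice.
    eval_a_heavy  = [(i, "A") for i in range(90)] + [(i, "B") for i in range(10)]
    eval_balanced = [(i, "A") for i in range(50)] + [(i, "B") for i in range(50)]

    print("accuracy on A-heavy eval set: ", accuracy(eval_a_heavy))   # looks strong
    print("accuracy on balanced eval set:", accuracy(eval_balanced))  # looks much weaker

Neither number is the "true" predictive power; each reflects the sampling decisions baked into the evaluation set.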



