> Any system that claims to work on that sort of input is almost certainly picking up socio-economic status of different races, or something similar, with no causal predictive power.

I wonder which will have more predictive power: the version where you let the AI do its thing, or the version where you intervene to correct for things that are, according to you, almost certainly wrong.




An AI doesn't do "its thing". It learns with the bias the researcher encoded in the model and, most importantly in this case, with the massive bias of the datasets.

"Correcting" is just steering the bias from one direction to another.


Bias is relative to a null hypothesis; you are just begging the question. Predictive power is the final arbiter.


> Predictive power is the final arbiter

But how do you measure that predictive power? Humans have to build an evaluation set, and that evaluation set will itself be biased one way or another. You cannot just pretend bias does not exist and hope for the best.
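That dependence of "measured predictive power" on who builds the evaluation set can be shown with a toy sketch. Everything below is hypothetical: a model that happens to be accurate on one group and not another, scored against two evaluation sets that differ only in group composition.

```python
import random

random.seed(0)

# Hypothetical model: labels the true class (x % 2) correctly 90% of the
# time for group "A" but only 60% of the time for group "B".
def model_predict(x, group):
    p = 0.9 if group == "A" else 0.6
    return x % 2 if random.random() < p else 1 - (x % 2)

def accuracy(eval_set):
    correct = sum(model_predict(x, g) == x % 2 for x, g in eval_set)
    return correct / len(eval_set)

# Same task, same model -- only the group mix of the eval set changes.
balanced = [(i, "A" if i < 500 else "B") for i in range(1000)]  # 50/50
skewed   = [(i, "A" if i < 900 else "B") for i in range(1000)]  # 90/10

acc_balanced = accuracy(balanced)
acc_skewed = accuracy(skewed)
```

The skewed evaluation set reports a noticeably higher accuracy for the exact same model, so "predictive power" is not a neutral final arbiter: it inherits whatever sampling choices went into the benchmark.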




