> Any system that claims to work on that sort of input is almost certainly picking up socio-economic status of different races, or something similar, with no causal predictive power.
I wonder which will have more predictive power, the version where you let the AI do its thing or the version where you intervene to correct for things that are almost certainly wrong according to you.
An AI doesn't do "its thing": it learns with the bias the researcher encoded in the model and, most importantly in this case, with the massive bias of the datasets.
Correcting just steers the bias from one direction to another.
But how do you measure that predictive power? Humans still have to build an evaluation set, and that evaluation set will be biased one way or another; you cannot just pretend bias does not exist and hope for the best.
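To make that point concrete, here is a minimal, entirely hypothetical sketch: a synthetic population where the label really depends on one "causal" feature, plus a spurious proxy feature (standing in for something like socio-economic status). A model that keys on the proxy looks *better* than the causal model when the evaluation set is sampled in a way that correlates the proxy with the label, and collapses to chance on a representative evaluation set. All names (`make_population`, `causal_model`, `proxy_model`) and the 0.8/0.9 correlation numbers are made up for illustration.

```python
import random

random.seed(0)

def make_population(n, proxy_label_corr):
    """Each example is (causal, proxy, label). The label follows the
    causal feature 80% of the time; the proxy matches the label with
    probability proxy_label_corr, modelling a spurious association."""
    data = []
    for _ in range(n):
        causal = random.random() < 0.5
        label = causal if random.random() < 0.8 else not causal
        proxy = label if random.random() < proxy_label_corr else not label
        data.append((causal, proxy, label))
    return data

def accuracy(model, data):
    return sum(model(c, p) == y for c, p, y in data) / len(data)

causal_model = lambda c, p: c   # predicts from the causal feature only
proxy_model  = lambda c, p: p   # predicts from the spurious proxy only

# Two eval sets: one where sampling ties the proxy to the label,
# one where the proxy is pure noise.
biased_eval = make_population(10_000, proxy_label_corr=0.9)
fair_eval   = make_population(10_000, proxy_label_corr=0.5)

print(accuracy(causal_model, biased_eval), accuracy(proxy_model, biased_eval))
print(accuracy(causal_model, fair_eval), accuracy(proxy_model, fair_eval))
```

On the biased eval set the proxy model wins (~0.9 vs ~0.8 accuracy); on the representative one it drops to ~0.5 while the causal model stays at ~0.8. Which model "has more predictive power" is decided by whoever built the evaluation set.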