
As always, it depends entirely on what you're doing with the predictions/estimates/inferences. This is a false dilemma.



When would you want to hack the AI?

Not a rhetorical question. Reading through the comment thread, I'm slowly shifting toward being on the fence, as opposed to holding a firm "don't hack the AI" position.

But rather than hacking the AI, why not just get rid of the AI? What is the point of the AI in the first place if you're going to hack it to get the results you want anyway?


> When would you want to hack the AI?

This is not a meaningful phrase - it adds literally nothing to the conversation but confusion. To bend over backwards to give you a reasonable answer: when you're interested in conditional effects.

Let's say you're interested in the risk of cancer associated with alcohol consumption. People who drink some are often found to have lower cancer rates than people who don't drink at all. Reasonable models adjust for wealth/income - estimating the association between alcohol and cancer risk CONDITIONING ON wealth/income changes the picture; the positive association becomes clear.

Adjusting for confounding is "removing bias": it changes the effect estimates.
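
For a concrete toy version of this (entirely made-up numbers, just to illustrate what "conditioning on" buys you): simulate data where wealth both increases drinking and lowers cancer risk. The crude comparison makes drinkers look lower-risk, while comparing within wealth strata recovers the positive association.

    # Purely illustrative simulation - every coefficient here is invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    wealth = rng.normal(size=n)                          # confounder
    drinks = rng.random(n) < 1 / (1 + np.exp(-wealth))   # wealthier people drink more

    # Assumed "truth": drinking raises cancer risk, wealth lowers it.
    logit = -2.0 + 0.5 * drinks - 1.0 * wealth
    cancer = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Crude (unadjusted) risk difference: drinkers come out looking LOWER risk.
    crude = cancer[drinks].mean() - cancer[~drinks].mean()

    # Conditioning on wealth: compare drinkers vs non-drinkers within wealth strata.
    strata = np.digitize(wealth, np.quantile(wealth, [0.25, 0.5, 0.75]))
    adjusted = np.mean([
        cancer[drinks & (strata == s)].mean() - cancer[~drinks & (strata == s)].mean()
        for s in range(4)
    ])

    print(f"crude risk difference:     {crude:+.4f}")    # negative
    print(f"within-stratum (adjusted): {adjusted:+.4f}")  # positive, by construction

Nothing about the data was filtered or rewritten; the adjusted estimate just asks a different (conditional) question of the same data.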

In a predictive context, using ensembles, NNs etc, the problems don't go away, they're just sometimes harder to detect (and they're dressed up in sexy marketing-speak like "AI").

Repeat after me three times:

"AI is not a magic truth telling oracle" "AI is not a magic truth telling oracle" "AI is not a magic truth telling oracle"


I meant in my example about undiagnosed lead exposure in Flint. Once you notice that the AI detects that people in Flint under-perform, why would you want to modify the AI so that it stops detecting that?

Your cancer example doesn't answer the question, which is a pity, because I mean it when I say it's not a rhetorical question. I honestly want to understand your point of view better. In the cancer example, we wouldn't go in and filter the training set to force the AI to think that rich people have higher cancer rates than they actually do, etc. But that's precisely what it seems like people want to happen with AI: they don't want us to condition on wealth/income; rather, they flat-out want us to filter the training set to force the AI to think group A and group B are equal when that's not what the data says.



