When I interviewed John D. Cook for a Profile in Computational Imagination, I asked him a similar question. Here is his answer:

John: One danger that I see is algorithms without a human fail-safe. So you could have false positives, for example, in anti-terrorist algorithms. And then there’s some twelve-year-old girl that’s arrested for being a terrorist because of some set of coincidences that set off an algorithm, which is ridiculous. Something more plausible would be more dangerous, right? I think the danger could increase as the algorithms get better.

Mike: Because we start to trust them so much, because they’ve been right so often?

John: Right. If an algorithm is right half the time, it’s easy to say, well, that was a false positive. If an algorithm is usually right--if it’s right ninety-nine percent of the time--that makes it harder when you’re in the one percent. And the cost of these false positives is not zero. If you’re falsely accused of being a terrorist, it’s not as simple as just saying, oh no, that’s not me. Move along, nothing to see here. It might take you months or years to get your life back.
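John's point has a well-known statistical core: the base-rate effect. A quick sketch (all numbers here are hypothetical, chosen only for illustration) shows that even an algorithm that is "right ninety-nine percent of the time" flags mostly innocent people when what it screens for is rare:

```python
# Illustrative sketch (hypothetical numbers): even a 99%-accurate
# screening algorithm produces mostly false positives when the thing
# it screens for is rare -- the base-rate effect behind John's point.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(actually a threat | algorithm flags you), via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 100,000 people screened is a real threat, and the
# algorithm is right 99% of the time in both directions.
ppv = positive_predictive_value(0.99, 0.99, 1e-5)
print(f"{ppv:.4%}")  # well under 1%: nearly everyone flagged is innocent
```

So the better the algorithm looks in aggregate, the more confidently its rare mistakes get treated as hits, which is exactly the danger John describes.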

If you want to read more of our conversation, it is available at http://computationalimagination.com/interview_johndcook.php