
Don’t trust AI until we build systems that earn trust - parmegv
https://www.economist.com/open-future/2019/12/18/dont-trust-ai-until-we-build-systems-that-earn-trust
======
ksaj
I disagree with this because it reeks of post-Terminator hysteria.

AI works when it tests cases humans wouldn't or couldn't. And as a result,
realistic[1] AI provides results humans so far hadn't. Otherwise it would just
be redundant.

The fact that people use AI results and then admit they don't know how those
results came about simply means we are only part-way to effective AI
productivity. Until AI also outputs easy-to-understand reasoning alongside its
answers, it is very risky and pretty much unusable in that form, since a
single bit of bad input can make things spiral out of control.

If an AI told you a particular novel compound wasn't toxic and was quite
edible, you still wouldn't eat it unless it also gave you the details needed
to go forward with the taste-test confidently. AI isn't a god. It's just a
lot faster at the decisions we might have made ourselves, given the current
capability limitations.

Nobody actually working in the industry ignores these obvious things. AI is a
petri dish, not an instant fix. Programmers and scientists strongly dislike
"magic." AI teaches us how to do things better, based only on what we humans
already know. To treat AI as if it were better, instead of just faster and
more efficient at very specific tasks, is silly.

Use AI for its ability to _discover_ and _create_. That's not the same as
giving it unfettered control over whether or not humans have to endure a
Skynet-style takeover.

