
The Dark Secret at the Heart of AI - cdvonstinkpot
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
======
russellbeattie
This is kind of a silly strawman in some ways, simply because all software -
including the code helping fly jumbo jets, steer oil tankers or run MRI
machines - is written by fallible humans, and is generally considered safe
only because of QA testing, rather than code analysis. There are some rare
instances of insanely complex code having every line thoroughly vetted like
those in NASA projects, but pretty much everything else out there is simply
"good enough" until a flaw is (inevitability) found and fixed. The decision
trees generated by AI will be no different. Until, I guess, an AI can perform
the analysis of the code of another AI... cue Inception music.

------
mvindahl
'Getting a car to drive this way was an impressive feat. But it’s also a bit
unsettling, since it isn’t completely clear how the car makes its
decisions.'.replace(/car/g, 'human')

------
mto
I've always heard that argument made in favor of decision trees or random
forests, yet those decision trees had 400k nodes :). So no one ever really
looked at them, but in theory you could check the long node paths doing
arbitrary splits on weird features :).

Apart from that, the strength of DNNs is exactly that kind of complex
decision making, compared to, say, the simple algorithms physicians learn and
manually apply for diagnosis. Those are obviously vastly underfitting in many
cases.

------
Eridrus
This article makes the assumption that we are learning a complete model that
goes from sensor inputs to control outputs, but I don't think anyone is doing
this outside academia. There's a whole lot less controversy when we use deep
learning to do scene understanding, where we understand at a high level that
the model is recognizing entities in its sensor data, and we can evaluate
whether that subsystem failed, etc.

------
candiodari
That's the big plus of AI algorithms. For instance, all voice recognition
software uses a patented algorithm, and Nuance holds the patent.

But, the reasoning goes, because the behavior was learned, and there is no
code in there implementing that algorithm (just "weights" implementing an
unrolled version), the software does not violate the patent.

It's not a bug, it's a feature. Know any valuable algorithms? Figure out how
to learn them.
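
In outline the recipe is almost embarrassingly simple. A toy sketch, where
the "valuable algorithm" is just a stand-in function (no claim about what
Nuance actually patented):

    # Toy sketch of "learning" an algorithm: treat a reference
    # implementation as a black box, sample input/output pairs,
    # and fit a model that reproduces the mapping as weights only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def reference_algorithm(x):
        # Stand-in for the valuable algorithm; any deterministic
        # function of the inputs would do here.
        return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=(10000, 2))
    y = reference_algorithm(X)

    # The result contains no code implementing the algorithm, just
    # weights approximating its behavior.
    clone = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)
    print("mean absolute error:", np.abs(clone.predict(X) - y).mean())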

~~~
greenyoda
Whether it's a big plus or a big minus depends on the application. I probably
wouldn't mind if the voice recognition software in a device like Google Home
or Amazon Echo used an opaque and unverifiable algorithm. But the creator of an
algorithm that controls something that can kill me - like a car or a plane or
a nuclear power plant or a robotic surgery tool - had better be able to prove
that their software works safely.

~~~
yeukhon
But if the voice recognition is part of your driving experience and it bugs
out, you can end up in a car crash. For example, Siri and Google Home must
hear a phrase that starts with 'Siri' or 'Google', but what if your car has a
bug and it hears you say 'stop the car'? Or what if that came out of your
radio host's mouth? I remember a couple of years ago some news covered this:
you can trick Siri into thinking it was you giving commands if it hears a
conversation that starts with the phrase 'Siri'.

~~~
calvano915
The problems you mention are valid (bugs that result in accidents & closed
source algorithms), but the example is unlikely. Surely, carmakers will not
allow vehicles to respond to commands in such a literal way that it would
definitely result in an accident. For example, if the voice recognition hears
"stop the car", the response will be to stop the car, but safely rather than
immediately and unsafely. This goes for all other valid commands. The
alternative to safe execution would be a prompt requesting clarification
because the command cannot be safely executed, cannot be understood, etc.

~~~
yeukhon
Friend: "So my father said 'stop the car, Jack.'"

Me: "That's funny."

Friend: "Do you want to get lunch tomorrow?"

Car assistant: "Do you wish to stop the car?"

Me: "Yeah." (responding to my friend's question)

"Yeah" is a valid response to the prompt. Granted, this scenario assumes the
assistant has a bug, so the safety measure isn't working quite properly.

------
TheOtherHobbes
If we want to "understand" what a network does, that really means we want to
disentangle cause and effect and spit out simple algebraic models for what
the network distilled from its training set.

To the extent this is even possible - which is debatable, for all kinds of
reasons - we're going to need a different set of tools. ML is not the right
tool for that problem.

Something similar to ML may be, but ML itself definitely isn't.
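
To make the gap concrete: the closest thing ML itself offers is fitting an
interpretable surrogate to the network's input/output behavior, roughly as
sketched below (all data and models are made up), and it only recovers an
algebraic model when the network's function was trivially simple to begin
with:

    # Distilling a network into a simple algebraic model by fitting
    # a sparse linear surrogate to the network's own predictions.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 5))
    y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=5000)

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

    # Train the surrogate on the network's outputs, not the raw labels,
    # so its coefficients describe what the network computes.
    surrogate = Lasso(alpha=0.01).fit(X, net.predict(X))
    print("recovered coefficients:", surrogate.coef_.round(2))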

------
iridium5
Model decision interpretation is a solved problem:
https://github.com/marcotcr/lime
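
A minimal usage sketch (model and data are placeholder stand-ins): LIME
perturbs one input, fits a local linear model around it, and reports
per-feature weights.

    # Explaining a single prediction with LIME.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X, mode="classification",
        feature_names=[f"f{i}" for i in range(10)])

    # Which features pushed this one prediction, and in which direction.
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(exp.as_list())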

