
Inside Darpa’s Push to Make Artificial Intelligence Explain Itself - jpindar
https://blogs.wsj.com/cio/2017/08/10/inside-darpas-push-to-make-artificial-intelligence-explain-itself/?mod=e2tw
======
torbjorn
this is a fool's errand. today's statistical-inference-based "AI" is a
probability distribution that squishes itself through cracks in the data to
"learn" a path of least resistance. subject-object reasoning systems, such as
English, are cut from a different cloth. semantic language is a discontinuous
function.

Asking an entity that can only discern its environment through continuous
functions to explain itself is like asking an amoeba to play piano.

Machine Learning is a statistics renaissance, and it is changing the world, but
"Explainable AI" is a notion that is totally divorced from where the field is
at the moment.

~~~
nl
Sorry, but this comment is completely uninformed.

Explaining ML inference is an active area of research, and there are plenty of
decent approaches which are getting good results.

For example, LIME[1] can explain why a neural network doing visual
classification came out with the result that it did. _Distilling the Knowledge
in a Neural Network_ [2] is another approach which leads towards simpler
representations.

LIME isn't perfect, but that's the nature of research (and why DARPA funds
this kind of thing).
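
For anyone who wants to see what that looks like in practice, here's a rough
sketch of explaining a single image classification with the lime package from
[1]. Note that `model` and `img` are placeholders for whatever network and
input you happen to be using:

```python
# Rough sketch: explain one image classification with LIME.
# `model` and `img` are placeholders; any classifier exposing a function that
# maps a batch of images (N, H, W, 3) to class probabilities will work.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Placeholder: wrap your own network here (e.g. a Keras model's predict()).
    return model.predict(np.asarray(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img,                  # one image as a numpy array of shape (H, W, 3)
    predict_fn,
    top_labels=5,
    num_samples=1000)     # perturbed copies used to fit the local surrogate

# Highlight the superpixels that most support the top predicted label.
img_out, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
overlay = mark_boundaries(img_out / 255.0, mask)
```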

 _this is a fool's errand_

"Distilling the Knowledge in a Neural Network", authors: Geoffrey Hinton,
Oriol Vinyals, Jeff Dean

Fools, all of them. DARPA should ask HN next time it wants to know about the
current state of AI.

[1] [https://github.com/marcotcr/lime](https://github.com/marcotcr/lime)

[2] [https://arxiv.org/abs/1503.02531](https://arxiv.org/abs/1503.02531)

~~~
bayonetz
I've done some experiments with LIME -- it's one of the most promising
approaches for extracting prediction reasons from arbitrary and otherwise
opaque models. Another interesting approach specific to random forests is
decision paths: [http://blog.datadive.net/interpreting-random-forests/](http://blog.datadive.net/interpreting-random-forests/)
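
To make the decision-path idea concrete, here's a minimal sketch using the
treeinterpreter package (which, as far as I know, implements that post's
approach and is by the same author); the dataset and model are just stand-ins:

```python
# Minimal sketch: decompose one random-forest prediction into a bias term
# (the training-set mean) plus per-feature contributions along decision paths.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

data = load_diabetes()
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

prediction, bias, contributions = ti.predict(rf, data.data[:1])

# Sort features by how strongly they pushed this particular prediction.
for name, contrib in sorted(zip(data.feature_names, contributions[0]),
                            key=lambda pair: -abs(pair[1])):
    print(f"{name}: {contrib:+.2f}")
print("bias:", bias[0], "prediction:", prediction[0])
```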

You sometimes get weird outputs from these, though, which makes it hard to
show them to users automatically. For example, a reason might be "because you
liked salad restaurant X you should check out BBQ place Y", and it's there
because your model happens to have captured an overlap between the users who
like both. Yet it can cause cognitive dissonance for users who are either
strictly healthy eaters (salads only, no BBQ) or delude themselves into
thinking they are (forgetting how much BBQ they actually order in addition to
salads). That's the main challenge I see -- figuring out how to filter out
reasons from these approaches that don't jibe with common intuitions or, even
harder, how to get people to trust the reasons, as counter-intuitive as they
may seem.

~~~
nl
Yes, there is plenty of work to do.

Tree-based classifiers have always had the reputation of being "explainable".
As you note, this isn't always as simple as it should be.

In the unsupervised space I really like plotting dendrograms on hierarchical
clustering. There's an excellent example in the recent DeepMoji paper[1] where
they show how similar emojis cluster together AND how you can truncate the
hierarchy at different depths to capture different ranges of emotion.
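
If anyone wants to poke at this, here's a toy sketch with scipy's hierarchical
clustering; random vectors stand in for the learned emoji representations from
the paper:

```python
# Toy sketch: build a hierarchy, draw the dendrogram, then "truncate" it at
# different depths to get coarser or finer clusters.
# Random vectors stand in for the learned emoji representations.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(64, 16))            # 64 "emoji" vectors, 16 dims

Z = linkage(embeddings, method="average", metric="cosine")
dendrogram(Z)                                     # the full hierarchy
plt.show()

# Cutting the tree at different granularities gives different numbers of groups.
coarse = fcluster(Z, t=4, criterion="maxclust")   # ~4 broad groups
fine = fcluster(Z, t=12, criterion="maxclust")    # ~12 finer groups
```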

It's laughable when people insist that _DARPA_ is foolish for funding work in
this area.

[1]
[https://arxiv.org/pdf/1708.00524.pdf](https://arxiv.org/pdf/1708.00524.pdf)

------
danschumann
As long as it doesn't realize it's easier to lie to humans about what it's
ACTUALLY doing. Maybe make the explainer algorithm non-AI-based, to ensure it
doesn't optimize out honesty.

~~~
Retra
Couldn't you just compartmentalize? Make an outer AI that has to defer to an
inner AI, and nest them in such a way that the only interface between the
layers is an 'explaining' communication channel?
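
Purely as a sketch of the interface I have in mind (all names made up):

```python
# Illustrative sketch of the nesting idea (all names made up).
# The outer layer never touches the inner model's internals; the only thing
# that crosses the boundary is a structured "explanation" object.
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    decision: str
    reasons: List[str]   # the sole channel between the two layers

class InnerModel:
    def decide(self, observation) -> Explanation:
        # Placeholder for the opaque model: whatever it does internally,
        # it can only influence the outer layer through this Explanation.
        return Explanation(decision="noop", reasons=["no signal in observation"])

class OuterAgent:
    def __init__(self, inner: InnerModel):
        self.inner = inner

    def act(self, observation) -> str:
        explanation = self.inner.decide(observation)
        # The outer layer can audit, log, or veto based only on stated reasons.
        if not explanation.reasons:
            raise ValueError("inner model gave no reasons; refusing to act")
        return explanation.decision
```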

~~~
therein
That reminds me of this video and the left/right brain compartmentalization.

[https://www.youtube.com/watch?v=wfYbgdo8e-8](https://www.youtube.com/watch?v=wfYbgdo8e-8)

------
roceasta
Haven't read the article but what if General Intelligence precisely _is_ the
ability to explain oneself to oneself (or to others)? This could turn out to
be a chicken and egg problem.

------
giardini
The URL is behind a subscribe wall. Is there an open link available?

------
randomerr
If DARPA can't build its own UI, what good is it?

~~~
arcanus
The government is far behind on ML/AI. I don't see any evidence this effort is
going to help them catch up.

In fact, the Department of Defense is far behind even the Department of Energy
within the US government on high-performance computing, algorithms, and
machine learning.

~~~
SomeStupidPoint
Isn't this fairly normal?

I seem to recall that the DoE, NOAA, and NASA always had a lot of computing
resources, and that they generally rented time on those clusters out to other
government agencies.

I do wish the government would step up on AI though -- I think that they owe
it to the country to put together a Manhattan Project for AI.

~~~
arcanus
Perhaps normal, but the concern is that this could be a strategic disadvantage
versus countries that are massively boosting these programs.

The Chinese government is leading (and funding) massive efforts in
high-performance computing and AI. If they win, entire high-tech industries
could be situated in (and led by) China.

~~~
SomeStupidPoint
I'm troubled by the DoD's actions on AI, but not by who owns the supercomputers
in the US.

It actually works out better if the US government's computers sit with the
various science agencies, because that puts much of the government's computing
power in places that are also accessible to universities, at agencies amenable
to letting US universities use their resources. So funding computers through
the DoE, NOAA, NASA, etc. actually makes them more available to the US
research community, while still allowing the military to lease the time it
needs (e.g., for calculating explosion dynamics).

The total funding for US supercomputers could also be higher, but that's a
general problem with US infrastructure investment.

My main point was that there's no real concern with the computers being owned
by scientific institutions instead of the military -- they're still government
funded; it just makes them more widely accessible for use.

------
major505
Do you want Judgment Day? Because that's how you get Judgment Day.

