
Peering into neural networks - ravenkat
http://news.mit.edu/2017/inner-workings-neural-networks-visual-data-0630
======
etiam
1.) For the umpteenth time, they're not black boxes. We can inspect
_everything_ in the structure.

2.) "a team of computer-vision researchers from MIT’s Computer Science and
Artificial Intelligence Laboratory (CSAIL)" may have "described a method for
peering into" the not-black box of a convnet two years ago, but Oxford
researchers published on it in 2013.

3.) Gushing about how understanding convolutional networks can help confirm
the grandmother cell hypothesis in real brains is embarrassing under all
circumstances, but should be particularly so when thorough examinations of
real brains just came out to the considerable detriment of said hypothesis.
[http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X](http://www.cell.com/cell/fulltext/S0092-8674\(17\)30538-X)

Nothing wrong with making visualizations of your nets, but I'm less than
impressed by the reporting.

~~~
opportune
Few things are black boxes in absolute terms. It's the _difficulty_ of
understanding NNs that makes them black boxes. Personally, I think the black
box analogy is accurate for any net of appreciable size.

~~~
etiam
I think your point about the frequently gradual nature of the concept is a
very good one, and I'll buy that there may be appropriate _analogies_ to a
black box in this context, especially if some sort of poetic or metaphoric
description is what one is aiming for. I see no indication of that in the
article. What the present journalists, and many others, are doing in this
respect is not analogy but rather destructive appropriation of terminology.

I get that they mean something like 'is difficult to understand' too, which is
in many ways completely uncontroversial. ( _I'm_ certainly not going to claim
it's a solved problem to quickly and effectively come to understand
intellectually how an arbitrary ANN does what it does. I doubt there are many
people who would.) If that's what they mean, then they can say that, or make
up their own picture language that isn't already busy meaning the polar
opposite of the situation they're alluding to. A black box is characterized by
observability only at the edges and unknown inner workings. It is by
definition an inappropriate term for a convolutional network where every
single weight and operation and intermediate result is trivially inspectable
and you can do things like follow the effects of an experimental perturbation
along every step of every path through the network. 'But it's really hard to
get an intuitive understanding of what that means in the big picture' is a
perfectly legitimate concern for something to improve on, but it isn't
remotely good enough as an excuse for effectively claiming we have no
observability or control where both are clearly abundant.
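
To make that concrete, here's a minimal PyTorch sketch (AlexNet via
torchvision, purely as an arbitrary example) that reads out every weight and
uses forward hooks to capture every intermediate result:

    import torch
    from torchvision import models

    model = models.alexnet().eval()  # arbitrary untrained convnet

    # Every parameter is a plainly readable tensor.
    for name, param in model.named_parameters():
        print(name, tuple(param.shape))

    # Forward hooks expose every intermediate result as it is computed.
    activations = {}
    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module itself
            module.register_forward_hook(make_hook(name))

    model(torch.randn(1, 3, 224, 224))  # stand-in input image
    for name, act in activations.items():
        print(name, tuple(act.shape))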

Abusing the term like this detracts from its established role in engineering,
systems theory, etcetera and in my opinion also from communicating the actual
problems of understanding how ANNs do their thing. I really wish they'd stop
making that claim.

Now I'm going to leave my computer before I get started on the ######s talking
about steep learning curves as if they were an obstacle.

------
twblalock
There are two things that ML/AI developers are going to have to deal with once
the technologies become widespread in things like self-driving cars,
hiring/firing decisions, and the criminal justice system:

"Why did it do that?"

and

"Make it stop doing that!"

The first time a self-driving car accident results in a court case, these
things are going to come up. I very much doubt that people are going to be
satisfied without clear explanations, and they shouldn't be. When these
systems take on roles of increasing importance to society, some level of
accountability is going to be necessary.

------
opportune
If I'm reading this correctly, it's old news. They're just tracing the
activation of kernels. You can see examples in this wikipedia article:
[https://en.wikipedia.org/wiki/Kernel_(image_processing)](https://en.wikipedia.org/wiki/Kernel_\(image_processing\))

This one's cool too:
[http://scs.ryerson.ca/~aharley/vis/conv/](http://scs.ryerson.ca/~aharley/vis/conv/)
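
For the unfamiliar, a tiny sketch of the idea (numpy/scipy assumed; the image
and kernel are toy stand-ins): slide a kernel over an image and look at where
the response is strong, exactly as in the Wikipedia article.

    import numpy as np
    from scipy.signal import convolve2d

    image = np.zeros((8, 8))
    image[:, 4:] = 1.0  # toy image: a vertical edge halfway across

    sobel_x = np.array([[-1, 0, 1],   # classic horizontal-gradient kernel
                        [-2, 0, 2],
                        [-1, 0, 1]])

    activation = convolve2d(image, sobel_x, mode='same')
    print(activation)  # large-magnitude responses trace the vertical edge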

------
sp332
I like it. I've seen experiments that break out eigenvectors of a neural
network, which is like being given a dictionary in a foreign language. It's
precise, but you still have to figure out what each eigenvector means. This
technique is like having a translating dictionary. It's less precise but it
lets you reason about the network with a familiar visual vocabulary.
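
For anyone curious, a rough numpy sketch of the kind of eigenvector analysis
I mean (random stand-in activations; a real experiment would record them from
a trained net):

    import numpy as np

    acts = np.random.randn(1000, 256)   # stand-in: 1000 samples x 256 units
    acts -= acts.mean(axis=0)           # center before eigen-analysis
    cov = acts.T @ acts / (len(acts) - 1)    # covariance of the units
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: covariance is symmetric
    top = eigvecs[:, np.argsort(eigvals)[::-1][:10]]  # strongest directions
    # Each column of `top` is one "dictionary entry"; figuring out what it
    # means is the hard part the visual-vocabulary approach sidesteps.
    print(top.shape)  # (256, 10)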

~~~
amelius
How are eigenvectors/values defined for nonlinear systems, and are their
properties as useful as in the linear case?

~~~
sp332
I was thinking of
[https://en.wikipedia.org/wiki/Eigenface](https://en.wikipedia.org/wiki/Eigenface)
(I like slide 66 of [https://www.slideshare.net/MostafaGMMostafa/neural-networks-principal-component-analysis-pca](https://www.slideshare.net/MostafaGMMostafa/neural-networks-principal-component-analysis-pca) "Not magic: Can only capture linear
variations".) But you can do things like Google's Deep Dream, where you pick
the classification first and feed it backward to see what aspects of the image
are important to a class. You can even do it for different layers to see what
each layer is looking for.
[https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html](https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html)
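
A bare-bones sketch of the "feed it backward" idea (PyTorch, an untrained
AlexNet as a stand-in model, class index picked arbitrarily): one backward
pass from the chosen class score gives a per-pixel importance map.

    import torch
    from torchvision import models

    model = models.alexnet().eval()  # untrained stand-in model
    image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image

    score = model(image)[0, 123]  # 123: whichever class you picked first
    score.backward()              # "feed it backward" to the pixels

    # Large gradient magnitude = that pixel mattered to the class score.
    saliency = image.grad.abs().max(dim=1)[0]
    print(saliency.shape)  # torch.Size([1, 224, 224])
    # Deep Dream repeats this: add the gradient back into the image and
    # iterate, optionally targeting a layer's activation instead of a class.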

------
ravenkat
Question for the community: where can I follow research in this area, i.e.
understanding the decision making and reasoning of neural networks?

~~~
joe_the_user
Well, I think the research is more ad hoc than being its own field at this
point.

I just scan papers that come up in the Reddit group[1]. I've seen:

"Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural
Networks" by Rajarshi Das, Arvind Neelakantan, David Belanger, Andrew McCallum

"Rationalizing Neural Predictions" by Tao Lei, Regina Barzilay and Tommi
Jaakkola

"'Why Should I Trust You?' Explaining the Predictions of Any Classifier by
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin

You might be able to chase down the works of these various authors to find
more.

[1]
[https://www.reddit.com/r/MachineLearning/](https://www.reddit.com/r/MachineLearning/)

------
divenorth
I wonder if our understanding of neural networks will help us understand the
human brain.

~~~
pulse7
If THAT happens, then we can - after some further research - lose the last
bits of our privacy - our mind...

~~~
divenorth
Maybe one day. From my understanding we're not even close. We don't even have
the computing power available to simulate a human brain. But yeah, scary
applications. Minority Report anyone?

------
gumby
Pretty, but how does this provide insight?

~~~
sp332
When the computer is putting things into categories, you can see what aspects
of the images are important to the decision.
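
One concrete way to see that, as a rough occlusion-sensitivity sketch (in the
spirit of Zeiler & Fergus; the model and 32x32 patch size are arbitrary
choices): slide a blank patch over the image and watch how much the class
score drops.

    import torch
    from torchvision import models

    model = models.alexnet().eval()       # untrained stand-in model
    image = torch.randn(1, 3, 224, 224)   # stand-in image
    target = 123                          # whichever class the net predicted

    with torch.no_grad():
        base = model(image)[0, target].item()
        heatmap = torch.zeros(7, 7)
        for i in range(7):
            for j in range(7):
                occluded = image.clone()
                occluded[:, :, 32*i:32*(i+1), 32*j:32*(j+1)] = 0.0
                # A big score drop means the hidden region was important.
                heatmap[i, j] = base - model(occluded)[0, target].item()
    print(heatmap)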

