
The black box is a state of mind - imartin2k
http://www.eurozine.com/black-box-state-mind/
======
Rhapso
When I call a system a black box I am not saying that we cannot understand it,
but rather that we cannot easily extract the rationale the system used to
arrive at its classification. Layered neural networks are a good example of
this: the signal is mixed in with the noise. I prefer simpler/older systems
whose method of classification we can inspect (decision trees/forests), and
NEAT, which produces human-readable structures.
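
As a concrete illustration, a small scikit-learn sketch (export_text is a
real scikit-learn helper; the iris dataset is just a stand-in) shows the kind
of human-readable structure a decision tree produces:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    clf.fit(iris.data, iris.target)

    # The learned model prints as nested if/else rules that a human
    # can read and audit, rule by rule.
    print(export_text(clf, feature_names=list(iris.feature_names)))

Each split in the printed tree names a feature and a threshold, so the
rationale behind any classification can be traced directly.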

------
gumby
I have a visceral dislike of such human-produced black-box systems, but I have
to admit that humans themselves are black boxes, even to ourselves, and that
any large system (a social system, a political system, or a huge machine such
as an oil refinery) will exhibit obscure emergent behavior.

So maybe I just have to get over it.

------
YeGoblynQueenne
What we mean when we say that (statistical) machine learning models are "black
boxes" is that they are just vectors of numbers that have no meaning a human
can readily understand.

Note that scale doesn't have anything to do with this. Say I show you a model
my algorithm has learned that is basically a vector of five numbers: [n1, n2,
..., n5]. Just by looking at those numbers, you can't say anything about this
model with any certainty: what data it was trained on, what algorithm produced
it, what classes it was trained to discriminate, or even whether it is a
classifier or a regressor. It's just a bunch of numbers.
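
To make that concrete, here is a toy sketch (the parameter values are invented
for illustration) showing how the same five numbers could plausibly belong to
entirely different kinds of model:

    import numpy as np

    # A hypothetical learned model that is literally five numbers.
    theta = np.array([0.73, -1.42, 0.05, 2.31, -0.88])

    # The same vector could be read as logistic-regression weights...
    def as_classifier(x):
        return 1 / (1 + np.exp(-(x @ theta))) > 0.5

    # ...or as linear-regression coefficients. Nothing in the numbers
    # themselves says which, or what data produced them.
    def as_regressor(x):
        return x @ theta

Nothing about theta tells you its provenance; the meaning lives entirely in
the training setup, which the vector does not carry with it.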

We can compare this to the incomprehensibility of a large software or
engineering project. There is probably no one person who understands
everything about an airliner, but there are many people who understand its
constituent parts, down to the level of nuts and bolts. This is not so with
(statistical) machine learning models: they are numbers without recognisable
context all the way down (there are exceptions, but this is a pretty solid
rule).

And that's why people are afraid that, with (statistical) machine learning,
we're riding a machine that we don't understand and therefore do not control.

------
EtDybNuvCu
This sounds like a pile of equivocations designed to minimize an unfortunate
truth about learned functions: the learning process does not care whether its
result corresponds to human thinking, so learned functions do not necessarily
correspond to human-written functions.

Yes, the black-box nature of machine learning is a state of mind; the
alternative is to view all software as black boxes and then throw up our hands
and claim to know nothing whatsoever. This is clearly facetious and meant to
protect machine-learning developers from the responsibility of grokking their
systems.

~~~
vamin
You’re setting up a bit of a false dichotomy, in my opinion. Treating
something like a black box doesn’t mean you have to throw up your hands and
claim to know nothing whatsoever. Rather, it means you have to change your
approach to understanding the system.

People who are concerned about “black boxes” seem to think that we need a
first-principles or causal-mechanistic explanation of what’s going on inside a
machine learning system to have any confidence in it. That couldn’t be further
from the truth. By interrogating the inputs and outputs of a “black box” you
can learn all you need to know about how it works. Much (if not most) of our
understanding of the physical world comes from carefully probing black-box
systems, systems for which we have no a priori knowledge of mechanism. So the
alternative is not to “throw up your hands”; it is to take a considered,
scientific approach to understanding the relationship between inputs and
outputs in your model: in what situations it succeeds, in what situations it
fails, how changing a single variable affects the output, and so on. Yes, that
can be difficult for a complex model, but why should anyone expect it to be
simple?
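
For instance, a minimal sketch of that kind of probing (the model here is a
stand-in for any opaque learned function you can only call) might vary one
input at a time and watch how the output responds:

    import numpy as np

    # Stand-in for a black-box model: we can call it, but not inspect it.
    def predict(x):
        return 1 / (1 + np.exp(-(2.0 * x[0] - 0.5 * x[1] + 0.1 * x[2])))

    # One-at-a-time sensitivity probe: perturb a single input, hold the
    # rest fixed, and observe how the output moves.
    baseline = np.array([0.5, 0.5, 0.5])
    for i in range(len(baseline)):
        for delta in (-0.1, 0.1):
            x = baseline.copy()
            x[i] += delta
            print(f"input {i}, delta {delta:+.1f} -> output {predict(x):.4f}")

Runs like this map out which inputs the model is sensitive to, which is
exactly the kind of understanding the “first principles” framing claims is
out of reach.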

