
Our Machines Now Have Knowledge We’ll Never Understand - jonbaer
https://backchannel.com/our-machines-now-have-knowledge-well-never-understand-857a479dcc0e
======
amasad
This is just bad epistemology.

> _As a consequence, if you, with your puny human brain, want to understand
> why AlphaGo chose a particular move, the “explanation” may well consist of
> the networks of weighted connections that then pass their outcomes to the
> next layer of the neural network. Your brain can’t remember all those
> weights, and even if it could, it couldn’t then perform the calculation that
> resulted in the next state of the neural network._

Why did you reduce the neural network to the connections and layers but
maintain the “brain” at the level of the whole? If you want to compare
apples to apples you should also reduce the brain to neurons and connections.
When a ball is about to hit your face your brain “can’t remember how to
calculate the velocity and acceleration of the ball and predict that in 100ms
it will impact your face and thus you need to send a signal for your hand to
move to cover your face”. Yet we do it. How? Well, it’s encoded in the complex
system of nerves and networks, just like the information about the various
potential Go boards and moves is encoded in the neural network.

> _And even if it could, you would have learned nothing about how to play Go,
> or, in truth, how AlphaGo plays Go — just as internalizing a schematic of
> the neural states of a human player would not constitute understanding how
> she came to make any particular move._

Similarly, if you were to study the neurons, nerves, and networks and their
firing patterns that are involved in protecting your face from the ball, you
wouldn’t learn anything about balls, acceleration, or even pain.

~~~
throwaway87423
> When a ball is about to hit your face your brain “can’t remember how to
> calculate the velocity and acceleration of the ball and predict that in
> 100ms it will impact your face and thus you need to send a signal for your
> hand to move to cover your face”.

It's likely doing optical flow calculations (much simpler), rather than
integrating an ODE. Richard Dawkins apparently makes the same mistake about
"catching a ball" in his book.

~~~
mockery
By 'optical flow calculations' do you mean a linear extrapolation of the
projected 'image-space' position of the ball?

If so, that seems like it would be insufficient since a thrown object doesn't
move linearly, let alone its projection.

If not, then I'm curious what you're describing, and how it's much simpler
than integrating an ODE?

~~~
throwaway87423
Not quite. If you're interested, take a look at:

[https://youtu.be/eKaYnXQUb2g?t=6m10s](https://youtu.be/eKaYnXQUb2g?t=6m10s)

[https://www.youtube.com/watch?v=iz9UVIo_ZUo&list=PL0AknDL1Vt...](https://www.youtube.com/watch?v=iz9UVIo_ZUo&list=PL0AknDL1Vt-nEn1U-ztDulFATxiPvrELY&index=10)

Camera homography preserves straight lines, but it also gives a heuristic for
avoiding obstacles. Apparently this is also how bees and birds navigate.
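
As a toy illustration of that family of shortcuts (not necessarily exactly
what the videos describe), the classic time-to-contact heuristic needs only
the object's apparent size and its rate of expansion, with no distances,
velocities, or ODEs anywhere. The scenario numbers below are invented for
illustration:

    import numpy as np

    def time_to_contact(angular_sizes, dt, i=-2):
        """Tau heuristic: time-to-contact ~ theta / (d theta / dt),
        using only the apparent (angular) size of the approaching object."""
        theta = np.asarray(angular_sizes, dtype=float)
        dtheta = np.gradient(theta, dt)    # frame-to-frame expansion rate
        return theta[i] / dtheta[i]        # estimate at frame i

    # Toy scenario: a 0.2 m ball approaching at 10 m/s from 5 m away.
    dt = 0.05
    distances = 5.0 - 10.0 * np.arange(0, 0.3, dt)   # true distance per frame
    sizes = 2 * np.arctan(0.1 / distances)           # apparent angular size
    print(round(time_to_contact(sizes, dt), 2))      # ~0.29
    print(round(float(distances[-2] / 10.0), 2))     # true value: 0.30

That's not a claim about what neurons literally do; it's just an example of
how the prediction can collapse into a cheap local measurement instead of
physics.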

------
Houshalter
Neural networks get a bad rap for being "black boxes" and "uninterpretable".
But any decent machine-learned model is incredibly difficult for people to
understand.

There was a machine learning system specifically _designed_ to produce
interpretable models. It's called Eureqa, and it does symbolic regression. It
finds the simplest possible mathematical expression that fits the data.
Instead of millions of neural weights, you get a nice simple equation. How
could that not be interpretable?
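
To make "symbolic regression" concrete, here's a toy sketch of the idea (just
blind random search over a tiny grammar, nothing like Eureqa's actual
evolutionary search) that can rediscover a clean formula from raw samples:

    import math, random

    UNARY  = {"sin": math.sin, "cos": math.cos, "abs": abs}
    BINARY = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def random_expr(depth):
        """Random expression tree over one variable x."""
        if depth == 0 or random.random() < 0.3:
            return ("x",) if random.random() < 0.5 else ("const", random.randint(1, 5))
        if random.random() < 0.5:
            return ("un", random.choice(list(UNARY)), random_expr(depth - 1))
        return ("bin", random.choice(list(BINARY)),
                random_expr(depth - 1), random_expr(depth - 1))

    def evaluate(node, x):
        kind = node[0]
        if kind == "x":     return x
        if kind == "const": return node[1]
        if kind == "un":    return UNARY[node[1]](evaluate(node[2], x))
        return BINARY[node[1]](evaluate(node[2], x), evaluate(node[3], x))

    def size(node):
        return 1 + sum(size(c) for c in node if isinstance(c, tuple))

    def fit(xs, ys, trials=100_000, max_depth=4):
        """Smallest randomly sampled expression with ~zero error on the data."""
        best = None
        for _ in range(trials):
            expr = random_expr(max_depth)
            err = sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))
            if err < 1e-9 and (best is None or size(expr) < size(best)):
                best = expr
        return best

    # Data secretly generated from y = sin(x) + 3; the search usually recovers it.
    xs = [0.1 * i for i in range(10)]
    ys = [math.sin(x) + 3 for x in xs]
    print(fit(xs, ys))   # e.g. ('bin', '+', ('un', 'sin', ('x',)), ('const', 3))

The output is only readable here because the hidden rule was chosen to be
readable; feed it messy real-world data and the "simplest" fitting expression
can be exactly as inscrutable as described below.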

But in any nontrivial case, it's still impossible to decipher. "Why is there a
sine there? Why on Earth is it using mod? What could this constant possibly
mean?" One biologist used it on some data he had been struggling with. And it
found a simple equation that fit the data perfectly, which is what he had been
looking for. But he couldn't publish it, because he couldn't possibly explain
why it worked or how to derive it. You can't just publish a random equation
with no understanding.

I think the best method of understanding our models is not going to come from
making simpler models. Instead I think we should take advantage of our own
neural networks. Try to train humans to predict what inputs will activate a
node in a neural network. We will learn that function ourselves, and then its
purpose will make sense to us. I suspect most of what NNs do isn't complicated
in an absolute sense. It's just time-consuming to go through the data and work
it out. (Maybe we could outsource it to something like Mechanical Turk...
"What do these images have in common?")

There is a huge amount of effort put into making more accurate models, but
much less into trying to interpret them. I think this is a huge mistake.
Understanding a model lets you see its weaknesses: the things that it can't
learn, the mistakes it makes.

~~~
Qantourisc
>> You can't just publish a random equation with no understanding.

This has happened a lot in the past iirc:
[https://en.wikipedia.org/wiki/Unsolved_problems_in_mathemati...](https://en.wikipedia.org/wiki/Unsolved_problems_in_mathematics)
Imo if you can prove it's really true, publish it; maybe someone else some day
can actually prove/explain it. Meanwhile we can use the formula where it works
(at your own risk).

------
QuinnyPig
This feels only one step removed from "databases contain more information than
any human can remember!"

~~~
kgwgk
s/databases/books/

------
ebbv
This is baloney. The workings of any machine learning algorithm are not beyond
our understanding. Even on an intuitive level, it is easy to watch what it is
doing and derive theories about what is going on. If you spend the time to
investigate, it is possible to map out what is happening. You can determine
"This pathway delivers a high signal on pictures of brown cats, this pathway
lights up with pictures of black cats." Or whatever.

This kind of breathless "Oh we CAN'T understand!" editorializing gets clicks
because machine learning is so strange to most people, but it's terrible. We
should instead be spreading knowledge about the very simple principles
underlying the complicated behavior.

Our brains are far, far more complicated than even our most advanced machine
learning right now. Yet we are constantly making progress on understanding the
workings of our brains. This article is just terrible.

~~~
spc476
Leadup: Dr. Thompson used a genetic algorithm to program an FPGA to detect one
of two tones. The result was:

> Dr. Thompson peered inside his perfect offspring to gain insight into its
> methods, but what he found inside was baffling. The plucky chip was
> utilizing only thirty-seven of its one hundred logic gates, and most of them
> were arranged in a curious collection of feedback loops. Five individual
> logic cells were functionally disconnected from the rest— with no pathways
> that would allow them to influence the output— yet when the researcher
> disabled any one of them the chip lost its ability to discriminate the
> tones.

(from [https://www.damninteresting.com/on-the-origin-of-circuits/](https://www.damninteresting.com/on-the-origin-of-circuits/))

Granted, the results were probably due to the quirks of the FPGA gates on that
particular chip (and that particular area of the chip), but without further
(potentially destructive) investigation, Dr. Thompson (or we, for that matter)
may never know the actual reason that circuit worked the way it did.

~~~
guskel
Backpropagation doesn't work like genetic algorithms on an FPGA.
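
For context on the distinction, a rough contrast in toy form (neither system's
real training loop): backpropagation nudges each weight along an analytic
error gradient, so every change has a local reason, while a genetic algorithm
just mutates and keeps whatever scores better.

    import random

    def sgd_step(weights, grads, lr=0.1):
        """Gradient descent: each weight moves against its own error gradient."""
        return [w - lr * g for w, g in zip(weights, grads)]

    def ga_step(genome, fitness, p_flip=0.05):
        """Genetic search: flip bits at random and keep the mutant only if it
        scores better. Nothing records why the survivor works."""
        mutant = [b ^ (random.random() < p_flip) for b in genome]
        return mutant if fitness(mutant) > fitness(genome) else genome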

------
gooseus
So this article assumes that the problem of understanding how an AI solves a
problem is a problem beyond any possible AI's ability to solve?

Aren't we constantly iterating over these AI algorithms specifically to give
us ones that can help us understand things "We'll Never Understand"?

~~~
a3n
Exactly. My first thought when I read the title (and I only read a paragraph
or two in) was that this seems like a problem similar to analyzing astronomy
data. It's a BigProblem problem to which we can apply "stuff."

And to answer a probable objection, "It's BigProblem, all the way out."

------
lngnmn
Bullshit, bullshit everywhere.

Learning is not some abstract mystery, it is a process of "evolving" and
maintaining a structure (of neurons or of bits, it does not really matter)
which reflects some particular aspects of reality (presumably not socially
constructed) and can be used in a meaningful way to gain an advantage in a
similar environment. This is how birds know how to make nests or babies read
the facial expressions of strangers.

Knowledge is not some abstract philosophical crap, it is a "trained" structure
which reflects aspects of what is.

BTW, the structure of the sand floor of a lake, which reflects all the past
waves, could be called "knowledge", but it is totally useless. Most flawed,
socially constructed models are of this kind.

------
lend000
Or alternatively, "machines are still too simple to relate their decisions in
an abstract, conceptual manner."

------
self-diversity
I'd love to remind the author of the original headline (it appears to have
been changed) that there's a huge difference between not understanding
something now and knowing you'll never understand it.

------
throwaway87423
And many people have "knowledge" we'll never "understand". So?

(Is anyone excited about the implications AI has for philosophy?)

~~~
tyingq
I do think it's having some impact now. Google, for example, didn't originally
use ML for ranking sites, so they had a reasonably good handle on why sites
would end up ranking where they did.

They then introduced several different ML add-ons, like Vince, Panda, Penguin,
etc. They know roughly what the algorithms are doing. But, as I understand it,
it's still opaque enough that they can't explain exactly why a specific set of
results is what it is. Especially since they don't all work together; they run
one after the other, like a pipeline.

That is, there are some number of false positives or negatives they can't
explain. If you report something, they can guess and experiment around a bit.

For some spaces, they are basically kingmakers or breakers, so ML decisions
are driving real-world consequences. To the degree, though, that the results
are "good enough", it won't ever get looked into. The acceptable error rate is
the one that doesn't affect Google too much. An actual flaw could go unnoticed
for quite some time.

------
rdlecler1
I'll never understand the knowledge of Picasso, Einstein, or Bezos either. AI
does not operate according to principles of magic; it operates according to
the principles of computation. It can be understood, but how much value does
that bring?

------
KirinDave
If we spent the same amount of effort working to visualize RNNs (and train
them to explain themselves) as we did whining about how Turing machines are
not very observable AGAIN, we'd have ML systems directly synthesizing the
JARVIS voice to give us their reasons.

The fact that we can't observe specifically how a brain works doesn't mean we
can't work out some structure. The fact that an ant doesn't explain its
individual rules to us doesn't mean we can't divine them via observation.

These articles irritate me, because they act like NNs are magic and they are
anything but.

------
ppod
There might be some aspects of this article that over-reach, but I think that
it is a useful explanation to the many people who still don't understand the
difference between connectionist or high-dimensional statistical algorithms
and good-old-fashioned symbolic AI.

------
studentrob
> Since we first started carving notches in sticks, we have used things in the
> world to help us to know that world. But never before have we relied on
> things that did not mirror human patterns of reasoning — we knew what each
> notch represented — and that we could not later check to see how our non-
> sentient partners in knowing came up with those answers

This is alarmist and completely untrue. The scientific method specifically
involves forming a hypothesis, then trial and error. We don't know much until
we experiment.

That deep learning makes things possible before we fully understand them is
nothing new.

I can understand this fearful attitude coming from non-scientists who fear the
black-box nature of machines. However, technologists should understand that
the functionality of this tech _can_ be reverse engineered, or verified, with
additional legwork.

We should proceed cautiously and with vigilance. Saying we'll "never"
understand the inner workings of such algorithms is too much.

------
randyrand
Our brains are Turing complete. Anything that is computable/understandable
can be understood by humans.

~~~
dragonwriter
> Our brains are Turing complete. Anything that is computable/understandable
> can be understood by humans.

Our brains are fallible, and so only approximately equivalent to Turing
machines; and even ignoring that, they can only compute every computable thing
given both infinite error-free storage capacity (which they don't have
internally or externally) and sufficient time to execute the necessary steps
of the computation.

Realistically, there are computable functions which no human brain will ever
be capable of computing.
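
A concrete example: the Ackermann function is computable in the textbook sense
and fits in a few lines, yet even tiny inputs produce values that nothing
physical, brain or machine, will ever finish working through.

    def ackermann(m, n):
        """Total and computable, but explosively fast-growing."""
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3), ackermann(3, 3))   # 9 61
    # ackermann(4, 2) already has 19,729 decimal digits, and no brain (or
    # machine) will ever step through ackermann(5, 5); larger inputs also blow
    # the recursion limit long before that.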

Whether being able to compute a thing is sufficient, necessary, or tangential
to understanding it is also a question.

~~~
charles-salvia
>> Our brains are fallible, and so only approximately equivalent to Turing
machines; and even ignoring that, they can only compute every computable thing
given both infinite error-free storage capacity (which they don't have
internally or externally) and sufficient time to execute the necessary steps
of the computation.

So are actual computers. They're just "less fallible" and closer to a
Universal Turing Machine than human brains in certain ways. A Commodore 64 is
also incapable, in practice, of computing certain computable functions, due to
memory limitations and what have you, but nobody would really claim a
Commodore 64 is not Turing complete.

~~~
dragonwriter
> So are actual computers

Sure, but no one was making claims about actual computers (other than the
human brain) that would require pointing that out.

> but nobody would really claim a Commodore 64 is not Turing complete.

Actually, it's a rather common observation that real-world computers are not
Turing complete (languages, considered independent of the limitations of
concrete machines, may be) and particularly that concrete machines with
limited storage space and operating with finite time constraints may not be
able to compute all computable results, even though the abstract model they
approximate, without those limitations, can.

------
louithethrid
I'm not so sure anymore... humans could miss subtle developments, curves that
overlap every full moon, and a net could reach a never-before-reached result.

DARPA is not pumping money into NN safety research because the confetti ran
out.

