
Measuring abstract reasoning in neural networks - ZeljkoS
https://deepmind.com/blog/measuring-abstract-reasoning/
======
bhouston
I think that neural networks of the current design will have trouble with
abstract symbolic logic because they do not have any structures to support
that type of reasoning.

That said, they are great at recognition/prediction, and you can likely solve
some simple symbolic-logic problems by recognition/prediction, but only up to
a point.

Current neural networks are very much like the sensory processing areas in
the brain, and sometimes they are mapped directly to actions, e.g. to control
video games.

But symbolic logic as humans explicitly do it, and what we consider thinking
(planning, imagining, evaluating, deciding), is not done just in the sensory
areas of the brain; it involves mediation by executive control centers and
self-stimulation of the sensory areas in a sort of loop.

I am not sure that trying to handle abstract reasoning with just the current
sensory-style NN designs we have is going to be super effective. But I guess
measuring abstract reasoning will let us see the current limitations and then
push forward with better structures that enable it.

(Although even if NNs do not have structures to support symbolic reasoning in
the way that humans do it, I guess DeepMind will just write custom code
around the NNs to enable them to help with symbolic logic? Sort of like how
they combined NNs with other search structures to create AlphaGo?

Personally, I think it would be easier to combine NNs with existing symbolic
reasoning tools to get a better result, rather than just sticking with NNs:
use NNs to recognize and evaluate patterns, and feed that to the symbolic
logic tools to get reasoning solutions. Much more efficient and tractable, I
would think. And for extra credit, run NNs on the symbolic reasoning tools
themselves, to see if you can sometimes make "intuitive" jumps instead of
relying on pure deduction.)
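
A minimal sketch of the pipeline I have in mind, in Python. `perceive` is a
hypothetical stand-in for a trained recognition network, the facts and rule
are invented for illustration, and the reasoner is just a toy forward-chainer:

    # Toy neuro-symbolic pipeline: a perception model emits symbolic facts,
    # and a tiny forward-chaining rule engine does the "reasoning" step.

    def perceive(image):
        # Hypothetical stand-in for a trained neural network; a real one
        # would map pixels to symbolic facts about the scene.
        return {("shape", "a", "square"),
                ("shape", "b", "circle"),
                ("left_of", "a", "b")}

    def rule_right_of(facts):
        # left_of(x, y) implies right_of(y, x)
        return {("right_of", y, x) for (p, x, y) in facts if p == "left_of"}

    def forward_chain(facts, rules):
        # Apply every rule until no new facts appear (a fixed point).
        facts = set(facts)
        while True:
            new = set().union(*(rule(facts) for rule in rules)) - facts
            if not new:
                return facts
            facts |= new

    print(forward_chain(perceive(None), [rule_right_of]))

The point is the division of labour: the network only has to get the facts
right, and the deductive part is handled by machinery that is already sound.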

------
foxes
Frankly, I doubt that neural networks can "reason" in an abstract way. Correct
me if I am wrong, but it looks like you got what you would ordinarily expect
from a neural network. The network was successfully trained to pick out
certain progressions (or properties) in the training images. It's not doing
any reasoning at all - just the usual thing neural networks do.

~~~
stared
Abstraction is looking at some properties while abstracting away others
(removing, ignoring them). Even simple artificial neural networks for
computer vision can do this - e.g. ignoring noise, small rotations,
reflections, or the background.

Whether, or when, they get to human level (for a given task) is another
question. But saying that computers cannot learn to abstract data is
demonstrably false.

See https://distill.pub/2017/feature-visualization/ for internal
representations.
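
As a toy illustration (pure NumPy; average pooling is one ingredient of such
networks, and the numbers in the comments are approximate): a pooled
representation barely changes under pixel noise or a one-pixel shift, while
the raw pixels do. That is abstraction in the minimal sense above.

    import numpy as np

    rng = np.random.default_rng(0)

    def pooled(img, k=4):
        # Average pooling: keep coarse structure, abstract away pixel detail.
        h, w = img.shape
        return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

    def cosine(a, b):
        a, b = a.ravel(), b.ravel()
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    img = rng.random((32, 32))
    noisy = img + 0.05 * rng.standard_normal(img.shape)  # pixel noise
    shifted = np.roll(img, 1, axis=1)                    # 1-pixel shift

    print(cosine(img, shifted))                  # ~0.75: raw pixels differ
    print(cosine(pooled(img), pooled(shifted)))  # ~0.99: pooled rep barely moves
    print(cosine(pooled(img), pooled(noisy)))    # ~1.00: noise averaged out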

~~~
guntars
I’d say neural networks abstract over successive layers until the answer
becomes self-evident. That's not the same as reasoning, though.

~~~
stared
What exactly do you mean by "reasoning"?

~~~
ovi256
Reasoning is whatever is left after you take away everything neural networks
do. Obviously!

------
chmhsm
As a software engineer and a big DL enthusiast, I find this a bit worrying,
for two reasons: 1) we can already easily build an NLP-related model that
writes code in a given language without making syntax, build, or runtime
errors; 2) I wasn't worried so far because the code a model could write
doesn't carry any business logic, and I'd realised that as long as reasoning
was still yet to be "discovered" in AI, we would be fine. DeepMind now seems
to be focusing on exactly that. If a job as complicated as implementing code
with business logic in it can be done by an AI, then I do not care whether
it's an AGI or not; it's already a bit troubling.

~~~
amelius
I guess we'll first see IDEs with a built-in code-completion function of this
kind. I suppose it could work initially in cases where coding is boring, such
as refactoring code.

------
FrozenVoid
This is specialized pattern recognition, not abstract human reasoning. It
could potentially solve IQ tests better than humans (which would make a great
PR stunt) by training on millions of possible IQ tests (Raven's Progressive
Matrices) to develop a generic association map of the symbol clusters it
detects, enough to handle a rare unfamiliar shape.

These patterns don't translate to anything outside their domain; it would
just prove that IQ tests are not measuring intelligence. They measure the
absorption of specific pattern data (association maps) in our brains,
essentially forcing a brain to become a reactive "association-puzzle"
automaton that fits symbols into RPM patterns.

------
hprotagonist
This work does not engage with Searle’s Chinese Room argument, as far as I can
tell, nor do the authors use any shockingly novel network architecture or
approach beyond standard DL methods.

My position, then, is that the system they describe cannot be said to be
reasoning at all.

~~~
SCHiM
I think I've never completely understood the Chinese room argument; to me,
that whole chain of examples and counterexamples seems like the product of
people playing word games and arguing over definitions.

If your 'room' 'understands', but no part of the 'room' can be attributed
with having 'intelligence', doesn't it stand to reason that the _room_ itself
is the intelligent actor? I really don't see the problem with the fact that
the human 'CPU' doesn't speak Chinese; the room itself 'understands' Chinese
just fine.

------
tzahola
I’m starting to get tired of this anthropomorphized terminology for neural
networks. “Belief propagation”, “abstract reasoning”... No wonder laypeople
think we’re on the brink of the AI apocalypse.

~~~
amelius
Yes, sadly, even scientists use marketing-speak. And I guess companies use
this style of terminology to attract VCs and to push their half-baked
products.

Not sure if complaining about it would help though.

