
Why deep neural networks don’t actually think - sshamoon
https://medium.com/shallow-thoughts-about-deep-learning/why-deep-neural-networks-dont-actually-think-b5768f751f43
======
tlb
This article (like most articles on this topic) doesn't define what "think"
means. There's no shame in that: nobody has a good definition. But without a
good definition of X, you can't make reliable conclusions about what is or
isn't X.

The logic seems to be: Thinking is mysterious [true]. Neural nets aren't
mysterious [true]. Therefore neural nets must not be thinking [?].

That's a fallacy. Compare: The location of the buried treasure is mysterious.
The place I'm about to dig isn't mysterious. Therefore, the treasure must not
be where I'm about to dig.

It's a fallacy because as soon as you find the treasure, its location isn't
mysterious any more. Mysteriousness isn't an inherent property of things; it's
a statement about our own limited knowledge, which changes over time.

The same will be true of thinking (for any definition of that term you might
care to use). When we figure out how to make artificial systems that think,
thinking will no longer be (completely) mysterious.

~~~
AnimalMuppet
There is (probably) a definition of "think" under which neural networks
actually think. There is also (probably) a definition of "think" under which
neural networks _don't_ actually think. There are articles advocating both
positions, _and neither side explicitly states its definition of "think"_. So
they're really arguing about what the right definition is, but in disguise:
neither side states its definition, nor gives reasons why that definition is
better than the other side's. As a result, the "discussion" is pretty much
useless.

