
Artificial Intelligence Is Already Weirdly Inhuman - rl3
http://nautil.us/issue/27/dark-matter/artificial-intelligence-is-already-weirdly-inhuman
======
noir_lord
We saw this in chess engines about 20 years back (when they started getting
seriously strong): they play chess incredibly well, but their play looks very
little like what a human playing does.

It was fascinating to watch the changes in chess theory through that period as
the machines validated or invalidated concepts and ideas that GMs had posited
but been unable to prove one way or the other.

John Speelman wrote an intro to The Mammoth Book of Chess (that thing was
awesome, I think I wore out three copies) which said (paraphrasing horribly)
that chess is a realm of pure thought and that computers play it the way an
alien might.

The best engines on commodity hardware now require offering pawn odds for a
top human to even have a chance, and their strength is still increasing[1]
(Stockfish now has a distributed test architecture to test for regressions,
which is incredibly cool[2]).

[1] [http://www.chess.com/article/view/how-rybka-and-i-tried-to-b...](http://www.chess.com/article/view/how-rybka-and-i-tried-to-beat-the-strongest-chess-computer-in-the-world) (Rybka was/is one of the strongest engines, and even in concert with a human it still couldn't beat Stockfish).

[2] [https://stockfishchess.org/get-involved/](https://stockfishchess.org/get-involved/)

~~~
iofj
Keep in mind that chess engines are heuristic-guided tree searchers
(essentially they enumerate all their options, then prune and optimize that
search). That these don't respond the way a human mind does is quite
understandable.
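
To make that concrete, here's a minimal sketch of that "enumerate, then optimize" loop at the core of a classical engine: negamax search with alpha-beta pruning. It's only an illustration; `legal_moves`, `apply`, and `evaluate` are hypothetical stand-ins for a real move generator and a hand-tuned heuristic evaluation.

```python
# Sketch: negamax search with alpha-beta pruning.
# `legal_moves`, `apply`, and `evaluate` are hypothetical stand-ins.

def negamax(position, depth, alpha=float("-inf"), beta=float("inf")):
    if depth == 0:
        return evaluate(position)      # heuristic score for the side to move
    best = float("-inf")
    for move in legal_moves(position):           # enumerate all options...
        score = -negamax(apply(position, move), depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:              # ...then optimize: prune lines the
            break                      # opponent would never allow
    return best
```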

Deep-learning networks should already be a lot closer to how a human mind
would react.

~~~
adrianN
I disagree. Just because the architecture is more reminiscent of a brain
doesn't imply that it "thinks" even remotely similarly to a brain, let alone a
human brain.

~~~
merb
I disagree with you. The computer brain might still think the same way we do,
but what do we do with our "thinking" process? We evaluate our thoughts based
on emotions, while there is currently no way to have "real" emotions on a
computer. Humans will always make their decisions based on emotions rather
than logic; we think far too fuzzily for any program to (yet) do it like that.

------
joshmarinacci
I don't find this surprising. In fact, I would find the opposite surprising.
It would be very surprising if AI were human-like. Human intelligence is
designed to power a human body, with fingers of a certain length and eyes with
a certain stereo field of view. I don't think we will ever develop a
human-like AI until we give it a human-like body to live in.

~~~
saidajigumi
> I don't think we will ever develop a human-like AI until we give it a
> human-like body to live in.

This is not a new thought; quite a bit has been studied and written on the
importance of embodiment to cognition, perception, etc. Cognitive scientists
and researchers in a few other disciplines have dug into this a fair bit.

And I agree with you about the article. This quote is just ridiculous: _That
suggests humanity shouldn’t assume our machines think as we do._ It
presupposes that anyone with actual knowledge of AI, neural nets, etc.
believes that we're hot on the path to human-style artificial cognitive
capabilities with our current tools. At best, we may have started building
small components that, in some future form, might eventually be assembled,
Society of Mind[1] style, into something that begins to resemble general AI.

[1]
[https://en.wikipedia.org/wiki/Society_of_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)

------
white-flame
I do have my doubts that plain neural networks will ever be able to achieve
conceptual understanding.

I have an affinity for classical, rational AI in that you can correct it and
it will take that correction and instantly apply it to its knowledge base. It
can also explain why it came to a conclusion (though obviously this style has
its very real limitations).

NNs and other current statistical/connectionist approaches don't really have
this capability, which I see as a necessary part of human-level intelligence.
They are trained to "get a feel for" particular inputs to indicate particular
outputs. If you were to personify the NN analyzing the dog/"ostrich" picture,
and ask it why it thinks it's an ostrich, it could only reply with "I dunno,
it feels like it's an ostrich". The only way to correct it is to retrain it
with careful checks so that it behaviorally "gains a sense" of what looks like
a dog vs ostrich more reliably.
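
As a concrete illustration of how fragile that "feel" is, here is a rough sketch of the fast gradient sign method (FGSM), a simpler relative of the optimization that produced the dog/"ostrich" images, not the exact procedure from that paper: nudge every pixel slightly in the direction that increases the loss, and the classification flips while the image looks unchanged to a human. `model` is assumed to be any PyTorch classifier taking a batched image tensor.

```python
# Sketch: fast gradient sign method (FGSM) for crafting a fooling image.
# Assumes `model` is a PyTorch classifier and `image`/`true_label` are
# batched tensors; an illustration, not the exact paper procedure.
import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, epsilon=0.007):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One tiny step per pixel in the loss-increasing direction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach().clamp(0, 1)
```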

Many language/word-based features operate similarly. Watson, Siri, and Google
search all map strengths of association between your input words and patterns
to results they were statistically reinforced with. These can yield
information that is of real use for a human to further evaluate and act on,
but I wouldn't trust such a system to directly act on those associations;
they're wrong too often.

But there is no possibility of actual conceptual discourse with NNs as we know
them, to correct them, to inform them, or to ask them to explain their
results. This is a fundamental barrier to achieving human-style intelligence.

This is not to say that there aren't NN-based possibilities that might work,
like tacking together multiple interacting NNs, each of which has the chance
to specialize on concepts and influence other NNs. But too much work has
focused on single NNs and direct input-to-output "monkey-see, monkey-do"
training. It's a manifestation of the Chinese room problem.

~~~
existencebox
At the risk of nitpicking, I'm going to specifically address your statement of
"it feels like it's an ostrich."

To my mind, that's EXACTLY how humans do it. A baby is not instructed, "it has
2 legs, feathers, and is tall, therefore it is an ostrich", there's quite a
bit more "LOOK OSTRICH /present input of ostrich/" prior to the point of being
able to generate any justification.

As sister posts have pointed out, I tend to believe that the ability to
justify the classification is another learned skill set that comes later
developmentally. The fact that higher primates (and babies) can perform
classification without, as far as I know, the ability to do the higher-order
reasoning lends itself to this theory.

~~~
andrewljohnson
I find how my two-year-old classifies birds amusing:

1) He called all birds ducks for the longest time, presumably based on being
exposed to a bath duck and this weird YouTube video about ducks swimming in
the water.

2) Then he called some of them bird, based on being exposed to a bath toy
bird.

3) Then he split them up a bit more... parrot, chicken, rooster, etc. He
started seeing them in books and real life.

4) At no point could he explain the difference. Still can't.

5) And he definitely thinks the only difference between a rooster and a
chicken is colorful feathers.

6) And he's not quite sure about ducks and geese, but usually gets it right.

~~~
swombat
I guess he still needs to develop the advanced bullshitting (sorry, I meant
"rationalisation") skills required to say, "Well, I think it's a duck, because
it looks like my internal picture of what ducks are supposed to look like,
dear sir!"

------
crazypyro
(My take on this problem; feel free to correct any assumptions I make that are
wrong.)

I think this is because neural nets are not actually "intelligent" in terms of
the commonly accepted definition of intelligence. They are dumb. They try,
usually with simple probabilistic techniques and element-wise input
transforms, to mimic some function that produces approximations for a given
set of inputs and outputs. The training data and the test data will always
have underlying differences, which will create gaps between their generating
distributions, assuming a reasonably large data set. This is contrary to the
assumption that (every?) machine learning algorithm makes, usually referred to
as the i.i.d. assumption: the test and training data are assumed to be
independent and _identically distributed_. Since practically no real-world
data sets are perfectly identically distributed, there will always be gaps
between the learned model and the real model.
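
A toy sketch of that gap, assuming a simple synthetic two-class problem: a linear classifier trained on one distribution does fine on identically distributed test data and noticeably worse once the test distribution shifts.

```python
# Sketch: the i.i.d. assumption breaking on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes; `shift` moves the whole distribution.
    x0 = rng.normal(0.0 + shift, 1.0, size=(n, 2))
    x1 = rng.normal(2.0 + shift, 1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(500)              # training distribution
X_iid, y_iid = sample(500)                  # same distribution
X_shift, y_shift = sample(500, shift=1.0)   # shifted distribution

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_iid, y_iid))      # high accuracy
print(clf.score(X_shift, y_shift))  # noticeably lower accuracy
```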

Beyond that, the set of training data can never fully encompass the entire
domain of possible inputs/outputs, else what need would we have for a machine
to predict new ones? The oddities that researchers find are actually problems
in the relationship of their training data to their testing data, because the
i.i.d. assumption is never actually true. We can only try to get as close as
possible.

Solving this problem outright is nigh impossible, so we try to reduce it as
much as possible.

~~~
noskynethere
> They try, usually with simple probabilistic techniques and input element-
> wise transforms, to mimic some function that produces approximations for a
> given set of inputs and outputs

It's my understanding that this is basically how the brain works. My personal
theory is that enough of these "dumb" inputs, wired correctly together, leads
to emergent behavior that is consciousness.

~~~
crazypyro
I imagine the brain more like hundreds (thousands, millions, I'm not sure of
the magnitude) of different specialized neural networks. So you have a
specific neural network for picking out colors, and that feeds (along with a
bunch of other inputs) into the neural network for picking out object
boundaries, and that feeds into the neural network for object recognition,
and so on. In comparison, most neural networks used in computer vision are
generally trying to do the entire process in a single network (although they
are also feedforward, so the difference is more complex than just composing
the various layers). I think there is something to the idea that we need the
neural network to have points where it can spit out a partial piece of the
eventual goal model: things like object boundaries before recognizing the
object, recognizing eyes before the entire face, etc. The key is being able to
get those logical partial-model results at various layers of the network.
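
A hedged sketch of what that could look like in code: a network that exposes its intermediate stages (boundary-ish features, then part-ish features) alongside the final label, instead of one opaque mapping. The layer sizes are arbitrary placeholders, not a tested architecture.

```python
# Sketch: a classifier that surfaces partial results at each stage.
# Layer shapes are arbitrary placeholders, not a tested architecture.
import torch
import torch.nn as nn

class StagedNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.edges = nn.Conv2d(3, 16, 3, padding=1)   # boundary-ish features
        self.parts = nn.Conv2d(16, 32, 3, padding=1)  # part-ish features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        e = torch.relu(self.edges(x))
        p = torch.relu(self.parts(e))
        # Return the intermediate maps too, so each stage can be
        # inspected (or supervised) on its own sub-task.
        return self.head(p), e, p
```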

~~~
noskynethere
I'm outside my depth here, but isn't that what hierarchical learning is? (I
think it's popularly called "deep learning", which I assume means the neural
nets have depth?)

From what I've read, we aren't going more than a few dozen levels deep.
But it also sounds like this technique is very successful in image
recognition.

Am I incorrect in my understanding?

------
rcthompson
I think the "static is a cheetah" example just highlights that the neural net
is not identifying the best features with which to identify a cheetah. Or
alternatively, if, during training, the neural net was only fed pictures of
nature with or without cheetahs in them, then what it's really telling you is
not the probability that a picture contains a cheetah, but rather the
conditional probability that a picture contains a cheetah given that it is a
picture of nature. In other words, that picture of static is most likely well
outside the domain of the training set, so classifying it involves a large
extrapolation, with all the attendant amplification of errors.

Perhaps what we need is a classifier that can tell when a picture is
significantly outside of its training experience and say "I've never seen
anything like that before" instead of giving an arbitrary classification.
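
The crudest version of such a classifier is easy to sketch: refuse to answer whenever the model's own confidence falls below a threshold. (Softmax confidence is a weak out-of-distribution signal, and fooling images often get _high_ confidence, but it illustrates the abstention idea.) `model` is assumed to be any classifier returning logits for a single batched image.

```python
# Sketch: abstain when confidence is low. Softmax confidence is a weak
# OOD signal (fooling images are often *high* confidence); this only
# illustrates the "refuse to classify" idea.
import torch.nn.functional as F

def classify_or_abstain(model, image, threshold=0.9):
    probs = F.softmax(model(image), dim=-1)   # image: (1, C, H, W)
    confidence, label = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None   # "I've never seen anything like that before"
    return label.item()
```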

------
h0l0cube
There is one important point that seems to be lost every time an article uses
adversarial examples to argue that neural nets are deficient: not even a human
is perfect at understanding an image in a blink. When we see an image, we
decode a stream of impulses. Saccades follow, observing a number of areas of
the image in detail at different orientations, so our interpretation of the
image comes from numerous samples, not a single image.

IMHO, it's actually quite amazing that such primitive software neural networks
can understand an image in a blink, in one 'sample'. Conversely, it's not
inhuman to see pictures in clouds, Rorschach tests, or even static.
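
A crude software analogue of those saccades is easy to sketch, assuming `model` is any image classifier returning logits: classify several crops and flips of the same image and average the predictions rather than trusting one glance. (This is essentially test-time augmentation.)

```python
# Sketch: average predictions over several "glances" at the image
# (test-time augmentation). `model` is an assumed classifier; `image`
# is an unbatched (C, H, W) tensor.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def multi_glance(model, image, n_crops=5):
    h, w = image.shape[-2:]
    views = [image, TF.hflip(image)]
    for _ in range(n_crops):
        top = torch.randint(0, h // 4, (1,)).item()
        left = torch.randint(0, w // 4, (1,)).item()
        views.append(TF.resized_crop(image, top, left,
                                     3 * h // 4, 3 * w // 4, [h, w]))
    probs = torch.stack([F.softmax(model(v.unsqueeze(0)), dim=-1)
                         for v in views])
    return probs.mean(dim=0)   # consensus over the samples
```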

------
Qantourisc
Another reason that neural nets do crazy things: they have little knowledge.
We see a dog by looking at its shape, etc.; the neural net, by looking at
features.

A solution to this, I think, would be splitting up the task: "finding eyes",
"is this fur", etc. Freeze these networks, and use them as input. This would
prevent most of the errors given in these examples (see the sketch below).
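
This is close to what transfer learning already does. A minimal sketch, assuming a standard torchvision backbone: freeze a pretrained feature extractor and train only a small classifier on top of its outputs.

```python
# Sketch: freeze a pretrained feature network and use it as input to a
# new classifier (standard transfer learning with torchvision).
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False   # freeze the learned "feature" networks

# Only this final layer is trained; everything upstream stays fixed,
# acting like a pre-built parts detector feeding the decision.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g. dog vs. not-dog
```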

A lot of animals know what eyes look like regardless of the species. I'm not
saying nature always has the best solution, but it probably had a good reason
to have a specialized built-in "head/eyes" detector.

~~~
seiji
_We see a dog by looking at its shape, etc.; the neural net, by looking at
features._

Have you ever seen a kid learn language? At first, every four-legged animal is
either a cat or a dog. Every car is a truck (or every truck is a car). The
specific categories get learned over time, but they aren't in any sense
"natural."

------
faragon
_How_ vs _why_

I started learning genetic algorithms (random permutations selected/promoted
to pick better combinations of Hamiltonian paths, quickly obtaining
near-optimal solutions to NP-complete problems) and neural networks (mainly
backpropagation, using different topologies) circa 1995, in college. It was
like magic: you easily learn _how_ to use those techniques to solve problems.
However, the _why_ was not so evident: while in the case of genetic algorithms
you can understand that a space of solutions with many similar-cost solutions
lets you pick a "good one" easily, in the case of neural networks the idea I
got was of a "magic box" that was supposed to do some
interpolation/extrapolation, effectively generating a convex surface that
gives a "solution"/"location" for a given input. It was like having alien
technology: you could make it work for simple things without being able to
really understand _why_ it was working (except for trivial/small networks).
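
For readers who haven't met them, a minimal sketch of the genetic-algorithm idea described above (a mutation-only variant, skipping crossover for brevity): evolve a population of permutations, keeping whichever candidate tours are shortest.

```python
# Sketch: a mutation-only genetic algorithm over permutations
# (candidate tours), a simplified version of the idea above.
import random

def tour_length(tour, dist):
    # `dist` is an n-by-n matrix of pairwise distances.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def evolve(dist, pop_size=100, generations=500):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[: pop_size // 2]          # selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = sorted(random.sample(range(n), 2))
            child[i:j] = reversed(child[i:j])     # mutation: reverse a segment
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda t: tour_length(t, dist))
```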

Do people really understand what complex neural networks do? I.e., is it still
trial and error only, or are they built on purpose?

~~~
IndianAstronaut
>Do people really understand what complex neural networks do

From a biological sense, the only neural net we have a decent understanding of
is that of the C. elegans nematode worm, with a few hundred neurons. Large
neural networks like that of our brain are beyond our understanding, and I
suspect the same is the case for ANNs.

------
hellofunk
Sigh. The phrase "neural network" is getting tossed around these days with
some kind of sensationalist flair, an almost romanticized notion of an
impending explosion of superhuman phenomena. As Alex Smola says with much
frustration in one of his classes, "it's only math!" It's a fancy term for
straightforward mathematics. Bloggers are so often making them out to be much
more than they are.
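
For what it's worth, "only math" here really is short: a complete forward pass of a two-layer network is two matrix multiplies and a max. (The weights below are random placeholders; a trained net differs only in the values.)

```python
# Sketch: a full two-layer neural network forward pass is just
# matrix multiplies plus a nonlinearity. Weights here are random
# placeholders; training only changes the numbers.
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(784, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 10)), np.zeros(10)

def forward(x):                           # x: flattened 28x28 image
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU(x W1 + b1)
    return hidden @ W2 + b2               # class scores
```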

~~~
kdoherty
There's something to be said for both sides of this. I think you're right
about the way NNs have been blown out of proportion. I find, as I work across
fields, that they're the most abused and misunderstood ML technique, since
everyone and their dog has heard of NNs at this point. But on the other hand,
there's a lot to be said for simple building blocks combining to form
something very complex. The biological "inspiration" behind NNs makes them an
attractive starting point for investigating this line of thought.

------
kazinator
If you don't know why a neural network suddenly reports that video static is a
cheetah, the main reason is that you also don't know, in the first place, why
it reports that a picture of a _cheetah_ is a cheetah!

Emphasizing our lack of understanding of the false classifications is
misleading.

------
Quanticles
This article relies on carefully constructed images that maximize one
particular outcome by summing lots of small errors into it.

For it to work, the pixels have to be very accurately tweaked. If the tweaks
were off by one pixel, the whole thing would fall apart.

The assumption is that this cannot be done to a person. But there is no way to
put a pixel-level "exploit of sorts" into a person to test that theory.

The real answer is probably that a little bit of noise on the input disrupts
the exploit. It could never happen to a person, because eyes have noise. At
the same time, it could never happen to a robot either, because cameras have
noise.
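
That claim is testable. A rough sketch, assuming `model` and a precomputed `adversarial` image are available from an attack like the ones described in the article: add small Gaussian noise repeatedly and see how often the fooling label survives. (The article's cross-model results suggest it survives more often than this comment predicts.)

```python
# Sketch: test whether input noise disrupts an adversarial image.
# `model` and `adversarial` are assumed to come from elsewhere.
import torch

def labels_under_noise(model, adversarial, sigma=0.02, trials=20):
    labels = []
    for _ in range(trials):
        noisy = (adversarial + sigma * torch.randn_like(adversarial)).clamp(0, 1)
        labels.append(model(noisy).argmax(dim=-1).item())
    return labels   # how often does the fooling label persist?
```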

~~~
eru
From the article:

Such screwy results can’t be explained away as hiccups in individual computer
systems, because examples that send one system off its rails will do the same
to another. After he read “Deep Neural Networks Are Easily Fooled,” Dileep
George, cofounder of the AI research firm Vicarious, was curious to see how a
different neural net would respond. On his iPhone, he happened to have a now-
discontinued app called Spotter, a neural net that identifies objects. He
pointed it at the wavy lines that Clune’s network had called a starfish. “The
phone says it’s a starfish,” George says.

Spotter was examining a photo that differed from the original in many ways:
George’s picture was taken under different lighting conditions and at a
different angle, and included some pixels in the surrounding paper that
weren’t part of the example itself. Yet the neural net produced the same
extraterrestrial-sounding interpretation. “That was pretty interesting,”
George says. “It means this finding is pretty robust.”

In fact, the researchers involved in the “starfish” and “ostrich” papers made
sure their fooling images succeeded with more than one system. “An example
generated for one model is often misclassified by other models, even when they
have different architectures,” or were using different data sets, wrote
Christian Szegedy, of Google, and his colleagues. “It means that these neural
networks all kind of agree what a school bus looks like,” Clune says. “And
what they think a school bus looks like includes many things that no person
would say is a school bus. That really surprised a lot of people.”

------
ilaksh
We are trying to emulate some human capabilities. In doing so we have created
a complex system which can be almost as hard to predict as a human. That's not
surprising; that's just physics.

Having a few odd classifications is not surprising either. It hasn't been
trained like a human, and it is being asked to select a class where "random
squiggle" isn't an option.

To get human-like intelligence we need to develop them more as virtually
embodied agents. Sort of like kids.

------
pokkanome
Neural Networks do not think. They perform a series of computations, and
arrive at a result. It is not intelligence. It is a very useful tool for
solving problems that can be quantified and that we can generate a lot of data
from, but ultimately, it is still human intelligence that is interpreting the
problem and result.

~~~
ThrustVectoring
Human brains do not think. They perform a series of computations, and arrive
at a result. It is not intelligence. It is a very useful tool for solving
problems that can be translated into neural inputs.

~~~
pokkanome
And what evidence do you have that human neurons do any kind of computation at
all?

Generally, they are stimulated by some sensation until they reach a certain
threshold that causes them to fire. That is the basic kind of functionality
that nodes in a neural network try to simulate.
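
That textbook picture of a neuron, accumulate weighted inputs and fire past a threshold, fits in a few lines (the classic McCulloch-Pitts style unit):

```python
# Sketch: the threshold unit being described, McCulloch-Pitts style.
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0   # fire or stay silent
```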

But human neurons are not dependent on numbers and change in much more complex
ways than a few parameters. The brain requires a lot less data than these
networks to learn new concepts. And the concepts that these networks learn are
all ideas that humans came up with.

A network does not hold an opinion, it takes in inputs and gives outputs. I do
not mean to say that we cannot make a network to simulate a brain, but that is
not what we have right now.

~~~
ThrustVectoring
And what evidence do you have that silicon transistors do any kind of
computation at all?

Generally, they are stimulated by some voltage until they reach a certain
threshold that causes them to change state.

~~~
pokkanome
A transistor is not doing computation. It is flipping a bit. Those bits are
flipped in binary patterns with logic gates to do the computation.

And a transistor is not a neural network node.

~~~
ThrustVectoring
That pretty much answers why I think human brains are doing computation. The
neurons fill the same role as transistors, and the patterns of neural
connections fill the same role as the way the transistors are wired together.

I'm not saying that these are simple computations, or ones that are easy to
understand, or ones that can be done in reasonable timeframes on silicon.

For more useful discussion, I'd like to hear what you think the definition of
"computation" is - I suspect we're using slightly different meanings for the
word.

~~~
pokkanome
I am using computation in the strictly mathematical sense, as in dealing with
numbers. I do not think that our minds operate through a constant stream of
numbers that become thoughts.

In that way, a computer and a human are fundamentally different. You cannot
simulate human thoughts as pure numbers. I think we need some extra layer of
yet-to-be-invented abstraction to achieve that goal.

Of course, we could go the route of trying to create a new model of thought
based around numbers, but that is proving to be difficult to understand. It
would not be a good idea to try to build an intelligent system that we cannot
completely understand because then all we could do is hope it works as we
intended.

~~~
ThrustVectoring
The map is not the territory. "Computation" is fundamentally an abstraction
for talking about what various algorithms have in common. Algorithms
themselves are a high-level description of a series of well-defined tasks.
Computers aren't literally doing computation in the sense you are describing.
What they're doing is simple physics with lots of voltage levels. The
"computation" is a useful high-level description of what the computer is
doing.

I agree that there's a missing abstraction for talking about human thought;
it's a terribly complicated subject that isn't well understood. That doesn't
mean that the human brain is doing anything fundamentally different from what
computers can do. We don't have a high-level description of how human thought
works like we do with a computer, but that doesn't mean human thought involves
some kind of magic.

------
brock_r
I see a cheetah in the static, too...

~~~
igravious
It's right there stalking through the lush pixelated grass. Quite
unmistakable, I agree.

------
kordless
What if you showed the algorithm/network/whatever to itself? You could 'take a
picture of it' and then train it to know that's itself, which would then
change what it looks like. Keep doing that until it doesn't change, or you
fail trying.

~~~
eru
You could get into a loop. Though, what are you trying to accomplish here?

~~~
kordless
Consciousness?

