
Artificial Intelligence is stupid and causal reasoning won't fix it - wallflower
https://arxiv.org/abs/2008.07371
======
tasty_freeze
I recall reading about Searle's Chinese Room argument in Daniel Dennett's
"Consciousness Explained" about 25 years ago. Maybe the exposition there was
more complete than the version given in this article, but the argument of this
article is terrible.

The author says that the English speaker who simply manipulates symbols and
follows rules will never get the joke written in Chinese, even if the people
external to the room understand it and think it was produced by an
intelligence that understands the joke.

But that contains the assumption that the human is the consciousness in that
arrangement, when in fact the human is just the energy source which drives the
hardware. One might as well say that a computer can never create a 3D drawing
because its power supply doesn't understand arithmetic.

~~~
dboreham
Ugh. I remember listening to Searle on the BBC in the early 1980s, and
becoming quite irate, as a 17-year-old does, at his mistake of drawing the
boundary within which intelligence should be found around the operator
rather than the whole room.

~~~
variaga
You can make a pretty good argument that the _rulebook_ is the Chinese-
speaking intelligence in the room. "But it's an inanimate object, if the human
stops following the rules it will be inert". Yeah, and if your mitochondria
stop producing ATP you'll be inert too.

Also, people taking Searle's position rarely reckon with just how big that
rulebook would have to be.

~~~
wrs
Daniel Dennett coined the term “intuition pump” for this, which is a great
concept to keep in mind so you don’t get caught by one. Wikipedia: An argument
“designed to elicit intuitive but incorrect answers by formulating the
description in such a way that important implications of the experiment would
be difficult to imagine and tend to be ignored.”

------
mmazing
The paper is right: all the impressive achievements amount to just curve
fitting.

But ... maybe that's why the human brain works so well. The tens of billions
of neurons in your brain are designed to adapt to patterns and adjust to
external stimuli from sensory input.

Just as compression algorithms can only compress something so far before
losing data, maybe the limited amount of processing we can throw at this
problem limits the usefulness of our solutions.
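
(To make "curve fitting" concrete, a minimal Python sketch with toy data;
nothing here is from the paper, it's just the literal sense of the phrase:)

```python
# Least-squares fit of a polynomial to noisy samples: "curve fitting"
# in the literal sense. A trained neural network is the same move with
# a far more flexible curve and far more data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(50)  # noisy "training data"

coeffs = np.polyfit(x, y, deg=5)  # fit a degree-5 polynomial
y_hat = np.polyval(coeffs, x)     # the fitted model's predictions

print("train MSE:", np.mean((y - y_hat) ** 2))
```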

~~~
Darvon
If it were just a matter of horsepower, we could already perfectly simulate
animals with simpler brains.

~~~
hprotagonist
"why does OpenWorm need a distributed supercomputer when the thing it's
simulating needs about 10 millicalories a day" is one of those category of
questions people don't like thinking about too much.

~~~
anotheryou
Are you sure the bulk of it is not emulating the physics and the
environment?

I mean, how many resources can you even throw at "302 neurons and 95 muscle
cells"?

edit: down the wormhole I go

just look at the screenshots of the "brain":
[https://github.com/openworm/c302](https://github.com/openworm/c302)

and the sim:
[https://github.com/openworm/OpenWorm/blob/master/README.md#q...](https://github.com/openworm/OpenWorm/blob/master/README.md#quickstart)

Reading further, more physics:

> Some simulators enable ion channel dynamics to be included and enable
> neurons to be described in detail in space (multi-compartmental models),
> while others ignore ion channels and treat neurons as points connected
> directly to other neurons. In OpenWorm, we focus on multi-compartmental
> neuron models with ion channels.
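
(For a sense of scale: a "point neuron" at the cheap end of that spectrum
is a few lines of code, e.g. this leaky integrate-and-fire sketch in Python
with made-up parameters. The multi-compartmental, ion-channel models
OpenWorm favors solve Hodgkin-Huxley-style equations for every compartment
of every neuron, which is where the compute goes.)

```python
# A leaky integrate-and-fire "point neuron" - the cheap kind of model
# the quote contrasts with OpenWorm's multi-compartmental, ion-channel
# models. Parameters are illustrative, not taken from OpenWorm.
dt = 0.1          # ms, integration step
tau = 10.0        # ms, membrane time constant
v_rest = -65.0    # mV, resting potential
v_thresh = -50.0  # mV, spike threshold
v_reset = -70.0   # mV, post-spike reset

v, spike_times = v_rest, []
for step in range(5000):                         # 500 ms of simulated time
    i_in = 20.0 if 1000 <= step < 4000 else 0.0  # injected drive, in mV
    v += dt / tau * (v_rest - v + i_in)          # leaky integration
    if v >= v_thresh:                            # threshold crossing = spike
        spike_times.append(step * dt)
        v = v_reset

print(len(spike_times), "spikes in 500 ms")
```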

------
mellosouls
Paper rendered as web page:

[https://www.arxiv-vanity.com/papers/2008.07371/](https://www.arxiv-vanity.com/papers/2008.07371/)

------
foxes
Speculation in the absence of information. There are definitely biological
organisms that make the same kind of mistakes as our curve fitting, e.g.
the chick that pecks the red dot on its mother's beak. It's selected to
peck the biggest red dot, so if you put a giant red dot next to the nest,
it will starve. It seems reasonable to believe human brains are lots of
very good curve fitters working together. If anything, since this is so
successful, maybe it's all we actually need: specific neural nets for
specific tasks. Maybe dualism is correct and consciousness is something
special, but we have no idea how to test that.

------
fizixer
\- Loaded premise (that the AI community thinks causal reasoning, whatever
form that may take, is the silver bullet)

\- Failure to distinguish between narrow and human-level AI

\- Zero mention of attention/transformer models

\- Zero mention of BERT or GPT, let alone GPT-3

Note to self: ignore and file away in the gary-marcus box.

~~~
variaga
Yeah, was not impressed. Skipping over the (opinionated) survey of recent
AI techniques (and their failure modes) and philosophical theories of
cognition that make up the bulk of the paper, the only bit that claims to
be novel (pp. 29-31) is an argument I will summarize as follows:

1) any computational AI can be represented as a finite state machine, or
FSM (the author calls it a finite state automaton; same thing)

2) when said computational AI performs an "act of cognition", it (as an
FSM) will iterate through a defined series of states, based on a defined
series of inputs

3) It is possible to build a simpler FSM composed of a counter and a lookup
table that would take the same series of inputs + the counter as an input, and
produce the same state/output as the original computational AI

4) since the response to stimulus is identical, the 2 finite state machines
are equivalent.

5) if the state machines are equivalent, they must be equivalently conscious

6) the counter+look-up table is obviously not conscious ("reductio ad
absurdum")

7) from (6) and (5) no computational AI can be conscious

To me, this argument fails in the following way: the only way to actually
construct the "simpler" finite state machine in step (3) above is to let
the computational AI interact with the world first and record its
combinations of input and state. There is no way to predict what series of
states an arbitrary FSM will go through in response to a particular series
of inputs without actually running it; that would be equivalent to solving
the halting problem, since any program can be encoded as an FSM, and if you
could predict the state sequence of such an FSM, you could tell whether it
would ever enter the 'halt' state.
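
(A toy Python sketch to make that concrete: the "simpler" machine of step
(3) is just a replay, and its lookup table can only be filled in by running
the original FSM first.)

```python
# Step (3) made concrete: record a run of an FSM, then "replace" it
# with a counter indexing a lookup table of the recorded outputs.
# The table cannot be built without running the original machine.
def run_fsm(transition, output, start, inputs):
    """Run a Mealy-style FSM and return its output trace."""
    state, trace = start, []
    for sym in inputs:
        trace.append(output[(state, sym)])
        state = transition[(state, sym)]
    return trace

# Toy 2-state FSM: outputs 1 while the parity of 1s seen so far is odd.
transition = {("even", 0): "even", ("even", 1): "odd",
              ("odd", 0): "odd", ("odd", 1): "even"}
output = {("even", 0): 0, ("even", 1): 1,
          ("odd", 0): 1, ("odd", 1): 0}

inputs = [1, 0, 1, 1, 0]
recorded = run_fsm(transition, output, "even", inputs)  # must run it first!

# The "simpler" machine: counter + lookup table, replaying the trace.
lookup = dict(enumerate(recorded))
replayed = [lookup[counter] for counter in range(len(inputs))]

assert replayed == recorded  # identical outputs, per steps (3)-(4)
print(recorded)              # [1, 1, 0, 1, 1]
```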

IMO this is analogous to arguing that:

1) the animatronic band at Chuck E. Cheese could be programmed to play
identical music to that which has been (previously) performed by a human band
(and recorded in perfect detail).

2) because they produce identical outputs the 2 bands are equivalent

3) if they are equivalent, they must equally be said to create original music

4) the animatronic band obviously doesn't create original music

5) from (3) and (4) no band can create original music

He also elides any discussion of whether or not actual human intelligence
manages to avoid the failure modes he uses to conclude that neural networks
are not intelligent - e.g. he mentions adversarial examples fooling visual
classifier networks without mentioning that "optical illusions" exist and
people will reliably misperceive certain images in certain ways too.
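
(For anyone unfamiliar, a toy FGSM-style sketch in Python of how
adversarial examples work, against a made-up linear classifier rather than
a real network:)

```python
# FGSM-style adversarial perturbation against a toy linear classifier
# sign(w . x). Weights and input are made up; real attacks do the same
# thing to deep networks via the gradient of the loss.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy classifier weights
x = np.array([0.3, -0.2, 0.4])   # input, correctly classified as +1
print("clean score:", w @ x)     # 0.9 -> class +1

eps = 0.3
x_adv = x - eps * np.sign(w)     # tiny step against the score's gradient
print("adv score:  ", w @ x_adv) # -0.15 -> flipped to class -1
```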

I actually agree that neural nets as they currently exist are aggressively
stupid, but the author concludes way too much.

TL;DR, author starts from a premise that there is something uniquely special
about human consciousness that machines can't duplicate, and reaches the
conclusion that there is something uniquely special about human consciousness
that machines can't duplicate.

~~~
AgentME
Also, depending on how physics and the mind work, it's possible that human
minds are representable as finite state machines. In that case, if this
argument were valid, it would be an argument against human consciousness
too.

------
wrnr
To be honest, I struggle with causal reasoning myself. I can intuit on some
level that one thing causes another, and that in the process of causing the
"other" the "thing" itself is reflexively brought about. This leaves me
stuck trying to make sense of things infinitely far apart in space and time
all influencing each other instantaneously.

~~~
renox
You can't send information faster than c, so if causality == communication,
it's not instantaneous.

------
fouc
"all the impressive achievements of deep learning amount to just curve
fitting"

~~~
avmich
Yeah, and then the same Judea Pearl also admits that "we didn't expect curve
fitting to work so well".

~~~
visarga
That's because their mental model of AI is not good enough. Apparently
humans don't really 'understand AI'; the irony.

By the way, causal reasoning is not the product of just one human; it is
based on experiments, observations, and careful model building by the whole
of human society over long spans of time.

We didn't understand even basic things such as infections and the role of
hygiene until recently. What does that say about our causal reasoning powers?
That we were stupid?

We know how COVID spreads and many people are still exposing themselves
without care, sometimes causing their own demise or somebody else's. Why isn't
causal reasoning working for us all the time?

I think humans can only do causal reasoning when they have a very good
model of the thing they are trying to understand. Causal intelligence is
not in our brains naturally; it depends on having access to specific
models.

------
SpicyLemonZest
I feel like the author has unreasonably strict expectations with some of his
examples. Humans sometimes crash cars, misunderstand grocery lists, or learn
to say things just as racist as Tay did - surely he wouldn't argue that any
human who does dumb things lacks phenomenal consciousness.

~~~
tanatocenose
Not to say that _any_ dumb person lacks consciousness, but you might be
surprised how many cognitive scientists think not everyone is “conscious.”
There could be zombie/automatons living all around us.

~~~
visarga
> There could be zombie/automatons living all around us.

So now we have 'consciousness of the gaps'. It's an ever-retreating
concept: as AI advances, what we call consciousness recedes into these
gaps. Now we're discussing how some humans are not really 'conscious';
what next? Maybe in the end the only remaining 'conscious' people will be
philosophers who don't believe in AI.

------
carrolldunham
>but that AI machinery - qua computation - cannot understand anything at all

Ugh, here we go. I swear this was all gone over in a very similar post just
a week ago, where it was pointed out that if an author says physical things
can't 'understand' or whatever else, they are implying some non-physical
soul-spark in humans.

~~~
nurettin
Then it was pointed out wrongly, and this isn't some spiritual talk. If you
are able to retrofit new knowledge into the observations and categories
you've previously formed, correcting what you already know and changing
perspective so that you can take in new inputs without causing
contradictions, then you understand it. Do you think curve fitting does
this?

~~~
SpicyLemonZest
I'm not sure what you mean by "curve fitting" here. I would definitely say
that a deep neural net has observations and categories, and I think it's
pretty reasonable to characterize the training process as retrofitting new
knowledge into it by correcting its knowledge and changing its perspective.

~~~
nurettin
This is the reply I was expecting, and I think you are right, but a CNN
doesn't take any contradictions into account. It doesn't test the input for
correctness; it just tries to categorize, and fucked if the results make no
sense.

~~~
SpicyLemonZest
I'd agree, but I'm not sure how much this is an inherent problem. To what
degree does it just reduce to needing a good conceptual framework for a "makes
no sense" category?

~~~
nurettin
I would be interested in the kind of model which will tell you something is
wrong when you show it a black square and call it a black square, then show it
a white square and call it a black square. Not the kind of model which would
average the results and adapt to nonsense.
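
(A toy Python sketch of that distinction, taking "contradiction" in the
deliberately simple sense of one label being pinned to visibly different
inputs:)

```python
# Toy contrast: an "averaging" learner absorbs conflicting labels,
# while a consistency checker flags them. Features and labels are made
# up, and "same label, different feature" is a deliberately simplified
# notion of contradiction.

class AveragingLearner:
    def __init__(self):
        self.sums, self.counts = {}, {}

    def observe(self, feature, label):
        self.sums[label] = self.sums.get(label, 0.0) + feature
        self.counts[label] = self.counts.get(label, 0) + 1

    def meaning(self, label):
        return self.sums[label] / self.counts[label]  # averages the nonsense

class ConsistencyChecker:
    def __init__(self):
        self.meanings = {}  # label -> feature it was first tied to

    def observe(self, feature, label):
        seen = self.meanings.get(label)
        if seen is not None and seen != feature:
            raise ValueError(f"{label!r} meant {seen!r} before, now {feature!r}?")
        self.meanings[label] = feature

BLACK, WHITE = 0.0, 1.0
avg, chk = AveragingLearner(), ConsistencyChecker()

avg.observe(BLACK, "black square")   # black square called "black": fine
avg.observe(WHITE, "black square")   # white square called "black": absorbed
print(avg.meaning("black square"))   # 0.5 - adapted to nonsense

chk.observe(BLACK, "black square")
try:
    chk.observe(WHITE, "black square")  # same label, different thing
except ValueError as err:
    print("checker:", err)              # tells you something is wrong
```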

------
euske
I can't stand the unquestioned use of "understanding" here and there. Did
the author give a more precise definition of it anywhere? (I couldn't find
one in the paper.) It reads more like a ranty blog piece than an academic
article. It's sad that some people will read this and get confused.

------
konjin
Thanks for the link. It will take quite a while to read the paper.

------
vixen99
We all know it but somehow ... 'it is not so much that AI machinery cannot
grasp causality, but that AI machinery - qua computation - cannot understand
anything at all.'

~~~
visarga
> Figure (8) shows a screen-shot from an iPhone after Siri, Apple’s AI
> ‘chatbot’, was asked to add a ‘litre of books’ to a shopping list; Siri’s
> response clearly demonstrates that it doesn’t understand language

So his conclusion is based on Siri, an AI assistant I would agree is
'stupid', but hardly representative of the SOTA. It's unfair to judge AI by
Siri: Siri is a mass-produced system with scaling costs, and Apple can't
host GPT-3 for everyone yet. Not even Google can use the latest and
greatest neural nets in mass-produced AI systems, because they don't have
the hardware and it would not make economic sense.

------
pippy
> the ability to infer causes from observed phenomena

So 'reading the room'. In social settings you can't follow logic and
rationale blindly, because there are these things on two legs, full of meat
and organs, that don't like it.

------
dandanua
Relevant: YouTube's algorithm blocked a video on a popular chess channel
with 1800 videos, supposedly for "racism":
[https://www.youtube.com/watch?v=KSjrYWPxsG8](https://www.youtube.com/watch?v=KSjrYWPxsG8)

~~~
tasty_freeze
What is your point? People make bad calls all the time too. I got a ticket for
parking my car behind a sign saying "no parking beyond this point."

As a secondary point, I suspect YouTube's classifier is a bag of heuristics
they are constantly fiddling with. Its failures are no evidence that
developing AGI is futile.

~~~
dandanua
1\. I don't need "a point" to post relevant information.

2\. But I do indeed have one: giving algorithms so much power without
appropriate checks and an appeal process is clearly wrong.

3\. This doesn't imply we shouldn't do science or develop AI systems.

------
avmich
It's like the conspiracy theory that Americans didn't land on the Moon.
Maybe a few decades will pass and people will be surprised to learn that,
on a leading technical forum, people entertained the idea that no progress
was being made.

