
There is no general AI: Why Turing machines cannot pass the Turing test - sel1
https://arxiv.org/abs/1906.05833
======
asdfasgasdgasdg
I am skeptical of any paper with this result, because several very plausible
events would likely prove by example that computers can respond as a human
would. It requires accepting certain assumptions, though.

1: Physicalism is true. Nothing exists that is not part of the physical world.

2: The physical world obeys mathematical laws, and those laws can be learned
in time.

2.1: The physical contents of the human body can eventually be learned with
arbitrary/sufficient fidelity.

3: Any mathematical rule can be computed by a sufficiently advanced computer.
(Edit: or maybe a better assumption: the mathematical laws that underlie the
universe are all computable.)

4: Computational power will continue to increase.

Subject to these assumptions, we will eventually gain the ability to simulate
full physical human beings within computers. Perhaps with some amount of
slowdown, but in the end, these simulated humans would be able to converse
with entities outside the computer. In all likelihood, computers will pass the
Turing test long before this. But if they don't, simulated humans seem like
something that is certainly possible or even probable, and therefore the
result of this paper is likely incorrect.

~~~
GTP
I was also of the opinion that it might be possible to reach general AI in
the future, but now I will take time to read this paper. I'm not skeptical
of it, because we know your assumption number 3 to be false: there are
mathematical functions that aren't computable. For a proof you can look here
[1] or google it.
[https://www.hse.ru/mirror/pubs/share/198271819](https://www.hse.ru/mirror/pubs/share/198271819)
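GTP's point can be made concrete with the classic diagonalization
construction. A minimal Python sketch, where `halts` stands for a
hypothetical halting decider (no such total function can actually exist; the
names here are invented for illustration):

```python
# Sketch of the classic diagonalization argument (Turing, 1936): no total
# computable function can decide halting. `halts` is any purported decider
# that, given a zero-argument function, returns True iff it would halt.

def make_contrarian(halts):
    """Build a program that the given decider must misjudge."""
    def contrarian():
        if halts(contrarian):
            while True:      # decider said "halts", so loop forever
                pass
        return None          # decider said "loops", so halt immediately
    return contrarian

# Any concrete decider fails on its own contrarian. The naive decider that
# always answers True claims its contrarian halts, yet that contrarian
# would in fact loop forever.
naive_halts = lambda f: True
c = make_contrarian(naive_halts)
print(naive_halts(c))  # True, but actually running c() would never return
```

The same contradiction arises for any decider you plug in, which is the
whole point: the decider cannot be both total and correct.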

~~~
asdfasgasdgasdg
Thanks. My assumptions are imprecise, because as I mentioned I am no academic.
So I appreciate your correction. To make the post stronger, replace assumption
3 with "All of the mathematical rules which underlie physics are computable."
IMO, this doesn't materially affect the plausibility of these assumptions.

~~~
YeGoblynQueenne
>> "All of the mathematical rules which underlie physics are computable."

I really have to ask- what are those mathematical rules that underlie physics
and how do you know that they are computable?

~~~
asdfasgasdgasdg
All of chemistry and physics, as far as we're aware, are described by
mathematical rules that can be computed. Things like the strong and weak
forces, gravity, etc. all have mathematical descriptions. There may be other,
non-computable aspects to physics, but I am not aware of them if so.

~~~
YeGoblynQueenne
>> There may be other, non-computable aspects to physics, but I am not aware
of them if so.

In that case, what do you base your assumption (3) on?

~~~
asdfasgasdgasdg
Let's consolidate this discussion into the other thread, since it seems like
you're offering the same objections here and there. :)

------
dmreedy
This would be a very good paper if it were titled, "What makes general AI
hard", and it didn't try to make any claims about uncomputability.

Beyond the somewhat useful collection of some of the prickly points of
whatever it is that humans do that we call Intelligence, this particular
discussion isn't bringing much to the table in support of its incredibly
strong claims. It is functionally an extended application of Searle's Chinese
Room argument to these hard points, usually built on question-begging premises
(for example, regarding "biography" as a component of dialogue, quote,
"Because machines lack an inner mental life – as we do not know how to
engineer such a thing – they also lack those capabilities".)

The paper addresses the traditional response to Searle thus: "How, then, do
humans pass the Turing test? By using language, as humans do. Language is a
unique human ability that evolved over millions of years of evolutionary
selection pressure... machines cannot use language in this sense because they
lack any framework of intentions". This is even blunter than Searle's actual
counter, that there's something specific about biological machinery that makes
it more capable in this regard than digital machinery. Instead, we're simply
_told_ that language is a special Human thing, Humans are not Turing-
computable, and thus it's probably something computers can't do.

I am a big proponent of anti-hype in AI technology and of the idea that
language cannot be separated from the general human experience of
Intelligence. I'm very frustrated when people assume we've solved a given
problem in AI because we've been able to tackle some toy examples. And I'm a
big fan of proving what _can't_ be done. But this is not a particularly
valuable exercise in any of those things, perhaps beyond prodding some of the
hubris of the current cult of "we're almost there".

~~~
simonh
Part 2 of the article is really good. It’s a shame if people are put off by
the premise of the article. The authors have already pre-judged the outcome
though.

“Because machines lack an inner mental life...”

Right, well that’s it then. Case closed. No point in researching general AI
any more, might as well put all those researchers on unemployment benefits.

The authors do say we don’t know how to engineer a machine with an inner
mental life and consciousness, and this is true. It’s why like you (dmreedy)
I’m a skeptic of claims that general AI is just round the corner. It isn’t and
the Singularity is a good long way off. Our current efforts in AI are
pitifully primitive, at best many orders of magnitude dumber than a fruit fly.
That doesn’t lead me to believe therefore that we will never learn to solve
this problem, or that this problem is in principle not solvable.

~~~
roywiggins
For those skeptical that you even _need_ an inner mental life to be
intelligent, Peter Watts's novels "Blindsight"* and "Echopraxia" are a
must-read.

* [https://rifters.com/real/Blindsight.htm](https://rifters.com/real/Blindsight.htm)

~~~
asdfasgasdgasdg
These are great novels, but they are just novels. To the best of our
knowledge, all intelligent species in the universe have an "inner sight," or,
the ability to observe at least some of their own mental machinery. Although
the starfish of Blindsight seem possible, whether they are likely seems like
another question entirely.

------
marcinzm
The paper seems to basically say, as I read it, "the current approaches for
modeling human behavior are unlikely to be perfect enough so no approach will
ever work." I find that to be filled with a lot of unsupported strong
assumptions. Specifically, it talks about modeling language with machine
learning based on input-output pairs.

But, for example, if you took a human brain, deconstructed its physics down
to individual chemical reactions then you're no longer trying to predict a
black box with input-output pairs. You literally have a copy of the black box
in mathematical terms.

Like most of these papers it basically boils down to positing a solution to a
problem as the only solution and then claiming that solution doesn't work so
no solution would work.

~~~
notahacker
> if you took a human brain, deconstructed its physics down to individual
> chemical reactions

Frankly, the idea that we can mathematically model 10^21 simultaneous
[unobserved] chemical reactions in an individual's head in real time
sufficiently well to result in a generalised model of cognition which can be
applied to other uses seems more of a stretch than modelling language with
ML...

~~~
DuskStar
Why on earth does it have to be real time? "simulate a human brain" isn't a
practical suggestion for a path forward, it's an _existence proof_ that says
at least one route is possible (and thus others probably are too).

~~~
notahacker
> Why on earth does it have to be real time?

Because you're not going to pass a Turing test with an approach which takes
days to simulate the process that generates a reply. (And since otherwise
you've got to align the speed of the inputs to the brain simulation with the
speed it simulates the chemical reactions in response to them, slowing things
down may not be simplifying things anyway). It's not an _existence proof_ if a
simulation of a human brain depends on technology and processes no more
grounded in things which exist than "ask the omnipotent deity to do it for
us". If you can simulate each individual chemical reaction in a byte, you're
still using a year's worth of internet traffic to simulate the number of
simultaneous reactions going on, so even if you Moore's Law away the issue
with the required processing power and assume into existence the technology to
observe whole brain activity at the molecular level, you probably need Yahweh
or Krishna to program the simulation...
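The storage arithmetic behind this objection is easy to check. A
back-of-envelope sketch using only the rough figures granted above (10^21
reactions, one byte each, and global internet traffic on the order of a
zettabyte per year):

```python
# Back-of-envelope check of the scale claim above. All figures are the
# rough ones granted in the comment, not measurements.
reactions = 10**21           # simultaneous chemical reactions in a brain
bytes_per_reaction = 1       # one byte each, as generously assumed
snapshot_bytes = reactions * bytes_per_reaction

zettabyte = 10**21           # internet traffic: very roughly ~1 ZB/year
years_of_traffic = snapshot_bytes / zettabyte
print(years_of_traffic)      # 1.0: one state snapshot ~= a year of traffic
```

And that is a single static snapshot, before simulating any reaction
dynamics over time at all.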

~~~
DuskStar
I think we're using different values of "existence proof" here. You seem to be
taking it as "we can't build this, therefore it is not a proof". It was
intended as "it is theoretically possible to build this", and "it will run
really, really slow" isn't a disqualifier there. (And it could still pass a
Turing test - you'd have to give the human equivalent constraints, but that's
true for any "text on a screen" test anyways)

How about this. Would being able to perfectly simulate a _Caenorhabditis
elegans_ brain be an existence proof for whole-brain simulation, or do you
think that there's some sort of discontinuity before you get to human brains?

~~~
notahacker
Not really, I'm arguing that "it is theoretically possible for humans to build
and program Turing machines to undertake human whole brain simulation" relies
entirely upon magical thinking. It's not so much a case of "we don't have the
processing power", though we don't, as "there's no theoretical basis for
assuming the human mind[1] has the capability to parse and understand the
information content of a molecular structure so complex it contains the human
mind within it in sufficient detail to program an accurate simulation of a
human mind" (the real time stuff is moot, but since the whole point of a
Turing test is a human can be convinced that another human is sat the other
side of a terminal, an extended delay whilst the machine parses the complexity
of "what is your name?" is a pretty hard fail).

I think it's pretty obvious there's a discontinuity between simulating
mechanical responses of a nematode worm to stimulation of fewer neurons than
I've lost typing this sentence and achieving AGI by human brain emulation,
though we're finding the worm pretty tough. Nobody is arguing nematode worms
have intelligence, for a start...

[1]obviously one could assume sufficiently powerful AGI could do it, but if
the prior existence of AGI is a prerequisite for a particular approach to AGI,
we can safely ignore it as a proof of routes to AGI.

------
nfiedel
Skimmed a bit and found some snippets from which I can't take this paper
seriously, as it dismisses unsupervised learning / language models over large
datasets. Yes, sec 4.3.4 discusses recent work in this area, but only
briefly, and dismisses it by cherry-picking the least positive result of many.

"Only if we have a sufficiently large collection of input-output tuples, in
which the outputs have been appropriately tagged, can we use the data to train
a machine so that it is able, given new inputs sufficiently similar to those
in the training data, to predict corresponding outputs"

This ignores recent work with large language models that do generalize,
zero-shot, to novel tasks.

"supervised learning with core technology end-to-end sequence-to-sequence deep
networks using LSTM (section 4.2.5) with several extensions and variations,
including use of GANs"

This reads like something generated from an LM (e.g. GPT-2):

* Where is any mention of attention or the Transformer?

* GANs? Have any recent works used GANs successfully for text? There are a
few, e.g. CycleGAN, but not widespread AFAICT.

~~~
YeGoblynQueenne
>> This ignores recent work with large language models that do generalize,
zero-shot, to novel tasks.

Which work is that?

~~~
nfiedel
OpenAI trained a large (1.5B parameter) Transformer model called GPT-2 on a
diverse set of pages from the web. From their paper, GPT-2 "achieves state of
the art results on 7 out of 8 tested language modeling datasets in a zero-shot
setting"

Blog entry with link to the paper: [https://openai.com/blog/better-language-
models/](https://openai.com/blog/better-language-models/)

~~~
YeGoblynQueenne
Thank you for the link.

I'm not sure I'm convinced by OpenAI's claim that their model performs zero-
shot learning. It depends on what exactly they mean by zero-shot learning.
My understanding, from reading the linked article (again; I remember it from
when it was first published) is that, although their GPT-2 model was not
trained on task-specific datasets there was no attempt to ensure that testing
instances for the various tasks they used to evaluate its zero-shot accuracy
were not included in the training set. The training set was a large corpus of
40 gigs of internet text. The test set for e.g. the Winograd Schema challenge
was a set of 140 Winograd schemas (i.e. short sentences followed by a shorter
question), so it's very likely that the training set had comprehensive
coverage of the testing set, for this task anyway. I don't know about the
other tasks.
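The contamination worry above is checkable in principle: scan the training
corpus for the test items before crediting zero-shot performance. A toy
sketch with invented strings (real overlap analyses, such as the n-gram
check OpenAI reports for GPT-2, are fuzzier than exact substring matching):

```python
# Toy train/test contamination check: flag test items that appear verbatim
# in the training text. All strings here are made up for illustration.
train_corpus = (
    "the trophy would not fit in the brown suitcase because it was too big "
    "... plus 40GB of other scraped web text ..."
)
test_schemas = [
    "the trophy would not fit in the brown suitcase because it was too big",
    "the city councilmen refused the demonstrators a permit",
]
leaked = [s for s in test_schemas if s in train_corpus]
print(len(leaked))  # 1: accuracy on a leaked schema isn't zero-shot evidence
```

Only performance on the items that survive such a filter tells you anything
about generalization to genuinely unseen tasks.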

------
malft
This sentence sums up the paper:

"Turing machines can only compute what can be modelled mathematically, and
since we cannot model human dialogues mathematically, it follows that Turing
machines cannot pass the Turing test."

~~~
roywiggins
It feels more like an argument that _chatbots_ will never exhibit general AI.

Which ought not to be controversial. The point of the Turing test isn't to
provide a blueprint (just optimize human dialogue and you'll eventually get
general AI) but a test to see that your general AI works. You might build a
general AI that fails the Turing test, but you won't be able to pass the test
without a general AI. That's the idea.

Unfortunately people have taken the wrong idea from the Turing Test and
decided to attack the "faking human communication" thing directly. Which is
fun! But anyone who in 2019 thinks that better chatbots will eventually
develop general AI is delusional. I don't know if anyone with more than a
passing interest actually does believe this, so it feels like this paper is
arguing against a straw man.

~~~
simonh
Quite, the idea that mathematical modelling of language will lead to general
AI is absurd. The simplest way to defeat chatbots and mathematical language
generation models is to teach them something new, like a game or other rules
based system and ask them questions about it and then play it. They fall flat
on their face immediately because they have no ability to build, interrogate
and adapt models of systems.

The authors’ credence in Searle’s Chinese Room argument is telling. The
Chinese Room is misdirection. We are invited to consider an agent in a room
manipulating symbols on cards and asked could such a system be considered
conscious. In fact there might need to be trillions of these agents in rooms
covering an area many orders of magnitude larger than the Earth, manipulating
millions of trillions of symbols every millisecond. Asking if a system like
that could be conscious is a whole different question.

“Here however Turing commits the fallacy of petitio principii, since he
presupposes an equivalence between dialogue-ability (as established on the
basis of his criterion) and possession of consciousness, which is precisely
what his argument is setting out to prove.”

Sigh, no. Dialogue ability isn't claimed to be _equivalent_ to possession of
consciousness, that’s putting the cart before the horse. It’s a possible
product of consciousness. You could have a conscious system incapable of
sensible dialogue, but the point of the test is you can’t have sensible
dialogue without consciousness. That’s a claim and it’s arguable, sure, but
dialogue ability doesn’t lead to consciousness. That’s daft. They and Searle
look at this from entirely the wrong direction.

~~~
roywiggins
And everyone knows there are conscious agents who can't hold a sensible
conversation: toddlers. They're more conscious than any chatbot could ever be,
but they'd fail the Turing test. So would _dogs_, and dogs show more
recognizably cognitive ability than a chatbot. Let alone nonhuman primates,
who are all much, much smarter than a chatbot and would all fail the Turing
test.

It would be one thing if we had built an apelike intelligence and found it
impossible to make something smarter, but as we can't model them either,
worrying about not entirely understanding language seems beside the point.

------
dsr_
"We don't know how to do it, therefore it is impossible" is silly.

A real result would be "We prove that human-equivalent intelligence is
impossible", which would be quite a shocker since we have the existence proof
of actual humans.

~~~
Conjoiner
we do not have a workable definition of intelligence

------
mindcrime
_Since 1950, when Alan Turing proposed what has since come to be called the
Turing test, the ability of a machine to pass this test has established itself
as the primary hallmark of general AI._

That's... not true. I mean, to the general public at large, sure, they think
"the Turing test is the hallmark of AI." But I don't think any serious AI
researchers actually agree with that sentiment. And for good reason: among
others, the fact that "programming" a machine to pass the Turing test is
basically programming it to lie effectively. A useful skill to have in some
contexts, perhaps, but not exactly the defining trait of intelligence. Beyond
that, the "Turing Test" (or "Imitation Game") as originally specified, if
memory serves correctly, was fairly under-specified with regards to rules,
constraints, time, etc.

This whole thing also blurs the distinction between "human level intelligence"
and "human like intelligence". It seems reasonable to think that we could
build a computer with intelligence every bit as general as that of a human
being, and which would still fail the Turing Test miserably. Why? Because it
wouldn't actually have human _experiences_ and therefore - unless trained to
lie - would never be able to answer questions about human experiences. "Have
you ever fallen down and busted your face?" "Did it hurt like hell?" "Did you
ever really like somebody and then they blew you off and you felt really
depressed for like a week?", "have you ever been really pissed off when you
caught a friend lying to you?" etc. An honest computer with "human level"
intelligence would be easily distinguishable as a computer when faced with
questions like that, but it might still be just as intelligent as you or I.

------
ilaksh
The paper does not have any redeeming qualities and the title and abstract do
not even align with the content.

In my opinion, conversation and other high level skills are sort of the icing
on the cake of general intelligence. I believe that the key abilities that
enable general intelligence are those that humans share with many other
animals.

So I think that a research goal of animal-like intelligence will give the most
progress as long as the abilities of more intelligent animals like mammals are
the goal.

I think that people who have worked closely with animals or had a pet will
more easily recognize that.

Animals adapt to complex environments. They take in high bandwidth data of
multiple types. They have a way to automatically create representations that
allow them to understand, model and predict their environment. They learn in
an online manner.

No software approaches true emulation of the subtleties of behavior and
abilities of an animal like a cat or a dog.

Obviously it's another step to say that leads to human intelligence. I'm not
trying to prove it, but will just say that it seems mainly to be a matter of
degree rather than quality. If cats and dogs are not convincing for you, look
at the complexity of chimpanzee behavior.

So this is just a half baked comment on a thread, and I would not try to
publish it, but I don't think that the paper is actually much more rigorous
and yet we are supposed to take it seriously.

arXiv is amazing and we should not change it, but you have to keep in mind
that there is literally zero barrier for entry, and anyone's garbage essay can
get on there with the trappings of real academic work. So you just have to
read carefully and judge on the merit or total lack thereof.

------
_0ffh
Unless this work disproves the Church–Turing thesis, I suppose it can safely
be disregarded.

Well, unless you 1) want to ascribe supernatural powers to the human brain, or
2) assert that human intelligence is not general. The little cynic in me is
gleefully considering option 2 right now...

------
ksaj
By this argument, prop planes, jets, helicopters and rockets don't fly because
they don't flap their wings like general flying creatures do.

To me, it seems the question of general AI is bordering on semantic word
games. We'll always come up with new reasons something isn't generally
intelligent this way.

------
czr
Was this written by GPT-2?

------
Bootvis
Is this paper any good? The way the abstract is written makes me wary of the
claimed results.

~~~
johnfactorial
It is. It is a thorough study of the necessary components of real human
dialogue, and a well-defended claim that there is no model, nor even any
existing TYPE of model, which can model human dialogue. Human dialogue, the
paper says, is a temporal process, and the two mathematical models for such
processes--differential and stochastic--are insufficient.

From the paper: "For example, it is not conceivable that we could create a
mathematical model that would enable the computation of the appropriate
interpretation of interrupted statements, or of statements made by people who
are talking over each other, or of the appropriate length of a pause in a
conversation, which may depend on context (remembrance dinner or cocktail
party), on emotional loading of the situation, on knowledge of the other
person’s social standing or dialogue history, or on what the other person is
doing – perhaps looking at his phone – when the conversation pauses."

Optimists in the comments here have hope for advances in mathematics that
would give us a new method for modeling that could be applied. Maybe their
hope isn't unfounded. I'm just a dude who read an academic paper. But I did
enjoy it.

~~~
marcinzm
>Optimists in the comments here have hope for advances in mathematics that
would give us a new method for modeling that could be applied. Maybe their
hope isn't unfounded. I'm just a dude who read an academic paper. But I did
enjoy it.

I'd say most of the arguments in the comments boil down to this: human brains
do it, so for the paper's claim to hold, human brains would have to run on
mechanisms other than non-quantum physics, which is mathematically modelable.
That is a strong assumption.

~~~
YeGoblynQueenne
As far as I can tell, the strongest assumption around is that, because we can
model some physical processes, we can model _all_ physical processes.

We have some maths. We haven't solved all of physics. We don't even know if
solving all of physics will help us model intelligence. We have no way to know
whether, even if we had an accurate model of the function of the brain and of
our intelligence, we would be able to run it as a program, and on an actual
computer.

We know _so_ little about what intelligence is, and the dead certainty that,
because we have maths, we should be able to reproduce it, is just as unfounded
as the opposite assumption, championed by the article.

------
dooglius
This rubbish does not deserve to be on arXiv, let alone HN.

~~~
sctb
> _Please don't post shallow dismissals, especially of other people's work. A
> good critical comment teaches us something._

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

------
hellllllllooo
Hasn't the Turing test been discredited as a test of general AI?

Overall from reading the abstract it seems like a pretty obvious conclusion.
Same applies to robotics where the only successful cases are very constrained.

It has recently occurred to me to ask why we have decided to try to solve
autonomous driving before solving any other seemingly easier robotics
problems. Other than robo-vacuums, which aren't particularly complex, we have
jumped straight to trying to solve one of the hardest unconstrained robotics
and AI challenges in automotive environments.

edit: Getting downvoted, if you disagree could you reply with why?

~~~
marcinzm
>It has recently occurred to me to ask why we have decided to try to solve
autonomous driving before solving any other seemingly easier robotics
problems. Other than robo-vacuums, which aren't particularly complex, we have
jumped straight to trying to solve one of the hardest unconstrained robotics
and AI challenges in automotive environments.

I'd say it's because other problems are actually difficult in hidden ways or
lack much of a value proposition given existing mechanical aids. Cars are in
some ways simple because they have plenty of space for electronics and have an
easy to automate set of controls. That's not even getting into industrial
robotics which is very popular.

~~~
hellllllllooo
Industrial robotics are very well constrained and the environment can be
adapted to them, which is why they're successful. I was thinking more about
consumer robotics, of which the robo-vacuum is the only real autonomous
product right now.

IMO the answer is that it's very hard to build reliable robots + AI even in
simple environments, and AVs are still very dependent on the driver. The size
of the potential market if it pays off is huge but removing the driver
completely is going to take a long time.

~~~
marcinzm
Automobiles are some of the most constrained consumer problems in my opinion.
There's strong laws on how cars can operate and the environments they operate
in. Cars operate in constrained and relatively simple ways. Most other
consumer problems are optimized around a human body's abilities which are very
complex.

~~~
hellllllllooo
It's still incredibly complex. Reliably handling an area like downtown SF
requires a robot to detect, understand and predict a lot of diverse human
behavior. All the edge cases lead me to think there will be a supervisory
driver in the car for a long time unless we constrain the environment further.

