
Why general artificial intelligence will not be realized - sega_sai
https://www.nature.com/articles/s41599-020-0494-4
======
abetusk
Here is, to me, the relevant line:

"Hubert Dreyfus, who argued that computers, who have no body, no childhood and
no cultural practice, could not acquire intelligence at all. One of Dreyfus’
main arguments was that human knowledge is partly tacit, and therefore cannot
be articulated and incorporated in a computer program. "

I have not read the rest of the article but in the introduction it's stated:

"The article further argues that this is in principle impossible, and it
revives Hubert Dreyfus’ argument that computers are not in the world."

Wiktionary defines tacit as "Not derived from formal principles of reasoning"
[1].

So the main argument is that humans have intelligence that is impossible to
express through reason or codification. In other words, humans have a literal
soul, divorced from the physical world, that cannot be expressed in our
physical world, thus making any endeavour to create artificial intelligence
impossible.

This is a dualist line of reasoning and, in my opinion, is nothing more than
theology dressed up in philosophy.

I would much rather the author just flat out say they are a dualist or that
they reject the Church-Turing thesis.

~~~
SiVal
Tacit knowledge is knowledge that results from adapting to experience, like
learning to tie shoelaces with practice, rather than something like finding
the derivative of sin(x) by the usual mathematical proof method (reasoning
step by formal step).

Every deep learning system has tacit knowledge: it knows a chair when it sees
one but can't explain how it knows. It just adapted its connections to
training until it got it right most of the time.

So computers are provably capable of what, in humans, is defined as tacit
knowledge, and they can be given sensors and actuators to learn from. A car can
learn to parallel park with practice. It can't explain how it does it, but you
_can_ copy the trained system into a new car.

I don't see why you couldn't produce a combination of sensors and actuators
that vastly exceeds what any human is capable of.

But AGI isn't that. It's a variety of information processing techniques
(algorithms) deployed as a toolbox managed by meta-techniques (more
algorithms) that know how to deploy the others in various combinations. We
don't yet know much about the management algorithms, but I don't see any
reason in principle why we couldn't eventually find some and invent others.
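
A toy sketch of that toolbox-plus-manager shape, with everything here (tool
names, the dispatch rule) invented purely for illustration:

    # Toy sketch of "algorithms managed by meta-algorithms". The hard,
    # unsolved part is the manager's selection policy; here it is just a
    # lookup, which is exactly what real AGI work cannot assume.
    def arithmetic_tool(task):
        return eval(task["expr"])  # a narrow specialist skill

    def recall_tool(task):
        return {"capital of France": "Paris"}.get(task["query"])

    TOOLS = {"math": arithmetic_tool, "recall": recall_tool}

    def manager(task):
        return TOOLS[task["kind"]](task)  # trivial meta-algorithm

    print(manager({"kind": "math", "expr": "2 + 2"}))                 # 4
    print(manager({"kind": "recall", "query": "capital of France"}))  # Paris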

~~~
baddox
> Every deep learning system has tacit knowledge: it knows a chair when it
> sees one but can't explain how it knows.

A traditional computer program that can find the derivative of sin(x) also can
not explain how it knows.

~~~
dreamcompiler
> A traditional computer program that can find the derivative of sin(x) also
> can not explain how it knows.

Oh but it _could_, and that's the point. Some computer differentiation
techniques just follow the same rules you learned when you took calculus. They
typically don't show you which rules they followed, but they easily _could_.
Other differentiation techniques are more exotic but there's no reason they
couldn't show you the chain of computations and/or deductions they went
through to arrive at that derivative. Such programs can easily justify their
results and even teach humans calculus if configured properly.
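
To make that concrete, here is a minimal sketch (not any particular CAS) of
rule-based differentiation that records each rule it fires, which is what lets
such a program justify its answer:

    # Minimal rule-based differentiator that logs which rule it applied.
    # Expressions are tuples: ("x",), ("const", c), ("sin", inner).
    def diff(expr, log):
        kind = expr[0]
        if kind == "x":
            log.append("identity rule: d/dx x = 1")
            return ("const", 1)
        if kind == "const":
            log.append(f"constant rule: d/dx {expr[1]} = 0")
            return ("const", 0)
        if kind == "sin":
            log.append("chain rule: d/dx sin(u) = cos(u) * u'")
            return ("mul", ("cos", expr[1]), diff(expr[1], log))
        raise ValueError(f"no rule for {kind}")

    log = []
    print(diff(("sin", ("x",)), log))  # ('mul', ('cos', ('x',)), ('const', 1))
    print(log)                         # the explicit chain of rules followed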

Contrast that with the chair example. It is impossible right now to write a
program that can show a human the chain of reasoning it went through to decide
some image is a chair, _because no such chain exists._ There's a giant
iterated polynomial with nonlinear threshold functions and a million
coefficients, but there's no chain of reasoning.
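
By contrast, a toy classifier's entire "decision" is a pile of numeric
coefficients (sizes and weights invented for illustration; a real network just
has vastly more of them):

    import numpy as np

    # A toy "chair detector" forward pass: matrix multiplies plus
    # nonlinear thresholds. There is no rule trace to print; the
    # decision lives entirely in the numbers.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(16, 64))     # first layer of coefficients
    W2 = rng.normal(size=(64, 1))      # second layer
    x = rng.normal(size=16)            # stand-in for image features
    hidden = np.maximum(0, x @ W1)     # nonlinear threshold (ReLU)
    score = (hidden @ W2).item()
    print("chair" if score > 0 else "not a chair")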

~~~
strken
I'm not sure a human can explain how they know a chair is a chair, either.
They can come up with a post-hoc rationalisation, but that's not guaranteed to
really represent the decision-making process they went through.

At best you get an answer that describes one or more conscious decisions and
leaves the unconscious decisions out, such as "it looks a lot like a stool
because it's low to the ground and has three legs, but it has a back, so I
think it's a chair"; when the real answer is that they have a bunch of
pattern-matching visual neurons, and those neurons feed into other neurons
that detect more complicated patterns, and the concept of a chair eventually
emerges.

------
PaulDavisThe1st
This is possibly the worst article I have ever seen in Nature.

It is barely at the level of an undergraduate essay on AI (or AGI or ANI, as
the author prefers). A hand-waving argument about "computers not being in the
world" that relies on not bothering to define "computers", "being in" or "the
world". Convenient avoidance of almost every more subtle anti-Searle/anti-
Dreyfus critic (notably Dennett). Almost wilful narrowing of a much broader
argument about the nature of reality and the connection between causality and
conventional parametric science, when these things lie at the heart of the
author's argument (such as it is).

It is amazing that Nature chose to publish this.

~~~
mturmon
Note, this is not the flagship journal; it's an affiliated journal under the
Nature imprint. There are about 40 others:
[https://www.nature.com/siteindex](https://www.nature.com/siteindex) (see
under "N").

It's a Springer joint. Note that the counterpart journal, _Science_, is
published by the AAAS, and is not run for profit the way _Nature_ is. If you're
interested in science, _Science_ is a great one-stop shop and it's about $120
per year.

Sometimes one wonders whether the chase for exciting stories affects some
_Nature_ publications.

~~~
maest
Slightly off-topic:

I would be interested in keeping up with the latest cutting-edge science
developments, but I expect a Nature subscription is too heavy-duty for that.
Not necessarily because of the price, but I don't think I would end up using
it.

Instead, what would fit my interests better would be a science newsletter
that, say, once a month summarizes the most interesting recent developments
and sends them to my inbox. I would then use that as a jumping-off point and
even read the full articles I care about in Nature.

Are there any such newsletters?

~~~
mturmon
How about one of these e-letters [0] backed up by a _Science_ digital
subscription to actually read the content?

Referring to a recent TOC [1], the "Research Articles" would likely be more
in-depth than you want, but there are also summarizing "Perspectives",
"Features", and "Reviews" that I generally find OK for medium-difficulty
reading.

[0] [https://www.sciencemag.org/subscribe/get-our-newsletters](https://www.sciencemag.org/subscribe/get-our-newsletters)

[1] [https://science.sciencemag.org](https://science.sciencemag.org)

------
Strilanc
> _In the book [Dreyfus] argued that an important part of human knowledge is
> tacit. Therefore, it cannot be articulated and implemented in a computer
> program. [...] [...] [...] These skills cannot just be learned from
> textbooks. They are acquired by instruction from someone who knows the
> trade._

This is a classic example of the Mind Projection Fallacy [1], where a property
of how you think is assumed to be a property of reality.

It's true, for humans, that it is simply not possible to be told how to ride a
bike and then be good at riding a bike. No matter how carefully and completely
you explain to a human what they will have to do, when you put them on a bike
for the first time they will struggle.

The mistake is assuming that this "have-to-really-do-it" effect is a
limitation intrinsic to bike-riding-knowledge instead of a limitation in human
learning and communication mechanisms. The mistake is assuming this property
will generalize to all bike riding systems.

In a computer system, what would be tacit knowledge for a human is no longer
tacit. If you create a computer program that can successfully control one bike
riding robot, that program can be copied to a freshly built bike riding robot
of the same make. The new robot will then successfully ride a bike the first
time it is placed on it, without any hint of the human "have-to-really-do-it"
struggling phase.
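
Concretely, the "copying" is nothing more exotic than serializing the learned
parameters; a hedged sketch (network shape and filename invented for
illustration):

    import torch

    # Robot #1's learned riding policy is just a bundle of weights...
    policy = torch.nn.Sequential(
        torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
    torch.save(policy.state_dict(), "bike_policy.pt")

    # ...so "teaching" robot #2 is a file copy, with no practice phase.
    fresh = torch.nn.Sequential(
        torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
    fresh.load_state_dict(torch.load("bike_policy.pt"))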

It can be intuitively useful to imagine computers as having the ability to
"super communicate" in a way that humans simply can't. That has its
advantages, and its disadvantages. If you had super communication you could
super-explain to a blind person what it was like to see and if they ever did
gain their sight there would be no "Oh so _that_ 's what you meant" moment. On
the other hand, a heroin addict could super-explain being addicted to heroin
to you.

1:
[https://en.wikipedia.org/wiki/Mind_projection_fallacy](https://en.wikipedia.org/wiki/Mind_projection_fallacy)

~~~
TheOtherHobbes
The point isn't bike riding. Bike riding is an example of a _class_ of
problems, and your solution begs the question.

The point is that humans and computers operate differently. The human approach
is based on adaptive experiential heuristics.

The computer approach is based on explicit formalism. (Even in neural
networks, there's still a formal model. It's just made of weightings instead
of logic paths.)

The epistemology of these approaches is completely different. The problem
isn't getting a computer to ride a bike, it's getting a computer to _learn to
ride a bike how a human learns._

Why would anyone do this? Because adaptive experiential heuristics are far
more flexible and generalisable than explicit formalisms. And - it suggests
here - you can't have real AGI without them.

So the problem then becomes unpicking what "adaptive" and "experiential"
really mean. Both rely on huge accumulations of tacit knowledge and _tacit
motivations._

If this isn't obvious, consider that a human child will learn how to ride a
bike and then go and have a lot of fun with it. An ideal bike-riding computer
doesn't even have a concept of fun.

The human experience of fun is a complex system of experimentation,
exploration, reward, and challenge, combined with physical, emotional, and
mental correlates.

This matters because play in childhood helps develop the heuristics that
adults use for problem solving, and for personal motivation and satisfaction.

Even more simply, the problem is the difference between building a workable
but dumb bike riding machine and building a machine that will improvise bike
riding as a goal for itself, will "enjoy" the experience, and will generalise
from that to mastery of other domains.

~~~
Strilanc
> The problem isn't getting a computer to ride a bike, it's getting a computer
> to learn to ride a bike how a human learns.

This is just a more sophisticated way of assuming that there's something
human-intrinsic about bike riding. Human-like is not the only way to approach
doing or learning. Whatever works, works.

~~~
codr7
But humans learn by themselves, from observing the world around them and
drawing conclusions, which is a major advantage and a core problem in strong
AI.

The only reason it works is that a lot of humans put a lot of effort into more
or less figuring out exactly how to ride a bike. It doesn't generalize at all;
teaching the same robot to ride a skateboard would mean doing it all over
again.

~~~
phonypc
> _It doesn't generalize at all; teaching the same robot to ride a skateboard
> would mean doing it all over again._

This doesn't sound right, but I'm just a curious layman when it comes to AI.

I'm thinking AlphaGo vs. AlphaZero. Hypothetically couldn't the same relation
exist between an AlphaBike and AlphaRide?

~~~
webmaven
Yes, and not just hypothetically; more generally, transfer learning is
definitely a thing and has been shown to reduce training requirements even for
seemingly unrelated tasks.
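
The standard recipe, for the curious, looks roughly like this (a PyTorch-style
sketch; the pretrained backbone and two-class head are illustrative, not from
any of the systems named above):

    import torch
    import torchvision.models as models

    # Transfer learning in miniature: reuse features learned on one
    # task (ImageNet) and retrain only the final layer for a new one.
    backbone = models.resnet18(pretrained=True)
    for p in backbone.parameters():
        p.requires_grad = False                     # freeze the features
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)  # new head
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    # ...then train as usual; far less data is needed than from scratch.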

------
sega_sai
I posted the article because it is thought-provoking, although I disagree with
it. The reason is statements like these: "The real problem is that computers
are not in the world, because they are not embodied." And: "And I shall argue
that they cannot pass the full Turing test because they are not in the world,
and, therefore, they have no understanding."

First, I think that whether something is embodied or not doesn't matter. Our
senses could in the end likely be approximated by arrays of numbers fed to a
computer, so I don't think lack of a body is such an issue. Regarding
understanding by machines, that is clearly the issue for current AI, but at
least based on what I know about modern machine translation, there is already
something that works with concepts/abstract terms and their relations, which
looks to me like a beginning of abstract reasoning...

~~~
jchrisa
My hunch agrees with the paper. Because computers operate on the rational
numbers instead of the real numbers, they can never be embodied. They are
always in a simulation...

~~~
credit_guy
This is like saying CD players will never be as good as vinyl record players
because the former use digital signals (rational numbers) and the latter
analogue ones (real numbers).

~~~
pwm
My audiophile friends would vehemently agree with this statement :)

~~~
michaelmrose
Call us when they can back up this assertion in double-blind tests.

~~~
MertsA
They can rather easily discern between the two in double-blind tests. The
vinyl recording has a noticeable amount of noise in it. Vinyl record players
can't exceed the kinds of signal to noise ratios that human hearing can
detect. The more ridiculous claims are the audiophiles who pretend to
distinguish between ultra high sample rate audio and 44100Hz or 48000Hz
sampled audio.

~~~
amanaplanacanal
You are comparing the wrong things in the test. Play the vinyl record,
recording it at CD quality, then compare that recording with the original
vinyl. They won’t be able to tell the difference.

------
godelski
Embodiment being a killer feature for AGI seems weird to me. Not only is "what
is a body" vague, but it is entirely possible to build a body. So why would it
prevent AGI?

We do know that embodiment is a sufficient condition for general intelligence:
if we assume humans have general intelligence then it is clearly a sufficient
condition. But the question of necessary is more interesting because we have
to actually ask what embodiment means.

Is a computer an embodied machine? What about a robot that can explore its
environment? What about a simulation with an environment? If no, what makes us
distinct? If yes, does embodiment even matter?

To me it is clear that feature space is the more important issue. It is also
clear that embodiment helps with creating a more complex and rich feature
space. The ability to move around and interact with your environment greatly
expands the complexity of the environment.

I think the bigger question is about our ability to create rich enough
environments to generate intelligence. Even if we can get machines in bodies,
can we get them into the complex and evolving environmental pressures that we
experienced over millions of years (without robots living for similar
timescales)? It is reasonable to think that at some point in time we'd be able
to have that kind of computational power. It is also possible that the
learning function is incredibly difficult. With a large and complex feature
space there are many local extrema and it may be possible that general
intelligence is only possible with a few of these (essentially we can have an
estimation similar to the Drake Equation). But overall, I'm not sure there
really is any issue that means AGI is impossible. Maybe at current knowledge
and computational limits, maybe for all of the foreseeable future! But I don't
see any limitations in physics that are killers.

~~~
dane-pgp
> We do know that embodiment is a sufficient condition for general
> intelligence: if we assume humans have general intelligence then it is
> clearly a sufficient condition.

I'm not sure what you mean by "sufficient condition" here. Consider:

"We do know that having a moustache is a sufficient condition for general
intelligence: if we assume moustachioed humans have general intelligence then
it is clearly a sufficient condition."

~~~
theptip
It’s philosophical/logical jargon referring to whether one event causes
another:

[https://en.m.wikipedia.org/wiki/Necessity_and_sufficiency](https://en.m.wikipedia.org/wiki/Necessity_and_sufficiency)

You’re rightly confused because the GP formulated a non-sequitur. Embodiment
is if anything a necessary, but not sufficient condition for the human brain
to develop intelligence. It’s not a sufficient condition on its own for
general intelligence; cats are embodied too.

~~~
godelski
I would not agree that embodiment is a necessary condition for human level
intelligence.

The reason I use sufficient is more broad. A cat does have intelligence. Human
level? No. Intelligence? Yes. As I explained in the post, embodiment enables a
rich feature space, which is what makes it a sufficient condition. It isn't
just the simple act of having a body, but the ability to interact with the
environment, creating a richer environment. I cannot think of any creature
(by definition all having bodies) that doesn't have some form of intelligence.
But we need to distinguish "human level" intelligence from "intelligence" and
"human level" from "human like." These are different things.

------
vidarh
Any article making such a claim needs to start by explaining what evidence
it has that human brains are not computing devices that can be replicated
or simulated.

Because absent evidence that there is something fundamental preventing us from
one day copying the structure of a human brain and ending up with a working
device, whatever claims they make are hand-waving.

~~~
lurquer
>Any article making such a claim needs to start by explaining what evidence
it has that human brains are not computing devices that can be replicated
or simulated.

If X and Y appear dissimilar, the burden of proof is on whoever would argue
they are similar.

If one contends that a brain and a computer, and the functions of each, appear
similar, then one is being disingenuous.

~~~
yellowstuff
Surely the burden of proof is on someone claiming something is impossible? I
skimmed the article but I didn't see any support for his argument beyond
pointing to the limitations of existing technology, and asserting that these
limits were insurmountable.

~~~
lurquer
>Surely the burden of proof is on someone claiming something is impossible?

You can't prove a negative in this sense. In general, we know things are
possible... but we never know if things are impossible.

And, don't call me surely.

~~~
baddox
That's certainly not the case. We can and absolutely have proven things to be
impossible.

~~~
mindcrime
_We can and absolutely have proven things to be impossible._

What would you consider to be an example of that? And how does that square
with the Problem of Induction which is the hole in our entire system of
empirical knowledge generation?

~~~
baddox
You can’t make a Turing machine which can solve the halting problem or an
algorithm which can determine whether any given mathematical statement is
true.
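
The halting-problem half of that claim fits in a few lines; a minimal sketch,
where halts() is a stub standing in for the decider that provably cannot exist:

    def halts(func) -> bool:
        """Hypothetical decider: True iff func() eventually halts."""
        raise NotImplementedError("provably cannot be implemented")

    def paradox():
        if halts(paradox):   # if the decider says "halts"...
            while True:      # ...loop forever instead,
                pass
        # ...and if it says "runs forever", halt immediately.

    # Whatever halts(paradox) answers is wrong, so no such decider exists.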

~~~
lurquer
You can prove that 2+2!=5. You could even say that, given the rules governing
math, it is ‘impossible’ that 2+2=5. The domain, however, is synthetic and
composed of a system of axioms and rules.

If I change the underlying axioms and rules, I could certainly prove that
2+2=5, just as I can prove that the sum of a triangle's interior angles
exceeds 180 degrees, or that a number squared can equal -1. (Redefining what a
straight line means for the former, and inventing an imaginary number system
for the latter.)
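
The first kind of impossibility (relative to a fixed set of axioms and rules)
is even mechanically checkable; a one-line Lean sketch:

    -- Lean 4: within ordinary arithmetic, 2 + 2 = 5 is refuted by
    -- computation, so its impossibility is a theorem of the system.
    example : 2 + 2 ≠ 5 := by decide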

Proving what can and cannot follow given a set of rules, however, is not what
philosophers mean when they speak of impossibility in the real world.

------
jablongo
Given the epic title I was pretty unimpressed by this line of thinking. The
argument seems to rely on (among other things) certain presumed implications
of “Computers not inhabiting the world” - a comical line of reasoning that
cites a lack of a childhood among other absurd reasons for why computers can’t
be intelligent. The author assumes that intelligence can only be derived from
experiencing the world in full... which seems to imply that paraplegics are
inhibited from intelligent thought. Taking that further - are we not capable
of intelligent thought during dreams? Fjelland goes on to argue that with no
way to fully interact with the world, computers are barred from intelligence.
Firstly there’s no reason for us to believe that computers will stop
broadening the methods by which they connect to, measure, and actuate the
world around them. Secondly it seems plausible that a person locked in a room
with a radio to communicate with the outside world would eventually acquire
intelligence without ever fully experiencing the world.

~~~
proc0
> Secondly it seems plausible that a person locked in a room with a radio to
> communicate with the outside world would eventually acquire intelligence
> without ever fully experiencing the world.

I'm not sure about that. A person locked in a room with a radio would probably
go crazy or regress into some primitive mental state. Also, the article
specifically claims that General Intelligence has this requirement of
embodiment, not just intelligence as you imply.

However I agree with your first conclusion that AI will eventually include
bodies and it won't take long to integrate software intelligence and embodied
intelligence to get something greater that could resemble AGI.

~~~
Thorrez
>A person locked in a room with a radio would probably go crazy or recess into
some primitive mental state.

Helen Keller managed to do well without vision or hearing.

~~~
ddrdrck_
This is an interesting remark, but she still had the sense of touch and was
also able to perceive vibrations. She was therefore able to _interact_ with
other people. More importantly, other people were willing to interact with
her. Also, I believe the fact that she was not born deaf and blind but was
able to experience the world normally for at least the first 19 months of her
life is of extreme importance.

------
sasaf5
I came to this article expecting a logical proof of why AI was impossible, but
instead found the following tautological argument: hard AI, which I define as
intelligence with tacit knowledge, which in turn I define as knowledge that
cannot be run as an algorithm, is impossible to run as an algorithm.

Disappointing.

My counterpoint: is there anything that cannot be run as an algorithm? The way
I see it, the only thing stopping me from simulating every atom in a brain is
computational power.

~~~
bmitc
> is there anything that cannot be run as an algorithm? The way I see it,
> the only thing stopping me from simulating every atom in a brain is
> computational power.

I am not sure I understand the specifics of what you mean by "is there
anything that cannot be run as an algorithm?", but there are plenty of things
that are not computable or decidable in computation. These are theoretical
constraints. There are also practical constraints that effectively increase
the list of these things.

~~~
sasaf5
Sorry, my sentence was poorly formulated, and your counterpoint is attractive.
Let me try it again:

Is there any portion of reality that cannot have the evolution of its set of
magnitudes replicated in a Turing Machine?

> there are plenty of things that are not computable

I clicked on the article expecting to see something like a mapping between
aspects of human intelligence and undecidability or the halting problem.

------
tlb
If you accept Dreyfus’ argument that a computer can’t acquire some kinds of
knowledge without a body, there’s a fairly obvious workaround. Give it a robot
body. And in fact, many research projects are doing this.

The Dreyfusians may retreat to their motte and say, well, a robot body can’t
give it the experience of walking barefoot through the grass, so it can’t be
fully general because there’s something people know that it can’t.

But it’ll surely know things we can’t, and we’ll have to admit that humans
don’t have fully general intelligence either. We just think we do because we
can’t think of the ideas we can’t think.

~~~
proc0
Yes, Boston Dynamics and DeepMind should start collaborating.

------
RoutinePlayer
Every pronouncement that AGI will NEVER be achieved is equivalent to saying
that AGI does not exist in the universe. So, I guess this biological AGI
entity writing this note does not exist.

~~~
mindcrime
Right. Because if a carbon based / biological machine like a human can be an
AGI, what reason - in principle - is there to say that the same can't be done
with a different (perhaps silicon centric) machine? To suggest otherwise
feels, to me, like indulging in a form of mysticism which holds up human life
as something being "beyond" science.

~~~
yellowapple
Or, at the very least, that we can't artificially create a carbon-based
machine capable of general intelligence?

Like, even if there's something magical about neurons that can't be replicated
through conventional electronics, we could certainly put a bunch of _actual_
neurons in chips and use those instead.

------
sago
I am a profoundly disabled person, unable to do very many things physically. I
guess they've shown that my intelligence will never be general.

~~~
meh2frdf
You’re an instance of a class. The class is where intelligence emerged.

------
_nalply
I think human intelligence can't be recreated, but this is not necessary. It's
good enough if artificial intelligence is general enough to solve problems
autonomously. I imagine AI as a different sentient species: a species which
doesn't have instincts about food, excretion, reproduction, and avoiding being
eaten by predators. AI will always be different from humans because it is not
useful to recreate all human instincts.

------
jqpabc123
It is difficult to build/create something you don't understand.

A computer is just a playback machine for binary logic.

We know that "intelligence" can create binary logic.

We don't know if binary logic can create "intelligence".

Claims that it is possible are really more "religious" than scientific at this
point.

~~~
quibbler
We know that machines with AGI are possible - humans are such machines.

Maybe you could discuss whether classical computers could achieve AGI, but I
think overall the quest is to build machines with AGI, not necessarily in the
form of classical computers.

~~~
goatlover
> humans are such machines.

Humans are animals. Part of the paper defends the notion that a lot of human
intelligence is tacit based on being embodied in the world as a living
organism. The idea that humans are biological robots is one that only came
about as a metaphor when we created machines and some similarities were noted.

~~~
mcguire
Are quadriplegic people "human"?

~~~
rutherblood
From what I understand, being "embodied" doesn't necessarily imply movement,
but I am afraid even I do not understand it fully enough to say a computer
isn't embodied and an animal is.

------
type0
> Most of us are experts on walking. However, if we try to articulate how we
> walk, we certainly give a description that does not capture the skills
> involved in walking.

because it's largely automatic for us; we need to know where to go, but not
how

[https://en.wikipedia.org/wiki/Gait_(human)#Control_of_gait_b...](https://en.wikipedia.org/wiki/Gait_\(human\)#Control_of_gait_by_the_nervous_system)

with AI this means we can automate more than we genuinely understand;
oftentimes an approximation gives a good-enough solution

we humans often fool ourselves about how capable we actually are, and yet
without training, in dangerous situations we simply act on instinct and
impulse

------
theptip
I’ve always been quite skeptical of philosophers arguing that a given
potential scientific achievement is impossible; logic alone cannot be used to
answer practical questions like these, much though philosophers would like to
claim this field.

On to concrete criticisms, most of this article is irrelevant; you can skip
straight to the last few paragraphs as the arguments there are largely
unsupported and stand on their own:

> _Conclusion: computers are not in the world._ The main thesis of this paper
> is that we will not be able to realize AGI because computers are not in the
> world.

I think it’s a valid criticism of the current breed of ANI algorithms, and is
a problem that we will need to address (though it might turn out to be less of
a problem than the author thinks). But to claim that computers will _never_
inhabit the world, and are logically incapable of doing so, seems trivially
refutable to me.

Why can’t an AGI have a childhood wired up to a robotic body, where it
interacts with people and the world, thereby learning a tacit model of
physical and social causality? Currently this might be science fiction, but to
say it cannot happen in a hundred, or a thousand years seems arrogantly
certain to me, and to claim it is logically impossible is to be
epistemologically confused.

------
blacksqr
Author of linked article asserts that computers cannot achieve general
intelligence by redefining general intelligence as precisely and only the
specific experience and behavior of human beings.

Author has incorporated the argument's conclusion into its premise, a
tautology.

------
d3nigma
I disagree with Dreyfus's view about strong AI. It's like saying that all
lifeforms in the universe are carbon-based because we are carbon-based
lifeforms. But that's not necessarily true. I am a huge fan of bio-inspired
engineering, however, and I think imitating/simulating "growing up" could be a
reasonable approach to human-like intelligence. This phase of life is key for
our understanding of the world. We define and refine our values and evolve
from a blob of cells to a conscious being by observing our surroundings and
acquiring huge amounts of knowledge. If this concept (refined by evolution
over millions of years) is good enough for humankind, it is certainly good
enough for human-made intelligence. The biggest problem here is efficiency and
complexity. We still don't know all the mechanisms of our brain, and we are
not capable (yet) of simulating 100 billion neurons, each connected to up to
30,000 other neurons. That's the main problem here!

------
rutherblood
Why do we think computers can achieve intelligence? Where did this idea ever
first come from? It sounds almost childish if one traces the history and the
wild misconception it must have emerged from. The idea maybe dates to Turing,
or even to Leibniz? What was the idea based on? On seeing that some machines
can "behave", i.e., do things a human does? And they started thinking that
ultimately a machine might be able to do everything a human does? That a human
is nothing but a "complicated machine" in the sense that we can formulate
complex but finite rules for its behavior? Maybe even simple rules which, when
applied on appropriate substrates, would lead to the "emergence" of such
behaviors? That is a BIG assumption to make, really. Intelligence and
cognition here are treated as lists of rules which can be applied to any
substrate. An electronic computer is just one substrate; I should also be able
to do this on a sufficiently giant mechanical computer.

Even physics seems to have this rule-obsessed assumption. Can we really
simulate the universe AS IT IS? Sure, we can simulate some parts of it. But is
there a theory of everything, really? After such a theory we would need to
know nothing more; physics would be pretty much done with. This theory would
explain all observations, past, present, and future, of anything in the
universe.

Even in our search for the smallest particles, can such a "bottoming out" ever
take place?

EDIT: grammar

------
jmpman
If I get Level-5 driving, do I really care how it was achieved? In order to
get Level-5 driving, the car must be able to identify a construction worker
directing traffic, versus a random homeless guy, drunk, and directing traffic
into the current water-main construction pit. Does that require general AI?
Given enough examples of drunk homeless people directing traffic into
water-main construction, maybe it doesn’t. It’ll be 100 years into the future
before we’ve amassed that level of knowledge, but maybe it can happen.

------
blackrock
I think in order for AGI to be realized, the machine must be able to have
algorithms that can dynamically work with new data that it receives from its
sensors, interpret it, and classify it.

It must generate its own data, and classify it, organize it, compartmentalize
it, and regularly subdivide it. And most importantly, it needs to be able to
invalidate it. It needs to operate on a “most likely” scenario, based on its
own gathered evidence, where the scenario is true until it isn't, and then it
needs to find the new scenario.
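
A toy version of "most likely until it isn't" is ordinary Bayesian updating
(all numbers here are invented for illustration):

    # Maintain competing scenarios, act on the most probable one, and
    # let new evidence invalidate it. Priors/likelihoods are made up.
    priors = {"door open": 0.5, "door closed": 0.5}
    likelihood = {"door open": 0.9, "door closed": 0.1}  # P(sensor | scenario)

    total = sum(priors[h] * likelihood[h] for h in priors)
    posterior = {h: priors[h] * likelihood[h] / total for h in priors}

    best = max(posterior, key=posterior.get)
    print(best, round(posterior[best], 2))  # act on this until evidence flips it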

All of the data and information in the world cannot be encoded for the AGI and
manually spoon-fed to it. This is the fallacy. It doesn’t scale. It cannot
work, because it doesn’t operate on the “most likely” scenario. The whole
concept would fall apart and crumble in on itself.

This data fidelity issue was the problem that ultimately killed the symbolic
AI attempts in the 70s, with all the Lisp programming, where they attempted to
manually give knowledge to a robot. And then, after running out of money, they
realized that they just couldn’t sustain creating all of the data manually.

And the problem with today’s Deep Learning AI, is that it’s just a very fancy
pattern matcher. It’s like a very fancy regular expression searcher, to give
an analogy.

The problem with today’s AI attempt, is the same problem that doomed the 1970s
attempt: namely, the lack of data fidelity. At some point, you can’t have a
human go around classifying everything for you.

Given that, I still think AGI is possible, but a major rethinking is necessary
to achieve it. The Deep Learning neural net ideas of today, will not achieve
it.

------
jariel
There is a better explanation: there's no practical point to GAI.

Humans are automatons - so we see a 'single unit' of intelligence.

The Internet is a vastly connected system.

A 'basic robot' in a factory in China can have access to 'all the world's
information'.

The power of 'masses of data, services, systems' all combined, means that 'the
Internet' itself, in the broadest sense, will be much more intelligent than
any GAI anyhow.

Example: Is Siri 'AI'? We don't think of 'Siri' as a thing, rather a service,
a front end to a lot of things.

Well - 'Siri' is going to get really, really smart and be able to do a
ginormous number of things in the future, including have 'human level'
conversations with you, predict your needs and moods. She'll be talking to a
billion people at once! Isn't that even 'beyond' GAI?

The factory will be able to take a design, command robots to prepare, place
orders for parts, design work schedules for humans, prepare shipping,
anticipate problems. The factory is way smarter than a human; is it 'GAI'?

Siri, the Factory, your car, the grid of traffic, the financial system,
distribution networks - it's all working together to do things utterly beyond
any individual 'GAI'.

And as these things develop, there really never is a real economic driver for
a true, atomic style 'GAI' like you see in the movies. There's no reason for a
company to spend $500B building a 'Data' from Star Trek - because everything
he could do, would otherwise be performed much more efficiently, cheaply and
intelligently by a system or group of systems oriented towards those tasks.

------
Animats
Yes, it's a rerun of Dreyfus.

A more useful question is "what can't machine learning do"? Each generation of
AI technology has solved some problems, then hit a wall. (I had the
unfortunate experience of going through Stanford CS in the mid-80s, when
expert systems had hit a wall but the faculty was in denial about that.)

Is there some way to get "common sense", defined as "knowing if something bad
is going to happen before you do it", from machine learning? So far, no. DARPA
is funding work on it, though.[1] The Allen Institute even has a
competition.[2] Both are verbal, though; they work on text statements.

[1] [https://www.darpa.mil/program/machine-common-sense](https://www.darpa.mil/program/machine-common-sense)

[2] [https://www.tau-nlp.org/csqa-leaderboard](https://www.tau-nlp.org/csqa-leaderboard)

------
scoot_718
It's a very unconvincing argument that culture, bodies or childhood are
required for intelligence. Moreover, that computers could not have any of them
(or functional equivalents, if in fact they are of any functional use at all).

I don't buy that the single data point that is human intelligence has much to
say about the possible bounds or limits of general intelligence.

Computers inhabit _a_ world. They have interactions. Seems sufficient to me.
As for culture - they have enough learning material of human culture, and it's
not impossible to train multiple AI intelligences at the same time to create a
culture of their own. Surely at some point in history humans developed their
culture from nothing? As for childhood - attempts at AI already have training
periods where they are given training data, made more plastic and develop
against simple situations before moving on to more complex situations.

The (also terrible and wrong) Chinese room argument is more convincing.

------
ypcx
I wonder what GPT-3 [1] has to say about this. While it can't exhibit
reasoning, I wouldn't bet that it is completely off the path to getting there.
Call me crazy, but I think we are about a year away from the first general
reasoning algorithms.

[1] [https://www.gwern.net/GPT-3](https://www.gwern.net/GPT-3)

------
Digit-Al
We can't create artificial intelligence because we have absolutely no idea
what intelligence and consciousness really are. You can't recreate what you
don't understand and we are not even close to understanding what makes us, us.

An analogy I've thought of is this. A modern jetliner is a miracle of
engineering, aerodynamics, electronics, and programming. If the average person
on the street, such as myself, were tasked with attempting to recreate one,
the best that could be achieved would be an extremely crude model that could
not even manage the most basic task of an aircraft: taking off and flying.
Thus it is with researchers trying to recreate human intelligence.

------
rutherblood
Whatever the criticisms of this paper may be, I get what it and Dreyfus are
getting at. Even Hofstadter had a lot to say about this, but the guy was
ignored to hell after his first book itself; I guess he has lost faith in the
entire enterprise.

One problem with computer scientists and AI researchers is the level of
arrogance that they display about their state of knowledge.

It would have been good for us to engage with the larger body of philosophical
work (which has a long tradition of thinking about how the mind works, how
meaning and cognition emerge, how such embodied cognition interacts with the
world) and then conduct empirical research trying to verify a philosophical
paradigm, rather than just going at it blindly by sometimes assuming the mind
can be reduced to logical systems and sometimes trying to simulate a
reductionist and incomplete model of neural networks.

------
simon_000666
Imagine you tried to replicate your PC by creating a simulation of the CPU.
Sure, you could probably simulate some of the functions without too much
difficulty, but the value of the PC is not in its CPU alone; it’s in the
interaction of all the components: the screen, the RAM, the disk, the
keyboard, the GPU. Isolated, these components provide a fraction of the value
they do when they are combined, and simulating the whole PC is at least an
order of magnitude harder than simulating only the CPU, because you also need
to simulate the environment it’s in. I think this is what the author of the
article is trying to say: not that it’s totally impossible, just that we are
nowhere near doing what needs to be done to effectively recreate the highly
emergent properties (strong AI) of the human body.

~~~
WealthVsSurvive
Well, of course. The line is the body. The human body is a line of ergodic
behavior stemming back to the origin of not merely the species, but of life
itself: a chemical that said "I am that I am" and became life by continuing
form or not continuing form. You and I will do the same. The form is
ultimately shaped by the environment. What made the environment? Now enter
Human. What of this form made it so unique? The recognition of ergodic
behavior itself! Now the beast could see the result of its behavior and so
birthed prophesy, nightmares, & dreams. So that it might continue. Or not. I
think ultimately, we're coming at the problem of AI from the wrong end, if
we're attempting to mimic biological intelligence. I think the direction AI is
taking has tremendous applications that will reshape humankind, but I do not
think it will be comparable; it will be something new: the son of Man, not
alive, not dead, all-seeing, judging and impartial to emotion. Either that, or
we will place upon its throne a sword and all shall perish.

------
yalok
Irrespective of the main topic, this quotation rings a warning bell:

“Jaron Lanier has argued that the belief in scientific immortality, the
development of computers with super-intelligence, etc., are expressions of a
new religion, “expressed through an engineering culture” (Lanier, 2013, p.
186).”

------
bloak
I would have thought it's possible to pass the Turing test without being "in
the world". But you'd have to be much _more_ intelligent than an average human
to do that, because you'd be successfully imitating something that is nothing
like you.

------
csours
Is there a strong argument against the observation that every approach to AGI
(Artificial General Intelligence) eventually becomes ANI (Artificial Narrow
Intelligence)?

In other words, as soon as a facet of AI (NLP etc) gets a specific name and
context, it is no longer considered part of AGI.

~~~
ersiees
Interesting idea! Future me: a computer became better at my job, but I might
still claim that it is not generally intelligent because it does not have the
same kind of relationships to humans that I have.

------
jxy
> when it was discovered that electronic computers are not just number-
> crunching machines, but can also manipulate symbols

The first sentence shows that the author knows nothing about computation. So,
there is nothing substantial to read here.

------
rsecora
>>> We know this phenomenon from everyday life. Most of us are experts on
walking. However, if we try to articulate how we walk, we certainly give a
description that does not capture the skills involved in walking.

The whole article is based on statements like the previous one. Scientifically,
it is possible to nullify such a statement with a counterexample.

[1] Boston dynamics Atlas.
[https://youtu.be/rVlhMGQgDkY](https://youtu.be/rVlhMGQgDkY)

[2] Boston dynamics parkour.
[https://youtu.be/_sBBaNYex3E](https://youtu.be/_sBBaNYex3E)

QED.

------
rutherblood
Unlike most of the comments here, I'd say I am at the very least sympathetic
to the views in this article. But even I have to say, this is a badly written
paper. A whole lot of talking, various anecdotes and quotes, but nothing to
show for it. This is just finger-wagging: never is any light thrown upon this
mysterious "being embodied in the world", and somehow, out of the blue, a
discussion of causality's importance in cognition comes about and fizzles out
without having said anything of substance.

------
bufferoverflow
This article will not age well. That's my future prediction.

------
woeirua
When we talk about AGI, everyone always takes it for granted that an AGI would
be human-like. But if you look at the complexity of the brain, and how poor
our attempts to emulate it have been so far, I think it is a virtual certainty
that the first successful attempts at AGI will create non-human intelligence.
In many ways, I expect that our creations will find us to be as unrelatable as
we consider dolphins or other highly intelligent animals.

------
axilmar
In 80 years we have gone from nothing to deep learning.

In the next 80 years, we will have AGI, and in the next 8000 years humans will
be androids/cyborgs.

That we haven't managed to create true AGI so far doesn't mean anything. It's
like the case of the aeroplane: there were even mathematicians who "proved"
that things heavier than air cannot fly.

------
rsa25519
I often see AGI being discounted based on machines lacking expressiveness. Of
course, text isn't as complex as human expression (I assume?), but isn't that
how we're communicating right now? I think we're expected to assume that each
other has general intelligence. If so, wouldn't it be unfair to hold machines
to a higher standard?

------
pier25
I know nothing about AI but my ignorant impression is that there are
essentially two ways to achieve AGI:

1) We somehow model GI artificially, which seems like an impossible task.

2) Or instead we just model the fundamental neural mechanisms (dendrites,
axons, etc.) and find a way to copy the state of a brain into a sufficiently
powerful computer.

Is this correct or are there different approaches?
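
For a sense of what option 2's "fundamental mechanisms" modeling looks like at
its very crudest, here is the textbook leaky integrate-and-fire neuron
(parameters illustrative; a sketch of one neuron, not a claim that it
suffices):

    # Leaky integrate-and-fire neuron: integrate input current, leak
    # toward rest, fire and reset at threshold. Units: volts, seconds.
    dt, tau = 1e-3, 20e-3
    v_rest, v_thresh, v_reset = -70e-3, -55e-3, -75e-3
    v, spikes = v_rest, []
    for step in range(1000):
        current = 2e-3 if 200 <= step < 800 else 0.0  # injected drive
        v += (dt / tau) * (v_rest - v) + current
        if v >= v_thresh:                             # spike and reset
            spikes.append(step * dt)
            v = v_reset
    print(f"{len(spikes)} spikes in 1 simulated second")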

------
natch
I guess the author is not aware of Rodney Brooks or any number of other long
existing proponents of having AI be embodied. And that’s not even taking into
account that the network, its data, and sensors attached to it, and their data
all fill the role of a kind of embodiment in an environment if that’s
something the author thinks is missing.

------
mu_sub_naught
Much of the revenue generated from Space Engineers sales (over 10M copies sold
on Steam) went towards an "AGI" pursuit.

------
29athrowaway
To better understand the brain, you have to not only talk about its cognitive
functions, but all of it. Then you will understand what it is really for.

Your body stays alive by keeping levels within certain ranges. There are many
functions in your body that have as an objective controlling those levels, and
your brain controls them at different levels of automation, like your heart
rate, sweating, urine output, etc.

But not everything can be automated, because some situations are more complex:
finding a place with the right temperature, access to nutrients, and
breathable air, etc. That is where decision making comes in... survival-
oriented decision making. And that was what resulted in us developing what we
today call intelligence.

When you disembody learning, and put it in an abstract evolutionary
environment where survival only depends on solving an abstract problem, the
result will not necessarily be an "AGI", a self-preserving, self-aware
intelligence that is adapted to survive a wide range of scenarios. But if you
changed that environment, made it very hostile and constantly changing, it
could eventually lead to the evolution of an AGI.

~~~
glenstein
>But not everything can be automated, because some situations are more complex

This is where you lost me. You are very right that brains do all kinds of
things to regulate the body that we don't usually think of as influencing
thought. But we can't model those because they're too complex? I don't think
environmental variables are too complex.

But (1) if those are important, we can model them too and (2) it may be that
we don't need to model computer 'thinking' after the structure of human brains
anyway to solve problems intelligently. And (3) the totality of things an AGI
might be 'aware of', even without simulating biology, could very well mean
that the 'intelligence' of a system is nestled in a complex web of variables
that give it the ability to have the equivalent of our tacit knowledge. That's
probably an informational question rather than a question of needing to
simulate biology.

~~~
29athrowaway
Not all decision making is rational.

If you are in the jungle and a lion comes at you, you will not have time to
sit down and think what to do, and your brain is prepared to act in those
situations as well.

~~~
glenstein
I'd say that's an extremely crude understanding of what computers can actually
model. There are ways that humans stream through thoughts that depend on tacit
knowledge and unconscious connections, and there's no reason why those can't
be modeled by a computer. And the equivalence between "rational" (in some
informal, human sense of the term, as in talking out loud like Spock) and
"computable" is just a misunderstanding. Computable contains much more than
this naive conception of what is rational.

------
tjk_
I'm no AGI expert, but I am a big fan of Robert Miles who, coincidentally, made
a kind of video rebuttal to this article ~3 years ago:
[https://www.youtube.com/watch?v=B6Oigy1i3W4](https://www.youtube.com/watch?v=B6Oigy1i3W4)

------
mordymoop
These people are still going to be publishing this garbage after AIs are the
ones writing the articles.

~~~
empath75
In 2030, Nature will publish an article written by an AI claiming that AIs
will never be able to publish an article in Nature.

------
vumgl
Achieving Strong AI makes no sense because we don't understand the human brain
and how it works. Even if a non-biological AGI passes the hardest imaginable
Turing test, there will be arguments against it because "it is still not like
us".

------
rdlecler1
My theory on this is that embodiment will be necessary (but not sufficient),
as the physical world provides a consistent data model on which higher-order
learning can be built. In effect, this becomes the model to bootstrap off of.

------
alexfromapex
Because we still don’t truly understand the underpinnings of consciousness

------
joefourier
This article seems to be yet another long-winded attempt to say that human
beings have some sort of "soul" or "vital essence" that computers don't, but
since those ideas are out of vogue, it uses obfuscated language to make the
same point without explicitly saying it.

See:

> which have allegedly shown that our decisions are not the result of “some
> mysterious free will”, but the result of “millions of neurons calculating
> probabilities within a split second”

> the quotations are “nothing but” the result of chemical algorithms and “no
> more than” the behavior of a vast assembly of nerve cells. How can they then
> be true?

The article is suggesting that human beings cannot say true things about the
world or themselves if human intelligence is no more than chemical algorithms
and nerve cells, and that proponents of physicalism are therefore
contradicting themselves. This is a fairly bizarre argument.

The use of "allegedly" with regards to explaining human decisions as a result
of neurons further reinforces the claim that therefore, there must be some
mysterious free will, a human soul, or here a "social context", to explain
human consciousness.

The trouble with vitalism or claims of a human soul should be fairly self-
evident in the modern age, and claiming that denying it is "scientism" is
utter nonsense.

They then mix it up by saying that computers are not in the physical world and
do not have a body, therefore cannot be generally intelligent like humans.
This is obviously false, what is a robot if not a computer with a body?
Computers can interact with the world using sensors and actuators, there's no
theoretical reason that they could not match or exceed human physical
capabilities (they already do in narrow instances).

~~~
vidarh
It always mystifies me to what extent people insist on trying to hold on to
this fantasy of free will, while remaining unable to _define_ free will in a
way that does not devolve to "magic" or some attempt to obscure a
deterministic description behind smoke and mirrors.

At least some of the latter (e.g. a portion of compatibilists etc.) will if
pressed admit that their "free will" is an effect or illusion of mind layered
on top of determinism, but for a lot of people the very idea that they don't
have actual agency seems entirely impossible to accept.

~~~
ggggtez
The offender I hear most often nowadays is trying to use Quantum Mechanics to
bring free will back into play. The idea is that if any part of nature is
unpredictable, then that could be a mechanism for where free will might come
from.

Except that would be like saying your phone is conscious just because it gets
bombarded by cosmic rays that sometimes cause bitflips. Just because the
machine isn't 100% predictable doesn't mean that those cosmic rays have any
specific goal in mind.

~~~
ebg13
> _The idea being that if any part of nature is unpredictable, then that could
> be a mechanism for where free-will might come from._

Trying to use QM to introduce free will is just another manifestation of the
same "mistaking the model for the reality" problem.

We have no reason to believe that quantum probability is the end of the road
just because we can't see what causes the probabilistic results. For all
anyone knows, Einstein was still correct and there is still no god playing
dice, just the same old predictable billiard balls at a plane that we can't
readily observe.

~~~
hansvm
Perhaps. There's been some interesting work in showing that any hidden
variable theories can't give any better predictions than QM probabilities. It
remains to be seen whether their hypotheses are valid, but the inference seems
sound.

~~~
ebg13
That's well and good, but "prediction" is an attribute of a representative
model, not of reality. Reality doesn't represent or predict itself; it just
occurs. The problem of comparing against "hidden variable theories" in this
context is that the variables wouldn't be hidden to reality; they'd only be
hidden to us. So can we predict better than QM? Maybe not. Does QM describe
the truth behind experience? Probably not, and it isn't meant to.

------
michaelmrose
If natural processes can only be artfully and artificially arranged to perform
computation and not thinking, how is it that we are having this conversation?

------
mcguire
" _The modern project of creating human-like artificial intelligence (AI)
started after World War II, when it was discovered that electronic computers
are not just number-crunching machines, but can also manipulate symbols. It is
possible to pursue this goal without assuming that machine intelligence is
identical to human intelligence. This is known as weak AI._ "

Eeee, errr, well, no, it's not. Strong AI is the pursuit of human-level (not
human-like), general purpose intelligence. Weak AI is the pursuit of solutions
to difficult individual problems or classes of problems. Source: my AI
classes, ca. 1989.

This article isn't starting well.

Edit: And I'm back after reading it all.

There are two important points when evaluating philosophical arguments about
artificial intelligence.

1. Have they introduced dualism? (Usually, it's "how have they snuck in
dualism without admitting it?" Sometimes it's easy to see, as in Searle's
Chinese Room thing. Other times, not so much.)

2. What happens if you continue with their own questions? Does it lead to an
unpalatable (or simply wrong) conclusion?

" _The main thesis of this paper is that we will not be able to realize AGI
because computers are not in the world._ "

To start with number 2, who exactly is "in the world"? A blind person? A deaf
person? A quadriplegic person? A deaf-blind-mute-quadriplegic-from-birth
person? That poor bastard from _Johnny Got His Gun?_ How about this laptop? A
robot? A robot that is indistinguishable from a human being without
physiological tests? The author started out with an elaborate discussion of
human-like and non-human-like general intelligence, but now has unrolled that
(Without suitable caution signs. Health and Safety are going to be pissed.)
and is now only interested in human-like artificial general intelligence, only
in something that resembles a human being: " _As Hubert Dreyfus pointed out,
we are bodily and social beings, living in a material and social world. To
understand another person is not to look into the chemistry of that person’s
brain, not even into that person’s “soul”, but is rather to be in that
person’s “shoes”. It is to understand the person’s lifeworld._ " (And I'm
going to go out on another limb and ask how well anyone, anywhere,
_understands some other person's lifeworld_. I don't think _that_ is
possible.)

" _However, there is a problem with both these quotations. If Harari and Crick
are right, then the quotations are “nothing but” the result of chemical
algorithms and “no more than” the behavior of a vast assembly of nerve cells.
How can they then be true?_ "

And here we have number 1. If materialism is right, how can those quotations
be true? How can anything said or done by humans be true? They can't be;
they're just biochemical algorithms, neurons, and molecules. Truth requires a
soul, and a soul _could_ completely grok some other person's lifeworld.
Obviously.

So clearly, nothing even remotely like Watson or AlphaGo would be intelligent.
(Even if they're not intended to be general intelligences. Yes, he spent the
last part of the article complaining that two systems weren't something that
they were never intended to be, in spite of spending a large amount of time at
the beginning delineating that very difference.)

------
staycoolboy
"640K ought to be enough for anybody."

(I know, Gates never said it, but no one posted this yet and I felt it my
duty.)

------
seek3r00
“To put it simply: the overestimation of technology is closely connected with
the underestimation of humans.”

------
ilaksh
The author is just totally ignorant of large amounts of AGI research. See
Tenenbaum for example.

------
booleandilemma
Think how funny this would be if we are, in fact, part of a massive computer
simulation.

------
013a
Personally, it feels to me that maybe AGI is possible, but a computer capable
of it would look nothing like the computers we have today. And, most
critically, I'm aware of no far-along research into fundamentally new physical
hardware that more closely resembles our neurons.

There are billions of neurons in a brain. There are billions of transistors in
a CPU or graphics card. Somehow, somewhere along the line, we convinced
ourselves that brain neurons and transistors are fungible.

At some layer of abstraction they probably are. But it seems to me that the
sheer number of transistors necessary to emulate a neuron would lead to
downstream negative impacts on the overall system. Imagine, for a second, that
it takes a billion transistors to emulate one neuron; given that transistor
scaling is quickly approaching its physical limits, this means you'd need many
chips, and in fact many computers, to emulate many neurons. Introduce many
computers, and you have to introduce network latency and communication
problems, both problems that the brain really does not have. And while you
could argue "OK, the simulation will be slower, but it would still work",
maybe, just maybe, the latency of communication between neurons is actually a
critical component of cognition. In fact, that seems likely to me.
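
To make the scale concrete, here is a back-of-the-envelope sketch in Python.
Every figure is hypothetical or a rough public estimate (~86 billion neurons;
~50 billion transistors as the order of a large modern GPU die), not a
measurement:

    # Rough scale check of the scenario above; every number is a
    # hypothetical or an order-of-magnitude estimate.
    neurons = 86e9                # ~86 billion neurons, a common estimate
    transistors_per_neuron = 1e9  # the hypothetical cost assumed above
    transistors_per_chip = 50e9   # order of a large modern GPU die

    chips = neurons * transistors_per_neuron / transistors_per_chip
    print(f"chips needed: {chips:.1e}")  # ~1.7e+09: billions of chips,
                                         # hence networks, hence latency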

Many people are trying to build a brain in TensorFlow with their Nvidia
graphics card, originally designed to make Unreal Tournament run 20% faster.
Google was among the first groups with the insight that custom silicon would
make training and running these intelligences faster. But what we're talking
about here isn't "running faster"; it's running fundamentally differently. We
buy Supermicro motherboards with PCIe buses, plug in silicon that's just a
little different from the silicon I use to play Doom Eternal; is it really any
surprise that very little progress on AGI has been made?

I don't know what chips that truly, more accurately emulate a neuron would
look like. I suspect no one knows. I suspect that, if anyone figures it out, it
won't be Google, or Microsoft, or Apple, or China, or Russia; organizations
with so many processes, procedures, and immediate-term outcome expectations
that selling an idea as wild as "we can't use any of the pre-existing
computing theory out there, we need to start from scratch" would be
impossible, in favor of "can't you just make the Tensorcore V2 20% faster?" If
it will be invented, it will be invented by one person, in their garage, with
a unique insight and decades of work.

But I also suspect that it will never be invented. If we can't even solve
Alzheimer's, or psychosis, or even _depression_, brain disorders which impact
hundreds of millions of people every year, what level of hubris is necessary
to think we have even 0.1% understanding of what goes on in our heads? We live
in a society which refuses to even address, let alone help alleviate, mental
illness, and you think we're going to be able to build, let alone maintain and
debug, a simulated brain?

------
mullingitover
This feels a lot like Douglas Hofstadter's elaborate explanation in _Gödel,
Escher, Bach: An Eternal Golden Braid_ of how computers will never beat the
best human chess players.

~~~
knodi123
And here's DH just a couple years ago,

> If you ask me in principle if it’s possible for a computing hardware to do
> something like thinking, I would say absolutely it’s possible. Computing
> hardware can do anything that a brain could do, but I don’t think at this
> point we’re doing what brains do. We’re simulating the surface level of it,
> and many people are falling for the illusion. Sometimes the performance of
> these machines are spectacular.

~~~
mullingitover
If we're successfully simulating the surface level of it, the underlying
mechanism is (imho) totally irrelevant to the user. If general intelligence is
happening, does it really matter if the underlying mechanism is neurons, or
transistors, or vacuum tubes?

~~~
goatlover
The surface level of intelligence, not the biological implementation. Chess
and Go programs have a surface level understanding of board games.

~~~
mcguire
Here's the fun philosophical question: Do you have a below-surface level
understanding of anything? I mean, sure you know that if you do _this_ ,
something else does _that_, but do you really _understand_?

------
mindcrime
This article is a muddled, ridiculous mess. I read the first few paragraphs
and couldn't motivate myself to do any more than skim the rest. As far as I
can tell, there's nothing new here, and the author's argument that "AGI will
not be realized" _might_ be true if you stick to his ad-hoc definition of AGI,
which seems to conflate "human level" intelligence and "human like"
intelligence.

Yes, it's probably true that AIs will not have "human like" intelligence, for
some of the reasons cited. Lack of embodiment and the associated experiential
learning is the chief reason that I would personally cite for why this is
true. However, that line of reasoning is completely irrelevant unless you A.
make the mistake of conflating "human like" and "human level", OR B. very
specifically demand that your AI must be "human like."

Everybody else realizes that the goal is to build an AI that is as general as
human intelligence, not necessarily to build an artificial human.

Edit:

To go back to the embodiment issue for a moment... I think embodiment is
important. I've been playing around with building a trivial little shell to
pack some AI research into, one that can be carried around (initially) and can
"experience" the world via a variety of different sensors. And I do think,
again, that embodiment will _probably_ be necessary to get an AGI that can
"act human". I just don't see that as being the goal. Yeah, yeah, Turing Test,
blah, blah, I know. As much respect as I have for Turing (and it's a lot,
obviously) I don't actually consider the Turing Test to be very interesting,
vis-a-vis evaluating an AI. In fact, I think focusing on it could be harmful,
because it seems that getting an AI to pass it amounts to teaching the AI to
lie well. This seems counter-productive to me.

As for _why_ I think embodiment would matter to making a "human like" (as
opposed to "human level") AGI: it mainly comes down to experiential learning.
Imagine, if you will, what you know about the meaning of terms like "fall", or
"fall down". How much of your knowledge of this is rooted in that fact that
you, in your body, _have_ fallen down? And how does that play into your
ability to construct metaphors involving other things "falling"? And so on.

But I don't think any of this stuff is necessary to make an AGI that can
operate at a human level of generality and solve useful problems on our behalf.
And by "operate at a human level of generality" I mean something approximately
like "the same AI software, with appropriate training, can do anything from
playing chess, to driving a car, to coming up with new theories in physics and
chemistry (and so on)."

------
bcatanzaro
It kills me that Slatestarcodex was deleted but this was published in Nature.

