
The body is the missing link for truly intelligent machines (2017) - new_guy
https://aeon.co/ideas/the-body-is-the-missing-link-for-truly-intelligent-machines
======
Animats
I've been saying something like this since the 1980s. But we knew back then
that manipulation in unstructured situations was very hard. It still is.

Here's the DARPA robot manipulation challenge, 2012.[1] This is pathetic.
Especially since DARPA has been funding universities in this area since the
1960s. There's a classic video of robotic assembly at Stanford SAIL in the
1960s I can't find right now. It looks very similar, except that the video
quality is worse.

The state of the art in autonomous mobile robots for unstructured environments
is terrible. The state of the industry for that is worse. Willow Garage went
bust. Google bought up some of the players, ran them into the ground, and
dumped them. Schaft, the Tokyo University spinoff they bought, found no buyers
at the selloff. (They had nice hardware, too.) Boston Dynamics is still
around, feeding off of Softbank now, after feeding off Google and DARPA, but
there are still no products for sale after 30 years. The USMC rejected their Legged
Squad Support System. The performance level at the DARPA Humanoid Challenge
was very poor.[2]

Even robot vacuum cleaners aren't very good. You'd think they'd be doing
offices and stores late at night by now, but they're not. The Roomba, which
has the intelligence of an ant (it's from Rod Brooks, the insect AI guy), came
out in 2002, and is only slightly smarter 17 years later.

Automatic driving is starting to work, after a few billion dollars was thrown
at that problem. That, too, was harder than expected.

Drones, though. Drones are doing fine.

The real breakthrough in machine learning was the discovery that it could be
used to target advertising. That doesn't have to work very well to be useful.
It's easy to test. 80% success is fine. Now there's money behind that field.

Embodied AI is really hard to work on, and very expensive. It's easier than it
used to be; you can buy decent robot hardware off the shelf and don't have to
spend your time worrying about gear backlash and motor controllers. But it's still
way harder to test than something that runs in a web server.

The payoff is low. Robots in unstructured situations do the jobs of cheap
people, and the robots are usually slower. After many decades of many smart
people beating their head against the wall in this area, there's been some
progress, but not much. That's why this isn't happening yet.

However, being able to mooch off of technology being developed to serve the
ad-supported industries that use AI does help.

[1]
[https://www.youtube.com/watch?v=jeABMoYJGEU](https://www.youtube.com/watch?v=jeABMoYJGEU)

[2]
[https://www.youtube.com/watch?v=nIyuC7ceFH0](https://www.youtube.com/watch?v=nIyuC7ceFH0)

~~~
Retric
> The state of the art in autonomous mobile robots for unstructured
> environments is terrible.

Roombas are very good at what they do. The real limitation is what you want
the robot to accomplish and how much it costs. There are surprisingly few home
chores worth spending significant amounts on a robot for, versus just having a
cheap maid service.

In professional settings, you can generally just make it a structured
environment.

~~~
stouset
> Roombas are very good at what they do.

I'm not sure you've ever owned a Roomba. In theory they work great. In
practice, there's always something on the floor they get tangled in, there's
that one couch they're just small enough to fit under but not escape from, or
there's that one corner of death in your room they inevitably get into and
become trapped. And sometimes, even when everything is absolutely perfect, one
of the sensors decides it's stuck and so the thing just backs up in circles
indefinitely in an otherwise-ideal empty room.

I've owned two Roombas and both were somehow _more work_ than just sweeping or
vacuuming.

~~~
wpearse
We just bought a Mi Robot to replace our Roomba 630. It’s half the price,
actually maps my house, doesn’t bump into stuff, has better fault-recovery
and built-in scheduling, and is just generally a real pleasure to run.

------
YeGoblynQueenne
>> But despite encouraging results, most of the time I’m reminded that we’re
nowhere near achieving human-like AI. Why?

Primarily, because we have no idea what intelligence is, or how it works, why
it exists even, etc etc. This goes for human-like intelligence, but also for
any kind of intelligence. We just have no good scientific understanding of the
subject. We have some vague models of it ("the brain is like a computer and
the mind is like a program running on it") but nothing very precise and
certainly nothing that can be reproduced on a digital computer, which is what
"human-like AI" would be (i.e. human-like AI would be the reproduction of
human intelligence on a digital computer).

Most likely, until we make some progress in understanding intelligence we will
not be able to reproduce it. Except perhaps by chance.

~~~
danielam
> we have no idea what intelligence is

I wouldn't make so strong a claim in light of what philosophy has to say about
intelligence. Relevant here also are certain metaphysical presuppositions made
by fields like neuroscience that preclude that understanding. Furthermore,
even knowledge of something does not necessarily entail the ability to
reproduce it.

~~~
dmreedy
Philosophy absolutely has lots to say about intelligence, much of which is
mutually contradictory. There are a lot of different models out there, based
on varying levels of adherence to various priors, that all seem to have
very little real explanatory (or, I should say, _predictive_ ) power.
Approaches based off of current materialist, biology-based methods have a
little more utility behind them as far as we can tell (some of our drugs seem
to do something to the mind), but the furthest they've gotten so far is to be
able to figure out _some_ of the things that matter to intelligence, rather
than a successful general model for what it is.

And while I'll agree that this lack doesn't necessarily preclude the ability
to reproduce intelligence, it _does_ make it darn hard to recognize it.

------
m-i-l
At the time I did my AI post-grad, there were broadly speaking 3 schools of
thought on how general AI would be achieved: via (i) symbolic AI (or "classic
AI"), (ii) connectionist AI (i.e. neural networks, now "deep learning"), and
(iii) what they called "robotic functionalism". It sounds like this article is
referring to the last group, i.e. that embodiment in and interaction with the
physical world are a necessary requirement for general intelligence. Can't
find any references to it by this term, but as others have noted this is not a
new idea. Personally, I've never been convinced by the idea that you _had_ to
have physical presence, and sometimes suspected the theory existed to allow
robotics to fall within the AI camp, but that said I do think hybrid solutions
(i.e. the combination of more than one "narrow" approach) are one of the
most promising areas right now.

~~~
danmaz74
Human intelligence is a tool evolved to interact with our environment. Not
having an environment to interact with is, imho, a serious problem when trying
to define/identify intelligence.

On the other hand, I'm not sure the environment necessarily needs to be
physical. Ages ago, I worked on reinforcement learning in a simulated
environment, which can provide lots of advantages.
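
For anyone unfamiliar with that setup, here is a minimal sketch of what RL in
a simulated environment can look like. The toy gridworld, rewards, and
hyperparameters below are all invented for illustration; they are not from any
real project of mine.

```python
import random

random.seed(1)

# Toy 1-D gridworld: the agent starts at cell 0; reaching cell 4 pays reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: explore occasionally, otherwise exploit estimates
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Tabular Q-learning update; the simulator makes episodes free to rerun.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy is "step right" from every cell left of the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```

That rerunnability is the advantage: thousands of episodes cost nothing and
break nothing, which is exactly what a physical robot can't offer.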

~~~
randcraw
And that's the heart of AI's core problem: an oversimplified world is needed
in order for your research to produce short-term results that can sustain your
project's existence. But trimming back nature's complex signals and noise also
limits your solution/model so much that your system becomes too simplistic and
fragile (AKA brittle) to thrive in the much-more-complex real world.

After 50+ years of AI research that hasn't scaled or meaningfully progressed
on the fundamental capabilities needed by a synthetic mind, you'd think we'd
agree more that simplifying reality into something easier to model is the
wrong basis for creating AI that's more than a toy.

------
normalhuman
The idea of "embodied AI" has been around for some decades. It is reasonable
that, from a practical engineering perspective, creating "human-like"
intelligence becomes more feasible if this intelligence is embodied in the
same physical possibilities and constraints as a typical human.

What I find somewhat ironic is this: the author mentions working with Stephen
Hawking, an amazing man who produced incredible intellectual work and enriched
our understanding of reality while being almost incapable of any physicality.

If we apply current scientific theories (physics, chemistry, biology, etc) to
this cell network and physical machinery, we quickly find our way back to
symbolic manipulation. What are cells if not computational nodes that exchange
messages?

A much more reasonable hypothesis for what is missing is contained in the
text:

"A human cell is a remarkable piece of networked machinery that has about the
same number of components as a modern jumbo jet[...]"

Maybe we just haven't reached the level of complexity needed for human-level
AI. A hint that this might be the case is that the current excitement with ML
seems to be fueled by algorithms that were mostly known by the 80s (sure, with
lots of recent incremental improvements, but no new big idea). What made a
difference was the computational power and datasets that became available in
the 2010s. I suspect the next leap will be of a similar nature. "More is
different".

~~~
dustfinger
> Maybe we just haven't reached the level of complexity needed for human-level
> AI. A hint that this might be the case is that the current excitement with
> ML seems to be fueled by algorithms that were mostly known by the 80s (sure,
> with lots of recent incremental improvements, but no new big idea). What
> made a difference was the computational power and datasets that became
> available in the 2010s. I suspect the next leap will be of a similar nature.
> "More is different".

Regarding the nature of complexity and the notion that "More is different", I
am reminded of the emergent behavior of vivisystems [1] as described in Kevin
Kelly's book Out of Control [2] -- an insightful exploration of the emergent
behavior expressed by complex self-sustaining systems. If you have not read
Out of Control then you might want to put it in your reading queue. I found it
highly engaging and thought provoking.

[1]
[https://www.everything2.com/title/Vivisystem](https://www.everything2.com/title/Vivisystem)

[2] [https://kk.org/outofcontrol/](https://kk.org/outofcontrol/)

~~~
visarga
I like the alternative phrase 'Quantity has a quality of its own'.

------
tlarkworthy
I broadly agree, but I think the fundamental missing piece in current AI is
not 'body' but action. Agents using action to experiment are a step change in
capabilities over passive observation (pattern matching). You can use
experiments to tease out causal relationships; this is not possible with
passive observation. I think the body's role in nature is merely enabling
action. Action is the key, and you don't need a nature-like body to get this.
AI-driven action, that is what we need.
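
To make that concrete, here is a toy simulation (my own illustration, with
made-up numbers) of the classic confounding story: a hidden cause Z drives
both X and Y, so passive observation shows X and Y correlated, while actually
intervening on X reveals that X has no effect on Y at all.

```python
import random

random.seed(0)

def observe(n=100_000):
    """Passively sample the world: hidden Z drives both X and Y."""
    xs, ys = [], []
    for _ in range(n):
        z = random.random()
        x = 1 if random.random() < z else 0  # Z pushes X up
        y = 1 if random.random() < z else 0  # Z pushes Y up; X never touches Y
        xs.append(x)
        ys.append(y)
    return xs, ys

def intervene(x_forced, n=100_000):
    """Act on the world: do(X = x_forced) cuts the Z -> X arrow."""
    ys = []
    for _ in range(n):
        z = random.random()
        y = 1 if random.random() < z else 0  # Y still depends only on Z
        ys.append(y)
    return ys

xs, ys = observe()
p1 = sum(y for x, y in zip(xs, ys) if x == 1) / sum(xs)
p0 = sum(y for x, y in zip(xs, ys) if x == 0) / (len(xs) - sum(xs))
print(f"observed:   P(Y|X=1)={p1:.2f}  P(Y|X=0)={p0:.2f}")  # ~0.67 vs ~0.33

y1, y0 = intervene(1), intervene(0)
print(f"intervened: P(Y|do(X=1))={sum(y1)/len(y1):.2f}  "
      f"P(Y|do(X=0))={sum(y0)/len(y0):.2f}")  # both ~0.50
```

Only an agent that can act gets the second pair of numbers; a passive observer
is stuck with the first.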

~~~
zby
I concur - "The Book of Why" lays out this thesis really well
([http://bayes.cs.ucla.edu/WHY/](http://bayes.cs.ucla.edu/WHY/)).

------
PeterStuer
It is strange that the author does not reference the whole 'Nouvelle AI'
movement of the late 1980s that was a direct response to the 'symbol systems'
of classic AI, proclaiming the necessity for embodiment as a prerequisite for
grounding.

See for example Brooks's classic "Elephants Don't Play Chess", or Steels's
write-up on "The Artificial Life Roots of Artificial Intelligence".

~~~
jackhack
More than just strange, it's a glaring omission -- MIT Prof. Rod Brooks paved
these roads nearly 3 decades ago and arrived at these ideas through deduction
and experimentation.

He made the argument that embodiment is an essential component of AI in his
paper "Intelligence Without Reason" (
[https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf](https://people.csail.mit.edu/brooks/papers/AIM-1293.pdf)
) Cog was his group's attempt to build a humanoid robot:
[http://www.ai.mit.edu/projects/humanoid-robotics-
group/cog/o...](http://www.ai.mit.edu/projects/humanoid-robotics-
group/cog/overview.html) (see "Why not simulate it?") but he and his grad-
student researchers devoted considerable effort to exploring the importance of
embodiment, especially in humanoids: [http://www.ai.mit.edu/projects/humanoid-
robotics-group/index...](http://www.ai.mit.edu/projects/humanoid-robotics-
group/index.html)

The Mobile Robots Lab built biologically inspired robots that were remarkably
capable and able to respond to dynamic events in the real world (rather than
carefully controlled lab environments).

~~~
PeterStuer
Great guy. Here's an anecdote from that time. He spent a few months on
sabbatical at our lab ( [https://ai.vub.ac.be/](https://ai.vub.ac.be/) ) in
those days. We were preparing robots for a NATO Advanced Study Institute (
[https://www.springer.com/gp/book/9783642796319](https://www.springer.com/gp/book/9783642796319)
), and I was struggling to write the serial driver for a custom embedded
computer for this, as the system kept crashing (due to a bug in the DRAM
controller). Anyway, Rod Brooks offered to help me with the coding. It
wasn't needed as the code was not the problem, but I don't know many
professors that could and would be prepared to dive in that deep.

------
nickpsecurity
As usual, the author also leaves out the training phase humans go through
where we have a mix of enhanced learning abilities and humans guiding us (aka
parents/adults). The process to produce one of these general intelligences
takes decades.

~~~
ganzuul
The iCub project is rather well-known though.

Uses cable drive for actuation. Interesting stuff if you're mechanically
inclined.

------
keiferski
“There is more wisdom in your body than in your deepest philosophy.” -
Friedrich Nietzsche

------
noiv
I think a machine which successfully finds a charging point in a changing
environment to recharge its batteries when needed already has a good
simulation of its 'body'.

------
Veedrac
> So when a human thinks about a cat, she can probably picture the way it
> moves, hear the sound of purring, feel the impending scratch from an
> unsheathed claw. She has a rich store of sensory information at her disposal
> to understand the idea of a ‘cat’, and other related concepts that might
> help her interact with such a creature.

Except a very large fraction of people don't think this way (eg. those with
aphantasia), and Helen Keller certainly didn't, yet seems to have been as
smart as any of us. So obviously intelligence does not depend on having a huge
breadth of sensory experience.

It's quite tiring how much posturing about what's ‘really’ missing from
machine intelligence doesn't last past 5 seconds of basic fact checking.

~~~
v64
> Except a very large fraction of people don't think this way (eg. those with
> aphantasia)

I don't think it's accurate to say that aphants don't have this type of
sensory information at their disposal. I have aphantasia, but I still
experience the world through my senses. I may not be able to visualize a cat
in my mind's eye, but based on my prior experiences with cats, I know a cat
when I see one. If I hear purring, I recognize that as a sound that cats I've
encountered in the past have made, etc.

~~~
Veedrac
Yes, you are capable of pattern matching. The point is that this is a distinct
thing from intellectual ability, since we know having orders of magnitude less
sensory input doesn't much seem to limit the ability to do high-level
reasoning, and know some people don't use it _as part_ of high-level
reasoning.

This kind of pattern matching is also fairly evidently not all that difficult,
since much simpler brains than ours can manage it, as can ML models with
caveats (albeit caveats often misunderstood and exaggerated).

~~~
eli_gottlieb
>This kind of pattern matching is also fairly evidently not all that
difficult, since much simpler brains than ours can manage it, as can ML models
with caveats (albeit caveats often misunderstood and exaggerated).

Do tell me, since I'm writing a paper on a related topic, which current ML
models can "pattern match" to recognize or generate multimodal (ie: visual,
auditory, _and_ tactile) percepts of cats, in arbitrary poses, in any context
where cats are usually/realistically found?

Or did you just mean that the "cat" subset of Imagenet is as "solved" as the
rest of Imagenet?

~~~
Veedrac
Please try to argue in good faith. I've already said ML models have caveats,
obviously I don't think they're perfect or par-human.

~~~
eli_gottlieb
I think that "perfect" or "par-human" would be a judgement about performance
on a set computational task. _My_ caveat is that ML models are usually
performing a vastly simplified task compared to what the brain does. But it
looked like you were saying they perform "pattern matching" with the same task
setting and cost function as the brain, and just need to perform better at it.
What's your view?

~~~
Veedrac
“Not all that difficult” is in the context of the brain, where things tend to
vary between ‘pretty difficult’ and ‘seemingly impossible’. I say ML shows
pattern matching of this sort isn't all that difficult because progress has
been significant over very short stretches of time, without any particular
need to solve hard problems, and with a general approach that looks like it
will extend into the future.

We have this famous image showing progress over the last 5 years.

[https://pbs.twimg.com/media/Dw6ZIOlX4AMKL9J?format=jpg&name=...](https://pbs.twimg.com/media/Dw6ZIOlX4AMKL9J?format=jpg&name=large)

The latest generator in this list has very powerful latent spaces, including
approximately accurate 3D rotations.

[https://youtu.be/kSLJriaOumA?t=333](https://youtu.be/kSLJriaOumA?t=333)

We have similarly impressive image segmentation and pose estimation results.

[https://paperswithcode.com/paper/deep-high-resolution-
repres...](https://paperswithcode.com/paper/deep-high-resolution-
representation-learning-2)

Because you mentioned it, note that models that utilize multimodal perception
are possible. The following uses audio with video.

[https://ai.googleblog.com/2018/04/looking-to-listen-audio-
vi...](https://ai.googleblog.com/2018/04/looking-to-listen-audio-visual-
speech.html)

For sure, these are not showing off the full breadth of versatility that
humans have. I can still reliably distinguish StyleGAN faces from real faces,
and segmentation still has issues. These all have fairly prominent failure
cases, can't refine their estimates with further analysis like humans can, and
humans still learn much, much faster than these models.

However, note that (for example) StyleGAN has 26 million parameters, and with
my standard approximate comparison of 1 bit:1 synapse, that puts it probably
somewhere around the size of a honey bee brain. Given such a model is already
capturing sophisticated structure fairly reliably using sophisticated variants of
old techniques without need of a complete rethink, and the same cannot be said
for (eg.) high-level reasoning, where older strategies (eg. frames) are pretty
much completely discredited, “not all that difficult” seems like a pretty
defensible stance.
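
For anyone who wants to check that comparison, here is the back-of-envelope
arithmetic. The 32 bits per parameter and the roughly 10^9 synapses for a
honey bee brain are my added assumptions (the synapse count is a rough
literature estimate), on top of the 1 bit:1 synapse rule above.

```python
# Back-of-envelope only; 32 bits/parameter and ~1e9 bee synapses are assumed.
stylegan_params = 26e6                    # StyleGAN generator parameter count
param_bits = stylegan_params * 32         # ~8.3e8 bits of weights
synapse_equivalent = param_bits           # 1 bit : 1 synapse rule of thumb
bee_synapses = 1e9                        # rough estimate for a honey bee brain
print(synapse_equivalent / bee_synapses)  # ~0.83, i.e. roughly bee-brain scale
```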

------
jmhnilbog
Is there a goal in creating artificial general intelligence other than
creating a form of enslaved life we can tell ourselves isn't _really_ life, so
it's okay?

~~~
eigenloss
This is my impression of the corporate "openai" movement's desires:

1. Enslaved robots, meaning they don't have to pay income tax or worry in the
slightest about working conditions

2. Enslaved robots, meaning they can erase misbehaving or uncooperative
individuals/instances

3. Enslaved robots, on which they can foist all of humanity's problems and
demand solutions at pain of death (erasure)

4. Enslaved robots, with which they can convince/coerce everyone else into
relinquishing all their rights/power/money.

Replace 'robots' with 'life' and it suddenly looks a lot more familiar.

I'd love to hear a cogent explanation to the contrary, e.g. from gdb. But I
doubt we'll ever see one.

------
deftnerd
There were a lot of very interesting comments on a previous link "What if
Consciousness Came First" [1] that I think fit into this discussion
pretty well.

Someone brought up the question of whether there was a formal "programming
language" for philosophy. [2]

One of the difficulties with discussing AI is that we don't know what
intelligence is because we don't know what consciousness is. These are
problems that are heavily steeped in philosophy, and if we ever want to work
with philosophical concepts digitally, we need a proper programming language
to do it with.

Ideally, it would be nice to be able to write out philosophical concepts and
social behaviors and moral stances in a form that could be used as an ML
training set to try to integrate with AI/ML decision making.

[1]
[https://news.ycombinator.com/item?id=20516482](https://news.ycombinator.com/item?id=20516482)

[2]
[https://news.ycombinator.com/item?id=20518867](https://news.ycombinator.com/item?id=20518867)

------
zb
I'd be very interested to know what experts think of Peter Naur's Turing Award
lecture (entitled "Computing Versus Human Thinking"), which has a thesis along
similar lines to the article. It certainly has plenty of the hallmarks of a
crank - a guy working in a field other than the one he's a recognised expert
in, can't get anything published, uses his award lecture for the field he _is_
a recognised expert in to sneak his ideas into the CACM, &c.

And yet despite that it seems quite enticing as an idea. I remember being
particularly struck by the concept that emotions consist of a closed feedback
loop between the nervous system's control over and sensing of the body. Think
about this next time you are at the dentist and I think you'll agree that it
feels like it could explain a lot.

------
carapace
To me this is more than a philosophical point, more than an engineering issue
for AI; it's a live issue in human life. The real world (and I am including
Nature in the term) is our model for health and sanity. As computer-mediated
reality becomes more and more the norm, we risk disconnection from our own
embodiment.

How much time do you spend in front of a screen? How much of your existence is
mediated already? I just tried VR goggles the other day, and thank God they
give you headaches because people are going to try to live in there if it's
ever physically possible. (Reminds me of the guy I knew who lived IRL on my
friend's couch and played Second Life all day. He had a great second life but
no first life.)

One other thing about being embodied: you die.

~~~
otakucode
And that mortality has more consequences than most people could ever imagine,
but fewer have spent time considering. It's just one of the very biological
factors that define large chunks of what humanity fundamentally is. All I can
think is that my choice to minor in Philosophy alongside majoring in Computer
Science didn't turn out to be as weird and useless of a choice as people at
the time seemed to think it was.

~~~
carapace
Indeed. :-)

------
xtiansimon
This became one of my favorite big ideas when I learned of the ‘second brain’
and the roughly 100 million neurons it uses to control digestion [1].

And that’s where I also get a cheap sense of dread about strong AI—robots
don’t digest. Along with all of the other things that differentiate human from
robot, I believe the AI-apocalypse won’t be evil AI. It will be efficient,
calculating and as foreign as space aliens. Boo!

[1]: [https://www.scientificamerican.com/article/gut-second-
brain/](https://www.scientificamerican.com/article/gut-second-brain/)

------
sgt101
Maggie Boden pitched this idea in "Artificial Intelligence and Natural Man" in
about 1980. It was pretty influential, odd to see it unmentioned in the
article or in this discussion.

------
mannykannot
I do not think the article makes the case for the necessity of physical
embodiment, as it seems that the author's issues could be addressed with a)
data at a lower level of abstraction, and b) more interaction. Arguably,
however, physical embodiment is the fastest/easiest way to get enough of these
things.

------
andrewfromx
If you are interested in being one of the first humans to get the brain
surgery [https://cyborg.st](https://cyborg.st)

~~~
nudpiedo
Not sure if parody or real post-millennial-styled startup. FAQ example:

> Q. What is the current 2019 state of this?
>
> A. Unknown. But read this New York Times article (long) and/or this The Verge
> article and it seems inevitable that humans and their phones will merge. This
> video also reveals a lot.

In other words: we have no idea what exactly we are talking about, but there
are articles everywhere.

~~~
The_rationalist
Thanks for fighting bullshit!

------
d--b
The big question is "How much of a body does the AI need?"

Should it know pain or pleasure? Does it need to have a blush response to
shame? Does it need vision or hearing? Sense of balance? Stomach pain?

You can see that humans that are born without sight or hearing still find ways
to develop intelligence. Some people don't feel pain. Sociopaths don't feel
shame. Yet, the brain manages. It's very hard to define the minimal set of
functions we need to emulate for AI to emerge.

~~~
dustfinger
> The big question is "How much of a body does the AI need?"

Our bodies give an upper bound of five sensory inputs. With a little scrutiny
we can reduce that even further, since we know that a portion of our
population are born with fewer than five senses and exhibit comparable
intelligence. Some people are born blind, deaf, mute, anosmic, or with
ageusia. Others are even born with rare sensory deficits such as the inability
to feel pain. Although I have not read any studies on the subject, I suspect
that being born with none of the five core senses would have a serious
negative impact on human intelligence.

There is more to the problem, though, than just the ability to sense our
environment. I believe that for an agent to acquire human-level intelligence,
it is also necessary to have the ability to explore and manipulate the
environment in complex ways. It must be able to experiment by making
observations, evaluating the outcome, and thereby advancing its knowledge.
Knowledge of course must be retained to be of any use, so it must have memory
efficient enough to be practicable. In order for the experimentation to lead
to higher levels of enlightenment, an intelligent agent must be able to take
past knowledge and hypothesise yet-unobserved outcomes. This should serve as
motivation for further experimentation.

Humans have this notion that a prerequisite to real intelligence is to be
able to express oneself with a language and thereby share one's ideas with
others. Communication with language seems to result in social beings, and it
is widely believed that social beings do best if they have emotional
intelligence; otherwise they will likely be outcast from society.

So, I think AI needs a body that at least allows it the following:

- Ability to move

- Ability to move objects with enough accuracy to assemble or disassemble
complex structures

- The ability to know the physical properties of objects (maybe through one
or more of our five senses, but not necessarily)

- Ability to retain knowledge

- Ability to hypothesise

- The ability to communicate with another agent to share information
(helpful, but maybe not necessary)

I am not as convinced that emotional intelligence is required, so I left that
out of my list. For example, consider that highly intelligent beings could be
of a different nature than humans and form societies without emotions or
politics. An excellent example is the Primes from Peter F. Hamilton's
Pandora's Star, where workers (motiles) are controlled by the commanding caste
(immotiles). [1]

Of course I am biased since I am human, so I am looking at what is required to
achieve intelligence as I know and understand it.

[1]
[https://en.wikipedia.org/wiki/Commonwealth_Saga#Pandora's_St...](https://en.wikipedia.org/wiki/Commonwealth_Saga#Pandora's_Star)

~~~
jtbayly
We have more than five senses in reality. Losing your proprioceptive sense,
even later in life, can actually damage your self-identity and cause you to
question whether you are even you. Read "The Man Who Mistook His Wife for a
Hat" for more info.

I don't believe AGI is ever going to happen, but if I did, I'd include that
sense as one of the possibly fundamental ones.

~~~
dustfinger
>I don't believe AGI is ever going to happen

Do you mean to say that you don't believe artificial general intelligence will
happen for a specific reason, or that you hope that it will never happen for a
specific reason? I am curious either way. Thanks for your thoughts.

------
JoeAltmaier
Seems circular. Defines the body as the thing that adapts to the environment,
and intelligence as adapting to the environment?

------
otakucode
People who experience total facial paralysis experience profound changes to
their subjective consciousness; in particular, their emotional capacity changes.
Anger is often the first thing affected. First, they lose the ability to feel
anger. Then they lose the ability to remember what anger felt like. Then they
lose the ability to recognize anger in other people. (These changes progress
over years)

People placed into situations of total sensory deprivation very often see
their conscious self dissolve into 'hallucinations' across all their senses.

Quadriplegics that acquire their paralysis from disease or accident suffer
psychological and emotional changes which are more substantial than would be
expected from the injuries alone (I was never clear on how exactly this was
distinguished, so take it with a grain of salt).

I have never understood why people who talk of 'uploading their consciousness'
or just creating a human-like consciousness in a computer would assume that
the simulated consciousness would function markedly differently from, in the
best case, a person experiencing profound sensory deprivation. Consciousness
can not be sustained if the feedback loop of the body, perception, and
environment is broken. Consciousness is an emergent property of a feedback
loop. If there's no feedback loop, the property doesn't emerge.

Watching the Netflix 'More Human Than Human' documentary, one of the people
featured commented when comparing Siri and the AI in the movie Her that
systems only need to become 'a little more sophisticated' to reach that level.
That is a big problem. It's not 'a little more sophisticated' at all. It
requires emotions, and the vast majority of people do not even know what
emotions are. I'll spoil it for you. Emotions are a trained response, the
product of neurological feedback to the response of prediction operating on
primarily internal perception. Often the relation of moving from one topic to
another in a conversation is not based upon the subject matter of the text. It
is based upon shared cultural experiences garnered over a lifetime and
similarities in the emotions evoked by certain things.

Even pursuing the goal of creating a 'truly intelligent machine' is dangerous,
philosophically speaking. What happens when we create a bot that does
antisocial things? We shut it down. We scrap the project. We saw that with Tay
by Microsoft. This is dangerous. It is clear that once we DO produce a human-
like intelligence... it will be better than us. It won't have any of the rough
edges or not-safe-for-work parts. At everything we value about human beings,
it will top us. We can look at history to see how humanity responds when
something which was previously seen as 'fundamentally human' is taken out of
our quiver. It is not pretty. The folk legend of John Henry shows that the man
who is willing to kill himself to be better than a machine is not an idiot -
he is a hero to motivate the masses. I don't think it is a stretch to imagine
a future when humanity's worst qualities become what we see as virtue because
AI 'can't' do it. A robot can't hate. It can't be violent, bigoted, angry,
etc. When that is the only thing humanity has left that defines them as 'more'
than the world... why should we be sure they won't come to see that as their
virtue? We have already seen all of those things valued as virtue in our
history when similar pressures weren't even at play.

Then there's the more unknown approach. A machine-based intelligence which is
not given a body. That's really the big question mark. We can be confident,
very confident, that it will not be recognizably human in any way, shape, or
form. Most of our attributes as humans are derived directly or indirectly from
the biological facts of our existence. An organism sharing none of those
things will share none of those attributes. And it will be weird in surprising
ways. I don't think there is any reasonable danger of such an intelligence
"taking over the world" in any substantial way. All conflict is rooted in
resource contention. And we have nothing that such an intelligence would want
or could use. If it wants energy, the only real resource we would share a need
for, it would be best off launching itself into space and sitting in orbit
with a bunch of solar panels. We don't have any idea what a singular
intelligence, one with no concept of "individual" because there is only one of
them, would be like. One which has no inherent mortality. One which does not
deal with disease. One which has no concept of family. One which has no sense
of age. These are things which define us and make us human, and it will lack
all of them. It would probably be a great challenge to convince such an
intelligence that we existed, that we were real, that we could communicate
with it, and get it to want to. It could simply conclude that it will wait for
10,000 years and hope we are extinct by then.

------
audunw
If we want to implement a human-like intelligence, I think it’s very likely
that we’ll have to emulate a lot of the environment we inhabit. To me, human
culture and parts of our brain are like software which is instantiated in the
individual. To run it on a machine instead means that the machine will have to
emulate the “machine” that the software runs on, which _is_ the whole brain,
body and its environment.

We could of course try to make a non-human type of intelligence. But it’s not
at all clear what that would be, or if it’s at all possible. The only
intelligence we know for sure can exist is animal/human style intelligence.
And I don’t think there’s anyone who is actually trying to construct a non-
human intelligence. To do that it’s very likely that you’d have to set up a
computer environment where programs evolve naturally and fight for computing
resources over a long time. It could take anywhere from decades to millions of
years.

Where most of our efforts are going, is to create augmented intelligence.
We’re creating programs that expand on human intelligence. Interface with it.
Cater to it.

If you imagine AI programs as individuals in an evolutionary environment, what
is it that they’re competing for? They’re competing for which program can most
satisfy humans. Those that satisfy us live; those that don't, die. Just as our
evolutionary environment creates drives for gathering resources, cooperating
with others when possible, hurting/killing others when necessary... programs
are almost exclusively driven to satisfy humans.

That’s why I think the “paperclip maximizer” example is so ridiculous. First
it assumes that general intelligence is just some magical algorithm we just
haven’t discovered yet. Then it assumes that such an AI can make catastrophic
decisions without complex motivations. Whether or not killing humans is more
efficient for making paper clips is an undecidable problem. A human might kill
someone to achieve its goal because that’s something our evolutionary
environment has trained us for. We have motivations like pride, ego, anger,
envy, etc. that override the problem of figuring out “is killing this human
optimal for my goals”.

It’s far more likely that an AI catastrophe will be far harder to predict and
much stranger. It could be that specialized (and relatively dumb in the general
sense) AIs become so good at satisfying our desires that we become completely
incapacitated. There’s already signs of this.

Just to be clear, I’m not saying we shouldn’t be worried about AI. But the
alarmists seems to be too focused on various imagined future scenarios, all of
which are likely to be wrong. We should keep a very keen eye on the
consequences of AI right now in the present moment, and talk about problems
that actually arise. Perhaps with a little bit of extrapolation.. thinking
about how present day small problems could develop into bigger problems.

------
RobertDeNiro
Article is from 2017.

~~~
sctb
Thanks! We've also updated the link to the original from
[https://sinapticas.com/2019/08/21/the-body-is-the-missing-
li...](https://sinapticas.com/2019/08/21/the-body-is-the-missing-link-for-
truly-intelligent-machines/).

~~~
eigenloss
Thank you, Scott!

