
Machines That Will Think and Feel - jonbaer
http://www.wsj.com/articles/when-machines-think-and-feel-1458311760
======
erokar
From the article: "No computer will ever feel anything. Feeling is
uncomputable. Feeling and consciousness can only happen (so far as we know) to
organic, Earth-type animals—and can’t ever be produced, no matter what, by
mere software on a computer."

I happen to agree, but the title is a tad misleading.

~~~
pdonis
The "so far as we know" qualifier destroys the whole argument; all the
argument really proves is that we don't (currently) know _how_ to produce
feeling with software on a computer. But that's a much weaker claim than the
claims being made in what you quoted.

~~~
erokar
Software can be represented in any arbitrary way (e.g. in a book) and
computations can be carried out from the software instructions in any
arbitrary way (e.g. by arranging rocks in certain patterns). If one believes
that consciousness can emerge from software on a computer alone, it also
follows that consciousness can emerge from placing rocks in a certain pattern
following instructions in a book.

I think this idea is absurd. I do not think consciousness can necessarily only
be produced by organic life, but I do think it has to emerge from physical
structures. As of today we have no idea what properties such physical
structures must have. It follows that computers are no more likely to become
conscious than e.g. washing machines.
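(To make the rock-computation premise concrete: any medium that can follow a
simple local update rule can, in principle, run any program. A minimal sketch
of Rule 110, a one-dimensional cellular automaton known to be
Turing-complete; the rock/no-rock encoding is purely illustrative:)

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours -- the kind of purely mechanical update someone could carry
# out by placing and removing rocks while following a lookup table.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply one Rule 110 update to a row of 0/1 cells (edges fixed at 0)."""
    padded = [0] + cells + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 0, 1, 0, 0, 0, 0]   # a single "rock" in the middle
for _ in range(4):
    row = step(row)
    print("".join("#" if c else "." for c in row))
```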

~~~
breuleux
> If one believes that consciousness can emerge from software on a computer
> alone, it also follows that consciousness can emerge from placing rocks in a
> certain pattern following instructions in a book.

I don't think this is as absurd as it sounds. I think it was Dennett who said
our intuition about consciousness is pretty sensitive to timing. The rock
construction you describe would "think" a billion times slower than a human
brain, and there is something unsettling or unintuitive about a consciousness
that operates in slow motion. I would expect an extremely fast-paced AI to
find the idea that human beings are conscious similarly absurd.

Also, consciousness "feels" like it's ineffable, so it makes sense that we
would have an inherent bias against understanding it as a process. There is
something we see in our consciousness that we simply cannot wrap our minds
around in any way (possibly because we're hallucinating it).

So yes, I would bite the bullet on this: consciousness _could_ emerge from
placing rocks in a certain pattern following instructions in a book. It would
just be an excruciatingly "slow" consciousness.

~~~
pdonis
_> It would just be an excruciatingly "slow" consciousness._

If it's too slow to interact with the rest of the world in appropriate ways,
it wouldn't be a consciousness at all.

~~~
breuleux
There's always something to interact with. If you had a conscious being made
out of star systems, it wouldn't meaningfully interact with anything within
our lifetime, but over billions of billions of years, it would presumably
shift entire galaxies. The rockputer just needs inputs that operate at its own
scale, like the shape of the coastlines it's expanding into, information about
geological processes, _another_ rockputer competing for territory, and so on.
Alternatively you could simulate a whole universe using these rocks, and feed
simulated inputs to the being.

Of course, part of the difficulty of imagining a conscious rockputer is that
it's also pretty hard to imagine its inputs :)

~~~
pdonis
_> There's always something to interact with._

This is trivially true as you state it; that's why I added the qualifier "in
appropriate ways". Not all interactions will produce consciousness. One
obvious difference between us and your hypothetical "rockputer" is that the
"rockputer" can't change its behavior based on its inputs in a way that
improves its chances of survival; rocks simply aren't built that way. Neither
are star systems or galaxies. But we are.

~~~
breuleux
> the "rockputer" can't change its behavior based on its inputs in a way that
> improves its chances of survival

Yes it can. Some natural events, for example a flood or an earthquake, can
destroy parts of the rockputer. It is therefore important for it to store the
various parts of itself strategically. It shouldn't put its vital parts near
the coast, or a tsunami may kill it. It should store its own consciousness in
a robust way, so that it can recover from an earthquake. It's probably too
slow to actually see either of them coming, but it can certainly prepare
itself.

Or imagine you build two rockputers, one with black stones, another with white
stones, and you have rules to remove stones when both rockputers try to expand
into the same territory, a bit like in the game of Go. Then one can kill the
other.

Star systems interact with each other through gravity, so you could conceive
of them as some kind of gargantuan atoms, capable of making complex
structures, including conscious ones. Granted, there doesn't seem to be an
equivalent of the other forces at that scale, so probably it wouldn't work,
but you see what I mean.

~~~
pdonis
What you are describing is a "rockputer" where all the actual computation is
being done by something other than the rocks.

------
erikpukinskis
For those who don't recognize the name, the author, David Gelernter, was one
of the biggest AI theorists of the PC era. He published a 1991 book _Mirror
Worlds_ which basically suggested the now uncontroversial idea that "software
is eating the world". At the time, software was just for accounting and video
games, so no one paid attention.

However, a math PhD and former Berkeley professor named Ted Kaczynski was one
of the few people to take those ideas seriously. He thought the idea of
computers controlling everything was terrifying, that we were already too
absorbed in technology, and that as a group we needed to make an intentional
move away from otherwise innocuous-seeming technological advancements.

He tried to publicize these ideas by mailing live bombs to Gelernter and
other researchers, and was nicknamed the "Unabomber" (from the FBI case name
UNABOM, for UNiversity and Airline BOMber). The public decided he was loony,
that we liked our Windows 95 just fine, and that we were mostly OK with
slowly becoming cyborgs.

Interesting to see Gelernter tapping the brakes.

~~~
njloof
How different is Gelernter's point from Kaczynski's in the end?

(Quoted from Kaczynski, 1995): 172. First let us postulate that the computer
scientists succeed in developing intelligent machines that can do all things
better than human beings can do them. In that case presumably all work will be
done by vast, highly organized systems of machines and no human effort will be
necessary. Either of two cases might occur. The machines might be permitted to
make all of their own decisions without human oversight, or else human control
over the machines might be retained.

173. If the machines are permitted to make all their own decisions, we can’t
make any conjectures as to the results, because it is impossible to guess how
such machines might behave. We only point out that the fate of the human race
would be at the mercy of the machines. It might be argued that the human race
would never be foolish enough to hand over all power to the machines. But we
are suggesting neither that the human race would voluntarily turn power over
to the machines nor that the machines would willfully seize power. What we do
suggest is that the human race might easily permit itself to drift into a
position of such dependence on the machines that it would have no practical
choice but to accept all of the machines’ decisions. As society and the
problems that face it become more and more complex and as machines become more
and more intelligent, people will let machines make more and more of their
decisions for them, simply because machine-made decisions will bring better
results than man-made ones. Eventually a stage may be reached at which the
decisions necessary to keep the system running will be so complex that human
beings will be incapable of making them intelligently. At that stage the
machines will be in effective control. People won’t be able to just turn the
machines off, because they will be so dependent on them that turning them off
would amount to suicide.

174. On the other hand it is possible that human control over the machines
may be retained. In that case the average man may have control over certain
private machines of his own, such as his car or his personal computer, but
control over large systems of machines will be in the hands of a tiny
elite—just as it is today, but with two differences. Due to improved
techniques the elite will have greater control over the masses; and because
human work will no longer be necessary the masses will be superfluous, a
useless burden on the system. If the elite is ruthless they may simply decide
to exterminate the mass of humanity. If they are humane they may use
propaganda or other psychological or biological techniques to reduce the birth
rate until the mass of humanity becomes extinct, leaving the world to the
elite. Or, if the elite consists of soft-hearted liberals, they may decide to
play the role of good shepherds to the rest of the human race. They will see
to it that everyone’s physical needs are satisfied, that all children are
raised under psychologically hygienic conditions, that everyone has a
wholesome hobby to keep him busy, and that anyone who may become dissatisfied
undergoes “treatment” to cure his “problem.” Of course, life will be so
purposeless that people will have to be biologically or psychologically
engineered either to remove their need for the power process or to make them
“sublimate” their drive for power into some harmless hobby. These engineered
human beings may be happy in such a society, but they most certainly will not
be free. They will have been reduced to the status of domestic animals.

175. But suppose now that the computer scientists do not succeed in
developing artificial intelligence, so that human work remains necessary. Even
so, machines will take care of more and more of the simpler tasks so that
there will be an increasing surplus of human workers at the lower levels of
ability. (We see this happening already. There are many people who find it
difficult or impossible to get work, because for intellectual or psychological
reasons they cannot acquire the level of training necessary to make themselves
useful in the present system.) On those who are employed, ever-increasing
demands will be placed: They will need more and more training, more and more
ability, and will have to be ever more reliable, conforming and docile,
because they will be more and more like cells of a giant organism. Their tasks
will be increasingly specialized, so that their work will be, in a sense, out
of touch with the real world, being concentrated on one tiny slice of reality.
The system will have to use any means that it can, whether psychological or
biological, to engineer people to be docile, to have the abilities that the
system requires and to “sublimate” their drive for power into some specialized
task. But the statement that the people of such a society will have to be
docile may require qualification. The society may find competitiveness useful,
provided that ways are found of directing competitiveness into channels that
serve the needs of the system. We can imagine a future society in which there
is endless competition for positions of prestige and power. But no more than a
very few people will ever reach the top, where the only real power is (see end
of paragraph 163). Very repellent is a society in which a person can satisfy
his need for power only by pushing large numbers of other people out of the
way and depriving them of THEIR opportunity for power.

------
gitcommit
From the article: "Still, no artificial mind will ever be humanlike unless it
imitates not just feeling but the whole spectrum."

A spoken computer voice can come pretty close to a natural-sounding voice.
Video demonstration:
[https://pythonspot.com/personal-assistant-jarvis-in-python/](https://pythonspot.com/personal-assistant-jarvis-in-python/)

I do think we are far from passing the Turing test, but we are getting closer.

------
mirimir
I don't get why it's always human vs AI. Many humans will merge with AI. At
least in the short term, hybrids will outcompete both pure human and pure AI.

~~~
ThomPete
I don't believe we will. Exponential growth makes it more likely that we get
some sort of strong, aware entity before we figure out how to merge. Also, it
still leaves the problem of transcendence.

~~~
drdeca
Well, Moore's law is ending within 10 years (I'm fairly sure that you can't
have a transistor smaller than an atom), so I'm not sure why I would expect
computational power to continue to grow exponentially.

~~~
danieltillett
Moore's law is about a particular way of implementing computation machinery,
not computational power. There are many ways of building computing machinery -
if one path becomes impassable we can move down another.

~~~
drdeca
Alright, sure, I guess some other sort of technology could be used.

But Koomey's law (the amount of computation per unit of energy doubles about
once every 1.57 years) has some hard boundaries that aren't all that far off.
Landauer's principle / the Landauer bound will stop Koomey's law for
irreversible computation by 2048.

And even for reversible computation there is a limit in the Margolus–Levitin
theorem, which Wikipedia says (with a [citation needed]) should run out
within about 125 years, which, admittedly, is substantially more than 32
years.
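(The 2048 figure can be reproduced with back-of-the-envelope arithmetic; the
2016 starting efficiency below is an illustrative assumption, not a measured
value:)

```python
import math

# Landauer's principle: erasing one bit costs at least k*T*ln(2) joules.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # room temperature, K

e_landauer = k_B * T * math.log(2)        # minimum J per irreversible bit-op
ops_per_joule_limit = 1.0 / e_landauer    # ~3.5e20 bit-ops per joule

# Koomey's law: efficiency doubles roughly every 1.57 years.
# The 2016 starting efficiency is an illustrative assumption.
ops_per_joule_2016 = 2.5e14
doubling_period = 1.57  # years

doublings = math.log2(ops_per_joule_limit / ops_per_joule_2016)
year_limit = 2016 + doublings * doubling_period
print(f"Landauer limit: {ops_per_joule_limit:.2e} bit-ops/J")
print(f"Koomey's law hits it around {year_limit:.0f}")
```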

But still,

the earth is finite. Exponential curves on it tend to end up looking more
like logistic curves.

Finitude. Scary stuff.

~~~
danieltillett
Sure, given the laws of physics there is a finite amount of computation that
can be done in a fixed volume. But we are still a long way from reaching such
limits, and the human brain is very far from them.

One of the reasons we should fear a super AI is that it will be constrained
by the laws of physics and so will want to use all the resources within its
light sphere efficiently. Humans are not an efficient structure for
computation, given how we evolved, so any unconstrained AI would likely not
hesitate to reorganise us into a more efficient structure.

------
unabst
This article continues to feed our baseless fears of AI.

If we're looking for an existential threat to humankind, we have it already in
nuclear bombs. If you want a "kill switch" compatible with any robot, it's
called a gun.

Weak AI is not strong AI, and strong AI is not necessarily "natural"
intelligence, as in human intelligence. Google cars and AlphaGo will never
become self-aware. Weak AI does not lead to awareness. Awareness is its own
problem, and will require its own unique solution. There are no falsifiable
abstractions without a physical implementation - a fundamental tenet of
science.

"Feeling is uncomputable."

Then how the fuck do we do it? All metaphysics, dogma, and magic have proven
to be scientifically baseless. There is no special treatment of any specific
trait in science just because "we have it" or because it's "organic". This one
isn't even that unique or rare. A photo sensor "feels" light. And if we're
talking about emotions instead, then watching a scene of a mother reunited
with her child would certainly require "computing", and if feelings are
"triggered" then they may lead to "tears". If all this happens in a robot,
and is convincing enough, the quotes disappear. And if a human pretends to
cry, then bring on the quotes.

There is no difference between simulation and the real thing, unless you're
intentionally faking it. And this is not even a problem specific to robots!

"Emotion is a hugely powerful and personal encoding-and-summarizing function.
It can comprehend a whole complex scene in one subtle feeling. Using that
feeling as an index value, we can search out—among huge collections of
candidates—the odd memory with a deep resemblance to the thing we have in
mind."

That's a dramatic way of stating "input can trigger memory" and computers do
that already.
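(Indeed, a crude version of "feeling as index value" is a few lines of code;
the memories and feature vectors here are invented for illustration:)

```python
import math

# Each memory is tagged with a small feature vector, and recall is a
# nearest-neighbour lookup keyed on the current "feeling".
memories = {
    "first day of school":   (0.9, 0.2, 0.7),   # (anxiety, joy, novelty)
    "mother at the airport": (0.1, 0.9, 0.3),
    "lost in a new city":    (0.8, 0.1, 0.9),
}

def recall(feeling):
    """Return the stored memory whose tag is closest to the current feeling."""
    return min(memories, key=lambda m: math.dist(memories[m], feeling))

print(recall((0.8, 0.1, 0.85)))   # → "lost in a new city"
```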

If we were to translate the external world into code the machine could
digest, this would compose an interface allowing the machine to "feel"
things, for lack of a better word. And wouldn't this deserve to be called
"emotions"? Hence, emotion is just another interface, or sense.

So the question then becomes, would there be strong intelligence void of such
an interface? And if no, wouldn't some of these emotions necessarily be
positive? Must it not have positive feelings towards others?

As corny as it sounds, if we ever build independent, naturally intelligent
beings, love and friendship are the secret ingredients that will keep us
together. Scientifically, evolution already attests to this. Love is real,
it's physical, and it's an enormous part of our lives. We even have organs
dedicated to it. Where there is love, there is peace, and there is family.
Only organisms that don't love each other eat each other. This would not
change even if AI were to become a species. All strong AI needs is Strong
Love.

And we already love our robots. We just need them to hold up their end of the
bargain, and as their parents and creators, it's on us to raise them right.

AI is nothing to fear. We should be looking forward to it.

------
jacquesm
What with all the people stroking their smartphones, you'd think they were in
a very intimate relationship already. I'll bet the average smartphone gets
more attention than the average significant other.

~~~
yarou
Spot on.

And chances are, you share the most intimate details of your life with your
smartphone.

------
Rainymood
"Asking whether a computer can think is the same as asking whether a
submarine can swim"

------
chewymouse
How do you read this (paywall)?

~~~
zajd
Click the "web" link

~~~
joe_the_user
They stopped that from working, apparently

