
Why people think computers can't - xtacy
http://web.media.mit.edu/~minsky/papers/ComputersCantThink.txt
======
dasil003
_So why is genius so rare, if each has almost all it takes?_

Simple statistical distribution. Compared to monkeys we _are_ all geniuses.
Compared to a hypothetical super-intelligent alien race we are all as
children. Individuals on average could easily be much more creative or much
less; it's all relative. The fact that we recognize and elevate genius within
our ranks is just an emergent psychology of being a sentient species.

~~~
loewenskind
Personally I think environment plays a huge role in this as well. If you grow
up in an environment that constantly reinforces the idea that
learning/growing/etc. are pointless, most people _will_ eventually stop
trying. Your environment also
has a lot to do with forming your "id" [1]. I personally believe that if you
could go back in time, take this same person and put them in a different
environment they would be recognizable only in appearance.

I think most people are capable of being so much more than they are.

[1] pg has an article that talks about "id". Your environment will often
influence how much you allow to become part of your "id" and what to do when
various parts of your "id" are impugned.

------
krakensden
If you assume that our brains are completely physical objects (no soul), then
you should be able to model them with a computer, like all other physical
objects, and create a thinking computer.

Just because we have an incomplete understanding now doesn't mean we always
will, or that it's impossible to have a complete (or reasonably complete)
understanding.

Or there's the idea of a non-physical, impossible to model soul. I don't see
much data to support that conclusion though, it seems the equivalent of cosmic
ether to me.

~~~
benpbenp
It's not that there are "data" to support the idea, but there is the fact that
consciousness is unobservable. That is weird and different from all other
physical phenomena, wouldn't you say?

~~~
vibragiel
Unobservable? You mean it doesn't emit visible light, right?

It's complex, it's hard to define, but it's definitely not unobservable. Also,
as a mere physical manifestation, you can fiddle with it in a pretty physical
fashion. You can put it on drugs and watch it warping. You can toy with its
physical structure surgically and see how it gets messed up (wonderfully
discussed in Oliver Sacks' books), you can bump it with a boxing glove and
literally switch it off.

Too interactive for an unobservable phenomenon.

~~~
benpbenp
Sorry I forgot the qualification: excepting one's own experience of one's own
consciousness. That was silly.

What makes it weird and different from physical phenomena is the fact that it
is not generally observable, even though it is posited to exist for all
people.

Or do you disagree even with that? If you do: I _don't_ just mean that it
doesn't emit visible light. I mean it doesn't emit _anything_. Consciousness
is just the fact that the universe exists from a perspective. It is not an
object, "out there", "in the world", but something _only experienced_ by _each
consciousness for itself_. The mere fact that we could create a perfectly
convincing AI and then still have doubts about whether there really is
"anything in there" is a result of the fact that consciousness is
unobservable. Does that explain well enough what I mean?

~~~
sprout
If we're to assume that consciousness only exists where it is observable, only
I exist. All of you are automatons.

In fact, because it is unobservable, you cannot even make a reasonable
argument that the netbook I'm typing this on lacks consciousness.

I for example can argue that anything that is executing a series of algorithms
as part of a system has consciousness, and therefore this computer is quite
conscious, just as I am (even though my series of algorithms is much less
well-defined and reproducible.)

~~~
benpbenp
I personally do not assume that consciousness only exists where it is
observable. I am merely interested in the fact that it is unobservable-- and
in how that makes it weird and different. Do you agree with me on this point?

~~~
sprout
Yes, but given that it's empirically indistinguishable, from an ethical
standpoint we should assume it exists when we see the hallmarks of
consciousness (though perhaps the primary hallmark is an intrinsic
understanding of the concept itself.)

------
justsee
The article doesn't really do a good job of explaining the real problem, which
is understanding what consciousness is, as a requirement for a thinking
'being'.

Obviously Penrose's book 'The Emperor's New Mind' came out some time after
this post by Minsky, but as one reviewer summarises:

"The Emperor's "new clothes," of course, were no clothes. The Emperor's "New
Mind," we then suspect, is nothing of the sort as well. That computers as
presently constructed cannot possibly duplicate the workings of the brain is
argued by Penrose in these terms: that all digital computers now operate
according to algorithms, rules which the computer follows step by step.
However, there are plenty of things in mathematics that cannot be calculated
algorithmically. We can discover them and know them to be true, but clearly we
are using some devices of calculation ("insight") that are not algorithmic and
that are so far not well understood -- certainly not well enough understood to
have computers do them instead. This simple argument is devastating. "
<http://www.friesian.com/penrose.htm>

~~~
snikolov
I believe Minsky is arguing that because these "devices of calculation" are
not well understood, you cannot conclude that there is anything special about
them that puts them beyond the reach of any computational (algorithmic)
processes.

I think he's right. There's been a lot of progress in understanding
information processing in the brain (especially the neocortex) since Minsky
wrote this, and I really believe someday soon, notions such as "insight" that
we, in our ignorance of how they work, reserve exclusively for humans, won't
be such a big mystery anymore.

~~~
justsee
That is exactly what Penrose demonstrates in his book - within mathematics
there are things that cannot be calculated algorithmically. Which is also why
the above summary makes the point that this simple argument is devastating
(for proponents of Strong AI).

A summary of his position can be found on his Wikipedia page: "He claims that
the present computer is unable to have intelligence because it is an
algorithmically deterministic system. He argues against the viewpoint that the
rational processes of the mind are completely algorithmic and can thus be
duplicated by a sufficiently complex computer. This contrasts with supporters
of strong artificial intelligence, who contend that thought can be simulated
algorithmically. He bases this on claims that consciousness transcends formal
logic because things such as the insolubility of the halting problem and
Gödel's incompleteness theorem prevent an algorithmically based system of
logic from reproducing such traits of human intelligence as mathematical
insight."

I searched for some decent refutations by Minsky, and was disappointed to only
find this: <http://kuoi.com/~kamikaze/doc/minsky.html> "Thus Roger Penrose's
book [1] tries to show, in chapter after chapter, that human thought cannot be
based on any known scientific principle."

Disclaimer: I'm not a student of AI, and only realised just now that Penrose
and Minsky represent the two opposing schools of thought on AI!

~~~
Tichy
"within mathematics there are things that cannot be calculated
algorithmically"

What is missing is the proof of the opposite: that the human brain can
calculate those things. I don't think it can - mathematics is the tool we use
to think about them, after all.
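
The canonical example here is the halting problem. A toy sketch (my own
illustration, not Penrose's argument verbatim) of Turing's diagonal trick:
given any claimed halting oracle, you can build a program that defeats it.

```python
def make_paradox(halts):
    # Given any claimed halting decider, build the program that defeats it.
    def paradox():
        if halts(paradox):   # the oracle predicts paradox halts...
            while True:      # ...so loop forever,
                pass
        return "halted"      # ...otherwise halt immediately.
    return paradox

def oracle_no(f):
    return False             # a (wrong) oracle claiming nothing halts

# If the oracle says paradox loops, paradox promptly halts -- contradicting
# the prediction. If it said paradox halts, paradox would loop forever
# instead. Either way, any total oracle is wrong about some program.
p = make_paradox(oracle_no)
print(p())   # → halted
```

Note the argument only rules out an algorithm that decides halting; it says
nothing about whether a brain can decide it either.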

~~~
snikolov
I think you're right. The brain is a very specialized computing machine, and
it evolved for a specific purpose.* Evolution doesn't reward general computing
capabilities. Hence, computers are vastly more general than brains. If
something is, in principle, uncomputable, then there is no chance that a brain
can compute it.

*Sure, a networked system of neuron-like "units" might have general computing
capabilities. It's as if you saw a fully functional modern computer for the
first time, and it only gets fed arithmetic problems - the system itself is
much more powerful and general than that, and you could hack around and make
it do other things once you understand how it manages to do arithmetic. The
brain is kind of like that too -- it gets fed stimuli and it has evolved to
process those stimuli. Yet it might be that a computational system made of a
network of neurons could be a lot more powerful and general than that in
principle.

------
cma
Some of the OCR mistakes were humorous in light of the content.

------
username3
_Can Computers Think?_ debate graph,
<http://debategraph.org/Stream.aspx?nID=75>

------
ars
> I don't believe that there's much difference between ordinary thought and
> highly creative thought.

And I do. This quote says it perfectly:

In science, as well as in other fields of human endeavor, there are two kinds
of geniuses: the “ordinary” and the “magicians.” An ordinary genius is a
fellow that you and I would be just as good as, if we were only many times
better. There is no mystery as to how his mind works. Once we understand what
he has done, we feel certain that we, too, could have done it. It is different
with the magicians. They are, to use mathematical jargon, in the orthogonal
complement of where we are and the working of their minds is for all intents
and purposes incomprehensible. Even after we understand what they have done,
the process by which they have done it is completely dark. They seldom, if
ever, have students because they cannot be emulated and it must be terribly
frustrating for a brilliant young mind to cope with the mysterious ways in
which the magician’s mind works. Richard Feynman is a magician of the highest
caliber. Hans Bethe, whom [Freeman] Dyson considers to be his teacher, is an
“ordinary genius,”

(Quoted from Enigmas of Chance: An Autobiography, by Mark Kac. Harper and Row.
1985. p. xxv.)

~~~
andreyf
What a load of horseradish. Feynman was adored by his students, and was often
quite introspective about how he goes about solving problems.

------
milesf
I believe computers will one day be able to think, but certainly not using Von
Neumann architecture.

~~~
fleitz
Why do you feel that the key to thinking is keeping programs and data in the
same memory? Or is there another characteristic of Von Neumann architecture
that I'm not thinking of?

If thinking can be done with a universal Turing machine then it would work on
a Von Neumann architecture machine, since a Von Neumann machine is
Turing-complete.
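
To make that concrete, here is a toy sketch (my own hypothetical example): a
tiny Turing machine interpreter where the machine's "program" is just a table
of data living in the same memory as everything else -- exactly what a
stored-program Von Neumann machine provides.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    # Tiny Turing machine interpreter: the transition table `rules` is
    # ordinary data, stored alongside the code that interprets it.
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example "program": flip every bit, halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1011"))   # → 0100
```

(The step cap is just a guard so the sketch always terminates.)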

~~~
zephjc
Milesf might be implying that he thinks a different type of hardware might be
necessary to have true cognitive abilities, e.g. Quantum computing or
something taking advantage of quantum effects as described by Roger Penrose et
al. (I don't buy that requirement, personally)

~~~
fleitz
I think that we'll just end up gluing neurons onto transistors for the near
future. With the latest research they have rat brains flying flight sims. The
primary stumbling block is going to be the ethics committee or finding an
animal that has a large brain that ethics committees aren't particularly fond
of. Or networking the brains of rats together to form a much larger neural net
that interfaces to ethernet. Think EC2 with a rat brain on each server. Or
conversely a network interface for the human brain.

I do think it's possible to think with a UTM but I don't think that
discovering that ability is going to be economical compared to fusing
transistors to neurons.

~~~
pavel_lishin
So, at what ratio of biology-to-technology does something seize being a
thinking person, and start being a thinking computer?

Clearly, a few rat cells hooked up to big processors are a thinking machine.
But someone with an electronic implant in their brain that prevents seizures
is a thinking person. But at which point do we say, "you're a robot, beep
boop"?

~~~
apsurd
you want "cease" (to stop/end), rather than "seize" (to take hold/control of).

no offense or anything, I am just assuming you are a non-native english
speaker (by your name)

~~~
pavel_lishin
Oops! Well, you're right - my first language was Russian - but at this point
English is my primary language. I'd attribute that mistake less to my
heritage than to all the wine I drank last night :P

------
naner
I posted something tangential to this topic, but none of you philistines
upvoted it.

Bam! There it is. --> <http://news.ycombinator.com/item?id=1526734>

------
newhouseb
This seems to conflate consciousness with creativity, which are not the same
thing. Computers can be creative combinatorially with evolutionary
algorithms, provided you can supply a fitness test.
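
A minimal sketch of what I mean (a toy example with an assumed bitstring
problem, not production code): the search loop is completely generic, and all
the problem-specific "creativity" sits in the one-line fitness function.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=200, mut=0.02):
    # Start from a random population of bitstrings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # selection: fitter half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # single-point crossover
            child = a[:cut] + b[cut:]
            # Point mutation: flip each bit with small probability.
            children.append([bit ^ (random.random() < mut) for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

# All the problem knowledge lives in this one line: "more ones is better".
ones = lambda genome: sum(genome)
best = evolve(ones)
```

Swap in a different fitness function and the same loop "creates" solutions to
a different problem.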

~~~
wlievens
But that just moves the creativity bit into designing the fitness function,
does it not?

(I do agree with your sentiment though; just pointing out the obvious hole)

~~~
ithkuil
It depends on what you are measuring.

Let's say that your job is to solve some problem and you want to use a
genetic algorithm to solve it.

You probably have to build a model, a simplified version of reality, and find
a fitness function that makes sense in that model, that is fast to calculate,
etc.

You probably have to be clever; you have to have some intuition about what
matters and what doesn't. That is, you have to know when your cow is too
spherical.

This task requires intelligence, and I'm sure that creativity will help you
make a leap forward, especially if the problem is not well known in advance,
or if you benefit from solving it in a completely different way.

But this doesn't mean that every fitness function that will solve the
problem, given enough time, requires creativity, or even intelligence, to
design!

Natural selection has provided really ingenious solutions to a huge range of
problems. The fact that we are discussing this at all is an astonishing
achievement of that very fitness function which is natural selection.

But nobody designed this fitness function, and as long as we interpret the
word "creativity" as meaning "created by a mind", no creativity was required
to design this "fitness function" or any of the products of that "genetic
algorithm".

Of course, one could argue that "creativity" is the very act of "creating"
things, even without creators -- even as an emergent property of a complex
system (like natural selection).

Well, redefining creativity that way is certainly not wrong, but I don't
think people normally think about creativity in those terms; rather, they
think of it as an intrinsic characteristic of minds (and perhaps of human
minds only, to most people, but that's another topic)

------
earth
I don't think computers can think; thinking is only a human perception, and
most of it is useless. Being creative enough to direct an orchestra that
other humans may like is pointless to survival. To think, I think you need to
have a purpose. If I were to give humans a naive purpose, I would say it is
to find the meaning of life through survival and attaining great knowledge. I
believe a machine will be able to do this better eventually, if not soon. All
it needs is problem solving, and to start with a set of resources that it's
allowed to use.

------
rabidsnail
Thinking is defined as the thing that separates humans from nonhumans.
Computers aren't humans. Therefore, even if computers can perform every task
that humans can perform, what the computer is doing is, by definition, not
thinking. Also, the question of whether X can think is completely without
value.

~~~
wlievens
It's hard to give a more circular argument than that.

~~~
_delirium
It's more or less the standard cynical joke in AI circles, though -- computers
aren't intelligent, so when AI succeeds at something, it isn't AI anymore, and
moves to some other field instead, so AI never has successes. :)

