

Peter Norvig: The machine age - pitdesi
http://www.nypost.com/p/news/opinion/opedcolumnists/the_machine_age_tM7xPAv4pI4JslK0M1JtxI

======
Smerity
In this, AI/machine learning is compared to the space race and the fight
against cancer - I like the comparison, considering I work in the field ;)
Unfortunately, though, I think AI and machine learning have the fundamental
problem of rolling improvement...

Think back a hundred years to a time before the internet, long-distance
telephones, common air travel and so on. To people from a hundred years ago
we'd be the most outlandish science fiction story ever told.

I consider the same thing whilst watching science fiction films - in reality,
the characters wouldn't be amazed that they can reach Mars in half an hour.
They'd instead be complaining about the in-flight entertainment or the
turbulence caused by navigating asteroid fields.

AI has the same problem. People can now (naively) communicate with each other
even though they share no common language. Texts and documents can be re-
written in real time to be legible in another language. Credit card fraud and
spam emails are handled transparently by systems trained on hundreds or
thousands of hours of cumulative knowledge. No-one sees these things. No-one
respects these things.

Humankind judges itself from where it currently stands, not from where it
began. For this reason alone, incremental improvements will never really be
considered milestones of achievement.

~~~
zeteo
"To people from a hundred years ago we'd be the most outlandish science
fiction story ever told."

You'd be surprised:

[http://en.wikipedia.org/wiki/Paris_in_the_Twentieth_Century#...](http://en.wikipedia.org/wiki/Paris_in_the_Twentieth_Century#Predictions)

~~~
derleth
Technology is one thing; society is another: I doubt anyone from the 19th
century could possibly have imagined a debate over gay marriage. A crazy
person promoting it and getting beaten down by a righteous world, possibly,
but not a debate.

------
Luyt
A link to the 'print version' of the page: the whole article on one page
without the incredible clutter and cruft that the New York Post splashes all
over its site:

[http://www.nypost.com/f/print/news/opinion/opedcolumnists/th...](http://www.nypost.com/f/print/news/opinion/opedcolumnists/the_machine_age_tM7xPAv4pI4JslK0M1JtxI)

------
mthomas
Perhaps offtopic, but I find it odd that there is an opinion piece by Peter
Norvig in the NY Post.

~~~
pgbovine
True, but I'm sure the NY Post has more subscribers than the NY Times, so if
his goal was to reach the largest possible audience, perhaps the NY Post was
an appropriate venue.

~~~
necubi
This isn't true. According to Wikipedia, the NY Times has 876,638
daily/1,352,358 Sunday subscribers [0] while the NY Post has 525,000/343,361
[1]

[0] <http://en.wikipedia.org/wiki/NY_Times> [1]
<http://en.wikipedia.org/wiki/NY_Post>

------
wooby
The episode "The Thinking Machine" of the Nova miniseries "The Machine That
Changed the World" is my favorite introduction to the history of AI up until
around AI winter.

The episode, along with four others, is available here:
[http://waxy.org/2008/06/the_machine_that_changed_the_world_t...](http://waxy.org/2008/06/the_machine_that_changed_the_world_the_thinking_machine/)

------
bmh100
This is an excellent article for raising awareness of AI. However, I disagree
with its conclusion that we will always relate to AI as tools, pets, advisers,
etc.

When we ask whether machines can think, I believe it is a question of volition
and self-direction. Does the machine have an open-ended goal, and is it
self-aware? Can it alter its own programming to change its own methods, its
own knowledge, and its own goals? If the answer to both is "yes," then I
believe we will consider the machine to be thinking.

What are the implications? If machines gain those capacities, as well as the
ability to compute emotions, then what separates them from us? If you agree
that humanity is fundamentally a function of our minds, then, if machines can
compute in fundamentally equivalent ways, they become "human." What evidence
is there that this is forever impossible?

~~~
tgflynn
_If you agree that humanity is fundamentally a function of our minds_

What about the ability to feel pleasure and pain?

I don't believe a deterministic machine will ever experience such subjective
feelings (though it may behave as if it does). Computers may well become
sapient but they (at least with current technology) will never be sentient.

~~~
feral
Thought experiment: if there were a human who suffered an injury that left
them unable to feel pleasure or pain - some form of nerve damage, perhaps - I
wouldn't think any less of that human's mental function. I wouldn't think they
had lost any humanity, or argue they should be accorded fewer human rights. I
think the fact that they'd still have hopes and fears would be much more
important than their ability to feel pain or pleasure.

Given this, how important is the ability to feel pleasure, or pain, really?

I think you are focusing on the wrong thing, and disagree with this way of
assessing humanity.

~~~
tgflynn
I used pleasure and pain as examples of subjective experiences. Hopes and
fears are other examples.

If you believe that humanity amounts to more than merely electrical signals
and other physical phenomena (ie you're not a reductionist) then I'm surprised
it's my side of the argument you're disagreeing with.

~~~
feral
It is hard to have this conversation on Hacker News.

The format works well for general discussion, but this is a particularly
subtle topic - it's hard to make conversational progress without several
essays back and forth to pin down a set of agreed terminology.

This is something I'm struggling with here. I don't want to get bogged down in
a debate over semantics, but without clarification it's hard to have any
meaningful discussion. Sometimes you need to spend some time discussing
semantics in order to move past them.

I need a word to describe the sort of higher-level awareness that we humans
have in and of ourselves, the kind that makes it not OK to 'switch off'
another human. Normally I'd say sapience, after 'wisdom' - but you are clearly
using that word in a different sense, so I'll go with 'personhood', without
necessarily limiting that to Homo _sapiens_.

While we still know relatively little about personhood, I would be happier to
use hopes and fears as defining criteria, rather than pleasure and pain.

Pleasure and pain are very low-level sensory phenomena. Very simple life can
feel pleasure and pain - pain particularly; to my mind, they are neither
necessary nor sufficient criteria for personhood.

Hopes and fears are, I think, the product of a much higher-level reasoning
process. For a start, they are not merely short-term and reactive; they
presuppose some model of the future. Further, I think they generally
necessitate a model of self, and an awareness of self in that future.

That makes them very different criteria, in general.

If you were arguing that 'subjective experiences' were necessary for
personhood, it would have been better to state that explicitly.

The post you were responding to, when you mentioned pleasure and pain, said
that 'humanity was fundamentally a function of our minds'. I agree with that
post.

If you require pleasure and pain - which clearly require some level of
situatedness in a sensory apparatus which can communicate these sensations -
for your definition of personhood, then I can see how you would argue against
a machine 'simulating' personhood.

However, with hopes and fears, no such situatedness is required. I see no
reason why a machine couldn't potentially simulate/execute a
program/consciousness that had hopes and fears, even without a body, or
conventional sensory input.

I see no reason why a simulated/executed (it is _very_ important to realise
that these two verbs have the same effective result in this context) program
couldn't have subjective experience.

>If you believe that humanity amounts to more than merely electrical signals
and other physical phenomena (ie you're not a reductionist) then I'm surprised
it's my side of the argument you're disagreeing with.

So many difficult (loaded?) words there: 'amounts', 'merely', 'reductionist',
'side'.

I believe humanity amounts to more than merely electrical signals, in the same
way I believe the Mona Lisa 'amounts' to more than 'merely' a collection of
oils on a poplar panel.

Join the technological revolution: it's not the atoms that are important -
it's the bits. The pattern things are in is much more important than the
things themselves. A quicksort algorithm running on vacuum tubes, or
transistors, or water pipes, or a CPU simulated in Minecraft, or rocks in an
XKCD desert, is still a quicksort algorithm.
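To make the substrate-independence point concrete, here is a minimal sketch
(Python purely for illustration - the thread names no language): the same
pattern, whatever physically executes it, is the algorithm.

```python
# A minimal quicksort: the algorithm is the pattern, independent of whatever
# substrate (tubes, transistors, water pipes, rocks) happens to execute it.
def quicksort(items):
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```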

It's not 'merely' a collection of electrical signals. It's become more than
that, because the electrical signals are arranged in a certain pattern. But
it's still fully explainable in terms of electrical signals.

>(ie you're not a reductionist)

I do believe our minds are defined and explained (in theory) by physical
phenomena, like everything else in the universe.

I'm not a supernaturalist. How about you?

~~~
tgflynn
I'll try to explain my point of view on this to the extent that I myself
understand it.

First, to me, what you call "personhood" - which you define as that which
makes it wrong to kill (or, I would assume, to cause suffering to) another
human being - is based on the capacity of the being in question to experience
suffering. I think that humans have a much higher capacity for suffering than
most (if not all) other terrestrial animals, due to their more complex nervous
systems.

Part of this increased capacity may be due to increased cognitive capacity per
se (for example, I will suffer if I think my life is in danger, whereas an
animal may not be able to conceptualize that danger and hence will not
experience suffering in the same situation). Humans clearly have greater
sapience than animals, and this can lead to greater suffering.

It is also possible that humans have a greater intrinsic capacity for
subjective experience, independently of any cognitive process. If this is
true, I would say that humans have greater sentience than animals. I don't
know if it's true, because I lack a conceptual model for understanding the
nature of sentience and what physical processes may give rise to it.

In the absence of such a model, I can only guess and my guess is that
sentience somehow arises from the interaction between information and matter.

I think it's likely that deterministic computers will achieve human level
cognitive abilities in the relatively near future. I know of no cognitive
function that humans perform which I can't imagine being executed on a
sufficiently powerful computer. I would consider such machines to be sapient
but as far as I can see they would have no sentience whatsoever (not even as
much as lower animals).

As for your last question:

It seems to me that human beings are constantly falling into the same
intellectual trap. That trap consists of believing that their scientific
theories are not only accurate but also complete. At the end of the 19th
century physicists believed they knew essentially everything there was to know
about physics (except for a couple of pesky anomalies that could easily be
swept under the rug). Ten years ago it was common wisdom that the majority of
human DNA was junk which served no purpose. Only now are we beginning to
understand the role that "junk" plays in regulating gene expression.

I think that current scientific theories are not capable of explaining the
nature of sentience or of predicting which physical systems will be sentient
and which not. I'm not sure if that makes me a supernaturalist or not.

~~~
feral
Thanks for the detailed response.

>It is also possible that humans have a greater intrinsic capacity for
subjective experience independently of any cognitive process.

I must disagree with you here - I would say that subjective experience _is_ a
cognitive process.

Of course, this is a speculative opinion; but it seems likely to me:

We know that simple algorithmic and statistical processes can run on neural
nets; we know we can build very complex processes from simple building blocks;
we know the brain is full of neural nets; we know that high-level cognitive
processes are built on those neural nets; and we know that if we interfere
with the brain, we disturb the personhood.

To me, this seems like a pretty good case for applying Occam's razor and
deciding that personhood comes from the (big, incredibly complex) neural
nets.
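As a toy illustration of the "complex processes from simple building blocks"
point: two layers of trivially simple threshold units suffice to compute XOR,
which no single such unit can. The weights here are hand-picked for clarity,
not learned, and this is not a claim about real neurons.

```python
# Simple threshold units composed into a two-layer net that computes XOR --
# complex behavior built from trivially simple parts. Weights are hand-picked
# for this sketch, not learned.
def unit(inputs, weights, bias):
    # Fires (1) iff the weighted sum of inputs exceeds the bias.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > bias else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], 0.5)       # OR-like unit
    h2 = unit([a, b], [1, 1], 1.5)       # AND-like unit
    return unit([h1, h2], [1, -1], 0.5)  # fires iff OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints the XOR truth table: 0, 1, 1, 0
```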

Loosely, I believe that if we could write a sufficiently powerful piece of
software, and if we constructed it so that it was capable of complex
introspection, then it would say it had a subjective experience. And I would
tend to believe it. Such a feat is clearly some distance from the current
state of the art - we have no idea how to do this - but still.

>I think that current scientific theories are not capable of explaining the
nature of sentience or of predicting which physical systems will be sentient
and which not. I'm not sure if that makes me a supernaturalist or not.

This is tricky territory.

It's reasonable to believe our scientific theories are incomplete. Certainly,
we have a long way to go, scientifically, before we understand how minds
operate.

But let's look at our physics knowledge. Our physics models seem to be pretty
good. Unless we are at very high energies, they do a very accurate job of
describing the world. We've built components with that knowledge that have
been reliable; our abstractions do not seem very leaky.

It is important to realise that while relativity did come along and wipe away
aspects of the Newtonian worldview, Newtonian physics is still accurate for
deciding where to throw the ball to get it into the net. The more detailed
physics is only relevant in certain situations that demand the lower level of
abstraction.

I don't believe nature evolved brains that require subtle properties of
high-energy physics to work. I'd say the systems underlying them aren't even
quantum in nature. Some scientists would dispute this - notably Roger Penrose.

But is evolution really going to build brains, and processing software - which
are present, in some form, in a vast range of species - out of some mysterious
quantum - or lower level - physics?

Evolution tends to be about bootstrapping. Higher-level systems are always
assembled out of lower-level building blocks. That's why I have cells,
tissues, organs - motifs and structure that repeat themselves.

If you evolve a system that is sensitive to the smallest changes in the
properties of its smallest units (quantum interactions, say), it's much
harder in terms of the search required. It is also going to be much more
fragile.

Even neurons are a pretty high-level construct from a physics point of view;
high enough that our current physics probably models their functional
characteristics completely. (Although we have yet to fully apply our current
physics to analyse them completely; ANNs as we see them in AI are but the
crudest approximations, and shouldn't be considered descriptive.)

So I would tend to believe that we don't need any new _physics_ to understand
how minds work.

If you are saying that there is some strange new physics needed to describe
how the information relates to matter, then I think you are, functionally, a
supernaturalist. I also put Penrose in that box, though, so it's a personal
opinion, and the jury is still out.

If you think that there is a lot of science yet to be discovered, to come up
with theories of how computational building blocks are made from neural
architecture, and what the 'algorithms' that run on that architecture are,
then I agree with you.

~~~
bmh100
I am glad that we can have this discussion, despite its difficulty. It is an
intensely interesting subject, and I can't resist.

For the poster below, thank you for bringing up the p-zombie argument. Indeed,
with links to deconstructionism and physicalism, how can you even tell that
another human is "human"? Can you prove that that individual is conscious, has
subjective experience, and has a soul? Or is it, in fact, an assumption that
all humans possess these things? This fundamental assumption is put to the
test if a computer can perfectly simulate a human mind. Perhaps we will see
that such notions are purely subjective constructs, like "cold". Though we may
feel "cold", it is a sensation and an experience. That experience is not
provable, even if we might get everyone to agree that -50C is "cold." All we
can prove is the behavior of the brain, and assume that that behavior is
linked to experience in general.

------
Jun8
"Learning turned out to be more important than knowing." This seemingly simple
sentence, to me, sums up the article, i.e. the data-driven approach as opposed
to the model-driven one.

------
l0nwlf
"One of the biggest issues in AI is determining what the question is really
asking, i.e. translating a 'natural language' query into something a machine
can understand so it can find the appropriate answer." - so true.

~~~
jbri
That's also one of the biggest issues in software development.

~~~
celoyd
Surely it’s _the_ issue in software development. Everything else is a case or
subproblem of “how do I use a computer to solve this imperfectly defined
problem?”

I mean, it’s borderline tautological, isn’t it? Software development is the
practice of translating natural-language requests into executables. That’s not
just a recurring question – it’s definitional.

This is why I’m nervous when strongly antisocial people say they want to go
into software. The point of this field is to figure out what people need (or
want, or mean, or like). Of course you can do this creatively rather than
responsively, and with more rather than less respect for your customers. But
you don’t _escape_ human tastes and concerns when your job is to explain them
to a microprocessor.

~~~
jbri
Well, there's the second issue of doing it such that the machine can produce
results efficiently, which tends rather towards the mathy computer-science
side of things - where I think the less social "I just want to write code"
people would be much happier.

~~~
celoyd
Efficiency is implicit in most natural-language queries. Managing efficiency
tradeoffs is a good example of something fuzzy yet important that you have to
figure out from natural-language queries and human interaction.

But yeah, if you really just want to work with numbers, you’ll probably be
happier in CS academia.

------
TeMPOraL
"Spam filtering programs using A.I. learning and classification techniques
correctly identify over 99.9% of the 200 billion spam e-mails sent each day."

200 _billion_ spam e-mails _each day_. Something should be done about spammers
just on the basis of how much electricity is wasted sending, processing and
discarding all those e-mails...
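The article doesn't say which learning and classification techniques these
filters use; a naive Bayes word-count classifier is one classic approach, and
a toy sketch (with made-up training data, purely for illustration) might look
like:

```python
import math
from collections import Counter

# Toy naive Bayes spam classifier -- one classic learning-and-classification
# technique of the kind the article alludes to. Training data is invented.
def train(messages):
    # messages: list of (text, label) pairs, label "spam" or "ham"
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    words = text.lower().split()
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
model = train(training)
print(classify("claim your free money", *model))  # → spam
```

Real filters train on vastly more data and richer features, but the shape of
the idea - learn word statistics per class, then score new messages - is the
same.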

------
mtrn
Single page (print) version:
[http://www.nypost.com/f/print/news/opinion/opedcolumnists/th...](http://www.nypost.com/f/print/news/opinion/opedcolumnists/the_machine_age_tM7xPAv4pI4JslK0M1JtxI)

