
Ken Goldberg: Creative robots, Kurzweil fallacy, and what it means to be human - dnetesn
http://nautil.us/issue/20/creativity/ingenious-ken-goldberg
======
markbnj
I enjoyed the piece, and it's always nice to hear a voice of reason on the
"danger" of robotics. On the other hand, I found the format disappointingly
dumb. What was the value of making me scroll down, click a title, then scroll
back up to watch the video in the header? Maybe I missed something.

~~~
nautilus
We admit the player was clunky. We have a new one, as you can see in our
interview with Richard Saykally, [http://nautil.us/issue/25/water/ingenious-
richard-saykally](http://nautil.us/issue/25/water/ingenious-richard-saykally),
and we are going through the process of converting earlier interviews.

~~~
TheOtherHobbes
Transcripts!

You know this thing called reading? It's about ten times faster than video for
sharing verbal content.

------
MichaelGG
So his argument is "I can't think of how to build AI, and it looks pretty hard
so, yeah." At least he admits it is "special" to him.

How is that an argument against Kurzweil? So far, we have no real ideas on
consciousness. So it seems that until we've at least figured that out, no one
should be making flat prohibitions on what we can achieve. (Kurzweil could be
wildly off on timelines, but that's the most obvious and only real criticism
there is, right?)

Not to mention, as far as we can tell, we are machines. So it takes an extra
something to say we can't ever build ourselves. (Let alone figure ourselves
out and improve on the design.)

~~~
scottlocklin
Not knowing how to do a thing is a pretty good argument against it actually
happening in any short period of time. Since we can't even define
consciousness... conscious machines seem pretty speculative.

As far as Kurzweil goes as a prophet: how is his track record? Pretty bad as
far as I can tell.

~~~
roel_v
"As far as Kurzweil goes as a prophet: how is his track record? Pretty bad as
far as I can tell."

Actually, it's pretty good, if you consider his predictions from 15 years or
so ago and take some liberty in interpretation. I forgot where I read the list
that enumerated it, though.

------
joe_the_user
"Creativity is doing something new and interesting"

Uh, I don't doubt that this kind of sentiment arises when one has worked with
robots for a while, done some impressive stuff, and had them consistently
fail to inspire any feeling of interestingness.

But the statement itself is, ironically, neither new nor interesting, since
beyond interestingness being something people sense, the statement gives one
not a clue what the criteria are for determining its presence. And a remarkable
number of critiques of AI by very insightful people come down to a kind of
vitalism - a long, round-about way of saying "robots just don't have
that special something".

And, of course, humans so far have failed in producing a wide range of human-
like behaviors in robots and programs. There is something of a psychological
tendency to take everything a computer can do and deprecate it relative to
everything a computer can't do - playing chess was once something people
imagined would involve "intelligence", but now no longer do, etc.

All that said, I'd like to shout-out Douglas Hofstadter's Fluid Analogies
[1] approach, which actually makes an effort to describe and define some
aspects of creativity.

[1]
[https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...](https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_Analogies)

~~~
TheOtherHobbes
Also, this isn't comparing like for like - because you're comparing the most
exceptional and gifted individuals out of a human population of billions with
a single robot or piece of software.

The vast majority of humans aren't particularly creative either. People who
can do (e.g.) music a bit are maybe 25% of the population.

Of those, most are imitators with enough technique and expression to
successfully reproduce familiar tropes, but not to be inventive.

Innovators are maybe 0.1% of the population - and that number is a complete
guess, because so far as I know there's been absolutely no research into
creativity in populations.

So "that special something" is rare already. The problem with computational
creativity is that success is partly social and psychological - new tropes
have to capture the imagination in some way, and it's very difficult for a
machine with no model of human emotional and psychological responses to
calibrate its output to meet that goal.

~~~
svantana
Not only that, but the specialness is totally in the eye of the beholder - one
man's trash is another man's fine art. And if told that trash was painted by a
certain Mr Picasso, perceptions might change rather quickly. There's a reason
the scientific method is so important -- it's the best way we know not to fool
ourselves all the time.

~~~
TheOtherHobbes
Picasso didn't produce trash because he couldn't paint. He could paint and
draw incredibly well. But he got very bored by painting incredibly well before
he hit 20.

Everything after that was one attempt after another to invent new visual
languages for painting.

The cultural value of those attempts is completely different to the resale
auction value of a Picasso, which is driven entirely by market speculation and
the fact that highly rated artworks are treated as a convenient store of value
- rather like gold, but more compact and lighter.

Cultural value is probably best measured by influence. On that basis Picasso
scores very high, as one of the creators of abstracted figurative art. That
trope remains mysterious to people who believe painting should be about photo-
accurate or easy-to-read figurative work, but it's been a huge driver of fine
art and all kinds of commercial illustration.

------
tehchromic
It's nice to see some dissent from Kurzweil and the AI singularity POV.

In my opinion KG's view here is closer to the truth of it.

It seems to me that the human organism is the singularity, and like many of
our technologies, including metallurgy, cities, nuclear weapons, and
bioengineering, computer tech and automation have the ability to wipe us out.

But it won't be because it becomes conscious and turns on us; it will be
because we cause some chain reaction or blunder, by accident or on purpose:
like wiping out an invaluable resource like air or water, or by otherwise
destabilizing the global biosphere, or through some code typo that causes
automated manufacture machinery to eat us for breakfast.

But the fear of machines gaining consciousness is, at least in our age, pure
fantasy. If consciousness were directly related to complexity, then the moon
and sun would be awake and singing -- both are stuffed to the gills with
highly organized information! Computers that can provide complex answers, or
walk and talk as if they were conscious, are no more awake than the moon or
sun, and to say otherwise is to be a carnival salesman.

Kurzweil and his ilk are seeing human consciousness in the mirror of the
machine, and it scares them, as it should. But it's a mistake to think that
the machine will awaken accidentally or casually, as a result of simply adding
too many circuits, or feeding the wrong code into the wrong compiler. That
idea, applied to computers, is a philosophical misunderstanding of the
difference between intelligence and consciousness.

A conscious machine is not something that can be made without extraordinary
technical ability and intentionality - it's on par with the ability to
colonize planets - way beyond us, and not something likely to happen by
accident, except of course in the traditional sense of already having the
ability to do so :}

------
mark_l_watson
Nice, thanks for posting that. I knew Goldberg from his work and papers, but
had not heard him speak before. I need to check whether he teaches any online
classes.

