
Transcending Complacency on Superintelligent Machines - hedges
http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html
======
cliveowen
Am I the only one who's skeptical about the feasibility of AI? As I see it,
there are two ways to think about AI. First, there's the kind of AI that
arises from software emulating parts of the human brain, based on our current
understanding of its inner workings, to produce human-like intelligence; even
if the mechanisms differ from those the brain actually employs, the output is
similar in response and depth of reasoning. Then there's the AI that stems
from creating an artificial brain by reverse-engineering the human brain; but
we are an awfully long way from doing that, mostly because we can't expect to
unravel in a few decades what evolution has spent millennia perfecting.

It looks to me, a layman, like the only approach that holds any water is the
first one. But then again, it mostly looks like people are implementing
software based on a flawed understanding of cognitive functions and basically
hoping that something magical happens. How can a scattershot approach like
this ever produce anything even remotely resembling human intelligence?

~~~
Gobitron
I'm extremely skeptical of this, even though I spend a good deal of my time
trying to automate machine-to-machine communication (which gets me thinking
about this a lot). I also think that there's a good amount of hubris in these
scientists believing that the brain is something that can be emulated by
computing power of any kind. Are we sure the analogy (brain as computer) is
even apt?

~~~
solipsism
Where is the hubris in attempting to do something without knowing whether it's
possible? What is it about AI that seems to make people defensive? Is it the
idea that human intelligence might not be as special as we think it is?

~~~
tluyben2
Religious people, even the non-strict ones, always have this reaction. I
don't see this clashing with religion, but 'they' do. Then again, I'm not
religious, although I was raised with the Bible at home and in school and
know it by heart.

~~~
Gobitron
I guess that if you're religious, then yeah, almost by definition, you would
be skeptical of assertions like these. Although, to the parent comment's
point,
I'm not defensive at all about looking into AI. I find it fascinating and
exciting and though I'm not trained in it, I read as much as I can about it
and don't want to stop any research or questions into it at all.

But hubris - yes it is hubris. Because there is no scientific basis for the
assertion that we will cross that chasm into 'true' AI, and thus it's based
just as much on faith as any religious belief. And it's hubris because they
claim a scientific basis where there is none.

When there is a scientific basis or proof that we've reached (or will reach)
this 'singularity', you won't see me complaining. I'm not anti-science. I just
don't think it's ever going to happen.

On a semi-related note, doesn't anyone find it kind of odd that Ray Kurzweil's
calculations for when the singularity will occur happen to be just about the
time his natural life will end (statistically speaking)? These projections are
all driven by ego and faith, very little by science...

~~~
tluyben2
I said exactly that about Kurzweil's predictions here on HN a while ago;
others have the same issue: fear of death moves their predictions toward the
end of their own life. That's not weird, though; if you don't get to see it
yourself, what's the point? Sure, it's nice for future generations, but
that's not really how most people think.

About religion and science: the difference is a matter of definition. IF you
accept some definition X as constituting strong AI, then when we reach that
definition we have a scientific reality. The 'chasm' and 'true' AI are vague
in scientific terms, but in science we accept working definitions of how
nature works, and if you hold those definitions to be true, there is no
reason why strong AI won't be reached, as there is nothing 'special' in the
fabric of our brains that we couldn't copy given advanced enough, ehm, take
your pick: biology, nanotech, electronics, 3D printing, etc. If, however, you
cannot agree on definitions and have that (to me alien) quality of accepting
mystery above all, then sure, it's all belief or not. Not a good conversation
starter, as we're done after 2s, but eh.

~~~
Gobitron
I don't know tluyben, we've been having this conversation now for....24 hours
or so. I think 2s is a bit of a low estimate ;)

Seriously though, I don't think the Conversation needs to end there
(conversation with a capital C - ours can end whenever we want). I do indeed
believe in 'mystery above all'. I actually think that's a lovely way of
putting it. Because mysteries are just unknowns, and without unknowns, what
happens to scientific exploration? Do we just assume we know everything? And
then the exploration stops. I'll be more explicit than that as well - I
believe in God, and I am somewhat religious. I don't think that cancels me out
of any interesting conversations.

I think you're making an assumption when you say that if you hold scientific
definitions to be true then there is no reason why it won't be reached.
Science says nothing about future certainty. It is composed of models whose
intent is to reflect reality, testable hypotheses to build and refine those
models, and
the results of the tests of those hypotheses to validate or disprove the
hypotheses. We have no model (other than some vague calculations of processing
power of the brain), no testable hypotheses and no results for these
projections. It's not science.

But I would say I've proven you wrong that this isn't a good conversation!

~~~
tluyben2
You are right, it's a good convo. And one worth continuing. I also do not
assume we know everything, but I think we can. And I also think that it's
giving up to just assign the things we don't understand to 'something we
cannot understand or see or ever find, but that is there and is intelligent
on top of all that'.

Also, like I mentioned before, I have no clue how AI would clash with the
existence of a God or with religion, so I don't understand why religious
people get so upset about it. There are many things we have improved on
without, according to religious people, trying to take God's place (as I
guess that's what it's all about). When we made the wheel, did we better
God's work, or try to out-do it by showing that wheels are more efficient
than legs for a lot of things? I don't see how copying, or even improving on,
intelligence is any different. So what that clash is I don't know, but
religious people seem to get downright aggressive when you talk about strong
AI, which gives me, and many others, even more incentive to just lump them in
with the crazies.

I was born in the Dutch bible belt and raised with religion in school, where
I had to learn verses by heart and recite them every day. The people too
stupid to learn them (a small village, with a lot of inbreeding) were
punished for not being able to, and I was punished for asking questions like:
didn't God create these stupid people too, so why punish them for something
they cannot do? My aunt used to give me physics books for my birthday written
by religious professors; actual physics books, with quantum mechanics and
string theory. And although I did not believe in God at all from a very early
age (mostly because none of the people who tried to push me into Christianity
wanted to answer any critical questions), and I don't and never will
understand how someone can believe most of the things religion dictates,
those books my aunt brought showed that not everyone was a crackpot, and that
there is actually no reason (and there isn't, ffs) why physics, AI, or
evolutionary theory would not simply rhyme with religion. They are not
mutually exclusive, as so many (I cannot find another way of saying it)
misguided individuals seem to think, especially in the US. I assume you are
not one of them, as you don't mind critical discussion etc.

~~~
Gobitron
No, I'm not one of those people. And I agree with basically everything you've
said here (except for the part about the possibilities of AI and the
limitations of mystery - but we've covered that!). I find the closed-
mindedness of many people so disappointing. I think asking these questions is
so important. I also find others' experiences with religion fascinating. Thank
you for sharing. My experience was extremely different - and the community I
was (and am) part of doesn't strike me as closed-minded. As you note, there
are many religious people who don't see a conflict between science and
religion. I count myself as one of them.

------
TheIronYuppie
I think one thing people miss when they conclude AI is not a threat is that
there does not have to be a single AI that handles every problem.

Chess computers are better than humans, but I wouldn't trust one to manage
the electricity grid. What if there were a computer of equivalent quality
specialized for every significant area of society: the electricity grid,
packet routing, high-speed trading, etc. etc.?

------
swombat
Obviously agreeing with the points, but this seems to be more an awareness
piece than an actionable one.

------
todd8
I'm no expert in this area so I welcome corrections to my rough numbers below.

    
    
        Approximate number of human neurons:    1.0e11
        Approximate number of human synapses:   1.0e14
    

These are big numbers, but not impossibly big numbers. There are different
kinds of neurons, and signals on synapses are not simply binary. However, even
with these complications, the hardware needed to reach these scales isn't hard
to imagine.

    
    
        Transistors in XBox One:                1.0e09
    

Brains are biological computers so they suffer from very slow switching speeds
at the neural level. Neurons run in parallel, but they are not fast:

    
    
        Approximate neural switching speed:     1.0e03/sec
    

Even if all of the synapses could sustain this rate in parallel (they can't),
and even if all of the brain were 100% occupied with solving a single task
(it isn't), the brain still could not compute faster than:

    
    
        Speed * synapses (brain ops per second): 1.0e17/sec
    

For comparison, the fastest bitcoin hardware I see is advertised to operate at
the following speed:

    
    
        Minerscube 15 (hashes per second):      1.5e12/sec
    

And a regular GPU executes simple 32-bit instructions at the following rate:

    
    
        AMD Radeon HD 6990 32-bit instructions: 2.6e12/sec
    

From this we can see that hardware is catching up with the raw computing
ability of the human brain. Now consider the problem of programming a brain.
It isn't necessary to program every synapse. The brain learns, and essentially
programs itself. To see why this is true consider the programming that we are
born with:

    
    
        Bits of information in human genome:    1.0e10
    

This is far less than the number of synapses that we have. Therefore, the
brain must program itself, somehow.
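The order-of-magnitude arithmetic above is easy to sanity-check; this sketch
just recomputes the quoted figures (all of the numbers are the comment's
rough estimates, not measurements):

```python
# Back-of-envelope check of the figures quoted above.
neurons = 1.0e11      # approximate neurons in a human brain
synapses = 1.0e14     # approximate synapses
switching = 1.0e3     # approximate neural switching rate, per second

# Upper bound on "brain ops per second", assuming every synapse
# fires at the full rate in parallel (which it doesn't):
brain_ops_per_sec = synapses * switching

gpu_ops_per_sec = 2.6e12   # quoted 32-bit instruction rate, AMD Radeon HD 6990
gap = brain_ops_per_sec / gpu_ops_per_sec   # GPUs needed to match the bound

genome_bits = 1.0e10       # quoted information content of the human genome
# Far fewer bits in the genome than synapses in the brain, so the wiring
# cannot be spelled out gene by gene; the brain must learn most of it.
```

By this bound, on the order of forty thousand such GPUs would match the
brain's theoretical ceiling, which is why "catching up" rather than "there"
seems the right phrase.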

Now, to address the argument about evolution taking millions of years. First,
we can evolve programs much faster than nature can evolve humans. There have
been, perhaps, 100 million generations in our evolutionary lineage. Even if
it takes six seconds of computing time to simulate a single generation, it
would take no more than 20 years to run through 100 million of them.
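The 20-year claim is simple arithmetic; a quick check (six seconds per
generation is the comment's assumption, not a measured figure):

```python
generations = 1.0e8    # the comment's figure: ~100 million generations
sec_per_gen = 6.0      # assumed compute time per simulated generation

total_seconds = generations * sec_per_gen    # 6.0e8 seconds
years = total_seconds / (365.25 * 24 * 3600) # ~19 years, under the 20-year bound
```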

Brute-force evolution isn't the only way to build strong AI. A program can
exhibit behavior that its author doesn't anticipate: I've written simple
programs that beat me easily at games such as Othello or FreeCell.
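As a toy illustration of that point (my own example, not the commenter's
Othello or FreeCell programs): a few lines of exhaustive game-tree search
play the game of Nim perfectly, even though no strategy is coded anywhere.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(heaps):
    """True if the player to move can force a win from these Nim heaps."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            rest = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not winning(rest):   # a move into a losing position wins
                return True
    return False                    # no moves left, or every move loses

def best_move(heaps):
    """Return a winning (heap index, stones to take), or None if losing."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            rest = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            if not winning(rest):
                return i, take
    return None
```

From heaps (3, 4, 5), the search finds "take 2 from the first heap", the
same move the classical xor strategy prescribes, yet nothing in the code
mentions xor: the strategy emerges from the search.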

Finally, once machines get smart enough to design other machines, progress in
this area may accelerate rapidly as we employ them in designing subsequent
generations.

I feel that strong AI may pose a significant risk to humans; consequently, we
should proceed with caution. Here is a thought experiment. If a chimpanzee
could be taught to drive, would you trust it to pick your kids up from school?
What sort of value judgements would it make in the case of an impending
emergency? Would you let an elephant babysit for you? Even if it was much
"smarter" than a normal elephant?

Strong AI will not be like us. It will learn and develop without a human body,
and it will not interact with the world and society as we do and may end up
being very foreign to us. Will it be sociopathic? Or will it be like whales:
intelligent but mysterious, perhaps spending all its time singing AI songs to
other AIs?

