

Why the future doesn't need us (2000) - cmargiol
http://archive.wired.com/wired/archive/8.04/joy.html

======
dm2
What is the end goal for humans?

Floating through space with near unlimited computing power seeding planets?

Will we keep our human bodies and enjoy the simplicity of living while
machines build us giant (on scales we can barely fathom) structures in space?

Will we have to teach machines the value of life and why they shouldn't take
it? It would be relatively simple to teach a machine about living. Humans
don't really have persistent memory when turned off; we only have RAM. If
you remove the power source, our RAM gets cleared and we no longer exist. I
think a computer could fairly easily understand that.

Will advanced AI machines or human/machine hybrids instantly understand that
war and violence are harmful and pointless?

What would the goals of an AI system be? To grow? To help humanity? To ensure
its own survival? To colonize other worlds? To experiment, invent, and build?

I'd be very interested in listening in on the AI meetings at Google that
Kurzweil is involved in; I'm sure they're fascinating.

I can't imagine having two brains hooked up to myself or having a secondary
computing device attached to my brain. Would it be information overload? Will
humans ever be able to near instantly learn things?

What happens when multiple human brains connect to the same network? Would we
be essentially one organism?

~~~
icebraining
It's funny that your explanation of death by analogy rests on the assumption
that the machines already understand how they themselves work. There's really
no reason why that should be the case.

~~~
dm2
My definition of AI was based on human-like intelligence. I see your point
that AI won't necessarily have human-like intelligence but could have
something completely different, something that my puny human brain can't even
comprehend.

Then again, if an AI system can't understand how it itself works, is it
really intelligent?

I don't know the answer to any of these questions. I'm just asking them in an
effort to help myself try to understand technology and AI.

Maybe simple curiosity and the ability to learn and retain data is the key to
AI.

Will the first generation of true AI be like a simple human child or a god-
like "being"?

Why even bother making pure AI systems when we are so close to BCIs? Why not
just take recently deceased people, keep the brain alive, and attach a
computer system to it?

~~~
jsmcgd
You've posed some interesting questions.

> Then again, if an AI system can't understand how it itself works, is it
> really intelligent?

This question is particularly interesting as it can be applied to us as well.

~~~
ajbetteridge
Indeed. If a human baby can't identify that its hands are in fact its own
hands, is it intelligent by any reasonable standard? I'd say no. So couldn't
this be how a machine AI begins its own process of self-awareness, then
proceeds to understand its own surroundings and its own sensors?

------
T-A
There is a crucial sentence in this old article: "But because of the recent
rapid and radical progress in molecular electronics - where individual atoms
and molecules replace lithographically drawn transistors - and related
nanoscale technologies, we should be able to meet or exceed the Moore's law
rate of progress for another 30 years."

As far as I can tell, this expectation has not proved correct. Instead, we are
getting talks like
[http://www.youtube.com/watch?v=JpgV6rCn5-g](http://www.youtube.com/watch?v=JpgV6rCn5-g)

------
Zigurd
There is an alternative possibility: That the human need to dominate,
procreate, etc. operates at a much lower level than the human intellect. That
human intelligence is a tool of our selfish genes, and not the master or
driver.

A superhuman machine intelligence would have no genes, no body, no fear of
biological death. It seems obvious that a key challenge will be to recognize
whether a machine is thinking at all, because it is extraordinarily unlikely
that it will think the way we do.

Our fear of super-intelligent machines may be a result of projecting human
psychology onto machines. It is not likely they will work that way.

~~~
randallsquared
You _should_ be afraid of any system which is potentially much more
intelligent than you and doesn't share your goals. In the event that machine
intelligence doesn't work like human minds at all, the chances seem much
higher that its goals will be at odds with ours, or even completely
incomprehensible or nonsensical to us.

~~~
steego
I would be afraid _if_ my goals were to procreate and perpetuate the human
race.

