
Will superintelligence emerge on the Web? - nreece
http://www.kurzweilai.net/articles/art0629.html?printable=1
======
pchristensen
I don't know that a superintelligence will emerge from the web, per se, but I
think it is becoming a sufficiently large body of textual information that an
appropriate AI program could use to bootstrap itself to amazing levels of
intelligence. AI programs have the problem that they must infer all their
patterns from text alone, which is fairly low-bandwidth. Humans (and robots)
get much more bandwidth through vision, hearing, touch, and body feedback.
This might not seem important, but it provides context around every piece of
information that comes in, making it much easier to make sense of things and
giving richer patterns to try to match against. Without that context, an AI
would need to multiply the amount of text needed by 1M (1B? 1T? X?) to train
itself the way a child does.
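The size of that multiplier can be sanity-checked with a quick back-of-envelope calculation. The figures below are loose assumptions, not measurements: roughly 4 words/s for reading and a commonly cited ~10 Mbit/s estimate for retinal output. Under those assumptions, vision alone outpaces text by four to five orders of magnitude:

```python
# Back-of-envelope: sensory bandwidth of a child vs. a text-only learner.
# Both constants below are rough assumptions, not measurements.

READING_BPS = 4 * 5 * 8      # ~4 words/s * ~5 chars/word * 8 bits/char = 160 bits/s
RETINA_BPS = 10_000_000      # ~10 Mbit/s, a commonly cited retinal estimate

multiplier = RETINA_BPS / READING_BPS
print(f"vision alone delivers roughly {multiplier:,.0f}x the bandwidth of text")
# ~62,500x, before even counting hearing, touch, and body feedback
```

That lands closer to 10^4 to 10^5 than 10^6, but it only counts one sense and ignores the contextual grounding the comment describes, so the true multiplier could plausibly be far larger.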

------
Hexstream
With all the lolmemes flying around on the net, I think the superintelligence
that would evolve from it would be superretarded.

------
bayareaguy
If it did, how would anyone know? Could it be already out there, desperately
trying to reach out to us but blocked out by our spam filters? Or would it
even want to? When was the last time you tried to have a meaningful
conversation with mitochondria?

------
bsaunder
I think it will... and I think it will blind-side the general populace when we
get there.

I actually think our hardware is already sufficient for this (obviously not
one computer, but a large network of them). I think it's mostly a software
problem at this point. I think a new generation of database products in the
works (with more flexible, dynamic objects) and the recent (last decade)
appreciation for statistical pattern recognition will take us a long way. I
think we are not much more than inefficient, redundant, statistical processing
systems.

------
MaDMvD
Let me begin by extending a large, rigid middle finger to those questioning
the eventual validity of such an amazing article. Having said that, I agree
with pchristensen in that the web itself will not simply become "self-aware",
but when an AI-capable system (or thing) gets a hold of such a pool of
information, it will feed itself a quantity of information that the human
brain could never understand. When this day comes, it will be the last step
towards limitless innovation and human/machine achievement. We are so close!

~~~
ojbyrne
Ray Kurzweil has been milking the AI teat for a very long time. He's a
futurist without much future.

------
mainsequence
It seems like most of you are defining the web to be separate from the people
using it. Doesn't the Machine include the hamsters making the wheels spin?

------
rms
Humans in the Information Age already seem pretty superintelligent to me.

------
andreyf
Isn't that what reddit is?

------
edu
No.

~~~
mpc
why?

~~~
edu
First, the web is only a system of hyperlinked documents that can be browsed
by a human or a machine. This means that the web is only _information_, and as
far as I can tell, information is not intelligence. A system capable of using
this information may be intelligent, but the information itself is not. So
maybe the _internet_ will evolve into a "superintelligence". But this leads us
to my second point.

From the verb used in the question, "emerge", it looks to me like this
"superintelligence" is reached without human intervention. But since
spontaneous generation does not exist, we would need a system capable of
evolving without external help. And as far as I know, neither the Internet
(the network) nor any of its nodes can currently evolve without human help.

So, until somebody invents an evolving device and connects it to the Internet,
my answer is no.

------
curi
The problem with _technology people_ making predictions about _intelligence_
is that they don't understand the most relevant field: epistemology. In particular,
you can't make very good predictions about AI before you have a very good
understanding of how learning and knowledge creation works.

You can tell someone doesn't know these things when they never bring it up,
despite it being critical to their claims.

One way this is relevant is that people talk a lot about raw computing power.
As if that were the bottleneck in humans. But the more critical issue, by far,
is irrationality. And they don't address whether AIs will have it or not. Nor
do they address how to "parent" an AI.

This particular article doesn't even mention traditions or memes.

~~~
icky
> The problem with technology people making predictions about intelligence is
> they don't understand the most relevant field: epistemology.

How do you know that you know that? ;-)

~~~
curi
heh. that is another reason it's important to be an expert on the subject: the
mainstream view is very confused.

the whole "knowledge is justified, true belief" thing is such a mess. they've
known induction doesn't work for ages, but haven't replaced it. good answers
to these things, on the other hand, are largely ignored. (hint: Popper)

