

The AI Revolution: Our Immortality or Extinction - adwn
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

======
adwn
I think that in the various scenarios the author depicts, there are too many
"then a miracle occurs" steps [0]. For example, for an ASI to develop, a
reasonably intelligent AGI is required first. But even getting to an AGI which
is as intelligent as your average human is very, very hard, and it doesn't
just spring to life from an advanced ANI on a fast processor.

The second step is even harder: Let's assume an AGI with an IQ of 100 exists,
then it's supposed to recursively improve itself within a short time. Well, so
far, humans have failed to improve themselves, and they had

1) a lot more time, think decades of neural research,

2) a lot more resources, especially lots of humans working on the problem,
exchanging ideas and knowledge,

3) many of whom are a lot more intelligent than an IQ of 100.

So yeah, AGI -> ASI won't happen within hours, days, or even years. Maybe
decades, if not longer.

[0] http://star.psy.ohio-state.edu/coglab/Pictures/miracle.gif

~~~
visakanv
I hope you're right! But I think it's important to prepare for the worst-case
scenario. Even if we had 200 IQ humans working at max cognitive capacity,
people still need to sleep, eat, poop. There are all sorts of limitations we
have as biological creatures - the bandwidth of our communication is limited
(this exchange takes us precious minutes, for example). It's difficult to
imagine things happening at breakneck speed, but it's not impossible. And I
think we should be prepared for that... We definitely don't want to
underestimate it and go "oops, we thought it would take decades"! This seems
to me like the most important thing in the world right now - or one of the
most important things - that we should all be talking about. We may only get
one shot at this...

------
pc2g4d
"There are no hard problems, only problems that are hard to a certain level of
intelligence. Move the smallest bit upwards [in level of intelligence], and
some problems will suddenly move from 'impossible' to 'obvious.' Move a
substantial degree upwards, and all of them will become obvious."

I think the utterer of this quotation needs to study up on computational
complexity theory.
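To make the point concrete (a toy sketch of my own, not anything from the article): brute-force 3SAT has to check up to 2^n truth assignments, so every extra variable doubles the search space. A smarter searcher can prune better, but under standard complexity assumptions (P ≠ NP) no level of intelligence makes the worst case "obvious". The clause instance below is made up for illustration.

```python
from itertools import product

def brute_force_3sat(clauses, n_vars):
    """Try every truth assignment. The search space is 2**n_vars,
    so each extra variable doubles the work, no matter who runs it.
    A clause is a tuple of literals: positive int k means x_k,
    negative -k means NOT x_k."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # exhausted all 2**n_vars assignments

# Toy instance: (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR NOT x3)
clauses = [(1, -2, 3), (-1, 2, -3)]
print(brute_force_3sat(clauses, 3))
```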

~~~
adwn
I'm not sure if you're joking, but I don't think Eliezer Yudkowsky intended
his quote to be taken quite so literally. He was most likely talking about
diseases and world hunger, not about 3SAT.

------
sethvoltz
Much like XKCD, Tim Urban's graphics really seem to hammer the point home with
a minimum of fuss.

