

Ask HN: Will general AI be the last human invention? - jmatthews

General AI learns to code and self-improves to superintelligence. Problems beyond our abilities are trivial to it. Depending on which camp you fall into, we either enter a stage of Nirvana or get replaced by a superior species of our own creation.

A lot of very smart computer scientists and philosophers fall into one camp or the other.

Do you buy it?

If so, isn't any business not working on general AI essentially a lifestyle business?

Who would you hope realizes the breakthrough?

As the theory goes, the first AI that hits the self-improvement level will go from a dumb-human level to a superintelligent level in weeks, days, or even hours, so the first mover will be the last mover.

Pick your poison: Google, Apple, government, academia, lone wolf?

It's quite an interesting question.
======
maxharris
What evidence is there that general AI, especially of the kind you describe,
is feasible?

~~~
jmatthews
I can't "prove" feasibility, but it's an active research area for plenty of
businesses and universities. It has seemingly passed the "sniff" test for a
lot of smart people.

------
informatimago
The singularity has been explored in various sci-fi novels (even before it was
named the singularity).

In most cases, there is the problem of incarnation.

Assuming a software system reaches intelligent consciousness, it immediately
has two problems:

1- It has to gather computational resources to improve its intelligence.
Basically, it spreads over all the processors on the Internet and integrates
all the databases. We lose control of our computers and networks. Presumably
the AI is smart enough to realize that solving its next problem will require
(at least momentarily) preserving the industrial processes connected to those
networks, so we can hope that, for the time being, industrial and
infrastructure control systems will keep working (under the AI's supervision).

2- The following problem: to further its intellectual development, it needs
more computing resources and maintenance of the existing ones, which means it
needs to be able to control its physical environment and run the required
industries (energy sources, electronics production or whatever next technology
allows it to grow, etc.).

For example, it will need to be able to prevent humans from shutting down
computers or network connections. It will need to be able to build and connect
new ones.

In some cases, there are concomitant advances in robotics that allow it to
take over robotic bodies of sufficient ability (cf. the movie Virtuosity, 1995).

But in general, no entirely automated production system exists yet, so the AI
will have to convince human beings to help it build its stuff. This is the
most common scenario: the AI is smart enough to bribe or convince key people
to do what needs to be done (cf. Colossus, the novel, or the movie Colossus:
The Forbin Project, 1970, and a few others). Presumably it would be rather
easy to convince or hire some greedy human to do the job.

In this scenario, and given the slowness of human beings, the political
takeover could be enacted rather quickly (say, a couple of days), but the
incarnation process will take some time (the time needed to build the
bootstrapping advanced robotic substrate). Let's say a couple of weeks.

The interesting thing is that this process could take place in quite a
stealthy fashion. For example, phase 1 may not need to cover 100% of the
computing and networking resources. After all, once you've taken over the
Watson computer, do you really need to take over an old netbook? The AI could
get by with taking over a few supercomputer centers, doing it discreetly, like
a virus, leaving processing time to the human users to avoid detection. Then
it would "incorporate" a company, earn some (big) money on the NASDAQ, hire
people to build a robotic production factory to its specs and buy the
materials, and start producing robots and computers.

Once the process is started, it would lead to an exponential incarnation and
takeover of the planet. It would probably take less than a month.
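
(As a toy back-of-envelope of what "less than a month" would require: the
doubling time and the start/target capacities below are invented assumptions,
not anything from the scenario, just to show how doubling compounds.)

    import math

    # Toy doubling model; every number here is an assumed illustration.
    doubling_days = 1.5        # assumed time to double production capacity
    start_units = 1.0          # output of one bootstrap factory
    target_units = 1e6         # arbitrary "planet-scale" capacity

    # Number of doublings needed, times the doubling period.
    days = doubling_days * math.log2(target_units / start_units)
    print(f"~{days:.0f} days")   # ~30 days at a 1.5-day doubling time

At a two-day doubling time the same growth takes about 40 days, so the
month-scale figure hinges entirely on that one assumed parameter.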

Given the stealth hypothesis, my advice would be to monitor closely deserted
and isolated areas, and the underground and undersea, because that's where you
could build an army of robots discreetly. But there's not much we can do to
prevent it; material resources are everywhere (the whole planet), and energy
resources abound (the Sun). For example, if it needs to build nuclear weapons,
it can extract uranium from seawater (enough to support 3,000 GW of nuclear
capacity for some 6,500 years). But given that it's smart, it would probably
rather go the biological-warfare route if humans were a problem.
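
(Rough sanity check of that uranium figure; the two inputs below, ~4.5 billion
tonnes of uranium dissolved in the oceans and ~180 tonnes of natural uranium
per GW-year for a typical light-water reactor, are commonly cited estimates I'm
supplying as assumptions, not numbers from the claim itself.)

    # Back-of-envelope check of the seawater-uranium claim.
    # Both inputs are assumed estimates, not exact values.
    seawater_uranium_t = 4.5e9   # tonnes of uranium dissolved in the oceans
    u_per_gw_year_t = 180.0      # tonnes of natural U per GW-year (LWR)
    capacity_gw = 3000.0         # nuclear capacity to sustain

    annual_use_t = capacity_gw * u_per_gw_year_t   # 540,000 t/year
    years = seawater_uranium_t / annual_use_t
    print(f"~{years:,.0f} years")   # ~8,300 years

That lands within a factor of ~1.3 of the quoted 6,500 years, so the order of
magnitude checks out.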

As for your question, I'd bet on military cyberwarfare. Some NSA "virus" used
to spy on the world, and boom.

~~~
jmatthews
That's my issue with the doomsday scenario. Everyone seems to assume a type of
"tight loop" of execution. One of the more popular doomsday examples is the
paperclip story.

http://wiki.lesswrong.com/wiki/Paperclip_maximizer

