

Bill Joy: Why the future doesn't need us (2000) - zengr
http://www.wired.com/wired/archive/8.04/joy_pr.html

======
bermanoid
Here's the problem with the "Let's just agree not to do this research!" plan
that everyone seems to suggest when they start thinking about existential
risks: when we're sitting around in 2030 with a million times more computing
power at our fingertips than we have today, constructing a workable AI just
isn't going to be that difficult of an engineering problem. We already know
the equations that we'd need to use to do general intelligence; it's just that
they're not computable with finite computing power, so we'd have to do some
approximations, and at present it's not realistic because the approximation
schemes we know of would work too slowly. Pump up our computer power a million
times and these schemes start to become a lot more realistic, especially with
some halfway decent pruning heuristics.
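
(For the record, the equation I have in mind is Hutter's AIXI definition,
which comes up again below; roughly, in LaTeX notation:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
           \bigl(r_k + \cdots + r_m\bigr)
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over all programs consistent
with the interaction history, \ell(q) is program length, and m is the horizon.
The sum over all programs is exactly the part that isn't computable and has to
be approximated.)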

It's bad enough that (IMO) by 2040 or so, any reasonably smart asshole in his
basement could probably do it on his laptop with access to only the reference
materials available _today_; I have no idea how you avoid that risk by making
some political agreement. Hell, ban the research altogether on pain of death,
and there's still going to be some terrorist team working on it somewhere (and
that's even if all the governments actually stop work on it, which they
won't).

The only positive way out of this is to go to great pains to figure out how to
design safe (friendly) AI, and to do so while it's still too difficult for
random-dude-with-a-botnet to achieve (and preferably we should do it before
the governments of the world see it as feasible enough to throw military
research dollars at). We need to tackle the problem while it's still a
difficult software problem, not a brute-force one that can be cracked by
better hardware.

~~~
arethuza
"We already know the equations that we'd need to use do general intelligence"

Not only am I pretty sure we don't know how to build a general intelligence,
I'm pretty sure that nobody really knows what kind of approach would be most
likely to succeed.

Having said that, I would love to be proved wrong on this one - so, as you
specifically say that the necessary techniques have already been published,
perhaps you could give the relevant references?

~~~
orangecat
There's an algorithm developed by Marcus Hutter called AIXI, which makes
provably optimal decisions. Unfortunately(?) it's also uncomputable, but
computable approximations exist, including a Monte Carlo variant:
<http://www.vetta.org/2009/09/monte-carlo-aixi/>. As the paper notes, it scales
extremely well; to get better results you just throw more computing power at
it.
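
A deliberately tiny, illustrative sketch of the idea follows; the toy model
set, fixed weights, and short horizon are my own assumptions, and the real
MC-AIXI-CTW agent uses context-tree weighting plus Monte Carlo tree search
rather than brute-force expectimax like this:

    # Toy expectimax sketch of an AIXI-style agent (illustrative only).
    # The hand-written "models" below stand in for the uncomputable
    # Solomonoff mixture over all computable environments.

    ACTIONS = [0, 1]  # hypothetical action set for the toy environment

    def expected_value(history, action, models, weights, horizon):
        """Expected total reward of `action`, averaged over the models."""
        if horizon == 0:
            return 0.0
        total = 0.0
        for model, w in zip(models, weights):
            for prob, obs, reward in model(history, action):
                future = best_value(history + [(action, obs)],
                                    models, weights, horizon - 1)
                total += w * prob * (reward + future)
        return total

    def best_value(history, models, weights, horizon):
        """Expectimax backup: value of the best action from this history."""
        return max(expected_value(history, a, models, weights, horizon)
                   for a in ACTIONS)

    def act(history, models, weights, horizon=3):
        """Pick the action with the highest expected future reward.  A real
        AIXI-style agent would also re-weight the models by how well they
        have predicted the history so far (the Bayesian half of the scheme)."""
        return max(ACTIONS,
                   key=lambda a: expected_value(history, a, models, weights,
                                                horizon))

    # Example: two candidate world-models that disagree about which action pays.
    model_a = lambda hist, a: [(1.0, "obs", 1.0 if a == 1 else 0.0)]
    model_b = lambda hist, a: [(1.0, "obs", 1.0 if a == 0 else 0.0)]
    print(act([], [model_a, model_b], [0.7, 0.3]))  # prints 1

The point of the sketch is only that nothing here is conceptually exotic; it
is just astronomically expensive, which is why more hardware matters.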

~~~
bermanoid
Indeed, AIXI is the algorithm I was referring to, and Monte Carlo AIXI is the
approximation.

As hugh3 mentioned in a sibling comment
(<http://news.ycombinator.com/item?id=2479211>), 'making "optimal decisions"
in some defined state space where the quality of various options is evaluable
is a really different problem to general intelligence'. While I definitely
agree with this statement to some extent (namely, a powerful MCAIXI setup is
not necessarily going to display any intelligence that's remotely human, at
least without a lot of other stuff going on in the system), the concerning
thing is that it should almost certainly be enough to get a system reasoning
about its own design, since its code _is_ a well-defined state space where
quality is evaluable (depending on how the programmer decides to have it
evaluate quality).

To end up with a dangerous runaway "AI" on our hands, we don't _need_ AI that
we'd consider intelligent or useful. All it takes for a runaway is an AI that
is good at improving itself, working effectively at optimizing a metric that
approximates "get better at improving yourself". AIXI approximations should be
plenty powerful to do this with the amount of computing power we'll have in
~20 or 30 years (at the very least, there's a big enough chance that we
_really_ have to take it seriously).
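
To make "a metric that approximates 'get better at improving yourself'"
concrete, here is a deliberately trivial sketch; everything in it (the names,
the benchmark, the one-parameter "program") is my own illustration rather than
anyone's actual proposal:

    import random

    # Toy feedback loop: the "program" is just a mutation step size, and its
    # score is how well it optimizes a benchmark.  Because the current program
    # also generates the proposals for its own successor, improvements
    # compound; that's the shape of a runaway, minus the danger.

    def benchmark(step, task=lambda x: -(x - 3.0) ** 2, iters=50):
        """Score a candidate 'program' by how well it hill-climbs a fixed
        task. Higher is better."""
        x = 0.0
        for _ in range(iters):
            candidate = x + random.gauss(0, step)
            if task(candidate) > task(x):
                x = candidate
        return task(x)

    def self_improve(generations=30):
        """Repeatedly use the current program to propose a modified copy of
        itself, keeping the copy only if it scores better."""
        current = 0.01
        for _ in range(generations):
            proposal = abs(current + random.gauss(0, current)) + 1e-6
            if benchmark(proposal) > benchmark(current):
                current = proposal
        return current

    print(self_improve())  # typically drifts toward a much larger step size

The danger isn't this toy, obviously; it's the same loop run over actual code
with a much more capable search in the middle.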

This is one of the reasons Eliezer Yudkowsky is so keen on extending decision
theory, so that we can get some idea of what we should actually be trying to
approximate in order to have a decent shot at doing self-improvement safely.

The best way to sum up my concern is that (unboundedly) self-improving
programs make up a tiny fraction of program-space that we can't quite hit with
today's technology. Of that sliver of program space, there's a _much_ smaller
sliver that contains "programs that won't kill us." There's another sliver
that contains "programs that have useful side effects". We need to make sure
that the first "AI" that we create lies in the minuscule intersection,
"self-improving programs that do something useful [1] and won't kill us", and
that's a terrifyingly small target to shoot at, so we had better work
strenuously to make sure that once it's feasible to create any of these
programs, our aim is good enough to hit the safe and useful ones.

[1] We need to find self improving programs that are useful early on because
we'll need to use them as our "shield" against any malicious self-improvers
that will inevitably be developed later. There's a significant first-mover
advantage in AI, and even a small head start would probably make it difficult
or impossible for a second AI to become a global threat if the first AI didn't
want to allow it.

------
zengr
Looking for a summary? Read this wiki entry:
<http://en.wikipedia.org/wiki/Why_the_future_doesn%27t_need_us>

~~~
FrojoS
Thanks! Very interesting. From the WP article:

"Martin Ford author of The Lights in the Tunnel: Automation, Accelerating
Technology and the Economy of the Future [6] makes the case that the risk
posed by accelerating technology may be primarily economic in nature. Ford
argues that before technology reaches the point where it represents a physical
existential threat, it will become possible to automate nearly all routine and
repetitive jobs in the economy."

I find this quite likely too, at least on short or medium time scales. There
will always be demand for highly skilled humans, maybe even after the machines
are the new bosses. But if we don't make groundbreaking progress in human
learning techniques, most people will have trouble learning these skills fast
enough.

He then writes:

"In the absence of a major reform to the capitalist system, this could result
in massive unemployment, plunging consumer spending and confidence, and an
economic crisis potentially even more severe than the Great Depression. If
such a crisis were to occur, subsequent technological progress would
dramatically slow because there would be insufficient incentive to invest in
innovation."

This sounds somewhat plausible, but I don't believe it. The trend is towards
highly profitable mega-corporations, while governments, and with them most
people, become less powerful in economic terms. I think we can already see
this effect very well. So there won't necessarily be a recession, as long as
the rich find ways to spend their money - like flying to Mars.

------
atlei
Even the most trivial computer programs have lots of bugs (with VERY few
exceptions [1]), and we're worrying about creating a super-brain that is
actually _smarter_ than we are ourselves?

And let's not forget the debugging, which is TWICE as hard as the coding ;-)

We may be able to simulate the hardware of the brain (using "biological
hardware"), but the difficulty of programming the AI software is probably
greatly underestimated...

[1] Some of the NASA software is probably as close to bug-free as we get, and
check the required amount of planning, documentation and testing compared to
the amount of actual code produced:
<http://www.fastcompany.com/magazine/06/writestuff.html>

~~~
robertk
The human brain has far more bugs than any current piece of software.

The actual program will be small. The brain is clearly a learning agent in a
task environment. The sensors and actuators are implementable; the only real
question is what algorithm should process the input stimuli.
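
As a skeleton, that framing looks something like the loop below (all names
and the toy environment are my own, purely illustrative); the entire open
problem lives in `policy`:

    import random

    class ToyEnvironment:
        """Stand-in 'task environment': reward 1 for guessing its hidden bit."""
        def __init__(self):
            self.hidden = random.choice([0, 1])
        def reset(self):
            return 0  # initial, uninformative observation
        def step(self, action):
            reward = 1 if action == self.hidden else 0
            return reward, reward  # observation happens to equal the reward here

    def run_agent(environment, policy, steps=100):
        """Generic perceive-decide-act loop."""
        history = []
        observation = environment.reset()
        for _ in range(steps):
            action = policy(observation, history)           # the hard part
            observation, reward = environment.step(action)  # actuators + sensors
            history.append((observation, action, reward))   # material for learning
        return history

    # A placeholder policy: repeat whatever last paid off, otherwise guess.
    def naive_policy(observation, history):
        rewarded = [a for (_, a, r) in history if r > 0]
        return rewarded[-1] if rewarded else random.choice([0, 1])

    total = sum(r for (_, _, r) in run_agent(ToyEnvironment(), naive_policy))
    print(total)  # close to 100 once the hidden bit has been found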

------
jcfrei
I think a relatively simple solution to this is suggested in the first
paragraphs. We will more or less 'merge' our minds with computers, just to a
further extent than we already do now. Nowadays a computer is merely a tool
that helps us keep in touch with relatives, visualize ideas, and calculate
stuff, but this bond will probably become much more intense in the future,
where whole subroutines of our thinking will rely on artificial machines. This
might only seem like a threat considering our 21st-century morality - but I
think it will become widely accepted in the next century.

~~~
gmaslov
The future must be now. At least one of my subroutines of thinking already
relies on an artificial machine. I call it my Google neuron; it's wired up
directly to everything I don't know off the top of my head and fires whenever
I feel unsure about something. Well, the latency is still a bit high, but I'm
sure someone is working on that problem. ;)

------
FrojoS
ATTENTION: Book spoiler below!

SF author Vernor Vinge, who introduced the term "singularity", tried to come
up with an idea for preventing this and other "out of the kid's basement"
lethal threats to humanity in his latest book, Rainbows End. The "solution" in
the book, though, is to put all of humanity under mind control.

------
tybris
Humanity may be doomed if it keeps innovating, but it's most certainly doomed
if it stops.

~~~
FrojoS
I second this!

