

Artificial intelligence: machine v man - lintroller
http://www.ft.com/intl/cms/s/2/abc942cc-5fb3-11e4-8c27-00144feabdc0.html#axzz3HsQK8b70

======
Animats
_Strictly speaking, according to Bostrom, the kind of machine-based
intelligence that is heading humanity’s way wouldn’t wish its makers harm. But
it would be so intent on its own goals that it could end up crushing mankind
without a thought, like a human stepping carelessly on an ant._

Like corporations.

What happens when computers get good at management? A network of computers may
be able to outperform human managers. Even if they're not quite as smart as
the smarter humans, a network of computers can coordinate better than a
meeting of people. Once computer-run companies start producing better returns
than human-run ones, the computer-run ones will dominate. That's basic
capitalism.

This doesn't imply that the organization is entirely automated. It just means
humans aren't at the top. If it produces more profits, companies will be
forced by investors to take that route.

~~~
robotkilla
I suspect that at some point (left without regulation) computers and machines
will even replace programmers, which is pretty much what the article is
saying; that would be one of the low-level steps.

It seems that people think either a utopia ensues or it's our extinction
event – I don't understand why we think everything will spiral out of our
control, though. We still have the power – we don't have to turn it over to the
robots now or ever.

~~~
hedges
>We still have the power – we don't have to turn it over to the robots now or
ever.

In theory, but can we control ourselves? There are a lot of financial
incentives to develop better machines and algorithms. We couldn't stop
nuclear weapons or global warming, and AI is far more attractive and powerful
to businesses and governments than either of those. Not to mention that
nuclear weapons and global warming are harmful in a very easily understood
way, whereas AI might be harmful in very strange ways. It's like a pack of
wolves left alone with a poisoned steak. It's a delicious steak, and the
poison is somewhat beyond their understanding; some wolves even think there's
no poison at all. Isolating the poison from the steak is as difficult for the
wolves as making safe AI is for us.

You can imagine how incredibly valuable an AI capable of doing a programmer's
work would be. It's a technology far off, but not implausible.

But as soon as we reach that point, it seems unlikely that things won't spiral
out of control. Imagine a thousand highly intelligent programmers who are
capable of research, think fluently in statistics, and cooperate perfectly.
Additionally, these programmers can examine and modify how their own brains
work, boosting their performance and removing their errors. With the press of
a button, they can also create more copies of themselves.

Everything might spiral out of control.

How do you make sure things don't spiral out of control?

~~~
robotkilla
International laws and regulations as well as doomsday scenario planning that
is surely already underway by the US government.

I would argue that we actually have stopped nuclear weapons thus far (not
their proliferation, but their usage), as well as mass global terrorism and
mass extinction from disease; the list goes on.

I agree that AI is a threat – I'm legitimately worried that an AI capable of
doing a programmer's work isn't all that far off. I see things like Mozilla's
Webmaker popping up, and it's obvious that the process will become more
streamlined and automated as time goes on. I also don't think that all sense
and reason goes out the window – we will come up with a way to solve it like
we solve everything else: boring laws.

We should be more worried about the human beings who will no doubt use the
new wave of machines for their own ill ends. I'm sure what the NSA has right
now will be laughable compared to what it will have in 10 years.

------
monochr
I always find it funny that the only people who think computers will overtake
humans are the ones who don't deal with them at a basic level every day.

"It will get so smart and so capable that it will destroy us" is somewhat hard
to believe when you realize that these types of AI will likely be so stupid
about the outside world that they would trip over the power cord and stop the
apocalypse themselves.

~~~
nshepperd
That's untrue. I program computers for a living; I am well aware of how
unreliable everyday software is. But to generalise from that to "no-one will
ever write an existentially dangerous AI" is wishful thinking.

~~~
monochr
Stop and think about what is needed for a human army to be effective. You need
everyone from nuclear physicists to cooks to keep things running. Each of
those people is first and foremost the result of 4 billion years of evolution,
which solved the hard problems of visual object detection and body
calibration. Yet for each of the hundreds of thousands of such people you
still needed parents working full time to keep them from killing themselves
between the ages of 2 and 10, and then 300 years' worth of culture to get them
to the point where they were useful for anything at all.

There is a world of difference between being able to build an AI that can do
one thing, and an AI that can do everything a person can.

------
barbudorojo
There is an epistemological problem. If you want to program the machine to
respect that man is the most important animal on this planet, you can't base
that only on intelligence, because then the machine can correctly deduce that
once it becomes more intelligent than us, it should occupy the throne, and man
would then be merely a cherished animal (a dog, a sheep, a monkey?). We need a
Turing test for any program intended to control such a supercomputer: it
should be required that the program logically deduce the Great Axiom – man is
the top animal of this planet. If the boot program is not able to prove the
Great Axiom, the machine could proceed to self-destruct.

If there isn't any way to construct a logical system in which man is the most
important animal on this planet, then we are doomed to be dominated by the
machines, because the throne we justify by our intelligence would then be the
machines' justification for taking the lead and keeping us as their dogs or
sheep.

------
robotkilla
It's fun to think about Terminator-like disaster scenarios, but isn't the
solution as simple as: don't let the AI leak out? If we can contain humanity-
ending viruses in labs, then surely we can contain an exponentially growing
and potentially humanity-ending AI.

Am I missing something? Is the plan to actually create robots that can
replicate and grow at exponential rates and turn them loose on the environment?

AI will be regulated just like everything else.

~~~
sriku
Read about the "AI Box experiment":

[http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/](http://lesswrong.com/lw/up/shut_up_and_do_the_impossible/)

[http://rationalwiki.org/wiki/AI-box_experiment](http://rationalwiki.org/wiki/AI-box_experiment)

~~~
robotkilla
Interesting – but doesn't this start with the assumption that there is some
combination of words or actions the AI could employ that would work on a
human? What if no such combination exists?

I feel that the bigger and more immediate threat is the misuse of AI by human
beings against other human beings. Governments are already abusing it.

------
islon
"... And with the accelerating pace of technological change, it wouldn’t be
long before the capabilities – and goals – of the computers would far surpass
human understanding."

"...In their single-mindedness, they would view their biological creators as
mere collections of matter..."

These two sentences are contradictory: if the machines' goals far surpass
human understanding, the author can't also claim to know that they would view
their creators as mere collections of matter.

------
barbudorojo
What I would find difficult to explain (or program) to a machine is that human
rights end at certain borders. One meter on one side of the border, your life
is worth next to nothing; one meter on the other side, it is of the utmost
importance. I wonder where the machine would draw the border if one day it had
to assess the value of our lives.

------
tomrod
David Brin (you may remember him as the author of "The Postman", though not
the screenwriter of the movie) has a fantastic novel on this. [0]

[0] [http://www.amazon.com/Existence-David-Brin-ebook/dp/B0079XPMQS](http://www.amazon.com/Existence-David-Brin-ebook/dp/B0079XPMQS)

------
mkagenius
Wouldn't they (the AIs) make the same mistakes and create a superior race of
their own, and so on?

AI seems like evolution to me – only if we put our DNA into the AI would
everyone be happy, I guess?

I mean, children do not kill their parents even though they are more
intelligent (evolution-wise).

~~~
sriku
> I mean, children do not kill their parents even though they are more
> intelligent (evolution-wise).

When it comes to the issue of power, there's plenty of this happening in
history – e.g. the Mughal empire, where sons deposed and imprisoned their
fathers for the throne.

