
1,000 tech experts warn against AI arms race - callum85
http://www.bbc.co.uk/news/technology-33686581
======
simi_
Because the author was incapable of linking to Stephen Hawking's AMA, here it
is:
[https://www.reddit.com/r/science/comments/3eret9/science_ama...](https://www.reddit.com/r/science/comments/3eret9/science_ama_series_i_am_stephen_hawking/)

------
seynb
I think these people have read too much science fiction.

>First we had the legs race. Then we had the arms race. Now we're going to
have the brain race. And, if we're lucky, the final stage will be the human
race.

>John Brunner, The Shockwave Rider (1975), Bk. 1, Ch. "The Number You Have
Reached"

Great book, though.

------
callum85
I don't see how a ban could work. We already have technology that can identify
targets, so the issue is whether a human is involved to approve each kill
decision. But what constitutes human approval? Is it enough for a human to sit
and watch target details flashing up on a screen, and intervene if they see an
incorrect target?

What exactly would be banned?

~~~
pjc50
This is something of a problem. The system which scans mobile phone metadata
(among other things) and turns it into drone bombing targets is already enough
like the latter. A human is involved in pulling the trigger, but really
they're following orders rather than having made the target/not target
decision. And they may not even have the security clearance for the
information on which the decision is based.

~~~
bitJericho
I don't think that's so much a problem. The problem is literally robots
pulling the trigger. What happens when the president is targeted due to
error/hacking? A member of the military would have a major problem with
assassination of her own leader; a robot may not care. What happens if
everybody becomes a target? The military personnel would figure things out
real quick; a robot, again, may not care.

~~~
TheOtherHobbes
A member of the military might not have any problem at all with the
"accidental" assassination of their leader, whether that leader is the
president or the company sergeant. AI could easily provide plausible
deniability.

I've been very critical of claims of the dangers of AI, but this isn't about
AI - this is about giving lethal weapons to autonomous systems.

And _that's_ just stupid. All software has bugs, and even if you don't have a
problem with autonomous killing systems - I certainly do - it's incredibly
irresponsible to build systems that could kill just because someone left out a
semicolon or couldn't convert between miles and km.

It's taking the "blue screen of death" experience just a little too literally.

~~~
bryondowd
Umm, we already have machines that can kill people if someone "left out a
semicolon". Airplane autopilots, medical equipment, car computers, etc. Not
saying this makes it OK, but that isn't the reason. Life-critical code isn't a
new thing.

~~~
jaawn
And those systems all have humans operating in conjunction with them; none of
them is (currently) autonomous. The success of life-critical code thus far
usually depends partly on the assistance of humans.

~~~
bryondowd
Most of those systems operate on their own unless/until they hit something
they can't handle and a human is standing by as failover. So, unless you're
saying it's OK to have deathbots as long as a human is on hand as failover if
the deathbot gets confused, I think my point stands.

~~~
TheOtherHobbes
IMO, the current state of the F-35 rather suggests it doesn't.

------
ghshephard
I would think it better that we have weapon systems that intelligently make an
assessment as to whether their target is (A) a combatant, (B) a threat, (C)
surrendered, etc... before engaging and killing them.

The alternative is the indiscriminate death that we see in mass bombings,
minefields, artillery strikes, and drone strikes.

~~~
pmelendez
> I would think it better that we have weapon systems that intelligently make
> an assessment as to whether their target is (A) a combatant, (B) a threat,
> (C) surrendered, etc...

The problem is that those intelligent systems will be overlorded by the same
people overlording the actual military.

At least in our current situation a human has to pull the trigger, and that
person could potentially be charged with crimes against humanity, which would
make them think twice and potentially resist a direct order. You won't have
that with a machine.

~~~
fauigerzigerk
That's true, but the scenario you are thinking of is one in which the higher
ups order a massacre and the soldier on the ground refuses to carry out the
order. I believe that the more frequent case is when soldiers on the ground
are having to make split second life and death decisions, quite possibly
panicking themselves. If they get it wrong, they're dead.

An AI can make split second decisions without panicking and without any
consideration for its own "survival". For any mistakes an AI makes, those
higher up will get the blame (ideally), not some 20 year old who was scared
for his own life when he made the decision to kill everyone on that overloaded
truck because it wouldn't stop 10 seconds earlier.

That said, I shudder at the thought of a world in which people get killed by
machines that will never be whistleblowers, that never go home with
post-traumatic stress disorder, telling everyone how horrible war really is.

~~~
pmelendez
> I believe that the more frequent case is when soldiers on the ground are
> having to make split second life and death decisions

That's fair, but I think you are assuming that the "good guys" are the only
ones who would have access to this. The truth is that once one government
starts doing this, all the others will follow, and even organized crime would
have access to the technology.

~~~
fauigerzigerk
Even if both sides use machines, that doesn't make it any worse than both
sides using humans; quite the contrary. But as I said, the wider consequences
are a different matter.

~~~
pmelendez
I think I didn't explain myself clearly... What I meant is that once one side
has it, it will spread beyond conventional warfare pretty quickly.

You are thinking of Gov. A vs. Gov. B, both using machines. My concern is that
after that, Gov. B will use machines against humans C inside the same country,
or in a neighboring one. And in those cases, "technical errors" can be used as
an excuse after a tragedy. And that's only one of my concerns... add organized
crime and terrorism to the mix and you have a very explosive soup.

~~~
fauigerzigerk
I think your assumptions are very realistic and I share your concerns. But
looking back at the history of war or combat, I don't feel that human nature
has been a mitigating factor. On the contrary.

------
Shivetya
Their goal should be how to circumvent it or protect against it, because no
government is truly going to give it up. As the technology progresses, more
and more decisions will be removed from people, to the point where what we
think the line is today will be just ho-hum by then.

~~~
tajen
Worse: by signing this protest, there's every chance that AI will be banned
for citizens. But it will certainly remain accessible to governments, while
making it legitimate to squash any citizen who attempts to build a counter-
power.

~~~
shard
Even worse: the coming AI overlords can use this list to pick the first people
to target for their "re-education program".

------
coldtea
Naifs, warning against a far-fetched Californian fantasy.

Meanwhile, few warn against widespread surveillance, the repercussions of the
use of drones, etc.

~~~
eli_gottlieb
Everyone's been warning against widespread surveillance. The entire antiwar
movement opposes drones. Where have you been?

~~~
coldtea
In the land where there are no "1,000 tech experts" warning about them in
mainstream media, and no rich moguls like Kurzweil and Musk getting
interviewed every second week on the matter...

------
josephmx
Why is Stephen Hawking treated like a tech expert?

~~~
jacquesm
Because when a man that smart starts to think about fields outside his direct
expertise it tends to be worth listening to.

That and media have never ever exaggerated anything they printed.

~~~
tedunangst
It's weird. When we like what they say, it's "this guy is really smart. You
should listen." When we don't like what they say, it's "experts should stick
to what they know. Don't be fooled by appeal to authority."

~~~
jsutton
In some cases, I think you have to give credence to opinions from very
reputable people. Stephen Hawking, for instance, is one of the greatest minds
in the world; his word isn't gospel, but it's damn worth listening to.

------
wepple
This should be read as "race for armed AI", not "an arms race toward the goal
of AI". Very sneakily worded.

~~~
oneJob
agreed, and this is the whole reason the argument is academic. strong ai, once
developed, will be the ultimate dual-use technology. one must not only deny a
strong ai access to arms, one must also deny it access to anything/anyone that
might help it become freed from the constraints preventing it from obtaining,
using, or directing the use of arms. this is essentially the super well known
thought experiment fleshed out in Ex Machina. only one solution: ban ALL
strong ai. good luck with that.

------
dkx
I fear the AI arms race may be inevitable. Even if all the nations could agree
to place limits on AI research, there will always be a huge incentive to
develop something in secret.

------
freddealmeida
Very few ML experts on that list, I'm sure. But at the same time, I'm rather
against autonomous weapon systems. Yet I'm not so naive as to think that
modern armies will ignore the benefits of machine intelligence.

If you think about it, the US has not even ratified the Comprehensive
Nuclear-Test-Ban Treaty. I doubt it will ever consider an AI weapons ban.

------
lectrick
This is not only overblown, it is misguided. All banning does is make people
continue in secret. In any event, research in that area will produce knowledge
that is both useful for non-weapon purposes as well as weaponizable... The
same as all knowledge that has ever existed.

------
CmonDev
I don't understand what kind of AI/robotics we are talking about here:

\- weak non-self-replicating;

\- weak self-replicating;

\- strong.

Besides, maybe it's an egoistic point of view. Many species have perished
while humanity established itself. Does it really matter if we go extinct, if
the result is going to be a superior species?

~~~
coldtea
> _Besides, maybe it's an egoistic point of view. Many species have perished
> while humanity established itself. Does it really matter if we go extinct,
> if the result is going to be a superior species?_

In general, not wanting to die or go extinct is not considered an "egoistic
point of view".

Or, let's put it this way, from all the egoistic points of view, it's the most
excusable.

Why the duck should we care about a "superior species" (and a mechanical one
at that)?

Would you let a "superior country" fuck up your own country?

Would you let a "superior person" kill your family and use your resources to
sustain themselves?

Is it OK to drive dolphins and lions extinct, since we are a "superior species"?

Even more so since "superior" in this context has nothing to do with "morally
better" but just means "more powerful" and "more fit to overtake others and
survive".

------
andrewstuart
Just makes governments want it more.

------
winestock
For a humorous take on the issue, see

[http://www.supportkillerrobots.org/](http://www.supportkillerrobots.org/)

Shameless plug: I wrote it on a lark.

~~~
jackweirdy
Relatedly:

[http://stoprobotabuse.com/](http://stoprobotabuse.com/)

------
Qantourisc
Could we at least use non-lethal weapons on humans?

