
Deep learning pioneer Yoshua Bengio is worried about AI’s future - jonbaer
https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future/
======
nabla9
"It should be noted that no ethically-trained software engineer would ever
consent to write a `DestroyBaghdad' procedure. Basic professional ethics would
instead require him to write a `DestroyCity' procedure, to which `Baghdad'
could be given as a parameter."

\-- Nathaniel S. Borenstein,
[https://en.wikipedia.org/wiki/Nathaniel_Borenstein](https://en.wikipedia.org/wiki/Nathaniel_Borenstein)
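
The structure of Borenstein's joke can be sketched in a couple of lines (the function name and behavior here are purely illustrative):

```python
# The "ethically engineered" version: generic and parameterized.
def destroy_city(city: str) -> str:
    """Good engineering practice says nothing about what the code does."""
    return f"Destroying {city}"

# The engineer never writes a destroy_baghdad() procedure, yet the
# same outcome is exactly one argument away:
order = destroy_city("Baghdad")
```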

~~~
Dirlewanger
deleted

~~~
tim333
I think Borenstein's thing was a joke about people not being very ethical.

~~~
Dirlewanger
Right over my head...

------
killjoywashere
> I don’t completely trust military organizations, because they tend to put
> duty before morality

Ah, first, a duty flows from a moral position, so perhaps the master misspoke.
Second, I submit you can strongly trust military organizations to do their
duty. They will stop at nothing to kill their enemies with extreme prejudice.
And the people who lead military organizations, in most cases, have thought
quite long and hard about their moral positions. It is the nature of the
business, when sending people to die, to question everything about yourself.

Now, that doesn't mean war is a civilized affair. It is awful, insane. And I
frankly don't know a lot of senior military leaders who have ever been eager
to go to war, except in cases where they believed the operation truly met the
standards of jus ad bellum. What you often find in histories are senior
officers getting replaced when their advice to not enter war is not in line
with the positions of their civilian leadership.

~~~
solveit
A more charitable reading is that military organizations are structured so
that everyone but the very top leadership follow 'duty', i.e. orders, without
thinking about the morality of their orders (I know that in principle they're
supposed to refuse unlawful orders. In practice, we have fairly mixed results
on that front. Not to mention the fact that a great deal of barbarism can be
perfectly lawful in war.).

But as anyone who's been in an organization of more than ten people can
attest, what the leadership says it's doing, what the leadership thinks it's
doing, and what the organization is actually doing are three very different
things. In most organizations the mismatch usually just ends up in money
leaking everywhere and stupid things being built. In militaries, all sorts of
atrocities can happen, and this is amplified _because_ the boots on the ground
and the cogs in the machine are trained to not think too hard about the ethics
of their actions. So I think that quote is a perfectly reasonable take on the
matter.

~~~
killjoywashere
> are trained to not think too hard about the ethics of their actions.

My experience has been that a great many officers think very hard about the
ethics of their actions. War is mainly vast expanses of boredom punctuated by
short periods of terror. And thinking is about the only thing left to do in
those vast expanses.

~~~
PavlovsCat
> War is mainly vast expanses of boredom punctuated by short periods of
> terror. And thinking is about the only thing left to do in those vast
> expanses.

Hardly.

[https://www.youtube.com/watch?v=tixOyiR8B-8](https://www.youtube.com/watch?v=tixOyiR8B-8)

> On the third day I was there, this guy who had picked me up in the Jeep, a
> corporal who I was ultimately going to replace, he and I were in the
> battalion intelligence section, we were sent down to the tractor park, the
> amphibious tractor park to meet a bunch of detainees. It was our
> responsibility to take care of prisoners, and detainees were a classification
> of civilians, they were not combatants; they could be detained for
> questioning, which is why they were called detainees.

> And Jimmy and I went down to the tractor park and two tractors came in, they
> had a whole bunch of Vietnamese up on top high flat-topped vehicles about
> eight or nine feet tall, and as the tractors wheeled into the park the
> Marines up on top immediately began hurling these people off, and they were
> bound hand and foot, so they had no way of breaking their falls, and they
> were old men, women, children, no young men, and I couldn't believe these
> guys were treating these people this way, and I turned to Jimmy and said, I
> grabbed him by the arm and said "What are those guys doing? We're supposed
> to be helping these people." And Jimmy turned to me and he looked at my
> hands on his arm, I sort of took them off, and he said "Ehrhart, you better
> keep your mouth shut until you know what's going on around here." I think it
> was at that point that I realized things were not quite what I was
> expecting.

> [..]

> None of that distilled itself into the clear kind of expression that I'm
> presenting now. What I began to understand within days and which became
> patently clear within months was that what was going on here was not what I
> had been told, what was going on here was nuts, and I wanted to get out. I
> knew if I was still alive on March the 5th 1968 they'd stick me on an
> airplane in Danang we used to call it the freedom bird and I could fly away
> and forget the whole thing. Turned out not to be quite so easy to forget it,
> but that was the notion, and certainly my last eight to nine months I ceased
> to think, I quite literally ceased to think about why I was there, or what I
> was doing. The sole purpose for my being in Vietnam at that point was to
> stay alive until I could get out.

> And the reason for that is, you know, the kinds of questions that began to
> present themselves were just.. the questions themselves were ugly and I
> didn't want to know the answers. It's like banging on a door, you knock on a
> door, and the door opens slightly and behind that door it's dark and there's
> loud noises coming like there's wild animals in there or something. And you
> peer into the darkness and you can't see what's there but you can hear all this
> ugly stuff.. do you want to step into that room? No way, you just sorta back
> out quietly, pull the door shut behind you, and walk away from it. And
> that's what was going on, those questions, the questions themselves were too
> ugly to even ask, let alone try to deal with the answers.

~~~
quaice
How does this refute the sentence you quoted?

~~~
PavlovsCat
Thinking about what one is doing is _not_ the only thing that is possible.
It's perfectly possible to think about it in ways that further remove oneself
from one's actions (scapegoats, rationalizations, etc.), and it's possible to
not think about it at all.

That in turn does away with the implied claim that we can "trust" people to
think about what they're doing on the grounds that it's the only thing they
can even do.

It's like saying the only thing you can do when you're in a gang is to
consider your actions and discuss them with others, so just from first
principles (pulled from thin air) we can make this deduction about gang life.

I mean, the context of this is taking issue with the statement that a person
doesn't "completely trust" military organizations, that's bad enough. But the
claim that being bored a lot between periods of terror means people genuinely
reflect, that's stunning.

------
eksemplar
One of the first cyberpunk books I read opens with a drone hit on some
European businessman at a company resort.

It seems like we’re hell bent on making this reality. I mean, military AI is
basically an inevitable future for us at this point, and while it’ll probably
take a few years to leak into private hands, it eventually will, and the world
will be a little more shitty for it.

Aside from that, I'd rue the day the Americans get AI murder drones, especially
if I were living in one of the 7 or 9 countries they are currently
assassinating people in with their current drone fleet. As terrible as an
assassination drone is, it's at least controlled by a supposedly moral being.

~~~
Joe-Z
Meh, maybe it's not that bad. Sure, you'll get the occasional psycho trying to
kill his pop-star stalker-victim. But for many high-profile people -
politicians, mega-corp CEOs, media tycoons - it might be a good way of keeping
them in line, knowing 'the people' can finally strike back again. For most
normal/low-profile people this won't be a problem... since most of us don't
have arch-enemies.

(Playing devil's advocate here, so please take this with a grain of salt ;))

~~~
bendoernberg
Sounds like the "Assassination Market" theory of decentralizing power from the
1990s.

------
netcan
Seems like the interviewer focuses a lot on juicy questions where Bengio does
not have highly developed answers. (Q: _AI for War?_ A: _Bad_ )

Where his opinions are more interesting is in the last questions, outside of
the article's agenda of "stuff to freak out about."

Asked what new progress areas he's excited about, Bengio (charmingly) responds
with slow progress areas he's frustrated with.

Anyone know (or can guess) if he's referring to anything specific when he
mentions (I'm connecting dots) using deep learning to "learn causality"?

~~~
Voloskaya
One of the related projects he mentioned being very interested in is BabyAI,
which is trying to improve grounded language learning, i.e. incorporating
world knowledge into NLP:

* [https://arxiv.org/abs/1810.08272](https://arxiv.org/abs/1810.08272)

* [https://github.com/mila-udem/babyai](https://github.com/mila-udem/babyai)

------
joe_the_user
He's talking mostly about military technology. I'd agree that killer robots
would be horrible - for their potential to make military action easier, to
make it even easier for a single madman to launch a war, and so forth.

I don't think "real" military AI is that close, because things on the
battlefield have to be robust and current AI seems to be inherently fragile -
not always reliable, and less reliable in chaotic situations.

But semi-military applications like deciding who a drone will kill have
potential ... to do even more harm than drones have already done.

~~~
atupis
That semi-military approach is already happening
[https://www.theguardian.com/science/the-lay-
scientist/2016/f...](https://www.theguardian.com/science/the-lay-
scientist/2016/feb/18/has-a-rampaging-ai-algorithm-really-killed-thousands-in-
pakistan)

~~~
EGreg
Is that drone carrying out kills with no humans in the loop?

~~~
KineticLensman
No. Predator / Reaper drones are remotely piloted via satcom link by a crew
of two or three, typically pilot / sensor operator / weapon operator. They are
not autonomous, with the exception of autopilot functions. The crew are
typically embedded in a larger HQ where there may be political / legal cells
to advise.

If the target list is wrong it won’t help the targets but the human in the
loop may help to avoid blowing up a school accidentally, depending on the
amount of collateral damage that can be accepted for a given target.

------
nuguy
This shocks me. Here is this man who is clearly a complete novice in
international politics, economics, and history. He says that we should make
rogue countries “feel guilty” for developing malicious AI implementations. He
hand-waves away the military as simpletons who “act on duty” like a f*king
high school student. When confronted with the existential threats of AI he
launches into an analysis of African visa processes and “inclusivity.” This is
complete and utter garbage. Worst of all, he talks about reaching general,
human-like AI without ever mentioning that it is basically an automatic
extinction-level event for humankind. If you disagree, then check out my
comment history and change my mind. Preventing general AI is extremely
important, and anyone who wants to form some kind of group to encourage
regulation and legislation around AI, please get in touch with me.

------
pietroppeter
> we need to be able to extend it to do things like reasoning, learning
> causality, and exploring the world in order to learn and acquire
> information.

Having recently read The Book of Why by Judea Pearl, I found his remarks on
causality particularly interesting. Pearl’s approach is based on causality
being a testable assumption that requires domain expertise to be expressed.
Having done that, it provides powerful techniques for addressing causal
questions from data.

Any effort towards a general AI capable of causal reasoning should instead be
able to create causal assumptions from experience and “reasoning”.

I have not seen much discussion (though I have not looked hard) on how to
combine general-purpose AI approaches such as DL with domain-specific
approaches such as the causal inference techniques described by Pearl. Does
anyone have references to share?
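
One of the causal-inference techniques Pearl describes, the backdoor adjustment, can be sketched concretely. Everything below is an invented toy model (all probabilities are illustrative): a confounder Z drives both treatment X and outcome Y, so the observational P(Y|X) and the interventional P(Y|do(X)) disagree, and summing over Z recovers the latter:

```python
# Toy structural causal model (all numbers invented for illustration):
# confounder Z -> X and Z -> Y, plus the causal edge X -> Y.
def p_z(z):
    return 0.5                      # fair coin for the confounder

def p_x_given_z(x, z):
    p1 = 0.8 if z else 0.2          # Z strongly encourages treatment X
    return p1 if x else 1 - p1

def p_y_given_xz(y, x, z):
    p1 = 0.1 + 0.3 * x + 0.4 * z    # Y depends on both X and Z
    return p1 if y else 1 - p1

def joint(z, x, y):
    return p_z(z) * p_x_given_z(x, z) * p_y_given_xz(y, x, z)

# Observational quantity: P(Y=1 | X=1), read straight off the data.
num = sum(joint(z, 1, 1) for z in (0, 1))
den = sum(joint(z, 1, y) for z in (0, 1) for y in (0, 1))
naive = num / den                   # inflated by the confounder

# Backdoor adjustment: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) P(Z=z)
adjusted = sum(p_y_given_xz(1, 1, z) * p_z(z) for z in (0, 1))
```

Here the naive conditional comes out at 0.72 while the adjusted interventional quantity is 0.60: conditioning alone mixes in Z's effect, which is exactly the gap a DL system would have to learn to close.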

------
hummingurban
How much sleep would he lose if he realized that the very educational
institutions he attended were, by and large, designed to benefit the
military's R&D?

Anything we invent or discover can and will be used by the military or
intelligence agencies, and even law enforcement agencies who are committing
real human rights violations. Hell, there are graduates who go work for the
government because it's stable, or who may even unwittingly end up writing
RATs or researching zero-days.

The truth is, as researchers and engineers, our guesses are as good as
touching a part of the elephant. Everyone thinks the part they hold is the
whole, and claims: this is what an elephant should be.

When in fact, if you follow the author's logic, we are all complicit. Every
walk of life is influenced by the military and built for the military. The
internet? Designed for resilient military communications against nuclear
attacks. Microwave? TV? Radio?

So the genie is out of the bottle and is now going to work for the military.
Should we feel outraged? Should we stop all AI research because of the
author's view that the military kills people so it's automatically evil?

What about the people who are ready to give their lives so these researchers
can continue living and doing great work? It seems to me that beggars can't be
choosers. It's good to have a strong sense of morality so you don't end up
writing a RAT tool for a corrupt government that ends up torturing dissidents.
But _rarely_ will you even know who is using it and where it's being applied.
It's simply designed so you don't have to be burdened with the moral dilemma
of how a state should think and behave.

By and large, a state is not a person: no conscience, no morals, only national
interests dictated by the few in power. These skewed power dynamics remove
the decision makers from the burden of making immoral decisions to further
the "national" agenda. E.g.: do we torture an alleged terrorist to extract
information that can stop an imminent attack on hundreds? It's certainly not
the call of the people who wrote the software to manage torture, and it's not
the call of those who follow orders.

We are ruled by ideology, the kind that constantly sells us an unknown,
unpredictable threat. If the author has anyone to blame, it is the people who,
by and large, have already voted with their money and hearts: ignorance is
bliss, gathering material wealth is the priority.

------
gaius
_don’t completely trust military organizations, because they tend to put duty
before morality_

The rockets go up, who cares where they come down / That’s not my department
says Wernher von Braun

------
colechristensen
My first thoughts were on the failure of logic of just being opposed to the
military doing things without proposing how they should reasonably behave
but...

Well, it seems like the author of the article had the headline in mind before
even talking to YB. The responses don't seem like well-thought-out ideas on a
complex topic; they seem like random off-the-cuff answers to a journalist's
leading questions, and it isn't fair to criticize them.

------
rademacher
So he's saying that concentration of "wealth" is bad, and war is bad.

A quick glance at the definition of moral gives, "a person's standards of
behavior or beliefs concerning what is and is not acceptable for them to do"
which suggests that they may be fluid. Are killer robots necessarily any less
moral than killer humans? We seek to replace humans with "robots" in many
cases under the assumption that they perform better. I suppose in the case of
killer robots this could mean more effective killing, or perhaps it could mean
more accurate strikes and fewer civilian casualties? (I'm not saying I am an
advocate for military AI, just posing some questions.)

Finally, suggesting that we need to focus less on incremental progress when DL
still isn't completely understood seems premature. I'm not sure another great
leap in AI is on the horizon until a leap in computational power or a new
framework is discovered.

~~~
indigochill
My take on the "robots mean more accurate/less risky warfare" argument is
that that's precisely the problem, actually (at least if we start from the
assumption that war is bad). By "industrializing" warfare and reducing the
cost in lives, we make it more politically palatable.

Risk assessment is a massive part of waging war. If the risk to one side is
reduced by using robots (or other weapons-of-cheap-destruction) instead of
humans, then the likelihood that side will favor war as a conflict resolution
mechanism is (all other things being equal) increased.

On the other hand, if the risk is too high, then alternative options are more
likely to be favored. This seems to be the thinking around things like nuclear
disarmament and why proliferation is generally seen as a bad thing despite
nukes being hands-down the most cost-effective way to end a war (at least
against an enemy not similarly equipped - see the Cold War). The reason given
for the US bombing of Japan was to save lives by shortening the war - though
I'm not about to get into whether that decision was justified.

And that's before we introduce AI, which has had notorious bugs like failing
to identify dark-skinned faces relative to light-skinned faces
([https://www.bostonmagazine.com/news/2018/02/23/artificial-
in...](https://www.bostonmagazine.com/news/2018/02/23/artificial-intelligence-
race-dark-skin-bias/)) or driverless car crashes. As uncomfortable as I am
with proliferation of killer tech in general, introducing AI actually makes my
skin crawl.

Even without the unpredictability of AI, "precision strikes" are not
necessarily as precise as their name implies: [https://www.theguardian.com/us-
news/2014/nov/24/-sp-us-drone...](https://www.theguardian.com/us-
news/2014/nov/24/-sp-us-drone-strikes-kill-1147)

~~~
rademacher
All good points. It would be nice to see some game theory type assessment of
these imbalances.
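
A toy version of such an assessment (all payoffs invented purely for illustration): if attacking yields spoils s but costs c in own losses, then lowering c, say by substituting robots for soldiers, can flip a side's best response from holding to attacking:

```python
def best_response(s: float, c: float, other_attacks: bool) -> str:
    """Best reply in a toy one-shot game: attacking wins spoils s if the
    other side holds, and always costs c; holding against an attack loses s."""
    attack_payoff = (0 if other_attacks else s) - c
    hold_payoff = -s if other_attacks else 0
    return "attack" if attack_payoff > hold_payoff else "hold"

# Same spoils, different cost of waging war:
human_cost_war = best_response(s=3, c=5, other_attacks=False)  # "hold"
robot_cost_war = best_response(s=3, c=1, other_attacks=False)  # "attack"
```

The numbers are arbitrary; the point is only the comparative static the parent comment describes: all else equal, cheaper war shifts the equilibrium toward fighting.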

------
TangoTrotFox
People too often come to one side or another of an issue without considering
context and alternatives. War unfortunately drags everybody down to the lowest
common denominator. The reason for this is that the lowest common denominator,
the nation that would act with the greatest disregard for anything other than
their own victory, is the nation that wins wars and gets to set the rules for
the rest of the world.

In 1945 the United States became the only nation to ever launch a nuclear
strike on another country. In two momentary flashes we killed hundreds of
thousands of individuals - the vast majority being innocent civilians playing
no direct role in the war whatsoever. But that act of arguable amorality not
only immediately ended a world war, but has since led to a great fear of war
among any nation with these weapons which has led to an unprecedented period
of _relative_ world peace. This [1] great video shows the borders of Europe
throughout time. The sudden lapse in that constant warfare and transition, the
one we now still live in, that occurs just shortly after the more widespread
development of nuclear weapons is striking.

But nuclear weapons will not remain the ultimate weapon and deterrent forever.
And whichever nation is able to develop appropriate defenses against nuclear
weapons and offenses beyond nuclear, will be the same nation that decides the
direction of our species moving onward into the future. Again, it's a lowest
common denominator problem - but it's one that's ultimately unavoidable by any
means other than by ensuring that you yourself are always the most militarily
capable nation. And ideally multiple nations will develop these weapons in
unison. I'm not particularly fond of e.g. China having unopposed reign
throughout the world, but neither am I fond of the idea of an unopposed reign
of e.g. the United States. I think we find the best outcomes when we have
multiple balanced powers, and in this regard it is our responsibility to never
fall behind.

[1] -
[https://www.youtube.com/watch?v=P9YnYRk8_kE](https://www.youtube.com/watch?v=P9YnYRk8_kE)

------
deepnotderp
I don't understand the argument against military robots and AI. Why is it more
moral to send men to die instead of machines?

Edit:

About asymmetry in the battlefield, yes, but then you should say the same of
_all_ advanced weaponry, right? Each side is _aspiring_ for asymmetry in war
technology.

About the army refusing to carry out orders. That's a good point that I hadn't
thought of. I'm not super sure that current humans are really that great, but
it's a fair point.

About the cost of war. High end military technology is very expensive and
something this sophisticated probably won't be cheap. But even disregarding
that, I would prefer thousands of machines being destroyed to humans dying.

~~~
beojan
The problem is AI taking the decision to kill people.

~~~
wetpaws
Get your own AI, problem solved.

~~~
drb91
One robot vs state violence?

~~~
wetpaws
I should've probably elaborated on my comment, because of course people will
reflexively downvote it, but proxy wars, smaller armies, and now AI-driven
conflicts are the natural evolution of warfare.

We went from the slaughter of millions, to a handful of extremely professional
soldiers, to, eventually, fights between machines. (MGS4 depicted this in
striking detail years ago.)

~~~
simion314
Do you think that a country would surrender because your robots won a robot
fight? There will still be an army and an occupation-resistance movement that
will fight your robots, and lives will still be lost. The attackers would
continue to destroy the roads, train stations, food reserves, energy
production facilities... maybe AI is the future, but it is not better, and we
should try to make killer-robot usage illegal.

~~~
wetpaws
The concept of the proxy fight is not new; it is probably as old as humanity.
The Bible had David vs. Goliath. The Maya used a variation of soccer to settle
military conflicts. (You can see it depicted on temple walls.) You are focused
on the "destroying" part, but in fact that is only a part of warfare, and a
very expensive one. (Part of the reason we see fewer and fewer large-scale
conflicts is that we are growing more and more economically interconnected,
and the cost of open warfare is just not worth it anymore.)

~~~
simion314
I would love to see politicians or generals fight directly, but this does not
happen at all; we see proxy wars where powers A and B fight indirectly by
getting involved in some country C's conflict.

I do not believe that if we get AI robots involved we will not see bombs land
on bridges; as recently as the war in Yugoslavia, bombs were dropped on
economic targets:

"NATO bombed strategic economic and societal targets, such as bridges,
military facilities, official government facilities, and factories, using
long-range cruise missiles to hit heavily defended targets, such as strategic
installations in Belgrade and Pristina."

Are you imagining two robot teams fighting each other, with the winner getting
the loser's resources and imposing its politics? Will the population accept
that their robots lost?

------
break_the_bank
With AI soldiers/killer drones, the second amendment becomes fairly useless.
The government cannot bleed.

~~~
jahewson
Unfortunately the Supreme Court already ruled that the second amendment
applies to self defence irrespective of the “militia” part (DC vs Heller,
2008).

~~~
TangoTrotFox
The ruling specifically focused on the militia aspect. Quoting that
ruling:

 _The prefatory clause comports with the Court’s interpretation of the
operative clause. The "militia" comprised all males physically capable of
acting in concert for the common defense. The Antifederalists feared that the
Federal Government would disarm the people in order to disable this citizens’
militia, enabling a politicized standing army or a select militia to rule. The
response was to deny Congress power to abridge the ancient right of
individuals to keep and bear arms, so that the ideal of a citizens’ militia
would be preserved._

People nowadays conflate militia with military, but they are not the same.
The "militia" are the armed citizens of a state. And similarly, the security of
a free state is talking explicitly about security against government itself -
not foreign invaders, which was the domain of the federal government. An armed
population can prevent the imposition of tyrannical rule, an unarmed
population cannot. In verbose modern text the amendment might read something
like, "A well regulated and armed population being necessary for the
protection of a state against tyranny, the right of the people to keep and
bear arms shall not be infringed."

------
gdrift
Additional read:

"Autonomous Military Robotics: Risk, Ethics, and Design" By Ethics + Emerging
Sciences Group at California Polytechnic State University, San Luis Obispo,
sponsored by the Department of the Navy

    
    
        In this report, we will present: the presumptive case for the use of 
        autonomous military robotics; the need to address risk and ethics in 
        the field; the current and predicted state of military robotics;
        programming approaches as well as relevant ethical theories and considerations 
        (including the Laws of War, Rules of Engagement); a framework for technology 
        risk assessment; ethical and social issues, both near- and far-term; 
        and recommendations for future work.
    

[http://ethics.calpoly.edu/ONR_report.pdf](http://ethics.calpoly.edu/ONR_report.pdf)

and "Malak" by Peter Watts

Inside the mind of an autonomous drone with a conscience, but as usual with
Watts it all goes wrong.

[https://rifters.com/real/shorts/PeterWatts_Malak.pdf](https://rifters.com/real/shorts/PeterWatts_Malak.pdf)

------
plaidfuji
>> and it has proved incredibly powerful and effective for all sorts of
practical tasks, from voice recognition and image classification to
controlling self-driving cars and automating business decisions.

> and it has proved incredibly powerful and effective for all sorts of
> practical tasks, from signal processing and signal processing to signal
> processing and a dubious application of deep learning

------
chooga
This is not worth reading.

I worked with Bengio for a couple of years, and he's a classic example of a
not-particularly-talented mid-level prof who's been elevated by the bubble of
hype in his field, attracted some talented students who publish papers on
which he ends up as a coauthor... and now thinks he is a voice of authority in
AI (and many other fields).

Those who disagree with me -- can you name a single important contribution to
the field he has made, that wasn't in fact done by one of his students?

Hinton did backprop, LeCun did convnets, Schmidhuber did LSTMs, and Bengio did
... ?

------
wiz21c
FTA:

>>> The best students want to go to the best companies.

If you're a parent, maybe it's time to teach your kids that "going to the best
company" may not be the best outcome for a citizen of the world.

------
tim333
As a devil's advocate for military AI, it might be better in some ways - after
all, if you are launching military action, do you want it to be unintelligent?
The trend has been from blanket bombing killing mostly civilians to precision
strikes taking out some bad guys and some wedding parties and in the future AI
robots might be able to do things like disable vehicles without killing the
soldiers.

------
eli_gottlieb
>Right now, we don’t really have good algorithms for this, but I think if
enough people work at it and consider it important, we will make advances.

Yeah, I think there's a lot of space for advances, if we can combine different
backgrounds and intuitions. I'm working on a project where I try to combine
some programming-language semantics and Bayesian learning to learn structures.

------
patio131
I think we should all remember that being a pioneer in AI does not give you
any experience or authority in international politics...

------
discoball
The right to bear arms was given to humans who happen to live in this
so-called freedom-loving country. It was never given to autonomous non-human
entities. So while the military may develop robotic kill squads, their
non-military use is illegal. Unless the treasonous Supreme Court doubles down
on its treason and gives the right to bear arms to AI, by considering AI to be
a digital person with rights. It could happen. The Supreme Court judges have
already shown that they're fully capable of making decisions that favor big
corporations over us, the people. Why not autonomous bots? After all, big
enough corporations are pretty much autonomous (as they follow profit above
all and do so regardless of who is in charge of them).

~~~
shard972
Seems like it could go either way; it's not like the bots themselves need
rights or to become legal entities beyond being a weapon.

Now maybe this weapon fires by you telling it to shoot that guy over there, or
it's programmed to shoot anyone who enters through that door over there, but
in the end it's still a weapon under the control and responsibility of the owner.

~~~
discoball
What you describe is not the autonomous AI, driven by a dynamic mission, that
military AI will be. An AI killer bot can be developed that not only tries to
achieve the goal specified by the mission but also alters the mission as
needed to achieve a higher-level goal, and the higher you go up in the goal
specification, the more autonomy you're giving to the machine.

------
dnprock
AI researchers over-romanticize their robot AI technology. They don't know
enough about wars. Morality is something we talk about outside of the
battlefield.

It is not going to be your robot army fighting a human army. Instead, it will
be your robot army fighting another robot army, perhaps less sophisticated.

They'll supplement their disadvantage with humans. You want to be the army
with more advanced robots. Otherwise you'll need to put humans on the line.
Always better to equip yourself with better fighting capability.

~~~
onion2k
_It is not going to be your robot army fighting a human army. Instead, it will
be your robot army fighting another robot army, perhaps less sophisticated._

I don't think there are any current conflicts[1] that could reasonably be
described as being between two armies. It's more often between one group of
armies from several countries working together and several groups of
[terrorists | freedom fighters | "unlawful combatants"] using guerrilla
tactics to attack them.

The exceptions are the ongoing civil wars, but they tend to be an army
fighting rebel groups in that country - would rebels ever get their own
robots? How would that happen?

It's obviously possible that we'll see two armies of robots fighting in the
future, but that's not really what war is right now.

[1]
[https://en.m.wikipedia.org/wiki/List_of_ongoing_armed_confli...](https://en.m.wikipedia.org/wiki/List_of_ongoing_armed_conflicts)

~~~
dnprock
The Syrian government got Russian weapons and air support. They even have
chemical weapons.

One side or many sides. You can be sure that the other sides are not fighting
with their bare hands.

------
noir-york
"“Some rogue country will develop these things.” My answer is that one, we
want to make them feel guilty for doing it"

Feel guilty?! The man may have contributed greatly to the field of AI, but
that kind of comment just comes across as very naive about how the world
works.

"Shouldn’t AI experts work with the military to ensure this happens?

If they had the right moral values, fine."

The military adheres to the laws of war and is led by a civilian politician,
not a pope. A nation's military carries out the policies of the civilian
leadership. If you want a moral military, get moral politicians.

I don't understand the distaste for one's own military forces. These people
are not aliens; they're fellow Americans, or French, or Japanese, or wherever
you're from. At least in the major democracies, these men and women are
volunteering to defend your ass. They don't get to start wars; the politicians
you vote for do.

I very much want the military forces of my country to kick the ass of any
threat, and if some tech such as AI could help eliminate that threat faster
and with less blood, then brilliant.

~~~
PavlovsCat
> These people are not aliens, they're fellow Americans, or French or Japanese
> or wherever you're from.

Unless they're a threat that needs its "ass kicked", of course. Which is a
euphemism for killing them dead. But that's somehow not _worse_ than simply
criticizing people, wanting them to be _more_ alive, by connecting their deeds
and the consequences of those deeds?

Any teenager can realize this: a military's only positive use is to defend
against other militaries - it's like a debugger that can only debug bugs in
itself - but it can also be used to slaughter civilians, and it already does
that quite a bit too much.

> They don't get to start wars, the politicans you vote for do.

Then how come George Bush wasn't booed off that aircraft carrier when he
declared mission accomplished? If you want respect, if you don't want to be
ashamed, don't get caught in situations like that. If you want respect,
instead of shame, by the time something like Abu Ghraib reaches the public it
should include details about the ruckus it caused in the military, and how
people got beaten up by their comrades for partaking in the torture of people,
long before the police could get to them. And so on. Instead, we get these
uncanny valley stories about honor and duty and whatnot, by people who don't
quite remember what being human is, but also can't quite leave humans alone so
they can find a solution for this mess.

When I opted against military service I didn't just say no, I wrote them a
kind of fiery letter. If 19-year-old me can do that, others can do it, too. But
now I don't even get to criticize the military because I'm not in it, because
they "volunteered to defend me"? Nah. I see the ads for the Bundeswehr; they
volunteered because they're not right in the head if they responded to any of
that. They want power, importance, camaraderie. Anything BUT personal deep
responsibility, which is why all the promotional material is about how joining
the army is "stepping up", just like you repeat the old chestnut of the
military protecting us, rather than leeching off and killing us. By "us" I
mean humans, not $country, since, as I said, that country stuff cancels itself
out.

If people want respect, let them be respectable. If they do shameful shit,
they get shamed. If you are for the swift elimination of threats you should
welcome that.

And yes, I do feel for soldiers, I just hide it very well. The thought of kids
getting sent off to be made murderers for the protection of the wealth of
people who couldn't give a shit about them doesn't make me think "serves them
right"; it breaks my heart. But that doesn't mean I buy into all those
rationalizations that always get trotted out. They don't hold up, and at this
level of technology we simply cannot allow that level of foolishness anymore.

