
EFF: Google Should Not Help the U.S. Military Build Unaccountable AI Systems - confounded
https://www.eff.org/deeplinks/2018/04/should-google-really-be-helping-us-military-build-ai-systems
======
tomohawk
If it's not safe for Google to be building this stuff for the military, then
it's just as unsafe for them to be building it for themselves.

Also, these kinds of things can't be looked at in a vacuum. There are
other, much more belligerent, countries in the world that are investigating
the use of AI.

What should the US do in response? Doing nothing doesn't seem like a very
good answer.

~~~
pde3
We did think about those issues before writing the piece.

It's safer to figure out how to get AI systems to be robust, reliable and safe
in civilian contexts before rushing to weaponize them. We need to understand
how to avoid technical problems like adversarial examples, and how to
recognize and avoid accidental action-reaction-escalation pathways, before
militaries start deploying this stuff.

Objectively speaking, the US is one of the planet's more belligerent nations.
But if there were evidence that other belligerent countries were already
deploying AI weapons systems, there might be an argument for the US keeping
pace. Absent such evidence, the US should think more carefully about whether
and how to move first, or whether certain kinds of restraint in this space
might be in its long-term strategic interest.

