
The best defense against malicious AI is AI - etiam
https://www.technologyreview.com/s/608288/ai-fight-club-could-help-save-us-from-a-future-of-super-smart-cyberattacks/
======
strictnein
On a related note, at Blackhat this year there's a presentation on using
Machine Learning based malware detection to train your malware to evade ML
based malware detection:

[https://www.blackhat.com/us-17/briefings/schedule/index.html...](https://www.blackhat.com/us-17/briefings/schedule/index.html#bot-vs-bot-for-evading-machine-learning-malware-detection-6461)
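The idea in that talk can be sketched as a black-box evasion loop: mutate only features that don't change the malware's behaviour, and keep any mutation that lowers the detector's score. Everything below (the weights, which features count as mutable, the threshold) is invented for illustration:

```python
import random

random.seed(0)

# Hypothetical black-box detector: scores a binary's feature vector
# (imports, section entropy, ...) and flags it above a threshold.
# Weights, features, and threshold are all made up for this sketch.
WEIGHTS = [0.9, 0.7, -1.2, 0.5, -1.2, 0.8]

def detector_score(features):
    return sum(w * f for w, f in zip(WEIGHTS, features))

def is_flagged(features):
    return detector_score(features) > 1.0

# A malware sample, currently detected.
sample = [1, 1, 0, 1, 0, 1]
assert is_flagged(sample)

# Evasion loop in the spirit of the talk: mutate only the features we
# pretend are functionality-preserving (indices 2 and 4, e.g. padding
# or metadata) and keep any change that lowers the detector's score.
MUTABLE = [2, 4]
for _ in range(100):
    if not is_flagged(sample):
        break
    trial = list(sample)
    i = random.choice(MUTABLE)
    trial[i] = 1 - trial[i]
    if detector_score(trial) < detector_score(sample):
        sample = trial

print("still flagged:", is_flagged(sample))
```

In the real attack the detector is a trained model queried as an oracle and the mutations are things like appended sections or renamed imports, but the hill-climbing structure is the same.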

------
bem94
Depending on one's level of idealism, the best defense against malicious AI is
actually educating programmers / hackers / engineers about the implications of
their work, and expecting them to have a sense of decency & foresight.

It baffles me that we just accept the kind of malice inflicted on people by
programmers because "someone will always do it". As a profession / collection
of skilled persons, we should really be better than that.

Obviously, one cannot see the future, nor would we want to be paralyzed by
fear of doing anything. But there is a certain minimum requirement for
collective responsibility which I really don't think we are meeting at the
moment.

~~~
bad_alloc
Decency and foresight are highly subjective. Actually encoding what we
understand as "decent human behaviour" into AI is a huge problem: if you
create a system that decides whether to give somebody a loan in the US based
on a financial data set, it might come up with any of these hypotheses:

* Don't give loans to people living in [poor area].

* Avoid people with names that aren't similar to the most common ones in the database (i.e. foreign ones)

* If, when linking the customer data to their social media, the profile picture is dissimilar to [preferred ethnicity], do not give a loan.

Now, without any malice from the developer, the system has become racist: it
saw the correlation that in the US blacks and hispanics live in poverty more
often than others [1]. It knows that poor people pay back loans less
frequently and makes the rational decision not to give a loan to that group.
This of course reinforces the problem.

But how are we to solve this? Introduce an additional column "race" to the
data and bias the results with it? Would that not be just as racist? How do we
give the system an awareness not to discriminate against ethnic groups, if the
data contains implicit clues? This comes down to giving such an AI human
intuition about such questions.

[1]
[http://www.ssc.wisc.edu/irpweb/faqs/faq3/Figure1.png](http://www.ssc.wisc.edu/irpweb/faqs/faq3/Figure1.png)
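The failure mode described above is easy to reproduce in miniature. In this sketch every number is invented: a hidden attribute drives both neighborhood and default risk, the "model" only ever sees the neighborhood, and the proxy recreates the discrimination anyway:

```python
import random

random.seed(0)

# All numbers here are invented. A hidden attribute "group" drives both
# where applicants live and their default risk (via poverty), but the
# model only ever sees the neighborhood.
def applicant():
    group = random.random() < 0.3                      # never in the data
    in_poor_area = random.random() < (0.8 if group else 0.2)
    defaulted = random.random() < (0.4 if group else 0.1)
    return in_poor_area, defaulted, group

data = [applicant() for _ in range(20_000)]

def default_rate(poor_area):
    rows = [d for p, d, _ in data if p == poor_area]
    return sum(rows) / len(rows)

# Learned rule: deny loans wherever the observed default rate is higher.
deny_poor_area = default_rate(True) > default_rate(False)

# Denial rate per hidden group: the proxy (neighborhood) recreates the
# discrimination even though "group" was never a feature.
group_rows = [p for p, _, g in data if g]
other_rows = [p for p, _, g in data if not g]
group_denial = sum(group_rows) / len(group_rows)
other_denial = sum(other_rows) / len(other_rows)
print(f"denial rate: group={group_denial:.2f}, others={other_denial:.2f}")
```

The denial rates split sharply along the hidden attribute even though the training data never contained it; that's the whole problem with proxy features.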

~~~
bko
>Now without any malice from the developer, the system has become racist: It
saw the correlation that in the US blacks and hispanics live in poverty more
often than others [1].

If the data really suggests that these factors (poor areas, foreign names,
social media) affect the likelihood of default, should they be completely
ignored? I'm sure credit scores and salaries are correlated with race in the
US too. Should those be ignored? Just give out loans indiscriminately?

These factors aren't ignored by loan approvals now. Worse, inaccurate
stereotypes are used as a proxy for some of these factors that may be relevant
to a borrower's ability to pay. We don't live in a perfect world where people
ignore certain factors.

~~~
barrkel
Are you familiar with correlation vs causation? Without a causal link, making
an inference based on correlation is unsound. Encoding unsound reasoning in an
AI model, particularly when it reinforces an existing social imbalance, would
be less than ethical.

"Weapons of Math Destruction" by Cathy O'Neil delves into this in more depth.
It's a very valid concern, particularly in the way non-technical people are
trained over time to give deference to the algorithm.

~~~
bko
I'm not sure what you mean. AI doesn't involve "encoding unsound reasoning"
into anything. That's more akin to how humans think and act, by making
decisions based on heuristics learned from a lifetime of experience and
societal influences. It would be unethical to encode a "race" factor to be
considered in an algorithm, although even if it were included, I think it
would prove insignificant, since race has no direct causal link to ability to
repay once you consider all the other factors (e.g. a poor white person in the
same neighborhood would have the same default probability). However, other
factors might, such as the ones you have listed (where they live, credit
score, etc.).

I am familiar with the book you mentioned.

You still didn't answer my question. If credit score and income are correlated
to race in America, should they be ignored from loan applications? What are
valid factors to consider?
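The conditional-independence claim above (same neighborhood, same default probability regardless of race) can be checked on made-up data. All rates below are invented; the point is only the mechanics:

```python
import random

random.seed(0)

# Invented rates: default risk depends only on the neighborhood, while
# race merely correlates with the neighborhood.
def applicant():
    race_a = random.random() < 0.5
    poor_hood = random.random() < (0.7 if race_a else 0.2)
    defaulted = random.random() < (0.3 if poor_hood else 0.05)
    return race_a, poor_hood, defaulted

data = [applicant() for _ in range(50_000)]

def rate(rows):
    return sum(d for *_, d in rows) / len(rows)

# Marginally, race looks predictive (a spurious signal via neighborhood):
print(rate([r for r in data if r[0]]),      # roughly 0.22
      rate([r for r in data if not r[0]]))  # roughly 0.10

# Conditioned on neighborhood, the race "signal" vanishes:
for hood in (True, False):
    same_hood = [r for r in data if r[1] == hood]
    print(hood,
          rate([r for r in same_hood if r[0]]),
          rate([r for r in same_hood if not r[0]]))
```

Within each neighborhood the two groups default at the same rate, so a race column would carry no extra information here; the disagreement in the thread is over whether the neighborhood feature itself is then fair game.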

~~~
barrkel
> If credit score and income are correlated to race in America, should they be
> ignored from loan applications?

Is correlation causation? Is it fair to encode a judgement based on
correlation but not causation?

Or to make your position more clear, do you support racial profiling? If not,
why not, and how do you justify that position, and how does that justification
apply to judgements based on correlation but not causation?

(My position is that making decisions based on correlation but not causation
is the general case for which racial profiling is a specific case.)

~~~
bko
I already answered that. No, race should not be a factor, as it does not
convey any info not explained by other factors.

Which factors are causal, then? Presumably not where you live, or the other
considerations you determined to be implicitly racist.

------
gsg
So... the only way to stop a bad guy with a GAN is a good guy with a GAN?

~~~
sametmax
AI, AI, captain.

~~~
vedanta
You know, HN could use some of this /.-like humor.

------
zitterbewegung
For something more oriented toward patches and network defense, see
[http://archive.darpa.mil/cybergrandchallenge/](http://archive.darpa.mil/cybergrandchallenge/).

~~~
dguido
I competed in that! It was a fun contest, and if you want to read more about
it, we have a section on our blog dedicated to it with lots of rich technical
detail: [https://blog.trailofbits.com/category/cyber-grand-challenge/](https://blog.trailofbits.com/category/cyber-grand-challenge/)

I am sorry to admit, though, that it did not involve AI or machine learning in
the slightest. Most of it was about melding the capabilities of fast, dumb
dynamic testing like fuzzing together with the deep analytical capabilities of
more advanced program analyses like symbolic execution.
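A cartoon of that melding, with the parser, the magic bytes, and the "solver" all hypothetical stand-ins: random fuzzing alone almost never guesses a magic prefix, but once a symbolic-execution-style solver hands it a seed that reaches the guarded branch, dumb mutation finds the crash quickly:

```python
import random

random.seed(1)

# Hypothetical buggy parser standing in for a challenge binary.
def parse(data: bytes):
    if data[:4] == b"HEAD" and len(data) > 4 and data[4] >= 0x80:
        raise RuntimeError("crash: length byte with high bit set")
    return "ok"

# Dumb, fast fuzzing: random byte mutations of a seed input. On its own
# it almost never guesses the b"HEAD" magic prefix.
def fuzz_once(seed: bytes) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

# Stand-in for the symbolic-execution side: a solver that recovers an
# input reaching the guarded branch and hands it to the fuzzer as a seed.
solved_seed = b"HEAD\x00\x00"

crashes = 0
for _ in range(2000):
    try:
        parse(fuzz_once(solved_seed))
    except RuntimeError:
        crashes += 1
print(f"crashes found: {crashes}")
```

The division of labor is the point: symbolic execution is slow but can solve for the path constraints, and fuzzing is fast but blind, so each covers the other's weakness.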

~~~
zitterbewegung
I was there, so I probably saw you. I remembered that the competitors used
symbolic execution. My comment should have been more general and said
"automated".

~~~
hueving
The sportscaster-style commentary was hilariously awkward at times. A room
full of people staring at a stage full of server racks, with a projector
showing basic shapes being spewed between locations. Then, off on another
screen, a technical person trying to explain to the Science Channel person
what the hell was going on with the different types of exploits and code
execution graphs.

It's completely indescribable and epically nerdy. +10, would attend again. :)

~~~
zitterbewegung
It was awkward at times, and the visualizations were cool but didn't really
help you understand what was occurring. But given the content, getting
non-technical people excited is a herculean task for anyone. I hope DARPA
tries again. They could attempt to do better and make it interesting, or they
could embrace the awkwardness and own it. Make it hilarious, make it MEMEy,
make it like Tim and Eric.

------
etiam
I'm following the original title for the post for now, but frankly I'm unhappy
with almost everything about the style of it. I would encourage changing it.

~~~
Houshalter
I would go with "Kaggle Hosts Adversarial Machine Learning Competition".

~~~
etiam
I like it. Accurate and descriptive.

------
astrojams
Would strategies used by AIs that play imperfect-information games, like
poker, be useful for winning a contest like this?

~~~
inopinatus
Those who have not seen WarGames are clearly doomed to reinvent it. Just after
I finish this game of "Falken's Maze"...

~~~
rch
No kidding - what else is a potentially hostile autonomous agent, if not
_everybody_?

------
kerkeslager
Yet another thing predicted decades ago by William Gibson in _Neuromancer_.

~~~
pmlnr
Came here to comment "Wintermute?"

------
dovdovdov
That's how Skynet got started.

------
s-brody
Well, if there were no AI, there would be no malicious AI either.

------
sigzero
This reminds me of a cyberpunk story.

~~~
dave7
Reminds me of Person of Interest.

I'm still amazed this show had a full run on mainstream TV. One of the best
TV/movie treatments of "Hacker News topics" there has been.

[https://en.wikipedia.org/wiki/Person_of_Interest_%28TV_serie...](https://en.wikipedia.org/wiki/Person_of_Interest_%28TV_series%29)

------
jerry40
The title of this post reminds me of "Watchbird", a story by Robert Sheckley.

~~~
wruza
Or The City and the Stars by Arthur C. Clarke, though it was not exactly a
computer AI.

That book still gives me goosebumps every time I remember it.

------
zeep
let's hope that this good AI won't turn bad once it's too late to switch to
something else...

------
examplar
the best defence against AI is to reduce the # of bits available for the algorithm to use

------
prodtorok
The Oracle.

 _She possesses the power of foresight, which she uses to advise and guide the
humans attempting to fight the Matrix_

------
melling
Are we still wasting our time talking about evil AI at this premature time?
Can someone dig up Andrew Ng’s comments on this?

~~~
backpropaganda
This isn't Elon Musk-style evil AI. This is the more prosaic idea of hackers
being able to hack AI systems. For instance, a hacker can put stuff on a stop
sign (which won't be visible to humans) to make a Tesla car think it's not a
stop sign, and therefore cause an accident. I recommend reading up about it.
This is the kind of stuff that can actually happen today.
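The mechanism behind those attacks is a gradient-guided perturbation (FGSM is the textbook version). Here is a toy sketch against a hypothetical linear "stop sign" scorer; the dimensions, weights, and epsilon are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                  # "pixels" in a flattened image

# Toy stand-in for an image classifier (weights are made up): a linear
# score whose sign decides "stop sign" vs "something else".
w = rng.normal(size=d)

def is_stop_sign(x):
    return float(x @ w) > 0

# A clean input that the model calls a stop sign.
x = np.full(d, 0.5) + 0.05 * np.sign(w)    # pixel values near 0.5
assert is_stop_sign(x)

# FGSM-style perturbation: move every pixel a small step eps against the
# gradient of the score. The gradient of x @ w with respect to x is w,
# so the step is -eps * sign(w). eps = 0.1 is small next to the 0..1
# pixel range, yet summed over 10,000 pixels it overwhelms the signal.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(is_stop_sign(x), is_stop_sign(x_adv))
```

With thousands of pixels, a per-pixel change of eps adds up to a huge change in the score, which is why the perturbation can stay small enough that a human shrugs while the model flips its answer.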

~~~
rimliu
If something looks like a stop sign to the human but like something else to
the AI, then "I" stands for "Idiocy" not "Intelligence".

~~~
ben_w
Your brain is not immune to the problem, it's just hard to automate the
creation of optical (and audio, and presumably all other sensory) illusions
when we don't have a synapse resolution connectome of the relevant bits of
your brain.

Examples include That Dress, duck-or-rabbit, stereotypes, "garden path
sentences", and most film special effects.

~~~
rimliu
I am talking about cases like these:

[https://blog.openai.com/robust-adversarial-inputs/](https://blog.openai.com/robust-adversarial-inputs/)
[http://www.popsci.com/byzantine-science-deceiving-artificial...](http://www.popsci.com/byzantine-science-deceiving-artificial-intelligence)

~~~
ben_w
Ironically, your second link starts with Clever Hans, which is another example
of my point. Machines, even organic ones like our brains, are not magically
able to know objective reality, and the failure itself isn't idiocy; the
_rate_ of failure (and things like metacognition about the possibility of
failure) is (at least part of) the intelligence-idiocy spectrum.

