
Hacking of artificial intelligence is an emerging security crisis - headalgorithm
https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial
======
peterwwillis
"AI" seems to be synonymous with "crappy statistical analysis algorithm".

~~~
stingraycharles
I’m not sure why this comment is being upvoted this much, but it seems fairly
snarky and adds little value to the discussion.

~~~
joe_the_user
The comment may seem merely annoying, but after a fair amount of reading on AI,
I think it is actually an OK, if snarky, take on state-of-the-art AI.

Current AI involves complex statistical analysis that essentially sacrifices
robustness and unbiasedness for predictiveness. It's basically blindly
squeezing all the extrapolative qualities available out of a huge dataset,
getting impressive-seeming results while picking up all sorts of "garbage
associations" along the way.

~~~
pfortuny
At the same time, there are really no 'models' like in, say, Physics or
Engineering. The state of the art consists of 'well, this seems reasonable and
seems to work'. I mean 'models' as in 'mathematical models', not as in 'a
model of a simplified neuron'.

Which for a chess/go-winning program is pretty harmless and even interesting.

Not so much for autonomous driving and/or security, for example.

------
kerng
Aren't humans tricked the same or in similar ways? For instance, advertising
exploits weaknesses in human psychology. Advertising is sort of an adversarial
tactic.

AI attacks take it to the next level, but the problems are not new I'd say,
just an evolution of existing issues that are now applied to computer
algorithms instead of humans.

~~~
dreamcompiler
> Aren't humans tricked the same or in similar ways?

The human perceptual apparatus is eminently hackable. This is how optical
illusions work and it's how magicians make a living.

But those hacks are not usually the same as the relatively simple hacks
possible with current neural nets.
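
For anyone curious what "relatively simple" means here: many of these attacks
boil down to nudging each input feature against the model's gradient. A toy
sketch of that idea (the fast gradient sign method) on a linear classifier,
with invented weights and inputs:

```python
# Toy sketch of a gradient-sign (FGSM-style) attack on a linear
# classifier. The weights, bias, and inputs are invented for
# illustration; real attacks target deep networks the same way.

def predict(w, b, x):
    """Linear classifier score: positive means class 1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(w, x, eps):
    """Nudge each feature by eps in the direction that lowers the
    score, i.e. against the sign of its weight (its gradient)."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [1.0, -0.8, 0.9], 0.0   # hypothetical model
x = [1.0, 0.5, 1.0]            # input the model gets right
adv = perturb(w, x, eps=0.6)   # small per-feature change

print(predict(w, b, x) > 0)    # True: classified positive
print(predict(w, b, adv) > 0)  # False: the perturbation flips it
```

The same recipe applied to an image classifier yields the famous
imperceptibly-perturbed photos that get confidently mislabeled.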

~~~
richardw
Sure, but then AI is working by mirroring our brains. The nature of the
problem is similar to ours; AI is just less capable of solving it at this
point. It's not some weird problem brought about by the fundamental brokenness
of our approach to AI.

It shows that it's not ready for prime time. That's different from "our
approach to AI is broken, let's do something else."

------
diminish
Human "intelligences" have been hacked en masse since the dawn of humanity,
at the cost of millions of lives.

But it seems, rather, to be a force driving society forward.

~~~
tim333
AI might be able to help with that by spotting subterfuge out there.

------
alicorn
"Fernandes explains that self-driving cars use multiple sensors and algorithms
and don't make decisions on any single machine-learning model."

For now, maybe, but from what I have heard there is a push from car
manufacturers to consolidate on as few sensor types as possible, to minimize
costs. Which implies just that: the desired end state is one type of sensor,
as cheap as possible, plus one model that is expected to tell the absolute
truth. Which is stupid, of course, but I assume the cost-cutting guys are
unaware of, or willfully ignorant of, the pitfalls.

On another note, the article conflates machine learning / neural networks
with AI. I expected better of Wired...
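
The redundancy being cut is exactly what makes spoofing harder. A minimal
sketch of why, assuming a hypothetical car that fuses independent per-sensor
classifications by majority vote (sensor names and labels are invented):

```python
# Hypothetical sketch: majority-vote fusion over independent sensor
# models. Sensor names and labels below are invented for illustration.

def fused_decision(votes):
    """Pick the label most of the sensor models agree on."""
    return max(set(votes), key=votes.count)

# An adversarial sticker fools the camera model, but the lidar and
# radar models, fed different physics, still see a stop sign:
votes = ["stop_sign", "speed_limit_45", "stop_sign"]  # lidar, camera, radar
print(fused_decision(votes))   # one fooled sensor gets outvoted

# Consolidate down to the single cheap sensor and the same attack wins:
print(fused_decision(["speed_limit_45"]))
```

With one sensor and one model, whatever fools that model fools the car.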

~~~
doublekill
DL is a subset of ML is a subset of AI. No fallacy or conflation.

~~~
alicorn
I see your point. From my perspective, however, it is akin to insisting that
foot == human.

~~~
BoiledCabbage
It's clearly more akin to "a human is a mammal".

One is a category, the other is a sub-category.

------
omouse
This is the saddest part of this new artificial intelligence gold rush: not
everyone has security/cybersecurity/defensive-programming training. The
mistakes we see with ETH smart contracts or with JavaScript on web apps are
going to be replicated at a far greater scale, and at a more dangerous level,
with machine learning.

Not only will there be algorithmic bias problems, but security problems too!
:'(

~~~
kzrdude
Do you have a link to go with that reference to mistakes? Thanks

~~~
xianb
For ETH the big one is the DAO hack, where they had to fork the chain to
recover the funds.

There are also other smaller bugs, like the Coinbase and Parity wallet ones.

------
mlazos
I’m actually really tired of people acting like the world is ending because ML
algorithms can be tricked. The only cases where this will result in actual
danger are self-driving cars and military applications - hardly a world-ending
crisis. Other than that, it's a user sending bad inputs to a search algorithm
or a natural-language classifier, neither of which will result in anything
other than a wrong result for the end user. Adversarial examples are an
interesting research area for making ML more robust, but hardly the
world-saving pursuit its proponents make it out to be.

Edit: the real security crisis is the massively non-secure web which our
entire society depends on and which we are going to connect everything to

------
motohagiography
It just means the main use case for "AI" (not ML) is probably too ambitious
for tech for the foreseeable future. ML has a lot of useful applications in
decision support and data analysis. But the key use case for AI is to be
sufficiently sophisticated that it can shield owners and managers from
liability.

I would go so far as to say there are almost no instances where someone using
the term "AI" is not referring to the deflection of liability for an outcome
from a human owner or manager. It's a euphemism and we should identify it when
people use it. Perhaps to coin a phrase, we should specify that AI really
means "deflected or diffuse accountability," or DA.

~~~
excalibur
That's not exactly wrong, but it's not very sexy either. If you mean to
replace the term AI with something more accurate, it needs to be a term its
proponents can actually get behind.

~~~
c22
Accountability Indirection? Ass-covering Intelligence?

------
keyme
When security is only an afterthought, the product (and in this case, the
entire discipline) will remain insecure for decades.

Just like security was an afterthought in the software world up to the late
'90s. Software today is still written in C...

~~~
userbinator
When security is the top and only goal, you'll end up with a dystopian prison
society controlled by authoritarian governments and corporations. IMHO that's
an even worse alternative.

------
m3kw9
In summary, adversarial examples.

~~~
Isamu
Aye. There may be some special definition of the word “crisis” known only to
copy editors.

------
FlowNote
Stumbled across this unique way to make ad exchanges violate Civil Rights
legislation. Clever way to hack AI.

[https://archive.fo/iMAbs](https://archive.fo/iMAbs)

------
simplecomplex
In other words, artificial intelligence isn’t intelligent at all.

------
ngcc_hk
How many are killed by military weapons vs. by cars? Can a car kill? Can a
car be a weapon?

Would a car be used by someone to kill?

Can a car be remotely controlled - randomly, against a target, or en masse -
to kill?

Would anyone be interested in hacking one, then?

------
oyebenny
Feels like an episode of Travelers from Netflix.

------
solarkraft
What a shitty title. "Turning data against itself". It doesn't make me want
to click the link at all. I did, though, and while blinded by the terrible
layout, it seems like they're showing off some of the _techniques_ used to
mess with AI systems.

~~~
blackflame7000
I guess the idea is that by feeding it bad data they can do the equivalent of
implanting bad thoughts in someone. Think of it this way: if your parents
raised you racist, you would probably think racist thoughts until you learned
enough perspective.
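
The "raising" analogy maps onto what the literature calls data poisoning:
corrupt the training set and the learned boundary moves. A toy sketch with
made-up numbers, using a 1-D nearest-centroid classifier:

```python
# Toy data-poisoning sketch on a 1-D nearest-centroid classifier.
# All numbers below are invented for illustration.

def centroid(points):
    return sum(points) / len(points)

def classify(x, c_neg, c_pos):
    """1 if x is closer to the positive centroid, else 0."""
    return 1 if abs(x - c_pos) < abs(x - c_neg) else 0

clean_pos = [4.0, 5.0, 6.0]          # honest positive examples
clean_neg = [-4.0, -5.0, -6.0]
poisoned_pos = clean_pos + [-40.0]   # attacker injects one mislabeled outlier

c_neg = centroid(clean_neg)                          # -5.0
print(classify(1.0, c_neg, centroid(clean_pos)))     # 1: correct
print(classify(1.0, c_neg, centroid(poisoned_pos)))  # 0: boundary dragged away
```

One planted "bad thought" in the training data is enough to flip how a clean
input is classified afterwards.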

~~~
tcbawo
Noticing differences in people is natural and starts at an early age. I
observed this in my own children. However, what we identify as racism and
sexism is learned behavior. It emanates from stereotyping, prejudice, and
bias.

~~~
zozbot123
Stereotyping, prejudice and bias _are_ natural behavior and are observed even
in children. Whether the specifics of racism, at most, are learned is somewhat
immaterial. As for sexism, there's even less ambiguity about it being deeply
rooted in human nature in a way that's very hard to change; it certainly
isn't our "societal structures" telling kids that females have _cooties_.

