Hacking of artificial intelligence is an emerging security crisis (wired.co.uk)
138 points by headalgorithm 29 days ago | 52 comments

"AI" seems to be synonymous with "crappy statistical analysis algorithm".

I’m not sure why this comment is being upvoted this much, but it seems fairly snarky and adds little value to the discussion.

The comment may just seem annoying, but after a fair amount of reading on AI, I think it actually is a fair, if snarky, take on state-of-the-art AI.

Current AI involves complex statistical analysis that essentially sacrifices robustness and unbiasedness for predictiveness. It's basically blindly squeezing all the extrapolative qualities available in a huge dataset, getting impressive-seeming results while picking up all sorts of "garbage associations" along the way.
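To make the "garbage associations" point concrete, here is a minimal numpy sketch (all names and numbers are invented for illustration): a linear classifier is trained on data where a spurious "background" feature happens to track the label perfectly, so the fit leans entirely on the background and collapses when that correlation breaks at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "cow vs. camel" setup: feature 0 is the animal's true shape
# signal, feature 1 is the background (grass=+1, sand=-1).  In the training
# set the background is perfectly confounded with the label.
n = 200
y = rng.choice([-1.0, 1.0], size=n)        # +1 = cow, -1 = camel
shape = y + rng.normal(0, 2.0, size=n)     # weak true signal
background = y.copy()                      # perfect garbage association
X = np.column_stack([shape, background])

# Ordinary least-squares "classifier" (predict the sign of the linear score).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # the background weight dominates the shape weight

# Deployment: same animals, backgrounds swapped (cows on sand).
X_test = np.column_stack([shape, -y])
acc = np.mean(np.sign(X_test @ w) == y)
print(acc)  # far below the training accuracy of 1.0
```

The model is "right" on its training distribution and wrong everywhere else, which is the sense in which the extrapolation is blind.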

At the same time, there are really no 'models' like in, say, Physics or Engineering. The state-of-the-art consists in 'well, this seems to be reasonable and seems to work'. I mean 'models' like 'mathematical models' not like 'a model of a simplified neuron'.

Which for a chess/go-winning program is pretty harmless and even interesting.

Not so much for autonomous driving and/or security, for example.

No, it's a scientifically accurate description of the absurd usage of the term "AI" by modern media, VC and just about everyone else.

Source: I make goofy statistical models for a living.

Aren't humans tricked in the same or similar ways? For instance, advertising exploits weaknesses in human psychology. Advertising is sort of an adversarial tactic.

AI attacks take it to the next level, but the problems are not new I'd say, just an evolution of existing issues that are now applied to computer algorithms instead of humans.

> Aren't humans tricked in the same or similar ways?

The human perceptual apparatus is eminently hackable. This is how optical illusions work and it's how magicians make a living.

But those hacks are not usually the same as the relatively simple hacks possible with current neural nets.
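For the curious, the "relatively simple hack" in question is usually something like the fast gradient sign method (FGSM): perturb every input dimension by a tiny step in the direction that increases the model's loss. A toy numpy sketch against a linear classifier (the weights and inputs are made up for illustration; real attacks target deep nets the same way, via the gradient):

```python
import numpy as np

# A toy linear "image" classifier: score = w @ x, class = sign(score).
# FGSM-style attack: for a linear model with true label y, the loss
# gradient w.r.t. x points along -y*w, so the attack step is
#   x' = x - epsilon * y * sign(w).
rng = np.random.default_rng(1)
d = 1000
w = rng.normal(size=d)                              # fixed model weights
x = np.sign(w) * 0.1 + rng.normal(0, 0.01, size=d)  # confidently class +1
y = 1.0

print(w @ x > 0)  # True: correctly classified

epsilon = 0.15
x_adv = x - epsilon * y * np.sign(w)   # small change in every dimension
print(np.max(np.abs(x_adv - x)))       # at most 0.15 per dimension

print(w @ x_adv > 0)  # False: same "image", now misclassified
```

The per-dimension change is bounded by epsilon, but because every dimension moves adversarially at once, the score flips sign.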

Sure, but then AI is working in terms of mirroring our brains. The nature of the problem is similar to ours, and AI is just less capable of solving it at this point. It's not some weird problem brought about by the fundamental brokenness of our approach to AI.

It shows that it's not ready for prime-time. That's different to "our approach to AI is broken let's do something else."

Human "intelligences" have been hacked en masse since the dawn of humanity, at a cost of millions of dead.

Yet that seems to be a force driving society forward.

AI might be able to help with that by spotting subterfuge out there.

"Fernandes explains that self-driving cars use multiple sensors and algorithms and don't make decisions on any single machine-learning model." For now, maybe, but from what I have heard, there is a push from car manufacturers to consolidate on one type of sensor feed as much as possible, to minimize costs. Which implies exactly that: the desired end state is one type of sensor, as cheap as possible, plus one model that is expected to tell the absolute truth. Which is stupid, of course, but I assume the cost-cutting guys are unaware of, or willfully ignorant of, the pitfalls. On another note, the article conflates machine learning / neural networks with AI; I expected better of Wired...

DL is a subset of ML is a subset of AI. No fallacy or conflation.

I see your point. From my perspective however it is akin to insisting that foot == human.

It's clearly more akin to "a human is a mammal".

One is a category, another is a sub-category.

"Which is stupid of course, but I assume that the cost-cutting guys are not aware / willfully ignorant of the pitfalls"

Or maybe they know something you don't?

A sensor that is too expensive to deploy saves no one. A cheap sensor/model that is not as reliable as an expensive one, but is cheap enough to deploy widely, might save a lot of people, even if it occasionally kills a few.

A $100,000 sensor will save 10 people per year and kill 1 person (hint: only the wealthy will have this one). A $10,000 sensor will save 100 people per year and kill 100 people (hint: only the top 3% will have this one). A $1,000 sensor will save 25,000 people per year and kill 10,000 people (hint: everyone will have this one).

The cheap sensor wins in aggregate even if it's not perfect.
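Taking the grandparent's made-up numbers at face value, the aggregate arithmetic works out like this (a sketch only; the figures are the hypothetical ones above, not real data):

```python
# The hypothetical numbers from the comment above:
# sensor tier -> (lives saved per year, lives lost per year).
tiers = {
    "$100,000 sensor": (10, 1),
    "$10,000 sensor": (100, 100),
    "$1,000 sensor": (25_000, 10_000),
}

# Net lives saved per year for each tier.
for name, (saved, killed) in tiers.items():
    print(f"{name}: net {saved - killed:+d} lives/year")
```

On those numbers the cheap sensor nets +15,000 lives a year versus +9 for the expensive one, which is the whole argument in one line.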

And a sensor/AI package that is cheap enough to deploy to everyone can kill everyone in a car if it is hacked properly, and monoculture means everyone gets hit.

You are not considering the black swan events, though, honestly, "someone with malignant intent hacks the entire car network" isn't even a black swan, it's perfectly predictable. What major hacking power involved in a war with some other country full of self-driving cars would pass up the ability to hack all the self-driving cars to crash themselves, with something as simple as "stop self-navigating and set throttle to 100%"? The resulting carnage would certainly serve as a solid distraction to the military.

Personally, I'm increasingly coming around to the position that self-driving cars ought to just be banned, or at least, held to exceedingly high security criteria, among which I'd probably start with "the self-driving car is not permitted to be hooked to any network, ever, and all radio communication must be provably unable to result in code execution at the electronic level, before it even gets into the computer layer". If nobody is going to give a shit about security and these things are all going to be hooked up to a network full time, the perfectly obvious resulting disaster outweighs all the potential benefits by multiple orders of magnitude.

There's this "intent" bias when people consider threats. Hacking a military drone seems a huge threat. Hacking a car seems a minor threat because cars aren't intended to kill people.

Yet terrorist attacks with trucks driving into a crowd are very deadly, with one truck there are often dozens of dead. Cars are as effective at killing as weapons. Arguably more effective, because they can kill lots of people at once, and can move around without causing panic till the last moment.

We're just used to them.

>And a sensor/AI package that is cheap enough to deploy to everyone can kill everyone in a car if it is hacked properly, and monoculture means everyone gets hit.

You obviously didn't watch Maximum Overdrive. The Russians have a satellite mounted laser that can save us from such a scenario.

I mean... if we're going to talk about fantasy, let's go all in.

This is the saddest part of this new artificial intelligence gold rush: not everyone has security/cybersecurity/defensive programming training. The mistakes we see with ETH smart contracts or with JavaScript on web apps are going to be replicated at a far greater scale and at a more dangerous level with machine learning.

Not only will there be algorithmic bias problems, but security problems too! :'(

Hacking isn't the crisis; everyone knows that everything new will be attacked. The crisis is the lack of prudence in deploying insecure tools in a way that leads to absolutely predictable outcomes.

If experimental software development were like experimental chemistry, people would be a lot more careful. If the detonations happen in the lab instead of the field, I'm pretty sure the robocar makers wouldn't be so quick to talk about the sad, necessary deaths while we learn how to make them, for instance.

I don't know if traditional security techniques can help with adversarial examples and that sort of thing. It is a very different kind of attack. Training people to be paranoid and to worry about data provenance is important, though.

I bet a bigger problem will just be research-quality code with boring, ordinary security flaws getting thrown into production.

Do you have a link to go with that reference to mistakes? Thanks

For ETH, the big one is the DAO hack, where they had to fork the chain to recover.

There are also other, smaller bugs, like the Coinbase and Parity wallet incidents.

I'm actually really tired of people acting like the world is ending because ML algorithms can be tricked. The only cases where this will result in actual danger are self-driving cars and military applications, hardly a world-ending crisis. Other than that, it's a user sending bad inputs to a search algorithm or natural-language classifier, neither of which will result in anything other than a wrong result for the end user. Adversarial examples are an interesting research area for making ML more robust, but hardly the world-saving pursuit its proponents make it out to be.

Edit: the real security crisis is the massively non-secure web which our entire society depends on and which we are going to connect everything to

When security is only an afterthought, the product (and in this case, the entire discipline) will remain insecure for decades.

Just like security was an afterthought in the software world up to the late 90's. Software today is still written in C...

When security is the top and only goal, you'll end up with a dystopian prison society controlled by authoritarian governments and corporations. IMHO that's an even worse alternative.

Just means the main use case for "AI," (not ML) is probably too ambitious for tech for the foreseeable future. ML has a lot of useful applications for decision support and data analysis. But the key use case for AI is: to be sufficiently sophisticated that it can shield owners and managers from liability.

I would go so far as to say there are almost no instances where someone using the term "AI" is not referring to the deflection of liability for an outcome from a human owner or manager. It's a euphemism and we should identify it when people use it. Perhaps to coin a phrase, we should specify that AI really means "deflected or diffuse accountability," or DA.

That's not exactly wrong, but it's not very sexy either. If you mean to replace the term AI with something more accurate, it needs to be a term its proponents can actually get behind.

Accountability Indirection? Ass-covering Intelligence?

I mean, there are broad areas where automation can improve things. For example, ML analyses of submitted resumes are just as crappy as human analyses of submitted resumes. On the other hand, replacing the resume pre-screen with an automated coding quiz / challenge (or at least providing the automated coding quiz / challenge to people whose resume is filtered out in the human pre-screen) greatly improves the quality of the pre-phone-call screening step.

Adversarial AI is mostly about warfare, not business. The focus of adversarial research is to avoid starting a war over an adversarial attack.

In summary, adversarial examples.

Aye. There may be some special definition of the word “crisis” known only to copy editors.

Stumbled across this unique way to make ad exchanges violate Civil Rights legislation. Clever way to hack AI.


How many are killed by military weapons versus cars? Can a car kill? Can a car be a weapon?

Would a car be used by someone to kill?

Can a car be remotely controlled (randomly, against a target, or en masse) to kill?

Would anyone be interested in hacking them, then?

In other words, artificial intelligence isn’t intelligent at all.

Feels like an episode of Travelers from Netflix.

Yet another shitty title. "Turning data against itself". It doesn't make me want to click the link at all. I did, though, and, while blinded by the terrible layout, it seems like they're showing off some of the techniques used to mess with AI systems.

>The hacking of artificial intelligence is an emerging security crisis.

They stress that it could happen, but that it hasn't really been found to be pervasive in the wild, yet they also call it a "security crisis". Sensationalisation much?

Sensationalism is definitely the right word for this. The article talked more about "white-noise attacks" on NNs than anything else, but I've yet to hear of a white-noise attack that did anything worse than make a NN misidentify an object. Sure, in the right system, that could possibly wreak havoc, but right now, it's not much more than a parlor trick. Maybe if an attacker knew enough about their targeted model, they could have a little more control over the outcome, but that would require some white-box insight into the model. But the mere fact that it's possible to feed corrupted pictures into a NN until it breaks isn't enough to call this an "emerging security crisis".

You do not need white box access to a target model anymore. You can find adversarial samples for an ensemble of similar networks and it will fool the target network.
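A minimal numpy sketch of that transfer idea, using noisy linear models as stand-ins for separately trained networks (all numbers invented for illustration): craft the perturbation against the ensemble's average gradient, and it carries over to a held-out "target" model that was never queried.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 500
w_true = rng.normal(size=d)

def train_surrogate():
    # Surrogate models: noisy copies of the same underlying decision
    # boundary, standing in for networks trained on similar data.
    return w_true + rng.normal(0, 0.3, size=d)

ensemble = [train_surrogate() for _ in range(5)]
w_target = train_surrogate()        # black-box target, never queried

x = np.sign(w_true) * 0.1           # input classified +1 by all models
y = 1.0

# Attack the *average* gradient of the ensemble, not the target.
g = np.mean(ensemble, axis=0)
x_adv = x - 0.2 * y * np.sign(g)

print(w_target @ x > 0)       # True: target classifies x correctly
print(w_target @ x_adv > 0)   # False: the perturbation transfers
```

The transfer works because models trained on similar data end up with similar gradients, so a direction that hurts the ensemble also hurts the unseen target.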

The parlor trick becomes dangerous to the powers that be when you start fooling surveillance systems, smart gun turrets, or drones. This is already happening in the background. That is where the funding comes from: not an SV company fearing that their face filter does not work, but governments afraid their deep-net border security will be rendered moot.

If anything, the article is countering hype by citing researchers saying we don't really know how deep learning learns and represents objects, and that deep nets are a very weak copy of the human brain.

It’s much easier to modify a 2D static photo than it is to modify the real world.

If you want to make a self driving car think there’s an Ostrich in the road, the easiest way would be to put an Ostrich in the road.

I guess the idea is that by feeding bad data they can do the equivalent of implanting bad thoughts in someone. Think of it this way: if your parents raised you racist, you would probably think racist thoughts until you learned enough perspective.

Noticing differences in people is natural and starts at an early age. I observed this in my own children. However, what we identify as racism and sexism is learned behavior. It emanates from stereotyping, prejudice, and bias.

Stereotyping, prejudice and bias are natural behavior and are observed even in children. Whether the specifics of racism, at most, are learned or not is somewhat immaterial-- as for sexism, there's even less ambiguity about it being deeply rooted in human nature somehow in a way that's very hard to change - it certainly isn't our "societal structures" telling kids that females have cooties.

I believe that's exactly what the parent was implying.


Using our "natural trains of thought" most people believed the earth was flat. Most beliefs about racial inequalities have similar merit.

Where exactly is the science that says all races and sexes are not equal? Sure there are different traits, but those don't speak to equality.

If there are different traits they can't be equal.

Whether they should be equal in rights, for example, that's a different matter. But races and sexes are different, mentally and physically.

Which is smarter, someone with a degree in physics or someone with a degree in biology?

Right now, the concepts we have of "intelligence" are not sufficiently rigorous for a meaningful discussion of interracial differences — how much of "common sense" is culture, for example? How true is the Sapir-Whorf hypothesis? Test scores for Asian women can be noticeably modified if you tell the participants it's "to test if Asians are better than westerners at maths" versus "to test if men are better than women at maths"; how much effect does that observation have on any historical results?

That’s not to say “there are no genetic influences on intelligence” — there must be, otherwise bacteria would be as smart as humans — but rather that large scale groups of humans are so close that we don’t have sufficient evidence to support treating them as different.

Difference and equality are not mutually exclusive concepts, because "equality" is based on what you're measuring. A chicken sandwich and a burger are "different" but "equal" in value. Two markets in the company I work for may be geographically and demographically "different", but they could be "equal" in their revenue.

The only way two people who are "different" would not be equal would be if you're measuring specific traits alone, and not the holistic value of the person. Which, for me, raises the question of what traits you think are "better" than others.

Do I even want to know?

Curious example, for me personally it was the opposite

