Current AI involves complex statistical analysis that essentially sacrifices robustness and unbiasedness for predictiveness. It's basically blindly squeezing all the extrapolative qualities available in a huge dataset, getting impressive-seeming results while picking up all sorts of "garbage associations" along the way.
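A toy sketch of how a fit latches onto such a garbage association (hypothetical data, numpy only; the feature names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
signal = rng.normal(size=n)                       # genuinely predictive, but noisy
label = (signal + rng.normal(scale=2.0, size=n) > 0).astype(float)
garbage = label + rng.normal(scale=0.1, size=n)   # spuriously tracks the label

# Least-squares fit on both features: the garbage feature dominates.
X = np.column_stack([signal, garbage])
w, *_ = np.linalg.lstsq(X, label, rcond=None)
print(w)   # weight on `garbage` dwarfs the weight on `signal`

# At deployment the spurious correlation is gone, and accuracy collapses.
signal_new = rng.normal(size=n)
label_new = (signal_new + rng.normal(scale=2.0, size=n) > 0).astype(float)
garbage_new = 0.5 + rng.normal(scale=0.1, size=n)  # no longer tied to the label
pred = np.column_stack([signal_new, garbage_new]) @ w > 0.5
print((pred == label_new).mean())   # near chance
```

The training fit looks great precisely because it leans on the association that won't survive deployment.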
Which for a chess/go-winning program is pretty harmless and even interesting.
Not so much for autonomous driving and/or security, for example.
Source: I make goofy statistical models for a living.
AI attacks take it to the next level, but the problems are not new, I'd say: just an evolution of existing issues, now applied to computer algorithms instead of humans.
The human perceptual apparatus is eminently hackable. This is how optical illusions work and it's how magicians make a living.
But those hacks are not usually the same as the relatively simple hacks possible with current neural nets.
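For intuition, one of those relatively simple hacks, the fast gradient sign method (FGSM, from Goodfellow et al.'s "Explaining and Harnessing Adversarial Examples"), reduces for a linear scorer to nudging every input dimension against the gradient sign. A minimal numpy sketch, with a toy linear model standing in for a net and all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                       # dimensionality of a toy "image"
w = rng.normal(size=n)           # linear classifier: label = sign(w @ x)

x = np.clip(0.5 + 0.1 * np.sign(w), 0.0, 1.0)  # confidently positive input
score = w @ x                    # large and positive

eps = 0.15                       # max change per "pixel", small vs the [0, 1] range
x_adv = x - eps * np.sign(w)     # FGSM step against the gradient sign
adv_score = w @ x_adv            # drops by eps * ||w||_1, which grows with n

print(score, adv_score)          # sign flips: the classifier is fooled
```

The point of the linear analysis is that a per-pixel change bounded by eps shifts the score by eps times the L1 norm of the weights, which grows with input dimensionality; deep nets behave locally linearly enough for the same trick to transfer.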
It shows that it's not ready for prime time. That's different from "our approach to AI is broken, let's do something else."
But it seems, rather, to be the force driving society forward.
One is a category, another is a sub-category.
Or maybe they know something you don't?
A sensor that is too expensive to deploy saves no one. A cheap sensor/model that is not as reliable as an expensive one, but is cheap enough to deploy, might save a lot of people, even if it occasionally kills a few.
A $100,000 sensor will save 10 people/year and kill 1 person (hint: only the wealthy will have this one).
A $10,000 sensor will save 100 people/year and kill 100 people (hint: only the 3% will have this one).
A $1,000 sensor will save 25,000 people/year and kill 10,000 people (hint: everyone will have this one).
The cheap sensor wins in aggregate even if it's not perfect.
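The aggregate claim can be checked directly with the (purely illustrative) numbers above:

```python
# Hypothetical numbers from the comment above; net lives saved per year.
tiers = {
    "$100,000 sensor": {"saved": 10, "killed": 1},
    "$10,000 sensor": {"saved": 100, "killed": 100},
    "$1,000 sensor": {"saved": 25_000, "killed": 10_000},
}
net = {name: t["saved"] - t["killed"] for name, t in tiers.items()}
for name, lives in net.items():
    print(f"{name}: {lives:+,d} net lives/year")
```

On these numbers the cheap sensor nets +15,000 lives/year versus +9 for the expensive one, which is the aggregate argument being made.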
You are not considering the black swan events, though, honestly, "someone with malignant intent hacks the entire car network" isn't even a black swan, it's perfectly predictable. What major hacking power involved in a war with some other country full of self-driving cars would pass up the ability to hack all the self-driving cars to crash themselves, with something as simple as "stop self-navigating and set throttle to 100%"? The resulting carnage would certainly serve as a solid distraction to the military.
Personally, I'm increasingly coming around to the position that self-driving cars ought to just be banned, or at least, held to exceedingly high security criteria, among which I'd probably start with "the self-driving car is not permitted to be hooked to any network, ever, and all radio communication must be provably unable to result in code execution at the electronic level, before it even gets into the computer layer". If nobody is going to give a shit about security and these things are all going to be hooked up to a network full time, the perfectly obvious resulting disaster outweighs all the potential benefits by multiple orders of magnitude.
Yet terrorist attacks with trucks driving into a crowd are very deadly; with one truck there are often dozens of dead. Cars are as effective at killing as weapons are. Arguably more effective, because they can kill lots of people at once, and can move around without causing panic till the last moment.
We're just used to them.
You obviously didn't watch Maximum Overdrive. The Russians have a satellite mounted laser that can save us from such a scenario.
I mean... if we're going to talk about fantasy, let's go all in.
Not only will there be algorithmic bias problems, but security problems too! :'(
If experimental software development were like experimental chemistry, people would be a lot more careful. If the detonations happen in the lab instead of the field, I'm pretty sure the robocar makers wouldn't be so quick to talk about the sad, necessary deaths while we learn how to make them, for instance.
I bet a bigger problem will just be research-quality code with boring, ordinary security flaws getting thrown into production.
There are also other, smaller bugs, like Coinbase and the Parity wallet.
Edit: the real security crisis is the massively non-secure web which our entire society depends on and which we are going to connect everything to
Just like security was an afterthought in the software world up to the late '90s. Software today is still written in C...
I would go so far as to say there are almost no instances where someone using the term "AI" is not referring to the deflection of liability for an outcome from a human owner or manager. It's a euphemism and we should identify it when people use it. Perhaps to coin a phrase, we should specify that AI really means "deflected or diffuse accountability," or DA.
Would the car be used by X to kill?
Could the car be remotely controlled (randomly, against a target, or en masse...) to kill?
Would anyone be interested in hacking it, then?
They stress that it could happen, but that it hasn't really been found to be pervasive in the wild, yet they also call it a "security crisis". Sensationalisation much?
The parlor trick becomes dangerous to the powers that be when you start fooling surveillance and smart gun turrets or drones. This is already happening in the background. That is where the funding comes from: not an SV company fearing that their face filter does not work, but governments afraid their deep-net border security will be rendered moot.
If anything, the article is countering hype by citing researchers saying we don't really know how deep learning learns and represents objects, and that deep nets are a very weak copy of the human brain.
If you want to make a self driving car think there’s an Ostrich in the road, the easiest way would be to put an Ostrich in the road.
Whether they should be equal in rights, for example, that's a different matter. But races and sexes are different, mentally and physically.
Right now, the concepts we have of “intelligence” are not rigorous enough for a meaningful discussion of interracial differences. How much of “common sense” is culture, for example? How true is the Sapir-Whorf hypothesis? Test scores for Asian women can be noticeably modified if you tell the participants it’s “to test if Asians are better than Westerners at maths” versus “to test if men are better than women at maths”; how much effect does that observation have on any historical results?
That’s not to say “there are no genetic influences on intelligence” — there must be, otherwise bacteria would be as smart as humans — but rather that large scale groups of humans are so close that we don’t have sufficient evidence to support treating them as different.
The only way two people who are "different" would not be equal would be if you're measuring specific traits alone, and not the holistic value of the person. Which, for me, raises the question of what traits you think are "better" than others.
Do I even want to know?