
When an AI finally kills someone, who will be responsible? - fforflo
https://www.technologyreview.com/s/610459/when-an-ai-finally-kills-someone-who-will-be-responsible/
======
mywittyname
It should absolutely be the same as situations where "non-AI" systems fail and
kill someone.

It's too difficult to define "AI." If "AI" software is held to a different
standard than traditional software, a legal loophole will emerge, and people
will exploit it in both directions. People can claim their nested ifs are
decision trees, or that their linear adaptive filter is a neural network. Or
the other way around.

The legal system can't even figure out prior art vis-à-vis patents, so it's
pretty clear it is far from able to tackle technical decisions regarding AI.

------
woliveirajr
And do we really need to find someone responsible?

I don't mean cases where the AI was just used as a tool, like someone
programming a robot to kill a person, because then there was intent and the
whole AI package was just an instrument for committing a crime.

But in cases where there is no such connection, no intent, do we really need
to find someone responsible? Wouldn't it be better to focus on how to avoid
and how to correct the "bug" that caused the problem?

We humans struggle to deal with cases where a death occurs but was perhaps
the best outcome. For example, is it better to switch the train's track and
kill 1 person instead of letting it continue and kill 5?

~~~
sp332
Yeah, when an accident happens you still need to figure out who has financial
liability for the damages. And you don't want a "tragedy of the commons"
situation where no one takes responsibility for improving things. Just picking
someone to be "it" gives them some incentive to go out of their way to prevent
future accidents.

Edit: that said, excessive finger-pointing is bad too. If everyone thinks that
the one person deemed responsible is the only one who has to fix things,
they're not going to work on it themselves. The NTSB seems to have the best
model for this.

------
Isamu
The way it's phrased, it sounds like there is a lot of doubt. I think it is
more like: the fog of public understanding of AI could give a creative lawyer
a toehold in a trial.

I mean, strong AI is still impressively far off, and we will continue to think
of these things as "products" that the maker carries a certain liability for.
Also the user has to employ the product in a way that does not show
indifference to the risks of harming others.

If what they will do becomes less predictable (as they become smarter), it
becomes more a matter of manufacturer liability.

If you command your robot to wield a chainsaw around town, I believe you would
be accepting more liability (although the argument could be made that the
robot's failure to refuse was a manufacturer's defect).

------
John_KZ
When a defective elevator falls, is it the elevator's fault? AI is simply a
machine. The creators and maintainers of the machine are responsible. I don't
understand why someone would even argue about this.

------
woliveirajr
Well, unfortunately this has happened: "Self-driving Uber car hits, kills
pedestrian in Arizona." Discussion at
[https://news.ycombinator.com/item?id=16619917](https://news.ycombinator.com/item?id=16619917)

------
mbrodersen
Industrial robots kill people all the time, and have for years. So there is
nothing new here legally.

------
whatyoucantsay
When Homo sapiens out-competed and slaughtered the Neanderthals, which
Neanderthal was responsible?

