
Robot Criminals - raleighm
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3227710
======
inetknght
When you start labeling robots as criminals, you will find humans trying their
darndest to influence a robot to do criminal things instead of doing them
themselves.

And, funny enough, humans seem to be pretty good so far at influencing robots.

~~~
imh
The distinction doesn't seem to be 'xor', but 'and'. The question of human
criminality is unchanged, while they posit an additional criminality for the
robot itself. The most interesting bit to me here is that there might be times
where the human decision making is so diffuse that no humans are liable, but
something can still be legally done if you hold the robot itself accountable.

I could imagine a robo equivalent to the 2008 financial crash, maybe a
horrible transit accident or something, but where no single person or
corporation is quite criminal enough to hold accountable, but the end result
is still bad. In one world, all the people go "Oh hey, we weren't really
punished" and do it again. In another world, you put the robot in "jail" and
the people go "Oh shit, they took our property. That was expensive. Let's not
do that anymore." ... optimistically.

~~~
throwaway2048
Or they write it off as a cost of doing business because fixing it is more
expensive than buying another robot.

~~~
DuskStar
I mean, in that situation they have even less incentive to fix things if you
_don't_ confiscate the robot... The precondition was "the human decision
making is so diffuse that no humans are liable" after all.

In that case, you have three possibilities:

1. You don't confiscate/destroy the robot. BigCorp goes "hell yeah, cost of
committing $crime is 0!" and continues.

2. You confiscate/destroy the robot. BigCorp goes "nice, we can make another
for $5 - we'll take another 100,000 thanks" and continues.

3. You confiscate/destroy the robot. BigCorp goes "shit, that robot cost a
ton of money - better work out a way to avoid doing that again in the future"
and fixes the problem.

~~~
throwaway2048
Or you can directly fine the company very large sums of money.

------
osterwood
There is an unfortunately large disconnect between the general public's
understanding of robotics and the reality of robotics. I don't know how to
reduce it, but I feel it's important to.

The hypothetical "how should a robot car decide which bystander to hit when
it loses control" and similar trolley problems are so, so far off in the
future. Right now autonomous cars can't even operate in inclement weather.

I do find the concept of "moral reasoning" in robots an interesting one, but
again I feel the author has skipped ahead to the far future and missed more
tractable questions. If moral reasoning were in place today, I think the
likely decision of all autonomous cars would be to drive slower. Is saving a
few minutes of a trip worth a dramatically greater probability of fatality?
US DOT found that speed limit increases from 55 to 65 MPH increased the
likelihood of fatality by 24% once an accident had occurred.

[https://safety.fhwa.dot.gov/speedmgt/ref_mats/fhwasa09028/resources/Effect%20of%20Increases%20in%20Speed%20Limit.pdf](https://safety.fhwa.dot.gov/speedmgt/ref_mats/fhwasa09028/resources/Effect%20of%20Increases%20in%20Speed%20Limit.pdf)

------
jessaustin
This seems like a total dodge. Before the Singularity (which hasn't happened
yet), a machine that does something does so because some human or humans
arranged for that. TFA is a way to get distracted by the machine and not
address the human who set it in motion. _Cui bono?_ Oh, probably some big
firms who plan to release robots into the human and natural environments and
don't want to be held liable when those robots harm humans and property.

------
doodliego
They would be treated like pets (taken away, owner held liable) until actual
sentience is legally established, which isn't happening anytime soon.

------
Animats
Has anyone built a criminal dApp yet? One that needs no further human
supervision, and just goes on doing its thing?

~~~
tlrobinson
Casino dApps would be illegal to use in certain jurisdictions.

