
Robotics Open letter to the European Commission - T-A
http://www.robotics-openletter.eu/
======
jochung
Your Honour, the courtroom is a crucible. In it we burn away irrelevancies
until we are left with a pure product, the truth for all time. Now, sooner or
later, this man or others like him will succeed in replicating Commander Data.
And the decision you reach here today will determine how we will regard this
creation of our genius. It will reveal the kind of a people we are, what he is
destined to be. It will reach far beyond this courtroom and this one android.
It could significantly redefine the boundaries of personal liberty and
freedom, expanding them for some, savagely curtailing them for others. Are you
prepared to condemn him and all who come after him to servitude and slavery?
Your Honour, Starfleet was founded to seek out new life. Well, there it sits.
Waiting. You wanted a chance to make law. Well, here it is. Make a good one.

------
taneq
If I'm reading it right, this is about creating a legal equivalent of the
"human error" defense for robotic equipment, such that the operating person or
company can just say "whoops, the robot made a stupid mistake" and have that
be the end of it, the same way they'd currently say "whoops, our employee made
a stupid mistake."

~~~
femto
That was my reading too.

It's still a machine, so treat it like any other machine and hold the
designers liable.

The cynic in me says that this is what happens when the principles used in
idealised software environments, where failure has limited consequences, meet
the physical world. A better model would be that used in embedded software, an
area where software has a long history of interacting with the physical world
and the aim is intrinsic safety.

If a designer wants to use a pseudo-random AI system, then it should be
encased in a deterministic wrapper which keeps it within a safe envelope.
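
As a minimal sketch of that idea (the controller interface, limits, and
command shape below are all invented for illustration, not taken from the
letter or any real system):

    # Sketch: a deterministic wrapper enforcing a safe envelope around a
    # non-deterministic AI controller. All names and limits are illustrative.
    import random
    from dataclasses import dataclass

    @dataclass
    class SafeEnvelope:
        max_speed: float = 0.5   # m/s, enforced no matter what the AI proposes
        max_force: float = 20.0  # N

    class RandomAI:
        """Stand-in for an unpredictable learned controller."""
        def propose(self, sensor_state):
            return {"speed": random.uniform(-5, 5),
                    "force": random.uniform(0, 100)}

    class DeterministicWrapper:
        def __init__(self, ai, envelope):
            self.ai = ai
            self.envelope = envelope

        def command(self, sensor_state):
            # The AI may propose anything; the wrapper deterministically
            # clamps the result so the robot never leaves the safe envelope.
            proposed = self.ai.propose(sensor_state)
            clamp = lambda v, lo, hi: max(lo, min(hi, v))
            return {
                "speed": clamp(proposed["speed"],
                               -self.envelope.max_speed,
                               self.envelope.max_speed),
                "force": clamp(proposed["force"], 0.0,
                               self.envelope.max_force),
            }

    wrapper = DeterministicWrapper(RandomAI(), SafeEnvelope())
    print(wrapper.command(sensor_state=None))  # always within limits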

~~~
gumby
> It's still a machine, so treat it like any other machine and hold the
> designers liable.

I think you mean the operator. If I run someone over in a car, I'm
responsible, not the manufacturer nor the vehicle itself. Sure, manufacturers
can be liable for design flaws (steering wheel didn't work?) but it's
appropriate that that not be the default.

(This rule should also apply to another great invention: the corporation.
Sadly, however, the "drivers" never seem to be held accountable.)

~~~
rahimnathwani
How about setting up a new company to own each new machine, thus shielding the
parent corporation from liability for that machine's mistakes (beyond the
value of the individual machine)?

~~~
taneq
That's basically one of the approaches they discuss:

> The legal status for a robot can’t derive from the Legal Entity model, since
> it implies the existence of human persons behind the legal person to
> represent and direct it. And this is not the case for a robot.

They also cover treating the robot as a trust fund:

> c. The legal status for a robot can’t derive from the Anglo-Saxon Trust
> model also called Fiducie or Treuhand in Germany. Indeed, this regime is
> extremely complex, requires very specialized competences and would not solve
> the liability issue. More importantly, it would still imply the existence
> of a human being as a last resort – the trustee or fiduciary – responsible
> for managing the robot granted with a Trust or a Fiducie.

Edit: The silly thing is that they want to ensure that robots have "the status
of electronic persons responsible for making good any damage they may cause"
while simultaneously holding that a robot can't be a Natural Person "since the
robot would then hold human rights, such as [...] the right to remuneration".

So in their book a robot can't earn money but is personally financially liable
for any damage it causes.

------
lifeisstillgood
So the EU Parliament (a bunch of nobodies everyone ignores) has recommended
that the EU Commission (the EU civil service: generally experienced civil
servants who get to pass rules their in-country colleagues and politicians
find too unpalatable to enact at home) eventually pass laws giving robots
legal person-like status, because, you know, Asimov is cool.

And actual experts have written saying that's a bad idea.

But I had to read it twice to find that out. Guys, take a tip from
newspapers: "Who the hell reads the second paragraph?" Next time put the
story in the first paragraph.

Anyway, yeah, like nothing bad will come of making robots legal persons.

We can wait till Robin Williams asks.

------
John_KZ
Wow, are these clowns really that stupid, or are they paid by industry?
US-based think tanks have to reveal their financial details (donations,
etc.); in the EU they don't. This needs fixing ASAP.

I'm sure Google, Amazon, Uber, Tesla and Boston Dynamics would just _love_
to blame their AI products instead of themselves. "But it was the
robot-person that ran over the pedestrian, your honor. Jail the car, sir,
not the manager." So very convenient, eh?

------
eksemplar
I work in the Danish public sector, where we're slowly adopting robotics
(which is really just macros) and machine learning, and I'm a bit split on
the issue.

On one hand, RPA software needs to be able to log on to systems as though
it were a real employee for it to function, because our systems are
designed for people. I mean, if the systems were designed to be efficient,
I would be using a REST API, not a fucking macro.
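
To make that contrast concrete, here is a rough sketch (the endpoints,
selectors, credentials, and bot interface are all hypothetical, made up for
illustration) of the REST call I'd want versus the UI-driving macro that
RPA actually performs:

    # Hypothetical contrast: a machine-friendly API call vs. an RPA macro
    # that drives the human UI. Nothing here refers to a real system.
    import requests

    def update_case_via_api(case_id, status):
        # The wished-for path: a direct, designed-for-machines interface.
        resp = requests.put(
            f"https://example.gov/api/cases/{case_id}",  # hypothetical endpoint
            json={"status": status},
            timeout=10,
        )
        resp.raise_for_status()

    class UiBot:
        """Minimal stand-in for an RPA/browser-automation client."""
        def open(self, url): print("open", url)
        def type_text(self, selector, text): print("type", selector)
        def click(self, selector): print("click", selector)
        def select(self, selector, value): print("select", selector, value)

    def update_case_via_macro(bot, case_id, status):
        # The RPA reality: log in as a "real employee" and drive the UI.
        bot.open("https://example.gov/internal/login")     # hypothetical URL
        bot.type_text("#username", "rpa_service_account")
        bot.type_text("#password", "********")
        bot.click("#login")
        bot.type_text("#case-search", str(case_id))
        bot.click("#open-case")
        bot.select("#status-dropdown", status)
        bot.click("#save")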

On the other hand, I think it would be dangerous if we start protecting
machine learning like we protect our human workers. Right now our approach
to machine learning is similar to how advertising companies predict your
behavior based on previous data. It's built on determinism, and I think
that is a flawed way to look at human beings. If we look at the science of
history, determinism only works for so long; the more facts we have, the
harder it becomes to answer why something happened the way it did.

I view machine learning results (and I'm talking about machine learning
used on unstructured data sets to make administrative legal
recommendations, not the sort that suggests analytic graphs on data sets)
as fundamentally flawed. We're shooting toward a society where people in
high positions in government actually believe we can predict behavior.
We've done proofs of concept that are promising as well; as an example,
we're able to predict which shitty parents will need "interventions" (I
don't know the English word, sorry), but only with 70% accuracy. If we
start acting on those data sets, we may make matters worse, because we
already know that by expecting someone to fail we actually increase the
chance of it happening.
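
As a rough back-of-the-envelope sketch of why even 70% accuracy can be a
weak basis for acting (the base rate and error split below are assumptions
for illustration, not figures from our proof of concept):

    # All numbers below are assumed for illustration only.
    population = 1000     # families screened
    base_rate = 0.10      # assumed: 10% actually need an intervention
    sensitivity = 0.70    # assumed: model catches 70% of true cases
    specificity = 0.70    # assumed: model clears 70% of non-cases

    needs_help = population * base_rate                              # 100
    true_positives = needs_help * sensitivity                        # 70
    false_positives = (population - needs_help) * (1 - specificity)  # 270

    flagged = true_positives + false_positives                       # 340
    precision = true_positives / flagged                             # ~0.21

    # Roughly four out of five flagged families would not actually need help.
    print(f"Flagged {flagged:.0f} families; only {precision:.0%} truly need help")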

I think machine learning is interesting, and I also think you can use it
to say a lot about what will happen, but we still can't use it to tell us
why something is happening. Until we do (if ever), we should be careful
how much we trust it in administrative law.

