
The Death of Rules and Standards
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2693826
======
TeMPOraL
I can see some lawyers pulling pranks by setting up the micro-directives so
that when you follow them, you end up in a loop...

Seriously though, an important thing about the law is the ability to (more
or less) reason about it: I can predict what rules may apply to my future
situation and adjust my behavior proactively. With the approach described in
the abstract, that becomes impossible by design - micro-directives tailored
to specific situations will be too specific and too minutiae-dependent to
predict beforehand.

---

Skimming the paper, I realize the 'micro-directives' are basically cases of
computers making decisions. They are not laws, they're machine outputs that
tell humans what to do (operate or don't, drive forward or stop, etc.). So on
one hand, it's not as scary as it sounds. On the other, does it make sense to
view it from this angle? Why not just talk about what computers are better at
deciding than us?

~~~
hosh
In other words, we'd use an always-connected device (like a phone or a watch)
to help us comply with micro-directives. I can totally see some folks
rebelling against that. There's already fear related to strong AIs.

On the other hand, this is _exactly_ the same class of problems Google and
other self-driving car makers face when it comes to questions of ethics and
morality. There were several recent articles on HN about that. The question
posed was: should your self-driving car swerve to avoid those kids who just
got off the school bus, sacrificing you and the passengers?

That leads to an implicit question: should Google and other engineers be
responsible for writing such algorithms?

What these two law professors are proposing offers a different option:
change the way policy is made. Such micro-directives (which require a
computing device to execute) would cover the behavior of self-driving cars
as a general case rather than a special case. The paper may frame it as
getting the benefits of rules and standards without the costs, but what
we're really talking about is making our legislation legible for machines to
follow.

On an even bigger scale: how do micro-directives change how we conceive of a
democracy or a republic? Are policy makers sufficiently versed in
engineering to reason through the impact of the laws? Then again, the idea
is to derive the directives using Big Data methods. Does that require
specialists to design and deploy the system? Wasn't the original idea behind
the American form of democracy a literate voting public that can reason
about and discuss the impact of policy, and cast informed votes for those
who represent them? Are Big Data methods a more accurate representation of
the will of the people, or less?

Now, I've only read the first few pages, but I kinda wonder now whether the
paper will mention any of this (and in particular the self-driving car
example). Gonna have something to talk about with my law geek buddies for a
while.

~~~
digi_owl
> The question posed was: should your self-driving car swerve to avoid those
> kids who just got off the school bus, sacrificing you and the passengers?

That strikes me as seeing the car computer as having the same limitations as a
human driver.

More likely the car computer has long since recognized the bus as being,
well, a bus, and has slowed down to give itself a shorter braking distance,
etc.

These are the exact same rules that will be drilled into any driver who gets
a sane course before being issued a license.

Keep in mind that right now, what regulates speed limits etc. is human
reaction time.

Once held down, brakes can make a car stop very quickly.

But the actual braking distance is just a fraction of the distance needed
for a human-driven car to stop.

Most of that distance comes from how long it takes the human brain to go
through the steps of noticing, recognizing, and acting on something (rough
numbers below).
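
A quick back-of-the-envelope sketch in Python, with assumed, illustrative
numbers (50 km/h, ~7 m/s^2 of braking, 1.5 s of human reaction vs. 0.1 s for
a computer; none of these figures are from the paper):

    # stopping distance = reaction distance + braking distance
    #   reaction distance = v * t_reaction
    #   braking distance  = v^2 / (2 * a)
    v = 50 / 3.6  # 50 km/h in m/s
    a = 7.0       # assumed deceleration on dry asphalt, m/s^2

    for label, t_reaction in [("human", 1.5), ("computer", 0.1)]:
        reaction_dist = v * t_reaction
        braking_dist = v * v / (2 * a)
        print(f"{label}: {reaction_dist:.1f} m reacting + "
              f"{braking_dist:.1f} m braking = "
              f"{reaction_dist + braking_dist:.1f} m total")

With those numbers the human needs roughly 35 m to stop and the computer
roughly 15 m, and nearly all of the difference is reaction time, not the
brakes.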

Likely any computer-driven car out there will be the most defensive driver
on the road. Frankly, more vehicle-related deaths come from driver hubris
than anything else.

~~~
hosh
What you say has merit, yet it sidesteps the deeper, more troubling issues:
is it possible to go full Asimov Three Laws of Robotics? Is it possible to
specify ethics and morals precisely, programmatically?

That car example is just a stand-in for an entire class of ethical and moral
dilemmas. Let's look at it from a different angle. There is a hospital in
Oregon that was trying to squeeze out more profit, so much so that it drove
the doctors working there to unionize; the hospital promptly dropped its
attempt at outsourcing. It won't be the last attempt at this. As the
pressure for profits increases, and as machine learning technology gets more
advanced, there will come a point when it is the AI that makes the judgement
call on who lives or dies. Who is going to program that? Are software
engineers necessarily any better at creating systems that compute an optimal
outcome encompassing both ethics and morals than, say, lawmakers or
philosophers?

Are you really OK with having moral choices taken out of your hands and
given to a computer, assuming that whoever wrote the code actually knew what
they were doing and didn't make mistakes?

From this frame, what you are suggesting is that a properly designed computer
is better at making moral and ethical judgements than humans.

~~~
digi_owl
I could have sworn that those 3 laws were specifically there to show how damn
hard it would be to codify morality.

Either by demonstrating how awry things went under strict adherence, or how
one could twist things to get around them without technically violating
them.

------
civilian
The abstract is definitely scary.

But this could never get out of development, at least not with our current
corpus of laws. The product designers and engineers would quickly realize that
there are hundreds of conflicting laws, and beta-testing would show that it
would make life nearly unlivable.

Maybe it'd be a good thing, ya know? Instead of having selective enforcement
of laws, we'll have the oppressive boot coming down on each and every one of
our faces. That'll help smash the state.

~~~
crpatino
The really scary thought is that so-called "beta testing" may happen in
fairly unambiguous contexts, and the real problems will only be discovered
after wide-scale deployment.

As painful as it might be, it's straightforward to turn off a malfunctioning
system that supports a core business need. It is a whole new dimension to
turn off a gizmo that gives you boneheaded interpretations of the law, but
comes with a preinstalled meta-rule that makes it a felony to turn it off or
tamper with it in any way.

Imagine the amount of red tape involved if each bug fix required an
executive order to be deployed!

------
gojomo
A paradise where everything not compulsory is forbidden.

------
marcus_holmes
I, for one, welcome our new micro-directive overlords.

------
microcolonel
This is absolutely insane, but I wouldn't put it past today's society to let
it happen.

On a positive note, it may become very quickly apparent just how many
ridiculous laws there are.

Imagine: you're in your lovely Canadian basement, eating Doritos. You're
engaging in a religious debate on YouTube, and suddenly "blasphemous libel"
comes out of the spybot's speakers: you said "fuck" in a conversation about
Scientology.

------
thecosas
The abstract alone really scares me. Micro-directives?

~~~
rubyfan
The language in the abstract does seem a little vile. I'm certain I don't want
government machine micromanagement directing citizen behavior. Yikes.

