
The dangers of letting algorithms make decisions in law enforcement - smil
http://www.slate.com/articles/technology/future_tense/2015/04/the_dangers_of_letting_algorithms_enforce_policy.single.html
======
okasaki
> Employees at the center referred her to the online system. Uncomfortable
> with the technology, she asked for help with the online forms and was
> refused.

Seems like it was humans that failed her. I'm not sure what algorithms have to
do with this.

~~~
kayfox
I think the main point of the article is that the algorithms are applied
without deference to real world conditions or impact.

This particular example appears to be a serious violation of the Americans
with Disabilities Act, which I have noticed often falls by the wayside when
developing many systems like this. In this case the humans deferred the
decisions to the machines, most likely because the administrative regulations
say so and leave no way to get around it "manually".

I have myself encountered these sorts of issues. At one point many years ago
I stopped filing for unemployment because I had found a job. Weeks later a
computer decided that my job search records needed to be audited for the
period I was not filing, and if that didn't fly, they would want the last 6
months of UI back. On the advice of the employment department, I filed a
blank form with a letter saying I was employed and who to talk to to verify
that. There was no process for this, so a blank job search form was entered
and nothing else done. I was then subject to collection actions, appealed,
and an administrative law judge interviewed me and determined the collection
was in error. A year later the employment department stopped many of these
practices.

We should not blindly defer to machines. The more we do, the less we know how
to, or have the power to, correct situations when they get out of hand.

~~~
ams6110
_I think the main point of the article is that the algorithms are applied
without deference to real world conditions or impact._

I'm not seeing evidence that algorithms are worse than people in this regard.
How many horror stories are there of people citing rules or policy "without
deference to real world conditions," or just simply being flat-out wrong? It's
almost a cliche when talking about government bureaucracies.

I'm less concerned about algorithms strictly applying policy (that's what they
do after all) and more interested in whether overall results are better than
what humans do. Most people have biases about one thing or another (race,
gender, age, physical appearances) and it's very difficult to eliminate those
as they may be operating below the level of conscious thought. Also,
government bureaucracies aren't exactly known for hiring the best and
brightest. It seems likely to me that algorithms would at least eliminate
those issues.

Edit: posted before I saw jqm's reply

------
irl_zebra
I take from this that there needs to be some flexibility in how the results of
the algorithms are applied. I also find the first example in the article
unconvincing as a real mistake. It states that Robert McDaniel had:

> a misdemeanor conviction and several arrests on a variety of offenses—drug
> possession, gambling, domestic violence

then it seems like it's calling the algorithms a mistake that

> branded Robert McDaniel a likely criminal

Maybe I'm just sheltered, but a history of arrests, drug possession, and
domestic violence tells me that the person is probably a criminal (though
whether that rises to the level of being one of Chicago's top 420 criminals I
can't say).

~~~
DanBC
> but a history of arrests

You're not looking at convictions? Just arrests?

~~~
rhizome
Yeah, it says "a" conviction, and we all are (or should be) aware that arrests
occur much more frequently than guilt.

------
jqm
How does the number of people negatively affected by an algorithm compare to
the number of people who would be negatively affected by a human processor?

I mean, someone could probably write hundreds of similar articles about
negative interactions with callous or incompetent human officials. Having
dealt with at least an average number of DMV type officials over the years, I
can't see that machines could do a whole lot worse.

I do agree with several points of the article though. Let the algorithms be
open to public critique. This is democracy and it should lead to improvement
(eventually). And of course there should always be recourse to human
intervention.

------
DanielBMarkham
I'm not sure if most folks really understand the nightmare we're setting
ourselves up for. It's the domestic policy equivalent of drone warfare.

The western legal system was built and functions inherently on the
precondition that it's people who use, administer, and maintain it. There's a
lot of slack and human interpretation built into the process, and no laws are
constructed such that they are enforced in a mechanical fashion. In addition,
there's the premise that the folks doing the work of enforcing the laws are
virtually the same as those being policed. Finally, severely unjust or
unpopular laws are many times ignored by both the population and the
enforcers.

All of that goes away with machine application of criminal/administrative law.
The system was not built for this.

~~~
icebraining
_Finally, severely unjust or unpopular laws are many times ignored by both the
population and the enforcers._

That's a bug, not a feature. The discretion permitted to LE often leads to
selective enforcement. "The best way to get a bad law repealed is to enforce
it strictly."

~~~
joesmo
"The best way to get a bad law repealed is to enforce it strictly."

Yes, it's done wonders for the drug war.

~~~
icebraining
The drug war is not even close to being strictly enforced - it's actually a
good example of what I was describing.

There's plenty of evidence that poor minorities are punished way more harshly
than others (e.g. _"A 2013 study by the American Civil Liberties Union
determined that a black person in the United States was 3.73 times more likely
to be arrested for marijuana possession than a white person, even though both
races have similar rates of marijuana use."_).

Start enforcing the law with the same strictness on the kids of the rich and
powerful, and you'd see the laws change much faster.

~~~
DanielBMarkham
I usually don't dive in deep on threads, but I feel like I have to say
something. There are folks who seem to agree with this.

I believe you are correct when it comes _to individual laws_. If an individual
law is bad, apply it absolutely equally and watch it change. Very cute, very
succinct, and it sounds like it might work. (Makes for a great slogan even if
it doesn't. See North Korea.) But this assumes that automatic processing would
amount to equal enforcement. I doubt that.

The bigger problem is this: taking an individual law and publicly applying it
equally was not the scenario I was describing. The average person is guilty of
3 felonies a day. Even assuming that stat is exaggerated, in a state with
"three strikes and you're out" we'd all be in prison for life within the year.
So that's not happening. The question on the table is not about one law or
another; it's about how to take a system of tens of thousands of laws, all
unequally applied, and try to make them all work the same. If that doesn't
keep you awake at night, you don't know the legal system. It's Orwellian.

There will be no huge uprising, because the system has to continue working. So
what _will_ happen is that easy-to-data-process parts of the law will be
enforced against people who won't complain too loudly. Let's read that as
"crimes that make people feel morally superior to others" and "crimes
involving easy data collection where the suspects are ill-equipped to defend
themselves". Your car will report if you run a red light, or if you've had 3
glasses of wine instead of 2 at the restaurant. The DMV will know if you're
poor and driving without insurance just to get to work -- because a local LE
official will track you with a LPR. It'll be just more of the same, but there
won't be any humans involved.

So you'll still see prosecutorial judgment, it just won't be evident. At all.
Instead more and more people will run afoul of the law in little ways that
won't make too much of a stink.

As Thomas Paine said, it's better to be the victim of a bad king than of a
complex system of government. If you're the victim of a bad king? You have
somebody to blame. If you're the victim of some complex, impossible-to-
understand system? You're still screwed -- but now there's nobody to point a
finger at. Much, much worse.

I find it disturbing that folks would think that taking people out of the law
enforcement system would be a good thing. Not only would it not be a good
thing, it would be a disaster for all concerned. </rant>

~~~
icebraining
But drunk driving laws are not "severely unjust or unpopular"; you're talking
about a different issue than the one I replied to.

------
pdkl95
"Decision-making algorithms are politics played out at a distance, generating
a troubling amount of emotional remove."

This is absolutely key. Adding distance[1] between the point where a decision
is made and the point where the consequences of that decision are realized
makes it harder for any feedback from those consequences to affect the person
making the decision. This makes the decisions worse (from lack of information)
_and_ the implementation worse (an error must be much larger before the
feedback from that error reaches the decision maker).

You see this effect in many areas. An obvious example is the law enforcement
mentioned in the article (or the military), where "just following orders", or
its modern variant "just following an algorithm", ends up causing problems.

A more interesting example might be the existence of the derivatives market
and the invention of increasingly-exotic financial instruments. A bank giving
someone a loan has some fairly well-known possible behaviors, and is
(probably) close enough to allow feedback between the parties for things like
capitalism to work (if you don't like the bank's behavior, you let them know
that isn't acceptable by refinancing at a different bank). On the other hand,
bad decisions bundled up and hidden in collateralized debt obligations
sheltered these bad decisions until the problem blew up and introduced the
world to the phrase "too big to fail".

A very interesting discussion of this problem - focused on how this kind of
distance relates to human _honesty_ (and rationalization) - is this RSA
Animate featuring Dan Ariely:
[https://www.youtube.com/watch?v=XBmJay_qdNc](https://www.youtube.com/watch?v=XBmJay_qdNc)

[1] measured in either number-of-hops or time

------
Maken
Using algorithms to support decision making and putting a bad UI barrier
between the users and the managers are two different things.

Anyway, public administrations should indeed publish how their algorithms
work in order to ensure they are reflecting the official policies.

------
bayesianhorse
This is not a problem of "algorithms" but rather of stupid policies. A
programming manager at Google would have been fired if he had put such obvious
errors in PageRank (or whatever they call it these days).

Algorithms and data can only improve effectiveness of these systems and
agencies. However, their use has been combined with drastic funding cuts.
These cuts and the resulting malfunctions aren't exactly a fundamental problem
with data science.

~~~
golergka
But bureaucracy is filled with stupid policies, and has been for centuries.
Stupid policies have to be expected, frankly. But unlike with commercial
companies, people don't have anyone else to turn to. That's why, in the
special case of a government machine — which is (1) without alternative and
(2) error prone — the human sanity check should always be present.

~~~
bsder
The difference is that when a person is involved, you can apply pressure via a
representative, the press, etc. Bureaucracy has been full of "jobsworths"
("Sorry, guv, it's more than me job's worth.") forever.

In reality, the issue is the total underfunding of these services for those
who need them. People aren't switching to computers for these things because
they think it's better (obviously, the policing one is the exception); they
are switching because they have no other way to keep up with the workload.

------
brohoolio
I was once called and asked to take a survey about my interactions with an
employee at a company I do business with. I could tell that the survey as
constructed would not capture my actual concerns with the business processes
and would instead reflect poorly on the employee I dealt with. The failure of
the system would end up being used to mark the employee down even though he
did a good job with the constraints he had.

It's unfortunate that these sorts of automated processes are ending up
targeting edge cases, like things that should be covered by the ADA.

------
j2kun
In the CS community a new (sub)subfield has emerged called "Fairness,
Accountability, and Transparency in Machine Learning" (FATML). It's a young
research topic, but I find it quite interesting.

[http://www.fatml.org/](http://www.fatml.org/)

------
nitwit005
This seems to rest on the false premise that you need computers to make
decisions algorithmically. If someone writes out a set of hard rules as to who
can apply for a welfare program, the result will be the same whether a human
or a machine makes the determination.

Long before computers existed, people complained about "rigid bureaucracy",
which is effectively a complaint that government or business employees stuck
to a process (an algorithm) that had some problems.
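The point can be made concrete with a toy sketch: the rule names and
thresholds below are invented purely for illustration, not drawn from any real
welfare program.

```python
# Toy sketch: a set of hard eligibility rules written out explicitly.
# The rule names and thresholds are invented for illustration only.

def eligible(applicant: dict) -> bool:
    """Return True only if the applicant passes every hard rule."""
    rules = [
        lambda a: a["monthly_income"] < 1500,    # income ceiling
        lambda a: a["household_size"] >= 1,      # must report a household
        lambda a: not a["receiving_other_aid"],  # no duplicate benefits
    ]
    return all(rule(applicant) for rule in rules)

print(eligible({"monthly_income": 1200,
                "household_size": 3,
                "receiving_other_aid": False}))  # True: passes every rule
print(eligible({"monthly_income": 2000,
                "household_size": 3,
                "receiving_other_aid": False}))  # False: fails income rule
```

Whether a clerk walks this checklist by hand or a program evaluates it, the
determination is identical; the "algorithm" exists either way.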

------
6d0debc071
I have sat across from a call centre in a government office and listened to
the workers running through the written version of algorithms. I've spoken to
people who worked there and heard how crushing it was to know that someone was
getting screwed but to be totally unable to do anything about it because the
policy dictated their reaction. And I've worked with charities and listened to
the other end of those phone calls; people screaming that their kids are going
to be taken away because their benefits have been delayed and they can't
afford to feed them.

The underlying assumption of this piece seems to be that turning decision
making over to algorithms reduces positive discretion. But the humans in these
situations frequently have no more discretion than the machine does, and
inefficiency also has a human cost. It seems false to me to pretend that what
these algorithms are doing, at least in terms of the majority of their
immediate effect, is qualitatively different.

What you're losing when you encode something as an algorithm is the insight
that you get from having humans in the loop. Intuition; the things that people
haven't thought to measure yet. That's the weakness in any statistical
technique - you need a human to lend the numbers relevance, to say which
relationships are important to know; otherwise they're just a sequence of
events.

But you need to start off with a system that leverages human strengths in
order for that criticism to make sense. Human judgement only has an advantage
in a system designed to use the different sorts of value that it offers. If
your call centre worker is not truly responsible for the outcome of the call,
and if you don't regularly attempt to get feedback from them to inform policy
decisions, then it makes no difference if they are replaced by a machine. They
were being treated as one to begin with, and the value they added to the
organisation by virtue of being human, of having professional judgement, was
being thrown away anyway.

All this does, in a lot of cases, is make existing flaws more obvious.

The exception I can think of to this is the criminal justice system, where
there are examples of positive discretion. However, there are also examples of
negative discretion there. There are many stupid laws on the books, and
selectively enforcing those laws allows you to screw, more or less, whoever
you want. It's not surprising that a system that would mechanically implement
those laws would produce undesirable outputs, it's just that it's finally
being applied to people who have the power to say something about it, (and,
perhaps, have their concerns taken seriously enough to alter policy.)

For all that there is a loss in the case of the criminal justice system, there
is also a gain: Encoding something as an algorithm makes the flaws in the
process more apparent.

~~~
avivo
Exactly this.

I often describe programming as _creating tiny bureaucracies_.

You put some information into a "form" (e.g. a search bar). The front desk
bureaucrat (mouse, keyboard, screen, etc.) sends it off to other bureaucrats
and they follow a bunch of rules to process it and give the front desk some
new "paperwork" to give to you (e.g. the resulting web page).

What we are doing with automated algorithms is getting rid of the human
bureaucrats and replacing them with "robotic" bureaucrats. That _can_ be a
_really_ bad thing depending on the context, but even the human bureaucrats in
many cases were already ~ robots.
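The metaphor can be sketched in code; every name and rule below is invented
for illustration.

```python
# Toy sketch of a "tiny bureaucracy": a front desk takes in a form,
# back-office rules process it, and new "paperwork" comes back.
# All names and rules here are invented for illustration only.

def back_office(request: str) -> str:
    # The rule-following bureaucrats: no judgement, only a lookup table.
    filing_cabinet = {
        "renew license": "Form DL-44 issued",
        "report address change": "Form AC-2 issued",
    }
    return filing_cabinet.get(request, "Request denied: no matching rule")

def front_desk(form: str) -> str:
    request = form.strip().lower()  # the intake clerk tidies the form
    return back_office(request)

print(front_desk("  Renew License "))  # Form DL-44 issued
print(front_desk("appeal decision"))   # Request denied: no matching rule
```

Swapping the human bureaucrats for this lookup table changes who follows the
rules, not the rules themselves - which is exactly the point above.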

~~~
avivo
That said, once you can automate this stuff, there are new types of policies
you can make which would be too complicated otherwise, and that is a place
where we can _unwittingly innovate into new ways of hurting people_.

On the other hand, exposing the inherent inhumanity of strict bureaucracy via
conversations about automation may actually be a force of awareness and
change. An opportunity to explicitly create "human integrations" at key touch-
points where people would otherwise fall through the cracks (think API hooks
where you can integrate PagerDuty).

------
EdwardCoffin
This reminds me of the terrifying epistolary short story Computers Don't Argue
[1] by Gordon R. Dickson

[1] online here:
[http://www.dave.rainey.net/calendars/dystopias/process3.html](http://www.dave.rainey.net/calendars/dystopias/process3.html)

------
godisdad
See also:
[http://en.wikipedia.org/wiki/Therac-25](http://en.wikipedia.org/wiki/Therac-25)

------
zby
Compare and contrast this with Tim O'Reilly's essay proposing Algorithmic
Regulation: [http://beyondtransparency.org/chapters/part-5/open-data-
and-...](http://beyondtransparency.org/chapters/part-5/open-data-and-
algorithmic-regulation/)

------
davidgerard
This sort of thing is why Smart Contracts are actually the worst idea.

------
soup10
I'll take the algorithms any day of the week. The sooner we remove assholes
from administering the law, the better.

