
Courts are using risk-assessment software to sentence criminals - mayava
https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now
======
mcherm
One of my big concerns is whether the algorithms are institutionalizing
racism, with legal decisions that make it impossible to challenge them.

After all, the algorithm has been trained on information about recidivism that
was collected in a world where racism skews arrest rates, conviction rates,
and sentencing.[1] That means the algorithm is almost certainly baking in a
racial bias. Now, I'm sure they aren't foolish enough to put "race" in as one
of the input factors, but other correlated factors will allow the algorithm to
keep enforcing that racism, now with legal immunity.

[1] Do I really need to footnote this?
[http://www.huffingtonpost.com/kim-farbota/black-crime-rates-your-st_b_8078586.html](http://www.huffingtonpost.com/kim-farbota/black-crime-rates-your-st_b_8078586.html)
is one source that addresses all of these, but there are many, many other
sources.

~~~
mproy
" ... other correlated factors will allow the algorithm to continue to enforce
this racism"

Which correlated factors are you thinking of?

~~~
advisedwang
Some proxies for race:

- Where somebody lives
- Their name
- What sports teams somebody supports
- Their finances
- Consumer preferences
- Education level
- What medical conditions somebody has
- Political leanings

I'm not saying these factors can't legitimately be used in a risk assessment,
but they could be used to make a good bet on race.
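For a sense of how the leakage works, here is a minimal sketch (assuming numpy
and scikit-learn are available, with entirely synthetic data and made-up
feature names) showing that a model which never sees a race column can still
recover race from correlated inputs:

    # Synthetic illustration: train only on "neutral" features that happen
    # to correlate with a hidden attribute, then show the attribute is
    # recoverable anyway. Data and correlations are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    race = rng.integers(0, 2, n)                   # hidden attribute, never an input feature
    zip_area = race * 0.8 + rng.normal(0, 0.3, n)  # proxy: residential segregation
    income = -race * 0.5 + rng.normal(0, 0.5, n)   # proxy: income gap
    X = np.column_stack([zip_area, income])

    clf = LogisticRegression().fit(X, race)
    print("race recovered from proxies:", clf.score(X, race))  # well above chance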

~~~
Sacho
Finances? While it is true that blacks and native americans have the highest
poverty rates, if you take a random poor person they are about 50% more likely
to be white than black. The same goes for many of your other "proxies". Therefore
they would be a _poor_ bet for race.

If you were using these proxies to identify the race of a person jailed for a
crime, they might be good predictors, but only marginally more useful than
just blindly guessing black, since they constitute the majority of
incarcerated people.

I think, by attempting to remove the smoke screens a "real racist" would use,
you give them a new one - "look, our opponents want to ignore actual data".

------
rayiner
> These algorithmic outputs inform decisions about bail, sentencing, and
> parole. Each tool aspires to improve on the accuracy of human decision-
> making that allows for a better allocation of finite resources.

It's really not clear to me that much is gained from having very precise
decisions made about bail and sentencing. Trying to predict the future is a
fool's errand, whether a judge does it or a computer. It'd be better to just
set fair, uniform standards (particularly for bail where bail should be
granted presumptively unless unique circumstances are present).

Unfortunately, using machine learning for sentencing is just the tip of the
iceberg. "Scientism" is rife in the criminal justice system. The U.S.
Sentencing Guidelines, for example, are utter gibberish. Sentences are
calculated to the month using complex formulas:
[http://www.ussc.gov/guidelines/2016-guidelines-manual/2016-chapter-4](http://www.ussc.gov/guidelines/2016-guidelines-manual/2016-chapter-4).

> The total points from subsections (a) through (e) determine the criminal
> history category in the Sentencing Table in Chapter Five, Part A.

> (a) Add 3 points for each prior sentence of imprisonment exceeding one year
> and one month.

> (b) Add 2 points for each prior sentence of imprisonment of at least sixty
> days not counted in (a).

> (c) Add 1 point for each prior sentence not counted in (a) or (b), up to a
> total of 4 points for this subsection.

> (d) Add 2 points if the defendant committed the instant offense while under
> any criminal justice sentence, including probation, parole, supervised
> release, imprisonment, work release, or escape status.

> (e) Add 1 point for each prior sentence resulting from a conviction of a
> crime of violence that did not receive any points under (a), (b), or (c)
> above because such sentence was treated as a single sentence, up to a total
> of 3 points for this subsection.
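
To make the mechanics concrete, here is a rough sketch of subsections (a)
through (e) as code; the prior-record structure and numbers are hypothetical,
and (e) is stubbed out:

    # Rough sketch of the criminal history point arithmetic quoted above.
    # Prior-sentence records are invented; this only illustrates the mechanics.

    def criminal_history_points(priors, under_sentence_at_offense):
        # (a) 3 points per prior sentence of imprisonment exceeding 13 months
        a_priors = [p for p in priors if p["months"] > 13]
        points = 3 * len(a_priors)

        # (b) 2 points per prior sentence of at least 60 days not counted in (a)
        b_priors = [p for p in priors
                    if p not in a_priors and p["months"] * 30 >= 60]
        points += 2 * len(b_priors)

        # (c) 1 point per remaining prior sentence, capped at 4 points
        c_priors = [p for p in priors if p not in a_priors and p not in b_priors]
        points += min(len(c_priors), 4)

        # (d) 2 points if the instant offense was committed while under a
        #     criminal justice sentence (probation, parole, etc.)
        if under_sentence_at_offense:
            points += 2

        # (e) 1 point per violent prior folded into a "single sentence" and not
        #     counted above, capped at 3 points (omitted here for brevity)
        return points

    priors = [{"months": 24}, {"months": 3}, {"months": 1}]
    print(criminal_history_points(priors, True))  # 3 + 2 + 1 + 2 = 8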

But it's not like this is based on an empirical statistical model correlating
sentences with recidivism or deterrence effects. It's classic scientism,
believing that an algorithmic sentence based on completely arbitrary rules is
somehow better than an arbitrary sentence handed out by human judgment.

~~~
mattnewton
I don't disagree that it could be much better and based on something other than
arbitrary nonsense, but these guidelines are designed to bring consistency to
sentences within the same jurisdiction, which would make them fairer than
completely arbitrary judge-by-judge sentences. At least everyone is subject to
the same arbitrary nonsense instead of the mood and familiarity of the judge.

~~~
Karunamon
I'm reminded of that quote about a "foolish consistency being the hobgoblin of
little minds", but when you consider that statistically, people get lighter
sentences immediately after a lunch break [1], there's probably some value in
removing the human from the equation...

[1]:
[http://www.pnas.org/content/108/17/6889](http://www.pnas.org/content/108/17/6889)

------
dsfyu404ed
>While this can be the fastest route, the GPS’s algorithm does not concern
itself with factors important to truckers carrying a heavy load, such as the
43’s 1,300-foot elevation drop over four miles with two sharp turns.

I know this is somewhat off topic but the lack of advanced options for GPS
routing is such a PITA. It would be trivial to add check-boxes for things
like:

"I'm towing a trailer, don't make me take dumb lefts across multiple lanes,
avoid clustterfawks and don't make me take unnecessary turns"

"Yes I'm wealthy enough to afford an iphone, that doesn't mean I want you to
send me through a $5 bridge toll"

"I'm taking a road-trip, send me on a route that uses ten fewer roads even if
it takes twenty more minutes, I don't want to have to look for a turn every
10min"

I know tons of options aren't good for the UI but just hiding all that stuff
behind an "advanced preferences" menu or something would be nice.

Just a simple tie-in to a weather API that increases the cost of route
features that are a PITA in snow would be nice (no, I don't want to stop on a
downhill to take a >90deg left across 40mph traffic in snow, thank you very
much).
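
As a sketch of what I mean (the penalty numbers and feature tags are made up;
the weather flag would come from whatever API you tie in), all the router
needs is one extra multiplier per edge:

    # Hypothetical sketch: inflate the cost of snow-hostile road features
    # when the forecast says snow. Values are invented for illustration.

    SNOW_PENALTIES = {
        "steep_downhill": 3.0,
        "sharp_left_across_traffic": 4.0,
        "unplowed_side_street": 2.5,
    }

    def edge_cost(edge, snowing):
        cost = edge["base_minutes"]
        if snowing:
            for feature in edge["features"]:
                cost *= SNOW_PENALTIES.get(feature, 1.0)
        return cost

    edge = {"base_minutes": 2.0,
            "features": ["steep_downhill", "sharp_left_across_traffic"]}
    print(edge_cost(edge, snowing=True))   # 2.0 * 3.0 * 4.0 = 24.0
    print(edge_cost(edge, snowing=False))  # 2.0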

~~~
michaelt
Special GPSes for truckers are available from companies like TomTom and
Garmin.

If truckers are choosing to save a few hundred bucks by using their
smartphones, which don't have the trucker-specific features, surely the fault
lies with the trucker for using a consumer-grade product, not with the
consumer-grade product for not being professional-grade?

~~~
macNchz
A truck GPS is extremely helpful if you're driving a commercial vehicle any
appreciable distance in an unfamiliar place. They include many additional data
points in routing: vehicle height for overpass clearance, vehicle weight for
bridges, "No Trucks"/"Passenger Cars Only"/"No Commercial Vehicles"
restrictions etc. You can get yourself into hot water if you're following car
GPS directions with even a mid-sized U-Haul. See:
[http://11foot8.com](http://11foot8.com)

~~~
logfromblammo
You're already in hot water with a U-Haul. Anecdotally, they have been the
worst cargo vehicle rental company I've dealt with. Get a truck from Budget,
Hertz, or Penske instead. Every friend whom I have helped with their move has
experienced a mechanical failure of some kind with a U-Haul vehicle that
significantly increased the stress of an already stressful event.

Know your clearance and pay attention to road signs. Your regular car
insurance may cover rental _cars_, but it probably doesn't cover rental
_trucks_. So if you peel your top on a low underpass, you may find yourself in
a world of pain and regret for a long time. Ideally, if you are in a rental
truck, you should already know how to get where you are going without GPS
assistance.

~~~
macNchz
Indeed, I used U-Haul as a kind of generic word for a rental truck here, but I
much prefer Penske if I'm renting a truck. They have fewer locations but their
trucks are clearly better maintained.

------
bmh_ca
I wonder if the algorithm is evidence-based, and learns from the results of
prior decisions.

In which case it is possible it would eventually discover that in the USA
incarceration is very strongly linked to recidivism. It follows that the
algorithm might refuse to incarcerate many convicts.

Which is arguably exactly what the algorithm should do, and what politicians
will not or cannot: employ evidence to improve the methods and outcomes of the
criminal justice system.

~~~
dsfyu404ed
The problem with that is that criminality has its own positive feedback loop.

More crime in an area -> more cops -> more convictions for little stuff ->
more big sentences handed out -> more lives ruined -> more crime -> goto
start.

~~~
humanrebar
> more convictions for little stuff

Seems like an easier way to break that loop is to decriminalize the "little
stuff" instead of capriciously enforcing it. No algorithm is going to be able
to do that.

------
jpalomaki
Playing devil's advocate, I'd say that the main problem with employing AI
would be that it will show how bad the decisions humans make are. This is
exactly the kind of case where computers, relying on just hard data, would do
much better than humans.

The article complains that AI is a black box for the defendant. How is this
any different from a judge's brain? You can't peek into his mind to figure out
what is behind the decision. A judge can give some justifications, but you
won't know if those are the real reasons or if the decision is mostly based on
the defendant's skin color, socioeconomic background, or clothing.

~~~
fao_
> just relying on hard data would do much better than humans.

Humans can acknowledge and correct for biases in training data much better
than computers can.

EDIT: this person put it much better than I have:
[https://news.ycombinator.com/item?id=14139772](https://news.ycombinator.com/item?id=14139772)

------
cmahler7
Based on some studies I've seen, human judges are terrible due to basic human
nature, so I'd like to see something happen to make things more objective. Of
course, the algorithm would have to be open, as opposed to the proprietary
software being used now.

For example, just having your sentencing happen right before lunch or at the
end of the day results in harsher punishment, simply because the judge is
hungry or tired.

Same goes for other basic things like women getting lighter sentences,
minorities harsher sentences, etc.

~~~
daveguy
> Same goes for other basic things like women getting lighter sentences,
> minorities harsher sentences, etc.

The problem is, "AI" today is glorified pattern matching. So, it learns what
"should be" based on the current state. In other words because the pattern
exists it will learn precisely that "women should get lighter sentences and
minorities should get harsher ones"

Not only that, they are _so good at pattern matching_ that you don't even need
to provide the gender or race for them to identify those as a parameter.

It would be great if we could train an AI to eliminate bias, but as it is,
they are being trained to reinforce bias that already exists.

Until we get natural language _understanding_ and context aware AI, using AI
for sentencing is a terrible idea.

------
pizzetta
Instead of asking to stop its use, why not ask that it be "supervised" up to
the point where its results beat the average judge in a given area of law?

~~~
tangent128
What metric do you use to say it "beats" a human judge?

~~~
jpalomaki
If we're thinking about setting bail, isn't the target to keep the bail
amounts (and the number of cases where bail is denied) as low as possible
while making sure that most people appear in court?

Here the simple metrics would be the average bail and the percentage of people
who appeared in court.

On a job like this, I can't really see how a human could beat a machine. And
you don't need any kind of "artificial intelligence". This is just standard
statistics stuff that every insurance company and bank does.
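
A minimal sketch of scoring those two metrics over past decisions (the records
are invented; real ones would come from court data):

    # Toy evaluation of a batch of past bail decisions on the two metrics above.
    cases = [
        {"bail": 500,  "appeared": True},
        {"bail": 0,    "appeared": True},   # released without bail
        {"bail": 2000, "appeared": False},
        {"bail": 1000, "appeared": True},
    ]

    avg_bail = sum(c["bail"] for c in cases) / len(cases)
    appearance_rate = sum(c["appeared"] for c in cases) / len(cases)

    print(f"average bail: ${avg_bail:.0f}")           # lower is better
    print(f"appearance rate: {appearance_rate:.0%}")  # higher is better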

------
I_am_neo
AI is only able to keep performing as it has learned from the data set. In
other words, it keeps the status quo within a few percent of an intended
target, unless given the clear go-ahead to just keep learning, at which point
it seems to fail at random.

~~~
marcosdumay
I'd say it optimizes something from the point of view of the status quo.

There's nothing in machine learning that biases it toward keeping the status
quo (unlike people), but there's also nothing there biasing it toward
optimizing the correct thing (also unlike people).

That said, having machines in a consultative position under a judge may be a
good thing.

------
Fjolsvith
I wonder if it were possible to get a copy of the software so as to figure out
what the best possible responses to a presentencing interview would be in
order to get the most favorable computation.

Like - Yes, I'm very social. I play bridge, take my kids to soccer, I'm a
member of the PTA. Yes, I exercise. I have a weight set at home, ride my
bicycle, play racquetball at the gym. No sir, I don't do drugs, never touched
them. No sir, I don't drink either. Yes sir, I do have a degree, two in fact!

Just in case you might need it at a presentencing interview, of course.

------
openasocket
Using traditional machine learning techniques for this purpose is a non-
starter and completely unacceptable. Neural networks are just a black box, and
don't produce an inspectable justification or reasoning. The best you can say
is that the model correctly predicts recidivism in X% of cases for some
sample.

I'm not opposed to using other algorithmic methods, but the algorithm needs to
be transparent. Though that would be difficult to do outside of some pretty
tightly controlled parameters. We can't currently make a system that can take
into account arbitrary facts about the case and weigh ethical implications.

------
deepnotderp
Oh dear....

Could we demand that they pass a course in machine learning before they use
it?

------
gunnyguy121
Holy shit, we literally live in a dystopia.

~~~
amelius
If only we could replace lawyers by computers too :)

~~~
Danihan
That is my #1 wish for us for the next 20 years.

------
stevenalowe
Even if the software were statistically perfect and somehow immune to our
cultural biases, it should still not be used. An individual stands before the
judge, not a statistical demographic group. The probability of an imaginary
statistical individual's recidivism is irrelevant; what is relevant is the
current state of that individual. Relying on statistical models is both lazy
and unfair.

------
orasis
"Simple rules for complex decisions" Use machine learning techniques to allow
humans to make complex decisions:
[https://arxiv.org/abs/1702.04690](https://arxiv.org/abs/1702.04690)
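
As I read it, the gist is a transparent point system: a handful of features
with small integer weights and a published cutoff. A toy sketch, with the
features, weights, and threshold invented purely for illustration:

    # Toy transparent scoring rule in that spirit; nothing here comes from the paper.
    WEIGHTS = {
        "prior_failures_to_appear": 2,
        "pending_charges": 1,
        "age_under_23": 1,
    }

    def score(defendant):
        # Add the weight of every feature that applies to this defendant.
        return sum(w for k, w in WEIGHTS.items() if defendant.get(k))

    defendant = {"prior_failures_to_appear": True, "age_under_23": True}
    s = score(defendant)
    print(s, "detain" if s >= 3 else "release")  # the threshold is public and auditable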

------
basicplus2
Could also lead to a kind of self-fulfilling prophecy: by ignoring the
'individual' in the decision-making process, the groups being targeted will
learn over time that their personal efforts to reform are a waste of time.

------
accra4rx
What if somebody takes the Fifth and does not answer any questions? Wouldn't
that force the court to fall back on some manual process to come to a
conclusion?

