Algorithmic Justice Could Clear Convictions in California (artificiallawyer.com)
144 points by LegalProduction 25 days ago | 124 comments



> Why is this a big deal? It matters because algorithmic justice, of a complex or simple variety, has come in for a lot of flack in the US, in part because of one truly major mess that was caused by the COMPAS system. So, it’s great to see algorithmic justice doing good.

That's because these don't seem like the same things at all to me. Lumping them both under "algorithmic justice" with an agenda to make the one look better doesn't change that. The thing in this story involves zero machine learning, zero statistics, zero attempts at statistical prediction of the likelihood of anything, and no heuristics at all. Those are the things that have come in for a lot of flack. Looking things up in databases and using them to automatically fill out forms has, indeed, not come in for a "lot of flack."


The public doesn't care about those distinctions. So it's useful to cancel out a negative gut-reaction to encourage a more nuanced view.


You think the public is going to be opposed to _this_ sort of thing, because of what they've heard about the _actual_ statistical algorithmic things?

I seriously doubt it. I think the author was trying to do exactly the reverse: get people to _support_ the statistical algorithmic things because of this thing, which seems like an obviously good idea.

It seems like saying (exaggerated, of course): "Yeah, I know you don't like autonomous military robots killing people, but you like YOUR TOASTER, right? It also works on electric power, and turns itself off unattended when your toast is done. Maybe there's a role for Autonomous Killing Robots after all, they might even be able to toast your bread with their lasers!"

Most of the public has never even heard the term "algorithmic justice", of course. You're saying "the public doesn't care about those distinctions," but it's the OP author _suggesting_ we include software providing _decision-making_ based on statistical algorithms, along with simple rule-based automation, both under "algorithmic justice", _rather_ than making a distinction. The OP author is encouraging us not to make distinctions. I think the public is smart enough to distinguish between software that is _making decisions_ and software that isn't, and it's the OP that is asking that this distinction not be made. As the public is indeed just starting to learn how to think about and categorize these things, I think it's irresponsible to try to educate them _not_ to make distinctions, and I think it's being done to serve an agenda.


Yes, they really will. If you oppose this initiative, you start running ads:

"Your legislator wants to let a computer overrule judges and juries and put criminals back on the streets. Call your legislator now to tell them you oppose HB-555."

There's enough truth there that explaining it away takes far too much effort, and in politics, if you're explaining, you're losing. If you don't think this is how this kind of thing works, you haven't been paying attention.


This isn’t algorithmic justice, just regular automation. So it’s dishonest to suggest it should cancel out the problems that have been created by algorithmic justice.


Agreed.

> select * from judgements where description like '%marijuana%'

Is one of the least impressive IT feats ever.


I strongly disagree. A trivial SQL statement that saves thousands of people lots of hassle is a very impressive IT feat in my eyes.


fair!


trees for the forest


It might be, but using the benefits of a very different thing to launder the reputation of something that people have very legitimate concerns about isn't nuance - it's dishonesty.


That seems manipulative.


I don't know, man.

Just playing Devil's Advocate here, but I think if Trump did that, and then tried to say he was just trying to encourage a "nuanced" view, we'd all yell, "See! Politicians are always lying!"


The issue is transparency. Iowa's "algorithmic" bail program has zero transparency. They have even violated Iowa's public records law and refused to let me inspect the software.


"Algorithmic justice" reminded me of a study where researchers predicted the risk of a crime better than judges:[1]

> Millions of times each year, judges must decide where defendants will await trial—at home or in jail. By law, this decision hinges on the judge’s prediction of what the defendant would do if released. This is a promising machine learning application because it is a concrete prediction task for which there is a large volume of data available. Yet comparing the algorithm to the judge proves complicated. First, the data are themselves generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the single variable that the algorithm focuses on; for instance, judges may care about racial inequities or about specific crimes (such as violent crimes) rather than just overall crime risk. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: a policy simulation shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates.

[1] https://www.cs.cornell.edu/home/kleinber/w23180.pdf


The authors note that judges may care explicitly about racial bias, but based on a quick read they're making a really, really big mistake in the language they're using: they confuse arrests with crime. Arrests and convictions are simply a measurement mechanism for crime, which is known to have severe biases.


Kleinberg uses arrests for violent crime because they are known to have substantially less bias, arguably zero.


There is no convincing evidence that arrest rates "severely" overestimate offense rates; if anything it is just as likely arrest rates underestimate offense rates.


While what you say specifically is true, using arrest rates to determine criminal activity by race (and the subsequent conviction rate, etc.) has been shown to have a strong relationship to race, at least in the United States. There are entire books on the subject. You can't tie racial arrest rates to the underlying crime rate, as POC get arrested far more often for the same crimes.


Indeed. In a closed society where everyone is guilty the only crime is getting caught.

This concept is the core basis of the war on drugs.


I'm not sure I understand that correctly. Are you saying that POC are acquitted or have charges dropped far more often for the same crimes (i.e. have a far lower conviction rate)?


POC are more likely to be pulled over; when pulled over, more likely to be asked to consent to a search; and when searched, more likely to be arrested, in situations similar to those of non-POC folks.

It doesn't really stop there, either: they are more likely to be convicted of the same crimes and then get longer sentences. They are less likely to be offered probation. This eliminates huge percentages of men permanently from POC communities. It is possible that this process can be blamed for the social issues present in the inner city.

"The New Jim Crow" covers a lot more in a lot more detail. I would strongly recommend the read.


In general, POC are more likely to be arrested for committing a crime.

The parent's point isn't about whether they are acquitted; it's that if you were to commit a crime as a POC, you are more likely to be arrested than if you had committed that same crime as a non-POC. In both scenarios you committed a crime, but in one of them the system never has a record of it. This is why arrest rates and crime rates are different: if a POC is more likely to get arrested for committing a crime, the arrest rates by race (POC get arrested more) will not reflect the crime rates by race (where differences are generally smaller).
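
To make the arithmetic concrete, here's a toy calculation (all numbers invented purely for illustration) showing how an identical offense rate in two groups can still produce very different arrest records:

    # Toy numbers, invented purely for illustration.
    population = 1000          # people per group
    offense_rate = 0.10        # identical true offense rate in both groups
    p_arrest = {"group_a": 0.20, "group_b": 0.50}  # P(arrest | offense)

    for group, p in p_arrest.items():
        offenses = population * offense_rate   # 100 offenses in each group
        arrests = offenses * p                 # 20 vs. 50 arrests
        print(group, "offenses:", offenses, "arrests:", arrests)

    # The arrest records show a 2.5x gap between the groups even though
    # the underlying crime rate is identical.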


Most empirical data indicates that white people are arrested and convicted at a higher rate relative to the base offense rate than black people.


Post your sources. I've seen otherwise (especially for low level drug offenses) but you're the one making the claim.


Communities of color are over-policed. There's a huge racial divide in income, and crimes that tend to be committed by the poor (like shoplifting, loitering, and fare-evasion) are far more likely to be prosecuted than crimes committed by the wealthy (smoking some weed in your suburban living room, fudging your taxes a bit). The end result is that people of color are more likely to have an arrest record, even if they're just as likely to commit a crime as a white person.


For an example of this backed up by data, the NYPD stop-and-frisk program has always overwhelmingly focused on black and Latino people (almost 90% of all people stopped in some years), even though they make up only 15% of the population of some of the precincts involved AND white people were more individually likely to actually have an illegal weapon (the supposed reason for the stop-and-frisk program to exist).

https://www.nyclu.org/en/stop-and-frisk-data https://www.nyclu.org/en/press-releases/analysis-finds-racia...


Just to add a bit, you can look at the justice system as a binary classifier if you squint hard enough. So it has both false positives and false negatives, both of which are difficult to actually measure.

On the one hand, you have POC arrested and convicted of crimes that wouldn't be charged in other parts of town (arguably false negatives among the non-POC). On the other, prosecutors use the plea bargain system to get people to plead out to smaller charges instead of risking decades of their lives at trial; this is an excellent way to produce false positives.


Until the populace learns how to improve their chances of getting released and starts to game the system, introducing endogeneity.

(Also noticed the nice coincidence of a professor with the username kleinber having an NBER Working Paper.)


Which is totally not possible with judges, right?

I feel like a lot of arguments being made here fail the A vs. B test. Any argument that purports to help with choosing between Judges and Algorithms needs to apply differently to Judges than to Algorithms.

How about: with Judges we simply won't know (for sure) what influences them. Are they racist? Who knows. Do they prefer to let people with jobs out? (Realistically: yes, but we don't know for sure.) Do they ...

With an algorithm, we can literally test: present it with artificial cases, lots of them, and see how it judges. With a judge, you can't.
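
As a sketch of what such a test could look like (written in Python; the predict() interface and the feature names are hypothetical, not taken from any real bail system), you generate synthetic cases that are identical except for one protected attribute and count how often the decision flips:

    import random

    def audit(model, n=10_000):
        # Fraction of synthetic cases where flipping only the protected
        # attribute changes the model's decision.
        flips = 0
        for _ in range(n):
            case = {
                "age": random.randint(18, 70),
                "prior_offenses": random.randint(0, 10),
                "employed": random.random() < 0.6,
                "race": "a",
            }
            twin = dict(case, race="b")   # identical except for race
            if model.predict(case) != model.predict(twin):
                flips += 1
        return flips / n

There is no equivalent procedure for a human judge: you can't re-run the same defendant past the same judge with a single attribute changed.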


> I feel like a lot of arguments being made here fail the A vs. B test. Any argument that purports to help with choosing between Judges and Algorithms needs to apply differently to Judges than to Algorithms.

Out of curiosity, is there a name for this "fallacy", if it is one? To me it mostly seems like the other party is failing at some basic level of critical thought.

I've been dealing a lot with arguments of this nature at work, and it'd be great to have a name for it. Pointing it out in the verbatim sense ("ok, but that's true of <your counter position> as well") becomes tiring quickly and, honestly, just causes the person to move on to the next fallacious claim.


Well, humans are already capable of dealing with it. The judges know that prisoners know what is expected of a good prisoner. The decisions are already being made with that in mind.

Contrast this with evaluating a programmer's performance. Everyone knows that lines of code written, number of tickets closed, number of bugs fixed, or lines of documentation written do correlate well with performance. But the minute they are revealed to impact performance reviews, those metrics become trash. Until you can find viable instruments, you shouldn't ever put those into a model and expect good predictions. If your model is not explicitly equipped to deal with endogeneity (like structural equation models are), it will fail when faced with it.

If you think a judge is influenced by things that are unrelated to the case, you should appeal to the court above (which you can readily do in Continental Europe, but I don't know about Common Law).


> Well, humans are already capable of dealing with it

If that were true, algorithms wouldn't be able to outperform those humans on the metrics that matter.

As for your metric-faking issue, the trick in this case is simply to take the metrics that matter and feed them into an algorithm. Problem solved.

Take metrics:

1) will the suspect face justice if released

2) will he reintegrate faster if released

Any criminal who wants to game those metrics, well, I for one will be applauding that!


> Until the populace learns how to improve their chances of getting released

As long as that correlates with behavior we want to see from the populace; https://xkcd.com/810/


>By law, this decision hinges on the judge’s prediction of what the defendant would do if released.

That isn’t the legal standard for release pending trial.

Thus it may look like an algorithm "predicts" crime better than judges, but judges can't withhold bail/bond because they "predict" that a certain defendant will commit another crime (because that's not exactly what judges are doing when determining bond).


What's remarkable is the article uses the word "algorithm" while stripping it of its current near-magical connotation. Bravo.


A big part of the solution would be forbidding employers, landlords, and banks from accessing or using conviction databases, with limited exceptions for security-sensitive employment such as banking, security guards, childcare/education, and the like.

Or, simply, do not make them "public" in the sense of "putting them on the Internet". By all means, make convictions and court documents public in the sense that one can go to the local library and do research there in person (to provide a natural scaling limit). But it should not be acceptable that there are "data mining" companies that pollute Google name searches with conviction records, or with court documents as sensitive as one's financial worth during/after a divorce judgement, and then charge people extortion fees to "remove" the records from their site (only for them to reappear on another site; rinse and repeat).

In Germany, this is the norm. Drug testing by employers, or checking an applicant's credit score, is also not allowed, with highly limited exceptions. As a result, while we do have a problem with convicts being discriminated against after release, it doesn't even come close to the level of problems the US has.


There is a fundamental flaw with the approach - the right to remember and the right to print/speak. You can keep a newspaper in your house. Why then is it permissible to stop someone from archiving the facts? Even if later proven false, it is still useful data that, say, the New York Times featured an angry full-page Trump editorial calling for the death penalty for them.

I don't disagree that undue judgement is an issue for rehabilitation but I have heavy doubts that forced amnesia is a good idea - especially given the repeat scammers who would exploit it.


There's an absolutely huge distinction between "try to rewrite history so [thing] doesn't exist anymore" and "law preventing you from taking [thing] into account when making hiring/renting/whatever decisions."

It's not impossible; insurance companies have no problem with this. The insurance industry is highly regulated as to the specific things it can and can't consider while underwriting a policy, which varies by jurisdiction. Insurers simply don't ask about or look up factors they aren't allowed to take into consideration, but aggressively ask about and look up the factors they are allowed to consider.

For example, in Massachusetts insurance companies are forbidden from considering credit scores when setting premiums and making underwriting decisions, a practice which is extremely common in the industry where legal (people with high credit scores have fewer insurance claims). That doesn't mean Massachusetts is trying to rewrite history so that your poor credit doesn't exist; it still exists, and others are allowed to use it for other reasons (such as underwriting loans).

Saying "we forbid you to take into consideration conviction history when hiring" isn't anywhere near "forced amnesia."


Unfortunately, if "people with high credit scores have fewer insurance claims," then preventing insurance companies from taking this into account is a deadweight loss to society.

People who are more likely to have more insurance claims should pay more for insurance (that's the whole point of underwriting); otherwise, people who are less likely to have claims are being unfairly overcharged.


Now if you take a look at how many people got hits to their credit score thanks to the recent government shutdown, or due to massive medical bills, or due to identity theft and other fraud, things look different.

Credit scores deserve to rot in hell forever. Humanity has managed to exist for thousands of years without this degradation of human worth to arbitrary, totally opaque numbers.


I was making no moral judgements on insurance underwriting laws.


How about lower premiums as a reward for having good credit or a crime free history?


> Why then is it permissible to stop someone from archiving the facts?

The "publication" laws were written in a different time - newspapers have, simply because they're physical objects, not searchable at the vast scale and speed that a quick Google search allows now. The laws simply have to be modernized, that's it.

> especially given the repeat scammers who would exploit it.

These can be sued until they burn down to the ground. Making money off of people's misery in that way is as despicable as it gets in a civilized society.


Maybe it’s naive, but I’ve often thought law itself should be spelled out in code. Obviously some things are human judgement. But take those as inputs into an unbiased machine that outputs fair sentencing, procurement etc. Still let meatbags review it at least for the time being. It could be a guidance system for judges and juries at least at first. Seems like it could help clear cases quicker and provide fairness.


Yes, it is naive. All code has bugs. Some time ago, some people thought it would be a good idea to encode contractual agreements as code. It should be no surprise that someone found a bug in the contract and got him/herself most of the money that was tied into the contract framework. If I recall correctly, we are talking about tens if not hundreds of millions of dollars of involuntary bug bounty in this particular case (google "DAO hack" if you have not heard).

Further, in my opinion there does not seem to be any solid logic behind what I and other humans think is right or wrong, which makes it quite difficult to spell out preferred laws in code. (If someone disagrees, I am happy to hear one counterexample: a moral axiom that always holds, without any exceptions whatsoever, realistic or unrealistic. Even for one single person.)


Loopholes are essentially bugs, and they're common. Is there a reason to believe another way of encoding laws would necessarily be more bug-prone than it currently is?


One person's loophole is another persons civil right.


Loopholes are another level of 'business logic', and would be the 20% of the code that takes 80% of the effort.


Today's compromise is tomorrow's loophole.


This happens with real contracts too. Ever been in government contracting? Same with large companies.


I'm not saying you're right or wrong, but the hard fork after the DAO hack returned all of the money that was stolen.


Yes. Which means that the Ethereum guys, quite quickly after the DAO hack, started to think that "code as law" is actually a really stupid idea. What is beyond my understanding is how anyone believes in smart contracts on Ethereum after that, though.


Law as code solves a problem nobody has: application of unambiguous law to unambiguous facts. Even the simplest cases involve law that is ambiguous by design: what is an "unreasonable" search for Fourth Amendment purposes? All cases involve ambiguous facts. Was the eye witness in a position to see the murder? Is a witness lying? Every criminal charge and many civil claims involve an intent element. What was the defendant's intent? Litigation is almost entirely concerned with these things, and codifying law addresses none of it.

It’s like applying code to coding. Yeah you can have IDEs that generate skeleton classes. Does that actually solve a problem anyone has, or create a real productivity boost compared to just using Emacs?


Great comment. Here's a good page from CodeX: The Stanford Center for Legal Informatics[1].

> "One technical problem with Computational Law, familiar to many individual with legal training, is due to the open texture of laws. Consider a municipal regulation stating "No vehicles in the park". On first blush this is fine, but it is really quite problematic. Just what constitutes a vehicle? Is a bicycle a vehicle? What about a skateboard? How about roller skates? What about a baby stroller? A horse? A repair vehicle? For that matter, what is the park? At what altitude does it end? If a helicopter hovers at 10 feet, is that a violation? What if it flies over at 100 feet?

> The resolution of this problem is to limit the application of Computational Law to those cases where such issues can be externalized or marginalized. We allow human users to make judgments about such open texture concepts in entering data or we avoid regulatory applications where such concepts abound.

> A different sort of challenge to Computational Law stems from the fact that not all legal reasoning is deductive. Edwina Rissland [Rissland et al.] notes that, "Law is not a matter of simply applying rules to facts via modus ponens"; and, when regarding the broad application of AI techniques to law, this is certainly true. The rules that apply to a real-world situation, as well as even the facts themselves, may be open to interpretation, and many legal decisions are made through case-based reasoning, bypassing explicit reasoning about laws and statutes. The general problem of open texture when interpreting rules, along with the parallel problem of running out of rules to apply when resolving terms, presents significant obstacles to implementable automated rule-based reasoning."

[1] https://law.stanford.edu/2016/01/13/michael-genesereths-comp...


While I generally agree: yes, an IDE with reasonable code generation is better than just using Emacs, unless you've packed Emacs with so many plugins that you've turned it into yet another IDE.


I don't know if law can be reduced to code; it should ideally be intelligible and accessible to the average person, it seems.

Reminds me of this video, where a physicist argues math is not the language of the universe because it cannot convey meaning about many human concepts.(1)

Anyways, it's not like most people, or even apparently police officers, understand even the basics of the law right now.(2)

1 - https://youtu.be/inPcQeYWVT8

2- https://youtu.be/4ap6Kmo69lQ


>I don't know if law can be reduced to code; it should ideally be intelligible and accessible to the average person, it seems.

Pick One:

(a) There is only black and white - Laws are simple

(b) There are many shades of grey - Laws are complex

You can't have:

(c) There are many shades of grey - Laws are simple


s/Laws/Justice. You can have shades of gray and simple laws, but they will ensnare a lot of innocent people.


Can't you? Where's your proof?


Take a relatively grey-filled topic like: One person has caused the death of another.

Now show me the simple law covering that?

People claiming "the law can be simple" are exactly like Luddites who claim "why can't you just make computers work?!" - neither has any appreciation of the inherent complexities involved.


US Courts also hand down decisions based on equity which has little or nothing to do with law.


Yes, Law is the nexus where Moral, Political, and Commercial interests all meet. It's going to be messy and complicated.

But, it's common on tech boards to have a very loud minority of autistic-types claim that all these things can be solved by "code".


Well, (real world) code is nothing if not messy and complicated :D


> Take a relatively grey-filled topic like: One person has caused the death of another. Now show me the simple law covering that?

How is that "grey-filled"? There's murder and there's negligence, basically reflecting direct vs. indirect role in causing the death. Both scenarios consider intent to gauge the severity. So you have two knobs you can turn, but which cover the entirety of human-caused deaths.

This is not a "grey-filled" scenario in the least. In fact, it's probably one of the most accessible and clear-cut cases of the law being reducible to simple rules.

What's often not simple is fulfilling the standards of evidence required to know which rules to apply, i.e. how much evidence is needed to establish mens rea? But that's not what you claimed. You claimed that you can't have simple, intelligible rules that cover many shades of grey.


>There's murder and there's negligence

And self-defense? Or any of the dozens of other shades (car accidents?)?

So if two people get in a fight in a bar and one unintentionally kills the other, should he receive the same sentence as the driver in a fatal car accident? Or as someone who causes an accidental sports injury?

You are attempting to simplify the world to simple morality tales. Hope you never have to live in such a world.


> And self-defense?

Good point, being under threat is a third knob. Still very simple.

> or any of the dozens of other shades (car accidents?)?

Again, car "accidents" are always due to negligence somewhere, either mechanical failures making the manufacturer liable, or distraction on the part of a driver, or inebriation, etc. These are all very simple considerations that everyone understands.

> So two people getting in a fight in a bar and one unintentionally kills the other should receive the same sentence as car accident victim?

Killing someone in a bar fight is third degree murder, so I've already covered that. If your "car accident victim" had intent to cause harm with his vehicle, that's murder too.


>Killing someone in a bar fight is third degree murder, so I've already covered that

So in the space of a minimal online discussion you've gone from 2 classes of murder to now 5 classes (3 degrees of murder + negligence + under threat).

Now continue this exercise for the next 250 years and you will eventually have a codification of the classes of murder which roughly mirrors our current laws. Which is to say, it will get progressively more complex.


No, you're not paying attention. There are 3 independent variables (negligence, intent, and threat), and the various permutations of these are what define the classes of murder, criminal negligence, and so on.

3 independent variables is not a complex model. The complexity of the law that people bemoan is not about simple issues like murder; it is about issues like zoning laws, tax law, and the numerous byzantine exceptions that are carved out to serve special interests, or laws that are archaic and have never been revisited or are only selectively applied.
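
As a sketch, that three-variable model can be written down directly in Python (a minimal illustration of the comment above; the category labels are invented, and real statutes and degree definitions vary by jurisdiction):

    def classify_killing(intent: bool, negligent: bool, under_threat: bool) -> str:
        # Illustrative permutations of the three variables; not a legal reference.
        if under_threat:
            return "self-defense (no crime)"
        if intent:
            return "murder"
        if negligent:
            return "manslaughter / criminal negligence"
        return "excusable accident (no crime)"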


How many different sub-variations of "intent" are there?

So if I plan to kill Person A, but miss my shot and kill Person B, what's the charge? What if I was under threat from Person A? Who should even be charged for Person B's death: me or Person A?

Moreover, you've neglected to mention that there are all sorts of cases of one human killing another which are not even considered a crime (abortion, do-not-resuscitate orders, etc.).

>The complexity of the law that people bemoan is not about simple issues like murder; it is about issues like zoning laws, tax law

Let's not get ahead of ourselves; we have still barely scratched the surface of the 'easy laws'.


> Maybe it’s naive, but I’ve often thought law itself should be spelled out in code

Any formal system can be either consistent or complete - not both. Therefore, you will always need oversight to check most individual legal cases.


Your "therefore" doesn't follow. You haven't established that the law would even need to form a "complete" logic, which means it must be able to encode both addition and multiplication. There is little reason to think this is the case.

Furthermore, oversight wouldn't necessarily yield any further insight even if it were. Humans aren't extra-logical; we have similar limitations.


Humans _are_ extra-logical.

The world is what is the case.

We humans are often interested in something else.


> You haven't established that the law would even need to form a "complete" logic, which means it must be able to encode both addition and multiplication. There is little reason to think this is the case.

The very basics of a legal system are consistency (fairness) and completeness (coverage). Are you making an argument against these two basics?


Please don't quote the incompleteness theorem without knowing its precise statement. Far too many people do this to make absurd claims.

In particular, the tradeoff between consistency and completeness only holds for those mathematical systems which can internally encode Peano arithmetic (naturals, addition, multiplication, first-order logic).

Therefore, to invoke the incompleteness theorem, you need to prove that a legal system is powerful enough to encode arithmetic, which to me at least is not obvious.

There are many systems that are both consistent and complete, but are just not as expressive.


>The very basics of a legal system are consistency (fairness) and completeness (coverage).

You are using words that have a very precise formal meaning in a completely different (informal) sense. Probably this is the source of your confusion.


Indeed. These are completely out of context.


That makes no sense. You're trying to apply a mathematical theorem to a real-world situation on the basis of some very thin analogy.


Gödel, Wittgenstein, etc.

Results in mathematics, logic, and philosophy have provided undisputed evidence that you can’t reason formally about human ethics.

You’re responding to someone who is correct but made the assumption that you’d be familiar with the field of study and the results.


Algorithmic justice is one of those problems where techies think that applying algorithms and techniques du jour will somehow solve the problem. Unfortunately, law is so complex nowadays that it is riddled with internal contradictions. Formalizing law would only reveal these. To change the system, a massive reduction in the complexity of law would be required before any math could be applied to it with some success. But then, if we simplified the law to make it comprehensible to mortals again, who would need any algorithms at all?

Relevant quote from Tony Hoare. s/software design/law/

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.


I don't think it is techies at this point - more hangers-on and marketers, who lack the requisite understanding of or interest in it, and push a solution based on how good it sounds instead of how well it works.

Even by the standards of machine learning, which has often been guilty of following spurious correlations. Anyone who has actually worked with it would laugh like an artilleryman hearing Aristotle's impetus theory, because it is so obviously disconnected from reality.

I have cynically theorized that it is deliberate "bias laundering" ever since I heard of a de facto classist British algorithm which included /garden size/ ("yard" in the American sense).

There is literally no legitimate reason to include that as a variable in a reoffense-risk model. Even if they thought it somehow had a rehabilitative effect, ownership shouldn't be a variable when they could just turn offenders into upstanding citizens through yard work!


>But then if we simplified the law to make it comprehensible to mortals again, who would need any algorithms at all?

Because automating simple systems is what software is great at. Why wouldn't you?


Because it is likely to enable us to go further down the path to 1984.

Fucking with people's lives is something that should be done by a human that can see the bigger picture, not a bunch of code daisy chained together.


Even if humans are more wrong than the machines? Do we care more about justice or our humanity?


The argument for is that judges fuck with people's lives and judges are already unreliable. A simple ML system may have errors but could have less error than a judge. And there is evidence to that effect.

https://www.nber.org/papers/w23180


The machine will probably be very predictable, so expect people to start working around it soon.


> Formalizing law would only reveal these.

Which would identify easy targets for lawsuits against the government and cases which would overturn convictions and lead to the massive reduction in complexity that you seek.


OCR + Data mining with automatic form generation.

It works here because the state is willing to accept the data/results at face value.

Kind of a stretch to call it algorithmic justice.


I would see it as a stretch to call it “machine learning” justice (ignoring the OCR) but it is automated with an (albeit simple) rule -- and that process is an algorithm.

As with any machine-learning project, I would recommend 'crawl before you run': have this project encourage the justice system to modernise and put basic data-handling processes in place. We'll get to Orwellian pre-cog soon enough. Right now, let's get basic things right, like not arresting homonyms (people who merely share a name with a convict) or people who have committed acts that are not crimes anymore.


We could call it:

Filtration Justice

or

Greater than Justice

    if time_since_last_conviction > n years
       AND past_crimes_committed NOT IN [some list of crimes]
    { generate form and email to prosecutor. }

The OCR is the most important part.

edit-to-add:

Some misdemeanor convictions are barred from sealing/expungement. It is jurisdiction-dependent, but they are usually crimes such as DUIs, domestic violence assaults, sex crimes, and so on.
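
A minimal runnable version of the rule sketched above, in Python (the field names, the n-year threshold, and the exclusion list are all invented for illustration; as noted, actual eligibility rules are jurisdiction-dependent):

    from datetime import date

    BARRED = {"DUI", "domestic violence assault", "sex crime"}  # illustrative
    N_YEARS = 7                                                 # illustrative

    def eligible(conviction: dict) -> bool:
        years_since = (date.today() - conviction["date"]).days / 365.25
        return (years_since > N_YEARS
                and conviction["offense"] not in BARRED)

    # for c in convictions:          # hypothetical list of records
    #     if eligible(c):
    #         generate_form(c)       # hypothetical helpers
    #         email_prosecutor(c)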


Algorithmic justice can only be considered justice if the laws are scientifically proven to be just.


Naturally, there is no agreement on what exactly justice means. A pragmatic definition of justice that I have found rather helpful for navigating the world: a society is just if it gives me (my in-group, my tribe) somewhat more than the others.

That said, and generalising from the success of formalising other previously informal domains (e.g. the formalisation of fair elections led to insights like Arrow's impossibility theorem), I expect that formalising concepts of justice will lead to major progress.


I think the ambiguity with justice comes from the ambiguity of what exactly a crime is. What is or isn't a crime is largely a product of the society that defines it.


I believe we should start with a simple principle: "whatever an individual can do that does not directly or environmentally affect people who haven't given their consent can never be considered a crime". As long as this is not put among the fundamental axioms of justice, I can hardly see how any other attempt to apply logic to this area can make much sense.


Is that different from Human-Judge justice?


Human-Judge justice is judgement, and human judges are supposed to have the moral authority to judge. Algorithms hardly do.


Also: Algorithmic justice can only be considered justice if the training data are proven to be just.


imv, "just" good training data could definitely make things more just.

that is, if you clean your training data properly and adjust for correlates of injustice, e.g. race. I could imagine two ways (I am sure there are more): 1) you only train it on data from one sub-population, e.g. whites / wealthy people / etc.; 2) you carefully correct wrong decisions and biases in your training data and don't even train on such factors as race.

and then, in applying the model, you don't specify those correlates, either.

this could be a small revolution.
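
A minimal sketch of way 2 in Python (the column names are hypothetical; note that simply dropping a column does not remove other features that correlate with it, which is why the careful "adjusting for correlates" step matters):

    import pandas as pd

    PROTECTED = ["race", "zip_code"]   # the attribute plus an assumed proxy

    def prepare(df: pd.DataFrame) -> pd.DataFrame:
        # Drop protected attributes before training, and again at
        # prediction time, so the model is never shown them.
        return df.drop(columns=[c for c in PROTECTED if c in df.columns])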


This is lovely until the same system and precedents are used in reverse (algo district attorney) or by banks to automate the foreclosure and seizure of thousands of homes. The banks did it before!


I really hope the term "Algorithmic Justice" doesn't catch on. It reminds me of the old saying that military justice is to justice as military music is to music.


Let me get this straight: Are there people who have been wrongly convicted and are currently locked up only because someone hasn't done their paperwork?


So far they have reviewed 43 years of eligible convictions, proactively dismissing and sealing 3,038 marijuana misdemeanours, and reviewing, recalling, and re-sentencing up to 4,940 other felony marijuana convictions which were sentenced prior to Proposition 64's passage in November 2016.

This sounds to me like it's quashing convictions for marijuana-related activity which is no longer a crime, and where sentences have already been served. So the convicts are no longer in prison, but they have a criminal record, which this project will wipe clean.

In a post from last year [1], a lawyer writes:

But here in California, there’s hardly anybody in prison for marijuana anymore because of the reforms in our system that have happened over the last decade.

[1] https://melmagazine.com/en-us/story/if-cannabis-becomes-lega...


There are tons of people in jail that shouldn't be in jail, for all kinds of reasons.


“Has done the paperwork” is a vague concept but, yes, there are a lot of people who are in jail for crimes that do not exist anymore.

There are more who would qualify if someone cared enough to raise key exonerating circumstances or judicial practices that have changed since their conviction. It's less "straightforward", but it's a lot less controversial than you would think.

This is why people who can read law books are so valuable in prison.


Here is the APNews article about dropping the pot convictions: https://apnews.com/1aeb5fed9e8746d8b120049e908af06c . They don't use the phrase "algorithmic justice", just technology.


Even simple linear models can outperform "experts" in numerous fields, especially in those where the feedback loop (for learning) is very long (as is the case in the criminal justice system).

An amazing paper that touches on this topic: In Praise of Epistemic Irresponsibility: How lazy and ignorant can you be? by Michael A. Bishop

You can find the PDF easily. Here's a ref: https://link.springer.com/article/10.1023/A:1005228312224


> https://www.codeforamerica.org/

Does anyone know of good examples of such initiatives in the EU?



what can be used to crawl through defendants' records to exonerate them can also be used for other purposes, so care will always need to be taken with expanding processes such as this.

that out of the way, long term it would be best to standardize how this information is stored, categorized, and shared, so that exoneration in one locality can be more easily implemented in another.


This technology has long been used to dig through the past of anyone and everyone that comes in contact with the legal system. When you get pulled over for a taillight out and the cop runs your driver's license your pot charge from the 80s comes up.


The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law - https://www.iainmaitland.com/pdf/Rouvroy-Stiegler.pdf


Lawyers and doctors are probably the best candidates for replacement by AI, and those professions will lobby the hardest against their replacement, but ultimately I would rather be governed by transparent, open-source AI than by humans.


If you knew what lawyers do for a living you would probably realize that lawyers are probably one of the worst candidates for replacement by AI.

While a variety of smart or dumb machines are used today to assist lawyers, unless you are talking about Sci-Fi level AI, I can't see lawyers being replaced by AI. Okay, maybe a number of lawyers whose jobs are mostly secretarial in nature may be replaced by smarter tools, but not lawyers who counsel or advocate for clients.

It would only work in some kind of Sci-Fi dystopia where the legal system would be unrecognizable to those used to an adversarial based common law system, or the like.


Speaking as one who was raised by a physician, I cower in horror at the thought of having a treatment plan designed and implemented by a neural network without human intervention.

We've seen that for a number of difficult problems, deep neural networks, given sufficient data and expert trainers, do a good job most of the time and occasionally come to mortifyingly wrong conclusions.

Run a 'sed s/mortifying/mortal/' on that previous sentence and think about it for a minute.


Humans also make mistakes that are fatal for others, and we have been thinking about that for millennia.

Humans also subvert things in order to make more money.

Maybe not complete replacement, but a setup where the AI does most of the thinking, shows "This is what I found, and this is what I used, to decide this," and humans have final approval on whether to go ahead with whatever it decided.

If someone has a disagreement, it would be submitted to the AI, making it re-evaluate everything while weighing it against the disagreement.


As an assistant to a heavily-trained, expert human, I do think it would be a helpful aid in catching human error.

I would make my tiebreaker another human, though.

If the AI system showed the reasoning behind its choice, that would help a lot with assuaging my fears. I was thinking of the current state of neural networks, where there's no way to see how it drew its conclusions.


You may be right about doctors. Their subject matter is repairing or maintaining, essentially, a mechanism.

Lawyers are different. Their subject matter is the interaction between human beings. A lawyer, for instance, must persuade a judge or jury or, in the transactional setting, the opposing party. So long as humans are governed by humans, there will be a role for lawyers.


I have the impression that machine learning is incredibly not transparent. Maybe I'm conflating machine learning with AI. Could you explain your vision of transparent AI a little more?


i.e. as long as it exposes all the facts/criteria it uses to arrive at each decision/verdict or reallocation of resources/manpower.

Of course, every rule or law would have to be initialized by humans in the first place.

Simplified example: "Murder is bad.", but after the AI observes enough cases where the victim attacked the killer first, it may present "But not when in self-defense." as an amendment, along with its reasoning, which again would have to be democratically approved by humans for it to become a default condition.

It could even retroactively apply such new rules to all past cases, better than a human-run system, fixing wrongful convictions etc.


Except a sufficiently complex trained AI is no longer transparent. (The open-source part relates only to the code running the network; the network's decisions are much more opaque.)


I think I would prefer a system where Algorithms decide not to charge a person. Only those to be charged go to a real court.

Also, all plea bargains must be algorithmic so that narcissist DAs can't browbeat ordinary people into accepting guilt.


Remember the time Google Image Search started mistaking black people for monkeys? I am worried about algorithmic judges for the same reason: minorities might get into trouble here.


Judging... 97% complete.


So does this algorithm generate a report that predicts the future? Like a minority report?


Algorithmic justice: the one thing that is worse than electronic voting. I didn't think that would be possible.



