I live in an English city and once in a while (when there was not a pandemic) I would walk into the city centre and attend some trials in one of our courts.
The last session in magistrates' court most weeks is people who were supposed to be there for an earlier session and didn't show. So how are they there now? Well, when they don't show, the court orders police to arrest them; they get arrested, and then, to guarantee they appear, they're sent to the cells until their case is called. Most of them look pretty sheepish. We are not talking about criminal masterminds here. You have a 10am court appearance, police knock on the door at 2pm, your mum lets them in, they arrest you. Oh, shit, that was today? Oh, your phone has fifty messages from your lawyer asking where the fuck you are. Oops.
So this makes me think the most valuable tool for this problem would be a way to get defendants to actually do the introspection when a court asks them: Can we trust you to show up on this date and this time? Can they? Or are you just hopeless and it would actually be better to just imprison you a day or two before so that you're available?
I'm not suggesting we throw these people a celebratory party, but they're no Ted Bundy or Buster Edwards. Most of them would - if they had woken up on the right day and shown something resembling remorse for whatever they did - never have seen the inside of a jail cell. They'd have been given a fine or some type of "community payback" sentence, like having to clean graffiti off walls or pick litter from the side of a road, and told not to get into trouble again. Some of them won't. Some will.
Crown Courts deal with more serious offences, and obviously if it's alleged you are a real danger, that you will most likely offend seriously again if released, or you have means and motive to flee, they'll just imprison you "on remand".
But there isn't this business of "Oh well we'll just assume everybody is a flight risk and demand piles of cash for their release" which is just obviously corrupt.
If reducing crime is our only objective, we can do so trivially by indefinitely detaining anyone who ever does something suspicious. I think most people would find this system to be deeply unjust, even if it eliminated crime.
As a society, we have some tolerance for crime, preferring to err on the side of personal liberty against totalitarianism.
Asking for fairness in this recidivism model isn't a case of political correctness run amok, it's a reasonable request from stakeholders in the system it affects: Why should an individual be punished for the actions of others who happen to have similar identity markers?
You can say that a biased system will be "more efficient", but efficient to what end? A system that is unfair may reduce crime but it won't achieve "justice".
They shouldn't. But the algorithm shouldn't be making the decision in the first place.
The real world being what it is, however, I do understand the worry that doing it the right way may be unrealistic.
The only part of your statement that's factual is that there were premature complaints that it increased crime. That has yet to be validated.
As for the rest, cash bail was not completely eliminated. Rather it was eliminated for most non-violent and some violent offenses.
The changes made by the NYS legislature made more (not all) violent crimes eligible for bail, and gave judges more discretion as to whether to set bail or not.
As such, in New York State most offenses are not bailable, which is a huge improvement over the prior regime.
Many people would languish in jail for months or years, while they lost their jobs and their homes.
Even more cruelly, prosecutors would use this to essentially force people to accept plea bargains which required them to plead guilty to crimes they may not have committed, just to get out of jail and try to salvage some semblance of their lives.
That just creates a group of people who are now homeless, jobless and have a criminal conviction -- just because they couldn't afford to pay for bail.
That's a great way to create more folks who have no means, a big red 'X' anytime they apply for a job and little respect for the law -- given how the "justice" system destroyed their lives with cash bail.
You might think "Well, they were put in jail for a reason." But this wasn't a sentence after conviction, it was criminalizing poverty. And that's just wrong.
I think cash bail is horrible. If you are so dangerous that you need to be kept away from the general public before trial, then you shouldn't have the option of bail at all.
And that should be proven via actual evidence before a judge, not any sort of ML model.
Many here seem to see this as a technical issue to be solved, with better training data, more refined models and clearer definitions of the problem/search space.
It's not. People's lives are often ruined by cash bail. Prosecutors should be required to prove, with actual evidence, that each defendant is a real flight risk or is likely to harm others before being remanded to custody without bail.
Everyone else should be required to respond appropriately to court orders WRT contact with prosecutors, appearance at hearings/trials, etc. but should not be required to post bail.
What was actually being compared in putting the models together was projected accuracy, which, as an estimate, should not be confused with actual results... as the article eventually got around to pointing out.
When weighing guesswork against fairness, I agree with the decision to favor fairness, especially considering that the decision to grant bail has nothing whatsoever to do with a person's actual guilt or innocence: they have been charged, but not yet tried.
As long as the prediction is only one element of decision and not the decision itself, it can probably be useful for judges. If it becomes "the algorithm says", then the feedback loop could have unpredictable consequences.
By nature of a world with finite resources, I also wonder whether the 2.7 million USD could have had a greater impact on defendants' likelihood to show up if it had been used to put in place, or extend, support structures for the majority of the crimes those courts see. It's still worth researching how technology can help, of course, but it's also hard to run any kind of comparative study when only doing one thing at a time.
In the example case given, prosecutors appeared to have no evidence whatsoever that the defendant would skip court; the defense presented the algorithm and explained why he likely got the score he did.
I imagine judges may come to heavily weigh the score, but cannot imagine any would ever give such an algorithm a blank check to decide for them.
By that I mean that we could also look at what 2.7 million dollars (or even that flat cost plus operating costs, spread over the period of the study) would do, were it put towards every one of those defendants in dire conditions: to ensure, for example, that they have at least a temporary place to stay, social programs in place, someone to follow their case, and reinforced legal counsel (which is available to everyone for free, sure, but I hope nobody kids themselves about how overworked those people are - I could bet everything I have that anyone with a good income would always pick a private lawyer over duty counsel).
As for your last point: it certainly doesn't seem to be the case right now, but I can very well see it entering the realm of possibility. Can't you imagine that a system that performs really well on paper, thrown into a field where people are incredibly overworked, would end up being relied on a little too much?
I mean, we see that sort of thing in place for far less reliable tools, with people who actually know better (e.g. how many scripts and old bits of code hold much of the world together). And in the medical field the development and use of AI is on a steady rise ("chatbots" in psychology, predictive modeling for decision making and disease diagnosis, "telehealth" assessments using data from IoT devices). If AI gave people more work rather than less, it would be of no use, after all.
Where did you get the idea that everyone gets free legal counsel for civil or criminal actions?
In many, if not most places in the US, in criminal actions, you must prove that you don't have the means to pay a lawyer before one is "appointed by the court." In fact, many places have very low limits on income/assets before providing free counsel.
The requirements surrounding this are a hodgepodge of state and local rules.
As for civil actions, most places do not provide legal counsel at all.
I think, also, that chatbots are an unfair comparison. Those are specifically designed to decrease triage time for non-emergency scenarios.
This is a fundamentally different scenario, where it provides evidence to counter the prosecution. It will always fall to the judge to weigh the score versus what evidence the prosecution provides. If the prosecution has no good reason to demand punishingly high bail, and the model continues to hold up well over time, I see no reason to assume that using it leads to bad outcomes. Remember that the most overworked people are the public defenders, and this is an aide for them.
There are fixed costs in developing the system, but there are also running costs (owning or renting servers, improving the ML and the system at all points, gathering more data, curating and maintaining the data, etc.). I don't think anyone reasonably believes we can just build a box of truth and then have nobody involved in the project afterwards (even just to handle future browser compatibility, security upgrades, system outages, bias in data gathering, etc.).
What I would be interested in seeing is what the very same money put towards such a project would do for a system that desperately needs much more influx of money towards structural pillars like mental health, housing, legal counsel, etc.
Since there's seemingly no A/B testing (against what I mentioned), no double-blind testing (which would test for the impact on the judges), or really any form of comparative testing, it's pretty hard to conclude much of anything about this project. We surely wouldn't jump to conclusions about usefulness so quickly if the subjects of the study were closer to us (i.e. if they weren't already assumed to have done something wrong to be in that situation).
As for the bad outcomes, I wasn't so much talking about the system at the current instant as about the feedback loop this creates: suppose the rating is biased against a community of people such that their measured score is lower than their actual likelihood to show up. Then judges are less likely to give them a chance to be out, which reinforces the social bias against that category of people and turns what was an initial bias into a systemic issue.
My masters is in Control Theory so I tend to see systems, feedback loops, and stability of trajectories everywhere, I guess.
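That feedback loop can be made concrete with a toy simulation (all numbers here are made up for illustration). The key dynamic: detained defendants generate no appearance data, so a score that starts biased low for one group is never corrected by observed outcomes.

```python
import random

random.seed(0)

TRUE_SHOW_UP = 0.8            # both groups actually appear 80% of the time
score = {"A": 0.8, "B": 0.7}  # group B's score starts biased low (hypothetical)
THRESHOLD = 0.75              # judge releases only above this score

for generation in range(5):
    for group in ("A", "B"):
        if score[group] >= THRESHOLD:
            # Released defendants generate outcome data, so the score is
            # re-estimated from observed appearances and stays accurate.
            appearances = sum(random.random() < TRUE_SHOW_UP for _ in range(1000))
            score[group] = appearances / 1000
        # Detained defendants generate no appearance data at all, so the
        # biased score is never corrected: the loop locks the bias in.
    print(generation, {g: round(s, 2) for g, s in score.items()})
```

Group A's score keeps tracking the true 80% rate, while group B's stays frozen at the biased 0.7 forever, despite having the identical underlying behavior.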
Whatever reasoning a judge may have to overrule the algorithm must be quantifiable -- otherwise it would not exist in the judge's mind. Given enough data and compute, judges will make worse decisions than algorithms, even on the edge cases.
Free the data. Train the nets. Highlight and eliminate any source of human bias and error we can find. Measure outcomes. Train again. It's the only rational way forward.
1. From a technical point of view, the mere fact that a single model is responsible for all decisions is bad. The inherent variance of judges' opinions in concrete cases is the best way to fight bias. Bias by definition means less variance. Consolidating all decision making will tremendously worsen the bias.
2. From a moral point of view, law is made by people for people. It is a convention and ritual which gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.
3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge seeing a prediction by an algo is forced to comply with it, otherwise he'll be scrutinized for dismissing objective facts.
Who said there has to be one model? Bias also definitely does not mean less variance. If I were to try to flesh out the argument I think you're trying to make, it'd go something like this: Judges have uncorrelated biases, and that low correlation between their biases ultimately results in a lower bias than a unified model. However, whether or not that's actually true in practice hinges on two important things: How uncorrelated those biases actually are, and their relative magnitudes to those of the unified model.
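A quick sketch of why the "many judges" argument only works at the population level (all numbers hypothetical): if judges' biases are independent draws around zero, the average bias across all cases washes out, but each individual defendant still faces the full bias of whichever judge they happen to draw.

```python
import random

random.seed(1)

N_JUDGES = 200
N_CASES = 10_000

# Hypothetical: each judge's bias is an independent draw centered on zero.
judge_bias = [random.gauss(0, 0.10) for _ in range(N_JUDGES)]

# Each case is assigned a random judge; the system-wide bias is the average
# over the judges who actually heard the cases.
case_bias = [judge_bias[random.randrange(N_JUDGES)] for _ in range(N_CASES)]
avg_judge_bias = sum(case_bias) / N_CASES

# A single unified model applies one fixed bias to every case (hypothetical).
MODEL_BIAS = 0.05

print(f"judges, averaged over cases: {avg_judge_bias:+.3f}")
print(f"unified model, every case:   {MODEL_BIAS:+.3f}")
print(f"worst single judge:          {max(judge_bias, key=abs):+.3f}")
```

The averaged judge bias lands near zero, but individual judges can be far worse than the unified model -- which is exactly why the argument hinges on how uncorrelated the biases are and on their relative magnitudes.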
> 2. From a moral point of view, law is made by people for people. It is a convention and ritual which gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.
I don't believe that ML is at the point where it should be the sole arbiter of these things, no. But I do believe that a properly calibrated model can be a very useful guide to these sorts of decisions.
> 3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge seeing a prediction by an algo is forced to comply with it, otherwise he'll be scrutinized for dismissing objective facts.
This is just an education problem though.
This also ignores the large impact that going to jail tends to have on people’s lives. I don’t think it is reasonable to simply dismiss the likelihood of bias.
Who is "us"? And does it?
Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.
Besides, principles shminciples. The way to judge these systems is by the actual impact they have. Will this result in a more just system? If not, we can put the principles right into /dev/null.
The article does seem to address this:
> Among the debates was how to balance accuracy and fairness, said Ojmarrh Mitchell, a professor at Arizona State University who served on the panel. An accurate algorithm would do a good job of predicting whether defendants showed up, and thus whether to recommend release, Mr. Mitchell said, and a fair algorithm wouldn’t result in more release recommendations for white defendants than for others.
> More than a year of tinkering ensued. To decrease the differences in outcomes for different races and ethnicities, the researchers excluded data on low-level marijuana offenses and “theft of service,” mainly subway turnstile jumping. Removing fare beating and marijuana arrests lowered the racial disparity in the tool by 0.4%, essentially making it slightly fairer but a little less accurate.
> Data released this month show the new tool’s recommendations didn’t have such racial disparities. From Nov. 12 through March 17, the algorithm recommended releasing without conditions 83.9% of Blacks, 83.5% of whites and 85.8% of Hispanics. Defendants with higher scores returned to court more often than those with lower scores, showing the algorithm seemingly made accurate predictions.
...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.
> Judges didn’t like it. “They said, ‘I was loving this, all this data-driven, evidence-based demonstration of what’s predictive. Now you’re putting a policy thumb on this,’ ” said Susan Sommer, general counsel at the Mayor’s Office of Criminal Justice.
> The researchers added those charges back in.
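The fairness check the panel describes boils down to comparing release-recommendation rates across groups. Plugging in the figures quoted above from the article:

```python
# Release-without-conditions recommendation rates reported in the article
# (Nov. 12 through March 17).
release_rate = {"Black": 0.839, "white": 0.835, "Hispanic": 0.858}

# Demographic-parity gap: spread between the highest and lowest group rates.
gap = max(release_rate.values()) - min(release_rate.values())
print(f"max disparity: {gap:.1%}")  # 2.3%
```

That is the simplest possible fairness metric; it says nothing about whether the individuals within each group were scored accurately, which is the accuracy side of the trade-off the panel was debating.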
> Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.
So don't accept contracts with those stipulations.
> ...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.
That's my point though. Bias we can actually inspect and understand and iterate on is better than bias that we have no insight into at all.
Shockingly, the government doesn't confer with me before making these decisions.
Perhaps this model could be used to generate a set of rules that were clear and less open to interpretation by judges, and in that way it could be valuable.
If the specific rules for a decision are not understandable by humans, then we're not talking about an "algorithm" but rather an opaque model. An algorithm would be something like: defendant gave address and phone number (1 point), defendant has no prior convictions within the past 3 years (3 points), and so on - with rules ultimately understandable by humans, who can debate and modify specific rules, such as ones based on race or ones with disparate impact, like zip code. This is 1970s tech, not 2020s.
I don't know about NYC's system, and it's hard to tell specifics from the article, which only references a few inputs but never says how they affect the output. If there are specific scoring rules that can be individually debated, then the article is trying to drum up worry by leaving them out. If NYC's system is based on feeding data into a proprietary tool that only outputs a score, and citizens are unable to know how it works, then the article needs to drive home the black-box unaccountability aspect (and stop using the word "algorithm"), rather than focusing on the imposition of requiring a phone number.
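For concreteness, the kind of human-readable scoring rule described above might look like this. The rules and weights here are purely illustrative, not NYC's actual criteria:

```python
# Purely illustrative point rules -- not the actual NYC criteria.
RULES = [
    ("gave address and phone number", 1),
    ("no prior convictions in past 3 years", 3),
    ("lived at current address over 1 year", 2),
]

def score(defendant: dict) -> int:
    """Sum the points for every rule the defendant satisfies."""
    return sum(points for rule, points in RULES if defendant.get(rule))

d = {"gave address and phone number": True,
     "no prior convictions in past 3 years": True}
print(score(d))  # 4
```

The point of this style is that each (rule, weight) pair is a line item the public can inspect, debate, and strike -- exactly what an opaque model doesn't offer.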
> Finland's speeding fines are linked to income, with penalties calculated on daily earnings, meaning high earners get hit with bigger penalties for breaking the law. So, when businessman Reima Kuisla was caught doing 103km/h (64mph) in an area where the speed limit is 80km/h (50mph), authorities turned to his 2013 tax return, the Iltalehti newspaper reports. He earned 6.5m euros (£4.72m) that year, so was told to hand over 54,000 euros.
I think you should consider some of the ethical implications of imposing vastly different punishments for the same behavior.
I would actually say it's the opposite.
If you punish everyone with the same fine, their wealth is divorced from the punishment.
If you punish each person depending on their wealth, the punishment is directly proportional to their wealth.
In saying that, very poor people are definitely disproportionately affected by equal punishment. Though you could also say poor people would be more motivated to commit crimes if the punishment was proportional, since they stand to gain a lot more than a rich person.
I don't think there's a right answer.
You wouldn't, you'd impose the same punishment on everyone, e.g. 1% of net worth.
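The arithmetic behind the proportional-fine argument, using the figures from the quoted BBC example (the flat-fine comparison at the end is hypothetical):

```python
# Figures from the quoted example of Finland's income-linked fines.
annual_income = 6_500_000   # euros, the businessman's 2013 earnings
fine = 54_000               # euros, the speeding fine he was handed

# Even that headline-grabbing fine is under one percent of annual income.
share = fine / annual_income
print(f"proportional fine: {share:.2%} of annual income")

# A flat fine of, say, 200 euros (hypothetical) lands very differently on
# a 20,000-euro earner than on this earner, in relative terms.
FLAT = 200
print(f"flat fine, low earner:  {FLAT / 20_000:.2%} of income")
print(f"flat fine, high earner: {FLAT / annual_income:.4%} of income")
```

This is the sense in which a flat fine divorces the punishment from wealth: the same nominal amount is orders of magnitude lighter, relatively, for the high earner.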
That should probably be priced in relative to their ability to buy get-out-of-jail cards.
In reality, rich people have lower actual crime rates than poor people. Compliance with the law is as much a financial rounding error as fines. The police in rich neighborhoods have easier jobs. Your typical rich person isn't going around making a nuisance of himself. Even John du Pont kept his behavior on his own property.
If a poorer person (say, someone that works 6 hours a day at Walmart) and a richer person (say, someone that works 12 hours a day at Walmart) both steal $10,000 from someone, they have both caused the same amount of damage to the harmed individual. That is, they caused $10,000+moral damages (let's say $20,000 total damage). Why should the person that slaved harder and provided more value to society fork out more when they caused the same damage as a poorer person? If anything, the poorer person should pay more as recompense for being less useful to their fellow humans and broader society in aggregate. Of course money isn't the perfect proxy for societal value add but it's the best we have, especially in the context of a strong rule of law which will punish fraud etc.
There is no bank account of cash involved.
But $200M is less than .2% of his wealth.
It’s not even 10% of the annual growth in his wealth.
Bezos can write a check for $200,000,000 today and his bankers will cover it.
He can spend $1,000,000 to pay Goldman Sachs to leverage his investments to most efficiently cover just that one check’s cash value, and not even notice it.
A "yes" vote would replace all cash bail with a ruling from the risk-assessment system.
There’s only been one poll so far (39% to uphold the new law, thus ending cash bail, 32% repeal, 29% undecided):
Unfortunately thinking on the subject usually leads to despair and pouring a quadruple bourbon
Failure to make bail more often results in job loss, homelessness, etc.
The very simple economics of bail, beyond the obvious injustice of incarceration before judgement only for the poor, is a complete and utter failure.
The only justification for the current system is cruelty.
The singular position of conservatives, simply put, is that there must be out-groups whom the law binds but does not protect, and in-groups whom the law protects but does not bind.