Algorithm Helps New York Decide Who Goes Free Before Trial (wsj.com)
71 points by jkuria 32 days ago | 71 comments



It's completely crazy that most of the US uses cash bail. That can't be a good idea. On the other hand, it is true that some fraction of defendants released on bail won't show up.

I live in an English city and once in a while (when there was not a pandemic) I would walk into the city centre and attend some trials in one of our courts.

The last session in Magistrates' Court most weeks is people who were supposed to be there for an earlier session and didn't show. So how are they there now? Well, when they don't show, the court orders the police to arrest them; they get arrested, and then, to guarantee they appear, they're sent to the cells until their case is called. Most of them look pretty sheepish. We are not talking about criminal masterminds here. You have a 10am court appearance, the police knock on the door at 2pm, your mum lets them in, they arrest you. Oh, shit, that was today? Oh, your phone has fifty messages from your lawyer asking where the fuck you are. Oops.

So this makes me think the most valuable tool for this problem would be a way to get defendants to actually do the introspection when a court asks them: can we trust you to show up on this date and at this time? Can they? Or are you just hopeless, so that it would actually be better to imprison you a day or two beforehand so that you're available?


I recommend reading The Secret Barrister: Stories of the Law and How It's Broken. It'll give you a plethora of reasons why people fail to turn up for court, often as a result of an overstretched and massively underfunded CPS. Well worth a read - https://www.hive.co.uk/Product/The-Secret-Barrister/The-Secr...


Not everyone gets bail. Releasing defendants without bail is called release on "own recognizance". It depends on how severe the crime is and how likely they think you are to run. Keep in mind criminal law is handled at the state level, so historically it was really hard to catch people who just moved to another state. Bail pays for people like "bounty hunters" to go find you.



"Oops?" Is it only in the US that some people are actual bad guys that run from the law?


I was talking about a Magistrates' Court, so when you say "actual bad guys" we're talking, like, maybe a drink driver? Knocked somebody's tooth out in a bar fight? A year of unpaid parking fines? Shoplifted a frozen turkey?

I'm not suggesting we throw these people a celebratory party, but they're no Ted Bundy or Buster Edwards. Most of them would - if they had woken up on the right day and shown something resembling remorse for whatever they did - never have seen the inside of a jail cell. They'd have been given a fine or some sort of "community payback" sentence, like cleaning graffiti off walls or picking litter from the side of a road, and told not to get into trouble again. Some of them won't. Some will.

Crown Courts deal with more serious offences, and obviously if it's alleged you are a real danger, that you will most likely offend seriously again if released, or you have means and motive to flee, they'll just imprison you "on remand".

But there isn't this business of "Oh well we'll just assume everybody is a flight risk and demand piles of cash for their release" which is just obviously corrupt.


Bail doesn’t stop people from running. If you’re facing the death penalty or life in prison, losing a large sum of money isn’t going to stop you. So, the issue is: what does it stop?


Reducing accuracy to try to improve fairness is definitely the politically correct thing to do. There was a good paper on this topic about how trying to make a model more fair can result in worse outcomes: https://arxiv.org/abs/1803.04383 If some group does happen to have a higher recidivism risk, pretending this isn't true and simply correcting against that fact will just result in more crime.
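To make the claimed tradeoff concrete, here is a toy back-of-the-envelope sketch (the base rates and release rates are invented for illustration, not taken from the paper):

    # Toy numbers, invented for illustration: group A has a 10% failure-to-
    # appear (FTA) risk, group B has 20%. Both policies below release the
    # same total number of people; only the split between groups differs.
    def expected_ftas(release_rate_a, release_rate_b, n=1000):
        return n * (release_rate_a * 0.10 + release_rate_b * 0.20)

    print(expected_ftas(0.90, 0.70))  # risk-based policy: 230.0 expected FTAs
    print(expected_ftas(0.80, 0.80))  # equal-release "parity" policy: 240.0

If the underlying risk difference is real, equalizing release rates buys parity at the cost of more failures to appear; if it's an artifact of biased data, the tradeoff disappears. That's the empirical question at stake.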


Fairness is a legitimate end unto itself, an essential part of what makes a justice system 'just'.

If reducing crime is our only objective, we can do so trivially by indefinitely detaining anyone who ever does something suspicious. I think most people would find this system to be deeply unjust, even if it eliminated crime.

As a society, we have some tolerance for crime, preferring to err on the side of personal liberty against totalitarianism.

Asking for fairness in this recidivism model isn't a case of political correctness run amok, it's a reasonable request from stakeholders in the system it affects: Why should an individual be punished for the actions of others who happen to have similar identity markers?

You can say that a biased system will be "more efficient", but efficient to what end? A system that is unfair may reduce crime but it won't achieve "justice".


> Why should an individual be punished for the actions of others who happen to have similar identity markers?

They shouldn't. But the algorithm shouldn't be making the decision in the first place.

The objection to making the algorithm incorrect in order to make the process fairer is that it's a quick hack done in the wrong layer of the system. It's like preventing SQL injection by adding escaping to the JavaScript handling form submission - it's the wrong place and the wrong way to go about it.
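To spell out the analogy (a minimal sketch in Python with sqlite, my choice of stack, not anything from the thread):

    # Minimal sketch of the "wrong layer" point. Hand-rolled escaping in
    # the form-handling code may happen to work, but it's fragile and can
    # be bypassed; binding parameters fixes the problem where the query
    # is actually built.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    name = "Robert'); DROP TABLE users;--"

    # Wrong layer: escape quotes by hand before splicing into the query.
    escaped = name.replace("'", "''")
    conn.execute(f"INSERT INTO users (name) VALUES ('{escaped}')")

    # Right layer: let the database driver bind the value.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

The analogous point here: patch the decision process itself, not the score that feeds it.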

Real world being what it is, I do however understand the worry that doing it the right way may be unrealistic.


I largely agree with what you're saying. But the challenge is that, while people are willing to tolerate some crime, they're generally not willing to tolerate more crime. Very few voters will bite the bullet and say realizing such-and-such principle of fairness is worth 10 more murders each year, so any reform which can't convincingly explain how crime will stay down is dead in the water. (New York in particular ran into this problem; they actually abolished bail in 2019, but brought it back immediately in response to preliminary suggestions the reform might have increased crime.)


>(New York in particular ran into this problem; they actually abolished bail in 2019, but brought it back immediately in response to preliminary suggestions the reform might have increased crime.)

The only part of your statement that's factual is that there were premature complaints that it increased crime. That has yet to be validated.

As for the rest, cash bail was not completely eliminated. Rather it was eliminated for most non-violent and some violent offenses.

The changes by the NYS legislature made more (not all) violent crimes eligible for bail, and gave judges more discretion as to whether to set bail or not[0].

As such, in New York State most offenses are not bailable, which is a huge improvement over the prior regime.

Many people would languish in jail for months or years, while they lost their jobs and their homes.

Even more cruelly, prosecutors would use this to essentially force people to accept plea bargains which required them to plead guilty to crimes they may not have committed, just to get out of jail and try to salvage some semblance of their lives.

That just creates a group of people who are now homeless, jobless and have a criminal conviction -- just because they couldn't afford to pay for bail.

That's a great way to create more folks who have no means, a big red 'X' anytime they apply for a job and little respect for the law -- given how the "justice" system destroyed their lives with cash bail.

You might think "Well, they were put in jail for a reason." But this wasn't a sentence after conviction, it was criminalizing poverty. And that's just wrong.

I think cash bail is horrible. If you are so dangerous that you need to be kept away from the general public before trial, then you shouldn't have the option of bail at all.

And that should be proven via actual evidence before a judge, not any sort of ML model.

Many here seem to see this as a technical issue to be solved, with better training data, more refined models and clearer definitions of the problem/search space.

It's not. People's lives are often ruined by cash bail. Prosecutors should be required to prove, with actual evidence, that each defendant is a real flight risk, or is likely to harm others, before they are remanded to custody without bail.

Everyone else should be required to respond appropriately to court orders WRT contact with prosecutors, appearance at hearings/trials, etc. but should not be required to post bail.

[0] https://www.brennancenter.org/our-work/analysis-opinion/new-...


It seems to me that boosting accuracy is the only fair thing to do.

What was actually being compared in putting the models together was projected accuracy, which, as an estimate, should not be confused with actual results... as the article eventually got around to pointing out.

When weighing guesswork against fairness, I agree with the decision to favor fairness, especially considering that the decision to grant bail has nothing whatsoever to do with a person's actual guilt or innocence: they have been charged, but not yet tried.


What's the point of your post? Are you really citing an ML paper as supporting the idea that the US justice system should not seek fairness? That's some weak FUD if you ask me.


I wonder to what extent this creates a feedback loop that isn't fully acknowledged/understood.

As long as the prediction is only one element of decision and not the decision itself, it can probably be useful for judges. If it becomes "the algorithm says", then the feedback loop could have unpredictable consequences.

By nature of a world with finite resources, I also wonder whether the 2.7 million USD could have had a bigger impact on the likelihood to show up if it had been used to put in place, or extend, existing support structures for the majority of the crimes those courts see. It's still worth it to research how technology can help, of course, but it's also hard to run any kind of comparative study when only doing one thing at a time.


In a city of 8 million people, it is not likely that $2.7 million would substantially shift underlying social issues to reduce crime or failure to appear for court dates.

In the example case given, prosecutors appeared to have no evidence whatsoever that the defendant would skip court; the defense presented the algorithm and explained why he likely got the score he did.

I imagine judges may come to heavily weigh the score, but cannot imagine any would ever give such an algorithm a blank check to decide for them.


I might be misunderstanding the cost there, but isn't it more the number of people going through that system that matters, and not the city size?

By that I mean that we could also look at what 2.7 million dollars (or even that flat cost plus operating costs, evened out over the period of the study) would do, were it put towards every one of those defendants in dire conditions. To ensure, for example, that they have at least a temporary place to stay, social programs in place, someone to follow their case and offer a reinforced legal counsel (which is available to everyone for free for sure, but I hope nobody kids themselves about how overworked those people are - I could bet everything I have that anyone with a good income would always pick a private lawyer over duty counsel).

As for your last point: it certainly doesn't seem to be the case right now, but I can very well see it entering the realm of possibilities. Can't you imagine that a system that performs really well on paper, thrown into a field where people are incredibly overworked, would end up being relied on a little too much?

I mean, we see that sort of thing in place for way less reliable things with people who actually know better (consider how many scripts and old bits of code hold much of the world together). And in the medical field, the development and usage of AI is on a steady rise ("chatbots" in psychology, predictive modeling for decision making and disease diagnosis, "telehealth" assessments using data from IoT devices). If AI gave people more work rather than less, it would be of no use after all.


>and offer a reinforced legal counsel (which is available to everyone for free for sure, but I hope nobody kids themselves about how overworked those people are - I could bet everything I have that anyone with a good income would always pick a private lawyer over duty counsel).

Where did you get the idea that everyone gets free legal counsel for civil or criminal actions?

In many, if not most places in the US, in criminal actions, you must prove that you don't have the means to pay a lawyer before one is "appointed by the court." In fact, many places have very low limits on income/assets before providing free counsel.

The requirements surrounding this are a hodgepodge of state and local rules[0].

As for civil actions, most places do not provide legal counsel[1] at all.

[0] https://www.findlaw.com/hirealawyer/do-you-need-a-lawyer/do-...

[1] https://en.wikipedia.org/wiki/Legal_aid_in_the_United_States...


I believe they were referring to England, where you do get legal counsel for free (though my understanding is that restrictions have been placed on that in recent years).


I was also thinking of Canada, France, and Brazil. But nobody9999 is right: in most of the world, it's not a right.


What you are advocating for would not last long at all as a one-time source of funding. On the other hand, letting people out without bail, with no reliance on bail loan sharks, in perpetuity, will do much good on its own. They don't need the bail loan, and they won't lose a job for not showing up to work, as they would if they couldn't make bail, etc.

I think, also, that chatbots are an unfair comparison. Those are specifically designed to decrease triage time for non-emergency scenarios. This is a fundamentally different scenario, where it provides evidence to counter the prosecution. It will always fall to the judge to weigh the score versus what evidence the prosecution provides. If the prosecution has no good reason to demand punishingly high bail, and the model continues to hold up well over time, I see no reason to assume that using it leads to bad outcomes. Remember that the most overworked people are the public defenders, and this is an aid for them.


I understand your point, that's why I tried to say that even a budget "evened out" over the study itself could be interesting to look at.

There are fixed costs in the development of the system, but there are also running costs (owning or renting the servers, improving the ML and the system, gathering more data, curating and maintaining the data, etc.). I don't think anyone reasonably believes that we can just build a box of truth and that nobody need be involved in the project afterwards (even just to handle browser compatibility, security upgrades, system outages, bias in data gathering, etc.).

What I would be interested in seeing is what the very same money put towards such a project would do for a system that desperately needs much more influx of money towards structural pillars like mental health, housing, legal counsel, etc.

Since there's seemingly no A/B testing (against what I mentioned), no double-blind testing (which would test for the impact on the judges), or really any form of comparative testing, it's pretty hard to conclude much about this project. We surely wouldn't jump to conclusions about its usefulness so quickly if the subjects of the study were closer to us (i.e. if they weren't already assumed to have done something wrong to be in that situation).

--

As for the bad outcomes, I wasn't so much talking about the system at the current instant, but more about the feedback loop this creates: suppose the rating is biased against a community of people such that their measured score is lower than their actual likelihood to show up. The judge is then less likely to give them a chance to be out, which reinforces the social bias against that category of people and turns what was an initial bias into a systemic issue.
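A minimal sketch of that loop (the dynamics are invented purely for illustration):

    # Toy model of the feedback loop (dynamics invented for illustration).
    # A group's score starts biased below its true show-up rate; judges
    # lean on the score, more people are detained, detention damages the
    # features the next model sees, and the score drifts further down.
    true_show_up_rate = 0.85
    score = 0.75  # initial bias: under-estimates the group

    for step in range(5):
        detained_fraction = 1.0 - score          # score drives detention
        score -= 0.3 * detained_fraction * (true_show_up_rate - score)
        print(step, round(score, 3))             # monotonically decreasing

Once the gap between the measured score and the true rate opens up, there is no corrective signal inside the loop; the only data the system sees is data it generated itself.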

My masters is in Control Theory so I tend to see systems, feedback loops, and stability of trajectories everywhere, I guess.


In short, society needs to decide the success metric for the algorithm. There seems to be little discussion of it.


> As long as the prediction is only one element of decision and not the decision itself, it can probably be useful for judges.

Whatever reasoning a judge may have to overrule the algorithm must be quantifiable -- otherwise it would not exist in the judge's mind. Given enough data and compute, judges will make worse decisions than algorithms, even on the edge cases.

Free the data. Train the nets. Highlight and eliminate any source of human bias and error we can find. Measure outcomes. Train again. It's the only rational way forward.


I know a lot of people here object to systems like these, but I think they're really great, at least in principle. It's certainly true that early iterations will have problems of bias, and that's bad. But ultimately having an explicit model allows us to actually interrogate and correct those biases in a permanent way. You can't do that with a subjective system. Having an actual model, even if it starts out biased, that we can iterate on and tweak empirically seems like an invaluable step towards solving systemic racial and other forms of inequality in our justice system.


I think you are wrong.

1. From a technical point of view, the mere fact of a single model being responsible for all decisions is bad. The inherent variance of judges' opinions in concrete cases is the best way to fight bias. Bias by definition means less variance. Consolidating all decision-making will tremendously worsen the bias.

2. From a moral point of view, law is made by people for people. It is a convention and ritual which gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.

3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge seeing a prediction by an algo is forced to comply with the algo; otherwise he'll be scrutinized for dismissing objective facts.


> 1. From a technical point of view, the mere fact of a single model being responsible for all decisions is bad. The inherent variance of judges' opinions in concrete cases is the best way to fight bias. Bias by definition means less variance. Consolidating all decision-making will tremendously worsen the bias.

Who said there has to be one model? Bias also definitely does not mean less variance. If I were to try to flesh out the argument I think you're trying to make, it'd go something like this: judges have uncorrelated biases, and that low correlation between their biases ultimately results in a lower overall bias than a unified model. However, whether that's actually true in practice hinges on two things: how uncorrelated those biases actually are, and their magnitudes relative to those of the unified model.
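A toy simulation of that fleshed-out argument (all the distributions here are assumptions, purely illustrative):

    # Toy simulation (assumptions mine): each judge carries an independent,
    # zero-mean bias, so across many cases the biases partially cancel; a
    # single shared model applies one fixed bias to every case. Whether the
    # real world looks like this is exactly the open question above.
    import random
    random.seed(1)

    judges = [random.gauss(0.0, 0.10) for _ in range(200)]  # uncorrelated
    model_bias = 0.05                                        # shared by all

    cases = 100_000
    avg_judge_bias = sum(random.choice(judges) for _ in range(cases)) / cases
    print("avg bias, many judges:", round(avg_judge_bias, 4))  # near 0
    print("avg bias, one model:  ", model_bias)                # fixed 0.05

Flip the assumptions (give the judges correlated, same-sign biases) and the ensemble loses its advantage.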

> 2. From a moral point of view, law is made by people for people. It is a convention and ritual which gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.

I don't believe that ML is at the point where it should be the sole arbiter of these things, no. But I do believe that a properly calibrated model can be a very useful guide to these sorts of decisions.

> 3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge seeing a prediction by an algo is forced to comply with the algo; otherwise he'll be scrutinized for dismissing objective facts.

This is just an education problem though.


The problem is that the system isn’t falsifiable. If the model says someone won’t turn up to court, they are locked in the cells until their trial; how will you know whether it was right or not? If you randomly release people as a test of the model, that is firstly unfair, and secondly it won’t necessarily reveal small-scale problems with the model that only affect certain groups of people, because the sample will be small.

This also ignores the large impact that going to jail tends to have on people’s lives. I don’t think it is reasonable to simply dismiss the likelihood of bias.


For any one instance? You can't. But in aggregate, we do it all the time:

https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_diver...
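In practice, that aggregate check can be as simple as comparing the outcome distribution the model predicted with the one actually observed (a minimal sketch, with made-up numbers):

    # Minimal sketch: KL divergence between what the model predicted in
    # aggregate and what was actually observed (numbers made up).
    import math

    def kl_divergence(p, q):
        # D_KL(P || Q) = sum_i p_i * log(p_i / q_i)
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    predicted = [0.90, 0.10]  # model: 90% appear, 10% fail to appear
    observed  = [0.86, 0.14]  # aggregate outcomes actually recorded
    print(kl_divergence(observed, predicted))  # ~0.008 nats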


> But ultimately having an explicit model allows us to actually interrogate and correct those biases in a permanent way.

Who is "us"? And does it?

Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.

Besides, principles shminciples. The way to judge these systems is by the actual impact they have. Will this result in a more just system? If not, we can put the principles right into /dev/null.

The article does seem to address this:

> Among the debates was how to balance accuracy and fairness, said Ojmarrh Mitchell, a professor at Arizona State University who served on the panel. An accurate algorithm would do a good job of predicting whether defendants showed up, and thus whether to recommend release, Mr. Mitchell said, and a fair algorithm wouldn’t result in more release recommendations for white defendants than for others.

> More than a year of tinkering ensued. To decrease the differences in outcomes for different races and ethnicities, the researchers excluded data on low-level marijuana offenses and “theft of service,” mainly subway turnstile jumping. Removing fare beating and marijuana arrests lowered the racial disparity in the tool by 0.4%, essentially making it slightly fairer but a little less accurate.

> Data released this month show the new tool’s recommendations didn’t have such racial disparities. From Nov. 12 through March 17, the algorithm recommended releasing without conditions 83.9% of Blacks, 83.5% of whites and 85.8% of Hispanics. Defendants with higher scores returned to court more often than those with lower scores, showing the algorithm seemingly made accurate predictions.

...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.

> Judges didn’t like it. “They said, ‘I was loving this, all this data-driven, evidence-based demonstration of what’s predictive. Now you’re putting a policy thumb on this,’ ” said Susan Sommer, general counsel at the Mayor’s Office of Criminal Justice.

> The researchers added those charges back in.
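For what it's worth, the parity criterion quoted above is trivial to compute; a minimal sketch using the figures from the article:

    # The fairness check the panel used, per the article: release
    # recommendations shouldn't differ much across groups. The rates are
    # the ones quoted above (Nov 12 through March 17).
    release_rates = {"Black": 0.839, "white": 0.835, "Hispanic": 0.858}

    disparity = max(release_rates.values()) - min(release_rates.values())
    print(f"max group disparity: {disparity:.1%}")  # 2.3%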


> Who is "us"? And does it?

> Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.

So don't accept contracts with those stipulations.

> ...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.

That's my point though. Bias we can actually inspect and understand and iterate on is better than bias that we have no insight into at all.


> So don't accept contracts with those stipulations.

Shockingly, the government doesn't confer with me before making these decisions.


My point is that your criticism is of the types of contracts being signed, not the concepts themselves.


What about negative feedback loops? If (when) the system consistently makes bad decisions, and people react negatively, the system will reinforce those bad decisions with newfound evidence that it was correct. Seems like the story of law enforcement in this country, except with the shine of a "system" doing it, which will be trusted as unbiased.


Surely it would be better to concentrate on the existing sources of bias, e.g. the judges? This would presumably carry over to their other duties.

Perhaps this model could be used to generate a set of rules that were clear and less open to interpretation by judges, and in that way it could be valuable.


How are we going to eliminate the human bias of judges?


And I'm sure the ethnic groups targeted by such systems are just thrilled to have their freedom determined by them for the sake of "research".


There is no inherent problem with using an algorithm for these decisions. The growing problem seems to be the lack of algorithms!

If the specific rules for a decision are not understandable by humans, then we're not talking about an "algorithm" but rather an opaque model. An algorithm would be something like: defendant gave address and phone number (1 point); defendant has no prior convictions within the past 3 years (3 points); and so on, with rules ultimately understandable by humans, who can debate and modify specific rules, such as ones based on race or ones with disparate impact, like zip code. This is 1970s tech, not 2020s.
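For concreteness, a rule set of that kind fits in a few auditable lines; a minimal sketch (the point values are this comment's invented examples, not NYC's actual weights):

    # A human-auditable scoring rule of the kind described above. The
    # point values are this comment's invented examples, not NYC's
    # actual weights.
    def release_score(defendant):
        score = 0
        if defendant["gave_address_and_phone"]:
            score += 1
        if defendant["no_convictions_past_3_years"]:
            score += 3
        # ...each further rule is one visible, debatable line
        return score

    print(release_score({"gave_address_and_phone": True,
                         "no_convictions_past_3_years": True}))  # prints 4

Every rule is visible, so a disputed input (like zip code) can be removed and the effect measured, which is exactly the kind of debate the article describes.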

I don't know about NYC's system, and it's hard to tell the specifics from the article, which only makes reference to a few inputs but never says how the inputs affect the output. If there are specific scoring rules that can be individually debated, then the article is trying to drum up worry by leaving them out. If NYC's system is based on inputting data to a proprietary tool that only outputs a score, and citizens are unable to know how it works, then the article needs to drive home the black-box unaccountability aspect (and stop using the word "algorithm"), rather than focusing on the imposition of requiring a phone number.


Why not scale cash bail to match income/wealth? Then people of all economic levels will be affected by it more or less equally and none of us will have our fate in the hands of an opaque proprietary algorithm.


I was wondering recently why fines are not scaled to income. A $200 fine is no big deal for a wealthy person, but devastating for a poor person. Why shouldn’t the wealthy person’s fine be equally devastating? $2,000, $20,000, $200,000, maybe $200,000,000 for Jeff Bezos.


Some places do this! Here’s a 54,000 euro speeding ticket in Finland [1]:

> Finland's speeding fines are linked to income, with penalties calculated on daily earnings, meaning high earners get hit with bigger penalties for breaking the law. So, when businessman Reima Kuisla was caught doing 103km/h (64mph) in an area where the speed limit is 80km/h (50mph), authorities turned to his 2013 tax return, the Iltalehti newspaper reports. He earned 6.5m euros (£4.72m) that year, so was told to hand over 54,000 euros.

[1] https://www.bbc.com/news/blogs-news-from-elsewhere-31709454
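Finland's scheme is a "day fine": the offense determines a number of day-fines, and each day-fine is priced from the offender's income. A simplified sketch (the real formula also deducts a basic living allowance and adjusts for dependents):

    # Simplified day-fine sketch (the actual Finnish formula also deducts
    # a living allowance): one day-fine is roughly monthly net income / 60,
    # and the offense determines how many day-fines are owed.
    def day_fine_total(monthly_net_income, n_days):
        return n_days * monthly_net_income / 60

    # The businessman above: ~6.5M EUR/year is ~541,667 EUR/month.
    print(day_fine_total(541_667, 6))  # ~54,167 EUR, close to the reported fine
    print(day_fine_total(2_500, 6))    # same offense, modest earner: 250 EUR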


See, this is what I'm talking about. The government (whether state or federal) already knows how much a person makes from their tax returns. If a person doesn't have a tax return, give them the lowest fine possible.


You're saying that in cases where the penalty is monetary, the punishment should fit the criminal rather than fitting the crime.

I think you should consider some of the ethical implications of imposing vastly different punishments for the same behavior.


I think a question is: is the same dollar amount the same punishment? Jeff Bezos doesn’t need to think twice about a $10k fine but there are plenty of people for whom that would be ruinous. Arguably, the “sameness” has to do with the impact of the fine, not the amount.


Well no, the punishment should fit the crime. It's just that punishing a poor person by taking $1000 could kill them, and punishing a rich person by taking $1000 is barely even a punishment.


Given progressive taxation, it already takes higher income persons more pre-tax income to come up with $1000 post-tax.


The punishment is $1000. 2nd order consequences, while possibly serious, aren't part of the punishment.


These "2nd order consequences" are the whole point of punishment. Money is only means of delivering them.


That's the same as tying the consequences a person faces for a crime to their wealth (given how predictable the severity of these "2nd order consequences" are!) and that is, to me, terribly unjust.


> That's the same as tying the consequences a person faces for a crime to their wealth (given how predictable the severity of these "2nd order consequences" are!)

I would actually say it's the opposite.

If you punish everyone with the same fine, their wealth is divorced from the punishment.

If you punish each person depending on their wealth, the punishment is directly proportional to their wealth.

In saying that, very poor people are definitely disproportionately affected by equal punishment. Though you could also say poor people would be more motivated to commit crimes if the punishment was proportional, since they stand to gain a lot more than a rich person.

I don't think there's a right answer.


> I think you should consider some of the ethical implications of imposing vastly different punishments for the same behavior.

You wouldn't, you'd impose the same punishment on everyone, e.g. 1% of net worth.


Rich people being a nuisance while piling up petty fines usually isn’t an issue, because repeat offenses carry escalating fines, loss of a driver’s license, or jail time.


This is extremely unjust. Just because someone produces more value for society (works harder, is more intelligent, and is therefore more useful to their fellow humans and thus gets higher compensation) shouldn't mean that society takes more from them when they break the same laws as everyone else. A poor person speeding and a rich person speeding are both creating the same negative externality and should compensate society by the same amount.


Consider that a poorer person's life could be devastated by just a few offences, while a very rich person never bothers to ever consider laws as they can easily buy their way out of nearly all of them thousands of times over.

That should probably be priced in relative to their ability to buy get-out-of-jail cards.

Edit: buy, not 'but'


> While a very rich person never bothers to ever consider laws as they can easily buy their way out of nearly all of them thousands of times over.

In reality, rich people have lower actual crime rates than poor people. Compliance with the law is as much a financial rounding error as fines. The police in rich neighborhoods have easier jobs. Your typical rich person isn’t going around making a nuisance of himself. Even John DuPont kept his behavior on his own property.


Is it really a lower crime rate, or are fewer crimes reported against them? If so, it could be because their crimes are more white-collar and complex, or because they can buy silence or a settlement.


It really is lower. Not just from basic crime reports, but from other ways of measuring, like population surveys of victims. Nonscientifically, my feeling is that the median rates of finable crimes are about the same (~0), but all the people who can't help but go around racking up infractions are usually bad at holding down a job too, so they're less often rich, unless they're a successful misfit. Like, Marco Rubio comes to mind; he got a lot of speeding tickets. And two guys I know who are somewhat successful startup founders seem to have expressed the notion that getting pulled over is something you expect to happen a lot, and I can't imagine them enjoying life in middle management.


The actual dividing line in terms of wealth and avoiding fines is more like people who are broke but sensible, versus people who have an income.


Should someone that dreads prison (say, they are physically weak, or have an anxiety disorder) be given less prison time than someone who is more comfortable there? I think we all recognize that this would be unjust, and I think the same reasoning applies to financial penalties.

If a poorer person (say, someone that works 6 hours a day at Walmart) and a richer person (say, someone that works 12 hours a day at Walmart) both steal $10,000 from someone, they have both caused the same amount of damage to the harmed individual. That is, they caused $10,000+moral damages (let's say $20,000 total damage). Why should the person that slaved harder and provided more value to society fork out more when they caused the same damage as a poorer person? If anything, the poorer person should pay more as recompense for being less useful to their fellow humans and broader society in aggregate. Of course money isn't the perfect proxy for societal value add but it's the best we have, especially in the context of a strong rule of law which will punish fraud etc.


It's not about justice, it's about pragmatic deterrents.


$200,000,000 is really not going to be noticeable to Bezos.


You can't be serious, his wealth is likely all tied up in stock and investments. Does even the richest person in the world have a cool 1 bil just sat in the bank? In that case it's very noticeable to have 1/5 your balance vanish.


Of course it’s in investments.

There is no bank account of cash involved.

But $200M is less than .2% of his wealth.

It’s not even 10% of the annual growth in his wealth.

Bezos can write a check for $200,000,000 today and his bankers will cover it.

He can spend $1,000,000 to pay Goldman Sachs to leverage his investments to most efficiently cover just that one check’s cash value, and not even notice it.


The challenge with scaling it is that many criminal defendants have no regular income and extremely little wealth, so any nonzero amount of bail is impossible for them to pay on their own.


How should negative net worth and/or no income be handled?



A system like this is on the ballot in California this year:

https://en.wikipedia.org/wiki/2020_California_Proposition_25

A "yes" vote would replace all cash bail with a ruling from the risk-assessment system.


To be clear, without this proposition, cash bail in California was already going away. So this is a "do-over" attempt by cash bail proponents via the proposition system.

There’s only been one poll so far (39% to uphold the new law, thus ending cash bail, 32% repeal, 29% undecided):

https://ballotpedia.org/California_Proposition_25,_Replace_C...


We need a new theory and social contract for reconciling individual agency with statistical markers. Until it is in place, all algorithms will be unfair, and will be whatever the worst -ism of the day on social media is.

Unfortunately, thinking on the subject usually leads to despair and pouring a quadruple bourbon.


Failure to appear rates are consistently under 5% in NYC. The cost of incarceration is approximately $500/night. The most common reason cited for missing court, approximately 70%, is childcare/elder care.

Failure to make bail more often results in job loss, homelessness, etc.

Quite apart from the obvious injustice of incarcerating only the poor before judgment, the very simple economics of bail are a complete and utter failure.

The only justification for the current system is cruelty.

The singular position of conservatives, simply put, is that there must be out-groups whom the law binds but does not protect, and in-groups whom the law protects but does not bind.


Oregon has the improved version of the algorithm:

    if perp_is_left_wing:
        release()
        drop_charges()
It's been running for 120 days and counting, in a fiery but mostly peaceful manner, per CNN reports. Only a few people have been murdered. Federal authorities submitted a patch the other day to deputize officers as federal marshals so that you never end up in this branch. We'll see if that helps.




