Hacker News

I think you are wrong.

1. From a technical point of view, the mere fact of a single model being responsible for all decisions is bad. The inherent variance of judges' opinions in concrete cases is the best way to fight bias. Bias by definition means less variance. Consolidating all decision making will tremendously worsen the bias.

2. From a moral point of view, law is made by people for people. It is a convention and ritual that gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.

3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge who sees a prediction from an algorithm is effectively forced to comply with it, because otherwise he'll be scrutinized for dismissing "objective" facts.




> 1. From a technical point of view, the mere fact of a single model being responsible for all decisions is bad. The inherent variance of judges' opinions in concrete cases is the best way to fight bias. Bias by definition means less variance. Consolidating all decision making will tremendously worsen the bias.

Who said there has to be one model? And bias definitely does not mean less variance. If I were to flesh out the argument I think you're trying to make, it would go something like this: judges have uncorrelated biases, and that low correlation between their biases ultimately results in a lower aggregate bias than a unified model's. Whether that's actually true in practice hinges on two things: how uncorrelated those biases actually are, and their magnitudes relative to those of the unified model.
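A toy simulation of that argument, with entirely made-up numbers: assume 100 judges whose idiosyncratic biases are drawn independently around zero (a big assumption, and exactly the one in dispute), versus one model with a single fixed bias. With random case assignment, the judges' biases partially cancel in aggregate, while the model's bias hits every case identically:

```python
import random

random.seed(0)

TRUE_VALUE = 5.0   # hypothetical "correct" sentence length, in years
N_JUDGES = 100
N_CASES = 10_000

# Assumption: each judge's bias is an independent draw centered on zero.
judge_biases = [random.gauss(0.0, 1.0) for _ in range(N_JUDGES)]

# Assumption: a single unified model carries one fixed bias on every case.
model_bias = 1.0

# Cases assigned to judges at random: per-case errors are noisy, but
# uncorrelated biases largely wash out in the aggregate.
judge_errors = [judge_biases[random.randrange(N_JUDGES)] for _ in range(N_CASES)]
mean_judge_error = sum(judge_errors) / N_CASES
mean_model_error = model_bias

print(f"systemic error, many judges: {mean_judge_error:+.3f}")
print(f"systemic error, one model:   {mean_model_error:+.3f}")
```

Note what this doesn't show: the per-case variance is much higher with many judges, so any individual defendant still faces a lottery. If the judges' biases were correlated (say, all skewed against the same group), the aggregate advantage disappears, which is precisely why the two empirical questions above matter.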

> 2. From a moral point of view, law is made by people for people. It is a convention and ritual that gets its moral validity from its connection with tradition. We can hardly quantify these things, let alone incorporate them into an ML algorithm. We ensure them by having people study for years, take bar exams, and go through apprenticeship programs. Do you really believe the state of the art in ML can come close to this? As a practitioner of ML, I know we can at most replace human activities that usually don't require any training and are done by a human in a few seconds.

I don't believe that ML is at the point where it should be the sole arbiter of these things, no. But I do believe that a properly calibrated model can be a very useful guide to these sorts of decisions.

> 3. ML is really crappy, but it is hyped as fact-based and scientific. This is dangerous: a judge who sees a prediction from an algorithm is effectively forced to comply with it, because otherwise he'll be scrutinized for dismissing "objective" facts.

This is just an education problem, though.



