40 hours a week times 52 weeks is 2080 hours. Subtract a few weeks for vacations and holidays, and you get a little less than 2000 hours. So, basically, this is a little more than one programmer-year of effort if the estimate is in the right ballpark.
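The arithmetic above, spelled out (the vacation allowance is an assumption):

```python
# Back-of-the-envelope check of the programmer-year estimate above.
hours_per_week = 40
weeks_per_year = 52
gross_hours = hours_per_week * weeks_per_year              # 2080
vacation_weeks = 3                                         # assumed vacation + holidays
net_hours = gross_hours - vacation_weeks * hours_per_week  # a little under 2000
```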
It's gross that the decision not to fix this carries an apparent implicit economic calculation that one programmer-year is more valuable than the freedom that is being denied to an unknown number of people whom society deems less important. (Granted the actual situation is more complicated and the state is constrained by their contract with the vendor, which we can reasonably guess is going to charge as much as they can contractually get away with rather than the programmer's actual salary cost.)
At least the Department of Corrections has assigned people to do the calculations manually. That's better, but it sounds like they just don't have enough people on it to keep up.
Previously, it seems like there was a single standard, applied universally: 1 day of earned release credit for every 6 days served. The new rules have many more inputs, with lots of caveats: only certain offenses are eligible, the inmate cannot have been convicted of some other types of offenses, the inmate must have completed specific courses, and the inmate can't have previously been convicted of certain felonies.
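As a rough sketch of the difference in complexity: the old rule is one arithmetic expression, while the new one is a multi-input predicate. All field names and rule contents below are invented for illustration; Arizona's actual criteria are more involved.

```python
# Hypothetical sketch of the new multi-input eligibility check described above.
from dataclasses import dataclass, field

@dataclass
class Inmate:
    current_offense: str
    prior_felonies: set = field(default_factory=set)
    completed_courses: set = field(default_factory=set)

ELIGIBLE_OFFENSES = {"drug_possession"}          # only certain offenses qualify
DISQUALIFYING_PRIORS = {"violent_felony"}        # certain prior felonies disqualify
REQUIRED_COURSES = {"substance_abuse_program"}   # specific coursework required

def earns_enhanced_credit(inmate: Inmate) -> bool:
    return (inmate.current_offense in ELIGIBLE_OFFENSES
            and not (inmate.prior_felonies & DISQUALIFYING_PRIORS)
            and REQUIRED_COURSES <= inmate.completed_courses)
```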
The 2k-hour estimate may well be excessive, but even if it took 20k hours, that would just mean they should mothball the software and do it manually. Either way, calling it a "bug" is misleading, IMO.
I'm guessing that up till now, the earned time off was calculated in real time. Basically all you need is the day they entered: take the days served, divide by 6, truncate, and there are your credit days. If there are infractions that cost days, you could still probably do it in a single SQL query.
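That old rule is simple enough to sketch in a few lines (the dates and the infraction handling here are illustrative):

```python
# Sketch of the old universal rule: 1 day of earned release credit per
# 6 days served, computed from the intake date. Illustrative only.
from datetime import date

def earned_credit_days(intake: date, today: date, infraction_days: int = 0) -> int:
    days_served = (today - intake).days
    # integer division truncates, as described above; infractions deduct credit
    return max(days_served // 6 - infraction_days, 0)
```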
This new system will have to know what kind of crime they committed, which might mean integrating with some byzantine government software from the 90s that looks like it's from the 70s. Previously they only really needed the prison system's records, which may or may not include their past crimes.
I'm guessing they're worried the integration won't be as easy as proposed. I wouldn't be surprised if it takes a month just to get the dev access to everything. I might even be surprised if it was that short.
You'd have the private prisons and the prison guards union climbing up everyone's posteriors.
Spending money will remain an economic decision until government agencies can be fueled by the righteous indignation of their critics rather than by a line item in their budget. Until you can convert that indignation into legal tender, agencies will remain subject to old-fashioned accounting constraints.
* a UI change to add the checkbox for this information
* a UI change to view whether the box is already checked
* a data model change to store the information
* a business logic change that modifies critical code calculating when someone should be released
* a security/access control change to decide who is allowed to check this box
* auditing logic to keep logs of all of this
* possibly new UI/management code to add/remove members of the group of people who can check the box
* procedures for tracking who completes what, plus documentation, training, and auditing
* testing the whole thing
Probably for piles of who knows what where the original author is long gone and the test code is no longer functional.
Just to keep things in perspective.
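A minimal sketch of just the data-model and auditing items from that list, with hypothetical names (real access control, approval workflow, and persistence omitted):

```python
# Minimal sketch of the checkbox + audit-log pieces from the checklist above.
# All names are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CourseCompletion:
    inmate_id: str
    course_id: str
    verified: bool = False        # the "checkbox"

audit_log = []

def set_verified(record: CourseCompletion, actor: str, allowed_actors: set):
    if actor not in allowed_actors:              # access control
        raise PermissionError(f"{actor} may not verify completions")
    record.verified = True
    audit_log.append((datetime.now(timezone.utc), actor,
                      record.inmate_id, record.course_id))  # auditing
```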
2. What I described is not the actual fix, it is the temporary stopgap. The prison department isn't going to pay someone to click checkboxes all the time for tens of thousands of inmates each year. They will want this info to be set automatically -- e.g. an integration from whatever software system(s) are used to track completion of the coursework to this system, so you are not hiring another employee to sit and click all the time nor do you need to create reporting procedures to get that info into the hands of the person who is clicking. The appropriate design then requires automation, which will require security controls, and it's a pain. It could easily be more than a year of work, again depending on how many systems they need to integrate against, what types of sign off/controls are required, how much paperwork is required, etc.
For example, maybe the coursework has no software tracking, in which case they need to throw up a portal and have the people running the course fill out who did what, and then throw up another portal to have someone else review that.
Lots of stuff ends up being passed around by ftp or csv uploads. I've seen horror stories. So it really depends on how they plan to do this integration -- the manual button clicking was just an example of a least effort system that relied on a lot of manual labor, but perhaps this is not in their budget either.
Yes, it's a few weeks of coding at best. Maybe more if they need to integrate with some external systems for data ingestion.
But it's not a lean agile MVP for a startup. Red tape is just as much a deliverable as the function itself.
And 2000 hours for a four-person team is a quarter of a year. Sure, it might take less for a full-stack ninja with 5 years of rockstar experience, but those folks are not available for some reason (Pink Floyd: Money!).
Seems plausible it could balloon.
I’m surprised this doesn’t create a massive liability for the state.
That would basically result in patients not getting their ostomy bags on time, and I can't even imagine what would follow. What would the reactions of patients and their relatives be, what levels of stress would hospitals' employees be subjected to, and so on.
I left the company some months after that, and I don't know what the final decision was, but they had been warned.
Maybe one day some set of ethical standards will be considered non-functional requirements as important as robustness, security and others.
With technicians being responsible for warning their managers, managers being responsible for assessing risks and documenting their decisions, everything being made transparently and everybody being accountable.
That this problem is allowed to persist seems like an indication that the people in charge believe that prisoners have a low probability of successfully suing the state for damages.
If Arizona isn't acting quickly enough, I wonder if the federal government can get involved?
I think about that every time I read about another government (or private!) organization that wastes tens or hundreds of millions of dollars (or euros, or pounds) on custom software.
It seems like there should be 1, 2, or 3 DMV programs. The same for building codes, tax codes, etc. And prison software. You can be more like Massachusetts or Mississippi or Montana (hypothetical examples) but pick one and harmonize with it.
1: compiled is the lowest of 3 standards that outside accountants can do; "reviewed" is higher and "audited" is the highest. Even at the compiled level they mailed out postcards to a certain number of customers asking if they were customers over the past year and had spent this much money. It was fairly easy for the acquiring company's outside accountants to review PWC's work and bring it up to audited standard.
How many unemployment systems, prisoner tracking systems, DMV systems do you need? These are common components across governments.
Example: Login.gov now supports local and state government partners. Your constituent IAM needs can now be met by a federal team that is efficient and competent, instead of every city and state reinventing the wheel (poorly and expensively).
Outside of functions that are joint state-federal to start with, states tend to treat the federal government as just another outside sovereign (and one whose Administration is intermittently actively politically hostile), which is worse than a private contractor in terms of being able to get them to uphold their end of a contract.
So, not someone you’d outsource to unless you were more concerned about having someone else to blame if things go wrong than actually being able to assure that things go right.
> How many unemployment systems, prisoner tracking systems, DMV systems do you need? These are common components across governments.
Mostly, not, because while the names may be the same, the actual laws setting the system requirements tend to be radically different.
Other states might want to do the same, although the fees would probably differ. So the idea is that 10 or 15 states cluster around one solution for a department, 20 for another, 10 for a third and the rest go their own way. The states would have a lot of power in being able to replace working solution A with B or C. So there's 3 or 4 DMV vendors, there's 3 or 4 unemployment vendors, some for contact tracing (my state of Oregon still hasn't implemented the Google/Apple tracing), and so on.
The current situation is that you know a potential replacement will be late and over budget, you just don't know exactly how bad it will be. And Accenture and IBM like it that way and are very adept at persuading the decision makers that they're very special snowflakes and can't use an off-the-shelf solution.
The solution to which is “stop doing big-bang replacements of nontrivial operational systems, instead of incremental ship-of-theseus replacements, using something like the strangler pattern.” And that applies to initially automating existing manual processes, too.
Yes, but not just using existing ones, but having public agency specialized software be developed in the open in free software projects, which could then be forked, remixed, or used as is with or without upstream contribution, as appropriate, rather than closed silos.
I know somebody who audits municipalities. We did a graph that showed relations between different players. It’s basically just a big insider club of usually 20-40 people and families that give contracts to each other at the expense of the tax payer.
Locally, the city decided to disconnect its open air reservoirs and replace them with reservoirs underground. It's a huge construction project and there are supposed to be close relationships between the water bureau and the prime contractor. Close as the #2 at the water bureau being married to a VP at the contractor. People retiring from the bureau and going to work for the company. And so on.
The city, county, and state legally have to publish notices in a paper that meets certain standards. If the city doesn't like its coverage, it can move its advertising. This probably didn't matter 20 years ago, but newspapers want/need every cent they can get.
I think the problem is that unlike our more notable branches, we don't hire experts in the field. I don't mean they're incompetent at technology, but that a problem like this really exists at the intersection of government and technology. We keep hiring general-purpose contractors to build things like this, and then we're shocked when it falls apart in the environment governments exist in.
We need companies that specialize in this intersection. Companies that can keep public sentiment in mind and build an architecture that's flexible in the places where society is. It's the same way that most of us in general purpose IT try to build systems that can adapt to changes in the IT landscape. Put it in Docker so we can run it on a cloud, on bare metal, on k8s and probably on whatever's next. Governments struggle to pivot like that due to funding (how do you argue for funding for features since you can't earn revenue?), and because a lot of it is legislated out of their control. Learning to read the public sentiment is just like us reading trends in a newsletter.
In your cases you have items like: accounting, building codes, tax codes, automobile codes, etc.
While it makes sense to try and harmonize with the general policies, every state, every municipality, and every business is going to have special cases. Even software has edge cases for protocol behaviors.
What would be nicer, imho, is if all of these laws were written in domain specific languages that specify the law and then the software could just pick up the definitions signed into law. Lawyers as they are feel like a combination of legal interpreters, combined with a combination of being red/blue security team members depending on what they are doing.
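There are research languages aimed at exactly this; Catala, for example, was designed for encoding statutes as executable specifications. As a toy sketch of the idea, with invented rule contents: the statute's parameters become a declarative table the software loads, rather than logic baked into code.

```python
# Toy sketch of "law as data": credit-accrual rates live in a rule table
# keyed by effective date. Rule contents are invented for illustration.
RULES = [
    # (effective_date, credit_days_earned, per_days_served), sorted by date
    ("1994-01-01", 1, 6),   # old rule: 1 day of credit per 6 served
    ("2019-06-01", 1, 3),   # hypothetical amended rate for eligible inmates
]

def rate_in_effect(on_date: str):
    # ISO date strings compare correctly as plain strings
    applicable = [r for r in RULES if r[0] <= on_date]
    return applicable[-1][1:]   # latest rule not after the given date
```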
are there popular languages for implementing these types of DSLs?
Kind of defeats the entire purpose of having states to start with.
If we want to make the US a centralized, unitary state, let's do that through the elected central government and not through deferral to IT contractors.
My immediate reaction is that either (1) it is possible, and the story is therefore more nuanced than it might appear at first glance, or (2) it is not possible, and this is an even more egregious problem.
You know ... the obvious way.
I think you misunderstand the nature of authority by thinking that they would recognize this as an exception.
> You know ... the obvious way.
Whats obvious here is that an incarcerated person only has the options that the carceral state permits to them.
This is not the exception, this is the rule.
> and to have the Assistance of Counsel for his defence.
Prisoners are constitutionally required to have "reasonable" access to counsel. I'm sure there's heaps of case law on what exactly is "reasonable", and there's always a risk that the guards won't allow it. If they don't, and you can prove it, you have a very good constitutional case. Kansas had to release like 70 inmates because it was discovered guards were recording inmate phone calls with counsel and releasing the tapes to prosecutors.
Yes, thank you for the tautology. You'll note that this was exactly my question: does the state make such exceptions?
If you don't know, that's perfectly fine, but respectfully, your apparent mistrust of authority is hardly interesting to internet strangers. We're more interested in any factual elements you might have. Please, if you have any information specific to this case that we've overlooked, feel free to share. If not, hopefully someone else does!
The idea that someone in authority, on being reminded that they're wrong, will turn around and do what's being asked by their subject is contrary to virtually every study on authority and, indeed, daily experience.
This entire discussion is in response to an article highlighting the prison system's refusal to update a system to provide inmates with their guaranteed rights.
Of course the state, in principle, makes such exceptions. The question is whether those exceptions are respected. And especially in light of the context in which we're debating (the article), yours and not the parent's seems the extraordinary claim.
Why would they?
> I'm a bit perplexed by your responses -- it seems like you either have information you aren't sharing, or like this subject make you impassioned.
I’m sorry if I come across negatively. I am impassioned about this. People who are locked up have no recognized rights. They are at the mercy of the guards. Not the legislature, not the prison system, the guards.
You would think that heating prisons, so inmates don't shiver even wearing all the clothes they can pile on, would be obvious. Yet every year the reports are the same: some prisons can't manage to do it, and they willfully tolerate it while forcing inmates to stay in their cells.
Nor do regulators care much, nor does the public.
Look at all the documented law enforcement abuses. Prison guards are also enforcing laws, but inmates rarely have protests well documented by journalists. Think about that for a second. All the incentives are set up in the most fucked-up way, too. If an inmate reports a problem with the guards and it doesn't get solved (or even if it does), they are still stuck with those guards, who can and will make their life even more miserable. If they report it after release? Nobody cares; why didn't they speak up when it happened, and so on. So the system is pretty resistant to change (improvement).
That's just my assumption. Remember that prisoners tend to be from less privileged backgrounds, and some may be very ignorant of how the law works or even functionally illiterate. So things that seem "obvious" to educated engineers may not be obvious to them.
More commonly though the people wouldn't even know to contact their lawyer, because they are credited for time served pre-conviction.
All of a sudden that person could no longer make calls for 30 days, and they did nothing wrong to get that.
If corrections staff were held personally liable for these failures, or the local jurisdiction faced steep financial penalties, it wouldn’t happen. No liability, no responsibility.
That is spot on, and generalizes well.
"iot vendors make post-sales money if they collect data from their device"
"phone vendors make money if they bundle terrible apps with their phone"
"robocallers make lots of money, with historically no fines paid out for violations"
They wouldn't even know the first thing about how to hire someone capable of doing this. They'd have to hire a consultant to hire another consultant.
In a situation where the computer says "No!" but the law says "Yes", whoever signed off on the purchase/maintenance of that computer should be held as responsible as if they'd made that decision themselves.
There should be a very simple and obvious answer for any of these over-incarcerated inmates to the question "Who, as in which individual person, do I point an ambulance chasing no win no fee lawyer at for a compensation claim?"
I think the general understanding of paper filing systems vs. computer systems is less specialized!
Here's the relevant statute:
13-1303. Unlawful imprisonment; classification; definition
A. A person commits unlawful imprisonment by knowingly restraining another person.
B. In any prosecution for unlawful imprisonment, it is a defense that:
1. The restraint was accomplished by a peace officer or detention officer acting in good faith in the lawful performance of his duty; or
2. The defendant is a relative of the person restrained and the defendant's sole intent is to assume lawful custody of that person and the restraint was accomplished without physical injury.
C. Unlawful imprisonment is a class 6 felony unless the victim is released voluntarily by the defendant without physical injury in a safe place before arrest in which case it is a class 1 misdemeanor.
D. For the purposes of this section, "detention officer" means a person other than an elected official who is employed by a county, city or town and who is responsible for the supervision, protection, care, custody or control of inmates in a county or municipal correctional institution. Detention officer does not include counselors or secretarial, clerical or professionally trained personnel.
Assumption being that a detention officer is not acting in good faith if they have a list of people who should no longer be detained under state law.
> Assumption being that a detention officer is not acting in good faith if they have a list of people who should no longer be detained under state law.
I agree with your premise and assertion, but I'm not sure that's exactly what's happening here. I'd like to preface this by saying I absolutely believe there need to be ramifications; I'm just not sure that it fits "clearly defined false imprisonment." I think a category would have to be added to the false imprisonment statute for "negligence" for this to be considered false imprisonment and let me tell you why:
From what I can tell, this article is talking about a couple of massive issues but the wrongful imprisonment bit is about a specific bug (SB1310) in ACIS that can't calculate an updated release date for inmates that complete special programs that award additional release credits as per an amendment signed into law in 2019. Since they can't automatically update a release date for individuals that have completed this program, they keep track of it manually. To me, the article doesn't read like they have a list of people who should be released but aren't being released because the software says so; from my very limited perspective it reads like there are certain programs an inmate can complete to earn extra release credits and since the system can't track these extra credits, the detention officers do it manually. I would imagine their manual process goes something like this:
1) Compile list of inmates that have earned extra release credits through the aforementioned release programming.
2) Select inmate from list, possibly in order of original release date, earliest first.
3) Calculate the amount of release credits they received from completion of the programming.
4) Calculate the total hours those credits equal.
5) Deduct hours from release date.
6) Manually update the release date in ACIS (likely requiring warden and/or judicial approval, but idk).
6a) Since ACIS now has the appropriate release date, the inmate will be processed for release now (if the date has passed) or as they normally would be.
6b) Remove inmate's name from list unless currently enrolled in early release programming, in which case they are moved to the bottom of the queue.
7) Lather, rinse, repeat.
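The manual steps above could be sketched roughly like this (field names, dates, and the one-credit-equals-one-day conversion are assumptions; the real process involves human sign-off at step 6):

```python
# Sketch of the manual process described above. Illustrative only.
from datetime import date, timedelta

def apply_extra_credits(release_date: date, extra_credit_days: int) -> date:
    # steps 3-5: convert earned credits into days deducted from release date
    return release_date - timedelta(days=extra_credit_days)

# step 1: compiled list of (inmate, original release date, extra credits)
queue = [("inmate_a", date(2022, 3, 1), 30),
         ("inmate_b", date(2021, 12, 1), 10)]
queue.sort(key=lambda rec: rec[1])                   # step 2: earliest first
for inmate_id, release, credits in queue:
    updated = apply_extra_credits(release, credits)  # steps 3-5
    # step 6: write the updated date back into ACIS (stubbed here)
```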
Being denied release because of a software error would be hellish for both an inmate and their loved ones... But because it doesn't seem like they have an actual list of people that should have already been released but haven't been because the software made a critical oversight, I don't think it fits the legislation as it exists today for false imprisonment. The tool is broken so they've switched to manual calculation until someone more important decides it's worth fixing.
If we add negligence to the false imprisonment statute, I'd agree wholeheartedly! But IA[very_much]NAL, so I'll confess I don't really know anything about anything.
See also: employment security sites, cannabis track and trace, driving license, etc.
Some of these bugs cause direct financial harm to citizens and this one is much worse!
Show me the test cases! Show me the code!!
Not arguing against it. State secrets are needed in some instances. Just pointing out that if you exempt something, there'll be people who'll construe as much as they can under that exemption. Is there any solution to that?
I agree though, it's a tough problem I haven't fully thought through. I can see an argument saying "well if a vulnerability was found and a violent felon/terrorist was released early, that would be _bad!_". Hell, a DMV appointment software could have a vulnerability allowing a drivers license to someone who then commits a terrorist act. I wouldn't put that past a politician to claim under "national security". Of course, as mentioned below, these vulnerabilities would probably be limited in scope if the devices are airgapped (which it better be!). But something tells me they likely aren't all airgapped.
But I genuinely hope that if such a thing were to happen, there would be more good eyes on it than bad ones. Personally I'd look at whatever was in my preferred language. Granted, it would be to learn from it, not to find vulnerabilities, but something tells me there are vulnerabilities in gov't systems even I know are bad.
That something must be kept secret does not mean the rationale for why it must be kept secret must also be. For example you don't need to tell me any secrets about how nuclear weapons are designed to convince me that nuclear weapon design software should not be open source. Even in cases where the devil is in the details and the discussion of whether something should be secret requires an understanding of those secrets, independent auditors with the proper qualifications and clearances can be appointed to validate the need for secrecy, and either they or the officials who appointed them can be publicly scrutinized.
Every system complicated enough to require decision making is open to potential abuse by the decision makers. The entire purpose of democratically elected leaders is to make sure those who would commit such abuses don't have the opportunity to do so for very long. If no one suffers any consequences for skirting a law, why even have laws to begin with?
I think it can be managed but it is a genuine concern nonetheless.
Granted, that doesn't make attack impossible, but it does make it very hard, especially when you disable all the USB ports and optical drives and communicate extreme consequences to any employee not following ITSEC rules.
Why? Because the spec for which the tests were written didn't include some contingency: for example, software that rigidly requires certain steps to happen and doesn't provide a human-controlled override.
There was an Ask HN question the other day where the poster asked if the software we are building is making the world a better place. There were hardly any replies at all. Is this because for the most part our efforts in producing software are actually doing the opposite? It certainly seems that way reading articles like this.
> Instead of fixing the bug, department sources said employees are attempting to identify qualifying inmates manually... But sources say the department isn’t even scratching the surface of the entire number of eligible inmates. “The only prisoners that are getting into programming are the squeaky wheels,” a source said, “the ones who already know they qualify or people who have family members on the outside advocating for them.”
> In the meantime, Lamoreaux confirmed the “data is being calculated manually and then entered into the system.” Department sources said this means “someone is sitting there crunching numbers with a calculator and interpreting how each of the new laws that have been passed would impact an inmate.” “It makes me sick,” one source said, noting that even the most diligent employees are capable of making math errors that could result in additional months or years in prison for an inmate. “What the hell are we doing here? People’s lives are at stake.”
Comments like yours seem to glorify a pre-software world filled with manual entry. The reality is that manual entry is even more error-prone, bias-prone, with more people falling through the cracks.
If nothing else, software can be uniformly applied at a mass scale, and audited for any and all bugs. And faulty software can be exposed through leaks like the above, to expose and fix systemic problems. Whereas a world of manual entry simply ignores vast numbers of errors and biases which are extremely hard to detect/prove, and even then, can simply be scapegoated with some unlucky individuals, without any effort to fix systemically.
Instead, it's one where computers do calculations but don't make decisions; and then humans look at those calculations and have a final say (and responsibility!) over inputting a decision into the computer in response to the calculations the computer did, plus any other qualitative raw data factors that are human-legible but machine-illegible (e.g. the "special requests" field on your pizza order.)
Governments already know how to design human-computer systems this way; that knowledge is just not evenly distributed. This is, for example, how military drone software works: the robot computes a target lock and says "I can shoot that if you tell me to"; the human operator makes the decision of whether to grant authorization to shoot; the robot, with authorization, then computes when is best to shoot, and shoots at the optimal time (unless authorization is revoked before that happens.) A human operator somewhere nevertheless bears final responsibility for each shot fired. The human is in command of the software, just as they would be in command of a platoon of infantrymen.
You know policy/mechanism separation? For bureaucratic processes, mechanism is generally fine to automate 100%. But, at the point where policy is computed, you can gain a lot by ensuring that the computed policy goes through a final predicate-function workflow-step defined as "show a human my work and my proposed decision, and then return their decision."
Or, have the computer make decisions when there aren't any "special requests" fields to look at, and have outlier configurations routed to humans. Humans shouldn't need to make every decision in a high-volume system. Computers think in binary, but your design doesn't have to.
I have a great example that just happened to me yesterday.
• I signed up for a meal-kit service. They attempted to deliver a meal-kit to me. They failed. Repeatedly. Multiple weeks of “your kit went missing, and we’re sorry, and we’ve refunded you.”
• Why? The service apparently does their logistics via FedEx ground, though they didn’t mention this anywhere. So, FedEx failed to deliver to me.
• Why? Because the meal-kit service wants the package delivered on a Saturday, but FedEx thinks they’re delivering to a business address and that the business isn’t open until Monday, so they didn’t even try to deliver the package, until the food inside went bad.
• Why did FedEx think this? Well, now we get to the point where a computer followed a rule. See, FedEx is usually really bad at delivering to my apartment building. They don’t even bother to call the buzzer code I have in Address Line 2 of my mailing address, instead sticking a “no buzzer number” slip on the window and making me take a 50min ride on public transit to pick up my package from their depot. But FedEx has this thing called “FedEx Delivery Manager”, which you can use to set up redirect rules, e.g. “if something would go to my apartment, instead send it to pick-up point A [that is actually pretty inconvenient to me and has bad hours, but isn’t nearly as inconvenient to go to as the depot itself].”
I set such a redirect rule, because, for my situation, for most packages, it makes 100% sense. And, I thought, “if there’s ever a case where someone’s shipping something special to me via FedEx, I’ll be able to know that long in advance, and turn off the redirect rule.” But I didn’t know about this shipment, because the meal-kit service never mentioned they were using FedEx as a logistics provider until it was too late.
Some computer within FedEx automatically applied the redirect rule, without any human supervising. Once applied, there was no way to revert the decision—the package was now classified as a delayed ground shipment, to be delivered on Monday. (Apparently, this is because the rule gets applied at point of send, as part of calculating the shipping price of the sender; and so undoing the redirect would retroactively require the sender to pay more for shipping.)
A supervising human in the redirect-rule pipeline would easily have intuited “this is a meal-kit, requiring immediate delivery. It is being delivered on a weekend. The redirect location is closed on weekends. Despite the redirect rule, the recipient very likely wants this to go to their house rather than to some random pick-up point that we can’t deliver to.”
You get me? You can’t teach a computer to see the “gestalt” of a situation like that. If you tried to come up with a sub-rule to handle just this situation, it’d likely cause more false negatives than true negatives, and so people wouldn’t get their redirect rules applied when they wanted them to be. But a human can look at this, and know exactly what implicit goal the pipeline of sender-to-recipient was trying to accomplish by sending this package; and so immediately know what they should actually do to accomplish the goal, rules be damned.
And if they don’t—if they’re not confident—as a human, their instinct will be to phone me and ask what my intent is! A computer’s “instinct”, meanwhile, when generating a low-confidence classification output, is to just still generate that output, unless the designer of the system has specifically foreseen that cases like this could come up in this part of the pipeline, and so has specifically designed the system to have an “unknown” output to its classification enum, such that the programmer responsible for setting up the classifier has something to emit there that’ll actually get taken up.
That's not necessary. You can teach a computer to recognize anomalies and route those to humans. Repeated failures are an obvious one.
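The "repeated failures" signal can be made concrete in a few lines. This is a hypothetical sketch — the function names, statuses, and threshold are invented for illustration, not anything a real carrier runs:

```python
def next_action(delivery_attempts: list, max_failures: int = 2) -> str:
    """Decide whether a shipment stays on the automated path or gets
    routed to a human, using one obvious anomaly signal: repeated
    failed delivery attempts.
    """
    failures = sum(1 for a in delivery_attempts if a == "failed")
    if failures >= max_failures:
        return "escalate_to_human"
    return "retry_automated"

print(next_action(["failed", "failed"]))     # escalate_to_human
print(next_action(["failed", "delivered"]))  # retry_automated
```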
> A computer’s “instinct”, meanwhile, when generating a low-confidence classification output, is to just still generate that output
That's a poorly designed system. Human failure.
If we could get pure-AI systems to be “confused by default” like humans are, such that they insist on emitting “unknown” classification-states whether you ask for them or not, they’d be a lot more like humans, and maybe I wouldn’t see humans as having such an advantage here.
Yes, I'm certain that training ML models to separately classify low-confidence outputs, and getting a human in the loop to handle these cases, is a well-known technique in ML-participant business workflow engine design. But I'm not talking about ML-participant business workflow engine design; I'm talking about the lower-level of raw ML-model architecture. I'm talking about adversarial systems component design here: trying to create ML model architectures which assume the business-workflow-engine designer is an idiot or malfeasant, and which force the designer to do the right thing whether they like it or not. (Because, well, look at most existing workflow systems. Is this design technique really as "well-known" as you say? It's certainly not universal; let alone considered part of the Engineering "duty and responsibility" of Systems Engineers—the things they, as Engineers, have to check for in order to sign off on the system; the things they'd be considered malfeasant Engineers if they forget about.)
What I'm saying is that it would be sensible to have models for which it is impossible to ask them to make a purely enumerative classification with no option for "I don't know" or "this seems like an exceptional category that I recognize, but where I haven't been trained well-enough to know what answer I should give about it." Models that automatically train "I don't know" states into themselves — or rather, where every high-confidence output state of the system "evolves out of" a base "I don't know" state, such that not just weird input, but also weird combinations of normal input that were unseen in the training data, result in "I don't know." (This is unlike current ML linear approximators, where you'll never see a model that is high-confidence about all the individual elements of something, but low-confidence about the combination of those elements. Your spam filtering engine should be confused the first time it sees GTUBE and the hacked-in algorithmic part of it says "1.0 confidence, that's spam." It should be confused by its own confidence in the face of no individual elements firing. You should have to train it that that's an allowed thing to happen—because in almost all other situations where that would happen, it'd be a bug!)
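A minimal sketch of what an "allowed to abstain" interface might look like — the names and the 0.8 threshold are invented for illustration, and a real version would need the architectural properties described above rather than a bolted-on cutoff:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str          # one of the trained classes, or "unknown"
    confidence: float   # the score that produced the decision

def classify_with_abstention(scores: dict, threshold: float = 0.8) -> Prediction:
    """Turn raw class scores into a decision that can refuse to decide.

    `scores` maps class name -> model score in [0, 1]. If no class
    clears the threshold, emit "unknown" instead of the least-bad guess.
    """
    label, conf = max(scores.items(), key=lambda kv: kv[1])
    if conf < threshold:
        return Prediction("unknown", conf)
    return Prediction(label, conf)

print(classify_with_abstention({"spam": 0.95, "ham": 0.05}))  # label "spam"
print(classify_with_abstention({"spam": 0.55, "ham": 0.45}))  # label "unknown"
```

Note this is exactly the bolt-on the comment argues against relying on: the abstention lives in the wrapper, not the model, so a workflow designer can still bypass it.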
Ideally, while I'm dreaming, the model itself would also have a sort of online pseudo-training where it is fed back the business-workflow process result of its outputs — not to learn from them, but rather to act as a self-check on the higher-level workflow process (like line-of-duty humans do!) where the model would "get upset" and refuse to operate further, if the higher-level process is treating the model's "I don't know" signals no differently than its high-confidence signals (i.e. if it's bucketing "I don't know" as if it meant the same as some specific category, 100% of the time.) Essentially, where the component-as-employee would "file a grievance" with the system. The idea would be that a systems designer literally could not create a workflow with such models as components, but avoid having an "exceptional situation handling" decision-maker component (whether that be a human, or another AI with different knowledge); just like the systems designer of a factory that employs real humans wouldn't be able to tell the humans to "shut up and do their jobs" with no ability to report exceptional cases to a supervisor, without that becoming a grievance.
When designing a system with humans as components, you're forced to take into account that the humans won't do their jobs unless they can bubble up issues. Ideally, IMHO, ML models for use in business-process workflow automation would have the same property. You shouldn't be able to tell the model to "shut up and decide."
(And while a systems designer could be bullheaded and just switch to a simpler ML architecture that never "refuses to decide", if we had these hypothetical "moody" ML models, we could always then do what we do for civil engineering: building codes, government inspectors, etc. It's hard/impractical to check a whole business rules engine for exhaustive human-in-the-loop conditions; but it's easy/practical enough to just check that all the ML models in the system have architectures that force human-in-the-loop conditions.)
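One toy way to picture the "file a grievance" idea — a wrapper (all names hypothetical) that watches how the surrounding workflow resolves its "unknown" outputs, and goes on strike if they are always bucketed into the same concrete category:

```python
class GrievingClassifier:
    """Wraps a classify function and watches what the workflow does with
    its "unknown" outputs. If every recent "unknown" gets resolved
    downstream to the same concrete category, the wrapper concludes it is
    being told to "shut up and decide" and refuses to produce output.
    """
    def __init__(self, classify, min_observations: int = 10):
        self.classify = classify
        self.min_observations = min_observations
        self.unknown_resolutions = []
        self.on_strike = False

    def predict(self, x):
        if self.on_strike:
            raise RuntimeError(
                "grievance filed: 'unknown' outputs are being ignored")
        return self.classify(x)

    def report_outcome(self, prediction, final_category):
        # Feedback from the workflow: what did this case end up as?
        if prediction == "unknown":
            self.unknown_resolutions.append(final_category)
            recent = self.unknown_resolutions[-self.min_observations:]
            if len(recent) == self.min_observations and len(set(recent)) == 1:
                self.on_strike = True
```

The point of the sketch is only the shape of the contract: the model gets to observe downstream outcomes and can withdraw its labor, so the workflow designer cannot silently flatten "I don't know" into a default bucket forever.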
Email programs generally have a mechanism for reviewing email and changing the classification. I think your "pure-AI" phrase describes a system that doesn't have any mechanism for reviewing and adjusting the machine's classification. The fact that a spam message winds up in your inbox sometimes is probably that low-confidence human-in-the-loop process we've been talking about. I'm sure that the system errs on the side of classifying spam as ham, because the reverse is much worse. Why have two different interfaces for reading emails, one for reading known-ham and one for reviewing suspected-spam, when you can combine the two seamlessly?
Perhaps you've confused bad user interface decisions for bad machine learning system decisions. I'd like to see some kind of likelihood-spam indicator (which the ML system undoubtedly reports) rather than a binary spam-or-not, but the interface designer chose to arbitrarily threshold. I think in this case you should blame the user interface designer for thinking that people are stupid and can't handle non-binary classifications. We're all hip to "they" these days.
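The non-binary interface described above could be as simple as three buckets driven by the continuous score; the thresholds and destination names here are made up:

```python
def route_email(spam_score: float, low: float = 0.3, high: float = 0.9) -> str:
    """Map a continuous spam likelihood onto three destinations instead
    of a binary spam-or-not. The middle band surfaces the model's
    uncertainty to the user rather than hiding it behind a threshold.
    """
    if spam_score >= high:
        return "spam_folder"
    if spam_score <= low:
        return "inbox"
    return "inbox_with_warning"

print(route_email(0.95))  # spam_folder
print(route_email(0.50))  # inbox_with_warning
```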
You won't get this though. If the machines are the only ones capable of making the calculations with less error than a human, then a human can only validate higher-level criteria. Things like "responsibility" and "accountability" become very vague words in these scenarios, so be specific.
A human should be able to trace calculations software makes through auditing. The software will need to be good at indicating what needs auditing and what doesn't for the sake of time and effort. You'll probably also need a way for inmates to start an auditing process.
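A sketch of what an audit-friendly calculation might look like, using the old one-credit-day-per-six-served rule as a stand-in. All the rule details are illustrative, not Arizona's actual statute; the point is that every step is written down so a human (or an inmate's advocate) can retrace the result:

```python
def earned_release_days(days_served: int, offense_eligible: bool,
                        infraction_days: int, audit_log: list) -> int:
    """Hypothetical earned-release credit: 1 day per 6 served, reduced
    by infraction penalties, with every step recorded for auditing."""
    audit_log.append(f"input: days_served={days_served}, "
                     f"offense_eligible={offense_eligible}, "
                     f"infraction_days={infraction_days}")
    if not offense_eligible:
        audit_log.append("offense class not eligible -> 0 credit days")
        return 0
    base = days_served // 6
    audit_log.append(f"base credit: {days_served} // 6 = {base}")
    credit = max(0, base - infraction_days)
    audit_log.append(f"after infractions: max(0, {base} - {infraction_days}) "
                     f"= {credit}")
    return credit

log = []
print(earned_release_days(600, True, 5, log))  # 95
for line in log:
    print(line)
```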
Usually, in any case where a bureaucracy could generate a Kafkaesque nightmare scenario, just forcing a human being to actually decide whether to implement the computer's decision or not (rather than having their job be "do whatever the computer tells you"), will remove the Kafkaesque-ness. (This is, in fact, the whole reason we have courts: to put a human—a judge—in the critical path for applying criminal law to people.)
Let's be specific who you're comparing a judge to though. A guard, social worker, or bureaucrat with the guard being most likely. A guard probably has a lot of things to do on any given day, administrative exercises would only be part of them. The same could be said of a social worker. This is why I cautioned against making someone who is likely underpaid and doesn't have much time capital "responsible" for something as important as how long someone stays in the system.
I think that the pre-software world was quite bias-prone and extremely expensive for large processing jobs like this. The question is how this system was allowed to transition from the expensive manually managed system that used to be in place to the automatic software driven system that is replacing it at such a cut-rate that gigantic bugs were allowed to sneak in.
It appears this software is primarily used by the state government, so why was such a poor replacement allowed as a substitute for the working manual process?
Also, the number of bugs this software has accumulated since Nov 2019 (14000) is astounding enough that I assume it's counting incidents - that's a fair way to go since these are folks' lives, but I'd be curious to know just how bug laden this software actually is.
Although there is another factor here - this specific release program was a rather late feature addition that may not have been covered in the original contract with ACIS since the bill was only signed into law two months before the software was rolled out.
Or we did, but then used the resulting easier-to-learn / easier-to-write languages exclusively for web dev, and further specialized them.
There's a mind-bogglingly huge chasm of simple business data processing software that has no performance requirements & no need to be written in an impenetrable language.
Any one of the employees there could probably tell you what should be done in each case, and it's an indictment of our profession that we haven't created a good language / system that lets them do so.
You can optimize along increase-developer-productivity or along increase-potential-developer-population. We chose the former.
I have to ask. How could it be any different?
The vast majority (all?) of the languages are made by devs. Devs work harder and produce better code when they're working on something they want to use.
And the mainstream corporate-sponsored languages (Java, C#, Go) all seem to have started with groups of devs that really didn't want to use C++, which provides roughly the same incentives.
The kind of drive needed to develop and maintain a solid language (to say nothing about an easy to work with language) kind of has to be a passion project, and people aren't generally able to choose what they're passionate about.
It doesn’t have to be. But when it’s subjected to the same incentives that produced this software and perpetuated its broken state, we should expect the result to be much the same.
When you pull back and try to look at it with fresh eyes, our prison system is abjectly terrifying. It’s designed to funnel wealth to private entities, not to implement justice or rehabilitate criminals or whatever other worthy goal(s) you might imagine for it. This story (as horrifying as it is just by itself) is only one little corner of the monolithic perversity of the system as a whole, and the executive powers involved in steering that system are about as close to evil as you can find in the real world.
The whole thing needs to be torn down and rebuilt. As long as it exists, it puts the lie to our claim of being a society that values freedom and justice.
Circling back, I guess the point is that the ideas about how to do software in your last paragraph have no chance of being implemented in the system as it currently exists. To fix “systemic problems”, we will have to aim a lot higher with a much bigger gun.
We can no longer afford to partition the people who understand/use business logic from the people who turn it into code and maintain that code. Period. It's ridiculous and endemic at this point. This problem permeates virtually every large organization in existence; public or private.
It's partly an issue of education, partly an issue of organizational structuring, and partly an issue of accessibility of technologies. But the sum of these parts has become entirely unacceptable in the year 2021.
The inmates in this article would be released immediately after the code-law is implemented; you could apply new tax laws (e.g. as a config file) to your accounting software.
Why maintain an obfuscated legal text when you need it in software anyway?
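The "tax law as config" idea in miniature: the statute ships as a data table and the engine stays generic, so a new law is a data change rather than a rewrite. The bracket numbers below are purely illustrative:

```python
# Hypothetical bracket table, shipped as data rather than code.
# Each entry: (income floor, marginal rate above that floor).
BRACKETS = [(0, 0.10), (9_950, 0.12), (40_525, 0.22)]

def tax_owed(income: float, brackets=BRACKETS) -> float:
    """Apply a marginal-rate bracket table to an income figure."""
    owed = 0.0
    for i, (floor, rate) in enumerate(brackets):
        ceiling = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        owed += max(0.0, min(income, ceiling) - floor) * rate
    return owed

print(tax_owed(10_000))  # 1001.0  (9950 * 0.10 + 50 * 0.12)
```

Swapping in next year's law would mean replacing `BRACKETS`, not redeploying the engine — which is the whole appeal of publishing the law in machine-readable form in the first place.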
Speaking as a professional who works with data systems: you're more likely to have a database with bad data in it than not.
For every piece of software that can directly and materially harm someone's life like this, there should be a chain of responsibility. And within that chain, there should be legal recourse and, in most cases, penal consequences, especially in the case of inadequate software quality/testing/validation, should the software fail to perform its task correctly. Bonus side effect, software quality will go up across the board in the industry.
While I do agree that making software better/more reliable is a good goal, I believe we would be better off making the system as a whole more robust; the system that includes humans. For every situation where a piece of software has control of something that affects society (individual, group, etc), there should always be a clear and direct means of appealing / pushing back on the decision that was made. Those means should involve a human reviewing the information and making a decision based on that information, not on what the computer said. There's thread after thread of us saying the exact same thing about companies like Google and Facebook; it should apply as a general rule.
You don't hear anyone saying we should throw out finite-element analyses and other computational verification methods when designing bridges because bridges can never be perfectly secure. Yet that is exactly the sentiment I often hear on software.
Now think about the state of US infrastructure. Does it inspire confidence for the future?
I don't know about infrastructure in the US. I don't live there. I'm happy with the infrastructure in west Europe though. I wish that much care was put in to the software I use every day.
That’s one of the reasons I think Excel (and tools like Notion) are so popular - the people on the ground can learn to express themselves in the full context of the tools. I think this sort of software is far more important than we give it credit for. (It’s an invisible problem to us, because we can change the software.)
Like all our governance mechanisms have a built-in system of constant change.
I’m not saying it can’t happen, but it would be very unusual circumstances, especially since there’s usually an operator of the software sitting between the developer and the person harmed by the software.
Or at least a huge share of that burden needs to be on the client so that they define and then test and control the SW they receive properly.
The problems with the software sound like typical big software project problems. Trying to cover a huge breadth of use cases with lots of very important tiny details and released in a big bang (one migration). It sounds like more of a project mgmt problem than a software problem to me.
But maybe I am just a hammer and see nails everywhere.
It seems keeping inmates longer pays better than releasing them on time.
More people working with a gun to their head. I'd rather the gun be pointed at the person who already has a gun pointed at me, instead of both barrels facing in my direction.
If I'm (or my company is) personally on the hook for bugs, then I'm going to adopt a NASA-like software quality regimen, pushing up the cost of the product.
Every single part of the software stack below me, from hardware, OS, compiler toolchain, disavows responsibility so if I have to absorb all the risk, the product is going to be mind bogglingly expensive.
We're not talking about the newest social media hype. This software actually matters, especially since today most of these bureaucratic processes can't be done without it.
You see this in every topic.
Every "muh pride in muh trade" person says something like this about the relevant trade but the fact of the matter is that the world runs on off-brand duct tape, harbor freight tools, walmart jeans, economy tires, and all sorts of other "value" solutions and the race to the bottom is what has given us much of the modern world that we take for granted.
A balance needs to be struck. And it generally needs to be struck further toward the "quickly and cheaply build it like crap but make it easy to override or reset" portion of the available solution space than anyone pontificating about quality on the internet will readily admit.
This is a level of accountability that basically every other field of engineering is held to, and they've all risen to the challenge and left the "off-brand duct tape" behind.
Even within programming, planes don't fall out of the sky daily, so I feel safe assuming the aerospace programmers are comfortable working with a high degree of responsibility. High speed traders are dealing with million-dollar stakes and a single mistake can make the news. I'd expect they've got a very accountable culture where people get fired when that happens.
There are costs, yes, but there's also a cost to keeping 733 people illegally imprisoned - we're talking two man-years of people's lives lost every DAY this goes on.
If software developers are held responsible for the software then expect costs to multiply. Nobody would directly sell you software either - they'd sell you a hardware and software bundle that you must use exactly as the developers say. If you input a value that's out of bounds then that's on you. The software also won't get updates and it will run on 20 year old hardware. That's not too dissimilar to what we have in aerospace, right? And developers aren't even held responsible there! It's the companies, so expect it to be worse than even that.
When it comes to critical systems, I think it's fair to say that the engineers who build them are the only ones who can fully understand the risk.
This is the point behind accreditation. It forces the supplier to maintain a minimum bar for services to protect the reputation of the industry.
Before a bridge, house or even patio deck with a foundation is used a safety inspector needs to give approval.
Yes we do. People on the internet might not but look at the formal documentation that goes with any bridge plans. It will talk about factors of safety, various loads, environmental conditions and establish a set of constraints outside of which the bridge is not expected to perform as advertised.
> High speed traders are dealing with million-dollar stakes and a single mistake can make the news
It's really easy to put HFT on a pedestal when you can't inspect it up close, but I assure you that for every Citadel and P72 there is half a dozen firms with sloppy software that goes absolutely crazy if non-ideal but foreseeable things happen. These people are making money hand over fist (kind of) by building to the minimum. There's one firm I want to name because of how much everything they have is held together with duct tape but they're nice guys so I won't.
So very, very true.
The other option is to accept that we have mediocre software that creates a number of problems we're willing to live with; NO, not when people's lives are at stake.
The answer is human oversight.
Because their constituents want people to be punished and if the inmates have to suffer a little extra so be it, "they shouldn't have committed a crime."
Our society is severely lacking in empathy.
No, you know how to blame people and punish people, but that doesn't mean you know how to deliver custom bespoke software for a price that the various government agencies can afford which doesn't have bugs that severely hurt peoples lives.
In fact, punishing people is not going to accomplish that.
That's the problem with a legislature that thinks it can pass any law it wants - "let's take into account this new variable X that our software has no way of collecting or measuring" - without looking at the feasibility of actually implementing the law given the infrastructure available, without approving a corresponding budget for software upgrades to actually enact it, and without taking into account how much time it would take to write, test, deploy, and then train people to use the new software. Instead they issue streams of mandates like Emperor Norton and expect the mandates to materialize into existence like the morning dew. And if said morning dew does not appear, then we can punish and sue the people in charge when they tell us there is no way they can do what we are asking.
Of course there is blame on the prison leadership for covering things up and that leadership should be fired, but you can punish and sue people all day long and it's not going to result in any good code being written. Punish enough people, and it will just result in the Law being repealed.
The problem with this type of bespoke code is that it has exactly 1 customer, so it's going to be horrendously expensive while also being buggy and quickly thrown together compared to software whose development costs are leveraged over millions of customers. And then what happens next year when some crusader decides that they need to take some other new variable into account? Constantly changing requirements, underspecified projects, one-off projects whose schedules are impossible to estimate, and cash strapped local governments. Yeah, that's a recipe for success.
This is why everyone hates enterprise software, but even enterprise software has tens of thousands of customers. Bespoke software for the Arizona prison system -- forget it.
Wikipedia says, "Under common law, false imprisonment is both a crime and a tort".
The “new” systems are usually aping the old system behavior. In one case, I ran into a system where some company converted COBOL transactions into Java with some sort of automated tool to put the legacy system “on the internet”.
And don't tell me you can't buy two CRUD applications for 24 million dollars. It's a silly amount of money for such a buggy application.
They aren't cowardly; they are responding rationally to a constituency that hates "criminals". Prioritizing fixing discriminatory systems (such as this software, or "stop and frisk", or the death penalty) is bad electoral politics for "tough on crime" politicians.
But in most parts of the world these government contracts are intentionally made ambiguous for many reasons, including corruption and the incompetence of the people writing requirements.
>“It was Thanksgiving weekend,” one source recalled. “We were killing ourselves working on it, but every person associated with the software rollout begged (Deputy Director) Profiri not to go live.”
I find the government "requirements" process tends to create situations like this. Rather than build flexible software that puts some degree of trust in the person using it, they tend to overspecify the current bureaucratic process. In many cases, the person pushing for the software is looking to use software to enforce bureaucratic control that they have been unable to otherwise exercise, with the effect of the people the project initiator wants to use the software simply working around it. They then institute all sorts of punishments and controls to ensure it must be used. This then results in the kind of insane situation we have here, where you can't do something perfectly legal because "computer says no".
This is frequently my observation as well. In the process of creating stricter control the bureaucrat increases the power of their bureaucracy while shifting the blame for any problems to a faceless entity.
> They then institute all sorts of punishments and controls to insure it must be used.
This leads me to one of my primary frustrations with the bureaucratization of our lives. Severe consequences are attached to low stakes situations and rational individuals who see the harm caused by the situation are rendered powerless to make changes.
You can see the process at work within this very thread -- "And within that chain, there should be legal recourse and, in most cases, penal consequences, especially in the case of inadequate software quality/testing/validation, should the software fail to perform its task correctly." (https://news.ycombinator.com/item?id=26228195)
People seem unable to imagine any way to improve things except by adding more and more legal consequences. We need to stop doing this!
Is bureaucracy like violence? If it isn't working you aren't using enough of it?
They are basically using software to preserve the problem to which they are the solution. i.e. the shirky principle
In a civil law system it's more likely achievable, but I'm quite sure the requirements aren't that clear cut judging from the text.
That’s in part why international business to business contracts almost always specify a common law jurisdiction as the required venue for any lawsuits.
First problem coming to my mind: do they have the budget to pay the software developers to add this new functionality to the software? Do they have to ask the money to someone else, maybe to the very politicians that changed the requirement?
Then when this is settled there are the usual problems of analysis and implementation. Probably also where to get that input that they didn't have before. It could be a large project. But 16 months, ouch.
Same goes for software requirements. Good requirements make the intent clear but allow implementers some flexibility. Specifying everything in minute detail is usually a recipe for disaster.
You are attacking the wrong target. It's the government that's broken. This kind of outrage can happen just as easily with pencil and paper. The root cause is the lack of accountability and desire to make the government function better.
I'll note that this isn't the first time that people have said "well its the algorithm" when they were responsible. The example that springs to mind is bail risk assessments. You're very correct in that there are people making real decisions that are very cruel here. The machines give them something to hide behind.
Right, in the same way knives are used to rob people.
This is not a new problem: an organization strategically builds an unmanageable bureacracy and then profits off the issue while claiming incompetence.
Computers just make said bureaucracy cheaper to operate.
The fact people are not asking that is worrying. I understand why the system was not designed to do something that happened later (even if it could have been reasonably foreseen) but the fact that it was implemented with no override is really the scandal.
I don't know whether this comes down to an amount of power vested in a Governor that means the rest of the organisation can't say, "sorry Guv, but we can't do this because the software wasn't written to". If TV is to be believed, Governors want things done yesterday and leave you to worry about the problems.
This right here is the difference between conventional engineering disciplines where designs require a Stamp from an Engineer of Record who takes on personal responsibility in the event of design failures vs. the current discipline of software engineering.
There's a big difference between a software developer and a software engineer, and I think that difference should be codified with a licensure and a stamp like it is in every other engineering field in the states.
Software like this ought to require a stamp.
A decent analogy is the environmental work I've done. When we come up with solutions and mitigations to environmental problems, like software, we can't always predict the result because of the complexities involved. So we stamp a design, but we, or the agencies responsible for allowing the project often specify additional monitoring or other stipulations with very specific performance guidelines. It's a flexible system and possible to adapt to, but there are real consequences and fines when targets aren't met. When bad things happen, the specifics of what went wrong and why are very relevant and the engineer may be to blame, or the owner/site manager, or the contractor who did the work, or sometimes no one is to be blamed but the agencies are able to say: "Hey this isn't working and needs to be addressed, do it by this date or else."
In engineering, there's an enormous amount of public trust given to engineered designs. The engineer takes personal responsibility for that public trust that a building or bridge isn't going to fall down. And if you're negligent, it's a BFD.
Given the current level of public trust that we are putting into software systems, it's crazy to me that we haven't adopted a similar system.
I don't mean to understate the difficulty of being a hardware engineer, of any sort. But the whole reason we do things in software at all is because software is more flexible, and adding a new thing comes with less overhead. Hardware, while challenging, tends to follow similar sets of solutions to similar problems. There are only so many things a bridge, or a building, or even a CPU will be tasked to do.
Not saying this is impossible for software, either. Software gets built for man-rated tasks -- and jobs like this should be considered man-rated, because lives depend on it. That means it's going to cost more and take longer, especially when it's software of a kind nobody has ever built before. Who has experience in "software that releases prisoners?"
The reason they don't do that is, therefore, money. I doubt the prison system is willing to pay 10x as much for the software. The software was probably built by the lowest bidder technically acceptable, where "technically acceptable" was incredibly flexible because nobody really knew what had to be done.
That does happen with software a lot, frequently flying under the title of Compensating User Entity Controls (CUECs) or User Control Considerations (UCC). Basically the “here it is, don’t feed it after midnight and don’t let it get wet, and good luck” riders. Sounds like these problems happened way earlier in the lifecycle though - either the requirements were missed or the testing wasn't thorough enough.
This is why penalties are such an important part of the feedback loop. Obviously we can't go back in time and restore someone's phone privileges, but we can award monetary damages for the mistake.
Monetary damages alone won't discourage this behavior, though, as ultimately taxpayers foot the bill. There also must be some degree of accountability for those in charge of the system. Software can't become a tool for dodging accountability. Those in charge of implementing the software, providing the inputs, and managing the outputs must be held accountable for related mistakes.
> There was an Ask HN question the other day where the poster asked if the software we are building is making the world a better place. There were hardly any replies at all.
Few Ask HN questions get many responses. This is also a loaded question, as HN is notorious for nit-picking every response and putting too much emphasis on the downsides. For example, I know farmers who have increased their farm productivity massively using modern hardware and software. However, if I posted that it would inevitably draw concerns about replacing human jobs, right-to-repair issues, and other issues surrounding the space. The world is definitely better off for having more efficient and productive farming techniques, freeing most of us up to do things other than farm.
However, all new advances bring a different set of problems. Instead of trying to force everything into broad categories of better or worse I think it's important to acknowledge that technology makes the world different. Different is a combination of better and worse. The modern world has different problems than we did 100 years ago, but given the choice I wouldn't choose to roll back to the pre-computer era.
> It certainly seems that way reading articles like this.
Both news and social media have a strong bias toward articles that spark anger or outrage. For me, the whole world stops feeling like a dumpster fire when I disconnect from news and social media for a while. I'm looking forward to the post-COVID era where we can get back to interacting with each other in person rather than gathering around a constant stream of negative stories on social media.
Absolutely, and I agree that disconnecting can have positive benefits. On the other hand, at least for me personally, covid has disrupted the mechanisms that normally prevent in-depth observation. It has given me time to read books I normally would not have read because that time went to things like waiting for my car to warm up so I could get to work on time, commuting, going out to lunch with co-workers, and going out for drinks with co-workers, friends, and family.
What is described in the article is outrageous. My concerns about bureaucracy and software's role in enabling it, on the other hand, have developed separately because I have the time to consider it.
You're a software developer maintaining an eCommerce platform. On the one hand, your platform helps perpetuate low-margin, wasteful consumerism; on the other, your software enables small businesses to compete in the new online world.
Consumerism is bad, but commerce is as old as civilization and supports all of our lifestyles, so on a macro level you're in a tough spot. You're a talented developer putting your skills to work building something the community needs. I personally think that means you're doing good work in the context of your society, but it is difficult to say whether it's making the world a better place.
Social media is the same. On the one hand, it connects family and friends; on the other, it drives narcissism, consumerism, and misinformation.
You almost have to try to calculate the "Net Good" or "Net Bad" of a type of software and see how the cards fall. For social media, for example, I would suggest that it's currently in a "Net Bad" situation, causing more harm than good.
All government software should be open source and anyone should be able to investigate the code and submit bug reports, including inmates. If they know there is something wrong, they have a lot of time on their hands to learn a useful skill to fix these issues.
The government should then not be allowed to close a bug as wontfix or invalid without approval from other citizen watchdogs verifying if a bug report is legitimate.
What's the alternative, though? No human system I'm aware of can address this in any cost-effective manner. Linus's Law has been demonstrated to be one of the best human approaches. The only software approach I can think of that has addressed this is the fault-tolerant voting used in avionics (NASA, SpaceX, Boeing), where the cost of failure is so high that typically three independent implementations vote on the outcome. It's impractical to build every software system used in government to the same standard.
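The voting idea mentioned above is simple at its core: run several independent implementations and accept only a majority answer. Here's a minimal sketch of such a voter (the function name and error handling are my own, not from any avionics standard):

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over outputs from independent implementations.

    Returns the value agreed on by a strict majority; raises if the
    implementations disagree too badly to produce one, which the
    surrounding system must treat as a fault.
    """
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority: implementations disagree")
    return value

# Three independent implementations compute the same quantity;
# one is faulty, so the majority result wins.
print(tmr_vote([42, 42, 41]))  # -> 42
```

The cost is obvious: you pay for three implementations (ideally by separate teams, to avoid correlated bugs) to get one trustworthy answer, which is why this only makes sense where failure is catastrophic.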
Even in some of the better run software companies (e.g. Google and Facebook), it's incredibly challenging to achieve system correctness across the entire system. There are always tradeoffs to be made and even in the most critical systems there are limits to how to practically achieve correctness.
I work at one of these better-run companies, specifically on measuring and guaranteeing correctness and detecting correctness failures, both in production systems (via continuous probing) and as part of change management (integration-testing the change against the system in production). It's a really hard problem even for us, and we have far better engineers than you'll find working on the overwhelming majority of government software systems.
The only alternative (and the one I prefer) is to have less government involvement so that fewer systems are involved and you can have more eyeballs scrutinizing fewer systems. Government is far too big already, and there's too strong a desire to keep making it bigger before we adequately tame the complexity we've already built. The co-dependence between two codifications: (1) the law code and (2) software code, further contributes to ossification that is almost impossible to undo.
For a large segment of the US electorate, anything that inflicts pain on "bad people" is "making the world a better place".
If the software was causing prisoners to be released early, most US voters would be up in arms. But if they're being held too long, the calculus is different. In software terms, for many Americans, a "tough on crime" outcome is a "feature not a bug".
But as the complexity goes up and the number of these complex situations increases, are we reaching a point where we outstrip the amount of money, talent and experience our institutions would need to deliver solutions to successfully manage them?
With our resources and intelligence as a species being capped, it seems at some point this is inevitable.
Software does not have its own will. Software is only allowed to make decisions on our behalf because we let it do so.
I do agree that software has no will. It is a tool for facilitating our will for better or worse.
You can see in the film Brazil, from 35 years ago, that this was already a problem and concern even without modern software.
I think the most likely explanation is just that people didn't see the question or weren't interested in having the discussion. Most people believe the work they're doing is at worst neutral. A less likely candidate for the reason (but still more likely than your guess) is that people didn't want to be subjected to unfounded criticism of their work from people who don't know anything about it.
The second- and third-order consequences are that developers will insulate themselves behind licensing and proofs of practice like every other industry.
Until people actually advocate for real penalties for such harmful violations they don’t care. All their temporary whining and crying is just blowing smoke up our asses.
No, it's now all about "extracting value", "rent seeking", "subscriptions", "censorship", "monopoly" and "control". We got bribed by FAANG and this is the consequence.
It would be hard to see this in e.g. Scandinavian countries, where incarceration is seen as rehabilitative rather than punitive.
In the US, racial discrimination, free market extremism along with "tough on crime" laws have created unimaginably cruel systems; together with private prisons, the goal has been on cutting costs rather than rehabilitating prisoners. Software is just a tool to further that goal.
I brought up the Ask HN question mostly because I felt the lack of replies was a silent acknowledgement of the realities of most software endeavors: that they are not making the world a better place. Most aren't going out of their way to make it worse; probably, it isn't even a consideration.
Even if ideas like "the medium is the message" are only partially true, and then only partially applicable, that should give us pause when we try to cross out tools in our morality equations.
I wanted to say that the tools we use impact how we behave, and that impact alone can have moral consequences. The "medium is the message" link I posted earlier talks about how the tools we use influence how we behave and experience the world. For example, if I want to tell a story, I might film a movie, or write a book, or record a radio program, or tell the story around a campfire. Each of these tools will impact the tellers, the receivers, and the story itself. This logic also applies to other, more boring forms of communication. For example, using complicated software to decide who gets to leave jail makes a difference in how the "tellers" and the "receivers" experience the world. For comparison, what if the jailer recorded the information on a clipboard and a paper calendar?
Here's a thought: Why do we permit private companies to not hire ex-cons? Why do you just get to decide that you don't want to hold up your civic responsibilities like that? Who wants to work with someone who used to be a violent maniac, a sleazy thief, or worse?
I agree about cost-cutting measures and the criminal justice industrial complex. Still, we have bigger issues around crime and reconciliation that prevent us from making progress. To be honest, I have trouble seeing how we're going to change unless the average person can live with someone ruining their life, spending "only" a year or so in prison, and then moving on to be successful in a decent-paying job.
We still find that outrageous in the US, and it's going to be very tough to make progress that way. It's not about making something "a goal", especially in a country like the US, it's about convincing the wealthy and powerful class to do anything at all about it and stop making it worse.
I don't really think any justice system is actually putting rehabilitation first. Otherwise, you'd be sentenced to "until you get rehabilitated or no more than X years."
I don't think the example you gave is a flaw in the logic. We don't know when people are rehabilitated, and there are reasons for having minimum sentences, whether you agree with them or not.
The greater issue, in my opinion, is that we do a bunch of stuff to create this illusion of safety for the public (and businesses), and we're not willing to budge to give people a chance. We very reluctantly pass laws that clear some people's records if they're young enough and haven't done anything too terrible... and that's if you're lucky.
Disenfranchisement should be an exception for people who are likely highly dysfunctional, not a general-purpose solution. A new despised underclass in which to place a substantial percentage of the population seems dystopian to me.
Highly underrated movie, with ever more contemporary relevance.
If it costs the prison 10x normal costs to do calculations by hand.. well, that's the cost of business.
If that description is accurate, this doesn't meet the definition of a "software bug": the software was produced before that law was passed, and it hasn't been updated since.
The bug is in the process of not having a plan for updating the software in a timely way when laws change, and not having a requirement in place for overriding the calculations in the interim.
What if an inmate suddenly receives a pardon?
My wife had a citation that affected our liberties. The cop even knew that he didn't have probable cause but let the charge stand for more than a month. Nobody in the system cares. The magistrates and judges don't care, even though the new charge should be dismissed with prejudice over this and other rights violations. The supervisors and IA for the state police don't care and even cover some of the stuff up. The DA's office doesn't care either.
IT'S ALL ABOUT THE MONEY
FTA: "'Currently this calculation is not in ACIS at all,' the report states. 'ACIS can calculate 1 earned credit for every 6 days served, but this is a new calculation.'"
tl;dr: a new law was passed that allows a different credit schedule for days served, and the system hasn't been updated to make that calculation.
Of course, if there's money to be made in having a change-resistant system, well, that's a different story. YAGNIAYWPTTNFI (You ARE gonna need it, and you will pay through the nose for it) isn't quite as catchy, though.
A big (some might say forgotten) part of the procurement and development process is research - know your customer, know your market, know your niche.
In this case, that includes
- know how prisons work
- know how the system will be administered
- know how recent law changes might be handled by the system
- know how often prisoners need / deserve workarounds
- know how quickly law changes need to be reflected in code
Also, I'm not saying that the user interface needs to contain this. But we should always advocate for modular systems that are easy to maintain through addition, rather than through re-writes.
If it is genuinely difficult to add a new type of inmate release schedule, then that is poor planning.
But again, I'm not necessarily subscribing to that theory. Money is involved, so facts are in the eye of the sales team.
But I'm also not not subscribing to that theory.
This doesn’t violate YAGNI.
1. You’d have to know in advance what the scope of rule changes would be in order to implement the configuration system.
Human laws do not fit this constraint.
2. You’d also need a way to prove that the configuration system itself was sound.
3. You’d need a way to test configurations to make sure they executed as expected.
That is likely to be no better than just updating the codebase as requirements change, and there are many ways it could increase the cost.
From a planning perspective, though, it isn't the rule changes that are the problem; it's the types of input. For example, suppose you write a system that says "given crime class X, conviction year Y, and Z days already served, return f(X, Y, Z) days left." If you can build all your rules around that, great.
If you then say "oh, but for crimes a, b, and d, we need to take into account some measure of inmate behaviour," you now need to incorporate a whole new path from that data point to your decider, and all these functions need to accept this information and either use it or discard it, which might take some time. (I should insert a discussion of monads as a design pattern that would simplify this code and remove this excuse... but we'll assume ignorance of that for now.)
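One conventional way to blunt that pain (short of monads) is to bundle all the inputs into a single context record, so a new datapoint changes the record rather than every rule's signature. A sketch, where the crime classes, the 2000-day baseline, and the behaviour weighting are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class InmateContext:
    """All inputs the rules might need, in one record."""
    crime_class: str
    conviction_year: int
    days_served: int
    behaviour_score: int = 0  # new input, added without touching old rules

def days_left_rule_old(ctx: InmateContext) -> int:
    # Old-style rule: ignores the new field entirely.
    return max(0, 2000 - ctx.days_served)

def days_left_rule_behaviour(ctx: InmateContext) -> int:
    # New rule for crime classes a, b, and d: credits good behaviour.
    return max(0, 2000 - ctx.days_served - 10 * ctx.behaviour_score)

# Rule selection per crime class; only the affected classes changed.
RULES = {
    "a": days_left_rule_behaviour,
    "b": days_left_rule_behaviour,
    "c": days_left_rule_old,
    "d": days_left_rule_behaviour,
}

def days_left(ctx: InmateContext) -> int:
    return RULES[ctx.crime_class](ctx)

print(days_left(InmateContext("c", 2015, 500)))                     # -> 1500
print(days_left(InmateContext("a", 2015, 500, behaviour_score=5)))  # -> 1450
```

Existing rules keep working unchanged because they simply ignore fields they don't use; the cost is that the context record becomes a dumping ground if you're not disciplined about it.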
So the question is whether this is some new datapoint that they should have already expected. Maybe it wasn't - but if that's the case, then they definitely looked at the wrong subset of legal history.
Points 2 and 3 are moot in my opinion. Your configuration system is the basis for all your existing rules. They already have multiple rules, so they have a baseline for saying the configuration system is sound. If the configuration system fails to account for datapoints it has never been given, fair enough. But as I mention elsewhere, configurability doesn't necessarily mean end-user configurability. A well-written system should not be resistant to change.
I have worked on a few systems that would serve as prime examples.