Hacker News
Whistleblowers: Software keeping inmates in Arizona prisons beyond release dates (kjzz.org)
987 points by macg333 14 days ago | 417 comments

> “When they legislate these things, they need to be appropriating enough money to make sure they work,” a source said. They estimated fixing the SB1310 bug would take roughly 2,000 additional programming hours.

40 hours a week times 52 weeks is 2080 hours. Subtract a few weeks for vacations and holidays, and you get a little less than 2000 hours. So, basically, this is a little more than one programmer-year of effort if the estimate is in the right ballpark.

It's gross that the decision not to fix this carries an apparent implicit economic calculation that one programmer-year is more valuable than the freedom that is being denied to an unknown number of people whom society deems less important. (Granted the actual situation is more complicated and the state is constrained by their contract with the vendor, which we can reasonably guess is going to charge as much as they can contractually get away with rather than the programmer's actual salary cost.)

At least the Department of Corrections has assigned people to do the calculations manually. That's better, but it sounds like they just don't have enough people on it to keep up.

That is horse shit and we all know it. What bug takes 2k hours???? That's 250 work days. Jesus Christ, if I took that long to fix a bug, fire me. And yes, I'm also talking about time to test, write/fix unit tests, write/fix integration tests, releasing into production, and data conversion.

Look at the description of the issue. It's really less of a bug and more of a feature request, in the sense that the legislature changed the rules for how "earned release credits" could be calculated. All of the details are here: https://corrections.az.gov/sites/default/files/documents/PDF... .

Previously, it seems like there was a single standard, applied universally: 1 day of earned release credit for every 6 days served. The new rules have many more inputs, with lots of caveats: only certain offenses are eligible, and the inmate can not have been convicted of some other types of offenses, and the inmate must have completed some specific courses, and the inmate can't have previously been convicted of certain felonies.
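As a rough illustration of how many more inputs the new rules require, here's a toy Python sketch. The offense categories, felony types, and course names are all invented for illustration; the real criteria are in the ADC policy document linked above.

```python
from dataclasses import dataclass, field

@dataclass
class Inmate:
    offense: str
    prior_felonies: list = field(default_factory=list)
    completed_courses: set = field(default_factory=set)

# All three sets below are placeholders, not the actual statutory lists.
ELIGIBLE_OFFENSES = {"drug_possession"}
DISQUALIFYING_FELONIES = {"violent", "sexual"}
REQUIRED_COURSES = {"substance_abuse_program"}

def eligible_for_expanded_credits(inmate: Inmate) -> bool:
    # Every caveat must hold: eligible offense, no disqualifying
    # prior felony, and all required coursework completed.
    return (inmate.offense in ELIGIBLE_OFFENSES
            and not set(inmate.prior_felonies) & DISQUALIFYING_FELONIES
            and REQUIRED_COURSES <= inmate.completed_courses)
```

Even this toy version needs per-inmate offense history and course-completion data that the old one-size-fits-all rule never touched.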

The 2k hours may very well be excessive, and I don't care if it takes 20k hours, it means they should mothball their software and do it manually if that's the case, but just calling it a "bug" is misleading IMO.

Totally agree. Looking at this, it is substantially more complicated than it was before.

I'm guessing up til now, the days of time off earned was done in real time. Basically all you need is the day they entered, divide by 6 and truncate and there's your days. If there are infractions that cost days, you could still probably do it in a single SQL query.
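Something like this toy sketch of the old rule, assuming a simple days-served field (the function and field names are made up):

```python
def old_release_credit(days_served: int) -> int:
    """Old universal rule: 1 day of earned release credit
    per 6 full days served, truncated."""
    return days_served // 6
```

So an inmate who has served 600 days would have earned 100 credit days, all from a single input.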

This new system will have to know what kind of crime they committed, which might mean integrating with some byzantine government software from the 90s that looks like it's from the 70s. Previously they only really needed the prison system's records, which may or may not include their past crimes.

I'm guessing they're worried the integration won't be as easy as proposed. I wouldn't be surprised if it takes a month just to get the dev access to everything. I might even be surprised if it was that short.

It would sure get fixed if it was releasing prisoners early.

You'd have the private prisons and the prison guards union climbing up everyone's posteriors.

You don't know the codebase. Even people who know a codebase have a hard time giving accurate estimates.

> It's gross that the decision not to fix this carries an apparent implicit economic calculation

Spending money will remain economic decision until we can have government agencies fueled by the righteous indignation of their critics rather than having a line item added to their budget. Until you can convert that indignation into legal tender, agencies will remain subject to old fashioned accounting constraints.

The onerous budget item we are talking about here is a feature that multiplies the days sentenced by 0.7 if the inmate completes one checkbox item. You know, just to keep things in perspective.

Right, so that's just

* a UI change to check the box this information is on

* UI change to view whether the box is already checked or not

* a data model change to store the information

* business logic change that modifies some critical code that calculates when someone should be released

* security/access control change to decide who is allowed to check this box

* auditing logic to keep logs of this stuff

* possibly new UI/management code to add/remove members to this group of people who can check the box

* together with adding some procedures for tracking who completes what, documentation, training and auditing.

* Testing the whole thing.

Probably for piles of who knows what where the original author is long gone and the test code is no longer functional.

Just to keep things in perspective.

Ok, but you must be charging by the hour if you think this is anywhere near 2000 hours of work

1. Once you work for a big bureaucracy, you realize how slow this stuff is. In terms of how much actual work it takes, it depends very much on the condition of the test environment -- whether it has been maintained and how much effort is required to get back up and running. Plus, how much paperwork is involved in dealing with the government agency that contracts this out. I can easily see a situation in which you don't even accept jobs that are less than 2000 hours as it's not worth it.

2. What I described is not the actual fix, it is the temporary stopgap. The prison department isn't going to pay someone to click checkboxes all the time for tens of thousands of inmates each year. They will want this info to be set automatically -- e.g. an integration from whatever software system(s) are used to track completion of the coursework to this system, so you are not hiring another employee to sit and click all the time nor do you need to create reporting procedures to get that info into the hands of the person who is clicking. The appropriate design then requires automation, which will require security controls, and it's a pain. It could easily be more than a year of work, again depending on how many systems they need to integrate against, what types of sign off/controls are required, how much paperwork is required, etc.

For example, maybe the coursework has no software tracking, in which case they need to throw up a portal and have the people running the course fill out who did what, and then throw up another portal to have someone else review that.

Lots of stuff ends up being passed around by ftp or csv uploads. I've seen horror stories. So it really depends on how they plan to do this integration -- the manual button clicking was just an example of a least effort system that relied on a lot of manual labor, but perhaps this is not in their budget either.

I'm not saying it mightn't be 2000 hours of red tape, incompetence, and corruption. I could believe that easily enough. I'm saying it's not 2000 hours of actual work.

But that's actual work too.

Yes, it's a few weeks of coding at best. Maybe more if they need to integrate with some external systems for data ingestion.

But it's not a lean agile MVP for a startup. Redtape is just as much a deliverable as the function itself.

And 2000 hours for a 4 person team is a quarter of a year. Sure it might take less for a full stack ninja with 5 years of rockstar experience, but those folks are not available for some reason (Pink Floyd - Money!).

I’ve seen a year of dev time spent on less complexity than this a BUNCH of times in industry, and I’m not sure government is a more competent or efficient taskmaster than the Fortune 500.

Given that your post didn't mention the human rights being violated here once, I think it's you that needs to find the correct perspective.

Multiply by 0.7? Doesn’t that mean the sentence is shortened?

Yep! The problem here is that the software doesn't take into account extra release credits earned under a piece of legislation introduced in 2019. Currently, additional credits are earned and the software fails to apply them to the release date, so the parent is saying the fix is to multiply the time in the original sentence by 0.7 to get the new, reduced sentence.
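A toy comparison of the two rates, under the simplifying assumption that credits just subtract from the sentence (the real accrual rules are more involved):

```python
def days_to_serve_old(sentence_days: int) -> int:
    # Old rule: 1 day off per 6 days of the sentence.
    return sentence_days - sentence_days // 6

def days_to_serve_new(sentence_days: int) -> int:
    # 2019 expanded credits, for qualifying inmates:
    # release after roughly 70% of the sentence.
    return round(sentence_days * 0.7)
```

For a 1200-day sentence, that's 1000 days under the old rule versus 840 under the new one, so the software failing to apply the new rate can mean months of extra incarceration.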

I understand that there's a lot of work that could go into this sort of thing (mocks, accessibility, testing)... but is 2000 hours really a defensible number? It sounds like there's new per-inmate data and calculations for inmate eligibility and sentencing credit. But 2000 hours worth of work? Even sand-bagging it sounds like way too much.

Of course you're assuming they have anyone left at the company that understand the software. So many times the original team that wrote it is gone, and there is a plate of spaghetti left for the next group to figure out.

It's probably worse than that. Most likely the vendor subcontracted it out to an overseas contractor that hired mostly junior-level programmers who are now working elsewhere.

I was pretty incredulous until a cousin poster started talking about auditing, testing, red tape, out of date systems, and security.

Seems plausible it could balloon.

sounds much more like budget politics against the incompetent

> an apparent implicit economic calculation that one programmer-year is more valuable than the freedom that is being denied to an unknown number of people whom society deems less important

I’m surprised this doesn’t create a massive liability for the state.

I had the opportunity to participate in the design phase of an application to control the distribution of ostomy bags for a network of public hospitals. I was a bit shocked when I learned that the decision-makers were about to cut functionality intended to provide workarounds in case of system failures, so that employees could keep delivering bags upon request.

That would basically result in patients not getting their ostomy bags on time, and I can't even imagine what would follow afterwards. What the reactions of patients and their relatives would be, what levels of stress hospital employees would be subjected to, and so on.

I left the company some months after that, and I don't know what the final decision was, but they'd been warned.

Maybe one day some set of ethical standards will be considered non-functional requirements as important as robustness, security and others.

With technicians being responsible for warning their managers, managers being responsible for assessing risks and documenting their decisions, everything being made transparently and everybody being accountable.

Yeah, I'd assume this would be resolved real quick if the state (or the contractor responsible for the software) had to pay out, say, $100 per inmate per day that they improperly spent in jail past the end of their sentence.

That this problem is allowed to persist seems like an indication that the people in charge believe that prisoners have a low probability of successfully suing the state for damages.

Don’t they have liability? Unlawful imprisonment is a crime. I am amazed that the people can’t sue about this.

You'd think so, but then again it's not too surprising to find that not everyone enjoys the same rights in actuality that we all theoretically have. And rights can vanish if they aren't actively asserted and the people affected don't know about them. The article mentions that the people who have had the problem addressed were the people that complained, or who had family on the outside who knew what was going on.

If Arizona isn't acting quickly enough, I wonder if the federal government can get involved?

When I wanted to have compiled [1] financials, PriceWaterhouseCoopers told me to pick a recognized accounting system, then change the company's business processes to match that. They said absolutely not to go the other way, to try to customize any software to match our business.

I think about that every time I read about another government (or private!) organization that wastes tens or hundreds of millions of dollars (or euros or pounds) on custom software.

It seems like there should be 1, 2, or 3 DMV programs. The same for building codes, tax codes, etc. And prison software. You can be more like Massachusetts or Mississippi or Montana (hypothetical examples) but pick one and harmonize with it.

1: compiled is the lowest of 3 standards that outside accountants can do; "reviewed" is higher and "audited" is the highest. Even at the compiled level they mailed out postcards to a certain number of customers asking if they were customers over the past year and had spent this much money. It was fairly easy for the acquiring company's outside accountants to review PWC's work and bring it up to audited standard.

This appears to me to be a terrible idea. In effect you would have private companies writing the laws of the land. "I'm sorry California you can't change your laws because it doesn't fit into the three options we have available at our preferred software vendor". Seems like the tail wagging the dog.

Login.gov cribbed off of the UK’s digital office that built a similar system. I believe that’s what OP was alluding to.

How many unemployment systems, prisoner tracking systems, DMV systems do you need? These are common components across governments.

Example: Login.gov now supports local and state government partners. Your constituent IAM needs can now be met by a federal team that is efficient and competent, instead of every city and state reinventing the wheel (poorly and expensively).

> Example: Login.gov now supports local and state government partners. Congrats, your constituent IAM needs can now be met by a federal team that is efficient and competent.

Outside of functions that are joint state-federal to start with, states tend to treat the federal government as just another outside sovereign (and one whose Administration is intermittently actively politically hostile), which is worse than a private contractor in terms of being able to get them to uphold their end of a contract.

So, not someone you’d outsource to unless you were more concerned about having someone else to blame if things go wrong than actually being able to assure that things go right.

> How many unemployment systems, prisoner tracking systems, DMV systems do you need? These are common components across governments.

Mostly not, because while the names may be the same, the actual laws setting the system requirements tend to be radically different.

It could go that way. But the idea is that Massachusetts might charge EVs extra license fees because they want to replace the lost gas taxes whereas Montana and Mississippi wouldn't. Massachusetts already has different and higher pollution regulations (typically based on California's).

Other states might want to do the same, although the fees would probably differ. So the idea is that 10 or 15 states cluster around one solution for a department, 20 for another, 10 for a third and the rest go their own way. The states would have a lot of power in being able to replace working solution A with B or C. So there's 3 or 4 DMV vendors, there's 3 or 4 unemployment vendors, some for contact tracing (my state of Oregon still hasn't implemented the Google/Apple tracing), and so on.

The current situation is that you know a potential replacement will be late and over budget, you just don't know exactly how bad it will be. And Accenture and IBM like it that way and are very adept at persuading the decision makers that they're very special snowflakes and can't use an off-the-shelf solution.

> The current situation is that you know a potential replacement will be late and over budget, you just don't know exactly how bad it will be.

The solution to which is “stop doing big-bang replacements of nontrivial operational systems, instead of incremental ship-of-theseus replacements, using something like the strangler pattern.” And that applies to initially automating existing manual processes, too.
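A minimal sketch of the strangler pattern mentioned above: a facade routes each feature to the new system once it has been ported, and falls back to the legacy system everywhere else. Feature names and the callable interface are illustrative, not from any real system.

```python
# Features already ported to the replacement system (grows over time).
MIGRATED = {"release_date"}

def handle(feature: str, request: dict, legacy, modern):
    """Route a request to the new implementation if the feature
    has been migrated; otherwise keep using the legacy system."""
    if feature in MIGRATED:
        return modern(request)
    return legacy(request)
```

The legacy system keeps running in production the whole time; each migrated feature just moves one entry into the routing set, instead of betting everything on a big-bang cutover.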

You can solve this problem pretty easily by using free software projects.

> You can solve this problem pretty easily by using free software projects.

Yes, but not just using existing ones, but having public agency specialized software be developed in the open in free software projects, which could then be forked, remixed, or used as is with or without upstream contribution, as appropriate, rather than closed silos.

You forgot the closing /sarcasm tag

An alternative is to have the federal government offer the "federal choice" which states and local governments can choose to use instead of rolling out their own.

In states, counties and cities a lot of contracting basically has the purpose of pushing money to well connected people. They don’t want an efficient and cost effective solution.

I know somebody who audits municipalities. We did a graph that showed relations between different players. It’s basically just a big insider club of usually 20-40 people and families that give contracts to each other at the expense of the tax payer.

One of the things we've lost with the decline of newspapers is investigating and publicizing things like this.

Locally, the city decided to disconnect its open air reservoirs and replace them with reservoirs underground. It's a huge construction project and there are supposed to be close relationships between the water bureau and the prime contractor. Close as the #2 at the water bureau being married to a VP at the contractor. People retiring from the bureau and going to work for the company. And so on.

The city, county, and state legally have to publish notices in a paper that meets certain standards. If the city doesn't like its coverage, it can move its advertising. This probably didn't matter 20 years ago, but newspapers now want/need every cent they can get.

That's hard to do because all of those systems are intertwined. If you use the Montana DMV program, and you want people who get DUIs to have their driver's license suspended, now you have to use the Montana Penal System program. Except Montana's Penal System has a bunch of exceptions written into it for laws that can be either a misdemeanor or a felony, and they don't allow any time off for good behavior. So now are you going to adopt Montana's laws too? There's tax code stuff in there too, so I guess we're lumping in the Montana tax system as well.

I think the problem is that unlike our more notable branches, we don't hire experts in the field. I don't mean they're incompetent at technology, but that a problem like this really exists at the intersection of government and technology. We keep hiring general-purpose contractors to build things like this, and then we're shocked when it falls apart in the environment governments exist in.

We need companies that specialize in this intersection. Companies that can keep public sentiment in mind and build an architecture that's flexible in the places where society is. It's the same way that most of us in general purpose IT try to build systems that can adapt to changes in the IT landscape. Put it in Docker so we can run it on a cloud, on bare metal, on k8s and probably on whatever's next. Governments struggle to pivot like that due to funding (how do you argue for funding for features since you can't earn revenue?), and because a lot of it is legislated out of their control. Learning to read the public sentiment is just like us reading trends in a newsletter.

This advice appears to be based on deficiencies in programming, however. Programs operate on algorithms to process data. When a program or its algorithms fail to do so properly, the program is at fault.

In your cases you have items like: accounting, building codes, tax codes, automobile codes, etc.

While it makes sense to try and harmonize with the general policies, every state, every municipality, and every business is going to have special cases. Even software has edge cases for protocol behaviors.

What would be nicer, imho, is if all of these laws were written in domain specific languages that specify the law and then the software could just pick up the definitions signed into law. Lawyers as they are feel like a combination of legal interpreters, combined with a combination of being red/blue security team members depending on what they are doing.
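To make the DSL idea concrete, here's a toy sketch: the statute's conditions are encoded as declarative data that software evaluates directly against a set of facts. Every predicate and fact name here is invented; a real system would be generated from the text signed into law.

```python
# Hypothetical declarative encoding of an eligibility rule.
RULE_SB1310 = {
    "all_of": [
        ("offense_in", {"drug_possession"}),
        ("no_prior_felony_in", {"violent"}),
        ("completed", "treatment_program"),
    ]
}

def evaluate(rule: dict, facts: dict) -> bool:
    """Return True iff every condition in the rule holds for the facts."""
    checks = {
        "offense_in": lambda arg: facts["offense"] in arg,
        "no_prior_felony_in": lambda arg: not (set(facts["priors"]) & arg),
        "completed": lambda arg: arg in facts["courses"],
    }
    return all(checks[name](arg) for name, arg in rule["all_of"])
```

When the legislature amends the rule, the data structure changes rather than the engine, which is the whole appeal of the approach.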

My dad actually created a (failed) startup in the early '00s that modeled immigration law in Prolog, enabling the creation of legally accurate forms and resolving complex legal queries. It was a good idea, it just failed due to infighting and mismanagement.

are there popular languages for implementing these types of DSLs?

Sounds like a suitable task for formal verification languages like Coq and Isabelle.

FWIW this is actually (mostly) the case for building codes. The standard in the USA is the International Building Code (and other related code by the International Code Council), which each jurisdiction adopts into law and amends as needed for local conditions or practices. And these codes in turn reference other international standards specific to the knowledge domain.

> It seems like there should be 1, 2, or 3 DMV programs. The same for building codes, tax codes, etc. And prison software. You can be more like Massachusetts or Mississippi or Montana (hypothetical examples) but pick one and harmonize with it.

Kind of defeats the entire purpose of having states to start with.

If we want to make the US a centralized, unitary state, let's do that through the elected central government and not through deferral to IT contractors.

Wow. I know firsthand from family how this can severely damage someone's mental health in ways that may not be so obvious; it weighs heavily on someone every moment past the first hour they go past their release time, then the first day, followed by a variety of things that other inmates and guards will then take advantage of while one's defenses are down. The fun poked at them constantly by jealous inmates and cruel guards will also weigh hard on a human being. The Arizona penal system almost always puts you into very nasty and dangerous places of incarceration. frompdx made a statement that truly made my gut feel as if I was at the top of a roller coaster I did not want to get on in the first place.

If a prisoner knows he's past his release date, can't he contact his lawyer?

I'm worried this question might get written off. I would actually like to know the answer to this as well.

My immediate reaction is that either (1) it is possible, and the story is therefore more nuanced than it might appear at first glance, or (2) it is not possible, and this is an even more egregious problem.

The comment that spawned this thread is here and suggests phone privileges can also be taken away by mistake in this system: https://news.ycombinator.com/item?id=26227031

I know very little about the prison system, but surely the suspension of phone privileges cannot prevent an inmate from contacting his legal representation, right?

Probably not legally. Prisoners are required to be allowed "reasonable" access to their counsel. They might be able to suspend your ability to call your lawyer, as long as they don't stop you from writing them letters and meeting in person, for example. I'm not read up on the case law so I don't know specifics, but they can't totally prevent you from meeting your lawyer without serious constitutional issues.

How do you propose for an inmate to contact his representation?

Presumably by saying something like "I understand my phone privileges have been revoked, but I need to contact my lawyer", at which point the authority in question would recognize that this constitutes an exception to said revocation.

You know ... the obvious way.

> at which point the authority in question would recognize that this constitutes an exception to said revocation.

I think you misunderstand the nature of authority by thinking that they would recognize this as an exception.

> You know ... the obvious way.

Whats obvious here is that an incarcerated person only has the options that the carceral state permits to them.

> I think you misunderstand the nature of authority by thinking that they would recognize this as an exception.

This is not the exception, this is the rule.

> and to have the Assistance of Counsel for his defence.

Prisoners are constitutionally required to have "reasonable" access to counsel. I'm sure there's heaps of case law on what exactly is "reasonable", and there's always a risk that the guards won't allow it. If they don't, and you can prove it, you have a very good constitutional case. Kansas had to release like 70 inmates because it was discovered guards were recording inmate phone calls with counsel and releasing the tapes to prosecutors.

>Whats obvious here is that an incarcerated person only has the options that the carceral state permits to them.

Yes, thank you for the tautology. You'll note that this was exactly my question: does the state make such exceptions?

If you don't know, that's perfectly fine, but respectfully, your apparent mistrust of authority is hardly interesting to internet strangers. We're more interested in any factual elements you might have. Please, if you have any information specific to this case that we've overlooked, feel free to share. If not, hopefully someone else does!

Rather than the parent's comment coming across as mistrustful, yours comes across as apologetic for authority.

The idea that someone in authority, on being reminded that they're wrong, will turn around and do what's being asked by their subject is contrary to virtually every study on authority and, indeed, daily experience.

This entire discussion is in response to an article highlighting the prison system's refusal to update a system to provide inmates with their guaranteed rights.

Of course the state, in principle, makes such exceptions. The question is whether those exceptions are respected. And especially in light of the context in which we're debating (the article), yours and not the parent's seems the extraordinary claim.

> Yes, thank you for the tautology. You'll note that this was exactly my question: does the state make such exceptions?

Why would they?

> I'm a bit perplexed by your responses -- it seems like you either have information you aren't sharing, or like this subject makes you impassioned.

I’m sorry if I come across negatively. I am impassioned about this. People who are locked up have no recognized rights. They are at the mercy of the guards. Not the legislature, not the prison system, the guards.

Prisons are a clusterfuck, just like healthcare, education and so many big organized multi-level regulated and semi-privately provided services.

You would think heating prisons so inmates don't shiver, even in all the clothes they can pile on, would be obvious. Yet every year the reports are the same: some prisons can't manage to do it, plus they willfully tolerate it and force inmates to stay in their cell/room.

Nor do regulators care much, nor does the public.

Look at all the documented law enforcement abuses. Prison guards are also enforcing laws. But inmates rarely have protests well documented by journalists. Think about that for a second. All the incentives are set up in the most fucked up way too. If an inmate reports a problem with the guards and it doesn't get solved (or even if it gets solved), they are still stuck with the guards, who can and will make their life even more miserable. If they do it after release? Nobody cares; why didn't they speak up when it happened, blabla. So the system is pretty resistant to change (improvements).

Mail, or asking another prisoner with communication to the outside to ask their contact to call the lawyer and have them come visit.

If somebody is objecting to this comment based on it being wrong, I'd love to hear any corrections. Perhaps unfairly, I suspect it's more of an objection to the setup being described.

I’m not sure why you’re being downvoted. You did offer a reasonable answer. I think other people are reacting emotionally, the same way I did, because they are morally outraged at the perceived situation.

From reading the article it sounds like laws dictating when a prisoner is released are pretty complicated and many prisoners may not know if they should be released early based on good behavior or changing laws.

That's just my assumption. Remember, prisoners tend to be from less privileged backgrounds, and some may be very ignorant of how the law works or even functionally illiterate. So things that seem "obvious" to educated engineers may not be obvious to them.

Depends on if the computer has taken away phone privileges. I suppose a good lawyer would already know the release date and take action without being contacted? But I have no idea.

Most people who can afford good lawyers never spend a day in custody.

You assume people who are jailed have lawyers on retainer. They largely don't.

Sure, that still takes time to get through the system though. Probably months between scheduling and final release.

More commonly though the people wouldn't even know to contact their lawyer, because they are credited for time served pre-conviction.

That article just kept getting worse and worse. They mention assigning a penalty to the wrong inmate and they couldn't fix it.

All of a sudden that person could no longer make calls for 30 days, and they did nothing wrong to get that.

“Show me the incentives, I’ll show you the outcome.”

If corrections staff were held personally liable for these failures, or the local jurisdiction faced steep financial penalties, it wouldn’t happen. No liability, no responsibility.

> "Show me the incentives, I’ll show you the outcome."

That is spot on, and generalizes well.

"iot vendors make post-sales money if they collect data from their device"

"phone vendors make money if they bundle terrible apps with their phone"

"robocallers make lots of money, with historically no fines paid out for violations"

These are prison workers and you're asking them to run a social network (with certain constraints).

They wouldn't even know the first thing about how to hire someone capable of doing this. They'd have to hire a consultant to hire another consultant.

Corrections management is who I would consider the directly responsible party, not corrections ICs (to be clear, no scapegoats). The buck stops somewhere when we’re talking about infringing on someone’s right to freedom. Excuses are unacceptable.

The buck seems to stop at the computer/AI nowadays, in an alarmingly growing number of institutions and companies. And you can't punish a computer or hold an AI accountable. This seems to be an end state desired by people who were previously accountable.

It'd be nice if the rules said that if decisions are pushed off to "the computer", then whoever authorised the use of that computer/software is "responsible" for its errors.

In a situation where "computer says 'No!'" but the law says 'Yes', whoever signed off on the purchase/maintenance of that computer should be held as responsible as if they'd made that decision themselves.

There should be a very simple and obvious answer for any of these over-incarcerated inmates to the question "Who, as in which individual person, do I point an ambulance chasing no win no fee lawyer at for a compensation claim?"

Don't let the people in charge skip out on accountability. If their excuse is "the computer", well, they are after all the ones that put the computer system in place and hold responsibility for its outcomes. Take action against them personally (pay cuts, firings) and they or their successors will for sure be motivated to ensure the computer works properly going forward.

How did they manage to do it 20+ years ago?

Sounds like they did it with paper.

I think the general understanding of paper filing systems vs. computer systems is less specialized!

Isn't this clearly defined false imprisonment under Arizona law?

Here's the relevant statute:

13-1303. Unlawful imprisonment; classification; definition

A. A person commits unlawful imprisonment by knowingly restraining another person.

B. In any prosecution for unlawful imprisonment, it is a defense that:

1. The restraint was accomplished by a peace officer or detention officer acting in good faith in the lawful performance of his duty; or

2. The defendant is a relative of the person restrained and the defendant's sole intent is to assume lawful custody of that person and the restraint was accomplished without physical injury.

C. Unlawful imprisonment is a class 6 felony unless the victim is released voluntarily by the defendant without physical injury in a safe place before arrest in which case it is a class 1 misdemeanor.

D. For the purposes of this section, "detention officer" means a person other than an elected official who is employed by a county, city or town and who is responsible for the supervision, protection, care, custody or control of inmates in a county or municipal correctional institution. Detention officer does not include counselors or secretarial, clerical or professionally trained personnel.


Assumption being that a detention officer is not acting in good faith if they have a list of people who should no longer be detained under state law.

You’re assuming that agents of the government are expected to follow their own laws. Those laws are for you and me, not our beknighted public servants.

Their list is of people eligible for a program that would give them an early release, so unless the inmate enrolls the prison would be acting in good faith. Almost like the law was intentionally worded to limit their liability.

> Isn't this clearly defined false imprisonment under Arizona law?

> Assumption being that a detention officer is not acting in good faith if they have a list of people who should no longer be detained under state law.

I agree with your premise and assertion, but I'm not sure that's exactly what's happening here. I'd like to preface this by saying I absolutely believe there need to be ramifications; I'm just not sure that it fits "clearly defined false imprisonment." I think a category would have to be added to the false imprisonment statute for "negligence" for this to be considered false imprisonment and let me tell you why:

From what I can tell, this article is talking about a couple of massive issues, but the wrongful imprisonment bit is about a specific bug (the SB1310 bug) in ACIS: it can't calculate an updated release date for inmates who complete special programs that award additional release credits under an amendment signed into law in 2019. Since the system can't automatically update a release date for individuals who have completed this programming, they keep track of it manually. To me, the article doesn't read like they have a list of people who should be released but aren't being released because the software says so; from my very limited perspective, it reads like there are certain programs an inmate can complete to earn extra release credits, and since the system can't track these extra credits, the detention officers do it manually. I would imagine their manual process goes something like this:

1) Compile list of inmates that have earned extra release credits through the aforementioned release programming.

2) Select inmate from list, possibly in order of original release date, earliest first.

3) Calculate the amount of release credits they received from completion of the programming.

4) Calculate the total hours those credits equal.

5) Deduct hours from release date.

6) Manually update the release date in ACIS (likely requiring warden and/or judicial approval, but idk).

6a) Since ACIS now has the appropriate release date, the inmate will be processed for release now (if the date has passed) or as they normally would be.

6b) Remove inmate's name from list unless currently enrolled in early release programming, in which case they are moved to the bottom of the queue.

7) Lather, rinse, repeat.
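The manual process above boils down to a credit deduction. A rough sketch, where the credit rates and field names are assumptions for illustration, not ACIS internals or the actual statutory rates (the linked ADC document has the real caveats):

```python
from datetime import date, timedelta

# Illustrative sketch of the manual recalculation described above.
# Credit rates and names are assumed, not the real statutory values.
STANDARD_RATE = 6  # 1 day of credit per 6 days served (old universal rule)
PROGRAM_RATE = 3   # faster accrual for eligible program completers (assumed)

def recalculated_release(original_release: date, days_served: int,
                         eligible: bool, program_completed: bool) -> date:
    """Deduct earned release credits from the original release date."""
    rate = PROGRAM_RATE if (eligible and program_completed) else STANDARD_RATE
    credits = days_served // rate  # whole days of credit earned
    return original_release - timedelta(days=credits)

# An eligible inmate who served 600 days earns 200 days of credit.
print(recalculated_release(date(2022, 1, 1), 600, True, True))  # 2021-06-15
```

Even this toy version makes the point of step 6b: the calculation is mechanical, and every manual pass over it is a chance for an arithmetic slip.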

Being denied release because of a software error would be hellish for both an inmate and their loved ones... But because it doesn't seem like they have an actual list of people that should have already been released but haven't been because the software made a critical oversight, I don't think it fits the legislation as it exists today for false imprisonment. The tool is broken so they've switched to manual calculation until someone more important decides it's worth fixing.

If we add negligence to the false imprisonment statute, I'd agree wholeheartedly! But IA[very_much]NAL, so I'll confess I don't really know anything about anything.

EDIT: formatting

To color this even further: the hundreds of people who are illegally imprisoned are being held for drug or even just paraphernalia possession. The law that grants them credits explicitly excludes violent felons[1].

[1]: https://corrections.az.gov/sites/default/files/documents/PDF...

It's like gov systems don't even have test cases. They should, and they should be public. Why isn't software written for the public open source?

See also: employment security sites, cannabis track and trace, driving license, etc.

Some of these bugs cause direct financial harm to citizens and this one is much worse!

Show me the test cases! Show me the code!!

If my tax $ goes to it, it should have source available (excepting natsec). It would be nice to get some value out of it. If it's well written, I could learn how a large-scale project works. If not, I have something to petition and voice my concerns about, report vulns on, etc.

You exempt national security, and suddenly everything is national security. Look at the FISA “courts”.

Not arguing against it. State secrets are needed in some instances. Just pointing out that if you exempt something, there'll be people who'll construe as much as they can under that exemption. Is there any solution to that?

I suppose a congressional committee separate from the intelligence community that can (hopefully) objectively decide whether something will directly damage our national security.

I agree though, it's a tough problem I haven't fully thought through. I can see an argument saying "well if a vulnerability was found and a violent felon/terrorist was released early, that would be _bad!_". Hell, DMV appointment software could have a vulnerability allowing a driver's license to be issued to someone who then commits a terrorist act. I wouldn't put it past a politician to claim that under "national security". Of course, as mentioned below, these vulnerabilities would probably be limited in scope if the devices are airgapped (which they had better be!). But something tells me they likely aren't all airgapped.

But I genuinely hope that if such a thing were to happen, there would be more good eyes on it than bad ones. Personally I'd look at whatever was in my preferred language. Granted, it would be to learn from it, not to find vulnerabilities, but something tells me there are vulnerabilities in gov't systems even I know are bad.

At some point you need to hold people accountable when they defy the intent of the law.

That something must be kept secret does not mean the rationale for why it must be kept secret must also be. For example you don't need to tell me any secrets about how nuclear weapons are designed to convince me that nuclear weapon design software should not be open source. Even in cases where the devil is in the details and the discussion of whether something should be secret requires an understanding of those secrets, independent auditors with the proper qualifications and clearances can be appointed to validate the need for secrecy, and either they or the officials who appointed them can be publicly scrutinized.

Every system complicated enough to require decision making is open to potential abuse by the decision makers. The entire purpose of democratically elected leaders is to make sure those who would commit such abuses don't have the opportunity to do so for very long. If no one suffers any consequences for skirting a law, why even have laws to begin with?

Well, I can't show you the test cases and code, but the available requirements are pretty tough to go through:


I think there is still a genuine concern that open-source software allows bad people to find loopholes before the good people do. The last thing you want is someone finding a bug that allows a murderer to get released because the computer said so.

I think it can be managed but it is a genuine concern nonetheless.

Restrict access. Why does a prison management system need to be connected to a public network and be accessible to more than 20 or so authorized users? I worked on plenty of government systems using insecure software galore but it didn't really matter because we were air gapped and you needed to get through Fort Knox level physical security to get physical access to a terminal.

Granted, that doesn't make attack impossible, but it does make it very hard, especially when you disable all the USB ports and optical drives and socialize extreme consequences to any employees not following ITSEC rules.

I would much rather err on the side of releasing someone early instead of holding people longer.

In best cases, the test cases are good and pass... and yet such errors will still abound.

Why? Because the spec for which the tests were written didn't include some contingency, for example with software that rigidly requires certain steps to happen and doesn't provide a human-controlled override.

This is an outrage. It is also a perfect example of how software is used to create increasingly more elaborate and faceless bureaucracies that force individuals to spend more and more time contending with them. Somehow software has become the ultimate vehicle for bureaucratic violence. Software is simultaneously infallible and the perfect scapegoat. The inmate who lost their phone privileges for 30 days is an example. They did nothing wrong but the computer says so and nothing can be done. The computer is right in the sense that its decision cannot be undone, and solely to blame since no human can undo its edict or be held accountable, apparently. It is tragic and absurd.

There was an Ask HN question the other day where the poster asked if the software we are building is making the world a better place. There were hardly any replies at all. Is this because for the most part our efforts in producing software are actually doing the opposite? It certainly seems that way reading articles like this.

The following is very illuminating:

> Instead of fixing the bug, department sources said employees are attempting to identify qualifying inmates manually... But sources say the department isn’t even scratching the surface of the entire number of eligible inmates. “The only prisoners that are getting into programming are the squeaky wheels,” a source said, “the ones who already know they qualify or people who have family members on the outside advocating for them.”

> In the meantime, Lamoreaux confirmed the “data is being calculated manually and then entered into the system.” Department sources said this means “someone is sitting there crunching numbers with a calculator and interpreting how each of the new laws that have been passed would impact an inmate.” “It makes me sick,” one source said, noting that even the most diligent employees are capable of making math errors that could result in additional months or years in prison for an inmate. “What the hell are we doing here? People’s lives are at stake.”

Comments like yours seem to glorify a pre-software world filled with manual entry. The reality is that manual entry is even more error-prone, bias-prone, with more people falling through the cracks.

If nothing else, software can be uniformly applied at a mass scale, and audited for any and all bugs. And faulty software can be exposed through leaks like the above, to expose and fix systemic problems. Whereas a world of manual entry simply ignores vast numbers of errors and biases which are extremely hard to detect/prove, and even then, can simply be scapegoated with some unlucky individuals, without any effort to fix systemically.

The "right" bureaucratic system isn't one with humans doing calculations (which we're bad at); nor is it one where computers on their own make decisions (which they're bad/inflexible at.)

Instead, it's one where computers do calculations but don't make decisions; and then humans look at those calculations and have a final say (and responsibility!) over inputting a decision into the computer in response to the calculations the computer did, plus any other qualitative raw data factors that are human-legible but machine-illegible (e.g. the "special requests" field on your pizza order.)

Governments already know how to design human-computer systems this way; that knowledge is just not evenly distributed. This is, for example, how military drone software works: the robot computes a target lock and says "I can shoot that if you tell me to"; the human operator makes the decision of whether to grant authorization to shoot; the robot, with authorization, then computes when is best to shoot, and shoots at the optimal time (unless authorization is revoked before that happens.) A human operator somewhere nevertheless bears final responsibility for each shot fired. The human is in command of the software, just as they would be in command of a platoon of infantrymen.

You know policy/mechanism separation? For bureaucratic processes, mechanism is generally fine to automate 100%. But, at the point where policy is computed, you can gain a lot by ensuring that the computed policy goes through a final predicate-function workflow-step defined as "show a human my work and my proposed decision, and then return their decision."
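A minimal sketch of that final workflow step, with all names invented for illustration (no real system's API is implied): the computer shows its work and proposes, and a named human returns the decision.

```python
from dataclasses import dataclass

# Toy sketch of "computers calculate, humans decide".
# All names below are invented for illustration.

@dataclass
class Proposal:
    case_id: str
    computed_days_off: int
    rationale: str  # the machine "shows its work"

def finalize(proposal: Proposal, reviewer: str, approved: bool) -> dict:
    """The machine never acts on its own output; a human signs off."""
    return {
        "case_id": proposal.case_id,
        "days_off": proposal.computed_days_off if approved else 0,
        "responsible_human": reviewer,  # accountability attaches to a person
    }

p = Proposal("A-1234", 200, "completed eligible program; 600 days served")
record = finalize(p, reviewer="records.officer.7", approved=True)
```

The point of the `responsible_human` field is the "(and responsibility!)" part above: every output carries the name of the person who let it through.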

> humans look at those calculations and have a final say (and responsibility!) over inputting a decision into the computer in response to the calculations the computer did, plus any other qualitative raw data factors that are human-legible but machine-illegible (e.g. the "special requests" field on your pizza order.)

Or, have the computer make decisions when there aren't any "special requests" fields to look at, and have outlier configurations routed to humans. Humans shouldn't need to make every decision in a high-volume system. Computers think in binary, but your design doesn't have to.

It’s not just when there are “special requests.” (Or rather, it could be, if every entity in the system were able and expected to contribute “special request” inputs — but usually there’s no place for most entities to do this.)

I have a great example that just happened to me yesterday.

• I signed up for a meal-kit service. They attempted to deliver a meal-kit to me. They failed. Repeatedly. Multiple weeks of “your kit went missing, and we’re sorry, and we’ve refunded you.”

• Why? The service apparently does their logistics via FedEx ground, though they didn’t mention this anywhere. So, FedEx failed to deliver to me.

• Why? Because the meal-kit service wants the package delivered on a Saturday, but FedEx thinks they’re delivering to a business address and that the business isn’t open until Monday, so they didn’t even try to deliver the package, until the food inside went bad.

* Why did FedEx think this? Well, now we get to the point where a computer followed a rule. See, FedEx is usually really bad at delivering to my apartment building. They don’t even bother to call the buzzer code I have in Address Line 2 of my mailing address, instead sticking a “no buzzer number” slip on the window and making me take a 50min ride on public transit to pick up my package from their depot. But FedEx has this thing called “FedEx Delivery Manager”, which you can use to set up redirect rules, e.g. “if something would go to my apartment, instead send it to pick-up point A [that is actually pretty inconvenient to me and has bad hours, but isn’t nearly as inconvenient to go to as the depot itself].”

I set such a redirect rule, because, for my situation, for most packages, it makes 100% sense. And, I thought, “if there’s ever a case where someone’s shipping something special to me via FedEx, I’ll be able to know that long in advance, and turn off the redirect rule.” But I didn’t know about this shipment, because the meal-kit service never mentioned they were using FedEx as a logistics provider until it was too late.

Some computer within FedEx automatically applied the redirect rule, without any human supervising. Once applied, there was no way to revert the decision—the package was now classified as a delayed ground shipment, to be delivered on Monday. (Apparently, this is because the rule gets applied at point of send, as part of calculating the shipping price of the sender; and so undoing the redirect would retroactively require the sender to pay more for shipping.)

A supervising human in the redirect-rule pipeline would easily have intuited “this is a meal-kit, requiring immediate delivery. It is being delivered on a weekend. The redirect location is closed on weekends. Despite the redirect rule, the recipient very likely wants this to go to their house rather than to some random pick-up point that we can’t deliver to.”

You get me? You can’t teach a computer to see the “gestalt” of a situation like that. If you tried to come up with a sub-rule to handle just this situation, it’d likely cause more false negatives than true negatives, and so people wouldn’t get their redirect rules applied when they wanted them to be. But a human can look at this, and know exactly what implicit goal the pipeline of sender-to-recipient was trying to accomplish by sending this package; and so immediately know what they should actually do to accomplish the goal, rules be damned.

And if they don’t—if they’re not confident—as a human, their instinct will be to phone me and ask what my intent is! A computer’s “instinct”, meanwhile, when generating a low-confidence classification output, is to just still generate that output, unless the designer of the system has specifically foreseen that cases like this could come up in this part of the pipeline, and so has specifically designed the system to have an “unknown” output to its classification enum, such that the programmer responsible for setting up the classifier has something to emit there that’ll actually get taken up.

> You can’t teach a computer to see the “gestalt” of a situation like that.

That's not necessary. You can teach a computer to recognize anomalies and route those to humans. Repeated failures is an obvious one.

> A computer’s “instinct”, meanwhile, when generating a low-confidence classification output, is to just still generate that output

That's a poorly designed system. Human failure.

All systems are poorly designed. There is no perfect system. But the default failure state of AI not predictively accounting for a case is making a bad decision; while the failure state of a “cybernetic expert system” not predictively accounting for a case is stalling in confusion and asking for more input. Usually, stalling in confusion and asking for more input is exactly what we want. You don’t want an undertrained system to have false confidence.

If we could get pure-AI systems to be “confused by default” like humans are, such that they insist on emitting “unknown” classification-states whether you ask for them or not, they’d be a lot more like humans, and maybe I wouldn’t see humans as having such an advantage here.

I don't know what you mean by "pure-AI systems." I work in this field and have many times implemented a review in the loop, or a route for review. It's an old technique, predating computers.


A "pure-AI system" is a fully-autonomous ML expert system. For example, a spam classifier. In these systems, humans are never brought into the loop at decision-making time — instead, the model makes a decision, acts, and then humans have to deal with the consequences of "dumb" actions (e.g. by looking through their spam folders for false positives) — acting later to reverse the model's action, rather than the model pausing to allow the human to subsume it. This later reversal ("mark as not spam") may train the model; but the model still did a dumb thing at the time, that may have had lasting consequences ("sorry, I didn't get your message, it went to spam") that could have been avoided if the model itself could "choose to not act", emitting a "NULL" result that would crash any workflow-engine it's embedded within unless it gets subsumed by a non-NULL decision further up the chain.

Yes, I'm certain that training ML models to separately classify low-confidence outputs, and getting a human in the loop to handle these cases, is a well-known technique in ML-participant business workflow engine design. But I'm not talking about ML-participant business workflow engine design; I'm talking about the lower-level of raw ML-model architecture. I'm talking about adversarial systems component design here: trying to create ML model architectures which assume the business-workflow-engine designer is an idiot or malfeasant, and which force the designer to do the right thing whether they like it or not. (Because, well, look at most existing workflow systems. Is this design technique really as "well-known" as you say? It's certainly not universal; let alone considered part of the Engineering "duty and responsibility" of Systems Engineers—the things they, as Engineers, have to check for in order to sign off on the system; the things they'd be considered malfeasant Engineers if they forget about.)

What I'm saying is that it would be sensible to have models for which it is impossible to ask them to make a purely enumerative classification with no option for "I don't know" or "this seems like an exceptional category that I recognize, but where I haven't been trained well-enough to know what answer I should give about it." Models that automatically train "I don't know" states into themselves — or rather, where every high-confidence output state of the system "evolves out of" a base "I don't know" state, such that not just weird input, but also weird combinations of normal input that were unseen in the training data, result in "I don't know." (This is unlike current ML linear approximators, where you'll never see a model that is high-confidence about all the individual elements of something, but low-confidence about the combination of those elements. Your spam filtering engine should be confused the first time it sees GTUBE and the hacked-in algorithmic part of it says "1.0 confidence, that's spam." It should be confused by its own confidence in the face of no individual elements firing. You should have to train it that that's an allowed thing to happen—because in almost all other situations where that would happen, it'd be a bug!)

Ideally, while I'm dreaming, the model itself would also have a sort of online pseudo-training where it is fed back the business-workflow process result of its outputs — not to learn from them, but rather to act as a self-check on the higher-level workflow process (like line-of-duty humans do!) where the model would "get upset" and refuse to operate further, if the higher-level process is treating the model's "I don't know" signals no differently than its high-confidence signals (i.e. if it's bucketing "I don't know" as if it meant the same as some specific category, 100% of the time.) Essentially, where the component-as-employee would "file a grievance" with the system. The idea would be that a systems designer literally could not create a workflow with such models as components, but avoid having an "exceptional situation handling" decision-maker component (whether that be a human, or another AI with different knowledge); just like the systems designer of a factory that employs real humans wouldn't be able to tell the humans to "shut up and do their jobs" with no ability to report exceptional cases to a supervisor, without that becoming a grievance.

When designing a system with humans as components, you're forced to take into account that the humans won't do their jobs unless they can bubble up issues. Ideally, IMHO, ML models for use in business-process workflow automation would have the same property. You shouldn't be able to tell the model to "shut up and decide."

(And while a systems designer could be bullheaded and just switch to a simpler ML architecture that never "refuses to decide", if we had these hypothetical "moody" ML models, we could always then do what we do for civil engineering: building codes, government inspectors, etc. It's hard/impractical to check a whole business rules engine for exhaustive human-in-the-loop conditions; but it's easy/practical enough to just check that all the ML models in the system have architectures that force human-in-the-loop conditions.)
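As a toy illustration of the idea above, assuming a simple score-based classifier (nothing here reflects a real ML framework): the model is forced to emit UNKNOWN below a confidence threshold, and a crude audit "files a grievance" against any workflow that buckets every UNKNOWN into one fixed label.

```python
# Toy illustration of "confused by default": a classifier that must emit
# UNKNOWN at low confidence, plus a crude self-check on the workflow above
# it. Assumes simple per-label scores; no real ML architecture is implied.

UNKNOWN = "UNKNOWN"

def classify(scores: dict, threshold: float = 0.8) -> str:
    """Emit UNKNOWN rather than a low-confidence label."""
    label = max(scores, key=scores.get)
    return label if scores[label] >= threshold else UNKNOWN

def grievance(decisions: list) -> bool:
    """Flag a workflow that maps every UNKNOWN to the same final label,
    i.e. treats "I don't know" as just another category 100% of the time."""
    finals = {final for model, final in decisions if model == UNKNOWN}
    return len(finals) == 1

assert classify({"spam": 0.95, "ham": 0.05}) == "spam"
assert classify({"spam": 0.55, "ham": 0.45}) == UNKNOWN
assert grievance([(UNKNOWN, "spam"), (UNKNOWN, "spam")])          # suspicious
assert not grievance([(UNKNOWN, "spam"), (UNKNOWN, "human_review")])
```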

> humans have to deal with the consequences of "dumb" actions (e.g. by looking through their spam folders for false positives)

Email programs generally have a mechanism for reviewing email and changing the classification. I think your "pure-AI" phrase describes a system that doesn't have any mechanism for reviewing and adjusting the machine's classification. The fact that a spam message winds up in your inbox sometimes is probably that low-confidence human-in-the-loop process we've been talking about. I'm sure that the system errs on the side of classifying spam as ham, because the reverse is much worse. Why have two different interfaces for reading emails, one for reading known-ham and one for reviewing suspected-spam, when you can combine the two seamlessly?

Perhaps you've confused bad user interface decisions for bad machine learning system decisions. I'd like to see some kind of likelihood-spam indicator (which the ML system undoubtedly reports) rather than a binary spam-or-not, but the interface designer chose to arbitrarily threshold. I think in this case you should blame the user interface designer for thinking that people are stupid and can't handle non-binary classifications. We're all hip to "they" these days.

> and responsibility!

You won't get this though. If the machines are the only ones capable of making the calculations with less error than a human, then a human can only validate higher-level criteria. Things like "responsibility" and "accountability" become very vague words in these scenarios, so be specific.

A human should be able to trace calculations software makes through auditing. The software will need to be good at indicating what needs auditing and what doesn't for the sake of time and effort. You'll probably also need a way for inmates to start an auditing process.

The human isn't there to check the computer's work; the human is there to look for overriding special-case circumstances the computer can't understand, i.e. executing act-utilitarianism where the computer on its own would be executing rule-utilitarianism.

Usually, in any case where a bureaucracy could generate a Kafkaesque nightmare scenario, just forcing a human being to actually decide whether to implement the computer's decision or not (rather than having their job be "do whatever the computer tells you"), will remove the Kafkaesque-ness. (This is, in fact, the whole reason we have courts: to put a human—a judge—in the critical path for applying criminal law to people.)

I never disagreed with the idea that humans should be involved. I was concerned about the use of "responsible".

Let's be specific who you're comparing a judge to though. A guard, social worker, or bureaucrat with the guard being most likely. A guard probably has a lot of things to do on any given day, administrative exercises would only be part of them. The same could be said of a social worker. This is why I cautioned against making someone who is likely underpaid and doesn't have much time capital "responsible" for something as important as how long someone stays in the system.

I think "human in loop" designs are a good idea at a high level, but a big practical problem you run into when you try to build them is that the humans tend to become dependent on the computers. For example, you could say this is what happened when the self-driving Uber test vehicle killed a pedestrian in 2018. Complacency and (if it's a full-time job) boredom become major challenges in these designs.

> Comments like yours seem to glorify a pre-software world filled with manual entry. The reality is that manual entry is even more error-prone, bias-prone, with more people falling through the cracks.

I think that the pre-software world was quite bias-prone and extremely expensive for large processing jobs like this. The question is how this system was allowed to transition from the expensive manually managed system that used to be in place to the automatic software driven system that is replacing it at such a cut-rate that gigantic bugs were allowed to sneak in.

It appears this software is primarily used by the state government, so why was such a poor replacement allowed as a substitute for the working manual process?

Also, the number of bugs this software has accumulated since Nov 2019 (14,000) is astounding enough that I assume it's counting incidents; that's a fair way to go since these are folks' lives, but I'd be curious to know just how bug-laden this software actually is.

Although there is another factor here - this specific release program was a rather late feature addition that may not have been covered in the original contract with ACIS since the bill was only signed into law two months before the software was rolled out.

The problem is that we never evolved COBOL / VB.

Or we did, but then used the resulting easier-to-learn / easier-to-write languages exclusively for web dev, and further specialized them.

There's a mind-bogglingly huge chasm of simple business data processing software that has no performance requirements & no need to be written in an impenetrable language.

Any one of the employees there could probably tell you what should be done in each case, and it's an indictment of our profession that we haven't created a good language / system that lets them do so.

You can optimize along increase-developer-productivity or along increase-potential-developer-population. We chose the former.

>You can optimize along increase-developer-productivity or along increase-potential-developer-population. We chose the former.

I have to ask. How could it be any different?

The vast majority (all?) of the languages are made by devs. Devs work harder and produce better code when they're working on something they want to use.

And the mainstream corporate-sponsored languages (Java, C#, Go) all seem to have started with groups of devs that really didn't want to use C++, which provides roughly the same incentives.

The kind of drive needed to develop and maintain a solid language (to say nothing about an easy to work with language) kind of has to be a passion project, and people aren't generally able to choose what they're passionate about.

Probably like most government purchases, lowest bidder wins as long as they show on paper that they can do the job. Whether they execute on that is another matter, and some times the subpar work is accepted because of contract issues, sunk cost fallacy, politics/reputations, or schedule.

> The reality is that manual entry is even more error-prone, bias-prone, with more people falling through the cracks.

It doesn’t have to be. But when it’s subjected to the same incentives that produced this software and perpetuated its broken state, we should expect the result to be much the same.

When you pull back and try to look at it with fresh eyes, our prison system is abjectly terrifying. It’s designed to funnel wealth to private entities, not to implement justice or rehabilitate criminals or whatever other worthy goal(s) you might imagine for it. This story (as horrifying as it is just by itself) is only one little corner of the monolithic perversity of the system as a whole, and the executive powers involved in steering that system are about as close to evil as you can find in the real world.

The whole thing needs to be torn down and rebuilt. As long as it exists, it puts the lie to our claim of being a society that values freedom and justice.

Circling back, I guess the point is that the ideas about how to do software in your last paragraph have no chance of being implemented in the system as it currently exists. To fix “systemic problems”, we will have to aim a lot higher with a much bigger gun.

The way I see it, one aspect of this is software literacy. The bureaucrats would only be doing the task by hand instead of fixing the bug (or even cobbling together a more basic automation! Excel could probably get them most of the way there) if they are a) unable to do it themselves, and b) unwilling or unable to pay an expert to do it.

We can no longer afford to partition the people who understand/use business logic from the people who turn it into code and maintain that code. Period. It's ridiculous and endemic at this point. This problem permeates virtually every large organization in existence; public or private.

It's partly an issue of education, partly an issue of organizational structuring, and partly an issue of accessibility of technologies. But the sum of these parts has become entirely unacceptable in the year 2021.

One of the issues is that laws are made on paper and then everyone needs to figure out how to map it to software. Instead, laws should be codified in software and legal APIs should be binding. This would do wonders for efficiency, but also force laws to be cleaned up, be consistent, simple and logical.

I don't even know what this would entail. Reality is continuous and subjective; computers are not. And there's no reason that "legal APIs" would be any more cleaned up, consistent, simple or logical than our current legal system.

Wait until you hear about “case law”. Just because it’s not on the law books doesn’t mean it’s not legally binding. If a court rules something unconstitutional or whatever, the law sometimes remains on the books instead of being removed. “Why bother writing and passing a bill to repeal a law when the court said it’s unenforceable?”

So the government writes up a spec for how the legalese should map to code that engineers then implement? How is that different from what happens now?

Only the programmed end result in code would be legally binding. Lawmakers would have a big interest in making sure the code is correct and provide incentives/change procedures accordingly.

The inmates in this article would be released immediately after the code-law is implemented; you could apply new tax laws (i.e. as a config file) to your accounting software.

Why maintain an obfuscated legal text when you need it in software anyway?
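For concreteness, here's a hypothetical sketch of what a release-credit rule could look like as code. The field names and rates are invented for illustration (loosely modeled on a flat 1-credit-per-6-days regime versus a conditional enhanced one), not Arizona's actual statute:

```python
from dataclasses import dataclass

@dataclass
class Record:
    days_served: int
    offense_eligible: bool           # offense type qualifies under the new statute
    disqualifying_conviction: bool   # any conviction that voids eligibility
    completed_required_courses: bool

def earned_release_credits(r: Record) -> int:
    """Hypothetical encoding of an earned-release-credit rule."""
    if (r.offense_eligible
            and not r.disqualifying_conviction
            and r.completed_required_courses):
        # Enhanced rate, but only when every statutory condition is met.
        return r.days_served // 3
    # Baseline regime: 1 day of credit for every 6 days served.
    return r.days_served // 6

print(earned_release_credits(Record(600, True, False, True)))   # 200
print(earned_release_credits(Record(600, False, False, True)))  # 100
```

The point isn't the specific numbers - it's that a rule this explicit can be checked against the statute line by line, and published alongside the bill.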

The legal text is the specification. What you're suggesting is the equivalent of the classic "the spec is whatever the implementation does", and would erase the distinction between correct, incorrect, and undefined behaviour.

In other words: “it’s not a bug. it’s a feature!”

I don't think it's so much that software is better or worse than manual entry. First, it's the attitude / rules that assume what's in the system is right. Second, there's no real procedure to audit or check the accuracy of the data.

As a professional who works with data systems: you're more likely than not to have a database with bad data in it.

While not universally true, a manual process does typically require a process to fix mistakes. This is true of software as well, but the perceived lack of errors in software processes often leads to this being ignored, resulting in the aforementioned “bureaucratic violence”. I do think automated solutions are inherently better because of the bias reasons you call out, but it cuts both ways and removes interpretation from processes that may not respect nuance.

We know exactly how to fix it. Our cowardly politicians and toothless regulatory agencies are not up for the challenge.

For every piece of software that can directly and materially harm someone's life like this, there should be a chain of responsibility. And within that chain, there should be legal recourse and, in most cases, penal consequences, especially in the case of inadequate software quality/testing/validation, should the software fail to perform its task correctly. Bonus side effect, software quality will go up across the board in the industry.

It will never be the case that software will be perfect. We can get closer and closer, but the closer we are the more expensive the next step in closing the gap is.

While I do agree that making software better/more reliable is a good goal, I believe we would be better off making the system as a whole more robust; the system that includes humans. For every situation where a piece of software has control of something that affects society (individual, group, etc), there should always be a clear and direct means of appealing / pushing back on the decision that was made. Those means should involve a human reviewing the information and making a decision based on that information, not on what the computer said. There's thread after thread of us saying the exact same thing about companies like Google and Facebook; it should apply as a general rule.

No one is arguing that software must be perfect. But we aren't really even trying. Most software is written in extremely error prone languages without adequate testing.

You don't hear anyone saying we should throw out finite-element analyses and other computational verification methods when designing bridges because bridges can never be perfectly secure. Yet that is exactly the sentiment I often hear on software.

Are bridges and software comparable in complexity though? How many engineers work on designing a bridge compared to software developers that are writing a program? There are more than an order of magnitude more software developers in the US than civil engineers.

Now think about the state of US infrastructure. Does it inspire confidence for the future?

I'd say a lot of software is comparable in complexity to the large bridges that exist. Of course there are massive software projects that dwarf any bridge ever built. But a lot of software is only moderately complex.

I don't know about infrastructure in the US. I don't live there. I'm happy with the infrastructure in west Europe though. I wish that much care was put in to the software I use every day.

I agree with other commenters. And I think another part of the problem is the consumer model of software. In a pen and paper system, if there was some reason why a record was special or different from the others, you could just attach a note to it and the next person who picked up the record could read your note. Custom software systems deny the people using them that sort of ad-hoc flexibility. There’s no way to do anything that wasn’t planned for and programmed in. So office workers who use flows managed by custom software are actively disempowered from authorship over their own workflows.

That’s one of the reasons I think Excel (and tools like Notion) are so popular - the people on the ground can learn to express themselves in the full context of the tools. I think this sort of software is far more important than we give it credit for. (It’s an invisible problem to us, because we can change the software.)

There is no such thing as perfect software when requirements change with time. We need adaptable software instead.

Just like all our governance mechanisms have a built-in system of constant change.

It’s not the software makers who are committing the crimes. It’s the people abdicating responsibility to software. You can’t wipe your hands of releasing a prisoner on schedule by delegating that to software. The software can help you with your task, but if it’s brought to your attention that there’s a mistake, your failure to promptly fix it is on you.

It's both. Look at the amount of unnecessary waste created by the abysmal Android update policy.

Most of the time, liability for defective software should be civil liability in tort or contract. In most cases where something bad has occurred involving software, it’s going to be hard for the developers of the software to have the sort of mental state and unattenuated causal connection with the occurred harm that we typically require for criminal liability for reckless conduct.

I’m not saying it can’t happen, but it would be very unusual circumstances, especially since there’s usually an operator of the software sitting between the developer and the person harmed by the software.

Which abysmal android update policy? What waste?

Imho that wouldn't cause quality to go up. It would just make it more expensive to develop and to fix bugs. Even more cover-your-ass behavior would go on.

Or at least a huge share of that burden needs to be on the client so that they define and then test and control the SW they receive properly.

The problems with the software sound like typical big software project problems. Trying to cover a huge breadth of use cases with lots of very important tiny details and released in a big bang (one migration). It sounds like more of a project mgmt problem than a software problem to me.

But maybe I am just a hammer and see nails everywhere.

My cynical take is that the lack of accountability is exactly what makes software-enabled bureaucracy so appealing. If this is true, there is no incentive to change.

It's a cost/benefit analysis. They'll be taken to court by a few inmates, have to pay them or settle out of court and bill the government for their stays. Since everything is a matter of money, what the company will do is what brings them the most profit after they paid their legal bills.

It seems keeping inmates longer pays better than releasing them on time.

This is part of it. Otherwise it’s just way cheaper to run a faceless bureaucracy that sometimes throws people under the bus when a mistake has been made.

Or things don't get much better, and management finds a way to make it someone else's problem. Do you think it's the CEO or the people making these decisions that will be locked up? We'll just be arresting whatever yes men show up to be the pawn in the stupid game someone else architected.

More people working with a gun to their head. I'd rather the gun be pointed at the person who already has a gun pointed at me, instead of both barrels facing in my direction.

Yeah but the cost of that chain will also rise.

If I'm (or my company is) personally on the hook for bugs, then I'm going to adopt a NASA-like software quality regimen, pushing up the cost of the product.

Every single part of the software stack below me, from hardware, OS, compiler toolchain, disavows responsibility so if I have to absorb all the risk, the product is going to be mind bogglingly expensive.

I have to say that sounds exactly how this kind of software should be built.

We're not talking about the newest social media hype. This software actually matters. Especially since today most of these bureaucratic processes can't be done without this software.

>I have to say that sounds exactly how this kind of software should be built.

You see this in every topic.

Every "muh pride in muh trade" person says something like this about the relevant trade but the fact of the matter is that the world runs on off-brand duct tape, harbor freight tools, walmart jeans, economy tires, and all sorts of other "value" solutions and the race to the bottom is what has given us much of the modern world that we take for granted.

A balance needs to be struck. And it generally needs to be struck further toward the "quickly and cheaply build it like crap but make it easy to override or reset" portion of the available solution space than anyone pontificating about quality on the internet will readily admit.

We don't say "perfection is impossible" when it comes to bridges collapsing. We understand that yes, on rare occasion a bridge WILL collapse, but we go and find the people responsible, and we still hold them accountable.

This is a level of accountability that basically every other field of engineering is held to, and they've all risen to the challenge and left the "off-brand duct tape" behind.

Even within programming, planes don't fall out of the sky daily, so I feel safe assuming the aerospace programmers are comfortable working with a high degree of responsibility. High speed traders are dealing with million-dollar stakes and a single mistake can make the news. I'd expect they've got a very accountable culture where people get fired when that happens.

There are costs, yes, but there are also costs to keeping 733 people illegally imprisoned - we're talking two man-years of people's lives lost every DAY this goes on.
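(Back-of-envelope check on that figure:)

```python
# 733 inmates each held one day past their release date costs
# 733 person-days per calendar day. Converted to person-years:
inmates = 733
person_years_per_day = inmates / 365.25
print(round(person_years_per_day, 2))  # ~2.01
```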

Why not hold the people accountable that deployed the tool? Ultimately the tool helps a human do the job. It doesn't do anything on its own. If a contractor shows up to do repairs in your house, but their hammer is faulty and the head flies through your window you don't talk to the hammer manufacturer. You talk to the contractor. It's the contractor's job to deal with wherever they got the hammer from.

If software developers are held responsible for the software then expect costs to multiply. Nobody would directly sell you software either - they'd sell you a hardware and software bundle that you must use exactly as the developers say. If you input a value that's out of bounds then that's on you. The software also won't get updates and it will run on 20 year old hardware. That's not too dissimilar to what we have in aerospace, right? And developers aren't even held responsible there! It's the companies, so expect it to be worse than even that.

Would you blame the people who paid for the bridge for the collapse? Should they have understood the details and flagged where corners were cut?

When it comes to critical systems I think it's fair to say that the engineers who build them are the only ones who can fully understand the risk.

This is the point behind accreditation. It forces the supplier to maintain a minimum bar for services to protect the reputation of the industry.

In real life the engineers don't police themselves.

Before a bridge, house or even patio deck with a foundation is used a safety inspector needs to give approval.

Are safety inspectors intended to validate the design of a bridge? Or just that construction and materials are up to spec?

>We don't say "perfection is impossible" when it comes to bridges collapsing.

Yes we do. People on the internet might not but look at the formal documentation that goes with any bridge plans. It will talk about factors of safety, various loads, environmental conditions and establish a set of constraints outside of which the bridge is not expected to perform as advertised.

>High speed traders are dealing with million-dollar stakes and a single mistake can make the news

It's really easy to put HFT on a pedestal when you can't inspect it up close but I assure you that for every Citadel and P72 there are half a dozen firms with sloppy software that goes absolutely crazy if non-ideal but foreseeable things happen. These people are making money hand over fist (kind of) by building to the minimum. There's one firm I want to name because of how much everything they have is held together with duct tape but they're nice guys so I won't.

> the world runs on off-brand duct tape

So very, very true.

That doesn't sound bad to me if we are dealing with people's lives or people's freedoms.

It is already costing what it would if it were built to that level of quality - it should be hard to justify a 24 million dollar system otherwise.

The other option is to accept that we have mediocre software that creates a number of problems we are willing to accept; no, not when people's lives are at stake.

Good, this is precisely the objective.

If there’s penal consequences for bad software, you can bet that the development cost will easily 10x overnight.

Is there a better option? This is software that has to 100% work in order to be trusted. If the software cannot be trusted, additional process has to fill in to verify the decisions the software is making. There's no way to deliver a good solution cheaply.

All software has bugs. This is the mantra we are taught and for a good reason.

The answer is human oversight.

I wouldn’t work on such software.

You would for 10x the salary.

> Our cowardly politicians and toothless regulatory agencies are not up for the challenge.

Because their constituents want people to be punished and if the inmates have to suffer a little extra so be it, "they shouldn't have committed a crime."

Our society is severely lacking in empathy.

> We know exactly how to fix it. Our cowardly politicians and toothless regulatory agencies are not up for the challenge. For every piece of software that can directly and materially harm someone's life like this, there should be a chain of responsibility.

No, you know how to blame people and punish people, but that doesn't mean you know how to deliver bespoke software, for a price that the various government agencies can afford, which doesn't have bugs that severely hurt people's lives.

In fact, punishing people is not going to accomplish that.

That's the problem with a legislature that thinks it can pass any law it wants - let's take into account this new variable X that our software has no way of collecting or measuring - without looking at the feasibility of actually implementing the law given the infrastructure available, and without approving a corresponding budget for software upgrades to actually enact the law, and taking into account how much time it would take to write, test, deploy, and then train people to use the new software instead of just issuing streams of mandates like Emperor Norton and expecting the mandates to materialize into existence like the morning dew. And if said morning dew does not appear, then we can punish and sue the people in charge when they tell us there is no way they can do what we are asking them.

Of course there is blame on the prison leadership for covering things up and that leadership should be fired, but you can punish and sue people all day long and it's not going to result in any good code being written. Punish enough people, and it will just result in the Law being repealed.

The problem with this type of bespoke code is that it has exactly 1 customer, so it's going to be horrendously expensive while also being buggy and quickly thrown together compared to software whose development costs are leveraged over millions of customers. And then what happens next year when some crusader decides that they need to take some other new variable into account? Constantly changing requirements, underspecified projects, one-off projects whose schedules are impossible to estimate, and cash strapped local governments. Yeah, that's a recipe for success.

This is why everyone hates enterprise software, but even enterprise software has tens of thousands of customers. Bespoke software for the Arizona prison system -- forget it.

> And within that chain, there should be legal recourse and, in most cases, penal consequences,

Wikipedia says, "Under common law, false imprisonment is both a crime and a tort".

There is a chain. There is legal recourse. And there are considerations in government IT that you would not believe and they are incredibly difficult to deal with on minimal resources. It has to be harder than the private sector and this application isn't any different than buggy mainframe software run by major banks. It sits and gets crufty.

This is a new system that replaced a previous one not that long ago. This isn't some crazy old thing running COBOL on a VAX somewhere that nobody understands anymore.

The old COBOL crap is more likely to have been implemented by someone with a clue.

The “new” systems are usually aping the old system behavior. In one case, I ran into a system where some company converted COBOL transactions into Java with some sort of automated tool to put the legacy system “on the internet”.

I have no idea what that means, for newer is not necessarily more supportable. Who knows what the system is - maybe they had a multi-million dollar SAP implementation to manage prisons, and now you’re looking for functional support of that platform after it was customized all to hell, you need a-team SAP resources, not the new kid at Wipro... I can only imagine what’s behind that curtain. It’s the government so I imagine the worst.

Maybe every contract like this should be programmed to the same API twice. Then you could at least compare whether the two pieces of software agree. You check the disagreements and get the companies to fix them (that should be part of the contract).

And don't tell me you can't buy two CRUD applications for 24 million dollars. It's a silly amount of money for such a buggy application.
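A minimal sketch of the dual-implementation idea. The credit formula here is invented for illustration - the point is the cross-check, not the rule:

```python
# Two independently written implementations of the same spec,
# cross-checked over a range of inputs. Disagreements get flagged
# for human review instead of silently trusting either vendor.

def credits_vendor_a(days_served: int) -> int:
    return days_served // 6              # direct formula: 1 credit per 6 days

def credits_vendor_b(days_served: int) -> int:
    credits = run = 0
    for _ in range(days_served):         # deliberately different approach:
        run += 1                         # count off the days one at a time
        if run == 6:
            credits += 1
            run = 0
    return credits

disagreements = [d for d in range(1000)
                 if credits_vendor_a(d) != credits_vendor_b(d)]
print(disagreements)  # empty list when both match the spec
```

Two teams can make the same mistake, so this isn't a silver bullet, but it catches a lot of the "oops, off by one eligibility condition" class of bug.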

> cowardly politicians

They aren't cowardly; they are responding rationally to a constituency that hates "criminals". Prioritizing fixing discriminatory systems (such as this software, or "stop and frisk", or the death penalty) is bad electoral politics for "tough on crime" politicians.

The politicians and toothlessness of the regulatory agencies is a direct result of the electorate. The electorate likes these sorts of outcomes, and any politician that goes against them will find themselves either primaried or drummed out of office.

Nobody will take that contract. Any mistake in the government specifying what they want you to build can be politically manipulated into a legal matter with some probability you'll end up in prison.

Not at all: any mistake in the government specifications is the government's problem. If you have requirements and a requirement tracing matrix to deliverables and you deliver what is required, you are good.

But usually in most parts of the world these government contracts are intentionally made ambiguous for so many reasons including corruption and incompetence of the people writing requirements.

At what point do you feel the developers - the ones who actually wrote the code - should be held legally responsible for that code's execution?

This headlining issue is a specification change and it is an administrative rather than an engineering failure to knowingly rely on outdated software. The article also refers to other software problems but in scenarios like this the people who write the code (as well as the people who operate it, such as corrections department IT staff) tend to have no decision making power whatsoever:

>“It was Thanksgiving weekend,” one source recalled. “We were killing ourselves working on it, but every person associated with the software rollout begged (Deputy Director) Profiri not to go live.”

I dropped this in a comment elsewhere in this discussion, but also makes sense here...

I find the government "requirements" process tends to create situations like this. Rather than build flexible software that puts some degree of trust in the person using it, they tend to overspecify the current bureaucratic process. In many cases, the person pushing for the software is looking to use software to enforce bureaucratic control that they have been unable to otherwise exercise, with the effect that the people the project initiator wants to use the software simply work around it. They then institute all sorts of punishments and controls to ensure it must be used. This then results in the kind of insane situation we have here, where you can't do something perfectly legal because "computer says no".

the person pushing for the software is looking to use software to enforce bureaucratic control that they have been unable to otherwise exercise

This is frequently my observation as well. In the process of creating stricter control, the bureaucrat increases the power of their bureaucracy while shifting the blame for any problems to a faceless entity.

They then institute all sorts of punishments and controls to ensure it must be used.

This leads me to one of my primary frustrations with the bureaucratization of our lives. Severe consequences are attached to low stakes situations and rational individuals who see the harm caused by the situation are rendered powerless to make changes.

> Severe consequences are attached to low stakes situations and rational individuals who see the harm caused by the situation are rendered powerless to make changes.

You can see the process at work within this very thread -- "And within that chain, there should be legal recourse and, in most cases, penal consequences, especially in the case of inadequate software quality/testing/validation, should the software fail to perform its task correctly." (https://news.ycombinator.com/item?id=26228195)

People seem unable to imagine any way to improve things except by adding more and more legal consequences. We need to stop doing this!

This is an excellent observation about the process of bureaucratization in action. For some reason the solution to the failings of bureaucracy ends up being even more bureaucracy and even greater consequences for failing to play by the rules of the bureaucracy.

Is bureaucracy like violence? If it isn't working you aren't using enough of it?

Yeah, it's tempting to imagine that all the problems in the world can be solved in the same kind of way, whether through more bureaucracy or more violence. Another example is the libertarian idea that all problems can be solved through the application of market forces. Maybe the general term for this is solutionism -- the idea that problems have "solutions" which take certain standard forms, when really these problems are the natural result of various patterns of human activity which may or may not respond to the "solution" you chose. In the worst case, you can end up in a sort of feedback loop, where an earlier "solution" was actually the cause of the problem that justifies the next "solution" and so on.

Solutionism seems like an apt term. In addition, solutionists may feel bound to their ideology and unable to see problems caused by their solutions for what they are leading the types of feedback loops you describe. People double down on their beliefs when presented with opposing viewpoints or even contrary evidence all of the time. Interesting observation.

> In the process of creating stricter control the bureaucrat increases the the power of their bureaucracy while shifting the blame for any problems to a faceless entity.

They are basically using software to preserve the problem to which they are the solution, i.e. the Shirky principle.


In this case, the requirements should be as simple as "implement the law."

Implementing the law, especially in a common law country like the US, is really difficult. The case law that is of primary importance here is often contradictory and fuzzy, as well as constantly changing. In addition, the written law doesn't capture the whole situation.

In a civil law system it's more likely achievable, but I'm quite sure the requirements aren't that clear cut judging from the text.

Civil law codes are often just as ambiguous as common law systems. The only difference is that in a civil law system, you can’t firmly rely on precedent, so a judge can interpret the law against you just because he doesn’t like you, even in the face of contrary precedent.

That’s in part why international business to business contracts almost always specify a common law jurisdiction as the required venue for any lawsuits.

Well, there’s the law (federal, state, etc), then the additional regulatory rulings (federal, state, etc), then the as-applied department policies. Not really a lack of “law” but a mess of constraints that need to be deciphered into functional requirements with footnotes, matrixed to testing and preserved/maintained for the next guy. Ugh.. I want to charge the gov. 2k hrs for just thinking about it today.

16 months is a long time, especially when people are in jail and should not be there. However, in my experience as a software developer, nothing is simple, but everything can be done.

First problem coming to my mind: do they have the budget to pay the software developers to add this new functionality to the software? Do they have to ask someone else for the money, maybe the very politicians that changed the requirement?

Then when that is settled there are the usual problems of analysis and implementation. Probably also figuring out where to get input they didn't have before. It could be a large project. But 16 months, ouch.

Implementing "the law" is anything and everything but simple.

That’s as insightful as people saying “just follow the constitution” when in reality people have been fighting about the exact meaning for centuries. Most laws leave some room for interpretation. This is pretty much necessary because the law can’t specify each corner case.

Same goes for software requirements. Good requirements make the intent clear but allow implementers some flexibility. Specifying everything in minute detail is usually a recipe for disaster.

"Implement the law" in a software product is as utopian as replacing judges with computers that "implement the law". Does it now make sense why it's not possible?

Perhaps software isn't the right place to implement the law?

> It is also a perfect example of how software is used to create increasingly more elaborate and faceless bureaucracies that force individuals to spend more and more time contending with them.

You are attacking the wrong target. It's the government that's broken. This kind of outrage can happen just as easily with pencil and paper. The root cause is the lack of accountability and desire to make the government function better.

But they can be undone with pencil and paper too. The footgun of automation here can only be undone with either a really good patch (git commit -m 'finally finally works for real this time') or a lot of pencil and paper work that's slower than the processes that caused the problem.

I'll note that this isn't the first time that people have said "well its the algorithm" when they were responsible. The example that springs to mind is bail risk assessments. You're very correct in that there are people making real decisions that are very cruel here. The machines give them something to hide behind.

Yes. "It's just the algorithm" is the "it's just procedure" of the 21st century.

Except software allows a scale and efficiency that is impossible with pencil and paper while also creating an ideal scapegoat. Software is being used to avoid accountability at a scale much greater than what was possible with manual process.

The old saw about computers is that they're designed to allow humans to make millions of mistakes, very quickly and very accurately.

> Software is being used to avoid accountability

Right, in the same way knives are used to rob people.

This is not a new problem: an organization strategically builds an unmanageable bureaucracy and then profits off the issue while claiming incompetence.

Computers just make said bureaucracy cheaper to operate.

Isn't this the basis of the book "IBM and the Holocaust"[1] where the author lays out how IBM's technology helped facilitate the wide scale of Nazi genocide?

[1] https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

How is this a government problem? People frequently lose access to social network accounts and email because of broken algorithms. Google can blacklist a business and send it broke. Insurance companies, credit bureaus and banks can make a wrong decision and deny credit.

People decide to fix problems, in software or anything else for that matter. If they don't get fixed, it's usually because somebody (or some body) decided it was not a priority.

Corporations have similar issues. Just look at the biased image recognition technology that FAANG releases.

Software that infringes on the public (even if they are criminals), as opposed to software that people can opt to use or not, needs to have a very serious question asked at design time: if the software produces an incorrect result, what mechanism exists to override it, audit it, or provide damages?

The fact that people are not asking that is worrying. I understand why the system was not designed to handle something that happened later (even if it could reasonably have been foreseen), but the fact that it was implemented with no override is the real scandal.

I don't know whether this comes down to the amount of power vested in a Governor, such that the rest of the organisation can't say, "sorry, Guv, but we can't do this because the software wasn't written to." If TV is to be believed, Governors want things done yesterday and the problems are yours to worry about.

As someone with a civil engineering background:

This right here is the difference between conventional engineering disciplines where designs require a Stamp from an Engineer of Record who takes on personal responsibility in the event of design failures vs. the current discipline of software engineering.

There's a big difference between a software developer and a software engineer, and I think that difference should be codified with a licensure and a stamp like it is in every other engineering field in the states.

Software like this ought to require a stamp.

A decent analogy is the environmental work I've done. When we come up with solutions and mitigations to environmental problems, like software, we can't always predict the result because of the complexities involved. So we stamp a design, but we, or the agencies responsible for allowing the project often specify additional monitoring or other stipulations with very specific performance guidelines. It's a flexible system and possible to adapt to, but there are real consequences and fines when targets aren't met. When bad things happen, the specifics of what went wrong and why are very relevant and the engineer may be to blame, or the owner/site manager, or the contractor who did the work, or sometimes no one is to be blamed but the agencies are able to say: "Hey this isn't working and needs to be addressed, do it by this date or else."

In engineering, there's an enormous amount of public trust given to engineered designs. The engineer takes personal responsibility for that public trust that a building or bridge isn't going to fall down. And if you're negligent, it's a BFD.

Given the current level of public trust that we are putting into software systems, it's crazy to me that we haven't adopted a similar system.

I would love for software engineering as a discipline to go that way, but it's going to be very hard. Software usually has more moving parts than hardware.

I don't mean to understate the difficulty of being a hardware engineer, of any sort. But the whole reason we do things in software at all is because software is more flexible, and adding a new thing comes with less overhead. Hardware, while challenging, tends to follow similar sets of solutions to similar problems. There are only so many things a bridge, or a building, or even a CPU will be tasked to do.

Not saying this is impossible for software, either. Software gets built for man-rated tasks -- and jobs like this should be considered man-rated, because lives depend on it. That means it's going to cost more and take longer, especially when it's software of a kind nobody has ever built before. Who has experience in "software that releases prisoners?"

The reason they don't do that is, therefore, money. I doubt the prison system is willing to pay 10x as much for the software. The software was probably built by the lowest bidder technically acceptable, where "technically acceptable" was incredibly flexible because nobody really knew what had to be done.

> additional monitoring or other stipulations

That does happen with software a lot, frequently flying under the title of Compensating User Entity Controls (CUECs) or User Control Considerations (UCC). Basically the "here it is, don't feed it after midnight and don't let it get wet, and good luck" riders. Sounds like these problems happened way earlier in the lifecycle though - either the requirements were missed or the testing wasn't thorough enough.

Software is completely different from your typical other engineering fields. You just can't apply the same methodology there. In other fields such as building bridges you are quite often taking what has already been proven to work well and building it, while in software if you start to repeat yourself you are doing things wrong.

I have a very cynical take. Probably too cynical. The ability to shift blame to software as opposed to the humans responsible for administering a bureaucracy is exactly what makes it so appealing. The question is ignored intentionally.

That was my first thought actually. We are probably not cynical enough! In addition to blame shifting, the prison industrial complex is benefitting from having the prisoners stay longer, so there is zero incentive to fix the problem.

Not cynical enough: if private prisons with a profit motive delay prisoner release, they’ve made more money.

> The computer is right in the sense that its decision cannot be undone, and solely to blame since no human can undo its edict or be held accountable, apparently.

This is why penalties are such an important part of the feedback loop. Obviously we can't go back in time and restore someone's phone privileges, but we can award monetary damages for the mistake.

Monetary damages alone won't discourage this behavior, though, as ultimately taxpayers foot the bill. There also must be some degree of accountability for those in charge of the system. Software can't become a tool for dodging accountability. Those in charge of implementing the software, providing the inputs, and managing the outputs must be held accountable for related mistakes.

> There was an Ask HN question the other day where the poster asked if the software we are building is making the world a better place. There were hardly any replies at all.

Few Ask HN questions get many responses. This is also a loaded question, as HN is notorious for nit-picking every response and putting too much emphasis on the downsides. For example, I know farmers who have increased their farm productivity massively using modern hardware and software. However, if I posted that it would inevitably draw concerns about replacing human jobs, right-to-repair issues, and other issues surrounding the space. The world is definitely better off for having more efficient and productive farming techniques, freeing most of us up to do things other than farm.

However, all new advances bring a different set of problems. Instead of trying to force everything into broad categories of better or worse I think it's important to acknowledge that technology makes the world different. Different is a combination of better and worse. The modern world has different problems than we did 100 years ago, but given the choice I wouldn't choose to roll back to the pre-computer era.

> It certainly seems that way reading articles like this.

Both news and social media have a strong bias toward articles that spark anger or outrage. For me, the whole world stops feeling like a dumpster fire when I disconnect from news and social media for a while. I'm looking forward to the post-COVID era where we can get back to interacting with each other in person rather than gathering around a constant stream of negative stories on social media.

Both news and social media have a strong bias toward articles that spark anger or outrage.

Absolutely, and I agree that disconnecting has real benefits. On the other hand, at least for me personally, COVID has disrupted the routines that normally crowd out in-depth reflection. It has given me time to read books I normally would not have read, because that time used to go to things like waiting for my car to warm up so I could get to work on time, commuting, going out to lunch with co-workers, and going out for drinks with co-workers, friends, and family.

What is described in the article is outrageous. My concerns about bureaucracy and software's role in enabling it, on the other hand, have developed separately because I have the time to consider it.

It's such a hard question to answer, because software doesn't exist in a vacuum. Hopefully this example is relevant:

You're a software developer maintaining an eCommerce platform, on the one hand your platform helps perpetuate low margin and wasteful consumerism, on the other hand your software enables small businesses to compete in the new online world.

Consumerism is bad, but commerce is as old as civilization and supports all of our lifestyles, so on a macro level you're in a tough spot. You're a talented developer putting their skills to work building something the community needs, I personally think that means you're doing good work in the context of your society, but it is difficult to say if it's making the world a better place.

Social media is the same. On the one hand, it connects family and friends, on the other it drives narcissisms, consumerism and misinformation.

You almost have to try and calculate the "Net Good" or "Net Bad" of a type of software and see how the cards fall. For social media I would suggest that it's currently in a "Net Bad" situation, causing more harm than good for example.

> The inmate who lost their phone privileges for 30 days is an example. They did nothing wrong but the computer says so and nothing can be done. The computer is right in the sense that its decision cannot be undone, and solely to blame since no human can undo its edict or be held accountable, apparently. It is tragic and absurd.

All government software should be open source and anyone should be able to investigate the code and submit bug reports, including inmates. If they know there is something wrong, they have a lot of time on their hands to learn a useful skill to fix these issues.

The government should then not be allowed to close a bug as wontfix or invalid without approval from other citizen watchdogs verifying if a bug report is legitimate.

I agree with what you are proposing in principle. However, the notion that it is up to each individual to combat the system when it has wronged them while they languish in some kind of bureaucratic limbo is one of the core sicknesses of our system. Apart from having direct access to the source code and the ability to make pull requests, that is exactly what is happening here. The bureaucrats involved know there is a problem but are leaving it up to individual inmates and their advocates, both inside and outside the system, to sort it out.

> However, the notion that it is up to each individual to combat the system when it has wronged them while they languish in some kind of bureaucratic limbo is one of the core sicknesses of our system.

What's the alternative, though? No human system I'm aware of can address this in any cost-effective manner. Linus' Law has been demonstrated to be one of the best human approaches. The only software approach I can think of that has addressed this is the fault-tolerant voting used in avionics (NASA, SpaceX, Boeing), where the cost of failure is so high that typically three independent implementations vote on the outcome. It's impractical to hold every government software system to that standard.
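As an aside, the N-version voting idea is simple to sketch. This is only an illustration of the majority-vote concept, not any specific avionics standard; the calculator functions and the 2-of-3 rule here are hypothetical:

```python
from collections import Counter

def vote(results):
    """Return the strict-majority answer from independent implementations,
    or raise if no majority exists (treated as a detected fault)."""
    answer, count = Counter(results).most_common(1)[0]
    if count * 2 > len(results):
        return answer
    raise RuntimeError("no majority: independent implementations disagree")

# Three independently written release-credit calculators (hypothetical).
def calc_a(days_served): return days_served // 6
def calc_b(days_served): return days_served // 6
def calc_c(days_served): return days_served // 7  # buggy version, outvoted

print(vote([calc_a(600), calc_b(600), calc_c(600)]))  # 100, by a 2-to-1 vote
```

The expense is in the "independent" part: three teams building three implementations from the same spec, which is exactly why this only gets done when the cost of failure is extreme.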

Even in some of the better run software companies (e.g. Google and Facebook), it's incredibly challenging to achieve system correctness across the entire system. There are always tradeoffs to be made and even in the most critical systems there are limits to how to practically achieve correctness.

I work at one of these better run companies specifically on measuring and guaranteeing correctness and detecting failures in correctness both in production systems (via continuous probing) and as part of change management (integration testing the change against the system in production). It's a really hard problem even for us and we have far better engineers than you find working on the overwhelming majority of government software systems.

The only alternative (and the one I prefer) is to have less government involvement so that fewer systems are involved and you can have more eyeballs scrutinizing fewer systems. Government is far too big already, and there's too strong a desire to keep making it bigger before we adequately tame the complexity we've already built. The co-dependence between two codifications: (1) the law code and (2) software code, further contributes to ossification that is almost impossible to undo.


I'd argue that software like this has saved people from having to do millions of years worth of mundane work. This news is essentially like a traffic accident. Doesn't mean vehicles in general haven't benefitted the human experience. The fact that it is news worthy is evidence that it doesn't happen too often.

> making the world a better place

For a large segment of the US electorate, anything that inflicts pain on "bad people" is "making the world a better place".

If the software was causing prisoners to be released early, most US voters would be up in arms. But if they're being held too long, the calculus is different. In software terms, for many Americans, a "tough on crime" outcome is a "feature not a bug".

Little Britain's recurring "Computer says no" bit has always been a great illustration of this point. https://www.youtube.com/watch?v=0n_Ty_72Qds

A big problem is that while we might improve the typical use with software, the failure mode is generally ignored and swept under the rug. See google's customer service. You can speed up and improve the average case a thousand times more, driving costs down by maybe a thousand times, or you can bring costs down like 2x but keep the benefits of manual, person centric failure recovery. Even then, non-automation doesn't make it "human" in the sense we all want. A rep in a call center who is only allowed to follow the playbook almost might as well be an automaton for all the freedom they have. Are faceless economies of scale and the bureaucracies they bring the root issue?

What if the root cause is the ever increasing complexity that software is trying to manage? At all levels (legislature, management, bureaucrats, programming languages, developers, testers, users, subjects) we are creating more and more complex situations we ask software and the institutions that produce it to manage for us.

But as the complexity goes up and the number of these complex situations increases, are we reaching a point where we outstrip the amount of money, talent and experience our institutions would need to deliver solutions to successfully manage them?

With our resources and intelligence as a species being capped, it seems at some point this is inevitable.

Yes, software can be used as a cover for abuse like in this case, but that happens because some people in power let it happen. For other pieces of software that have consequences for people with more power than prisoners, the society will not allow failures to happen. I only need to mention the MCAS software of Boeing 737 Max for a counterexample.

Software does not have its own will. Software is only allowed to make decisions on our behalf because we let it do so.

It is interesting that you bring up the 737 Max. I was actually talking about the 737 Max in the context of software being both infallible and the perfect scapegoat with someone last week. The 737 Max is an example of how software was believed to be infallible (it wasn't) and ultimately became the scapegoat for a design flaw. That's not to say it happened intentionally. However, when the time came to assign blame, fingers began pointing at the software.

I do agree that software has no will. It is a tool for facilitating our will for better or worse.

The lack of desire to get things right throughout the bureaucracy is the problem. The software is just a mechanism. Other organizations that actually care figure out ways to get things right even when the software has issues.

You can see in the film Brazil, from 35 years ago, that this was already a problem and concern even without modern software.

Much older than that. The blackly humorous 1965 short story "Computers Don't Argue" by Gordon R. Dickson is pretty much the definitive "software as a bureaucracy" story. No spoilers - it's short and well worth it:


> Is this because for the most part our efforts in producing software are actually doing the opposite? It certainly seems that way reading articles like this.

I think the most likely explanation is just that people didn't see the question or weren't interested in having the discussion. Most people believe the work they're doing is at worst neutral. A less likely candidate for the reason (but still more likely than your guess) is that people didn't want to be subjected to unfounded criticism of their work from people who don't know anything about it.

The best solution to the problem is to hold developers personally liable for the software they write, as well as the owners. That could mean criminal penalties for negligent violations of industry standards and processes but will mostly result in civil penalties.

The second- and third-order consequence is that developers will insulate themselves behind licensing and proofs of practice, like every other industry.

Until people actually advocate for real penalties for such harmful violations they don’t care. All their temporary whining and crying is just blowing smoke up our asses.

I am not sure how a software bug is the exclusive enabler, since an administrative bug is just as plausible with pen and paper and a compliant warden.

It's not that software is the exclusive enabler. It is that software is the ideal enabler because of its ability to create a truly faceless entity that seems to exist outside the power of even those who administer it. Of course these issues were always possible without software. Software is just so much more efficient and useful for creating these kinds of issues because it can scale and because it can be the scapegoat.

most of us have jobs that service the revenue streams of rich owners. what did you think was going to happen?

> if the software we are building is making the world a better place

No, it's now all about "extracting value", "rent seeking", "subscriptions", "censorship", "monopoly" and "control". We got bribed by FAANG and this is the consequence.

Bribed implies the natural course of the tech worker would be altruistic, which is quite the assumption

There is plenty of useful software. For example: scientific software.

That's true, and I am not arguing that useful software does not exist. Rather, a lot of the software we pour energy into producing is not useful, or is useful only in perverse ways.

Software is a tool used by people. You are measuring America.

Software is just a tool, it can be used to build good or bad things.

It would be hard to see this in e.g. Scandinavian countries, where incarceration is seen as rehabilitative rather than punitive.

In the US, racial discrimination, free market extremism along with "tough on crime" laws have created unimaginably cruel systems; together with private prisons, the goal has been on cutting costs rather than rehabilitating prisoners. Software is just a tool to further that goal.

Indeed, software is an enabling technology and is morally neutral just like any other tool. A machine tool can make a medical device that saves lives or a military device that destroys them. At the heart of the issue is the intricate web of institutional mousetraps designed to convert low-stakes issues into serious offenses. For example, a person who does not respond to a citation for expired vehicle tags is now ensnared in a mechanism that turns a minor tax-collection issue into a real criminal offense. It is up to humans to make these things so.

I brought up the Ask HN question mostly because I felt the lack of replies was a silent acknowledgement of the realities of most software endeavors: that they are not making the world a better place. Most aren't going out of their way to make it worse. Probably, it isn't even a consideration.

I don't think tools cancel themselves out, and I suspect that nothing "is just" anything.

Even if ideas like "the medium is the message" are partially true and then just partially applicable, that should give us pause when we try to cross out tools in our morality equations.

- https://en.wikipedia.org/wiki/The_medium_is_the_message

I’m not sure what you mean by tools canceling themselves out. Are you talking about the balance between the good and bad things done with tools? If so, I do agree with you. Eg guns are tools that clearly don’t have a net benefit and controlling their manufacture and possession seems OK to me.

I was in a hurry when I typed my message, and re-reading it, I can see that it sounds a bit like babble.

I wanted to say that the tools we use impact how we behave, and that impact alone can have moral consequences. The "medium is the message" link I posted earlier talks about how the tools we use influence how we behave and experience the world. For example, if I want to tell a story, I might film a movie, or write a book, or record a radio program, or tell the story around a campfire. Each of these tools will impact the tellers, the receivers, and the story itself. This logic also applies to other more boring forms of communications. For example, using complicated software to say who gets to leave jail makes a difference in how the "tellers" and the "receivers" experience the world. To make a comparison, what if the jailer recorded information on a clipboard and paper calendar?

Is rehabilitation not the main goal in the USA? We call the places correctional facilities, and do other things that are ostensibly there to correct behavior and prevent recidivism. In fact, it seems like most of the things that prevent people from being successful are unintended side effects.

Here's a thought: why do we permit private companies to not hire ex-cons? Why do you just get to decide that you won't hold up your civic responsibilities like that? Who wants to work with someone who used to be a violent maniac, a sleazy thief, or worse?

I agree about cost cutting measures and the criminal justice industrial complex. Still we have bigger issues around crime and reconciliation that prevent us from making progress. To be honest, I have trouble understanding how we're going to change, unless the average person can live with someone ruining their life, then spending "only" a year or so in prison and moving on to be successful in a decent paying job.

We still find that outrageous in the US, and it's going to be very tough to make progress that way. It's not about making something "a goal", especially in a country like the US, it's about convincing the wealthy and powerful class to do anything at all about it and stop making it worse.

We pretend the goal is rehabilitation, but it's obviously not.

I don't really think any justice system is actually putting rehabilitation first. Otherwise, you'd be sentenced to "until you get rehabilitated or no more than X years."

That's kinda what I'm saying. It's not all "pretend" either. It's a goal, just not one we're willing to devote any resources towards. It's worth pointing out the difference. We want ex-cons to get jobs, just not working with us. It's the "not my problem" culture that is the USA.

I don't think that example you gave is a flaw in the logic. We don't know when people are rehabilitated, and there's reasons for having minimum sentences, whether you agree with them or not.

The greater issue in my opinion is that we do a bunch of stuff to create this illusion of safety for the public (and businesses), and we're not willing to budge to give people a chance. We very reluctantly pass laws that clear some peoples records if they're young enough and haven't done anything so terrible... and that's if you're lucky.

Disenfranchisement should be an exception for people who are likely highly dysfunctional, not a general-purpose solution. A new despised underclass in which to place a substantial percentage of the population seems dystopian to me.

Reminds me of the opening scene from the 1985 movie "Brazil." The computer misprints the name on an arrest warrant as the result of a (literal) bug, and nightmarish tragicomedy ensues:


Highly underrated movie, with ever more contemporary relevance.

> On Rotten Tomatoes, the film has a 98% rating based on 48 reviews with an average rating of 8.7/10.

This seems like a problem for the prison, not the inmates. In general, the prison software being faulty means the prison should just hand-calculate this as needed. The inmates should still be able to be released as appropriate.

If it costs the prison 10x normal costs to do calculations by hand.. well, that's the cost of business.

And that's exactly what they're doing (according to the article)

> One of the software modules within ACIS, designed to calculate release dates for inmates, is presently unable to account for an amendment to state law that was passed in 2019.

If that description is accurate, that doesn't meet the definition of a "software bug", if the software was produced before that law was passed, and not updated since.

The bug is in the process of not having a plan for updating the software in a timely way when laws change, and not having a requirement in place for overriding the calculations in the interim.

What if an inmate suddenly receives a pardon?

Smaller point, as most have covered the insanity of this, but am I reading this right that they are paying $125K for adding a field to some piece of data? I know government contracts can be bloated and there can be complications that make it less than straightforward, but given that they itemized 3 separate fields and charged 185 developer hours each for them, that's either insane gouging or blatant corruption, right? That's nearly $400K for three fields being added.


Yeah, this sucks. It's (unfortunately) not surprising.

My wife had a citation that affected our liberties. The cop even knew that he didn't have probable cause but let the charge stand for more than a month. Nobody in the system cares. The magistrates and judges don't care, even though the new charge should be dismissed with prejudice over this and other rights violations. The supervisors and IA for the state police don't care and even cover some of the stuff up. The DA's office doesn't care either.


In the UK, people often sue for wrongful arrest etc., and I thought the US was much worse for litigation. Is it simply that the victims are mainly poor and not connected enough to sue, or is the legal system there really that immune from action over wrongful process?

According to a civil rights lawyer I talked to, the judges for the courts don't care unless you suffered extensive monetary damages. That the DA and the judges tend to side with the cops because they work with them on other cases often. On top of that, there are so many minor rights violations and an extensive case backlog. Issues like ours are common, so the system basically overlooks them so it can process the 1-2 year case backlog (pre pandemic times even). Mix that with the public perception that if you were arrested or cited, then you must be guilty and deserve it.

I'd hesitate to call this a software bug, this is a complete breakdown of planning.

FTA - "“Currently this calculation is not in ACIS at all,” the report states. “ACIS can calculate 1 earned credit for every 6 days served, but this is a new calculation.”"

tldr; a new law was passed that allowed for a different credit schedule for days served, and the system hasn't been updated to make that calculation.
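For concreteness, here's a toy sketch of why the report calls this "a new calculation" rather than a parameter tweak. The rates and eligibility gates below are illustrative placeholders, not the statute's actual terms:

```python
def credits_old(days_served):
    # The rule ACIS already implements, per the report:
    # 1 earned release credit for every 6 days served, for everyone.
    return days_served // 6

def credits_new(days_served, eligible_offense, completed_programming):
    # Post-2019 sketch: a faster accrual rate, but gated on several
    # new inputs (offense type, completed courses, no disqualifying
    # priors). Rate and gates here are illustrative, not the law's.
    if eligible_offense and completed_programming:
        return days_served // 3
    return days_served // 6

# The new rule is a different function of different inputs -- data the
# system may not even capture -- which is why it reads more like a
# feature request than a one-line bug fix.
print(credits_old(600))              # 100
print(credits_new(600, True, True))  # 200
```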

It's the problem with silver bullets like YAGNI: Laws change, if your system is dependent on laws, then you can be sure that new rules will need to be added. You need a system that is configurable, you can be sure you are going to need it.

Of course, if there's money to be made in having a change-resistant system, well that's a different story. YAGNIAYWPTTNFI (You ARE gonna need it, and you will pay through the nose for it) isn't quite as catchy though

YAGNI just means that you don’t know how the laws are going to change. All the configurability you add is just going to make the system more expensive and even harder to change the day when the laws are changed, and it wasn’t anything you thought of. And no one is ever using all your nice switches.

Yes and no. It's a bit like saying "Oh, I didn't know it was going to rain - it was lovely and sunny outside when I wrote the code".

A big (some might say forgotten) part of the procurement and development process is research - know your customer, know your market, know your niche.

In this case, that includes

- know how prisons work

- know how the system will be administered

- know how recent law changes might be handled by the system

- know how often prisoners need / deserve workarounds

- know how quickly law changes need to be reflected in code

Also, I'm not saying that the user interface needs to contain this. But we should always advocate for modular systems that are easy to maintain through addition, rather than through re-writes.

If it is genuinely difficult to add a new type of inmate release schedule, then that is poor planning.

But again, I'm not necessarily subscribing to that theory. Money is involved, so facts are in the eye of the sales team.

But I'm also not not subscribing to that theory.

No, of course you have to take into account known factors, like sometimes it rains. And if it is difficult to add a new type of release schedule, it is not poor planning, it is a poorly designed system. When we don't know how requirements are going to change, we must not guess, we should instead design the systems so they are easy to change.

> if your system is dependent on laws, then you can be sure that new rules will need to be added. You need a system that is configurable, you can be sure you are going to need it.

This doesn’t violate YAGNI.

1. You’d have to know in advance what the scope of rule changes would be in order to implement the configuration system.

Human laws do not fit this constraint.

2. You’d also need a way to prove that the configuration system itself was sound.

3. You’d need a way to test configurations to make sure they executed as expected.

That is likely to be no better than just updating the codebase as requirements change, and there are many ways it could increase the cost.

1. Yes. A subset of human legal history would give you that information. You could argue they did that, just chose the wrong subset.

From a planning perspective though, it isn't the rule changes that are the problem; it's the types of input. For example, if you write a system that says "Given X class of crime, and they were convicted in year Y, and Z much time already incarcerated, return f(X, Y, Z) days left". If you can build all your rules around that, that's great.

If you then say "Oh, but for crimes a, b and d, we need to take into account some measure of inmate behaviour" - you now need to incorporate a whole new path from that data point to your decider, and all these functions need to accept this information and either use it or discard it - and that might take some time. (I should insert a discussion about monads as a design pattern that would simplify this code and remove this excuse... but we'll assume ignorance of that for now)

So the question is whether this is some new datapoint that they should have already expected. Maybe it wasn't - but if that's the case, then they definitely looked at the wrong subset of legal history.

2. and 3. are moot points in my opinion. Your configuration system is the basis for all your existing rules. They already have multiple rules, so they have a baseline for saying that the configuration system is sound. If your configuration system fails to properly account for unused datapoints, then fine. But as I mention elsewhere, configurability doesn't necessarily mean end-user configurability. A well-written system should not be resistant to change.
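To make the "whole new path for a new data point" problem concrete, here is one hedged sketch (all names hypothetical): pass a single context object through the rule pipeline, so a later-added input like behaviour records becomes a new field rather than a new parameter threaded through every function.

```python
from dataclasses import dataclass, field

@dataclass
class SentenceContext:
    # One bag of inputs for all rules; new data points become new
    # fields instead of signature changes across the codebase.
    crime_class: str
    conviction_year: int
    days_served: int
    extras: dict = field(default_factory=dict)  # e.g. behaviour records

def base_rule(ctx):
    # Hypothetical baseline: 1 credit per 6 days served.
    return ctx.days_served // 6

def behaviour_rule(ctx):
    # A later-added rule reads a new input without changing any
    # existing signatures; it ignores contexts that lack the data.
    infractions = ctx.extras.get("infractions")
    if infractions is None:
        return 0
    return -10 * infractions  # illustrative penalty

RULES = [base_rule, behaviour_rule]

def total_credits(ctx):
    return sum(rule(ctx) for rule in RULES)

ctx = SentenceContext("a", 2018, 600, extras={"infractions": 2})
print(total_credits(ctx))  # 100 - 20 = 80
```

This is the modest kind of configurability being argued for here: not an end-user rules engine, just a design where adding an input doesn't ripple through every function.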

You'd need a configuration system that is so general it would become a programming language. And then nobody could configure it except the original programmers because now your rules are written in the weird badly-designed programming language you developed.

AKA The Inner-Platform Effect


I have worked on a few systems that would serve as prime examples.

Now we can put configurable smart contracts on the blockchain to keep track of your prisoners!

