It has always been possible to get stuck in bureaucratic hell because of a mistake made at some level, and it has often been difficult to extricate oneself.
Before, each of those cases probably needed a human directly involved. Humans make mistakes, but they tend to make them one at a time, and we have a cultural understanding that humans can make mistakes, and systems to recover from them. Now that we have algorithms involved, we can screw people over wholesale, and we haven't yet developed the cultural knowledge that algorithms can make mistakes as well (astonishing, considering how often computers do not do what we expect them to do).
"and we haven't yet developed the cultural knowledge that algorithms can make mistakes as well (astonishing, considering how often computers do not do what we expect them to do)."
I feel like we've made some progress in the past ~20 years I've been able to witness that effect. Once upon a time, the output of a computer might as well have been divine writ. Nowadays, the person on the street is a bit more realistic. I think the problem with bureaucracies now isn't that they consider the computer holy writ; it's that they hear "the computer is wrong!" more often than it actually is, so they grow their own rather cynical reaction to that claim... and, alas, that's a rational response, too. Especially bad when their system is telling them that you're a bad person in one way or another... of course you're not really six months behind in payment, you deadbeat, I totally believe the computer screwed up. The check's in the mail, too, right?
Yes, the human error rate is probably higher than a computer's, but the correction rate is higher, too. A computer will never, in a million years, realize its mistake or be swayed by an appeal. It's harder to fix some classes of computer errors, too. The DMV system's facial recognition algorithm will have to add a list of documented false positives, which is an additional feature with a cost. Errors will only be fixed if they are severe and affect multiple people.
A human can just fix the mistake in the paper file as a one-off and move on. No generalized solution necessary.
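For what it's worth, a minimal sketch of that "list of documented false positives" feature, assuming a hypothetical face-match pipeline; the IDs and names here are invented for illustration, not taken from any real DMV system:

    # Hypothetical: suppress documented false positives before a match triggers action.
    # Pairs of license IDs that a human already confirmed are NOT the same person,
    # even though the matcher says otherwise.
    KNOWN_FALSE_POSITIVES = {
        frozenset({"LIC-1001", "LIC-2002"}),
    }

    def should_flag(license_a: str, license_b: str, similarity: float,
                    threshold: float = 0.92) -> bool:
        """Flag only if the match is strong AND not a documented false positive."""
        if frozenset({license_a, license_b}) in KNOWN_FALSE_POSITIVES:
            return False  # a human already cleared this pair; don't re-flag it
        return similarity >= threshold

    print(should_flag("LIC-1001", "LIC-2002", 0.97))  # False: cleared once, stays cleared
    print(should_flag("LIC-1001", "LIC-3003", 0.97))  # True: still needs review

Even this toy version shows why it's "an additional feature with a cost": someone has to build, populate, and maintain that exception list.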
This is simply bad data governance. Algorithms should not be able to make destructive decisions like dropping people from insurance or canceling a driver's license on their own. A better system would use the algorithm to identify potential cases, then have a human perform more stringent verification once the set has been narrowed down to a manageable number.
Yes, this is slightly more expensive than using an algorithm alone. But it could actually end up being cheaper in the long run if lawsuits are involved. Sloppy data governance causes all kinds of problems that are hard to quantify numerically, and that's the root cause here - not bad algorithms. At the end of the day, every algorithm is going to produce imperfect data: it's based on a simplified logical model of a real-world system. Recognizing that algorithms are imperfect is a key part of knowing when and how to use them.
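To make that "identify, then verify" shape concrete, here's a rough sketch in Python. Everything in it (the Case type, the threshold, cancel_coverage) is hypothetical; the point is only that the algorithm produces a review queue for a person, and the destructive step is gated on explicit human confirmation:

    # Illustrative only: the algorithm nominates cases; a human decides.
    from dataclasses import dataclass

    @dataclass
    class Case:
        person_id: str
        risk_score: float  # produced by whatever model or rule set is in use

    def nominate_for_review(cases: list[Case], threshold: float = 0.8) -> list[Case]:
        """Narrow the full population down to a manageable review queue."""
        return [c for c in cases if c.risk_score >= threshold]

    def cancel_coverage(person_id: str) -> None:
        print(f"coverage cancelled for {person_id}")  # the destructive action

    cases = [Case("A", 0.95), Case("B", 0.30), Case("C", 0.85)]
    for case in nominate_for_review(cases):
        # No cancellation happens without a person explicitly saying yes.
        answer = input(f"Cancel coverage for {case.person_id} (score {case.risk_score})? [y/N] ")
        if answer.strip().lower() == "y":
            cancel_coverage(case.person_id)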
> How they're monitored/dealt with is not an algorithm problem
Sure it is; what inputs go into producing the ultimate decision, and how those inputs are processed, is an algorithm problem. In particular, whether and when output is sent to one or more humans with a request for input, and how the input from those humans is factored into the ultimate decision, is an algorithm problem.
The fact that the decision algorithms at issue often do not involve human assessment as part of the input set when they should is a defect in the algorithms' fitness for their intended purposes.
I think you're conflating "algorithm" and "process", which is something programmers often do (because for a programmer, they are the same thing). Businesses tend to think of "algorithms" as black boxes where you feed a data set in one side and get a modified data set out the other. That data is then incorporated as part of a business process.
If you're designing a process where you take the output of a program and delete all records flagged by the program, you can add a human validation step pretty easily. Companies often have process engineers whose job is to do exactly this, and when you're talking about data management, process and governance are very important.
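As a hypothetical illustration of how small that process change can be: instead of deleting every record the program flags, flagged records go into a pending-review state, and only the ones a reviewer explicitly approves are removed. The field names and statuses are made up for the example:

    # Made-up process step: quarantine flagged records for human validation
    # instead of deleting them outright.
    records = [
        {"id": 1, "flagged": True,  "status": "active"},
        {"id": 2, "flagged": False, "status": "active"},
    ]

    def apply_flags(records):
        """Old process: delete flagged records. New process: hold them for review."""
        for rec in records:
            if rec["flagged"]:
                rec["status"] = "pending_review"  # was: delete(rec)
        return records

    def human_review(records, approved_ids):
        """Delete only the held records a reviewer explicitly approved for removal."""
        return [r for r in records
                if not (r["status"] == "pending_review" and r["id"] in approved_ids)]

    held = apply_flags(records)
    print(human_review(held, approved_ids={1}))  # record 1 goes away only after approval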
Other examples: If you ever take a landlord to court or make an insurance claim, you'll be marked as a bad risk -- even if your grievances were 100% legit.
The unfortunate thing about these examples is that I don't think that they illustrate malfunctioning algorithms. As far as creditors or insurers are concerned, your bringing a claim against a landlord or insurer does indicate that, from the perspective of their bottom lines, you are a higher risk individual. I.e., they would prefer to lend to/rent to/insure a person who is unlikely to bring claims against them, even when she has a right to. That's one of the things that, from a cultural/social/moral perspective, is wrong with this system of risk assessment: it penalizes people for doing things that are entirely within their rights by decoupling risk from fault.
I'm not sure that there is a solution to this other than regulation, though.
If you are a company that is rarely at fault, then it would be great to be able to separate risk from fault and take the customers that your competition is overcharging.
Great point! One problem, though, is that it takes money to keep your own level of fault low, to accurately measure your track record, and to discriminate between fault and no-fault risk. All this cost may undermine your ability to sell a cheaper product to low-fault individuals. Another complication is that sometimes neither you nor your customer will be at fault (either because nobody is at fault at all, or because a third party is at fault).
Finally, in the credit arena, risk information is pooled between creditors, some of whom may be more likely than others to have been at fault in a given dispute. You'd have to figure out how to refine a customer's risk profile based on the data from a credit reporting agency, or source all the data you needed yourself.
If you could overcome these challenges, though (and maybe you could), I think you'd have a great business.
No, not really. It reminds me of a shirt I saw once:
"Caffeine: do stupid things faster and with more energy!"
Likewise, these new tools enable us to make entirely new categories of mistakes based on false correlations etc., and to make them more often. Before, it would have taken a lot of work to cancel thousands of people's health insurance. Just the sheer amount of time and number of eyeballs involved would have increased the chance that someone might have noticed a problem. Now it's just >click< and there's no chance for introspection before the results of bad data or bad analysis are made manifest in the real world. Some gears are meant to grind slowly.
(1) Grease the wheels significantly (allowing these kinds of humiliations to occur much more quickly, and in greater bulk);
(2) Enable "action at a distance", both physically and psychologically (it's a lot easier for anyone at any level in the bureaucratic apparatus to shrug their soldiers and say "well, the computer flagged him, I just gotta do my job" than to make a personal determination about that person's guilt or criminal association;
(3) And they add a patina of (false) respectability, via the extreme sexiness of machine learning and everything data-sciencey these days.
> (1) Grease the wheels significantly (allowing these kinds of humiliations to occur much more quickly, and in greater bulk);
None of the stories show that. For example, the health one could have happened just as easily in a paper bureaucracy - that's the magic of 'opt in' vs 'opt out'. Just declare invalid all health insurance policies that are not explicitly renewed. No computer necessary.
> (2) Enable "action at a distance",
What, like paperwork doesn't enable that?
> (3) And they add a patina of (false) respectability
The flagged problems have more to do with a lack of staff and with the bureaucracies being self-serving in refusing to review complaints and whatnot; they don't have to do with regular people going 'well, the machine said it, so it must be true!'
The headline is a little misleading; what I mostly got is that letting decisions like this be handled by automated systems creates more problems than it solves because those automations create too many biased false positives.
But the problem there isn't necessarily that there are biases encoded in the algorithms, but more likely that there are biases in the training data that the algorithms are working with.
Two comments so far saying how it's not the fault of the algorithms.
Huh.
Expert systems are given the role of, well, experts: Their authority is not to be countered by the folks "just doing their jobs". Blaming the "bureaucracy" is wrong-headed, quixotic, and idiotic.
The blame lies with the senior policy makers and politicians who wash their hands of due process and avenues of appeal, based on their choice of "expert systems" that cannot possibly be confounded.
The person who told the driver it was their responsibility to clear their own name could be fired for saying otherwise. Nameless bureaucracy isn't to blame, executives and politicians with real names are.
Let's direct the discussion where it needs to go, shall we?
You're absolutely right. I just knew some people would respond by saying we shouldn't blame the tools, and in a narrow sense they're right. However, that doesn't detract from the point that some of these shiny new toys are pretty darn dangerous. We have a responsibility to understand and mitigate those dangers, not merely deploy tools that we barely understand with consequences we understand even less. Training, safeguards, and oversight are necessary. Failing to check the results from the latest bit of Hadoopery is the data equivalent of leaving sharp power tools on a children's playground. "It's not the tools' fault" is incomplete and misleading.
> I just knew some people would respond by saying we shouldn't blame the tools
Usually, I'm saying the same thing when it comes to misapplications of science or technology. However, when the "tools" are making our decisions for us, I think we can safely blame the tools themselves. At least in the sense that the tools might need to be re-engineered.
It depends what you mean by expert systems. I hear the phrase "decision support software" much more often, which is clearer about its place. There should be a human in the way where the results are major.
I think part of the problem is that we've gotten stingy with forgiveness and second chances. Consequently lots of people who might otherwise have good intentions have, rationally, designed systems to avoid making anyone responsible for any decision, because even a good choice (given the information at hand) that happens to turn out poorly can end careers, see people demonized in the media, generate lawsuits, and ruin lives.
That's why people were reluctant to try to fix the problems the rules (algorithms, in this case) caused, I think.
Sure, it could just be that a banal everyday sort of cowardice is on a multi-decade upswing, but I suspect instead that mistakes hit people harder and stay with them longer than they used to.
Or, maybe harsh punishment isn't more common but it's more visible and it's perceived as more common. The result is the same, though the solution—whatever that might be—could be different in that case.
More intense career specialization probably doesn't help. Can't cut it in (your educational focus here)? Have fun spending another half a decade in school and going $XX,000 into debt again, and going back to the bottom of the pay scale, or else delivering pizza for the rest of your working life. Either way, hope you didn't expect to retire, ever, or pay for your kids' college, or....
An article that I see as closely related to this, regarding changes in US military leadership between WWII and the present:
TL;DR (of the relevant points): Removing a US military officer from a post is now rare and typically career-ending, while before it was common and often simply resulted in doing a different (not necessarily lesser) job somewhere else. Perhaps as a consequence, no one wants to fire an officer from a job they're not very good at, since doing so punishes them more severely than may be warranted. Officers don't want to take appropriate risks because they fear being fired more than they should have to, and acting timid is (these days) far less risky than making a bold call that goes poorly.
Certain classes of people seem immune to being punished for any mistakes, however egregious, of course, but for a huge percentage of the workforce and the bulk of the political class I think this holds.
I'd guess our poor social safety net in the US serves to aggravate the problem. Losing a job here can be a hell of a lot worse than losing a job in most other states with advanced economies.
I don't think that CYA is a new thing, of course, but I do think that in the recent past this degree of ass-covering wasn't rule #1 of public life for so many people. It's become a part of the background—a law of nature. I see it as a major, if not the dominant, factor enabling or encouraging bad police behavior, bad domestic and foreign policy, bad school administration, bad customer service, and bad employer-employee relationships.
Apologies for any disorder in this post, it's not something I've written about before.
Well sure. If the algorithms were all implemented perfectly, and performed as we would want relative to every possible evaluative criterion, then nobody would have a reason to object to the use of algorithms. I don't suggest holding your breath until this becomes the case, though. One of the interesting things highlighted by the article is the sheer diversity of evaluative criteria. A thoroughly inculturated human can usually navigate these complex systems of values in a way that remains extremely difficult to implement algorithmically.
In the meantime, when the algorithms we have fail to perform as we'd like, the damage they cause is multiplied by the very efficiency gain that makes them appealing. That's the macro-scale problem. And on an individual basis, the affected person has a difficult time reversing the damage to himself, because the human bureaucracy will tend to defer to the algorithm.
Given that it is the mere fact that an algorithm is used, combined with the effective impossibility of perfect algorithms in the context of complex social functions (and humans' tendency to defer to their results, even when incorrect), that causes the harm, I don't see what's wrong with saying that algorithms can ruin lives.
If not this, then what, to your mind, would it take to make good on that headline?
An un-implemented algorithm is just an abstraction.
Every algorithm has edge cases. You can try to find and handle these with special cases, but they will continue to crop up, especially when you use the algorithm on real humans.
Humans come in an amazing variety of configurations. For some reason YouTube wants me to watch alcohol ads in Russian and Spanish, even though I don't drink or speak Russian (and my Spanish is poor).