Before, each of those cases probably needed a human directly involved. Humans make mistakes, but they tend to make them one at a time, and we have a cultural understanding that humans can make mistakes, and systems to recover from them. Now that we have algorithms involved, we can screw people over wholesale, and we haven't yet developed the cultural knowledge that algorithms can make mistakes as well (astonishing, considering how often computers do not do what we expect them to do).
I feel like we've made some progress in the past ~20 years I've been able to witness that effect. Once upon a time, the output of a computer might as well have been divine writ. Nowadays, the person on the street is a bit more realistic. I think the problem with a bureaucracy now isn't that they consider the computer holy writ, it's that they hear "the computer is wrong!" more often than it actually is, so they grow their own rather cynical reaction to that claim... and, alas, that's a rational response, too. Especially bad when their system is telling them that you're a bad person in one way or another... of course you're not really six months behind in payment, you deadbeat, I totally believe the computer screwed up. The check's in the mail, too, right?
A human can just fix the mistake in the paper file as a one-off and move on. No generalized solution necessary.
Yes, this is slightly more expensive than using an algorithm alone. But it could actually end up being cheaper in the long run if lawsuits are involved. Sloppy data governance causes all kinds of problems that are hard to quantify, and that's the root cause here - not bad algorithms. At the end of the day, every algorithm is going to produce imperfect data: they're based on simplified logical models of real-world systems. Recognizing that algorithms are imperfect is a key part of knowing when and how to use them.
In this case "the algorithm" is the fly in the printer from the '85 movie "Brazil". Except these are real stories, which is what makes it so absurd.
Sure it is; what inputs go into producing the ultimate decision and how those inputs are processed is an algorithm problem. Particularly, whether and when output is sent to one or more humans requesting input, and how input from those human(s) is factored into the ultimate decision is an algorithm problem.
The fact that the decision algorithms at issue often do not involve human assessment as part of the input set when they should is a defect in the algorithms' fitness for their intended purposes.
If you're designing a process where you take the output of a program and delete all records flagged by it, you can add a human validation step in pretty easily. Companies often have process engineers whose job it is to do exactly this, and when you're talking about data management, process and governance are very important.
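A minimal sketch of what such a validation step might look like (the function names and record shape here are hypothetical, just for illustration): the program only partitions records into "keep" and "needs review", and nothing is deleted until a human has confirmed each flagged record.

```python
# Sketch: gate automated deletions behind human review.
# Record shape and names are made up for this example.

def partition_flagged(records, is_flagged):
    """Split records into (auto-kept, needs-human-review)."""
    keep, review = [], []
    for r in records:
        (review if is_flagged(r) else keep).append(r)
    return keep, review

def apply_deletions(records, confirmed_for_deletion):
    """Delete only records a human reviewer has explicitly confirmed."""
    return [r for r in records if r not in confirmed_for_deletion]

records = [
    {"id": 1, "active": True},
    {"id": 2, "active": False},
    {"id": 3, "active": False},
]

# The algorithm flags inactive records, but does not delete them.
keep, review = partition_flagged(records, lambda r: not r["active"])

# A human inspects `review` and confirms only some of the deletions
# (here the reviewer spares record 3 as a false positive).
confirmed = [r for r in review if r["id"] == 2]

result = apply_deletions(records, confirmed)
print([r["id"] for r in result])  # → [1, 3]
```

The point of the design is that the flagging logic and the destructive action are separate steps, with a person in between who can veto individual flags.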
I'm not sure that there is a solution to this other than regulation, though.
Finally, in the credit arena, risk information is pooled between creditors, some of whom may be more likely than others to have been at fault in a given dispute. You'd have to figure out how to refine a customer's risk profile based on the data from a credit reporting agency, or source all the data you needed yourself.
If you could overcome these challenges, though (and maybe you could), I think you'd have a great business.
"Caffeine: do stupid things faster and with more energy!"
Likewise, these new tools enable us to make entire new categories of mistakes based on false correlations etc., and to make them more often. Before, it would have taken a lot of work to cancel thousands of people's health insurance. Just the sheer amount of time and number of eyeballs involved would have increased the chance that someone might have noticed a problem. Now it's just >click< and there's no chance for introspection before the results of bad data or bad analysis are made manifest in the real world. Some gears are meant to grind slowly.
(1) Grease the wheels significantly (allowing these kinds of humiliations to occur much more quickly, and in greater bulk);
(2) Enable "action at a distance", both physically and psychologically (it's a lot easier for anyone at any level in the bureaucratic apparatus to shrug their shoulders and say "well, the computer flagged him, I just gotta do my job" than to make a personal determination about that person's guilt or criminal association);
(3) And they add a patina of (false) respectability, via the extreme sexiness of machine learning and everything data-sciencey these days.
None of the stories show that. For example, the health one could have happened just as easily in a paper bureaucracy - that's the magic of 'opt in' vs 'opt out'. Just declare invalid all health insurance policies that are not explicitly renewed. No computer necessary.
> (2) Enable "action at a distance",
What, like paperwork doesn't enable that?
> (3) And they add a patina of (false) respectability
The flagged problems have more to do with understaffing and the bureaucracies being self-serving in refusing to review complaints and whatnot; they don't have to do with regular people going 'well, the machine said it, so it must be true!'
But the problem there isn't necessarily that there are biases encoded in the algorithms, but more likely that there are biases in the training data that the algorithms are working with.
Expert systems are given the role of, well, experts: Their authority is not to be countered by the folks "just doing their jobs". Blaming the "bureaucracy" is wrong-headed, quixotic, and idiotic.
The blame lies with the senior policy makers and politicians who wash their hands of due process and avenues of appeal, based on their choice of "expert systems" that cannot possibly be confounded.
The person who told the driver it was their responsibility to clear their own name could be fired for saying otherwise. Nameless bureaucracy isn't to blame, executives and politicians with real names are.
Let's direct the discussion where it needs to go, shall we?
Usually, I'm saying the same thing when it comes to misapplications of science or technology. However, when the "tools" are making our decisions for us, I think we can safely blame the tools themselves. At least in the sense that the tools might need to be re-engineered.
That's why people were reluctant to try to fix the problems the rules (algorithms, in this case) caused, I think.
Sure, it could just be that a banal everyday sort of cowardice is on a multi-decade upswing, but I suspect instead that mistakes hit people harder and stay with them longer than they used to.
Or, maybe harsh punishment isn't more common but it's more visible and it's perceived as more common. The result is the same, though the solution—whatever that might be—could be different in that case.
More intense career specialization probably doesn't help. Can't cut it in (your educational focus here)? Have fun spending another half a decade in school and going $XX,000 into debt again, and going back to the bottom of the pay scale, or else delivering pizza for the rest of your working life. Either way, hope you didn't expect to retire, ever, or pay for your kids' college, or....
An article that I see as closely related to this, regarding changes in US military leadership between WWII and the present:
TL;DR (of the relevant points): Removing a US military officer from a post is now rare and typically career-ending, while before it was common and often simply resulted in doing a different (not necessarily lesser) job somewhere else. Perhaps as a consequence, no one wants to fire an officer from a job they're not very good at, since doing so punishes them more severely than may be warranted. Officers don't want to take appropriate risks because they fear being fired more than they should, and acting timid is (these days) far less risky than making a bold call that goes poorly.
Certain classes of people seem immune to being punished for any mistakes, however egregious, of course, but for a huge percentage of the workforce and the bulk of the political class I think this holds.
I'd guess our poor social safety net in the US serves to aggravate the problem. Losing a job here can be a hell of a lot worse than losing a job in most other states with advanced economies.
I don't think that CYA is a new thing, of course, but I do think that in the recent past this degree of ass-covering wasn't rule #1 of public life for so many people. It's become a part of the background—a law of nature. I see it as a major, if not the dominant, factor enabling or encouraging bad police behavior, bad domestic and foreign policy, bad school administration, bad customer service, and bad employer-employee relationships.
Apologies for any disorder in this post, it's not something I've written about before.
In the meantime, when the algorithms we have fail to perform as we'd like, the damage they cause is multiplied by the very efficiency gain that makes them appealing. That's the macro-scale problem. And on an individual basis, the affected person has a difficult time reversing the damage to himself, because the human bureaucracy will tend to defer to the algorithm.
Given that it is the mere fact that an algorithm is used, combined with the effective impossibility of perfect algorithms in the context of complex social functions (and humans' tendency to defer to their results, even when incorrect), that causes the harm, I don't see what's wrong with saying that algorithms can ruin lives.
If not this, then what, to your mind, would it take to make good on that headline?
Every algorithm has edge cases. You can try to find and handle these with special cases, but they will continue to crop up, especially when you use them on real humans.
Humans come in an amazing variety of configurations. For some reason YouTube wants me to watch alcohol ads in Russian and Spanish, even though I don't drink or speak Russian (and my Spanish is poor).