
Algorithms Are Great and All, but They Can Also Ruin Lives - ghosh
http://www.wired.com/2014/11/algorithms-great-can-also-ruin-lives/
======
ianferrel
It has always been possible to get stuck in bureaucratic hell by some mistake
at some level, and often difficult to extricate oneself.

Before, each of those cases probably needed a human directly involved. Humans
make mistakes, but they tend to make them one at a time, and we have a
cultural understanding that humans can make mistakes, and systems to recover
from them. Now that we have algorithms involved, we can screw people over
wholesale, and we haven't yet developed the cultural knowledge that algorithms
can make mistakes as well (astonishing, considering how often computers do not
do what we expect them to do).

~~~
jerf
"and we haven't yet developed the cultural knowledge that algorithms can make
mistakes as well (astonishing, considering how often computers do not do what
we expect them to do)."

I feel like we've made some progress in the ~20 years I've been able to
witness this effect. Once upon a time, the output of a computer might as well
have been divine writ. Nowadays, the person on the street is a bit more
realistic. I think the problem with a bureaucracy now isn't that they consider
the computer holy writ, it's that they hear "the computer is wrong!" more
often than it actually is, so they grow their own rather cynical reaction to
that claim... and, alas, that's a rational response, too. Especially bad when
their system is telling them that you're a bad person in one way or another...
_of course_ you're not really six months behind in payment, you deadbeat, I
_totally believe_ the computer screwed up. The check's in the mail, too,
right?

------
exelius
This is simply bad data governance. Algorithms should not be able to make
destructive decisions like dropping people from insurance or canceling a
driver's license on their own. A better system would use the algorithm to
identify _potential_ cases, then have a human perform a more stringent
verification once the set has been narrowed down to a manageable number.

Yes, this is slightly more expensive than using an algorithm alone. But it
could actually end up being cheaper in the long run if lawsuits are involved.
Sloppy data governance causes all kinds of problems that are hard to quantify
numerically, and that's the root cause here - not bad algorithms. At the end
of the day, every algorithm is going to produce imperfect data: they're based
on simplified logical models of real world systems. Recognizing that
algorithms are imperfect is a key part of knowing when and how to use them.
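
A minimal sketch of what that flag-then-verify split might look like (all the
names and the 0.9 threshold here are hypothetical, just to illustrate the
division of labor):

    # Hypothetical sketch: the algorithm only *flags* candidates; a human
    # must sign off on each case before anything destructive happens.
    from dataclasses import dataclass

    @dataclass
    class Policy:
        policy_id: str
        risk_score: float  # produced upstream by whatever model you run

    RISK_THRESHOLD = 0.9  # assumed cutoff; choosing it is a governance call

    def flag_candidates(policies):
        """The algorithm's only job: narrow the set to a reviewable size."""
        return [p for p in policies if p.risk_score >= RISK_THRESHOLD]

    def cancel_policy(policy):
        print(f"cancelling {policy.policy_id}")  # stand-in for the real action

    def review_queue(candidates, human_approves):
        """Destructive step gated on explicit per-case human sign-off."""
        for p in candidates:
            if human_approves(p):
                cancel_policy(p)
            # else: no action is taken; log the case for auditing instead

    if __name__ == "__main__":
        policies = [Policy("A-1", 0.95), Policy("A-2", 0.40), Policy("A-3", 0.92)]
        # In production the callback would be a case-review queue or UI,
        # not an automatic answer.
        review_queue(flag_candidates(policies), human_approves=lambda p: False)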

~~~
cag_ii
Well said. Errors will happen. How they're monitored/dealt with is not an
algorithm problem, and its importance can't be overstated.

In this case "the algorithm" is the fly in the printer from the '85 movie
"Brazil". Except these are real stories, which makes the absurdity worse.

~~~
dragonwriter
> How they're monitored/dealt with is not an algorithm problem

Sure it is; what inputs go into producing the ultimate decision, and how those
inputs are processed, is an algorithm problem. In particular, whether and when
output is sent to one or more humans for their input, and how that input is
factored into the ultimate decision, are algorithm problems.

The fact that the decision algorithms at issue often do not involve human
assessment as part of the input set when they should is a defect in the
algorithms' fitness for their intended purposes.

~~~
exelius
I think you're conflating "algorithm" and "process", which is something
programmers often do (because for a programmer, they are the same thing).
Businesses tend to think of "algorithms" as black boxes where you feed a data
set in one side and get a modified data set out the other. That data is then
incorporated as part of a business process.

If you're designing a process where you take the output of a program and
delete all records flagged by the program, you can add a human validation step
in pretty easily. Companies often have process engineers whose job it is to do
exactly this; and when you're talking about data management, process and
governance are very important.

------
rectang
Other examples: If you ever take a landlord to court or make an insurance
claim, you'll be marked as a bad risk -- even if your grievances were 100%
legit.

~~~
pdabbadabba
The unfortunate thing about these examples is that I don't think that they
illustrate malfunctioning algorithms. As far as creditors or insurers are
concerned, your bringing a claim against a landlord or insurer _does_ indicate
that, from the perspective of their bottom lines, you are a higher risk
individual. I.e., they would prefer to lend to/rent to/insure a person who is
unlikely to bring claims against them, even when she has a right to. That's
one of the things that, from a cultural/social/moral perspective, is wrong
with this system of risk assessment: it penalizes people for doing things that
are entirely within their rights by decoupling risk from fault.

I'm not sure that there is a solution to this other than regulation, though.

~~~
arebop
If you are a company that is rarely at fault, then it would be great to be
able to separate risk from fault and take the customers that your competition
is overcharging.

~~~
pdabbadabba
Great point! One problem, though, is that it takes money to keep your own
level of fault low, to accurately measure your track record, and to
discriminate between fault and no-fault risk. All this cost may undermine your
ability to sell a cheaper product to low-fault individuals. Another
complication is that sometimes neither you nor your customer will be at fault
(either because nobody is at fault at all, or because a third party is at
fault).

Finally, in the credit arena, risk information is pooled between creditors,
some of whom may be more likely than others to have been at fault in a given
dispute. You'd have to figure out how to refine a customer's risk profile
based on the data from a credit reporting agency, or source all the data you
needed yourself.

If you could overcome these challenges, though (and maybe you could), I think
you'd have a great business.

------
gwern
Seems more like 'bureaucracies can ruin lives'. These stories could have just
as easily come from the 1800s.

~~~
notacoward
No, not really. It reminds me of a shirt I saw once:

"Caffeine: do stupid things faster and with more energy!"

Likewise, these new tools enable us to make entirely new categories of
mistakes based on false correlations etc., and to make them more often. Before, it
would have taken a lot of work to cancel thousands of people's health
insurance. Just the sheer amount of time and number of eyeballs involved would
have increased the chance that someone might have noticed a problem. Now it's
just >click< and there's no chance for introspection before the results of bad
data or bad analysis are made manifest in the real world. Some gears are
_meant_ to grind slowly.

~~~
jerf
As I like to say, "To err is human. To screw up a million times per second you
need a computer."

------
jgriego
The headline is a little misleading; what I mostly got is that letting
decisions like this be handled by automated systems creates more problems than
it solves, because those systems produce too many biased false positives.

But the problem there isn't necessarily that there are biases encoded in the
algorithms, but more likely that there are biases in the training data that
the algorithms are working with.

~~~
sp332
Why is it more likely that the data is biased than the algorithm? And isn't
picking biased data just as big a problem as having biased algorithms?

------
dTal
The canonical sci-fi short story about this is the darkly humorous "Computers
Don't Argue", written by Gordon Dickson all the way back in 1965.

Link:
[http://www.atariarchives.org/bcc2/showpage.php?page=133](http://www.atariarchives.org/bcc2/showpage.php?page=133)

~~~
ilikemustard
Thanks for linking that, it was a great and quick read. That's very dark but
shockingly relevant to this topic.

------
PeterWhittaker
Two comments so far saying how it's not the fault of the algorithms.

Huh.

Expert systems are given the role of, well, experts: Their authority is not to
be countered by the folks "just doing their jobs". Blaming the "bureaucracy"
is wrong-headed, quixotic, and idiotic.

The blame lies with the senior policy makers and politicians who wash their
hands of due process and avenues of appeal, based on their choice of "expert
systems" that cannot possibly be confounded.

The person who told the driver it was their responsibility to clear their own
name could be fired for saying otherwise. Nameless bureaucracy isn't to blame,
executives and politicians with real names are.

Let's direct the discussion where it needs to go, shall we?

~~~
notacoward
You're absolutely right. I just _knew_ some people would respond by saying we
shouldn't blame the tools, and in a narrow sense they're right. However, that
doesn't detract from the point that some of these shiny new toys are pretty
darn dangerous. We have a responsibility to understand and mitigate those
dangers, not merely deploy tools that we barely understand with consequences
we understand even less. Training, safeguards, and oversight are necessary.
Failing to check the results from the latest bit of Hadoopery is the data
equivalent of leaving sharp power tools on a children's playground. "It's not
the tools' fault" is incomplete and misleading.

~~~
michaelchisari
> I just knew some people would respond by saying we shouldn't blame the tools

Usually, I'm saying the same thing when it comes to misapplications of science
or technology. However, when the "tools" are making our decisions for us, I
think we can safely blame the tools themselves. At least in the sense that the
tools might need to be re-engineered.

------
ashark
I think part of the problem is that we've gotten stingy with forgiveness and
second chances. Consequently lots of people who might otherwise have good
intentions have, rationally, designed systems to avoid making anyone
responsible for any decision, because even a _good_ choice (given the
information at hand) that happens to turn out poorly can end careers, see
people demonized in the media, generate lawsuits, and ruin lives.

That's why people were reluctant to try to fix the problems the rules
(algorithms, in this case) caused, I think.

Sure, it could just be that a banal everyday sort of cowardice is on a multi-
decade upswing, but I suspect instead that mistakes hit people harder and stay
with them longer than they used to.

Or, maybe harsh punishment isn't more common but it's more visible and it's
perceived as more common. The result is the same, though the solution—whatever
that might be—could be different in that case.

More intense career specialization probably doesn't help. Can't cut it in
(your educational focus here)? Have fun spending another half a decade in
school and going $XX,000 into debt again, _and_ going back to the bottom of
the pay scale, or else delivering pizza for the rest of your working life.
Either way, hope you didn't expect to retire, ever, or pay for your kids'
college, or....

An article that I see as closely related to this, regarding changes in US
military leadership between WWII and the present:

[http://www.theatlantic.com/magazine/archive/2012/11/general-...](http://www.theatlantic.com/magazine/archive/2012/11/general-failure/309148/)

TL;DR (of the relevant points): Removing a US military officer from a post is
now rare and typically career-ending, whereas it used to be common and often
simply resulted in a different (not necessarily lesser) job somewhere else.
Perhaps as a consequence, no one wants to fire an officer from a job they're
not very good at, since doing so punishes them more severely than may be
warranted. Officers don't want to take appropriate risks because they fear
being fired more than they should have to, and acting timid is (these days)
far less risky than making a bold call that goes poorly.

Certain classes of people seem immune to being punished for _any_ mistakes,
however egregious, of course, but for a huge percentage of the workforce and
the bulk of the political class I think this holds.

I'd guess our poor social safety net in the US serves to aggravate the
problem. Losing a job here can be a hell of a lot worse than losing a job in
most other states with advanced economies.

I don't think that CYA is a new thing, of course, but I do think that in the
recent past this _degree_ of ass-covering wasn't rule #1 of public life for so
many people. It's become a part of the background—a law of nature. I see it as
a major, if not the dominant, factor enabling or encouraging bad police
behavior, bad domestic and foreign policy, bad school administration, bad
customer service, and bad employer-employee relationships.

Apologies for any disorder in this post; it's not something I've written about
before.

[edit] grammar

------
jpetersonmn
In all of the examples the algorithms were not the issue; it was how they were
implemented. Completely misleading headline.

~~~
pdabbadabba
Well sure. If the algorithms were all implemented perfectly, and performed as
we would want relative to every possible evaluative criterion, then nobody
would have a reason to object to the use of algorithms. I don't suggest
holding your breath until this becomes the case, though. One of the
interesting things highlighted by the article is the sheer diversity of
evaluative criteria. A thoroughly enculturated human can usually navigate
these complex systems of values in a way that remains extremely difficult to
implement algorithmically.

In the meantime, when the algorithms we have fail to perform as we'd like, the
damage they cause is multiplied by the very efficiency gain that makes them
appealing. That's the macro-scale problem. And on an individual basis, the
affected person has a difficult time reversing the damage to himself, because
the human bureaucracy will tend to defer to the algorithm.

Given that it is the mere fact that an algorithm is used, combined with the
effective impossibility of perfect algorithms in the context of complex social
functions (and humans' tendency to defer to their results, even when
incorrect), that causes the harm, I don't see what's wrong with saying that
algorithms can ruin lives.

If not this, then what, to your mind, would it take to make good on that
headline?

