
Digital dystopia: how algorithms punish the poor - pseudolus
https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor
======
benjaminjosephw
> Automating Poverty will run all week. If you have a story to tell about
> being on the receiving end of the new digital dystopia, email
> ed.pilkington@theguardian.com

This is dangerous journalism. The reporter has clearly taken a stance and is
not attempting to open up a nuanced and careful discussion of the factors at
play here but is instead inciting fear and mistrust of technological progress
in the public sphere.

I can understand the fear and concern around the changing digital landscapes
and the potential impact on different parts of society. What's needed is a
public discourse about these things that doesn't conflate the issues into one
single problem of a "digital dystopia". Are we really talking about a "flawed
algorithm", for example, or simply the encoding of a badly designed government
process?

Technology _is_ changing how poor people interact with the state and this is
an important topic to discuss openly and broadly. We'll never have the quality
of discussion that's needed while there are fear-mongering reporters like this
subverting that conversation to the level of Luddism.

~~~
Sileni
I go back and forth on the value of a "devil's advocate". I hope someone else
will chime in to help me out if I don't explain this eloquently.

This is a case where I believe the topic won't be properly discussed unless
someone is willing to take a hard stance on the negative side. Someone has to
point out all the potential failings of the system that so many people are
pouring their lives into building. Even if that person ends up taking a much
harder stance than they actually believe in, and becoming a little too
disconnected from reality.

The situations being described in the article are horrifying, and sound an
awful lot like what you might expect from software bugs in their early stages.
That wouldn't be a problem if there were an appropriate human force behind the
systems going online, but we've all seen stakeholders push systems into
production before they're ready, without adequate support.

You're probably on the right track to call it a "badly designed government
process", but I'd wager the human element is what softened the blow from that
bureaucracy. Automation shifts the burden of proof from the caseworker to the
support seeker. It changes the conversation from "You think you deserve
benefits? Let's look over your evidence" to "You've already been rejected, why
should we support you?". When you're talking about people's survival, that's a
significant difference.

~~~
whatshisface
> _Even if that person ends up taking a much harder stance than they actually
> believe in, and becoming a little too disconnected from reality._

Sure, that will get the issues discussed - discussed and then promptly
rejected. A bad arguer arguing for the right side can do a lot of damage. For
example, imagine how successful climate change skeptics would be if someone
got into the news by saying that New York was a month away from flooding.

------
iliketosleep
It's a bit like all the arbitrary account terminations we hear about with
Google. A machine makes a decision with virtually no recourse for the user.
When governments are applying this same concept to welfare, it does indeed
create a dystopian scenario where people are left to starve because some
algorithm got it wrong.

In essence, it's about cost cutting. Governments seem to think that machines
can replace humans in making decisions where there is nuance involved and the
stakes are extremely high. It is frightening and should not be accepted. The
furthest the automation should go is to flag irregularities for review.
Instead, the machines are given far too much autonomy, with the robodebt
collection being a scary example.

~~~
hcarvalhoalves
A human isn't necessarily less arbitrary; in fact, a human can be much worse.

You (and the Guardian, it seems) are falling for the fallacy of the danger of
automation - "it's acceptable for bad decisions to be made as long as they
were decided by a human".

Fear mongering is not how you prove or disprove automation; you should pick a
useful metric (e.g. are people starving?) and benchmark against that - the
same way you would measure whether a department composed of humans were doing
the same job.

~~~
Majromax
> You (and the Guardian, it seems) are falling for the fallacy of the danger
> of automation - "it's acceptable for bad decisions to be made as long as
> they were decided by a human".

Not necessarily. Even bureaucratic systems tend to recognize that humans make
imperfect decisions -- we're susceptible to everything from bribery to
exhaustion. These systems then tend to not treat first decisions as final, and
they allow an opportunity to appeal.

But what happens when the decision is made by an algorithm, especially one
that wasn't built to give an explicable reason?

> Fear mongering is not how you prove or disprove automation; you should pick
> a useful metric (e.g. are people starving?)

That's not a good metric.

Suppose the algorithm were in fact perfect and nobody starved -- except you.
For some reason, it couldn't recognize your ID (and only your ID). By the "are
people starving" metric, the algorithm would be doing a fantastic job compared
to an imperfect human system, but it would also be profoundly unjust.

~~~
hcarvalhoalves
> Even bureaucratic systems tend to recognize that humans make imperfect
> decisions -- we're susceptible to everything from bribery to exhaustion.
> These systems then tend to not treat first decisions as final, and they
> allow an opportunity to appeal.

So the problem is not automation, it's lack of recourse.

Lack of recourse is a real problem that already exists today. People do go
unattended because they don't know how to navigate bureaucratic structures and
can't afford a lawyer. It also most affects the very countries the Guardian
article describes as "automating poverty".

> For some reason, it couldn't recognize your ID (and only your ID). By the
> "are people starving" metric, the algorithm

This is not an algorithmic issue.

If you lost your ID, you wouldn't get your benefit even if you walked into a
social security branch either.

~~~
Faark
> So the problem is not automation, it's lack of recourse.

The problem _is_ automation, if it exacerbates the "lack of recourse". It
means that, without care, adding automation would screw up most such systems.
Hence articles like this, to raise awareness and make sure automation is done
well. I'm disappointed with what a hard time many on HN seem to have accepting
the need for caution. Especially with government, where you cannot just find
or build a better alternative on the open market.

> If you lost your ID, you wouldn't get your benefit even if you walked into a
> social security branch either.

Is that actually the case in the US? The programs I've come in contact with
(run by the church and heavily supported by the German government) certainly
did not require one. I was never at a soup kitchen, but I would be surprised
if they asked for IDs.

~~~
GreaterFool
I don't think you need to convince anyone here that automated systems fail. I
think many of us have been affected by such systems. Accounts blocked for no
reason, and so on. Glitches happen.

The problem is not the automated systems. The problem is _zero_ recourse.

If nothing were automated then we'd have to wait hours on the phone to reach
an actual human to do something trivial. Or make an appointment, travel
somewhere, and spend a few hours to get something simple done.

In an automated world most of the time things work out.

The situation becomes despicable where there's no one to complain to and no
way to seek redress.

In the tech world it has become pretty common to seek support via Twitter
shaming. Often there's no other route.

Still, it's not the automation that is at fault. It's the greedy humans. They
could save 80% of the cost while keeping everyone happy, but they'll choose to
save 90% of the cost at the price of making many miserable.

Can't change human nature?

------
DarkWiiPlayer
This is just fearmongering; misrepresenting and oversimplifying the truth to
the point where all meaning is lost.

\- There's no distinction made between the different types of systems
involved.

\- Human problems are blamed on the system (a lack of human support isn't the
program's fault)

\- Individual stories are used not to underline broader evidence, but are
meant to _be_ the evidence ("a man died in India, therefore all computers are
evil").

\- There are constant generalizations. It's not "some algorithms", it's "the
algorithms" that are being blamed. (Ctrl+F shows the word "some" appears just
once in the text, FFS)

I don't know if the author is satisfied with this article; if they are, they
should be fired. Misinformation like this is ammunition for those crying "fake
news" to further destabilize the political landscape, which, in turn, harms
the reputation of real journalism, the kind that our societies rely on to
maintain a functioning democracy.

~~~
zAy0LfpBZLC8mAC
> \- Human problems are blamed on the system (a lack of human support isn't
> the program's fault)

Oh, you can sue programs now? I wasn't aware of that!

I mean ... seriously? What are you even trying to argue there?!

------
wiglaf1979
Two books that dive deep into this subject. Unlike this article, they have a
bit more meat to their research. I would recommend giving them a read before
getting out your pitchforks over this article alone. It's bad and needs to be
fixed but a measured response instead of a purely reactionary response will
just make things worse.

Weapons of Math Destruction
[https://www.goodreads.com/book/show/28186015-weapons-of-math...](https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction)

Automating Inequality
[https://www.goodreads.com/en/book/show/34964830-automating-i...](https://www.goodreads.com/en/book/show/34964830-automating-inequality)

~~~
bigwavedave
> a measured response instead of a purely reactionary response will just make
> things worse.

I'm not sure I understand, would you clarify this for me? It's been my
experience that pure reaction is usually bad for discourse, so I'd like to
know more.

~~~
Pryde
Going by the context of the comment, I believe the parent meant something
along the lines of:

"we need a measured response instead of a purely reactionary response, which
will just make things worse"

At least, that was my first impression; I could definitely be interpreting the
comment incorrectly.

------
shadowgovt
There's an interesting assumption baked into this entire approach: that the
human-evaluated system is better. I'll be interested to see whether that's an
artifact of this summary article or is reflected in the underlying reporting
to come, because it's wise not to leave that assumption unchallenged. FTA:

> Instead of talking to a caseworker who personally assesses your needs, you
> now are channeled online where predictive analytics will assign you a future
> risk score and an algorithm decide your fate.

If the caseworker is racist, this isn't a worse scenario... unless such racism
has also been baked into the risk assessment algorithm, which is definitely a
possibility. But at least from my personal experience, I trust our ability to
identify and de-train racism from assessment networks more than I do our
ability to consistently identify and hire non-racist caseworkers.
Deprogramming racism out of humans is hard (and even if you succeed for one,
you can't copy their state-vector into the brains of their peers).

~~~
chongli
I recall reading a story in the book _Weapons of Math Destruction_ [1] about a
system used to assess people’s recidivism risk which judges were relying on
for sentencing hearings. The problem is that the system showed a clear racial
disparity in the risk scores yet at the same time it was more accurate.

That means if we try to tune the system to make it less racist, we’ll be
making it less accurate. In essence, the system isn’t really racist, it’s a
reflection of the racism in society which is leading to these outcomes.
Ultimately, the problem is that putting someone in jail increases the
likelihood that their relatives will commit crimes. It increases the
likelihood that both they and their friends and family will reoffend. It’s a
vicious cycle and it doesn’t appear to have any technical solution.

[1]
[https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction](https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction)
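The tension the parent describes can be sketched with a tiny made-up example
(all numbers and group labels below are fabricated for illustration, not taken
from the book): a single risk-score threshold applied to two hypothetical
groups whose score distributions differ. The accuracy-maximizing cutoff
produces very different false-positive rates per group, and the single cutoff
that equalizes those rates is less accurate.

```python
# Hedged toy sketch: (group, risk_score, actually_reoffended) -- fabricated.
DATA = [
    ("A", 0.1, 0), ("A", 0.2, 0), ("A", 0.3, 0),
    ("A", 0.7, 1), ("A", 0.8, 1), ("A", 0.9, 1),
    ("B", 0.60, 0), ("B", 0.75, 0),
    ("B", 0.65, 1), ("B", 0.80, 1), ("B", 0.90, 1),
]

def evaluate(threshold):
    """Overall accuracy and per-group false-positive rate at this cutoff."""
    correct = 0
    fp = {"A": 0, "B": 0}         # false positives per group
    negatives = {"A": 0, "B": 0}  # actual negatives per group
    for group, score, outcome in DATA:
        pred = 1 if score >= threshold else 0
        correct += (pred == outcome)
        if outcome == 0:
            negatives[group] += 1
            fp[group] += pred
    fpr = {g: fp[g] / negatives[g] for g in fp}
    return correct / len(DATA), fpr

acc, fpr = evaluate(0.65)            # accuracy-maximizing cutoff
fair_acc, fair_fpr = evaluate(0.76)  # cutoff that equalizes FPR at zero
print(acc, fpr)            # ~0.91 accurate, but FPR: A=0.0, B=0.5
print(fair_acc, fair_fpr)  # FPR equalized at 0.0, accuracy drops to ~0.82
```

The point of the sketch is only that, with disparate score distributions, no
single threshold can be both maximally accurate and equal in its error rates
across groups; which of those to sacrifice is a policy choice, not a technical
one.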

~~~
Nasrudith
It brings to mind a sarcastic way to get perfect prediction of patient
outcomes: step one, decapitate the patient.

Predictability and accuracy aren't the important part, except as a means to
the end - the outcome.

------
chooseaname
This article is blaming technology. That is very dangerous because it diverts
the blame from where it should be: on governments for making bad decisions.

~~~
chongli
The whole point of switching to technology for systems like this is to block
people from holding you accountable. It puts the burden of proof on the
accuser to show that an inscrutable software black box is biased.

It very much resembles the shift from human tech support operators to
software-based robo operators. In effect, it’s a wall to keep you away.

The apotheosis of this concept is Google, a company that goes to ridiculous
lengths to stop people from being able to contact a human being that works
there, unless you’ve paid them money for the privilege.

------
4ndr3vv
Many here seem to cite this article as being low on facts and high on
emotion.

It is worth noting that this is merely the introduction to a series of
articles that the Guardian seems to be running this week[1].

Its intention _is_ to be brief.

[1][https://www.theguardian.com/technology/series/automating-pov...](https://www.theguardian.com/technology/series/automating-poverty)

------
idl3Y
This article misses the point; it misrepresents and oversimplifies. It's not
denouncing the culprits, it's dancing to their tune. This is what they want:
"blame the algorithm, forget the real culprits".

------
vkaku
It tells you that a wrongly applied system can cause damage.

With respect to Aadhar in India, I know of the anti-pattern where
deduplication removed many undeserving people who were receiving 'welfare'
payouts from the state.

Is this to say that either of these approaches is right?

No. Get the skunks out of politics, and such things won't happen.

~~~
firasd
I'm not sure what welfare payouts you're referring to; the Indian state mostly
assists via subsidies rather than cash. Lately it has moved to direct benefit
transfer (i.e. a payment into a bank account) for the cooking gas subsidy, an
anti-corruption move meant to route around intermediaries who divert
subsidized cylinders and sell them at market rate.

The problem is that Aadhar (introduced initially for subsidies) has become a
biometric ID system linking more and more of a person's life to this
government ID. A few weeks ago a child died of rabies, and the first excuse
the hospital gave for delaying treatment was that they wanted her Aadhar:
[https://timesofindia.indiatimes.com/city/agra/rabies-infecte...](https://timesofindia.indiatimes.com/city/agra/rabies-infected-minor-who-died-waiting-for-treatment-denied-vaccine-15-times-at-chc/articleshow/70832635.cms)

I am disappointed whenever I see Bill Gates mindlessly applauding Aadhar. It
is easy to side with technocratic ideas from just reading headlines, but if a
biometric surveillance state is such a great idea he should have advocated for
it in the US first. (Another example of a technocratic authoritarian move he
initially made some positive murmurs about was demonetization, which, three
years later, has proven to have had a disastrous effect on India's economy.)

~~~
kristianc
> I am disappointed whenever I see Bill Gates mindlessly applauding Aadhar. It
> is easy to side with technocratic ideas from just reading headlines, but if
> a biometric surveillance state is such a great idea he should have advocated
> for it in the US first.

I agree with this. It’s not acceptable to use the developing world as a
staging box for ideas and policies that are, at best, in beta and likely to
have bugs.

~~~
whatshisface
India is a democracy that votes for its own laws, nobody is using it as a
"staging box."

------
golergka
The first six paragraphs offer no facts but are heavy with emotionally loaded
statements. Even if the whole article after that is completely true, it's
impossible not to perceive this writing style as manipulative propaganda.

~~~
zwkrt
Reporting can be emotional and true. Journalism and activism can’t be
separated.

~~~
dexen
The article belongs in an Opinion section, not in a (Technology) News section.

 _> Journalism and activism can’t be separated._

They should and they can. Either have a separate publication, or at the very
least have a separate, clearly delineated "Opinion" section. Otherwise you run
the risk of media trust becoming very low, and thus media losing its ability
to guard the republic (or the democracy).

Consider a similar assertion - "journalism and advertising can't be separated"
\- it's obviously false. Granted, some undue influence from advertisers is to
be expected; however, this is considered a negative that should be avoided.
And indeed the good practice is to have a "firewall" between the advertising
department and the news desk in any publication.

~~~
kristianc
> They should and they can. Either have a separate publication, or at the very
> least have a separate, clearly delineated "Opinion" section. Otherwise you
> run the risk of media trust becoming very low, and thus media losing its
> ability to guard the republic (or the democracy).

Practically it is impossible to separate the two. Even news reporting doesn’t
usually come free of editorializing. There are places where you can go to get
‘just the facts ma’am’ but newspapers have never and will never be those
places.

~~~
shadowgovt
I agree that the mere act of choosing to observe or report something is, in
some sense, activism, at least in that it indicates the author thinks the
topic has some merit. But it's a sliding scale.

(1) And an author can identify words designed to elicit emotional response and
work to minimize them in their writing if they want to focus on the facts vs.
their interpretation of the facts.

(2) Listen friend, only an idiot doesn't understand words and phrases have
emotional tone, and an author chooses whether they want to piss you off or
stay out of your emotions when they choose their words.

... these two sentences have the same conceptual content.

------
eecc
I, Daniel Blake

watch it

