
Algorithms Designed to Fight Poverty Can Actually Make It Worse - chablent
https://www.scientificamerican.com/article/algorithms-designed-to-fight-poverty-can-actually-make-it-worse/
======
darkerside
> The systems analysis community has a lot of lore about leverage points.
> Those of us who were trained by the great Jay Forrester at MIT have all
> absorbed one of his favorite stories. “People know intuitively where
> leverage points are,” he says. “Time after time I’ve done an analysis of a
> company, and I’ve figured out a leverage point — in inventory policy, maybe,
> or in the relationship between sales force and productive force, or in
> personnel policy. Then I’ve gone to the company and discovered that there’s
> already a lot of attention to that point. Everyone is trying very hard to
> push it IN THE WRONG DIRECTION!”

http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/

~~~
ericbrow
Thanks for the great article. Long read, but lots of great examples.

------
noir-york
The only "algorithm" proven to fight poverty was discovered a long time ago
and its really simple: 1\. good jobs paying living wages (so parents have
economic security) 2\. decent, free education (so the children can have a
future) 3\. Well funded social welfare (because shit happens and people get
sick)

Aka, "social mobility"

$1.16 billion is a lot of money to give to companies when it could pay for a
good number of other things.

~~~
thanatropism
The easiest way to optimize for social mobility is to randomly reallocate
everyone's wealth every X years.

~~~
tokyodude
I'm not sure if this is comparable, but I feel like lots of people already
live something like this. My visa has only been approved one year at a time.
Each year, about 3-4 months before I have to renew it, I start to worry that
it might not be renewed, and I end up postponing all major plans until it
is. Last year I wanted to move apartments before I had to renew my lease but
didn't, because I wasn't sure my visa would get renewed. Even if I had
tried, landlords might have looked at my expiration date and said "no".
Similarly, I planned to rent a new office/cube in a co-working space but
needed to wait until after my visa was renewed.

~~~
gcb0
You are replying to irony... the point was that the rich are never going to
agree to a "solution" that won't make them rich.

------
niftich
The author of this article is Virginia Eubanks [1], who has studied and
written on this topic extensively, including a new book whose short title is
'Automating Inequality'. Although I can't be sure, I think it's reasonable to
assume that the content in this article is along the lines expressed in this
CityLab interview [2], and other media coverage the author highlights [3].

Eubanks presents the impact that rigid, opaque systems and processes can have
on individuals, and does so by focusing on troubling policy implications, but
also by highlighting the suffering experienced by named individuals whose
unique personal circumstances are examined. This is not only an effective
writing device, but also demonstrates the empathy gap of software, and how an
impersonal process -- whether intentionally or not -- serves as a tool for
bureaucrats and low-level decision-makers to offload difficult negative
decisions to an amorphous entity like 'the process'. I feel that this is a
key, if not always overt, motivation of most bureaucracies, and has universal
applicability.

There's also the notion that when an ML model is fed raw data, it picks up
correlations present in the data regardless of whether their presence is due
to a lurking variable. Later, making use of such a model for decision-making
perpetuates the bias, and a haphazardly chained sequence of such processes
increases the likelihood that a cohort's observed outcomes will self-fulfill
in the future. Conversation on this matter in recent years has tended towards
the social justice implications more so than the lack of intellectual rigor
that allows this to be possible -- the former is a compelling point, but it's
unfortunate that there's not enough coverage of the harm of giving
complex tools to those who don't understand them.
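
To make the lurking-variable point concrete, here is a minimal sketch (the
data and the names 'neighborhood', 'zip_code', and 'need' are all made up
for illustration) of a model that never sees the sensitive attribute, yet
learns to penalize the cohort through a proxy recorded in the raw data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Lurking variable: never handed to the model directly.
    neighborhood = rng.integers(0, 2, size=n)  # 0 = affluent, 1 = poor
    zip_code = neighborhood                    # benign-looking proxy in the data
    need = rng.normal(size=n)                  # what decisions *should* depend on

    # Historical labels: denials driven partly by need, partly by neighborhood.
    denied = (need + 1.5 * neighborhood + rng.normal(size=n)) > 1.0

    # Train only on (need, zip_code); the bias transfers to the proxy anyway.
    model = LogisticRegression().fit(np.column_stack([need, zip_code]), denied)
    print(model.coef_)  # large positive weight on the zip_code column

Feed this model's denials back in as next year's training labels and the
cohort's observed outcomes self-fulfill.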

[1] https://virginia-eubanks.com/

[2] https://www.citylab.com/equity/2018/02/the-rise-of-digital-poorhouses/552161/

[3] https://virginia-eubanks.com/media/

~~~
Bartweiss
This is a great summary, and that Citylab piece is well worth checking out.

I'm sometimes skeptical of human-narrative analyses, because they make it easy
to launch equally vivid attacks on a plan that fails 10 people and a plan that
fails 10,000. But with rigor behind them, there's a lot to be said for them as
existence proofs - especially when we're talking about the margins of a plan
instead of its core. Knowing what the edge cases look like _matters_, and for
that it's hard to beat talking to individuals with problems that don't fit on
a form.

(As an aside, I think software often creates not only an empathy gap but a
mechanical one. An analog system can often be forced to accommodate oddities
like self-employment and unconventional address formats, but "the computer
won't let me enter that address" is a brick wall. I've never found an
investigation of this, but I suspect it's a major drawback of digitization.)

Your final point about ML is a vital one, and one I'm endlessly frustrated by.
We're increasingly aware of issues like racial bias in parole-prediction
systems, but there's alarmingly little focus on the fact that a system which
does that is flawed _in general_, and might be making the same assumptions
about e.g. low-income neighborhoods without anyone noticing. Unlike human
bias, a flawed algorithm is equally happy to abuse any strange correlation it
can find.
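
To illustrate with a toy sketch (hypothetical features: 'audited' standing
in for a protected attribute, 'correlate' for something like neighborhood),
removing the one attribute that gets audited does nothing, because the model
simply shifts its weight onto an unaudited correlate:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000
    audited = rng.integers(0, 2, size=n)                # attribute people check
    correlate = (audited + (rng.random(n) < 0.1)) % 2   # unaudited, ~90% aligned
    label = (rng.normal(size=n) + 2.0 * audited) > 1.0  # biased past outcomes

    # With the audited attribute present, the weight lands on it...
    both = LogisticRegression().fit(np.column_stack([audited, correlate]), label)
    # ...drop it, and the weight moves to the correlate instead.
    alone = LogisticRegression().fit(correlate.reshape(-1, 1), label)
    print(both.coef_, alone.coef_)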

------
pjc50
Paywalled, but the taster paragraph gives a strong hint:

> Near the end of 2006 Mitch Daniels, then governor of Indiana, announced a
> plan to give the state's “neediest people a better chance to escape welfare
> for the world of work and dignity.” He signed a $1.16-billion contract with
> a consortium of companies, including IBM, that would automate and privatize
> eligibility processes for Indiana's welfare programs.

- Means testing frequently makes things worse, regardless of "algorithm"

- If you want to relieve poverty, don't give huge amounts of money to
mediocre multinational consulting firms

~~~
tareqak
The same goes for foreign aid. Immediate help is definitely warranted in the
case of natural disasters and outbreaks of war, and some longer-term help
when displaced people are unable to return to their homes. However, a lot of
foreign aid ends up as taxpayer-funded, questionable projects spearheaded by
politicians and run by multinationals, consultants, and contractors, with
dismal results. The main cause I see is that the local population being
helped often has poor or skewed representation as a stakeholder in the
decision-making process, so the outcome ends up unhelpful or doomed.

~~~
OpenBSD-reich
Yeah, like the thirty-eight billion over ten years to Israel, no strings
attached!

~~~
TheOtherHobbes
A lot of that comes back to the US as arms deals. The rest comes back to the
US as "political lobbying."

~~~
RealityVoid
I never got this point. Are you not then giving them arms for free? Are you
not using the output of your country and redirecting it to another?

------
esotericn
Means testing is fundamentally broken as a concept.

Another (related) example would be access to mental health resources.

Waiting for someone to hit rock bottom before helping is obviously suboptimal.

------
sbhn
Next time a politician reminds you that you're under attack from those on
welfare, think instead about the highly profitable businesses that
administer it. One for you, and two for me.

------
jellicle
Err, so, this paywalled article is about how a Republican governor who ran on
a platform of cutting welfare in the state of Indiana brought in some
"algorithms", some "computers" of some sort, that ended up giving a lot of
money to private companies, slashing welfare, and making people poorer?

And the premise here is that this was unintended?

I think I found the problem: credulous writers.

------
tobyhinloopen
Why do people post and upvote webpages that are behind a paywall? I wish
there were a "no paywall or GDPR wall" rule for Hacker News, or at least a
[paywall] "flair"-like tag so I don't have to waste time loading the page.

~~~
leetcrew
> Why do people post and upvote webpages that are behind a paywall?

Because it's explicitly allowed by the rules. I like your flair idea,
though; it seems like a nice middle ground.

------
j_m_b
It's almost as if good intentions can have unintended consequences.

