
Of course technology perpetuates racism. It was designed that way - rbanffy
https://www.technologyreview.com/2020/06/03/1002589/technology-perpetuates-racism-by-design-simulmatics-charlton-mcilwain/
======
salawat
I normally wouldn't even comment on such a poorly reasoned article, but
given the climate, I don't think it does anyone good to hold back when trying
to explain things. If you're intimately a part of tech implementation, I
encourage you to speak up often, and politely, to defuse these sorts of
inflammatory pieces.

To be clear, practically no one sits down and asks "How can we discriminate
today?" outside of possibly a few rumored actors I've heard of. This is
critically important to understand if you actually want to get to the core of
how we unintentionally end up with seemingly laughably biased systems. It is
almost always through second- or third-order effects that this type of
thing slips in.

Case in point:

Facial recognition would go something like this:

"Hey, we can use this neural network thing to recognize faces. What can we do
with that?"

"Well, let's see if it can find surprising trends by feeding in our datasets,
plus some control images to revolutionize the way we market to our customers!
We have their Facebook or email, let's just grab a picture associated with
that and see what happens."

<...something marginally useful happens...>

<...rumors spread of something marginally useful happening when applying this
technique in this case...>

<...multiple companies try to eke out some competitive edge by experimenting
with their own variations...>

<...a data scientist gets interested in all this hubbub and does a meta-
analysis, finding some flaw, or finding that the entire technique basically
degenerated into some form of wealth concentration or population-based
heatmap, in a way that's really difficult for regular people to understand...>

<...media reports that technologists are evil racist masterminds due to the
coincidental segmentation of disadvantaged populations...>

All of it coming out of a "Huh, I wonder if that would work here?"
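The mechanism in that chain can be sketched in a few lines. This is a toy model, not any real face-recognition pipeline: the "detector" is reduced to a one-dimensional threshold, the group names and distributions are invented, and the only point is that nobody in the code "designs in" discrimination; the skewed training sample does it on its own.

```python
import random

random.seed(0)

# Hypothetical illustration: a "face detector" reduced to a 1-D feature
# threshold. Group A dominates the training data; group B's feature
# distribution is centered elsewhere.
def sample(group, n):
    center = 0.3 if group == "A" else 0.7  # assumed, illustrative values
    return [random.gauss(center, 0.1) for _ in range(n)]

# Skewed training set: 95% group A, 5% group B -- the "dataset we had lying
# around", not one anyone audited for representativeness.
train = sample("A", 950) + sample("B", 50)

# "Training": fit the pooled mean and spread, and call anything within two
# standard deviations of the mean "a face".
mean = sum(train) / len(train)
spread = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5

def detects(x):
    return abs(x - mean) <= 2 * spread

# Evaluate per group. The model was never told about groups at all, yet the
# underrepresented one gets systematically missed.
def hit_rate(group):
    test = sample(group, 1000)
    return sum(detects(x) for x in test) / len(test)

rate_a, rate_b = hit_rate("A"), hit_rate("B")
print(f"group A detection rate: {rate_a:.2f}")
print(f"group B detection rate: {rate_b:.2f}")
```

Run it and group A lands near perfect detection while group B sits far below it: a second-order effect of the sample, not a first-order design decision.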

Or you get another one, where you end up marketing to a particular
demographic, but without realizing that the thing you market is itself part
of the reason the demographic is what it is, and doesn't foster much
customer migration out of it. An example being businesses operating on the
basis of what I call "poor people fees."

Businesses set out to make tools that help people not be poor, but the income
stream is structured around collecting these types of fees from their target
market segment. They are looking at business performance metrics and doing
things that are good for business. That something good for business, done for
the best of reasons, could in the end be bad is not a realization a lot of
people put in the work to arrive at.
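The arithmetic behind "poor people fees" is worth making explicit. The fee amounts and balances below are made-up illustrative numbers, not any real bank's schedule: the point is that a flat fee looks uniform on a revenue dashboard while, as a share of each customer's balance, it falls almost entirely on the poorest accounts.

```python
# Hypothetical fee schedule, for illustration only.
FLAT_MONTHLY_FEE = 12.00
OVERDRAFT_FEE = 35.00

def annual_fee_burden(avg_balance, overdrafts_per_year):
    """Total yearly fees as a fraction of the customer's average balance."""
    fees = 12 * FLAT_MONTHLY_FEE + overdrafts_per_year * OVERDRAFT_FEE
    return fees / avg_balance

# The low-balance customer overdrafts more often by construction: a small
# buffer makes ordinary timing mistakes expensive, which generates more fees,
# which shrinks the buffer further.
poor = annual_fee_burden(avg_balance=300, overdrafts_per_year=6)
rich = annual_fee_burden(avg_balance=20000, overdrafts_per_year=0)

print(f"low-balance customer pays {poor:.0%} of their balance in fees")
print(f"high-balance customer pays {rich:.2%} of their balance in fees")
```

Under these assumed numbers the low-balance customer pays more than their entire average balance in fees each year, while the high-balance customer pays under one percent; every individual line item was "good for business."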

Technology very rarely _solves_ problems it isn't directly designed to
solve. What it is really good at, though, is amplifying the problems that
already exist within the axiomatic space in which it operates. In the words
of Einstein: "We cannot solve our problems with the same level of thinking
that created them."

Most tech of the last few decades has actually been designed to:

- Enable transactions that were previously untenable to facilitate given the reliance on physical means of exchange
- Surveillance/remote monitoring
- Accountability laundering
- Penny-pinching and fiscal optimization

Once you understand that finance has firmly decoupled from even the most
remote semblance of correlation to actual utility created in anything other
than wealth movement, you'll see why it seems like nothing ever gets solved.

The problem isn't racism. It's that your money is in your pocket, and we want
it in our coffers, with the largest ratio of value gained versus value lost
that the market will allow. That simple.

