
Know Thy Futurist - amorsly
https://bostonreview.net/science-nature/cathy-oneil-know-thy-futurist
======
MarkPNeyer
There's a pretty good response here:

[http://slatestarcodex.com/2017/10/09/in-favor-of-futurism-being-about-the-future/](http://slatestarcodex.com/2017/10/09/in-favor-of-futurism-being-about-the-future/)

~~~
GuiA
Heh. SSC ranges from very thoughtful to cringingly unaware knee-jerk
reactions, and this response definitely falls in the latter category for me.

Of course, one can argue all one wants with the author’s quadrants, but that’s
just goofing around with semantics. The real core of the article is here:

 _”from personality tests that filter out qualified job applicants to crime
risk algorithms that convince judges to issue longer sentences, automated
algorithms are already replacing our most important human decision making
processes. As I look around, I realize there is no need to imagine some
hypothetical future of human suffering. We are already here. Data scientists
are creating machines they do not fully understand, machines that separate
winners from losers for reasons that are already very familiar to us: class,
race, age, disability status, quality of education, and other demographic
measures. It is a threat to the very concept of social mobility. [...] For the
average person, it doesn’t really matter if the decision to keep them in wage
slavery is made by a super-intelligent AI or the not-so-intelligent Starbucks
Scheduling System. The algorithms that already charge people with low FICO
scores more for insurance, or send black people to prison for longer, or send
more police to already over-policed neighborhoods, with facial recognition
cameras at every corner—all of these look like old fashioned power to the
person who is being judged.”_

And Scott Alexander, in his anger that the author sharply criticizes
privileged, wealthy white males, fails to address this core issue at all.
Technology only amplifies existing human systems and biases.

~~~
wdrw
_> there is no need to imagine some hypothetical future of human suffering. We
are already here_

That's exactly the problem. The whole point of the SSC response was that the
Boston Review is focused on _current_ problems (societal, political, etc.)
instead of doing actual _futurism_, i.e. looking at how to solve these
problems going forward, or at least prevent them from getting worse. Futurists
in all 4 quadrants are already aware of all these current problems, and as
Scott says, they are "going to fight [their] hardest to end poverty, disease,
death, and suffering"

Even if it's true that the natural tendency of technology is to "only amplify
existing human systems and biases" (which I don't think it is, especially on
the timescale of the graph in the SSC response): is that any reason to _not_
think seriously about the future? To ignore
the real possibility of the singularity? To judge ideas about the future based
on superficial characteristics of the people proposing these ideas, rather
than on their own merits? I feel like the Boston Review article is not
actionable; it is not at all clear what it's actually proposing.

------
Cacti
There is little difference in most cases between a futurist and a doomsday
prophet. It is a thin veneer of science around what is essentially a religious
troubadour.

------
api
"Futurism is the American dream on overdrive: a disdain for the status quo and
a belief that we can solve it all without unions, public education, and social
safety nets."

So scratch futurism off the list of usable political terms. It goes into the
same dustbin as libertarian, socialist, communitarian, etc.

Seems to me that any term that actually _means something_ and that gains any
use in politics will quickly get co-opted and turned into a dog whistle by
some political tribe, rendering it useless for the purpose of actual
meaningful communication. Futurism seems to have become a dog whistle for a
wing of the alt-right or something like that.

Maybe we need some kind of annotation scheme, like libertarian[1.14] where
1.14 refers to some kind of shared objectively published dictionary that
defines terms precisely. That might make actual communication about political
ideas possible.
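
A minimal sketch of what such an annotation scheme might look like, assuming a shared dictionary keyed by (term, version) pairs. All names, version numbers, and definitions below are invented purely for illustration:

```python
import re

# Hypothetical shared dictionary: (term, version) -> precise definition.
# In the proposed scheme this would be a single published reference that
# all parties agree to cite, so "libertarian[1.14]" resolves unambiguously.
TERM_DICTIONARY = {
    ("libertarian", "1.14"): "favors minimal state interference in markets and personal life",
    ("libertarian", "2.0"): "left-libertarian: anti-authoritarian, pro-commons",
    ("futurist", "1.0"): "studies plausible long-term technological trajectories",
}

def resolve(annotated_term: str) -> str:
    """Parse a 'term[version]' annotation and return its fixed definition."""
    match = re.fullmatch(r"(\w+)\[([\d.]+)\]", annotated_term)
    if not match:
        raise ValueError(f"expected 'term[version]', got {annotated_term!r}")
    term, version = match.groups()
    try:
        return TERM_DICTIONARY[(term, version)]
    except KeyError:
        raise KeyError(f"no definition for {term!r} at version {version}")

print(resolve("libertarian[1.14]"))
```

The version suffix is doing the real work here: two people using `libertarian[1.14]` and `libertarian[2.0]` are visibly talking about different concepts rather than arguing past each other under one ambiguous label.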

~~~
Cacti
Objectively?

~~~
drdeca
Perhaps with minimal ambiguity.

I'm interested in possibly basing it on "semantic primes".

