[flagged] Know Thy Futurist (bostonreview.net)
22 points by amorsly 9 months ago | 9 comments




Heh. SSC ranges from very thoughtful to cringefully unaware knee-jerk reactions, and this response definitely falls in the latter category for me.

Of course, one can argue all one wants with the author’s quadrants, but that’s just goofing around with semantics. The real core of the article is here:

“from personality tests that filter out qualified job applicants to crime risk algorithms that convince judges to issue longer sentences, automated algorithms are already replacing our most important human decision making processes. As I look around, I realize there is no need to imagine some hypothetical future of human suffering. We are already here. Data scientists are creating machines they do not fully understand, machines that separate winners from losers for reasons that are already very familiar to us: class, race, age, disability status, quality of education, and other demographic measures. It is a threat to the very concept of social mobility. [...] For the average person, it doesn’t really matter if the decision to keep them in wage slavery is made by a super-intelligent AI or the not-so-intelligent Starbucks Scheduling System. The algorithms that already charge people with low FICO scores more for insurance, or send black people to prison for longer, or send more police to already over-policed neighborhoods, with facial recognition cameras at every corner—all of these look like old fashioned power to the person who is being judged.”

And Scott Alexander, in his anger that the author sharply criticizes privileged, wealthy white males, fails to address this core issue at all. Technology only amplifies existing human systems and biases.


> there is no need to imagine some hypothetical future of human suffering. We are already here

That's exactly the problem. The whole point of the SSC response was that the Boston Review is focused on current problems (societal, political, etc.) instead of doing actual futurism, i.e. looking at how to solve these problems going forward, or at least how to keep them from getting worse. Futurists in all 4 quadrants are already aware of all these current problems, and, as Scott says, they are "going to fight [their] hardest to end poverty, disease, death, and suffering".

Even if it's true that the natural tendency of technology is to "only amplify existing human systems and biases" (which I don't think is true, especially on the timescale of the graph in the SSC response), and even if it were 100% true: is that any reason not to think seriously about the future? To ignore the real possibility of the singularity? To judge ideas about the future by the superficial characteristics of the people proposing them, rather than on their own merits? I feel like the Boston Review article is not actionable; it's not at all clear what it's actually proposing.


"For the average person, it doesn’t really matter if the decision to keep them in wage slavery is made by a super-intelligent AI or the not-so-intelligent Starbucks Scheduling System."

And Scott Alexander ... fails to address this core issue at all

Did we read the same article? While it's true that Scott's writing tends toward burying the lede, far from failing to address this issue, his criticism of the "We are already here" line struck me as the primary target of his response. Right or wrong, Scott's assertion is that it's laughable to the point of malfeasance to suggest that a singularity, under any standard definition of the term, might arrive without a noticeable impact on the lives of those who are currently struggling. Quoting his Fifth entry in the top 5 (like Letterman's top 10 lists, he tends to save his best for last):

  Fifth, another quote from the article:
    In the end my taxonomy (as amusing as I find it) doesn’t
    really matter to the average person. For the average
    person there is no difference between the singularity as
    imagined by futurists in Q1 or Q2 and a world in which
    they are already consistently and secretly shunted to
    the “loser” side of each automated decision.

  I already posited that the author doesn’t understand
  “Singularity”, but this is something beyond that. This is
  horrifying. There will be no difference for the average
  person between a (positive or negative) post-singularity
  world and the world now? What?

  Listen up, average person. If there’s a negative 
  singularity you will notice. Because you will be very, 
  very dead. So will all the rest of us, rich and poor, old
  and young, black and white.

  And if there’s a positive singularity, you will also
  notice. I would promise you infinite wealth, but that 
  sort of thing kind of loses its meaning in a 
  post-scarcity society. I would promise you immortality,
  but who knows if we’ll even have individual
  consciousnesses at that point? I would promise you bread
  and roses, but they would be made of hyperintelligent
  super-wheat and fractal eleven-dimensional time blossoms.

  I don’t care if you think this vision is stupid. We’re
  not arguing about whether this vision is stupid. We’re
  arguing about whether, if this vision were 100% true, it
  would make a difference in the life of the average 
  person. The Boston Review is saying it wouldn’t. I’m 
  sitting here with my mouth gaping open so hard I’m
  worried about permanent jaw damage.

  A Singularity that doesn’t make a difference in the life 
  of the average person isn’t a Singularity worth the bits
  it’s programmed on.

I found Scott's response to be entirely on point about the horrendous flaws in the original article. While it's fair to argue that Scott's response is wrong and the Boston Review article is right (please do so), I don't think it's reasonable to say that his response ignores the issue.


There is little difference in most cases between a futurist and a doomsday prophet. It is a thin veneer of science over what is essentially a religious troubadour.


"Futurism is the American dream on overdrive: a disdain for the status quo and a belief that we can solve it all without unions, public education, and social safety nets."

So scratch futurism off the list of usable political terms. It goes into the same dustbin as libertarian, socialist, communitarian, etc.

Seems to me that any term that actually means something and that gains any use in politics will quickly get co-opted and turned into a dog whistle by some political tribe, rendering it useless for the purpose of actual meaningful communication. Futurism seems to have become a dog whistle for a wing of the alt-right or something like that.

Maybe we need some kind of annotation scheme, like libertarian[1.14], where 1.14 refers to some kind of shared, objectively published dictionary that defines terms precisely. That might make actual communication about political ideas possible.
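
A minimal sketch of how such a scheme might work, assuming a hypothetical shared registry keyed by term and version (every name and definition below is made up for illustration):

  # Hypothetical shared dictionary of precisely defined terms,
  # keyed by (term, version). Definitions here are placeholders.
  TERM_REGISTRY = {
      ("libertarian", "1.14"): "favors minimal state interference "
                               "in economic and personal matters",
  }

  def annotate(term: str, version: str) -> str:
      """Render a term with its annotation, e.g. 'libertarian[1.14]'."""
      if (term, version) not in TERM_REGISTRY:
          raise KeyError(f"{term}[{version}] is not in the shared dictionary")
      return f"{term}[{version}]"

  def define(annotated: str) -> str:
      """Look up the precise definition behind an annotated term."""
      term, _, rest = annotated.partition("[")
      return TERM_REGISTRY[(term, rest.rstrip("]"))]

  print(annotate("libertarian", "1.14"))  # libertarian[1.14]
  print(define("libertarian[1.14]"))      # favors minimal state interference ...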


The body of the article doesn't actually include the line you quoted -- I assume it was a poor paraphrasing by some editor. The author cited those as being characteristics of futurists in the "Q3" quadrant.


Objectively?


Perhaps with minimal ambiguity.

I'm interested in possibly basing it on "semantic primes".



