
Deep Chernoff Faces - pxx
https://www.ihatethefuture.com/2020/06/deep-chernoff-faces.html
======
ibrarmalik
The idea behind Chernoff faces (or using faces for data visualization in
general) seems sound: humans are very good at distinguishing faces, so we can
quickly spot groups and outliers when the data is encoded as a face.
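The basic encoding is simple enough to sketch. Below is a minimal, hypothetical mapping in Python with matplotlib; the choice of three features (head width, eye size, mouth curvature) and their ranges are my own illustrative assumptions, not a standard Chernoff layout:

```python
# Minimal Chernoff-face sketch (a hypothetical mapping, not a standard):
# each data dimension in [0, 1] drives one facial feature.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

def data_to_face(values):
    """Map three data values in [0, 1] to facial feature parameters."""
    v = np.clip(values, 0.0, 1.0)
    return {
        "face_width":  0.6 + 0.4 * v[0],    # dimension 0 -> head width
        "eye_size":    0.05 + 0.10 * v[1],  # dimension 1 -> eye radius
        "mouth_curve": -0.5 + 1.0 * v[2],   # dimension 2 -> frown..smile
    }

def draw_face(ax, p):
    # Head outline, two eyes, and a parabolic mouth built from the params.
    ax.add_patch(plt.Circle((0, 0), 0.25 + 0.5 * p["face_width"], fill=False))
    for x in (-0.2, 0.2):                       # two eyes
        ax.add_patch(plt.Circle((x, 0.15), p["eye_size"], fill=False))
    xs = np.linspace(-0.25, 0.25, 50)           # mouth as a parabola
    ax.plot(xs, -0.2 + p["mouth_curve"] * (xs ** 2 - 0.0625))
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_aspect("equal")
    ax.axis("off")

# Two made-up records: the reader scans the row of faces for the outlier.
rows = [[0.1, 0.9, 0.2], [0.9, 0.1, 0.95]]
fig, axes = plt.subplots(1, len(rows))
for ax, row in zip(axes, rows):
    draw_face(ax, data_to_face(row))
```

Note that the mouth-curvature channel is exactly the kind of emotionally loaded encoding that makes this technique risky: whichever variable lands on the mouth reads as "happy" or "sad" whether you intend that or not.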

But we have to be careful with this. Changing a facial expression is not the
same as increasing the height of a bar: we are mapping variables onto
expressions, and the visualization may convey things you don't intend.

There is a very famous example of this: "Life in Los Angeles" (1977) by
Eugene Turner [1]. Maybe you can infer the data well, but in the end it just
ends up being a map of angry black people. The choice of features and how to
visualize them is clearly racist.

[1] https://mapdesign.icaci.org/2014/12/mapcarte-353365-life-in-los-angeles-by-eugene-turner-1977/

~~~
sukilot
Huh?

That map colored black people dark like their skin, and encoded misery as
unhappy faces.

The result accurately showed happy white people and unhappy black people. How
is it racist to acknowledge the racially biased distribution of suffering?

~~~
gnramires
See the legend. Variables are ranked from good to bad (urban stress,
unemployment), and the proportion of white population sits right there in
parallel, screaming "White" == "Good".

There are a number of lesser reasons too, including portraying regions of
high urban stress as "evil" or "angry" rather than merely unhappy[0]. The face
visualization just implies too much of a value judgement on the data -- too
correlated with the issue (yet misrepresentative of it) to be a good idea, imo.

[0]: Even happy/unhappy may not reflect this variable well at all (again,
because happiness is a complicated human emotion, not a simple function of
"health, crime and transportation factors").

Edit: I also think it's not necessary to call the map's creator (or maybe even
the map itself) racist; that implies some kind of intentional discrimination
(and is quite a strong charge, imo). But to me the map does have the grave
problems mentioned above.

~~~
sameers
Yeah, I am also having a really hard time reading this visualization as
perpetuating any stereotypes -- assuming, of course, that the data itself is
correct. It would indeed be perpetuating them if urban stress were not
disproportionately felt in the Black population -- but even proponents of
racial justice are quite clear that this is the case in many American cities.

The legend uses the words "Low/High" and not "Bad/Good." It's a quantitative
measure, not a moral/aesthetic judgment.

The conclusion of this graphic would be - There is more stress in South
Central LA, which has a higher proportion of Black Americans. Or possibly, the
least stress is towards the West, where there are relatively fewer Blacks.

I guess if you are concerned it will read as, "In South Central LA, there are
angry black people, don't go there!" - that leap of faith will be made
regardless of how you visualize the data. "Poor people are naturally
lazy/violent/immoral," is a stereotype that has existed for far longer than
any widely accepted attempt at data visualization.

~~~
gnramires
> The legend uses the words "Low/High" and not "Bad/Good." It's a quantitative
> measure, not a moral/aesthetic judgment.

This is definitely not true -- yes it uses Low/High, but different variables
have different qualities for Low/High -- and "Good" is always on top. "Low"
unemployment is on the top while "High" affluence is also on top (i.e. not
purely numeric). That should be obvious because the positive emotions are
being associated with positive ("good") variable ranges.

I mean, if a quantitative measurement were the main focus here surely we
shouldn't be using faces which are known to be loaded with emotions.

You did not seem to address my main concern, which is specifically that low
white percentages are portrayed as negative in the legend. (I'm trying to
avoid the Motte/Bailey here) -- do you disagree?

> Poor people are naturally lazy/violent/immoral

I think it's more excusable to equate "poor" == "bad" than "certain
race" == "bad", because it's generally accepted that being poor is
undesirable, while you can't change your ethnic background (and generally I
don't think you should).

~~~
sameers
I think I see now why someone else asked, "Would it be okay if the legend was
horizontal?" Because then you don't have the "on top/below" connotation. Is
that what you mean by "portrayed as negative", because they are visually
lower?

Re how faces are emotionally loaded, that is somewhat the point of using
Chernoff faces as a visualization method, though I understand your concern
that therefore it SHOULD not be used as such a method because it will convey
ideas not implied in the data because of how we interpret faces.

Fundamentally though, to return to this data set in particular and set aside
the merits or otherwise of Chernoff faces: there is always going to be a
tension in depicting the correlation between race and wealth in the US. One
way or the other, you have to say the same thing -- Blacks are poorer, and/or
Blacks have lower factors of general well-being (though, somewhat
unintuitively, not lower levels of hopefulness). And you can't avoid the fact
that if a visualization really brings home the correlation, someone is bound
to assume causation in the wrong direction and feel the data validates their
racist feelings.

But that doesn't make the attempt at using that visualization a racist one.

------
asavinov
There is an implementation of Chernoff faces where a fish is used instead of a
human face, so it is called Chernoff fish:

http://tmm-archive.github.io/chernoff-fish/

It is implemented in D3 and React and the source code is here:

https://github.com/tmm-archive/chernoff-fish

I implemented my own version long ago (for MS-DOS) and I am quite surprised
that there is still some interest in the topic.

------
w1
Super cool! Also, a character visualizes data this way in Watts' "Blindsight"
novel.

~~~
fab1an
Came here to say this! That creepy-cool, Chernoff-face-utilizing vampire
captain. So cool.

~~~
outworlder
Given that humans were prey for vampires, the variables selected by the
captain showed faces in different states of anguish, which the predator was
well-equipped to detect. So it is even creepier.

------
ninjabiker
Chernoff faces are an old idea that have become more of a joke (eg
[https://dl.acm.org/doi/pdf/10.1145/3170427.3188398](https://dl.acm.org/doi/pdf/10.1145/3170427.3188398)
). The problem is that they are weird enough to occasionally lure in well-
intentioned people who are new to datavis. Encoding abstract data as faces
results in unpredictable, unrelated, gross visualizations that are difficult
to read. The only good one I have seen is
https://projects.propublica.org/graphics/workers-compensation-benefits-by-limb
and it’s creepy.

------
totetsu
It kind of reminds me of the genre of study material in Japan that personifies
everything as a cartoon character. Here is the periodic table as little
figures
[https://resemom.jp/article/img/2017/03/16/37127/162618.html](https://resemom.jp/article/img/2017/03/16/37127/162618.html)

and this book renders it as manga girls:

[https://www.amazon.co.jp/%E5%85%83%E7%B4%A0%E5%91%A8%E6%9C%9...](https://www.amazon.co.jp/%E5%85%83%E7%B4%A0%E5%91%A8%E6%9C%9F-%E8%90%8C%E3%81%88%E3%81%A6%E8%A6%9A%E3%81%88%E3%82%8B%E5%8C%96%E5%AD%A6%E3%81%AE%E5%9F%BA%E6%9C%AC-%E3%82%B9%E3%82%BF%E3%82%B8%E3%82%AA%E3%83%8F%E3%83%BC%E3%83%89%E3%83%87%E3%83%A9%E3%83%83%E3%82%AF%E3%82%B9/dp/4569702783)

------
Daub
One of the weirdest research projects I ever got involved in was 'Real-Time
Feedback System for Monitoring and Facilitating Discussions'. Using a game
engine as the platform, it used dance-off moves to visualize social
interaction within a discussion.

One quote from the paper, taken from 'The Craft of Information Visualization'
(Bederson and Shneiderman): 'Humans can recognise the spatial configuration
of elements in a picture and notice relationships among elements quickly.'

------
mysterEFrank
Nice!

------
gotostatement
This is a very interesting idea, but I think using gender and skin tone to
represent data differences is potentially problematic, particularly if the
data has any sort of normative meaning. The other variables are interesting,
though.

~~~
ebg13
I think this post is meant to be a joke.

~~~
ianai
It’s not funny.

~~~
ebg13
It's not a joke about data or race or sex; it doesn't make any claims about
data or race or sex; and it doesn't signal anything about data or race or
sex, either intentionally or accidentally, because it's demonstrably satire
about how Chernoff faces are a terrible idea. See the first footnote on the
word "favorite" and the second footnote on the word "Clearly".

I think your contempt is a serious misfire.

------
ianai
I’ve got to pull the alarm on this one. This is a huge liability to objectify
and promulgate prejudice. It’s literally teaching people to associate good or
bad distinctions with specific faces or facial features. How is that not
objectifying? I think this should be treated highly suspiciously and probably
not done at all.

Edit-I see people may take this as a joke. Sorry, not funny. Especially not
right now.

~~~
identity0
You have to be joking. This is obviously one programmer's experiment on some
esoteric data visualization mode that nobody ever used. The author even says
in a footnote that:

> "One of my favorite¹ concepts for multi-dimensional data visualization is
> the Chernoff Face"

> 1: "Favorite" might be code for "useless," going with the theme of this
> blog

Don't act like this project will have actual real-world uses and
ramifications. No, it won't literally teach people to associate good or bad
qualities with facial features, because nobody will ever use it for practical
purposes. No, there's no liability to promote prejudice; this is just some
programmer's ML side-project.

> How is that not objectifying?

It literally is. The generated faces are mathematical objects. They're not
real people.

~~~
ianai
In the faces I saw there was a clear connection between race and facial
expression of happiness or sadness.

~~~
identity0
So? That just means that two variables are closely related. You have to be
very blind and ignorant to take the implication that white people are less
happy, or something. Is it racist to take a photo of a sad white person?

