
Emotion-detecting tech should be restricted by law - pseudolus
https://www.bbc.com/news/technology-50761116
======
reaperducer
When I first read that some retailers (Walgreens) were putting face-scanning
emotion profiling tech in their stores, I couldn't help but think about how
many false readings would come from people who have resting bitch face.†

Then I thought it would be awesome to go to Dallas or some other place where
this is common and flood the stores with these people, making the technology
useless.

†
[https://en.m.wikipedia.org/wiki/Resting_bitch_face](https://en.m.wikipedia.org/wiki/Resting_bitch_face)

~~~
paggle
When all the retailers share data they’ll identify you as Rufus Postlethwaite
and calibrate to your resting bitch face.

------
dandanqu82
As someone from an underrepresented group in tech, I would want to know exactly
who is classifying the emotions present in the dataset. I don’t want to be
mistaken for feeling angry any time I don’t look happy or sad. There’s
definitely unconscious bias in how certain people’s feelings are interpreted,
and it leads to inaccurate perceptions.

~~~
randyrand
Note: as long as you are well represented in the dataset, your representation
in tech won’t matter.

~~~
dandanqu82
I can be well represented in the dataset, but people can still unwittingly
classify my emotions incorrectly due to unconscious bias. What good will the
data do if, every time I appear confused, I am labeled as angry?
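The failure mode is easy to sketch. Here's a toy simulation (the group names and mislabel rates are made up for illustration): even with perfectly equal representation, if annotators mislabel one group's confused faces as angry more often, the labels the model trains on carry that bias.

```python
import random

random.seed(0)

# Hypothetical annotator bias: truly confused faces from group B get
# mislabeled "angry" 30% of the time, vs. 2% for group A.
MISLABEL_RATE = {"A": 0.02, "B": 0.30}

def annotate(group):
    """Label an annotator assigns to a face that is truly confused."""
    return "angry" if random.random() < MISLABEL_RATE[group] else "confused"

# Equal representation: 5000 samples per group.
dataset = [(g, annotate(g)) for g in ["A", "B"] * 5000]

for group in ("A", "B"):
    labels = [lbl for g, lbl in dataset if g == group]
    angry_share = labels.count("angry") / len(labels)
    print(f"group {group}: labeled 'angry' {angry_share:.0%} of the time")
```

Any model trained on these labels will reproduce the annotators' skew, no matter how balanced the raw data is.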

~~~
FooBarBizBazz
The biases of labelers are tricky, because that work gets outsourced to low-
wage countries.

~~~
dandanqu82
That makes things worse, but we would still have a problem if the work were
done primarily by US tech workers. I wouldn’t place the blame solely on
unskilled low-wage workers. I’ve worked with Ivy League graduates who think I
am angry when I ask a question and am merely confused. The US education system
is moderately segregated, and many of our most educated students are not taught
to be critical of stereotypes perpetuated in the media.

------
watt3rpig
So I guess with my anxiety disorder even computers will think “I have
something to hide” because I look nervous. Jesus Christ, this world is a
dystopia.

~~~
conistonwater
If it makes you feel any better, before computers it was the security guards
that thought you had something to hide instead.

~~~
pubutil
However, we tend to see a computer’s analysis as absolute while a security
guard’s analysis is more of an opinion.

You can also reason with a security guard. The trust we have in the computer’s
analysis prevents a perceived shoplifter from reasoning against the
accusation, for better or worse.

------
arielweisberg
Oh this is fun. So how clear is your hue?

There was an anime that explored where this might head:
[https://en.m.wikipedia.org/wiki/Psycho-Pass](https://en.m.wikipedia.org/wiki/Psycho-Pass)

Classic Dr. Who also had a fictional place where you had to be happy, but I
don’t recall the tech angle that arc used.

------
sambull
You don’t all wear balaclavas and sunglasses when interacting with technology?

~~~
dvtrn
My juggalo makeup works even better.

[0] [https://www.fastcompany.com/90373952/to-thwart-face-recognit...](https://www.fastcompany.com/90373952/to-thwart-face-recognition-maybe-just-wear-juggalo-makeup)

------
kingkawn
How we express our emotions will change and this will be useless

~~~
larnmar
Unlikely; emotional expressions seem to be innate and hard-wired into the
brain, hence the fundamental similarity between emotional expressions of
groups separated for tens of thousands of years.

~~~
kingkawn
All of whom are connected and changing together now.

Also: hard-wired by whom or what, other than volume of use?

------
gojomo
To protect the jobs & social status of those lucky enough to have a natural
aptitude at emotion/deception-detection?

~~~
Lammy
To ensure that the ruling body telling you what you can't use is the only
group that gets to use it.

------
m3kw9
Isn’t every ad tech emotion detection? A like is an emotion.

------
jdc
Emotion-detecting technology is software and distribution of source code is
speech, ergo banning it infringes on free speech.

However, it might make sense to ban its use in certain cases, such as to
manipulate public opinion.

~~~
chillacy
The article mentions police investigations as one use case.

Mass surveillance technology is software and speech too, but we also might
want to live in a world where it is restricted in application.

------
DonHopkins
Maybe certain professionally empathic people at MIT Media Lab should have used
their high-tech emotion-detection software to determine whether Joi Ito was
lying through his teeth to cover his ass with an appeal to authority when he
cited a bunch of "very respected & well-known" people who vouched for Epstein
being reformed. I would still love to know the names of those people, and if
they're "well known", are they still "very respected"? How about a bit of
empathy for his victims, huh?

[https://twitter.com/RosalindPicard/status/116899800880720281...](https://twitter.com/RosalindPicard/status/1168998008807202816)

------
mrob
And we should ban prosthetic limbs because amputees might use them to smuggle
drugs.

Emotion-detection tech is a prosthetic for autistic people, and if it's banned
I want consistent treatment for all disabilities.

~~~
dbsmith83
Give me a break. People don't use prosthetic limbs as a tool to decide if
someone is lying to the police or if they should get the job. Your analogy is
less than adequate.

~~~
dang
Can you please edit swipes like "Give me a break" out of your posts to HN?
Your comment would be fine with just the other two sentences.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
shadowprofile77
Seriously? What pedantic nonsense. It's a minor, hardly offensive phrase
designed to add some emotional color to comments. We're not robots here. Many
people write comments with expletives (and don't get downvoted) along with all
sorts of things, but you fixate here on "give me a break".

~~~
dang
I understand how this can seem pedantic, but when people lead with swipes like
that, it subtly cues the discussion towards disrespect and even aggression.
Maybe they don't land that way with you, but the spectrum of the audience here
is pretty broad, and it's best to just omit things that predictably activate
major subsets. Given that such remarks are cheap and mechanical, adding no
information, it improves the signal/noise ratio to take them out anyhow. To me
it seems clear how much better the GP comment reads without the first bit—how
much more visible the actual point becomes.

Emotional color is welcome in HN comments, but reflexive indignation is a
different thing. For the sake of the commons here, we all need to work on
containing the petty anger that flares up instantly and is all too easy to
vent onto the internet.

------
quotemstr
As a practical matter, it is not _possible_ to prohibit the use of an
algorithm, and any attempt to do so will cause more harm than anything that
algorithm might possibly do.

That we're even talking about restricting tech that tells us true things about
the world indicates that the internal contradictions in the standard worldview
have reached an unacceptable level. Only a bankrupt and illegitimate
philosophy needs to prop itself up with censorship and bans on true
statements.

You can't hold back progress, but you do have a choice between sticking your
head in the sand and facing the technological change head-on to make the most
of it.

> AI Now

Oh, this is AI Now. The article makes sense now. This organization isn't a
research group. It's a political advocacy organization dressed up as a
research group. All they do is put an "AI" spin on the standard tech-activist
political agenda. You don't have to actually read any of their papers, since
the conclusion is always the same: _they_ get to decide what technology _you_
can use.

~~~
carapace
> That we're even talking about restricting tech that tells us real things
> about the world indicates that society's internal contradictions on certain
> subjects have reached a breaking point and that we need to reevaluate
> certain previously-inviolable assumptions.

I agree with your general point, but TFA quotes a co-founder of a "leading
research centre" as saying,

> "At the same time as these technologies are being rolled out, large numbers
> of studies are showing that there is... no substantial evidence that people
> have this consistent relationship between the emotion that you are feeling
> and the way that your face looks."

So a large part of the problem is that we're rolling out cargo-cult crap
without adequate self-reflection. (That's a different problem than our tech
forcing us to confront real things we would rather leave latent.)

~~~
quotemstr
> large numbers of studies are showing that there is... no substantial
> evidence that people have this consistent relationship between the emotion
> that you are feeling and the way that your face looks.

Total, absolute bullshit --- which is exactly what I'd expect from AI Now.
Everyday experience shows that we can read people's feelings via their faces.
(Yes, deception is possible, but the possibility of putting on a fake face
validates the thesis!) Happy people look happy. Sad people look sad. Millennia
of literature and art confirm. "Large numbers of studies" confirm lots of
things that would be convenient if true, but aren't. The social sciences are
totally and catastrophically broken.

These expressions are older than people. Even dogs and humans can read each
other’s emotions. Anyone who's met a dog understands that you can tell that a
happy dog is happy. I'm amazed that the person quoted in the article was able
to make the above claim with a straight face.

> So a large part of the problem is that we're rolling out cargo-cult crap
> without adequate self-reflection.

So what? Astrology doesn't work either and we don't have to go around banning
that. If this technology didn't actually work, activist groups wouldn't be
trying to ban it.

~~~
carapace
> ...which is exactly what I'd expect from AI Now.

I don't know anything about them before today.

Did you see [https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snake...](https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf)
which they link to in their report? It seems pretty solid to me.

> Everyday experience shows that we can read people's feelings via their
> faces.

Yeah, humans can, sometimes, but that doesn't mean these systems can.

> Astrology doesn't work either and we don't have to go around banning that.

First off, astrology _does_ work, depending on the astrologer.

Second, yes, if people were selling astrology to HR departments to screen
candidates I think we should ban that.

~~~
quotemstr
> astrology does work

Is anyone supposed to take seriously anything else you say now that you've
claimed that astrology works? You might as well be claiming we can predict the
future by looking at bird entrails.

~~~
carapace
> Is anyone supposed to take seriously anything ... you say ...

That's really entirely up to each person, and none of my business really.

I will say that, although I make mistakes, I never lie, and rarely exaggerate.
I don't participate in HN to talk nonsense.

Now then, yes, some people can predict the future by divination of entrails.
Thankfully this particular mode has become quite rare.

All forms of divination work, BTW, and have the same underlying motive.

- - - -

Anyway, did you look at that PDF? If so, what did you think?

