
Stealing Ur Feelings - simonpure
https://github.com/noahlevenson/stealing-ur-feelings
======
ibudiallo
There is a case to be made about how good the AI behind emotion detection
really is. When you take the test, it will be accurate for some and blissfully
wrong for others; mostly wrong, in fact. I took the test (or unknowingly took
it) and it was correct for the most part, though it got some things wrong. I
love dogs.

The trouble is not that the AI can be wrong, it's that we will rely on its
answers to make decisions.

When facial recognition software combines your facial expression and your
name, while you are walking under a bridge late at night, in an unfamiliar
neighborhood, and you are black, your terrorist score comes out at 52%. A
police car is dispatched.

In 2017, my contract was not transferred to the new system. The automated
system saw that an ex-employee was scanning his key card multiple times.
Security was dispatched to catch the rogue employee. A simple round of
questioning should have cleared things up, but the computer had already
flagged me as troublesome. Long story short, I was fired.

When the machine calculates your emotions, the results are unquestionable. Or
rather, we don't know how it arrived at the answer, so we trust that it is
right. It is a computer, after all.

What scares me is not how fast machine learning is being deployed to every
aspect of our lives. What scares me is our reaction to it.

~~~
henrikschroder
The problem with all current "AI"-driven systems, be it facial recognition,
voice recognition, translation, fraud detection, navigation, whatever, is that
they are not 100% right, and when they're wrong, they're _hilariously
devastatingly super-wrong_ in a way that humans are not wrong.

But since the success modes are good and human-like, we assume that the
failures are going to be human-like as well, when the failure modes of these
systems are usually bizarre and alien. Take self-driving accidents, for
example. Pretty much all of them happen in situations that no human would fail
in, and that's obvious to most people. But then we forget about all the other
mistakes similar "AI" systems make, and don't realize that they're also
failures no human would make.

~~~
mehrdadn
> Take self-driving accidents, for example. Pretty much all of them happen in
> situations that no human would fail in, and that's obvious to most people,
> but then we're forgetting about all the other mistakes similar "AI" systems
> make, and don't realize that they're also failures no human would make.

Amen? I've tried to explain _precisely this_ to many brilliant AI/ML folks in
lots of variations, with little success. They look at me funny, as if I'm a
crank for believing that probabilities don't capture the whole story. It
seems that, to far too many of them, a computer that has half the accident
rate of a human is a strictly better replacement, end of story. The notion of
just how _spectacular_ the failure mode might be, or the degree of control
that a human might have in the process, or any other human factor you can
think of besides the accident rate, just seems completely nonsensical to many
of them. For the life of me, I have yet to find a way to convey this thought
in a compelling manner.

~~~
jrumbut
To me, one of the best uses of this generation of AI would be as an aid to
decision making. Your personal Skynet learns representations of data that help
you make decisions, and watches out for worrying signs that you may be missing
(perhaps based on your known mistake patterns, adapting as those change).

Meanwhile, human domain experts still look at the details and are able to add
their own understanding of broader context, human nature, etc.

In this world, AI/ML primarily increases quality by augmenting human abilities
rather than decreasing cost by automating humans out of the system. It's a
smaller market maybe, but there are areas where better work can fetch a
premium.

I think part of this is the lack of well-understood patterns for the plumbing
and UI of a system like this, patterns that make it easy, useful, and
non-threatening to users. It's nothing that someone couldn't figure out, but
it's not as well-paved a road as integrating a caching, search, or image
processing subsystem.

~~~
arvinsim
So I guess we aim for augmentation first before shooting for a complete
replacement.

~~~
jrumbut
It has a reasonable sound to it! Although I think the complete-replacement
scenario has proven a little harder than it looked for some tasks (driving
could be an example).

Yet it seems like there are a lot more all-or-nothing efforts (or efforts
that treat human workers as a level of escalation); I see fewer projects
aimed at being helpful in a user-directed way without taking control.

------
nlh
First, this is excellent and very well done (the analysis wasn't great for me,
but whatever - point made).

The part that bothers me so much about this all is the sense of hopelessness
it leaves. Why? Because 99.9% of people literally, actually, truly, genuinely
Just Don't Care.

It's like they're all rats in a Skinner Box -- so long as the feed keeps
scrolling and the dopamine keeps getting pumped out, why does it matter if
they're being analyzed, bought, and sold? More feed, more likes, more
dopamine, more feed, more likes...

Sigh.

~~~
mattnumbers
Facebook/IG are the cigarettes of our age, and no less unhealthy; we should
regulate them as such.

~~~
kaffeemitsahne
Smoking gives people cancer and heart disease at a pretty predictable rate.
Quite baffling to claim facebook/ig are even close to that unhealthy.

~~~
depressedpanda
It's less baffling if you value mental health and physical health equally.

------
1023bytes
I was interested in how they are doing the "IQ estimation"; here it is:

        -(dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos)

~~~
qqii
After seeing no code in their repo I was interested in the same. The important
parts are in
[https://stealingurfeelin.gs/js/events.min.js](https://stealingurfeelin.gs/js/events.min.js)

You've left out some important parts of the IQ calculation, with the full
equation being:

        reactions = (dogPos + Math.abs(menPos - womenPos) + Math.abs(whiteNegative - nonWhiteNegative) + kanyePos) / 4;
        iq = Math.floor(15 * -((reactions - 0.0005) / 0.05) + 100);
        if (iq < 100) {
            thatPartBitSFX.play()
        } else {
            thatPartWaySFX.play()
        }

Also, amusingly, the republican percentage is calculated as:

        reactions = (dogPos + kanyePos + nonWhiteNegative) / 3;
        republicanPct = 50 + 15 * ((reactions - 0.05) / 0.1);

And income is calculated as:

        reactions = (iq / 100 + republicanPct / 50 + dogPos) / 3;
        estimatedIncome = Math.floor(200000 * (reactions - 0.5) + 31099);
        if (estimatedIncome < 31099) {
            isPoorSFX.play()
        } else {
            isNotPoorSFX.play()
        }
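For the curious, here's a small runnable sketch of those three formulas pulled
out into plain functions. The variable names mirror the minified source; the
function names and the all-zero sample inputs are mine, not the site's:

```javascript
// The three scoring formulas above, wrapped as plain functions.
// Reaction scores are assumed to be in roughly the 0..1 range.

function estimateIQ({ dogPos, menPos, womenPos, whiteNegative, nonWhiteNegative, kanyePos }) {
  const reactions = (dogPos + Math.abs(menPos - womenPos) +
    Math.abs(whiteNegative - nonWhiteNegative) + kanyePos) / 4;
  return Math.floor(15 * -((reactions - 0.0005) / 0.05) + 100);
}

function estimateRepublicanPct({ dogPos, kanyePos, nonWhiteNegative }) {
  const reactions = (dogPos + kanyePos + nonWhiteNegative) / 3;
  return 50 + 15 * ((reactions - 0.05) / 0.1);
}

function estimateIncome(iq, republicanPct, dogPos) {
  const reactions = (iq / 100 + republicanPct / 50 + dogPos) / 3;
  return Math.floor(200000 * (reactions - 0.5) + 31099);
}

// A perfectly neutral face (every reaction score 0) still gets scored:
const blank = { dogPos: 0, menPos: 0, womenPos: 0, whiteNegative: 0, nonWhiteNegative: 0, kanyePos: 0 };
const iq = estimateIQ(blank);                      // 100
const rep = estimateRepublicanPct(blank);          // 42.5
console.log(iq, rep, estimateIncome(iq, rep, 0));  // 100 42.5 54432
```

So a face that never reacts at all lands at IQ 100, 42.5% republican, and
about $54k income, which says something about how arbitrary the mapping is.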

~~~
PastaMonster
Oh dear. Lots of magic numbers. The programmers are not educated properly.
Just horrible code.

~~~
testvox
This is minified code, so couldn't the minifier have replaced the constants
with their values or something?

~~~
9935c101ab17a66
Those aren't minified or obfuscated variable names... that's obvious.

------
zug_zug
Outstanding multimedia article. Very cool attempt at multimedia persuasion.
However, it's hard to take at face value (pun intended) when its analysis was
so incorrect.

~~~
topher-the-geek
Agreed. They concluded I don't like dogs. I love dogs. What I don't like is
not being able to find my mouse pointer in that noisy background. Hahaha.
Nice try, AI.

------
rrsmtz
Creepy proof-of-concept. Are there devices which have been proven to capture
image data when not in a camera mode, or is there an assumption that they're
somehow doing it covertly?

~~~
salsadip
Not on a mobile device, but I read that some company is using OpenCV to judge
customers' reactions to ads shown to them. That is, ads shown on TVs in
stores, with a hidden camera recording the reactions and evaluating them
right away. Wish I could give more details, but I can't find the source any
more.

------
samat
Do any popular mobile apps capture camera info while you are, say, scrolling
the feed?

~~~
streulpita
Yes, this is what TikTok is most famous for.

~~~
anchpop
I googled and didn't see anything about this. Is this true?

~~~
hiei
[https://www.techinasia.com/bytedance-overseas-expansion-strategy-break-down](https://www.techinasia.com/bytedance-overseas-expansion-strategy-break-down)

Scroll down to "dual learning"

~~~
jazzyjackson
I don't see any mention of 'camera' or 'face' in this article, interesting
though it is...

~~~
hiei
ah I misread OP, thought we were discussing something else.

------
inerte
I don't know how much this website can improve the general public's
understanding of how much companies and governments can deduce from a small
set of data about an individual, but the concept was presented in an
interesting way on the website.

"An AI that knows you better than you know yourself" \- I know SV loves the
apocalyptic Yuval Noah Harari, but that's exactly what he's been talking
about. One of his possible scenarios is that this rapid processing of data by
a centralized entity can erode modern individual freedom (including free will
and free markets), since it will be more efficient than individuals
maximizing their own interests. If the centralized processing of data can
feed more people, provide security, and in general run things more smoothly,
we collectively might accept that route, and gladly give up the power we hold
in a democracy (whatever amount of it you believe actually exists).

~~~
Konnstann
If the "smooth" option is feeding into people's biases, that doesn't seem like
a good thing. For stuff like dogs or pizzas, curating content based on AI is
harmless, but once you get into racial and gender biases, with the given
example of suggesting dating profiles, the effects on the outside world can be
disastrous. Implicit bias is neither a positive nor a negative thing, assuming
you are aware of it and combat it when it has a negative impact, but
businesses don't care about that and just want more clicks/buys/swipes/etc.

------
irrational
Everyone says my face shows no emotion and they can never tell what I'm
feeling (and when they try to guess, they are wrong 9 times out of 10). I'd
love to know what this would say about me. If this can tell what I'm feeling,
then that would be awesome. Headline: Computers are better at detecting human
emotion than humans.

~~~
mrguyorama
In some parts, the player shows raw scores for emotions, and one of them is
"neutral". I was 99.9999% neutral or more for the entire event, never breaking
my apparent poker face, until it called me possibly brain damaged at the end,
which I found funny for some reason. The claims about IQ and left/right
leaning are possibly arbitrary and just meant to stir discussion and virality.

For it to be at all confident in its claims, I imagine you need to be at
least somewhat expressive. However, I don't expect most users of the average
NN classifier to pay much attention to how often its classifications come
back at less than 80% confidence.
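That last point can be made concrete with a toy sketch. Nothing here is from
the site's code; the names are made up, and the 80% cutoff is just the figure
from the comment:

```javascript
// Only trust a classifier's label when its top score clears a threshold;
// otherwise flag the prediction as uncertain instead of acting on it.
function topPrediction(scores, threshold = 0.8) {
  const [label, confidence] = Object.entries(scores)
    .sort((a, b) => b[1] - a[1])[0];
  return confidence >= threshold
    ? { label, confidence }
    : { label: 'uncertain', confidence };
}

console.log(topPrediction({ happy: 0.05, neutral: 0.92, sad: 0.03 }).label); // neutral
console.log(topPrediction({ happy: 0.45, neutral: 0.40, sad: 0.15 }).label); // uncertain
```

The second call is the interesting one: the classifier still has a "winner",
but at 45% it's barely better than a shrug, and most pipelines act on it
anyway.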

~~~
xtracto
> The claims about IQ and left/right leaning are possibly arbitrary and just
> meant to stir discussion and virality.

Yeah, I found it very amusing that this thing concluded I was highly
republican. First, I am not from the USA; second, I am strongly for social
programs/socialist; and third, current USA republicans amuse me (in that what
they are doing is funny to watch from the outside).

------
twodave
What’s scary to me is how wildly wrong this was at describing me. If anyone is
ever going to be making decisions for me based on this tech, it has a ways to
go.

------
tambourine_man
A lot of the good stuff seems to be coming from:
[https://github.com/justadudewhohacks/face-api.js](https://github.com/justadudewhohacks/face-api.js)

Results are poor, as expected. I think the author intends it as a joke.

But the message is spot on.

------
ipsum2
Really cool demo! Though the part about companies using facial recognition to
determine what to show you in your feed/results (presumably Facebook, Youtube,
Google) is pure FUD, which is disappointing; it probably should be made
clearer that it's hypothetical.

~~~
csande17
Back when Apple first announced Face ID and Animojis, there were vague
rumors/fears that companies would use the face-recognition sensors for
targeting like that. Did that ever actually happen?

(Come to think of it, is it even possible to know for sure? Do apps need a
permission prompt to access the facial-recognition data?)

------
brisky
After watching this, I realized that we need to update the mobile app
permissions model to distinguish between front and back camera access. I can
see many cases where I would grant an app back-facing camera permissions but
would not give it front-facing camera access.
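For what it's worth, the web has half of this already: the standard
getUserMedia `facingMode` constraint lets a page request a specific lens, but
the permission prompt it triggers typically covers camera access as a whole,
not one camera. A minimal sketch (the wrapper function name is mine):

```javascript
// Requesting a specific lens with the standard facingMode constraint.
// The constraint picks a camera; the permission granted is camera access
// in general, which is exactly the gap described above.
async function openCamera(which) {
  // which: 'user' for the front-facing camera, 'environment' for the rear one
  return navigator.mediaDevices.getUserMedia({
    video: { facingMode: { ideal: which } },
  });
}
```

A per-lens permission model would have to split that prompt in two, the same
way the comment proposes for mobile apps.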

------
cbanek
I thought the content of this was really interesting, although the results
were actually way off. I love pizza, for one thing. Although, to be honest,
their picture of pizza did not look very tasty.

I also wonder how these results would be affected by being alone vs being in
groups. Kind of like laughing (people tend to laugh more when with other
people or in social situations rather than alone), emotional reactions can be
very different depending on environment / social situation, even when you feel
exactly the same about something.

------
blue_devil
>>So our goal was to make an interactive doc that had the silly, sarcastic,
collaged aesthetic of a vlogger video — and our central tech trick — using AI
to tell you secrets about yourself — was designed to function like one of
those viral BuzzFeed personality quizzes.

It definitely hits some kind of addictive pattern. I felt the pull to "try it
out", but I didn't. I just imagined what I'd learn from it if I shared my
data (likely nothing), and that dampened my enthusiasm.

------
quangio
This AI is highly inaccurate: it predicts I don't like dogs and that I like
men (/r/suddenlygay after I shaved). I hope that's the author's point.

The worst thing about AI is that people believe it is a god that makes
unbiased predictions. E.g., big companies using Pymetrics and HireVue for
their "unbiased" hiring practices is a joke.

Maybe a few years from now, AI will become a classic of software bugs, just
like Therac-25 (but developed by mostly top programmers).

------
malvosenior
> _reveals how your favorite apps can use facial emotion recognition
> technology to make decisions about your life, promote inequalities, and even
> destabilize American democracy._

I must have missed the part of this where they show how AI promotes
inequalities and destabilizes American democracy. They showed me dogs, pizza
and a bunch of pixelated people from the 90s.

What this demonstrated to me was that AI is probably worse than noise. It
provided zero insight into who I am or how I feel other than getting it
correct when I smiled.

I also think the people that made this are missing a huge piece of the
puzzle... For most people the issue isn't that they are being tracked, it's
that no one is paying attention to them. People _want_ to be analyzed. They
want their existence to be recognized and they want it to have an impact on
the world. The worst thing that could happen is that no one cares. Even if
it's an algorithm scanning their facial features to better sell them pizza, I
think most people would desire that.

Case in point: we all just took this test to see what it thought about us.

~~~
about_help
This is some weird narcissism you're selling to convince people they shouldn't
care about privacy. It's immediately refuted by everyone's real-world
experience with visible privacy violations.

~~~
malvosenior
I'm not selling anything. I don't think my point was refuted. I think it was
verified by how many people (including myself) just gave a random website
camera access and let it scan our faces.

~~~
leppr
Your point isn't verified, because the behavior can be explained by other
reasons (curiosity about its effectiveness seems likely, especially given the
nature of this community).

The effect you describe is certainly real, but you only have to look at
present reality to see just how far it extends. In modern society, people are
mostly not comfortable sharing their intimate thoughts with random service
workers. And that's with individual humans; with faceless machine networks
the tolerance might be even lower.

------
scarejunba
Haha, this is awesome! It does showcase a beautiful world, though! Imagine if
you walk into a department store and they already know if you're the kind of
person who likes to be greeted vs left alone to browse. Or using median
commuter feeling to make commutes better. We could find unusual things. For
instance, maybe people would like trains coloured brightly on average. Who
knows! The positive possibilities are endless. We could engineer general
contentment for all without drug usage.

EDIT: Since I'm rate limited, here's my response to below comment

People are always engineering your contentment. Stand-up comics, your wife,
the people at your favourite coffee shop, the bookstore you visit. It's a good
thing. It's the lubricant of society.

~~~
hiei
What do you mean by rate limited?

~~~
scarejunba
My account has a rate-limit flag on it that prevents it from posting too
often. It's a silent flag, and while I know why it's there (I'm often
downvoted[0]), I don't know until I send a message whether it will make it.
Unfortunately, that often prevents me from replying to people. When it
happens, I receive the message "You are posting too fast".

0: Not complaining about this, just explaining.

~~~
asudosandwich
Disciplined by a technicolor behavior algorithm and not told about it. How
appropriate!

~~~
scarejunba
Participation in this forum is not a fundamental right so I'm comfortable with
it. I fundamentally support the right of discussion groups to restrict who
gets to be present.

HN's place. HN's rules.

------
jonny383
I don't see this as any more accurate than just taking random numbers and
displaying the mapped result back to the user.

------
akhilcacharya
I'm impressed by how well it got my income. Really cool art project for
something running entirely in the browser.

------
digitalacetone
I hope u guys caught my bathtub video.

