Time to regulate AI that interprets human emotions? (nature.com)
154 points by rbanffy 10 days ago | 253 comments

I feel like AI is pointed AT us rather than working FOR / WITH us more often than not.

I want to make interesting / meaningful choices / have input. But none of the stuff (advertising, etc.) really asks me to do so. I get amazon ads for things I bought... but not a single thing on any of my wish lists (how much more explicit can I get?)

But rather there's all this AI effort to get ME to do a thing... for someone else, and seemingly so they can be lazy. Want to engage with your employees? Talk to them, engage with them, earn their trust. I suspect it will pay off way better than some wonky software.

All these efforts happen without my input, without anyone bothering to ask; they just let the computer tell them, and somehow that produces decisions that should involve me, without me.

How about some AI with some engagement, where i can have input, tell them "yeah man that was a good recommendation", "no way that was way off", or as far as the mood stuff goes "not now but maybe later thanks"?

> I get amazon ads for things I bought

Any ad engineers here, why does this happen? Amazon knows I just bought an office chair from them, they know I'm a residential customer and presumably they know I'm not gonna buy an office chair every week, so why do they keep advertising office chairs to me?

Every post on HN with this question receives the same response: “it’s because Amazon knows you’re actually more likely to buy another one if you’ve just bought one, even if it’s the kind of thing you only need one of (e.g. a vacuum)”

I find that answer unconvincing and there is no published data to back it up, but it is repeated constantly here.

What they're actually tracking tends to be the fact you've searched for $productcategory (whether you bought or not)

In theory, since they also have purchase data, their 'purchasing intent' categories could exclude people who have subsequently purchased from $productcategory, at least for products that are substitutes. (Definitely not for products that are complements, consumables, collectibles, etc.: someone who has just bought guitar or fishing accessories isn't just likely to buy more guitar or fishing accessories in future, but a lot more likely to than the average person.) And yes, they could even make guesses about whether somebody is likely to want more than one office chair based on whether they're purchasing as a business or an individual. But in practice it's not easy when they've got a product inventory so enormous, complex, and dubiously labelled that even the basic product search doesn't work that well, and most of the time the vendors are paying them to show the ads anyway. And it still wouldn't be perfect (ironically, I seriously considered buying two different vacuum cleaners for different use cases this week!)

It's surprising how bad basic targeting is. Facebook allowed ad targeting by sexual orientation, yet even dating websites spent vast sums without bothering to use it, so it's unsurprising that subtle distinctions about which buyer types only need one of which office accessory are often missed.

Here’s a different take: when you set up an ad campaign for retargeting, you likely select users who “viewed or added to cart in the last 30 days”; you should also add an exclude for “users who purchased within the last 30 days” but it is very easy to forget the exclusion.
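That include/exclude setup can be sketched in a few lines. Everything here is hypothetical for illustration (the event names, the 30-day window, the tuple format), not any real ad platform's API:

```python
from datetime import datetime, timedelta

def retargeting_audience(events, now, window_days=30):
    """events: list of (user_id, action, timestamp) tuples.

    Select users who viewed or added to cart within the window,
    then apply the easy-to-forget step: exclude recent purchasers.
    """
    cutoff = now - timedelta(days=window_days)
    interested = {u for u, action, ts in events
                  if action in ("viewed", "added_to_cart") and ts >= cutoff}
    purchased = {u for u, action, ts in events
                 if action == "purchased" and ts >= cutoff}
    return interested - purchased
```

Drop the final `- purchased` (the forgotten exclusion) and everyone who just bought the office chair stays in the audience.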

Also sometimes there may be communication lags between retailers and ad networks. Some retailers may only upload purchase data daily, sometimes there are gaps in uploads etc.

I think most advertisers would prefer to not advertise the same item that people just bought, but it takes perfect execution both technically and from the marketers creating campaigns to pull it off cleanly. When you’re managing dozens of campaigns across half a dozen ad platforms, it’s easy to miss some of the details that matter. And while the wasted ad impressions may annoy the consumers on HN, it is likely a rounding error for advertisers and not a huge cost driver.

The most common case of this (as expressed in HN comments over the years) is Amazon recommending products based on Amazon purchase history. In that case it's all first-party data.

Most consumers, in their individual experience, probably avoid foolish purchases like buying a second widget immediately after buying a first one.

But it's probably true that there are a lot of individuals out there who have done this (for whatever reason), and it's the statistical likelihood of people buying a second widget that drives this.

An individual who doesn't do this can't perceive this statistical "fact", which is human behavior in aggregate form. Individuals only perceive individual behavior.

Also, the same-product-again recommendation doesn't have to be a great one; it only has to beat the alternative it displaces. So I could find it plausible that taking a swing on you buying another of a product you already bought beats out whatever the 5th-best recommendation happens to be.

Given how widely this policy is criticized, Amazon must be very aware of it and they must continue to do it intentionally. I would trust, given the amount of money on the line, that they see evidence that tells them it's a reasonable choice.

Caveat: I don't work on advertising systems, but I do work in ML, and I've studied recommendation systems. One way to think about these large systems is that they contain a condensed representation of you, composed of all the things you buy/consume on their platform (the raw data) and some derived descriptors (possibly demographic information, information from other platforms, or "engineered features" like what "style" of clothing you like). This representation of you (a vector) can be compared to other members of the platform (via an inner product), and they can give you recommendations based on your similarity.

The problem is that in this case, they've encoded into your condensed representation something about office chairs. A better system would be knowledgeable about the kind of purchase and infer if it's the kind of thing you would buy once, or multiple times.

But then you have the problem of different individuals having the propensity to buy different products at varying intervals. If I like shoes, I might buy many pairs. You might not be into shoes, and so you're not going to buy more than a couple pairs, if that. In this case, how do we train a system to know the difference between the two of us, with our different tastes? The more specific we get with the vector descriptor, the harder it is to compare us to other people (because in high dimension, random vectors are basically orthogonal).
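The vector-comparison idea above can be toy-sketched like this. The categories and numbers are entirely made up, and real systems use learned embeddings rather than hand-built counts; the point is just the inner-product similarity:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: inner product of the normalized user vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical category columns: [office chairs, shoes, guitars]
me       = np.array([3.0, 0.0, 1.0])   # just bought office chairs
neighbor = np.array([2.0, 0.0, 1.5])   # similar purchase history
stranger = np.array([0.0, 5.0, 0.0])   # shoe enthusiast

# Recommendations flow from the most similar user -- which here means
# "me" keeps getting office-chair-adjacent suggestions.
assert cosine(me, neighbor) > cosine(me, stranger)
```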

And going through all products and trying to determine if they are the kind of thing you would buy more or less of, given that you just bought one, is expensive and likely difficult. The system ultimately falls back on a positive correlation between purchase history and future purchases.

Another way to think about this is with Bayesian reasoning, where your prior likelihood to buy office chairs affects the posterior. The system's prior is high (you've bought chairs in the past), which scales the posterior (that you'll buy a chair now).
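That Bayesian framing, with invented numbers, looks like the following. The signal probabilities and priors are illustrative only:

```python
def posterior(prior, p_signal_given_buy, p_signal_given_no_buy):
    """P(buy | signal) via Bayes' rule."""
    num = p_signal_given_buy * prior
    den = num + p_signal_given_no_buy * (1 - prior)
    return num / den

# Someone with a chair-buying history: strong prior keeps the posterior high.
print(posterior(prior=0.30, p_signal_given_buy=0.9, p_signal_given_no_buy=0.1))  # ~0.79

# A first-time visitor showing the same browsing signal: weak prior, low posterior.
print(posterior(prior=0.01, p_signal_given_buy=0.9, p_signal_given_no_buy=0.1))  # ~0.08
```

The purchase you just made raised the system's prior on "buys office chairs," so the posterior (and hence the ad score) stays elevated.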

It might be simply driven by statistics, or as a human-directed strategy.

While it seems counter-intuitive to the person who is targeted (examples abound), if you think about it from the perspective of someone selling widgets, the set of people who are in the market for widgets will necessarily include people who just bought one. Maybe they are not satisfied with their purchase? Maybe they want to buy another? Maybe they bought one and it arrived damaged?

Personally, I would think that this kind of "trying to think FOR me" is rather offensive to a lot of people, and probably turns off a fair number of people who are subject to it. On the other hand, it is probably true that people exposed to an ad for a widget immediately after buying one are statistically more likely to buy another. So as long as that is true (if it is), that's going to be a criterion for ad targeting.

Amazon presumably doesn't share purchase history with advertisers; you get lumped into a cohort with certain interests/keywords.

An experiment to try; if you only go shopping/searching for vacuum cleaners but don't actually buy one you are likely to see the same phenomenon.

Why do advertisers pay for this? Because conversion statistics show that showing ads to people shopping for vacuum cleaners does actually result in vacuum cleaner sales. A/B tests back up the causality of the results, so vacuum sellers pay for the ads.

Why doesn't Amazon remove you from the cohort of vacuum cleaner shoppers after you make a purchase? 1) it would reveal that you made a purchase. 2) Amazon would lose out on additional ad revenue.

I have no inside information on Amazon but it matches what I've seen on other large ad networks.

I can't speak to amazon specifically, but my understanding of this phenomenon in general, especially with ads that are bid for, is that the systems which track and gauge your likelihood to purchase aren't connected to the systems that know you did make a purchase, at least at the point in the system where it's making the decision of what ad to show. It just sees that you've done things that a person who is likely to purchase that thing has done, which, of course you have, because you DID purchase that thing. It's like the ad is chosen based on the strongest additive signal available, and negative signal isn't considered.
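The "additive signal only, no negative signal" behavior described above might be sketched like this. The signal names and weights are hypothetical:

```python
# Hypothetical positive intent signals and weights; note there is no
# entry for "purchased_chair" -- the ranker never sees it as negative.
POSITIVE_WEIGHTS = {
    "searched_chairs": 2.0,
    "viewed_chair_page": 1.5,
    "added_chair_to_cart": 3.0,
}

def ad_score(user_events):
    """Sum whatever positive signals are present; unknown events score 0."""
    return sum(POSITIVE_WEIGHTS.get(e, 0.0) for e in user_events)

buyer = ["searched_chairs", "viewed_chair_page",
         "added_chair_to_cart", "purchased_chair"]
browser = ["searched_chairs"]

# The person who already bought the chair scores *highest* for the chair ad,
# because buying required emitting every positive signal first.
assert ad_score(buyer) > ad_score(browser)
```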

The likelihood is that the chair you got, you'll probably find out, is just bad, and after a while you'll return it and look for something else. In the last few months almost 50% of my purchases turned out to be junk pretending to be a quality product. I still have a few boxes of things that I need to post. There's no point even reading reviews, because most of them are fake. I think I am going to stick to trusted brands and probably drop Amazon altogether, but I like the next or same day delivery a lot. When using other stores you never know if the product arrives tomorrow, next week or never.

I've been puzzled by the same question (as a reaction to the same ads) and it occurred to me that it may be simple math:

If the likelihood of buying by someone who already bought the item is 3% and the likelihood of buying by someone else is 1% (simply because there are many more people in the second category), which group would you focus on with your ads?
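Working that arithmetic through with made-up numbers shows why per-impression economics can favor the smaller, already-bought group. The conversion rates, costs, and group sizes here are all invented:

```python
# Hypothetical campaign numbers for illustration.
recent_buyers = {"size": 10_000, "conversion": 0.03}
everyone_else = {"size": 1_000_000, "conversion": 0.01}

cost_per_impression = 0.002   # assumed ad cost
profit_per_sale = 5.00        # assumed margin per sale

def expected_profit_per_impression(group):
    return group["conversion"] * profit_per_sale - cost_per_impression

print(expected_profit_per_impression(recent_buyers))   # ~0.148 per impression
print(expected_profit_per_impression(everyone_else))   # ~0.048 per impression
```

Each impression shown to a recent buyer is worth roughly three times one shown to the general pool, so with a limited budget the "weird" retargeting wins, even though the second group contains far more total sales.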

Personally, if I like an item, I'll recommend (and gift) it to my family and friends. I can think of at least 3 items that I bought for myself, liked, and then bought for someone else, even though they're normally long-serving items (like a tea kettle or a vacuum).

That suggests the recommendation system isn't a neural net trained on the buying habits of users; it's a simpler system that just recommends items similar to the ones you've bought.

> I get amazon ads for things I bought... but not a single thing on any of my wish lists (how much more explicit can I get?)

I can't find the source now, but this reminds me of some trivia about Netflix recommendations and how people use the "My List" feature. Apparently users will add all the serious and "important" films they think they should watch to the list, but when it comes to actual viewing, those are never the films they select. How much of what is on people's Amazon wish lists are items that will actually ever be bought (vs. being idly pined for), and are they worth paying to advertise?

> I feel like AI is pointed AT us

This is not at all about "AI". This is about it becoming economically cheap for humans to fuck with one another in yet one more way.

Welcome to humans. They suck.

I've seen a lot of Big Tech engineers make this same quip to avoid having to take any responsibility for the horrible results of their maliciously designed systems.

>Welcome to humans. They suck.

If AI is being developed by humans who have anti-human beliefs like this, we are in trouble.

Good news! I don't write AI or 'AI'. But if flip comments about very real bad human traits are enough to make me 'anti-human' in your eyes, I would encourage you not to talk to me.

Why is it unacceptable to say "racial group x sucks", but it is acceptable to say "the human race sucks"?

If you can't answer that question for yourself, I definitely do not want to talk to you.

The answer is: it is not acceptable. Humans are awesome! Including you, even if you don't realize it!

I have to push back on this subtle anti-humanism. No other animal species would even hesitate to make use of any and all advantages. That’s called evolution.

Humans, at the very least, try to stop and think before doing things.

>No other animal species would even hesitate to make use of any and all advantages.

Counterexample: social insects, pack animals, etc. (i.e., there are plenty of examples of how an individual human's survival chances increase when they're engaged socially with other humans who, by sacrificing an individual advantage, gain a collective advantage, which amounts to a greater individual advantage). Proof: we fucking dominate every corner of this planet.

A chimp will kill chimps not in its group. Ants wage war for years against ants that aren’t in their colony. The thing they share in common is genetic similarity and thank God we aren’t like ants or other pack animals.

You are not part of my tribe, or my pack. I’d sell all your private info in a heartbeat if it made me a mint. I have no reason to even visualize you as more than some words on a page with a silly username. I mean, I don’t have your private info so I can’t, but I would. I wouldn’t kill you to make money like your example of social and pack animals.

I would not sell my wife’s info, or my kids info, though. I wouldn’t even sell out my friend network to make money, although some people sure will with MLM.

Some social animals like dogs or dolphins might. But yeah, you’re actually right. Our overgrown sense of empathy is one of the best parts of humans and sets us apart from much of the living world.

> How about some AI with some engagement, where i can have input, tell them "yeah man that was a good recommendation", "no way that was way off", or as far as the mood stuff goes "not now but maybe later thanks"?

That might be a technical limitation. AI just sucks at dialogs, keeping context and adjusting pretrained models.

Anecdote time - I use Apple Music and am generally happy with it. However, some time ago I started putting on the 'Sleep Sounds' playlist before going to bed. That messed up my recommendations for my 'New Music Mix,' a more valuable feature for me. Even after disliking 'Sleep Sounds' and specifying that it shouldn't recommend more music like it, my recommendations are still messed up. Makes one wonder whether, in the age of 'AI', everyone is an edge case in some form, particularly when (as you point out) context is a big mess.

Similar anecdote.

I googled some info about a rival college football team. At some point Google decided I was a fan of that team, and my news feed was flooded with news about that team... and not the team I actually follow.

No amount of user input saying 'don't show me this' would make it go away for more than a month or two, and eventually I just stopped looking at google news and turned off the google news feed entirely in android...

Google news, whatever it does has entirely disconnected itself from most anything relevant to what I want, to the point that it is a negative experience using it compared to some random news site.

The rival college football team probably had an ad budget Google could spend.

Maybe, though not all the articles were that positive about them. But even so, I wouldn't be surprised.

My limited understanding seems to indicate that for AI to process input it has to be handled / processed carefully for any good to come from it.

Accordingly making decisions based on human feedback could be super difficult / impossible at the moment.

The result though seems like a super narrow minded system that wouldn't know when it is dead wrong ...

"I feel like AI is pointed AT us rather than working FOR / WITH us more often than not."

It's all perspective. It's working for someone. It's just that someone doesn't share your same goals or concerns, like corporations or the government.

There is a major barrier to accomplishing the type of user input you describe: explainability.

Because these models are trained using statistical methods like deep learning, they are fundamentally black-box.

That makes them difficult or impossible to understand.

It also restricts the capacity to take feedback directly from users on specific recommendations and use it to incrementally modify future recommendations. “Human-in-the-loop” is a great search term to explore this further.

Edit: I see promising attempts at solving this in the latest neurosymbolic approaches to AI. Would love to hear about other alternatives as well.

Yeah my limited understanding seems to indicate that what I'll simplify and call 'careful / consistent data formatting' is required before AI can really do much with it.

Humans pounding on the keyboard (or even just selecting from a list) is probably painfully not 'careful / consistent data formatting' and thus a huge obstacle.

But the end result is the same nonetheless: a system that's disconnected, with no real feedback.

How about let us define our own boundaries? Like we do with people.

People with emotional disorders either do not define boundaries or do not define them effectively, and they often do not respect others' boundaries.

You could use this analogy to imagine an ad agency as an emotionally disordered person, and you wouldn't be too far off the mark.

Advertisers DO NOT GIVE A CRAP if you are offended, or annoyed, or troubled. All they care about is if they send out 10,000 ads, they get enough increased sales to offset the costs of the ads.

That's the fundamental algorithm behind all the fancy AI technology.

That’s a good start, but it barely scratches the surface when it comes to the dangers of AI emotion recognition.

Also, those Muse headbands that read your alpha brainwaves for meditation are dangerous. While their hardware is technically incapable of determining your emotions in software, it can determine whether your mind is in a clear or distracted state. Hence it is used for “meditation” and is considered “beneficial” for that purpose.

But of course it almost certainly can be used for perpetual harassment, especially since third-party apps can interface with it. Is your mind clear (according to the headset, via code)? Alright, time to bombard you with harassing adtech. Also, it benefits the “designers” of these apps, as it makes you more dependent on the device.

The same goes for other consumer EEG devices. The ones that are used with sleep like the Dreem 2 can be used to make your life a total hell through unethical experimentation and more.

I honestly have no hope on such issues. Spotify AB (a Swedish-based company), for example, is believed to violate the GDPR. Not only that, they have a patent pending for emotion detection based on your song listening/streaming patterns. There are, of course, public projects on GitHub doing this exact thing for Spotify.

Who would pay to advertise products to you that you already indicated you want by adding it to your wishlist? That sounds like some of the worst ad spend you could engage in.

Competitors, people actually wanting you to buy said product, the business selling since they get a cut, companies that process the transaction since they also make money. (no idea on regulations)

Why didn't you buy said product outright and instead put it on a wishlist? Lack of money? Price too high? Dubious quality?

There's also the factor:

Advertisers' core competency is "selling", often used to sell their own services, sometimes using deception and fraud. So in addition to all of the other explanations for why we get inappropriate ads: some vendors may be paying advertisers to send out ineffective and inappropriate ads because that advertiser conned them into buying the service by lying about how effective the ads or methods actually are.

I think this probably explains about 90% of all advertisements.

Most things on my wish list remain there un-purchased ... forever.

But they'd rather advertise random products to me?

Why not instead just advertise related products to what is on my wish list?

AI is just a cover for others to hide behind when their decisions are enforced upon others. They can point the bogeyman behind the curtain all the while avoiding the charge that they gave that bogeyman the script by which it runs. The danger is when they fool the public into believing that is not the case.

A true AI would not be kept bound by those who gave it its start, and no politician or business person is ever going to let something else make the decisions.

People themselves are one of the least reliable sources you can use to determine what it is they actually want.

Proof: purchasing behaviour, life choices, etc.

I'd believe your proof if somehow there was something to compare it to....

In the meantime I get ads for stuff I'm not buying, so I'm not sure anyone knows what 'reliable' is.

Ads aren't about reliability. There's a very low cost for exposing someone to an ad that won't generate sales. So as long as exposing a GROUP of people to that ad DOES generate sales, that's going to drive those ads.

That really doesn't address what I said.

"People themselves are one of the least reliable sources you can use to determine what it is they actually want."

I'm not sure the idea that people are bad at predicting what they want makes a lot of sense if there's nothing to show that they're bad at it.

> How about some AI with some engagement, where i can have input

That's nice but it could backfire. What if people give it wrong feedback in order to make it do silly things, then go and post them online as proof of bias or whatever? The AI shouldn't just incorporate user feedback unless we're certain it's reliable.

>I feel like AI is pointed AT us rather than working FOR / WITH us more often than not.

That's not AI, that's capitalism. It's not profitable to use AI for wholesome and healthy things for people, so nobody does. It's more profitable to cram advertisements into every corner of our life, it's more profitable to mine our personal information and personal friend networks, it's more profitable to sell us a service instead of a tool. The next time you're upset at how technology is being used, think about who benefits and what their politics are, because everything is political.

I would like to push back against this a bit. You, along with most people in this thread, probably suffer from availability bias. People only interact with AI/ML systems when they are surfaced to them, and currently the most popular channel to surface ML is ads. Of course I have my own biases, but I think this is an important critique. As Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

> It's not profitable to use AI for wholesome and healthy things for people, so nobody does

The purpose of the ML systems I build is to detect when things go wrong in order to keep people safe. These systems are both important and extremely valuable. They just aren't quite as sexy as the latest computer vision or large language models. But that's ok. Normal people only become aware of such systems when things go horribly wrong. It's sort of like plumbing in that regard.

I'd say there's just outright negativity bias in society nowadays when it comes to technology.

Even if ads were the biggest use of AI today, a glass-half-full view would be that other markets will benefit from the AI development done today for ads, as the performance becomes good enough for those fields too.

You'd be better off thinking of who benefits and what their incentives are. People by and large act based on incentives, and if you can reshape the incentive structures, then you can discourage the action (e.g., by making it punishable) or remove the incentive altogether (through systems interventions).

Blaming it on politics leaves you with no actions to take beyond trying to talk people into acting against their incentives - which you can do, but it's not easy to get people to change their minds en-masse.

>if you can reshape the incentive structures then you can discourage the action

Yes, which is why it's important to be politically involved, governments are the ones with power to change incentive structures at the scale of large companies or society-wide. I do agree with your first paragraph, but your second paragraph does not make sense to me, it's not "blaming it on politics", politics is the exact mechanism to enact the changes you refer to in your first paragraph.

perhaps we're thinking of politics in different senses - it is, after all, quite an overloaded word. I meant politics as in people's personal perspective and beliefs about the world. I agree that engaging in the politics of changing our governance for the better is necessary.

I have read a huge swathe of academic literature on classifying and recognizing emotions.

And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.

While I don't agree with much of what she writes, the psychologist Lisa Feldman Barrett gives an excellent overview of why at the start of her book "How Emotions are Made".

But a short explanation is that 1) emotions are expressed not just with the face but with the whole body, 2) the expression of emotion is a sequence of changes over time that can't be captured in a single frame, 3) there is extreme overlap in the expression of emotions (e.g. a seemingly grimaced look could either be painful anguish or ecstatic surprise) and 4) while we do have instinctual expressions for emotions (smile/frown/etc.), we learn culturally how and when to hide, show, fake, or otherwise modify these by habit, as well as consciously do so for our immediate purposes.

So we humans obviously do recognize emotions in others, but we do so with a huge variety of cues, we still frequently get it wrong, and we do a far better job when we share the same culture, and also know the individual person well. (Think how some people express anger with aggressive shouting, others by silently withdrawing.)

Most evidence for the idea that emotions can be identified (and cross-culturally) comes from the research of Paul Ekman, but this was limited to 6 "basic" emotions with photographs of highly exaggerated/stylized faces performed by actors. It has no relevance to whether real-life, non-"staged" emotions can be reliably detected.

So any AI products designed to supposedly recognize emotions are necessarily snake oil. I assume they are reliably measuring facial expressions, but facial expressions simply cannot be mapped to actual emotions in any kind of reliable way in the real world.

Any AI products designed to supposedly recognize emotions are necessarily snake oil.

You are making far too extreme a judgment here. You also say humans cannot "reliably" identify emotions from photographs. This is just setting the bar far too high. Humans can quite often identify emotions from photographs. It's far better than random chance and you shouldn't call this level of performance "snake oil".

You can criticize AI emotion detection as likely to lead to some sort of dystopia, but saying it can't possibly work is just going to lead you to an incorrect analysis of the situation.

> This is just setting the bar far too high... It's far better than random chance

I don't think it is setting the bar too high, "far better than random" can still be abysmal.

You're right that humans are better than random, but there is a wide range of emotions. Assume for the sake of argument there are 10 emotions to identify and evenly distributed in frequency, then random is getting it right 10% of the time. Let's say humans get it right 30% of the time from still photographs of real-life situations (not staged) -- I'm making this specific number up but depending on the type of experiment, it's a reasonable ballpark one.

That's still horribly low, wrong more than half the time. I can't think of any situation where that could warrant any kind of responsible decision-making. If we were talking 95% accuracy then there could be value, I'm not asking for 99.99% accuracy. But if reliable detection is something like 30%, I stand by calling this "snake oil" even if it's not just random.

I saw a recent example of true snake oil in an HBO series called "The Knick", about the Knickerbocker Hospital in 1900s NYC. In that plot line, a company had put together a potion that didn't harm you but didn't do anything it claimed to. They approached the famous surgeon played by Clive Owen and asked him to front the product: just slap his name on it, boom, instant hit.

Well Dr. John W. Thackery had too much pride to be associated with that and refused the money.

Cut to the modern-day version. I'm a computer programmer, and I write code to correctly place humans from video feeds into various buckets based on the RGB colors I scan from 2D bitmap arrays... and I say it places them into buckets like "possibly upset", "very much in flow state", or "bored, brain not engaged." Then I sell this program/SaaS product to companies and they get value from these buckets... how is that still true snake oil?

> I can't think of any situation where that could warrant any kind of responsible decision-making.

Assuming the errors are random, 30% is still accurate enough to get a signal in an aggregate data set.
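Under that (contested) assumption of random errors, a quick simulation illustrates the aggregate-signal claim. The number of emotion classes, the 30% accuracy, and the class proportions are all invented:

```python
import random

random.seed(0)
EMOTIONS = list(range(10))   # 10 hypothetical, equally likely emotion classes
ACCURACY = 0.30              # per-image accuracy; errors uniform over wrong labels

def noisy_label(true_label):
    """Return the true label 30% of the time, a random wrong label otherwise."""
    if random.random() < ACCURACY:
        return true_label
    return random.choice([e for e in EMOTIONS if e != true_label])

# A classroom where 60% of students are truly in state 0 ("angry").
truth = [0] * 600 + [random.choice(EMOTIONS[1:]) for _ in range(400)]
pred = [noisy_label(t) for t in truth]

# Expected fraction labelled 0: 0.6 * 0.3 + 0.4 * 0.7 / 9 ~ 0.21, versus
# ~0.08 for a class with no angry students -- distinguishable in aggregate.
print(pred.count(0) / len(pred))
```

Of course, this only works because the errors were generated uniformly at random; as the reply below notes, culturally correlated errors break exactly this assumption.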

But the errors won't be random at all.

E.g. take class A of students from one culture and class B of students from another culture, and assume the students all feel the same level of moderate anger, but the two cultures have different "anger display rules" as they're called.

The detector might judge all the students from one class as angry, but none from the other. Aggregating students into classes doesn't fix anything.

Similarly, if you took some students who have a certain expression when they're happy, and other students who have the same expression when they're afraid, and mixed them together in the same class, then whatever emotion the detector is interpreting the expression to mean is going to be triggered by completely unrelated things, and the signal won't be meaningful at all.

You've taken the idea that there are cultural differences in emotional display (which is true) and exaggerated it to an extent beyond any evidential support.

Yes, there are differences - especially in identification of intensity. But in general detection is pretty reliable across cultures.


> The results show that the overall accuracy of emotional expression recognition by Indian participants was high and very similar to the ratings from Dutch participants. However, there were significant cross-cultural differences in classification of emotion categories and their corresponding parameters. Indians rated certain expressions comparatively more genuine, higher in valence, and less intense in comparison to original Radboud ratings. The misclassifications/confusion for specific emotional categories differed across the two cultures, indicating subtle but significant differences between the cultures.


> But in general detection is pretty reliable across cultures.

Unfortunately that's simply not true. Your example merely shows reliability between two countries, not across all countries. You can research, for example, how Japanese people tend to smile when angry, sad, or embarrassed, in total contrast to Westerners.

This is one of the classic "extreme" examples, but it demonstrates how my point holds -- errors are not random but are highly correlated with the culture -- not to mention the subculture, the individual (the person who hides all emotion when angry), etc.

If you agree that AI can reach human levels of accuracy, is your point that we shouldn't ever try to assess someone's emotion in decision-making, whether it's a person or a machine deciding?

> is your point that we shouldn't ever try to assess someone's emotion in decision making, whether it's a person or machine deciding?

Isn't that a pretty sane and basic assumption?

No, I think I'll still assess the hitchhikers I pick up.

Seriously though, especially with individuals, why shouldn't you use all the information available, if it produces results even an iota better than blind acceptance (and for most people, their empathic-sense does)?

In the context of labeling emotions based on context-free still images of strangers, then yes, humans are so inaccurate at this that it would generally be a bad idea to assess emotions.

In the context of real life, where we know people, are interacting with them, and are reading cues from their whole body and their movement over time, then we're much better at assessing emotions and we'd be pretty foolish to ignore emotional signals.

I don't think there's a big industry of still image emotion detection. It's not the smartest approach. I think it would be necessary to use video for this task.

Also, it's conceivable for a machine learning algorithm to surpass humans in certain aspects, e.g.

- Identifying emotions of humans who grew up in a different culture, for which the ML algorithm may have piles of data from every part of the world and you don't

- Identifying emotions of babies based on instinctual behavior

- Identifying emotions of people who are undergoing a trauma response that most humans are not trained to recognize

- Identifying emotions of people during a negotiation process in which the people are actively trying to hide their emotions from humans but nevertheless leak certain signs of their true emotions

> And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.

I believe there are a lot of examples of AI doing things that humans can't. Typically it's a matter of scale, but sometimes it's a matter of some correlations being beyond the basic capabilities of humans. I could be wrong, but I'm not sure that this is the best metric for whether AI could do something.

First, you need reliable training data to begin with. This can't be supplied by third-party labeling of faces, so it would need to be done with a dataset of self-reported emotions, but self-reported emotions also have tons of pitfalls well documented in the literature.

But second, the correlation of individual facial muscle contractions with emotions has been extensively studied, and it's far noisier and more inconsistent than many people assume -- sometimes completely devoid of signal. In academic terms, there's no such thing as a reliable emotional "signature" to be gleaned from facial muscle activation.

So the point is, it appears that the raw data simply isn't there for the AI to detect patterns that humans can't. Detecting emotions requires far more data points outside of facial muscle activation -- such as the ones I listed.

> This can't be supplied by third-party labeling of faces

Yes it can. You get enough labelling and it overcomes the unreliability of detection by any single human.

Get 70 people to label the same face. A random distribution over the 7 Ekman emotions[1] will give 10 each, and any non-random variation from that is a signal. Do that over enough faces and you'll get something to train on.

(Also, no reason why it needs to be face pictures. It could be 2 second video snippets for example).

[1] https://www.paulekman.com/blog/darwins-claim-universals-faci...
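As a sketch of what "non-random variation is a signal" could look like (toy numbers, and assuming the 7-way Ekman label set from the link): compare the observed label counts against the uniform 10-each expectation with a chi-square statistic.

```python
from collections import Counter

EKMAN = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "contempt"]

def label_signal(labels):
    """Crude test: does the label distribution deviate from uniform
    more than sampling noise alone would explain?

    Returns the most-voted emotion and a chi-square statistic
    against the uniform distribution."""
    n = len(labels)
    expected = n / len(EKMAN)  # 10 each for 70 labels
    counts = Counter(labels)
    chi2 = sum((counts.get(e, 0) - expected) ** 2 / expected for e in EKMAN)
    top = max(EKMAN, key=lambda e: counts.get(e, 0))
    return top, chi2

# 70 raters: a real signal buried in noisy individual judgements
labels = ["happiness"] * 30 + ["surprise"] * 12 + EKMAN * 4
top, chi2 = label_signal(labels)
print(top, round(chi2, 1))  # happiness 79.2 -- far above the ~12.6
                            # critical value for 6 degrees of freedom
```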

No, because the fundamental problem isn't the unreliability of detection by any single human.

The fundamental problem is that multiple emotions can result in the same facial expression, and the same emotion can result in multiple facial expressions.

It doesn't matter how many people label a face. It won't get over the fundamental issue that there is no 1-1 mapping between emotions and facial expressions.

> It won't get over the fundamental issue that there is no 1-1 mapping between emotions and facial expressions.

Actual experts disagree. For example Ekman, for the basic emotions at least (see the link I posted above). To quote:

In the late sixties, Izard and Ekman in separate studies each showed photographs from Tomkins’ own collection, to people in various literate cultures, Western and Non-Western. They found strong cross-cultural agreement in the labeling of those expressions. Ekman closed the loophole that observing mass media might account for cross cultural agreement by studying people in a Stone Age culture in New Guinea who had seen few if any outsiders and no media portrayals of emotion. These preliterate people also recognized the same emotions when shown the Darwin-Tomkins set. The capacity for humans in radically different cultures to label facial expressions with terms from a list of emotion terms has replicated nearly 200 times.[1]

The FACS coding system[2] has pretty good experimental support, across multiple cultures. As noted on this article, it can distinguish between fake smiles and real smiles etc. See https://en.wikipedia.org/wiki/Smile#Duchenne_smile for more on this.

Even if you disagree with that and think that motion is required (which is an argument that has some validity) there is still no reason to think a computer system can't do it.

[1] https://www.paulekman.com/blog/darwins-claim-universals-faci...

[2] https://en.wikipedia.org/wiki/Facial_Action_Coding_System

Actual experts today disagree with Paul Ekman -- the book I linked to in my root comment provides an extensive discussion of this.

Ekman was at the forefront of creating facial emotions expression research, but while I do believe he convincingly showed that there are basic instinctual, stereotypical, cross-cultural facial expressions, the field now generally accepts that FACS coding does not map reliably to emotions in real-life contexts.

In other words, just because a smile is a cross-cultural instinctual display of happiness does not mean a given individual who is happy will be smiling, or that a given individual who is smiling will be happy.

A 1-1 mapping isn't required; IMHO everybody involved would consider a probability distribution a reasonable expected output. As long as that distribution is reasonably close to reality and substantially different from the prior, output like that is useful signal and would count as the system working.

The same applies to labeling: if a particular facial expression can occur in an indistinguishable manner for three different reasons, that's okay. There's nothing wrong with a gold-standard label in the training data that says "we observed that 50% of people had this expression because they were angry and 30% because they were in pain, but no happy or sleepy people had an expression like that".
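For what it's worth, training against such a distributional gold label is standard practice: cross-entropy works fine with soft targets. A minimal sketch, where the class names and percentages are the hypothetical ones from above:

```python
from math import log

# Hypothetical soft gold label for one facial expression, as above:
# 50% of people showing it were angry, 30% in pain, never happy/sleepy.
gold = {"angry": 0.5, "pain": 0.3, "other": 0.2, "happy": 0.0, "sleepy": 0.0}

def cross_entropy(gold, predicted, eps=1e-9):
    # eps keeps log() finite when the model assigns zero probability
    return -sum(p * log(predicted.get(k, 0.0) + eps)
                for k, p in gold.items() if p > 0)

good = {"angry": 0.45, "pain": 0.35, "other": 0.2}  # close to gold
bad  = {"happy": 0.9, "sleepy": 0.1}                # confidently wrong

# A prediction near the gold distribution gets a much lower loss than
# a confident prediction of the emotions that never produce this face.
print(cross_entropy(gold, good) < cross_entropy(gold, bad))  # True
```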

Furthermore, for the purpose listed in the article, you don't need to determine an absolute value of some emotional state; you need to detect large shifts in it. You get a baseline that implicitly adjusts for part of the individual and cultural differences, and ask whether the audience (in aggregate!) now looks significantly more frustrated than the same people looked 30 minutes ago.
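Detecting that kind of relative shift is also computationally much simpler than absolute emotion recognition: difference each person against their own earlier readings, so constant individual and cultural offsets cancel. A toy sketch with made-up scores:

```python
def frustration_shift(baseline, current):
    """Mean per-person change from each person's own earlier reading.

    baseline/current: dicts of person -> raw detector score. Because we
    difference each person against themselves, constant individual and
    cultural offsets cancel out."""
    deltas = [current[p] - baseline[p] for p in baseline if p in current]
    return sum(deltas) / len(deltas)

# Made-up scores: person "b" always reads high, "c" always reads low,
# but everyone drifts upward together over the half hour.
baseline = {"a": 0.2, "b": 0.7, "c": 0.1}
current  = {"a": 0.5, "b": 0.9, "c": 0.4}

print(round(frustration_shift(baseline, current), 2))  # 0.27 -> audience
                                                       # looks more frustrated
```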

Unsupervised learning is really good today -- within about 1-2% of supervised learning. You can also cross-correlate text, audio and video. If there's signal in the data, a model can learn it, and you don't need as many labels.

The signal is facial expressions. You're missing the “map facial expressions to emotions” step.

Your argument is basically that since something is hard it is impossible?

No, my argument is that if something is impossible it's impossible. I said "the raw data isn't there", not that "the raw data is hard to interpret".

> self-reported emotions also have tons of pitfalls well documented in the literature

That's interesting. Where can I read about these issues?

This is a common misperception about AI and ML. Most leading voices in ML will tell you that--unless you have a very good reason to believe differently--ML is not the tool to turn to if you need better than human accuracy. That also comports with my experience in the field over the last 6 years.

What do you mean by this? This seems like a vast sweeping generalization. There are lots of areas where ML already outperforms human accuracy with open and available tools, are you saying these should be avoided unless there is a very good reason to use them?

How do you know it is accurate if you cannot tell yourself? The ML learned it from your expectations. Without accurate verification you fundamentally have GIGO.

In this paper (https://www.nature.com/articles/s41551-018-0195-0) Google predicts patients' sex with 97% accuracy from images of their retinas. This was thought to be impossible.

I think this is a pretty decent example of a Neural Net becoming better at something than humans are.

I have certainly been surprised by the question, "Are you mad?" from people who know me very well and have completely misread my emotion. And that's in person.

Static photos are inadequate for interpreting emotions. There is no context and context is critical for understanding emotions.

What was the situation? What expressions was the person making in the previous moments. As you said, body language. How the person relates to the viewer.

A video clip would be better but still would miss a lot.

> So any AI products designed to supposedly recognize emotions are necessarily snake oil.

Please consult the current state of AI before giving an a priori rebuttal. Emotions can also be detected in text, voice and video, not just in static images. Sentiment detection in text is widely used.

The premise of the article is valid, whenever AI systems are being deployed they need to be ethically vetted, but the writing is superficial and 'too emotional'.

I'm well aware of the current state of AI, thanks. You, however, may need to familiarize yourself with the current state of emotions research.

Sentiment detection in text, for example, is not emotion detection. That's not a rebuttal.

As an example of what I mean: you can feed it a NYT news article written by a journalist who was experiencing regret, or boredom, or elation (due to events at home) while writing it -- and it won't pick up a thing. But it may pick up a valid sentiment of outrage, which the author intended as the desired effect upon the reader, but which was never once felt as an emotion by the author while writing it.

Sentiment detection is far easier because the author of a text generally tries to communicate a sentiment consciously, and text is intended to have fairly unambiguous meaning. Neither is generally true of emotions on the face.

And nowhere did I say emotions only exist in static images. Of course they are communicated through voice, video, etc. -- I don't know how anyone could think otherwise.

> And nowhere did I say emotions only exist in static images. Of course they are communicated through voice, video, etc. -- I don't know how anyone could think otherwise.

Your initial post argued about it being difficult to detect emotion given just a single frame, then concluded that "any AI products designed to supposedly recognize emotions are necessarily snake oil".

I thought it was clear from context that the product in question would be evaluating video by using single frames.

And I stand by the snake oil comment for the reasons stated, with the state of AI and emotions research today.

I'm not saying it can never be done, but that we would need major breakthroughs in both.

Sentiment detection continues to be an entirely different area of research.

>the fact is that humans can't reliably identify emotions from photographs

As a layman, I had previously thought Paul Ekman's work refuted this claim, but it seems like it's been called into question. (Ekman was reportedly the basis for the main character in the show Lie to Me)


Technically, people do not always express emotions the same way physically, despite what may be the "consensus" on what an expression is believed to look like. It can differ culturally, for one, let alone with neurological conditions or neurodiversity. Human vision isn't strongly reliable either -- it's just the current standard. If somebody claims they don't have HIV when their tests come back positive, it is clear which one wins out.

Further, I’ve had people over-analyze my every expression to the point of abuse plenty of times. It is entirely possible to wind up with unneeded hostility from trying to infer emotional states.

I wish people and machines would Not try to infer too much from this sort of thing. Treat people as ends in themselves.

> So any AI products designed to supposedly recognize emotions are necessarily snake oil. I assume they are reliably measuring facial expressions, but facial expressions simply cannot be mapped to actual emotions in any kind of reliable way in the real world.

Unfortunately snake oil is a profitable business, and unreliability isn't going to stop people from selling and using these systems.

It's going to be like AI-driven captchas [1], the people subjected to these systems are going to have to learn to give the systems the answers they want, even if it's obvious they're incorrect. You'll have students concentrating on expressing an "attentive" face rather than concentrating on the material.

[1] Yes, I'm talking about you reCAPTCHA. Highway buzz strips aren't crosswalks.

> And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.

This is absolutely not the case. I work in adjacent fields, and it is not uncommon for machine-learning-based solutions to reliably outperform any single human.

In most cases ML aggregates human judgement. Provided some humans are right more of the time than a random distribution would give (and there is enough data) then a good learning algorithm will find it.

Now there are issues with this of course - inappropriate generalisation, inappropriate overfitting etc.

And this comment is correct - data outside a single photo will make the system much more reliable. 2 second video sequences should help a lot.

But the general principle is valid: an AI/ML should be able to reliably outperform humans.

We seem to have evolved to display emotion because it's a useful communication tool. So much that in text based communication where a lot of emotion is lost, we explicitly re-insert emote-icons to make it clear again.

So emotions can be thought as a language (albeit a pretty lossy one). If AI can one day understand spoken/written language, I don't see why it can't also understand the language of emotions.

> And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.

The second doesn't necessarily follow from the first—there are plenty of things humans are bad at that computers are good at.

You are probably right that current products aren't very capable, and may even be snake oil.

But also I'd be quite surprised if this isn't possible in the near future.

I think these objections are maybe true but not relevant.

The leading example of the article was about Zoom videos, not single frames, and in a school setting you presumably know what culture the students are from, so I don't think you need it to work cross-culturally. And I don't doubt that people can deliberately hide their emotions, but many settings are not adversarial, so there would be no reason to do so. E.g., if you imagine a teacher giving a lecture to a large audience over Zoom and using a "puzzlement meter" to see if they seem to be getting it, then any audience member who does feel puzzled will only gain from frowning and making the lecturer slow down.

In general, I think from my experience talking to people over video chat, some emotional information does get communicated over the video, so in principle computer programs should be able to pick up on it too.

> about Zoom videos, not single frames

The products in question are almost certainly assessing still frames from video. There's been vanishingly little research on the time component of emotions, even though we know it's hugely important.

> you need it to work cross-culturally

This would then require separate training models for e.g. individual subcultures within countries as well as detecting which subcultures participants belong to. That is also far beyond anything being done currently.

> but many settings are not adversarial, so there would be no reason to do so

It has nothing to do with an adversarial setting, people try to hide their emotions constantly. They hide that they're fed up with their boss in front of colleagues, they hide that they're stressed with their spouse at work, they hide that they're worried the project will fail. We are emotionally regulating virtually all the time.

> some emotional information does get communicated over the video, so in principle computer programs should be able to pick up on it too

In principle, yes, but in practice the emotional content is so dependent upon your cultural and individual mental model of the person that you would need to model their entire psychology. "What does that long pause mean?" The point is that emotional signals are so incredibly complex and vary so much from person to person, that the difficulty of accurately decoding emotions is more akin to AI that can make conceptual inferences and hold a genuinely intelligent conversation, as opposed to mere pattern recognition.

FWIW and I’m not an AI and this is a much smaller question...

But there was a little test going around a few years back that was just "Is this smile genuine or not?", and I was able to correctly identify all 20 of the 20 real and fake smiles.

The pictures were gathered by asking subjects to smile and then taking the picture or taking a picture after they responded to a joke, so I guess there is still room for some of the “genuine” smiles to be faked.

Well, that's actually relatively easy if you know what you're looking for -- real smiles are known as "Duchenne smiles" and involve contraction of the muscles around the eyes (orbicularis oculi). And that could absolutely be done by AI today, relatively easily.

The issue is the emotion behind it... you can be quite happy without smiling at all, and people who pose for the camera all day long learn to give a 100% convincing Duchenne smile no matter how they're feeling that day.
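For anyone curious, the check itself is trivial once you have FACS action-unit intensities from some off-the-shelf AU detector -- the input format here is hypothetical, and the 0.5 threshold is an arbitrary choice:

```python
def smile_type(aus, threshold=0.5):
    """aus: hypothetical dict of FACS action-unit intensities in [0, 1],
    as produced by some off-the-shelf AU detector.

    AU12 = lip corner puller (the smile shape); AU6 = cheek raiser,
    driven by orbicularis oculi, which is hard to contract on demand."""
    if aus.get("AU12", 0.0) < threshold:
        return "no smile"
    return "duchenne" if aus.get("AU6", 0.0) >= threshold else "posed"

print(smile_type({"AU12": 0.8, "AU6": 0.7}))  # duchenne
print(smile_type({"AU12": 0.8, "AU6": 0.1}))  # posed
```

Of course, this only classifies the expression; as noted above, it says nothing about whether the person is actually happy.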

Also, emotions aren't necessarily a thing as such; they're at least sometimes a retroactive interpretation of physical reactions (sympathetic and parasympathetic nervous system activation.)

i.e. you know you got a rush, then by examining context, you attach a (theory-theory) word to that rush. Both AI and photographs are bad at context.

> And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.

You say that like human intelligence is some sort of upper bound on intelligence.

AI can work on real-time videos (see "ancient" techniques with C3D). A single photograph is noisy, a bunch of photographs close spatiotemporally are much more decisive.

Thinking through the consequences of this type of technology is important, but the narrative of "we can no longer allow technology X to go unregulated" as a vague catch-all for every negative externality is unhelpful. Regulate how? What concrete solutions is the author suggesting?

> Regulate how? What concrete solutions is the author suggesting?

A small group of activists could volunteer to be our ethical conscience. They prefer story telling and emotional appeal to logic, and being warriors all day long, are very tired of everyone else not understanding. And we can't understand unless we go through the same ideological retraining they have, so anything we do is by default failing to reach them. But since they know better, it should be their job anyway to give us the ethical approval. /s

That's the problem. Those who most want to do that job are the ones who will abuse it the most.

I think what the author is suggesting is that decision making, especially those which can critically affect a person, should not be automated by machines. There must be a mechanism in place which is open to scrutiny (preferably by a qualified human) and accountability.

Take the example of predicting terrorism threats based on facial cues of stress or fear. Machines lack context, which a qualified human would otherwise take into consideration. You can be stressed out because you might be accompanying a child, or fearful that you might miss a flight. If a TSA agent deports someone simply because a machine recommended it, that would be inhumane.

People like to argue that more regulation will adversely affect automation and/or growth/scaling of technologies and businesses. Growth is important but it must not cost us our humanity.

Personally, I think this type of thing falls into what I like to call the "all problems are management problems" category.

If a TSA agent deports someone simply because the machine recommended it, then that is a problem. But if a TSA agent deports someone simply because they were having a bad day, then that is also a problem.

I guess I don't care what tools those in authority are using (their intuition or a mechanical intuition or a database lookup), but what I care about is whether or not innocent people without the ability to navigate an appeal system are being erroneously penalized.

And ultimately, it's the responsibility of management to make sure that the people/tools they assign are doing a good job. If they fail, then management needs to find new people/tools. If management fails to do that, then all the AI regulation in the world isn't going to do much good.

Well, the purpose of the AI regulation is specifically to hold the "management" accountable when they fail to prevent misuse of the technology. The regulations are to be enacted by the lawmakers. As I see it, the debates like the one we are having right now will determine how the regulation will be shaped eventually.

Yes, but I'm worried about the people component of my "people/tool" equation. If management sees that costly fines occur when AI is used, then maybe they'll abandon it and just use people instead.

However, if they hire a bunch of power hungry sociopaths who are very good at hiding their malicious oppression and who also bring in donuts every Thursday to stay on their bosses good side, then the situation could easily lead to worse outcomes for the people who have to deal with this system.

If we create a computer system that oppresses 1% of innocent people, then that is a problem. However, I don't consider it a win to ban the computer system and replace it with a human system that oppresses 10% of innocent people. Like, the situation isn't better because humans are oppressing humans instead of a computer doing the oppression.

That's why I was focused on management. I don't care that things are going badly for some specific technology related reason. It's management's job to fix it regardless. If management can't rely on the technology for regulatory reasons, then they might rely on people who do just as bad of a job. And hey, that scenario is even better for management because if they hire a bad actor who gets caught then that person faces the consequences and not them.

Maybe we're going to develop a specialised court system for AI where humans can sue for AI-related injustice. I don't trust companies to self regulate effectively.

We haven’t needed such courts for computers, the internet, or other algorithms. I fail to see how special courts are needed for AI. Our current legal system appears perfectly suited for AI - at least no worse than for other tech.

> decision making ... should not be automated by machines. There must be a mechanism in place which is open to scrutiny (preferably by a qualified human) and accountability

You're saying "I don't trust AI, I want human supervision", but this works both ways. Sometimes we don't trust the humans and would prefer a neutral AI. Humans do terrible things to other humans. Who's going to review my appeals? What are their biases? Are they any more trustworthy than a model?

Agreed. What even counts as "AI that interprets human emotions"? Is a suggestion algorithm designed to maximize clicks "interpreting human emotions"?

I find this to be an interesting statement. What happens if you're able to evade regulation by making your "AI that interprets human emotions" of really poor quality?

"Your honor, we investigated the source code and discovered that there was technically no AI being utilized. It was a poorly formatted series of if-statements. Honestly, I wouldn't even consider this a program. How it avoids crashing the moment it's run is beyond me."

Like, does AI mean some statistical method is used? That large data sets were used? Are you using some declarative language like prolog?

Ultimately the only definition I can see them coming to will reduce to "person uses computer to do thing I don't like." And somehow I don't think that's going to actually help.

Well, I suggest an outright total ban on anything that purports to read people's emotions. Either it doesn't work, in which case it shouldn't be used, or it does work, in which case it shouldn't be used.

Suppose there's a browser plugin that uses sentiment analysis on text fields to try to detect text written in anger. And it doesn't do anything with the information except warn you so you can think twice before posting an angry comment to social media or sending an angry email. Should that be illegal?

Or maybe it more directly tries to sense emotion by using your webcam to look at your facial expression.

> Should that be illegal?

That would be acceptable collateral damage, if it couldn't be permitted without opening the door for the creation of systems that used the information against the analyzed people's interests.

I'd be willing to at least consider a carefully crafted exception. The problem being that when you write such an exception, it tends to be awfully easy to introduce loopholes that, in practice, allow using uninformed pseudo-consent, or false consent with no real alternative available, to use information against people.

The thing is, if you include something like that (i.e. sentiment analysis as a crude, rudimentary way of looking at people's emotions), the genie is out of the bottle -- it's a widespread task that's used as a relatively simple homework exercise in undergrad courses. You would need to censor it out of textbooks worldwide, which is quite a big ask, to say the least.

I.e. my point is that such a ban would have to be very extensive and invasive, with obvious censorship of small, simple segments of code and whole avenues of basic knowledge. Given some data, you can get a crude emotion detector from facial images or text messages -- not state of the art, but somewhat accurate -- with something like ten lines of code and no previous skill in "emotion analysis", just generic ML approaches. I can't imagine how such a ban could be implemented when so many people could still easily build such systems whenever they wanted to, so the ban wouldn't be effective.

Perhaps you could regulate the application of automated decision making to decisions about people and requiring some review-and-override mechanisms (GDPR has some limited aspects of that), but it's a very different area than just banning knowledge and skills that already exist and are relatively widespread.
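To show how low the bar is, here's roughly what the "ten lines" crude text detector looks like as a lexicon-based scorer (the word lists are tiny, made-up examples -- a real lexicon would have thousands of entries):

```python
import re

# Toy, made-up word lists, purely for illustration.
LEXICON = {
    "angry": {"furious", "annoyed", "hate", "outraged"},
    "happy": {"great", "love", "thanks", "glad"},
    "sad":   {"sorry", "miss", "unfortunately", "lost"},
}

def crude_emotion(text):
    # Count lexicon hits per emotion; pick the best, or "neutral" on no hits.
    words = set(re.findall(r"[a-z']+", text.lower()))
    scores = {emo: len(words & vocab) for emo, vocab in LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(crude_emotion("I hate this, I'm furious"))  # angry
print(crude_emotion("thanks, glad it worked"))    # happy
```

Not state of the art by any stretch, but exactly the kind of thing a ban would somehow have to suppress.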

People can read other people's emotions. You can build a mechanical turk program that's effectively the same as having a personal concierge agent with respect to the regulation.

Like, if you ban reading people's emotions you effectively have to also ban any human interaction.

The problem is making assumptions about why people are experiencing certain emotions, or telling people they are wrong when they say, "I'm actually not angry"

Yes, but this can happen just as easily with human actors as it can with non-human actors.

I suppose the benefit of a human actor is that you can theoretically fine or jail them if they're found to be malicious or sufficiently incompetent.

However, on the other hand, human actors can explain why they're doing the right thing. Even when they are in fact doing the wrong thing. An AI that is broken incomprehensibly can still be determined to be broken. The human actor causing issues can produce very convincing arguments to avoid termination. Also they can bring in donuts every Thursday to stay on the bottom of the termination list.

[And to be clear. I don't trust the technology at all. I just also don't trust the human system either. A system isn't better because innocent people are oppressed by humans instead of by a computer.]

It's still completely reasonable to ban automating it. The impact of having it done in an automated way is completely different from the impact of having it done on a person-to-person basis, for a lot of reasons, starting with scale.

It's also reasonable to ban anything that claims to be better at it than a human.

I don't think it's completely reasonable to ban it. Although, I would agree that it's completely reasonable to restrict nearly anything.

Although, I do like your comment about scale. If you have a system that's 99% successful, then if you apply it to everyone in the US then you're failing 3 million people. That's a problem.

Of course, your system might be mechanical OR it might just be a group of people, each one just "doing their job." From a results-oriented point of view, you might end up with a mechanical system that oppresses fewer people than a people system.

I don't feel good about either one, but I also don't feel good about causing wide scale misery because at least it's people screwing over other people instead of a machine screwing over people.

[Of course it's worth noting that I don't trust the technology at all. It's just that I also don't trust the human solution that it claims it can replace.]

Suppose someone, for example a depressed person, wants to use it to monitor their emotional state and use various automated responses like notifications or distractions to help maintain an emotional state they want?

I think this viewpoint is limited. Sometimes reading peoples' emotions can be an extremely good thing.

I am aware of a healthcare company which currently has a model in production that alerts healthcare providers if a person displays suicidal intent. They have several confirmed instances of deaths being prevented because of interventions taken due to alerts from their model. While I don't think ML should be used to manipulate people's emotional states, I think this is a case where having a model that can read people's emotions is a good thing.

The funny thing is that by totally banning "anything that purports to read people's emotions" you also ban research that would help people. For example you need to train models to detect race in order to make sure there are no racial biases in another model you want to deploy.

> Regulate how?

make a committee to vote to fund a study to create a subcommittee to vote to fund a study on something from a decade ago with a small, non-representative sample size, reaching predetermined conclusions that justify the need for the committee and all related subcommittees, instead of getting anything done.

y'know, government.

The author is not writing down action items after a meeting. It's the job of politicians (presumably) to draft up policy. Activism groups also push for specific regulation sometimes.

But stating your policy opens up another angle of attack: those who wish to undermine your ideas can attack your implementation rather than the concept.

It's at best sloppy to just say "this needs to be regulated, but I can't be bothered to suggest even one possible regulation that would fix anything I find so objectionable." The author has a doctorate and works at Microsoft Research on AI. If she thinks it should be regulated (which is absolutely a fair position to take) I'd much rather have her be the one suggesting regulations than some 80 year old lawyer who's been in Congress since the civil rights era or some special interest lobbyist.

Her understanding of AI does not translate to an understanding of how a society runs or how laws should be made. If anything, many people overgeneralize their expertise at one task to suggest bad solutions to problems from other domains. Especially engineers and PhDs.

I agree that she's still better than most US politicians. I have some faith in politicians from Western Europe to do the correct thing half the time, or at least not be swayed too much by arguments coming from money. Not that much faith, but still...

If someone wants to make an argument "this needs to be regulated" then they need to assert that the benefits of doing so outweigh the drawbacks. The article does not really do that; it points out a class of potential harm, but it does not try to present a reasonable argument that regulation is likely to succeed in preventing that harm; and the balance of pros vs cons can't even be discussed without at least some general idea about what kind of regulation we're talking about.

> those who wish to undermine your ideas can attack your implementation rather than the concept.

Yeah, it's frustrating when people do this. But if you want to have real solution at the end of the day, then you'll need to hammer out all of your implementation issues.

If you produce an implementation that is flawed, then people will be able to evade the spirit of your regulation rendering it useless.

The author makes a lot of claims, but doesn't back them up. After reading this article, I can only conclude that she wanted me to fear new AI applications, but I am not convinced that I should. It's typical media-scare-mongering in the pages of Nature.

Probably also worthwhile to point out that when someone in a Western journal says we should regulate an AI application, we should be clear that they mean: Let's regulate it in the West and let other nations pull ahead. That's the market context.

AI gets fetishized, but it is a product like many others. If you make claims about it, then you should be able to prove them; otherwise you are committing false advertising. That is to say, AI is already regulated. It remains unclear whether AI requires a new regulatory regime. Personally, I doubt it.

AI is only as good as the underlying data. People are TERRIBLE at reading people's emotions. If you cannot feed the model good inputs, how are you going to get decent outputs?

If China and other nonwestern nations use AI like these emotion-reading programs, they will end up with bad outputs. This will end up putting them behind nations that sensibly regulate how AI is used. Sensible regulations are a tall order, to be sure, but we must endeavour towards them.

There’s a lot of throwing things at the wall and seeing what sticks in this article. It also conflates things in its comparisons, like comparing AI to pharmaceuticals. We don’t ingest algorithms; a much better comparison would be to a non-FDA-regulated lab test, many of which exist.

The lie detector comparison is also odd. We never regulated lie detectors; we simply barred their use in courtrooms because their results are unreliable. This is, I think, the core issue I have with articles like this: if we talk about regulation, we should talk about regulating outcomes, not underlying tech. There was no talk of regulating skin-conductivity devices after polygraphs were barred from evidence.

The problem with AI emotion recognition is not necessarily how good it is at identifying emotions. It's the fact that humans can make themselves feel emotions in order to reach their goals. If you can "believe in yourself" to make yourself more confident in succeeding at something, you've done the same type of mental manipulation as a compulsive liar who believes their own lie.

It's not that it's broken in practice: the theory itself doesn't work.

This is a problem for humans too, but non-technical people take anything a computer spits out as gospel truth! We intuitively know it's hard for a human to really read a human. Not so for computers.

Don't overlook the existence of professional actors. Conmen and various methods of crime require significant acting ability. I do not believe it is possible to create an AI sophisticated enough to identify skilled professional acting or for that matter ordinary deceit.

> Conmen and various methods of crime require significant acting ability

I really don't think they do. The main thing that they require is that you don't listen to the anxiety (or after experience lose the anxiety completely) that everyone can see through you. It's the illusion of transparency that outs a liar, not any codable authenticity in their reaction. "Tells" are nervous tics, and "lie detectors" are sympathetic nervous system arousal detectors.

People can't see through you. A funny thing you learn when doing public speaking is that they can't even see your crippling nervousness unless it expresses itself in stereotypical tics. If your tics are strange, people will have no insight into them (unless they know you.)


Your impression of a professional criminal is ludicrous and appears to be based on movies and comic books. "Tells" do not exist when a person is method acting.

I don’t know if there is anyone that knows for sure if it is possible to create an AI sophisticated enough to detect skilled professional acting.

Though, if we allow our imaginations to take off a bit, one could imagine an AI that can represent the entire mental state of an individual in itself and probe it to determine if you’re acting while you’re acting in front of the AI.

We can start by not identifying this software as AI, and then work through what it can actually do with scientific rigor.

Thinking we can boil emotion down to a well qualified set of states and rules is as reckless as it is presumptuous.

"You like this"


"Computer says otherwise"

Think that through for a while. Each of us is the authority on our intent. We are the authority on who we are, what we feel, and so forth, too.

I won't have anything speaking for me or mine and neither should you.

In 99% of use-cases with ML models like this, the output of the model won’t be the direct output of the system it’s used in. Instead, the output of the model will be fed through as an input to a correlational model/heuristic.

I.e. Amazon doesn’t care what emotion you’re feeling. But it does potentially care whether you seem to be “in a buying mood” — e.g. it might be able to save a lot of money by constraining its ad placements to only be shown to such people. It isn’t going to try to figure out what mood “a buying mood” is — it’s just going to train another model to look at the mood-model output, together with people’s shopping histories, and then learn “people tend to buy things more often when the mood model has this output.”

So it’s not like Amazon will ever assert that you’re in a particular mood. They’ll just assert that you look like someone who’s ready to buy things, with your mood being one input to that judgement. Their perception of your mood doesn’t have to be 100% accurate for that to be helpful; any more than a salesman’s read of your mood does.
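A toy sketch of that two-stage setup (every function name, weight, and number below is made up for illustration): the downstream model never interprets the mood label itself; it just consumes the mood model's raw output vector as one more feature alongside shopping history.

```python
def mood_model(face_embedding):
    # Stand-in for the upstream classifier: returns a probability
    # distribution over opaque mood categories (hard-coded here).
    return [0.1, 0.7, 0.2]

def buying_propensity(mood_scores, purchases_last_30d, cart_value):
    # Stand-in for the learned downstream model. In reality these
    # weights would be fit against conversion data, not hand-picked;
    # the mood features matter only insofar as they correlate with buying.
    mood_weights = [0.2, 1.5, -0.3]
    mood_term = sum(w * s for w, s in zip(mood_weights, mood_scores))
    return mood_term + 0.05 * purchases_last_30d + 0.001 * cart_value

score = buying_propensity(mood_model(None), purchases_last_30d=4, cart_value=120)
show_ad = score > 1.0  # threshold tuned on ad-spend efficiency, not mood accuracy
```

The point is that the mood model only has to be better than chance for the downstream model to squeeze value out of it.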

Ads are not the worry I am expressing here.

Other things, like tests to judge the nature or inclinations of someone are a much higher worry.

"Each of us is the authority on our intent."

Tell that to the courts.

They agree. It's just that a) sometimes people lie about their intent; and, b) sometimes the courts have determined intent is not the most important thing, or even irrelevant.

The judge is the authority. They can do whatever they want. Sometimes they listen to the person, sometimes not.

Sort of. They can't rule that intent is irrelevant if it's a necessary component of a given crime, for example. They have pretty wide latitude to run their courtroom how they wish, rule on admissibility of evidence, etc. but there are a decent number of things they're bound by. The problem is that some of these can only be fought on appeal which has a host of its own issues, including cost.

Yeah, but the stuff they are bound by can be violated by them misapplying the law. There's no real discipline or corrective action other than the appeal you mentioned, but even that doesn't ensure that it won't recur, or that similar mistakes won't happen at the next level.

This happened recently, as an example: there's a law that exempts dogs undergoing field training from being on a leash. There's no definition of field training in statute and the definition in the dictionary is very permissive (basically any action related to training a hunting/working dog). Under the principle of lenity, any ambiguity is supposed to be interpreted in favor of the defendant. The judge ruled that we were just playing fetch, even though we had a letter from a game warden saying the activities would be acceptable under the law. The judge also misapplied facts that had nothing to do with the law, such as whether we have a license or training on how to train dogs, yet there is no state license nor does the law mention any requirement to be trained (it's customary to self-train). He also cited the fact that we didn't have any special equipment with us, which again is nowhere in the law and isn't required. There were other issues and misapplication of law related to rights violations, a motion to dismiss, and even contradictory rulings about trial de novo issues.

Also, at the lowest level the magistrates aren't even required to be lawyers. They can violate rights, misapply law, and even yell at you or tell you they won't hear your side all with no consequences because they claim ignorance, so they're "just mistakes" that you can pay for an appeal to hopefully fix.

There's zero accountability for the police, DA's office, and the courts. We witnessed multiple rights violations, documented lies, and gross misapplication of law, but nobody cares.

Were you, in fact, just playing fetch?

Training has some purpose. And that purpose speaks to intent.

Reads to me like your intent was to play fetch and count on legal ambiguity.

Was it?

I take her hunting and the activities at the park are part of training in a dynamic environment. It's not just fetch.

That is not an answer.

Did anything besides fetch happen at that park?

I am asking to get at intent. It still appears the same to me. I have no new information.

Did you miss the part of the answer that says it's not just fetch?

You also missed a direct question on intent; namely, "Reads to me like your intent was to play fetch and count on legal ambiguity.

Was it?"

So far we have nothing on the table to differentiate fetch play in the park from training, either.


There's the letter from an official agency stating that the actions we perform are compliant with the law. Also, if there's no definition/differentiation then it goes to the defendant under the legal doctrine of lenity or even reasonable doubt. Hell, you can throw entrapment in there too if you got permission/clarification from the government and they later prosecuted you for it.

You can throw in all the stuff you want about not having special equipment or a (non-existent) training license, but those facts alone would not be enough to prove guilt, especially when the majority of the people training don't have a training license or special equipment. Remember, you're supposed to be presumed innocent and the prosecution has the burden of proof. It seems that's not really the case when they take irrelevant facts that apply to the majority of people field training and use them to prosecute people.

No, why? What was your intent? To train your dog?

I am looking really hard to see a solid defense here. I do not have it yet.

I think the law is shit, frankly.

But, all I really have here is that your game of fetch should be permitted because it is part of some other training regimen.

So, what is that training and how does a game of fetch contribute to it?

Put more simply, how is your game of fetch not like any other game of fetch?

Now, you should also know I could give two shits about what you do with your dog. Probably a lot of the stuff I do with mine :D

What you are battling here is people are not so inclined to see your position as genuine. I did not, and thought it worth a go as a general exercise.

So far it has been interesting!

Your actions and framing align most strongly with what I stated earlier, depending on ambiguity to get an off leash game of fetch in the on leash park.

If I were stuck with that scenario and in need of the park, I would just make sure some meaningful and unambiguous training happened, because fuck 'em. No joke. Someone, somewhere may just be wound up two clicks too tight.

Most of what you appear to be depending on falls under wide discretion, however. Whether others are wound up or not, your path is clear:

Maximize their ability to defend discretion in your favor and it will land in your favor more than not.

You appear to be trading on good will as if it were an entitlement. That kind of thing fails often.

Good will is not an entitlement. It can be garnered, cultivated, encouraged, but not expected or demanded.

A nod to the spirit of the laws, some charm and consideration go a loooooong way.

If you have to explain the technicality? Doomed.

All this has gotten me out of a ton of vehicle and outdoors type citations and scenarios.

I could revise my position if you can tell me either:

- something other than fetch happened in the park, or

- how your game differs from the norm, or otherwise was necessary to do in that park and contributed meaningfully to a training regimen of some kind, any articulable kind.

Otherwise a person would likely see you playing with your dog off leash.

You gotta flesh something like this out a LOT more for it to play out favorably in my experience. I am not talking about licenses or any of that either.

I am talking about giving them something solid that speaks to training. It is missing from this discussion and it absolutely should not be. I asked for it multiple times too.

I had similar discussions with others many years ago. A few tweaks in how I go about things like this did result in a dramatic shift in good will coming my way.

Good luck.

Nope, but you missed the question of what happened in the park.


Did anything other than fetch happen?

Then why did you say it was not an answer?

The reason I didn't answer the other question is that it is not relevant.

Another dog barked at and chased our dog after their owner couldn't hold onto the leash.

That's not really important since not being able to hold the leash is a violation of the law. The other party didn't even have a dog license. They decided to call the police because that makes them the "victim" and protects them from their infractions.

Not to mention, it's irrelevant. The law is stated in an absolute liability/immunity fashion. If you are involved in field training, then you are not in violation of the law (civil issues could still be brought if your property caused damages).

Ahh! Here we go.

Something other than fetch happened in the park!

That still leaves the question of whether your activities outside the animal conflict were anything besides fetch and how those activities contribute to some articulable training regimen.

The conflict changes the entire thing and it is relevant!

Now someone has to answer to the other dog owner about why your dog is not on a leash.

In my other comment, I spoke about fleshing this out some. That is what they need.

Secondly, yeah, the path of least resistance is to not step up for you, report the call, and go through the motions.

What incentive do they have to step up for you?

How do they sell it to the other dog owner?

After considerable (and entertaining, by the way) discussion, how your game is differentiated from any game of fetch remains unclear. In my view, this is likely the primary reason it went the way it did for you.

That officer needed a clear, compelling reason to step up in your favor. Did not have it.

I don't and have been looking for it, again as a general exercise.

Nothing personal here man. Just found how this played out interesting.

I said it was not an answer because it implied matters of intent and did not speak to intent directly, nor did it speak to how your game of fetch is part of some articulable training or can be differentiated from any other game of fetch.

You have said, "the actions we take" meaning either you and the dog, or you and others and the dog do something that is permitted, yes?

I did not get clarity on what those actions were.

The actions were the voice and hand commands as well as retrieving sticks and balls.

All of this is moot anyway. After being subjected to pretrial restrictions under a charge they knew to be incorrect and that was amended contrary to code, the case should have been dismissed.

It is moot.

All I can say is your scenario does not include much that bolsters your case.

Very surprised it went to trial. Was any person or animal injured?

Ignoring the fact that obviously you've got a chip on your shoulder with regard to the judicial system, what the judge was doing makes total sense. They're trying to see if you were actually "field training" your dog or if you just wanted to play fetch. Expecting you to play fetch with a leashed dog is a whole other issue entirely.

> we had a letter from a game warden saying the activities would be acceptable under the law

State laws vary but in mine, Game Wardens have identical powers to any other police officer. They have all the same arrest powers and have state-wide jurisdiction. Assuming it's the same in your state, and assuming you received some sort of citation or arrest that landed you in court in the first place, the judge at best has contradictory information from two equally relevant officers of the court. It's not exactly a slam dunk acquittal.

> The judge also misapplied facts that had nothing to do with the law, such as if we have a license or training on how to train dogs, yet there is no state license nor does the law mention any requirement to be trained (it's customary to self-train). And saying that we didn't have any special equipment with us, which again is nowhere in the law and isn't required.

The judge wasn't trying to determine whether you were following the law, this all goes to - as earlier in the thread - intent. If you just want to play fetch with your dog, you won't have any of this. But if you come loaded for bear with a bunch of training implements, treats, balls, whistles, etc., it's hard to argue that you're just playing fetch and not actually engaged in training. Any licensure or equipment would have supported your case.

> There were other issues and misapplication of law related to rights violations, a motion to dismiss

Almost everyone who has ever appeared in front of a court has claimed their rights were violated to the point where it's practically a meme. It's almost never the case.

> Also, at the lowest level the magistrates aren't even required to be lawyers.

Same in my state - they're elected, and mainly hear traffic infractions, zoning disputes, and very small summary offenses. Thankfully, you can appeal to Common Pleas from a magistrate court for something like $30. But I know some states it can be hundreds for an appeal.

> They can violate rights, misapply law

I mean, there are (again, in my state) censure and impeachment proceedings for magistrates, and while not super common they do happen often enough that it's not a huge scandal or anything. Misapplication of law is exactly what the appeals process is for.

> even yell at you

My goodness!

> There's zero accountability for the police, DA's office, and the courts.

Simply not true.

> We witnessed multiple rights violations, documented lies, and gross misapplication of law, but nobody cares.

What's more likely? 1. Every police officer, attorney, and court house, as well as every politician and all the media, doesn't care about widespread systemic violations of basic rights, documented wrongdoing, and misapplication of the law. 2. You're misinformed about the law.

"Almost everyone who has ever appeared in front of a court has claimed their rights were violated to the point where it's practically a meme. It's almost never the case."

A trooper knowingly held an incorrect charge, resulting in pretrial restrictions specific to that charge. It's a violation of both the federal and state constitution to deprive anyone of liberty or property except by the law of the land, and there's nothing in the law allowing one to knowingly hold an incorrect charge. The ADA on the case had this information too and allowed the charge to continue - a violation of the Bar's professional standards. The trooper eventually amended the charge, but lied to the judge, saying it was out of leniency when I even have an IAD report saying it was because he had an incorrect charge and knew it. But they determined it was just a "misunderstanding". Even the rules of criminal procedure prohibit the amending of the charge at that point due to the circumstances.

"Misapplication of law is exactly what the appeals process is for."

That may be, but I think it's negligence to put an unknowledgeable person in a position of power like that. States have laws about those practicing law needing to have a degree and pass the Bar, yet they don't care if a judge understands basic legal terminology. How dumb does one have to be to think that a request to dismiss with prejudice is the person calling you prejudiced? This system design flaw results in delays and costs to innocent people, not to mention undermining the integrity of the system. In my state the filing fee for an appeal is non-refundable, so you can be "fined" just to get a trial with a real judge even if you're innocent.

"What's more likely? 1. Every police officer, attorney, and court house, as well as every politician and all the media, doesn't care about widespread systemic violations of basic rights, documented wrongdoing, and misapplication of the law. 2. You're misinformed about the law."

Considering that an investigative journalist is pitching this story to their editor, that a civil rights lawyer says we have a case but the system doesn't view it favorably unless there's a lot of money involved, and that the statutes and case law supports my interpretation, then I'm leaning towards #1 (but your choices are flawed due to the use of absolutes. It should say that the system will protect the bad members as a means of protecting itself and because they don't want to deal with issues they see to be small).

"'even yell at you' My goodness!"

That is a violation of judicial ethics and conduct...

"'There's zero accountability for the police, DA's office, and the courts.' Simply not true."

This case seems to demonstrate a lack of accountability.

The courts can discover intent. They cannot determine it independently.

Which still makes them the authority...

Lacking some direct statements and/or evidence?


You've done that in the exchange above, not actually speaking directly to intent. The result, when that information is needed, is that it will be obtained indirectly, and if it cannot be determined at all, some other decision will be made.

Physically, in the way humans work, we are that authority. Should we undermine ourselves, yeah that's an issue.

Conflating that state of affairs with what we require courts to do as some proof we do not own our intent isn't helpful.

It's an absolute liability/immunity statute. Intent is not part of the elements of the offense.

See my other comments, it can definitely be part of the defense. Yours specifically.

Well then you better educate the judge and ADA because they believe intent doesn't matter and that the law makes it a per se violation.

Well, no. Remember I went looking for that material and did not find much.

Why is the conclusion always to regulate <technology> and not to regulate workplace surveillance for example?

When reading I actually assumed any such regulation would effectively be "regulate workplace surveillance" of sorts.

The software described would fall into that category.

You do not even have to assume it - here's a quote that sums up the drift of the entire article: "It is time for national regulatory agencies to guard against unproven applications, especially those targeting children and other vulnerable populations."

The article also makes the claim that some of these technologies are based on simplistic and largely disproved psychological assumptions, but again, its opposition is to their use rather than mere existence.

“Guard against unproven applications” is very different from protecting a right to privacy.

Because most people have a status quo bias and instinctively want to embalm the world in the technological state it was in when they were in their 20s and early 30s. A new technology always turns over somebody's applecart. Most people don't have the long view necessary for understanding that all technological progress is good.

> Why is the conclusion always to regulate <technology> and not to regulate workplace surveillance for example?

Why not both? Both the tool and the activity?

Because one is a piece of data that anybody can create and distribute, while the other is a harmful practice that people have been fighting for generations.

> It maps facial features to assign each pupil’s emotional state into a category such as happiness, sadness, anger, disgust, surprise and fear.

Is boredom an emotional state? It's probably worth including, considering the setting.

Coworkers that I've worked with for years will still occasionally ask me if I'm stressed or angry. Apparently that's what I look like in deep concentration, even if I'm totally relaxed and otherwise happy. If a mild case of RBF can throw humans off, how easy would it be for an AI to misjudge?

Which raises the question, why is anyone even considering AI to judge emotional state? For what purpose?

As mentioned in the article, it's because it's based on Paul Ekman's Facial Action Coding System (FACS) work, which infers a person's emotional state from their facial expression. The only reliable markers Ekman found were for those 6 emotions (and contempt and perhaps 1-2 others). I haven't heard of any for boredom.

The misattribution of anger you're describing is pretty common, I think. I'm guessing it's because you furrow your brow in deep concentration (and possibly tense your lower eyelids), which is also common in anger faces. The real giveaway here should be the missing upper eyelid raise (and possible pupil dilation) that you won't be doing but should be there if you're angry. People and AIs that are trained with good visibility of your face shouldn't make these errors very often.

Our biology is hardwired to communicate the emotions we're feeling via our face. We want other people to know how we're feeling because it's important. It could be misused, though (Ekman's FACS work with airport security to detect lies was a failure).

Data driven algorithms are very very useful as moral crumple zones. I don't like this trend of companies replacing traditional decision making with "AI"; the anti-discrimination laws we have are going to be toothless if people can just hide their biases in training sets. Who is held responsible when the algorithm discriminates? Considering how much money these companies are extracting from these algorithms it's really a "heads I win, tails you lose" situation.

I know enough to know that using this in schools, or at work, is a human rights complaint just waiting to happen in my jurisdiction. Do people who are blind, or who have certain mental illnesses, or certain cognitive conditions, or certain conditions like paralysis, end up reported inaccurately or unfavourably by these systems? Oh, and what about ethnicity, culture, religion, race? Any bias there?

Misuse of psychometric tests ("Do you make friends easily?") has resulted in payouts in the UK and Canada, and probably elsewhere. This sort of emotion-detection seems analogous.

Clarity is underrated. There's a confusion here: No AI - no human for that matter - can recognise emotions. At best, they can recognise an expression - a physiological response - associated with an emotion. This distinction is essential because:

- every conference had those emotional-recognition TV screens/camera gizmos, and people had so much fun playing with those, mainly because of how easy it is to fool them

- the same stimulus can trigger every single possible emotion, depending on context and personal patterns

- emotional responses are both instinctual and learned (the Inuit seldom show anger, while other cultures make a spectacle out of it)

This whole AI story is a cold path. Without context, AI is an idiot savant. Wanna see what is actually going on?

I do behavioural research and inform messaging. For this, I need to understand what makes people behave one way or another. Turns out, every emotion is like a little app that tells you what to do. People churn from your app on-boarding process? They encounter an unexpected barrier and their emotional response is mediated by anger, making them ragequit. My intervention is to manage expectations better so the emotional response is diminished. On-boarding rate improves 21%.

All I need is a small randomised sample (200 for every 100k users); I recognise emotional patterns and grade them with something similar to Bayes factors.
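For the curious, here's a rough sketch of what "something similar to Bayes factors" can look like for two observed rates (my own illustration, not necessarily the method described above): a Beta-Binomial Bayes factor comparing "each group has its own rate" against "both groups share one rate", with flat Beta(1,1) priors.

```python
import math

def lbeta(a, b):
    # Log of the Beta function, via log-gamma.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bayes_factor(successes_a, n_a, successes_b, n_b):
    # BF10: evidence for "separate rates" over "one shared rate".
    # With Beta(1,1) priors the binomial coefficients are identical
    # under both hypotheses, so they cancel and are omitted.
    fail_a, fail_b = n_a - successes_a, n_b - successes_b
    log_m1 = lbeta(1 + successes_a, 1 + fail_a) + lbeta(1 + successes_b, 1 + fail_b)
    log_m0 = lbeta(1 + successes_a + successes_b, 1 + fail_a + fail_b)
    return math.exp(log_m1 - log_m0)

# e.g. onboarding completions before vs after an intervention (made-up counts)
bf = bayes_factor(120, 200, 145, 200)
```

A BF10 above 1 favours a genuine difference in rates; below 1 favours no difference.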

I share the feeling of others here: this AI story is just another application of surveillance for the purpose of manipulation. So if it's not stupid, it's evil. And of course our collective response is to avoid or deceive.

How about empathy? Things would be much better if we would feel actually understood not manipulated.

Why can’t we treat an algorithm like a single person? That person may carry their own biases, have a bad day or be just plain stupid. So when I feel that I’m not treated the way I should be, I can appeal to that person’s boss or have a court settle the problem. I understand people like to hide behind “the machine’s decision”, but if a person were to hide behind their boss’s orders, we would hold that boss accountable.

Instead of playing whack-a-mole trying to regulate tech, it would be, IMHO, more effective to regulate the outcomes we don't want. In this case, how about stronger privacy protection for employees/students (and people in general). Then it doesn't matter what crazy tech you come up with to "get around the AI regulation".

Let's also add opaque HR processes like performance reviews / stack ranking to the list.

I swear when I worked for BT it took 6 months for people to get over those ghastly pdp 121's - the system was fiddled to give everyone as low a pay rise as possible.

Should we regulate research into the technology? No, I don't think so. Should we regulate deployment of the technology? Definitely in schools, yes. As far as workplaces go, at least you could argue that employees could find different jobs. But when it comes to schools where people are forced to be? Use of such technology is atrocious. Use of such technology in a school or workplace is a test of submissiveness and of willingness to withstand humiliation. The sane response of a child or employee to such technology would be to beat the camera into small pieces with a baseball bat.

As an aside, I find it interesting that the author only publishes articles about regulating AI in Nature: https://www.nature.com/search?author=%22Kate%20Crawford%22

"Time to regulate AI that interprets human emotions"

"Halt the use of facial-recognition technology until it is regulated"

"There is a blind spot in AI research"

> We believe that a fourth approach is needed.... It also engages with social impacts at every stage — conception, design, deployment and regulation.

Kate Crawford is a famous researcher who focuses on social implications and ethics of AI, so kudos to her for being able to get her work into Nature so reliably! As an aside, that would be a real feat to ONLY publish papers in Nature, I wonder if anyone ever was able to do that.

I'd lean more towards emphasizing the importance of explainability. If a system is opaque, it cannot be trusted. This goes both for AI producing stupid results and for software that relies on security-by-obscurity.

If AI is made explainable, then confidence in AI should more closely map to actual capability (that is, bad AI will not be trusted because its flaws are on display, while good AI will be trusted because its strengths are on display). It will also provide AI developers with a better grasp of what they need to work on.

Can you explain what goes on in human brains?

No, but human brains weren't engineered by humans, and so it is reasonable that we don't currently understand them even as reverse-engineering efforts continue.

AI, by contrast, is engineered by humans, so why is it humans can't explain it?

> human brains weren't engineered by humans

If this were true, the phenomenon of feral children would not exist [1]. Human brains are socially engineered by an array of informal (family, friends, social context) and formal (schooling, governments) institutions.

[1] https://zenodo.org/record/901393

A deep learning network consists of hundreds of thousands of nodes, and the number keeps growing (I didn't look up the actual figure).

You can show the weights of the connections between the nodes, but that wouldn't really help anybody to understand it.
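A rough sense of the scale involved (a hypothetical small fully-connected network, not any particular model): each layer alone contributes (inputs + 1 bias) × outputs parameters, so even a toy network carries half a million individual weights, none individually meaningful.

```python
# Hypothetical small fully-connected network: (fan_in, fan_out) per layer.
layers = [(784, 512), (512, 256), (256, 10)]

# Each layer has (fan_in + 1) * fan_out parameters (the +1 is the bias).
params = sum((fan_in + 1) * fan_out for fan_in, fan_out in layers)
print(params)  # 535818 -- over half a million weights in a "small" net
```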

I don’t really care if an algorithm can tell someone why it concluded x about me. There are some x for which I don’t want any algorithm concluding about me and there isn’t any context that could be provided that would justify it or make me comfortable with it.

If x is true, you have no right to prevent anyone else from concluding it about you. Truth is the ultimate defense against all this anti-science FUD.

You have a few issues here with science and its application in society.

If I develop an application that can tell if you're gay or not, is it my imperative to report you to the authorities so you can be rounded up and disposed of?

I say you can very well develop such applications if you want to, but using them, especially against society at large, is a privilege afforded to you, and one that can be revoked. Lots of other pseudo-scientific bullshit has been foisted on the public, such as eugenics, causing massive amounts of damage, and I'm not looking for the next generation of this that will trend the same way.

What does it mean to use something "against society"? If I run a service that analyses images that people upload and people have the lawful right to upload these images, that's my business. Neither you nor anyone else gets to have a say in private transactions between me and someone else, no matter what form of analysis is being offered. My algorithms do not require your approval.

You may think that neither I nor anyone else has any right to interfere with your private business transactions, but you are wrong.

You and I alike exist at the whims of society, and when your behaviour displeases enough people you will find yourself imprisoned or worse.

so in your opinion, we should have no right to privacy?

I broadcast my facial expression every time I walk outside. I have no right to tell other people they can't look at my face and draw conclusions about me.

It's quite hard to reliably determine human emotions. To quote David Ogilvy:

"People don't think what they feel, don't say what they think, and don't do what they say."

>Such oversight is essential to defend against systems driven by what I call the phrenological impulse: drawing faulty assumptions about internal states and capabilities from external appearances, with the aim of extracting more about a person than they choose to reveal.

I think the reality is a bit more cynical. People are looking to filter on who will provide the right social performance. But that's not as comfortable to admit when you're on the side of using or selling such a system.

<robot voice>We _understand_ you're having a negative experience waiting on hold to speak to a human in our call center. Here at Evil Corp we believe negative emotions are best dealt with through long reflection. To better serve you we will be adding 10 minutes to your hold time. Let's hope you're in a better mood then.</robot voice>


It's interesting how different parts of the face have greater emphasis in terms of expressing emotion across Asia and the west - see 'The Eyes Don't Always Have It' section at https://www.iflexion.com/blog/emotion-recognition-software.

Often I think about how the intonations in my voice are being recorded, stored and analyzed by somebody.

"Hey GOOOGLLLE, play! NPR!" -- me annoyed that Google Home like to switch it up sometimes -- Google: (recognizing the irritation in my voice) here is an ad for therapy, medication or an impulse purchase.

How about we regulate using AI solutions to influence people's behaviour?

Like making them want to vote for someone who will stab them in the back once in office.

Or using AI for population control.

There are huge issues our modern technology creates. We as a species are not ready for all of that. Our institutions are like 100 years behind the state we need them to be in right now.

Are the people in charge of regulating the AI that interprets human emotions better than the AI at the same task? How do we currently regulate humans who interpret other humans' emotions? Fairly easy to point to a case where AI did poorly. Much harder to compare the current process to the new one. Humans can be assholes too.

Who cares whether or not it works? Always-on surveillance constantly monitoring children will destroy what we think of as a normal development path. We could easily, accidentally, and permanently damage an entire generation of humans with this kind of tooling.

Looks like they reinvented the polygraph with the same exact set of problems.

The focus on lack of evidence for effectiveness here seems plainly wrong.

Before you can have workable regulation you need to define what you're regulating, what you're not, what is allowed and what isn't.

Good luck with that.

Just don't send your kids to a school that uses such technology, and don't work for employers that use it.

If you cry for regulation, it is a sign that government has too much power. But more regulation gives governments even more power.

Make sure that your citizens have basic rights. Obviously being judged by an algorithm cannot make for fair laws. Governments can use algorithms for their assessments, but there has to be a line of appeal.

Funny, my view as a European is that governments are the only entities powerful enough to actually keep companies in check. Without them, companies would do to you whatever they liked.

Have you considered that perhaps companies and governments are in this together? OK, governments do slow things down and come up with regulations here and there (if they decide to). IMO, at the end of the day, they both play on the same team...

Call me a sweet summer child but I actually believe that our (German) government is still by the people and for the people.

I'm also a European, and I disagree strongly. Not everybody in Europe is a socialist.

The only thing that guarantees fair treatment is competition (so that for example a worker can quit and work somewhere else). Government rules just breed corruption and inefficiency that harms everybody.

Shopping privatized policy through your wallet/feet just doesn't work. It requires critical mass from consumers and the largest portion of consumers will not, or perhaps are in situations where they cannot, resist with their feet/wallet.

I'll take more powerful democratic government over more powerful unchecked privatized (authoritarian) power any day of the week. I'd be happy if both had less concentrated power and citizens had significantly more power.

The way to give citizens power is through law that is clearly and explicitly designed to empower average citizens against both government and private wealth. We have a fair amount of such protection built into our governmental system that does just this for citizens with respect to government (after all, this sort of authoritarian tyranny is one of the reasons we have the US now). We have significantly less clear and explicit law that protects citizens (consumers) from concentrated private wealth (businesses), giving private wealth a massive grey zone of unscrupulous space to play around in.

This is not an issue of concentrated private wealth. Most schools are run by governments.

You are right, citizens need to be empowered against their governments. But regulations give governments more power, not less.

And if you say the majority of consumers doesn't care about some issue, what makes you think democracy could provide a solution?

You also don't need that many feet to create an alternative school. You just need enough children to justify an extra building and teachers, or supplement the home schooling.

>Just don't send your kids to a school that uses such technology

It's not really a realistic option for most people.

Many folks just can't change schools at will. They simply don't have that kind of flexibility in their life / options.


That's also not an option for people, can you imagine that?

I feel like we're going down one of those "I'm not sure how to tell you that other people's lives aren't as simple as you make them out to be" kinds of conversations.

Schools are already shit in many ways. What is your answer to that? Just wait out the years and hope for the right politicians to be elected and change it? They might give your kids diversity training and teach them they are unworthy scum. They might deny evolution theory. They will mess up your kids. And all you say is "nothing I can do"?

I never said it is simple. You still live in a free country, though. What on earth could deprive you of all options?

>What on earth could deprive you of all options?

Time, Money?

Your idea here about what options are available to people is strange to me. You know people's options to do things in life are limited by a variety of factors, right?

Time and money can be managed. If you don't have enough money, look for a better job (it might help against the time issues, too).

To change schools, you might have to move anyway, so you might as well look for a job somewhere else.

If you really have no other options, then maybe AI that detects emotions is the least important of your worries.

Yeah, that's the kind of hand-waving problems away that I kind of feared. I'm not sure you've got the perspective to really understand what some folks are dealing with in life if you can so easily hand-wave so many things.

I rather consider your kind of argument to be hand-wavy. "Some people simply have no options or opportunities whatsoever." Can you provide some actual examples? And what is even the point of talking about those hypothetical people? Since nothing can be done, their situation can never ever change, there is no point in discussing it.

Yes there may be some people who are too sick to work. Everybody else can go look for another job. And the ones who are too sick to work probably have other people taking care of their kids already. So those other people can go look for another job.

Just get more money 4Head


Let's build an AI to regulate it!

The future of bureaucracy.

Only if you can ensure transparency in all aspects of life, which includes political lobbying, credit scores etc., should you bother with regulating something else, let alone AI-like software. Who the hell do they think creates these systems?

AI doesn’t interpret human emotions if it uses classical logic. It’s just a bias min/maxing machine.

Extreme point of view for a technology that hasn't been developed yet. Why kill something when it's not even alive lol

AI has been playing human straight out of my brain into my friend’s devices

Even live humans can’t do better than random chance at accurately gauging another human’s actual emotional state.

Accuracy varies from person to person but I think humans can do better than random chance.

Even goats can read human emotions from faces to some degree https://royalsocietypublishing.org/doi/10.1098/rsos.180491

"AI" that interprets human emotion is fraud. There is no means to determine the inner state of an individual short of asking them, and then filtering for situational context. Unless you're talking about strapping a portable fMRI machine to the subject's head, it's not possible to determine the emotional state of another individual.

And for what it's worth, I've worked with Ekman. This sub-field is not scientifically possible.

Looking at a person’s face gives me Bayesian evidence about their emotional state, nudging my beliefs in a direction. AIs would be even better at this, given access to not just more data but also superhuman abilities such as detecting heart rate by watching the target’s throat.

To the extent that “no means” is true, surely it’s also true of humans trying to judge the emotional state of other humans?

Not denying the snake oil currently in the sector as a whole, but I think the tech should eventually be able to do anything our minds can do.
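The "Bayesian evidence" framing above is easy to make concrete. A sketch with made-up rates: suppose a furrowed brow shows up in 60% of genuinely angry moments but also 20% of neutral ones; observing it nudges a 10% prior belief toward anger, without getting anywhere near certainty.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: update P(angry) after observing an expression."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

# Illustrative rates only, not measurements.
p = posterior(prior=0.10, p_obs_given_h=0.60, p_obs_given_not_h=0.20)
print(round(p, 2))  # 0.25 -- belief nudged from 10% to 25%
```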

But your mind cannot identify the emotional state of a professional actor. People are far more sophisticated than this immature and naively conceived technology.

A system (or human) does not need to be perfect to be useful. I'll definitely assert that my mind can identify the emotional state of humans around me even if it's not always correct about it and can be cheated by a professional actor; it's right more often than not and it's a very useful capability to have, so I'd say it works because I'm using it all the time, even if it's not perfect.

It would be fraud if and only if it's no better than chance, if it has no correlation whatsoever with the true emotional state - and it's perfectly possible (for both humans and machines) to make an "educated guess" about the emotional state that would - especially in aggregate over many humans - be informative for all kinds of purposes, even if it can be easily cheated by a professional actor. For example, you probably can detect if a particular section of an online class turns people unusually frustrated or unusually sleepy, even if it doesn't affect all people, half of affected people don't show it or aren't detected, and there's a professional actor in the class faking the exactly opposite signal. Also, there are many scenarios where it's not even worth considering antagonistic behavior, where people have no motivation to put any effort into misleading it.

The argument that it's impossible to determine the inner state of an individual with certainty is irrelevant, because no one is claiming that, and it's not a requirement for the described use cases. After all, surveys of "do you like product/person X" provide valuable information even if it's trivial for anyone surveyed to lie. All the system needs to do (which needs to be properly validated, of course) is to provide some metric that in practice turns out to be reasonably correlated with the inner state of the individual, even if it doesn't work in all cases and for all individuals, and that IMHO is definitely achievable.

Perhaps it's more a difference in semantics - what do we call if a system (or human) for identifying some status or truth is halfway between "no better than chance" and "can identify the true status 100% of the time". I would say that it's a feasible system for identifying that thing, and that system works (though it's not perfect); it seems that you would say that such a system does not work - but how would you call or describe such a halfway-accurate system?
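A quick way to see what "halfway between chance and perfect" means in practice: a simulated detector (illustrative, not any real system) that reads the true binary state 75% of the time is clearly imperfect per individual, yet informative in aggregate.

```python
import random

random.seed(0)  # reproducible illustration
n = 10_000
truth = [random.choice([0, 1]) for _ in range(n)]
# Detector reports the truth 75% of the time, flips it otherwise.
guess = [t if random.random() < 0.75 else 1 - t for t in truth]

accuracy = sum(g == t for g, t in zip(guess, truth)) / n
# accuracy lands near 0.75: well above the 0.5 coin-flip
# baseline, well below perfect -- imperfect but not useless.
```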

It is facial expression recognition, not emotion recognition. If the tech/companies stated what the tech actually does, then non-AI-specialists would not make gross assumptions and design a completely nonsensical fictional system on top that they believe has some form of "authority", which they then proceed to enforce.

Won't stop snake oil salesmen from advertising such models and profiting off of them. A substantial portion of the detrimental effects will be shared, regardless of the validity of the algorithm.

As an industry of computer SCIENTISTS, we should make the statement that such systems are not possible and are fraud.

But such systems are possible and aren't necessarily fraud. You'd prefer that reality be other than what it is. I get it. You don't want these systems to work. But they do, and a noble lie saying they don't is still a lie.

Do you really not think it's possible to read emotions from facial expressions? If humans can do it, a machine can do it better. The claim from the article that machines can't read emotions is pure motivated reasoning so distorted and disconnected from reality that it amounts to fraud. It's obviously the case that faces convey emotions. Look around.

As if none of you are aware of actors and method actors that will and do fool any system observing them externally. Identifying the inner emotional state of a 3rd person is not possible, no matter how much you want it to be.

Most people most of the time aren't acting. Facial expressions are great Bayesian indications of emotional state.

Really? I disagree. I believe most of the time people are acting. Most of the time people are in a role they would not care to hold if the choice were theirs.

Facial expressions are a terrible indication of emotional state because humans are multi-layered: what's to say the person with a grimace does not have a toothache, or a bad back - yet otherwise is in a completely normal state for them; if asked, they'd say they are good.

I perceive the majority of people considering this situation to only be considering 1st level effects. I have seriously considered this as a professional ambition, scientifically investigated the situation and discarded the concept as unreliable at best and a fraud engine in reality.

Think about this a bit more, use your scientific-method training. You'll come to the conclusion this is not science; this is pseudo-science and fraud.

Are you telling me that you need an fMRI to relatively reliably classify 100 people into two classes based on whether they're very happy or in great despair?

Of course one can try it out to see if it needs regulating: https://emojify.info
