I want to make interesting / meaningful choices / have input. But none of this stuff (advertising, etc.) really asks me to. I get Amazon ads for things I already bought... but not a single ad for anything on my wish lists (how much more explicit can I get?)
But rather there's all this AI effort to get ME to do a thing... for someone else, and seemingly so they can be lazy. Want to engage with your employees? Talk to them, engage with them, earn their trust. I suspect it will pay off way better than some wonky software.
All these efforts proceed without my input, without bothering to ask: just let the computer tell them, so decisions that should involve me get made without me.
How about some AI with some engagement, where I can have input and tell them "yeah man, that was a good recommendation", "no way, that was way off", or, as far as the mood stuff goes, "not now but maybe later, thanks"?
Any ad engineers here, why does this happen? Amazon knows I just bought an office chair from them, they know I'm a residential customer and presumably they know I'm not gonna buy an office chair every week, so why do they keep advertising office chairs to me?
I find that answer unconvincing and there is no published data to back it up, but it is repeated constantly here.
In theory, since they also have purchase data, their 'purchasing intent' categories could exclude people who have subsequently purchased from $productcategory, for products which are substitutes (but definitely not products which are complements/consumables/collectibles etc., because someone who has just bought guitar/fishing accessories isn't just likely to buy more guitar/fishing accessories in future, they're a lot more likely to than the average person). And yes, they could even make guesses about whether somebody is likely to want more than one office chair based on whether they're purchasing as a business or an individual. But in practice it's not easy when they've got a product inventory so enormous, complex, and dubiously labelled that even the basic product search doesn't work that well, and most of the time the vendors are paying them to show the ads anyway. And it still wouldn't be perfect (ironically, I seriously considered buying two different vacuum cleaners for different use cases this week!)
It's surprising how bad basic targeting is. Facebook allowed ad targeting by sexual orientation, yet even dating websites spent vast sums without bothering to use it, so it's unsurprising that subtler distinctions, like which buyer types only need one of a given office accessory, are often missed.
Also sometimes there may be communication lags between retailers and ad networks. Some retailers may only upload purchase data daily, sometimes there are gaps in uploads etc.
I think most advertisers would prefer to not advertise the same item that people just bought, but it takes perfect execution both technically and from the marketers creating campaigns to pull it off cleanly. When you’re managing dozens of campaigns across half a dozen ad platforms, it’s easy to miss some of the details that matter. And while the wasted ad impressions may annoy the consumers on HN, it is likely a rounding error for advertisers and not a huge cost driver.
But it's probably true that there are a lot of individuals out there who have done this (for whatever reason), and it's the statistical likelihood of people buying a second widget that drives this.
An individual who doesn't do this can't perceive this "fact" of statistics, which is human behavior in aggregate form. Individuals only perceive individual behavior.
Given how widely this policy is criticized, Amazon must be very aware of it and they must continue to do it intentionally. I would trust, given the amount of money on the line, that they see evidence that tells them it's a reasonable choice.
The problem is that in this case, they've encoded something about office chairs into your condensed representation. A better system would be knowledgeable about the kind of purchase and infer whether it's the kind of thing you would buy once, or multiple times.
But then you have the problem of different individuals having the propensity to buy different products at varying intervals. If I like shoes, I might buy many pairs. You might not be into shoes, and so you're not going to buy more than a couple pairs, if that. In this case, how do we train a system to know the difference between the two of us, with our different tastes? The more specific we get with the vector descriptor, the harder it is to compare us to other people (because in high dimension, random vectors are basically orthogonal).
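A quick illustration of that orthogonality point (a toy sketch, nothing from the thread): the cosine similarity between independent random vectors shrinks toward zero as the dimension grows.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

random.seed(0)
for dim in (3, 100, 10_000):
    u = [random.gauss(0, 1) for _ in range(dim)]
    v = [random.gauss(0, 1) for _ in range(dim)]
    print(dim, round(cosine(u, v), 4))
```

At dimension 10,000 the printed similarity is typically within a few hundredths of zero, which is exactly why very specific taste vectors become hard to compare across people.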
And going through all products and trying to determine whether they are the kind of thing you would buy more or less of, given that you just bought one, is expensive and likely difficult. The system ultimately falls back on a positive correlation between history and future purchases.
Another way to think about this is with Bayesian reasoning, where your prior likelihood to buy office chairs affects the posterior. The system's prior is high (you've bought chairs in the past), which scales the posterior (that you'll buy a chair now).
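The Bayesian framing can be made concrete with made-up numbers (every figure below is hypothetical, just to show the mechanics):

```python
def bayes(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule, given P(H), P(E|H) and P(E|not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H = "will buy an office chair this month", E = "has bought one before".
# All numbers are invented for illustration.
base_rate = 0.01        # prior: 1% of shoppers buy a chair in a given month
p_hist_if_buyer = 0.60  # most eventual buyers have bought one before
p_hist_if_not = 0.10    # few non-buyers have

print(bayes(base_rate, p_hist_if_buyer, p_hist_if_not))
```

With these made-up inputs the posterior works out to roughly 5.7%, several times the 1% base rate, which is exactly the kind of lift an ad targeter chases.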
While it seems counter-intuitive to the person who is targeted (examples abound), if you think about it from the perspective of someone selling widgets, the set of people who are in the market for widgets will necessarily include people who just bought one. Maybe they are not satisfied with their purchase? Maybe they want to buy another? Maybe they bought one and it arrived damaged?
Personally, I would think that this kind of "trying to think FOR me" is rather offensive to a lot of people, and probably turns off a fair number of those subjected to it. On the other hand, it is probably true that people exposed to an ad for a widget, immediately after buying one, are statistically more likely to buy another. So as long as that is true (if it is), that's going to be a criterion for ad targeting.
An experiment to try: if you go shopping/searching for vacuum cleaners but don't actually buy one, you are likely to see the same phenomenon.
Why do advertisers pay for this? Because conversion statistics show that showing ads to people shopping for vacuum cleaners does actually result in vacuum cleaner sales. A/B tests back up the causality of the results, so vacuum sellers pay for the ads.
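A minimal sketch of that A/B comparison, using a standard two-proportion z-test on hypothetical campaign numbers (none of these counts come from any real ad network):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: users shown the ad vs. a holdout group that wasn't
z = two_proportion_z(conv_a=330, n_a=10_000, conv_b=250, n_b=10_000)
print(z)  # |z| > 1.96 means significant at the 5% level
```

Here a 3.3% vs 2.5% conversion rate over 10,000 users each gives a z around 3.4, comfortably past the usual 1.96 threshold, which is the kind of evidence that keeps these campaigns funded.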
Why doesn't Amazon remove you from the cohort of vacuum cleaner shoppers after you make a purchase? 1) it would reveal that you made a purchase. 2) Amazon would lose out on additional ad revenue.
I have no inside information on Amazon but it matches what I've seen on other large ad networks.
If the likelihood of buying by someone who already bought the item is 3% and the likelihood of buying by someone else is 1% (simply because there are many more people in the second category), which group would you focus on with your ads?
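Worked out with hypothetical numbers, the trade-off looks like this: the already-bought group converts better per impression, but the much larger second group carries more total sales.

```python
# Invented audience sizes and per-person purchase rates
groups = {
    "repeat buyers":  {"size": 10_000,    "rate": 0.03},
    "everyone else":  {"size": 1_000_000, "rate": 0.01},
}

def expected_sales(group, impressions=1000):
    """Expected sales from showing `impressions` ads to this group."""
    return impressions * group["rate"]

def total_market(group):
    """Expected sales if you could reach the whole group."""
    return group["size"] * group["rate"]

for name, g in groups.items():
    print(name, expected_sales(g), total_market(g))
```

Per 1,000 impressions the repeat buyers yield 30 expected sales versus 10, yet the whole "everyone else" pool is worth far more in total, so advertisers end up targeting both.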
Personally, if I like an item, I'll recommend (and gift) it to my family and friends. I can think of at least 3 items that I bought for myself, liked, and bought for someone else that are normally long-serving items (like a tea kettle or a vacuum).
I can't find the source now, but this reminds me of some trivia about Netflix recommendations and how people use the "My List" feature. Apparently users will add all the serious and "important" films they think they should watch to the list, but when it comes to actual viewing, those are never the films they select. How much of what is on peoples' Amazon wish lists are items that will actually ever be bought (vs being idly pined for), and are they worth paying to advertise for?
This is not at all about "AI". This is about it becoming economically cheap for humans to fuck with one another in yet one more way.
Welcome to humans. They suck.
>Welcome to humans. They suck.
If AI is being developed by humans who have anti-human beliefs like this, we are in trouble.
Humans, at the very least, try to stop and think before doing things.
Counterexample: social insects, pack animals, etc. (i.e. there are plenty of examples of how an individual human's survival chances increase when they're engaged socially with other humans who, by sacrificing an individual advantage, gain a collective advantage, which amounts to a greater individual advantage). Proof: we fucking dominate every corner of this planet.
You are not part of my tribe, or my pack. I’d sell all your private info in a heartbeat if it made me a mint. I have no reason to even visualize you as more than some words on a page with a silly username. I mean, I don’t have your private info so I can’t, but I would. I wouldn’t kill you to make money like your example of social and pack animals.
I would not sell my wife’s info, or my kids info, though. I wouldn’t even sell out my friend network to make money, although some people sure will with MLM.
That might be a technical limitation. AI just sucks at dialogs, keeping context and adjusting pretrained models.
I googled some info about a rival college football team. At some point Google decided I was a fan of that team, and my news feed was flooded with news about that team... and not the team I actually follow.
No amount of user input saying 'don't show me this' would make it go away for more than a month or two, and eventually I just stopped looking at google news and turned off the google news feed entirely in android...
Google News, whatever it does, has entirely disconnected itself from most anything relevant to what I want, to the point that using it is a negative experience compared to some random news site.
Accordingly making decisions based on human feedback could be super difficult / impossible at the moment.
The result, though, seems like a super narrow-minded system that wouldn't know when it is dead wrong...
It's all perspective. It's working for someone. It's just that someone doesn't share your same goals or concerns, like corporations or the government.
Because these models are trained using statistical methods like deep learning, they are fundamentally black-box.
On one hand, that makes them difficult / impossible to understand.
On the other hand, it also restricts the capacity to take feedback directly from users on specific recommendations and use it to incrementally modify future recommendations. "Human-in-the-loop" is a great search term to explore this further.
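As a toy illustration of the human-in-the-loop idea (the item names and the update rule are invented for this example), explicit thumbs-up/thumbs-down feedback can directly reweight what gets recommended next:

```python
# Each item starts with a neutral score; user feedback nudges it up or down.
scores = {"office chair": 1.0, "desk lamp": 1.0, "monitor arm": 1.0}
LEARNING_RATE = 0.5

def feedback(item, liked):
    """Multiplicative update: thumbs-up boosts, thumbs-down dampens."""
    scores[item] *= (1 + LEARNING_RATE) if liked else (1 - LEARNING_RATE)

def recommend():
    """Recommend the highest-scoring item."""
    return max(scores, key=scores.get)

feedback("office chair", liked=False)  # "you just sold me one of these"
feedback("desk lamp", liked=True)
print(recommend())
```

Real systems are vastly more involved, but even this crude update shows how a "no way, that was way off" signal could change the next recommendation instead of being ignored.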
Edit: I see promising attempts at solving this in the latest neurosymbolic approaches to AI. Would love to hear about other alternatives as well.
Humans pounding on the keyboard (or even just selecting from a list) is probably painfully not 'careful / consistent data formatting' and thus a huge obstacle.
But the disconnection and lack of real feedback is the result nonetheless.
You could use this analogy to imagine an ad agency as an emotionally disordered person, and you wouldn't be too far off the mark.
Advertisers DO NOT GIVE A CRAP if you are offended, or annoyed, or troubled. All they care about is if they send out 10,000 ads, they get enough increased sales to offset the costs of the ads.
That's the fundamental algorithm behind all the fancy AI technology.
Also, those Muse headbands that read your alpha brainwaves for meditation are dangerous. While the hardware's technical specifications make it incapable of determining your emotions via code, it can determine whether your mind is in a clear or distracted state. Hence it is used for “meditation” and is considered “beneficial” for that purpose.
But, of course it nearly certainly can be used for perpetual harassment, especially since third-party apps can interface with it. Is your mind clear (according to the headset, via code)? Alright, time to bombard you with harassing adtech. Also, it benefits the “designers” of these apps as it makes you more dependent on the device.
The same goes for other consumer EEG devices. The ones that are used with sleep like the Dreem 2 can be used to make your life a total hell through unethical experimentation and more.
I honestly have no hope on such issues. Spotify AB (a Swedish company), for example, is believed to violate the GDPR. Not only that, they have a patent pending for emotion detection based on your song listening/streaming patterns. There are projects doing this exact thing for Spotify publicly available on GitHub, of course.
Why didn't you buy said product outright and instead put it on a wishlist? Lack of money? Price too high? Dubious quality?
Advertisers' core competency is "selling", often used to sell their own services, sometimes using deception and fraud. So in addition to all of the other explanations of why we get inappropriate ads: some vendors may be paying advertisers to send out ineffective and inappropriate ads because that advertiser conned them into buying the service by lying about how effective the ads or methods actually are.
I think this probably explains about 90% of all advertisements.
But they'd rather advertise random products to me?
Why not instead just advertise related products to what is on my wish list?
A true AI would not be kept bound by those who gave it its start, and no politician or business person is ever going to let something else make the decisions.
Proof: purchasing behaviour, life choices, etc.
In the meantime I get ads for stuff I'm not buying, so I'm not sure anyone knows what 'reliable' is.
"People themselves are one of the least reliable sources you can use to determine what it is they actually want."
I'm not sure the idea that people are bad at predicting what they want makes a lot of sense if there's nothing to show that they're bad at it.
That's nice but it could backfire. What if people give it wrong feedback in order to make it do silly things, then go and post them online as proof of bias or whatever? The AI shouldn't just incorporate user feedback unless we're certain it's reliable.
That's not AI, that's capitalism. It's not profitable to use AI for wholesome and healthy things for people, so nobody does. It's more profitable to cram advertisements into every corner of our life, it's more profitable to mine our personal information and personal friend networks, it's more profitable to sell us a service instead of a tool. The next time you're upset at how technology is being used, think about who benefits and what their politics are, because everything is political.
> It's not profitable to use AI for wholesome and healthy things for people, so nobody does
The purpose of the ML systems I build is to detect when things go wrong in order to keep people safe. These systems are both important and extremely valuable. They just aren't quite as sexy as the latest computer vision or large language models. But that's ok. Normal people only become aware of such systems when things go horribly wrong. It's sort of like plumbing in that regard.
Even if ads were the biggest use of AI today, a glass-half-full view would be that other markets will benefit from the AI development done for ads, as the performance gets good enough for those fields too.
Blaming it on politics leaves you with no actions to take beyond trying to talk people into acting against their incentives - which you can do, but it's not easy to get people to change their minds en-masse.
Yes, which is why it's important to be politically involved; governments are the ones with the power to change incentive structures at the scale of large companies or society-wide. I do agree with your first paragraph, but your second does not make sense to me: it's not "blaming it on politics", politics is the exact mechanism for enacting the changes you refer to in your first paragraph.
And the fact is that humans can't reliably identify emotions from photographs, so there's zero chance for AI.
While I don't agree with much of what she writes, the psychologist Lisa Feldman Barrett gives an excellent overview of why at the start of her book "How Emotions are Made".
But a short explanation is that 1) emotions are expressed not just with the face but with the whole body, 2) the expression of emotion is a sequence of changes over time that can't be captured in a single frame, 3) there is extreme overlap in the expression of emotions (e.g. a seemingly grimaced look could either be painful anguish or ecstatic surprise) and 4) while we do have instinctual expressions for emotions (smile/frown/etc.), we learn culturally how and when to hide, show, fake, or otherwise modify these by habit, as well as consciously do so for our immediate purposes.
So we humans obviously do recognize emotions in others, but we do so with a huge variety of cues, we still frequently get it wrong, and we do a far better job when we share the same culture, and also know the individual person well. (Think how some people express anger with aggressive shouting, others by silently withdrawing.)
Most evidence for the idea that emotions can be identified (and cross-culturally) comes from the research of Paul Ekman, but this was limited to 6 "basic" emotions with photographs of highly exaggerated/stylized faces performed by actors. It has no relevance to whether real-life, non-"staged" emotions can be reliably detected.
So any AI products designed to supposedly recognize emotions are necessarily snake oil. I assume they are reliably measuring facial expressions, but facial expressions simply cannot be mapped to actual emotions in any kind of reliable way in the real world.
You are making far too extreme a judgment here. You also say humans cannot "reliably" identify emotions from photographs. This is just setting the bar far too high. Humans can quite often identify emotions from photographs. It's far better than random chance and you shouldn't call this level of performance "snake oil".
You can criticize AI emotion detection as likely to lead to some sort of dystopia, but saying it can't possibly work is just going to lead you to an incorrect analysis of the situation.
I don't think it is setting the bar too high, "far better than random" can still be abysmal.
You're right that humans are better than random, but there is a wide range of emotions. Assume for the sake of argument there are 10 emotions to identify and evenly distributed in frequency, then random is getting it right 10% of the time. Let's say humans get it right 30% of the time from still photographs of real-life situations (not staged) -- I'm making this specific number up but depending on the type of experiment, it's a reasonable ballpark one.
That's still horribly low, wrong more than half the time. I can't think of any situation where that could warrant any kind of responsible decision-making. If we were talking 95% accuracy then there could be value, I'm not asking for 99.99% accuracy. But if reliable detection is something like 30%, I stand by calling this "snake oil" even if it's not just random.
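Both halves of this argument can be true at once, as a line or two of arithmetic shows (the 30% figure is the hypothetical one from above):

```python
n_classes = 10
random_acc = 1 / n_classes   # 10% by guessing uniformly over 10 emotions
human_acc = 0.30             # hypothetical human accuracy on still photos

print(human_acc / random_acc)  # times better than chance
print(1 - human_acc)           # ...yet the error rate
```

Three times better than chance and wrong 70% of the time: "far better than random" and "too unreliable for responsible decision-making" are not contradictory claims.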
Well Dr. John W. Thackery had too much pride to be associated with that and refused the money.
Cut to the modern-day version: I'm a computer programmer and I write code to correctly place humans from video feeds into various buckets based on the RGB colors I scan from 2D bitmap arrays... and I say it places them into buckets like "possibly upset", "very much in flow state", or "bored, brain not engaged". And then I sell this program/SaaS product to companies and they get value from these buckets... how is that still true snake oil?
E.g. take class A of students from one culture and class B of students from another culture, and assume the students all feel the same level of moderate anger, but the two cultures have different "anger display rules" as they're called.
The detector might judge all the students from one class as angry, but none from the other. Aggregating students into classes doesn't fix anything.
Similarly, if you took some students who have a certain expression when they're happy, and other students who have the same expression when they're afraid, and mixed them together in the same class, then whatever emotion the detector is interpreting the expression to mean is going to be triggered by completely unrelated things, and the signal won't be meaningful at all.
Yes, there are differences - especially in identification of intensity. But in general detection is pretty reliable across cultures.
> The results show that the overall accuracy of emotional expression recognition by Indian participants was high and very similar to the ratings from Dutch participants. However, there were significant cross-cultural differences in classification of emotion categories and their corresponding parameters. Indians rated certain expressions comparatively more genuine, higher in valence, and less intense in comparison to original Radboud ratings. The misclassifications/confusion for specific emotional categories differed across the two cultures, indicating subtle but significant differences between the cultures.
Unfortunately that's simply not true. Your example merely shows reliability between two countries, not all countries. You can research, for example, how Japanese people tend to smile when angry, sad, or embarrassed, in total contrast to Westerners.
This is one of the classic "extreme" examples, but it demonstrates how my point holds -- errors are not random but are highly correlated with the culture -- not to mention the subculture, the individual (the person who hides all emotion when angry), etc.
Isn't that a pretty sane and basic assumption?
Seriously though, especially with individuals, why shouldn't you use all the information available, if it produces results even an iota better than blind acceptance (and for most people, their empathic-sense does)?
In the context of real life, where we know people, are interacting with them, and are reading cues from their whole body and their movement over time, then we're much better at assessing emotions and we'd be pretty foolish to ignore emotional signals.
- Identifying emotions of humans who grew up in a different culture, for which the ML algorithm may have piles of data from every part of the world and you don't
- Identifying emotions of babies based on instinctual behavior
- Identifying emotions of people who are undergoing a trauma response that most humans are not trained to recognize
- Identifying emotions of people during a negotiation process in which the people are actively trying to hide their emotions from humans but nevertheless leak certain signs of their true emotions
I believe there are a lot of examples of AI doing things that humans can't. Typically it's a matter of scale, but sometimes it's a matter of some correlations being beyond the basic capabilities of humans. I could be wrong, but I'm not sure that this is the best metric for whether AI could do something.
But second, the correlations of individual facial muscle contractions with emotions has been extensively studied and it's far noisier, inconsistent, or completely devoid of a signal than many people assume. In academic terms there's no such thing as a reliable emotional "signature" to be gleaned from facial muscle activation.
So the point is, it appears that the raw data simply isn't there for the AI to detect patterns that humans can't. Detecting emotions requires far more data points outside of facial muscle activation -- such as the ones I listed.
Yes it can. You get enough labelling and it overcomes the unreliability of detection by any single human.
Get 70 people to label the same face. A random distribution over the 7 Ekman emotions will give 10 each, and any non-random variation from that is a signal. Do that over enough faces and you'll get something to train on.
(Also, no reason why it needs to be face pictures. It could be 2 second video snippets for example).
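A sketch of how aggregated labels could be tested for signal (the vote counts below are invented): with 70 raters over 7 categories, uniform guessing predicts about 10 votes each, and a chi-square statistic quantifies the deviation from that.

```python
# Hypothetical labels from 70 raters over Ekman's 7 emotion categories
observed = {"anger": 4, "disgust": 2, "fear": 5, "happiness": 41,
            "sadness": 6, "surprise": 9, "contempt": 3}

# Under "no signal", every category gets an equal share of the votes
expected = sum(observed.values()) / len(observed)  # 10 votes each

# Chi-square goodness-of-fit statistic against the uniform distribution
chi_sq = sum((o - expected) ** 2 / expected for o in observed.values())
print(chi_sq)  # large values mean labels cluster on one emotion
```

Here the statistic comes out at 115.2; for 6 degrees of freedom, anything above roughly 12.6 is significant at the 5% level, so these (invented) labels would count as a strongly non-random signal worth training on.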
The fundamental problem is that multiple emotions can result in the same facial expression, and the same emotion can result in multiple facial expressions.
It doesn't matter how many people label a face. It won't get over the fundamental issue that there is no 1-1 mapping between emotions and facial expressions.
Actual experts disagree. For example Ekman, for the basic emotions at least (see the link I posted above). To quote:
In the late sixties, Izard and Ekman in separate studies each showed photographs from Tomkins’ own collection to people in various literate cultures, Western and Non-Western. They found strong cross-cultural agreement in the labeling of those expressions. Ekman closed the loophole that observing mass media might account for cross-cultural agreement by studying people in a Stone Age culture in New Guinea who had seen few if any outsiders and no media portrayals of emotion. These preliterate people also recognized the same emotions when shown the Darwin-Tomkins set. The capacity for humans in radically different cultures to label facial expressions with terms from a list of emotion terms has replicated nearly 200 times.
The FACS coding system has pretty good experimental support, across multiple cultures. As noted on this article, it can distinguish between fake smiles and real smiles etc. See https://en.wikipedia.org/wiki/Smile#Duchenne_smile for more on this.
Even if you disagree with that and think that motion is required (which is an argument that has some validity) there is still no reason to think a computer system can't do it.
Ekman was at the forefront of creating facial emotions expression research, but while I do believe he convincingly showed that there are basic instinctual, stereotypical, cross-cultural facial expressions, the field now generally accepts that FACS coding does not map reliably to emotions in real-life contexts.
In other words, just because a smile is a cross-cultural instinctual display of happiness does not mean a given individual who is happy will be smiling, or that a given individual who is smiling will be happy.
The same applies for labeling, if a particular facial expression can occur in an indistinguishable manner because of three different causes, that's okay; there's nothing wrong with a gold standard label in training data marked as "we've observed that 50% people had this expression because they were angry and 30% because they were in pain, but no happy or sleepy people had an expression like that".
Furthermore, for the purpose listed in the article, you don't need to determine an absolute value of some emotional state, but you need to detect large shifts in it (i.e. you get a baseline that implicitly adjusts for part of individual and cultural differences) - i.e. whether the audience (in aggregate!) now looks significantly more frustrated than the same people looked 30 minutes ago.
That's interesting. Where can I read about these issues?
I think this is a pretty decent example of a Neural Net becoming better at something than humans are.
What was the situation? What expressions was the person making in the previous moments. As you said, body language. How the person relates to the viewer.
A video clip would be better but still would miss a lot.
Please consult the current state of AI before giving an a priori rebuttal. Emotions can also be detected in text, voice and video, not just in static images. Sentiment detection in text is widely used.
The premise of the article is valid, whenever AI systems are being deployed they need to be ethically vetted, but the writing is superficial and 'too emotional'.
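For context, the crudest form of text sentiment detection is just a lexicon lookup; real systems are far more sophisticated, but this toy version (word lists invented here) shows the basic idea:

```python
# Minimal lexicon-based sentiment scorer (a toy illustration, not a product)
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def sentiment(text):
    """Score text by counting positive vs negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this product, it is excellent"))
```

Note that such a scorer reads the sentiment expressed in the text, which is not the same thing as the emotion felt by its author, a distinction raised elsewhere in this thread.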
Sentiment detection in text, for example, is not emotion detection. That's not a rebuttal.
As an example of what I mean, you can feed a NYT news article written by a journalist experiencing regret, or boredom, or elation, due to events at home, while writing it -- and it won't pick up a thing. But it may pick up a valid sentiment of outrage, which was intended by the author as the desired effect upon the reader, but which was not once felt as an emotion by the author while writing it.
Sentiment detection is far easier because the author of text generally tries to communicate a sentiment, consciously, in their text, and text is intended to have fairly unambiguous meaning. Neither is generally true of emotions on the face.
And nowhere did I say emotions only exist in static images. Of course they are communicated through voice, video, etc. -- I don't know how anyone could think otherwise.
Your initial post argued about it being difficult to detect emotion given just a single frame, then concluded that "any AI products designed to supposedly recognize emotions are necessarily snake oil".
And I stand by the snake oil comment for the reasons stated, with the state of AI and emotions research today.
I'm not saying it can never be done, but that we would need major breakthroughs in both.
Sentiment detection continues to be an entirely different area of research.
As a layman, I had previously thought Paul Ekman's work refuted this claim, but it seems like it's been called into question. (Ekman was reportedly the basis for the main character in the show Lie to Me)
I wish people and machines would Not try to infer too much from this sort of thing. Treat people as ends in themselves.
Unfortunately snake oil is a profitable business, and unreliability isn't going to stop people from selling and using these systems.
It's going to be like AI-driven captchas: the people subjected to these systems are going to have to learn to give the systems the answers they want, even when it's obvious they're incorrect. You'll have students concentrating on expressing an "attentive" face rather than concentrating on the material.
Yes, I'm talking about you, reCAPTCHA. Highway buzz strips aren't crosswalks.
This is absolutely not the case. I work in adjacent fields, and it is not uncommon for machine-learning-based solutions to reliably outperform any single human.
In most cases ML aggregates human judgement. Provided some humans are right more of the time than a random distribution would give (and there is enough data) then a good learning algorithm will find it.
Now there are issues with this, of course: inappropriate generalisation, inappropriate overfitting, etc.
And this comment is correct - data outside a single photo will make the system much more reliable. 2 second video sequences should help a lot.
But the general principle is valid: an AI/ML should be able to reliably outperform humans.
So emotions can be thought of as a language (albeit a pretty lossy one). If AI can one day understand spoken/written language, I don't see why it can't also understand the language of emotions.
The second doesn't necessarily follow from the first—there are plenty of things humans are bad at that computers are good at.
You are probably right that current products aren't very capable, and may even be snake oil.
But also I'd be quite surprised if this isn't possible in the near future.
The leading example of the article was about Zoom videos, not single frames, and in a school setting you presumably know what culture the students are from, so I don't think you need it to work cross-culturally. And I don't doubt that people can deliberately hide their emotions, but many settings are not adversarial, so there would be no reason to do so. E.g., if you imagine a teacher giving a lecture to a large audience over Zoom and using a "puzzlement meter" to see if they seem to be getting it, then any audience member who does feel puzzled will only gain from frowning and making the lecturer slow down.
In general, I think from my experience talking to people over video chat, some emotional information does get communicated over the video, so in principle computer programs should be able to pick up on it too.
The products in question are almost certainly assessing still frames from video. There's been vanishingly little research on the time component of emotions, even though we know it's hugely important.
> you need it to work cross-culturally
This would then require separate training models for e.g. individual subcultures within countries as well as detecting which subcultures participants belong to. That is also far beyond anything being done currently.
> but many settings are not adversarial, so there would be no reason to do so
It has nothing to do with an adversarial setting, people try to hide their emotions constantly. They hide that they're fed up with their boss in front of colleagues, they hide that they're stressed with their spouse at work, they hide that they're worried the project will fail. We are emotionally regulating virtually all the time.
> some emotional information does get communicated over the video, so in principle computer programs should be able to pick up on it too
In principle, yes, but in practice the emotional content is so dependent upon your cultural and individual mental model of the person that you would need to model their entire psychology. "What does that long pause mean?" The point is that emotional signals are so incredibly complex and vary so much from person to person, that the difficulty of accurately decoding emotions is more akin to AI that can make conceptual inferences and hold a genuinely intelligent conversation, as opposed to mere pattern recognition.
But there was a little test that made the rounds a few years back that was just "Is this smile genuine or not?", and I was able to correctly identify the real and fake smiles in all 20 out of 20 pictures.
The pictures were gathered by asking subjects to smile and then taking the picture, or by taking a picture after they responded to a joke, so I guess there is still room for some of the "genuine" smiles to be faked.
The issue is the emotion behind it... you can be quite happy without smiling at all, and people who pose for the camera all day long learn to give a 100% convincing Duchenne smile no matter how they're feeling that day.
i.e. you know you got a rush, then by examining context, you attach a (theory-theory) word to that rush. Both AI and photographs are bad at context.
You say that like human intelligence is some sort of upper bound on intelligence.
A small group of activists could volunteer to be our ethical conscience. They prefer story telling and emotional appeal to logic, and being warriors all day long, are very tired of everyone else not understanding. And we can't understand unless we go through the same ideological retraining they have, so anything we do is by default failing to reach them. But since they know better, it should be their job anyway to give us the ethical approval. /s
Take the example of predicting terrorism threats based on facial cues of stress or fear. Machines lack context, which a qualified human would otherwise take into consideration. You can be stressed out because you might be accompanying a child, or fearful that you might miss a flight. If a TSA agent deports someone simply because a machine recommended it, that would be inhumane.
People like to argue that more regulation will adversely affect automation and/or growth/scaling of technologies and businesses. Growth is important but it must not cost us our humanity.
If a TSA agent deports someone simply because the machine recommended it, then that is a problem. But if a TSA agent deports someone simply because they were having a bad day, then that is also a problem.
I guess I don't care what tools those in authority are using (their intuition or a mechanical intuition or a database lookup), but what I care about is whether or not innocent people without the ability to navigate an appeal system are being erroneously penalized.
And ultimately, it's the responsibility of management to make sure that the people/tools they assign are doing a good job. If they fail, then management needs to find new people/tools. If management fails to do that, then all the AI regulation in the world isn't going to do much good.
However, if they hire a bunch of power hungry sociopaths who are very good at hiding their malicious oppression and who also bring in donuts every Thursday to stay on their bosses good side, then the situation could easily lead to worse outcomes for the people who have to deal with this system.
If we create a computer system that oppresses 1% of innocent people, then that is a problem. However, I don't consider it a win to ban the computer system and replace it with a human system that oppresses 10% of innocent people. Like, the situation isn't better because humans are oppressing humans instead of a computer doing the oppression.
That's why I was focused on management. I don't care that things are going badly for some specific technology related reason. It's management's job to fix it regardless. If management can't rely on the technology for regulatory reasons, then they might rely on people who do just as bad of a job. And hey, that scenario is even better for management because if they hire a bad actor who gets caught then that person faces the consequences and not them.
You're saying "I don't trust AI, I want human supervision", but this works both ways. Sometimes we don't trust the humans and would prefer a neutral AI. Humans do terrible things to other humans. Who's going to review my appeals? What are their biases? Are they any more trustworthy than a model?
"Your honor, we investigated the source code and discovered that there was technically no AI being utilized. It was a poorly formatted series of if-statements. Honestly, I wouldn't even consider this a program. How it avoids crashing the moment it's run is beyond me."
Like, does AI mean some statistical method is used? That large data sets were used? Are you using some declarative language like prolog?
Ultimately the only definition I can see them coming to will reduce to "person uses computer to do thing I don't like." And somehow I don't think that's going to actually help.
Or maybe it more directly tries to sense emotion by using your webcam to look at your facial expression.
That would be acceptable collateral damage, if it couldn't be permitted without opening the door for the creation of systems that used the information against the analyzed people's interests.
I'd be willing to at least consider a carefully crafted exception. The problem being that when you write such an exception, it tends to be awfully easy to introduce loopholes that, in practice, allow using uninformed pseudo-consent, or false consent with no real alternative available, to use information against people.
I.e., my point is that such a ban would have to be very extensive and invasive, with obvious censorship of small, simple segments of code and whole avenues of basic knowledge. Given some data, you can get a crude emotion detector from facial images or text messages - not state of the art, but somewhat accurate - with something like ten lines of code and no previous skill in "emotion analysis", just by applying generic ML approaches. I can't imagine how such a ban could be implemented; so many people would still be able to easily make such systems whenever they wanted to that the ban wouldn't be effective.
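To illustrate how low the barrier is, here is a deliberately crude, hypothetical sketch of such a "detector" for text messages, using nothing but invented keyword lists and a word-overlap count. Real products wrap a trained model around the same basic idea; this toy exists only to show that a naive version is trivial to write:

```python
# Toy "emotion detector" for text messages. All word lists are invented for
# illustration -- this is the crude, not-state-of-the-art end of the spectrum.
from collections import Counter

EMOTION_WORDS = {
    "anger":   {"furious", "hate", "annoyed", "angry", "rage"},
    "joy":     {"great", "love", "happy", "awesome", "thanks"},
    "sadness": {"sad", "sorry", "miss", "cry", "lonely"},
}

def detect_emotion(message: str) -> str:
    # Score each emotion by how many of its keywords appear in the message.
    words = set(message.lower().split())
    scores = Counter({emo: len(words & vocab) for emo, vocab in EMOTION_WORDS.items()})
    emo, hits = scores.most_common(1)[0]
    return emo if hits else "neutral"

print(detect_emotion("I hate this and I'm so angry"))  # anger
print(detect_emotion("thanks that was awesome"))       # joy
print(detect_emotion("the meeting is at noon"))        # neutral
```

Swapping the keyword lookup for an off-the-shelf text classifier gets you to "somewhat accurate" with barely more effort, which is exactly why a ban on the capability itself seems unenforceable.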
Perhaps you could regulate the application of automated decision making to decisions about people and requiring some review-and-override mechanisms (GDPR has some limited aspects of that), but it's a very different area than just banning knowledge and skills that already exist and are relatively widespread.
Like, if you ban reading people's emotions you effectively have to also ban any human interaction.
I suppose the benefit of a human actor is that you can theoretically fine or jail them if they're found to be malicious or sufficiently incompetent.
However, on the other hand, human actors can explain why they're doing the right thing. Even when they are in fact doing the wrong thing. An AI that is broken incomprehensibly can still be determined to be broken. The human actor causing issues can produce very convincing arguments to avoid termination. Also they can bring in donuts every Thursday to stay on the bottom of the termination list.
[And to be clear. I don't trust the technology at all. I just also don't trust the human system either. A system isn't better because innocent people are oppressed by humans instead of by a computer.]
It's also reasonable to ban anything that claims to be better at it than a human.
Although, I do like your comment about scale. If you have a system that's 99% successful, then if you apply it to everyone in the US then you're failing 3 million people. That's a problem.
Of course, your system might be mechanical OR it might just be a group of people each one just "doing their job." From a result oriented point of view, you might end up with a mechanical system that oppresses less people than a people system.
I don't feel good about either one, but I also don't feel good about causing wide scale misery because at least it's people screwing over other people instead of a machine screwing over people.
[Of course it's worth noting that I don't trust the technology at all. It's just that I also don't trust the human solution that it claims it can replace.]
I am aware of a healthcare company which currently has a model in production which alerts healthcare providers if a person displays suicidal intent. They have several confirmed instances of deaths being prevented because of the interventions taken due to alerts from their model. While I don't think ML should be used to manipulate people's emotional states, I think this is a case where having a model that can read people's emotions is a good thing.
make a committee to vote to fund a study to create a subcommittee to vote to fund a study on something from a decade ago with a small, non-representative sample size, reaching predetermined conclusions that justify the need for the committee and all related subcommittees, instead of getting anything done.
But stating your policy opens up another angle of attack: those who wish to undermine your ideas can attack your implementation rather than the concept.
I agree that she's still better than most US politicians. I have some faith in politicians from Western Europe to do half the correct thing, or at least not be swayed by arguments coming from money too much. Not that much faith, but still..
Yeah, it's frustrating when people do this. But if you want to have real solution at the end of the day, then you'll need to hammer out all of your implementation issues.
If you produce an implementation that is flawed, then people will be able to evade the spirit of your regulation rendering it useless.
Probably also worthwhile to point out that when someone in a Western journal says we should regulate an AI application, we should be clear that they mean: Let's regulate it in the West and let other nations pull ahead. That's the market context.
AI gets fetishized, but it is a product like many others. If you make claims about it, then you should be able to prove it, otherwise you are committing false advertising. That is to say, AI is already regulated. It remains unclear whether AI requires a new regulatory regime. Personally, I doubt it.
If China and other non-Western nations use AI like these emotion-reading programs, they will end up with bad outputs. This will put them behind nations that sensibly regulate how AI is used. Sensible regulations are a tall order, to be sure, but we must endeavour towards them.
The lie detector comparison is also odd. We never regulated lie detectors, we simply barred their use in courtrooms because their results are unreliable. This is I think the core issue I have with articles like this - if we talk about regulation, we should talk about regulating outcomes, not underlying tech. There was no talk of regulating skin conductivity devices after polygraphs were barred from evidence.
It's not broken in practice, the theory doesn't work.
This is a problem for humans too, but non-technical people take anything that a computer spits out as word of truth! We intuitively know it's hard for a human to really read a human. Not so for computers.
I really don't think they do. The main thing that they require is that you don't listen to the anxiety (or after experience lose the anxiety completely) that everyone can see through you. It's the illusion of transparency that outs a liar, not any codable authenticity in their reaction. "Tells" are nervous tics, and "lie detectors" are sympathetic nervous system arousal detectors.
People can't see through you. A funny thing you learn when doing public speaking is that they can't even see your crippling nervousness unless it expresses itself in stereotypical tics. If your tics are strange, people will have no insight into them (unless they know you.)
Though, if we allow our imaginations to take off a bit, one could imagine an AI that can represent the entire mental state of an individual in itself and probe it to determine if you’re acting while you’re acting in front of the AI.
Thinking we can boil emotion down to a well qualified set of states and rules is as reckless as it is presumptuous.
"You like this"
"Computer says otherwise"
Think that through for a while. Each of us is the authority on our intent. We are the authority in who we are, what we feel, and so forth too.
I won't have anything speaking for me or mine and neither should you.
I.e. Amazon doesn’t care what emotion you’re feeling. But it does potentially care whether you seem to be “in a buying mood” — e.g. it might be able to save a lot of money by constraining its ad placements to only be shown to such people. It isn’t going to try to figure out what mood “a buying mood” is — it’s just going to train another model to look at the mood-model output, together with people’s shopping histories, and then learn “people tend to buy things more often when the mood model has this output.”
So it’s not like Amazon will ever assert that you’re in a particular mood. They’ll just assert that you look like someone who’s ready to buy things, with your mood being one input to that judgement. Their perception of your mood doesn’t have to be 100% accurate for that to be helpful; any more than a salesman’s read of your mood does.
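A minimal sketch of that stacking idea, with every feature name and weight invented for illustration: the second-stage model consumes the mood model's raw score as just one numeric feature among others, and outputs a purchase propensity rather than ever asserting a mood.

```python
# Hypothetical second-stage model. The weights below are made up, not learned;
# in practice they would come from training against purchase outcomes.
import math

def buy_propensity(mood_score: float, purchases_30d: int, cart_items: int) -> float:
    """Toy logistic model combining a mood-model output with shopping history."""
    z = 1.8 * mood_score + 0.4 * purchases_30d + 0.9 * cart_items - 2.5
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

# The ad system compares this score to a threshold; the mood label itself
# is never surfaced or asserted anywhere.
print(round(buy_propensity(0.9, 3, 2), 3))  # engaged-looking user, recent buyer
print(round(buy_propensity(0.1, 0, 0), 3))  # disengaged-looking user, no history
```

Note that the mood score only nudges the output; a noisy mood model can still be useful here, which matches the salesman analogy above.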
Other things, like tests to judge the nature or inclinations of someone are a much higher worry.
Tell that to the courts.
This happened recently as an example, there's a law that exempts dogs undergoing field training from being on a leash. There's no definition of field training in statute and the definition in the dictionary is very permissive (basically any action related to training a hunting/working dog). Under the principle of lenity, any ambiguity is supposed to be interpreted in favor of the defendant. The judge ruled that we were just playing fetch, even though we had a letter from a game warden saying the activities would be acceptable under the law. The judge also misapplied facts that had nothing to do with the law, such as if we have a license or training on how to train dogs, yet there is no state license nor does the law mention any requirement to be trained (it's customary to self-train). And saying that we didn't have any special equipment with us, which again is nowhere in the law and isn't required. There were other issues and misapplication of law related to rights violations, a motion to dismiss, and even contradictory rulings about trial de novo issues.
Also, at the lowest level the magistrates aren't even required to be lawyers. They can violate rights, misapply law, and even yell at you or tell you they won't hear your side all with no consequences because they claim ignorance, so they're "just mistakes" that you can pay for an appeal to hopefully fix.
There's zero accountability for the police, DA's office, and the courts. We witnessed multiple rights violations, documented lies, and gross misapplication of law, but nobody cares.
Training has some purpose. And that purpose speaks to intent.
Reads to me like your intent was to play fetch and count on legal ambiguity.
Did anything besides fetch happen at that park?
I am asking to get at intent. It still appears the same to me. I have no new information.
So far we have nothing on the table to differentiate fetch play in the park from training as well.
There's the letter from an official agency stating that the actions we perform are compliant with the law. Also, if there's no definition/differentiation then it goes to the defendant under the legal doctrine of lenity, or even reasonable doubt. Hell, you can throw entrapment in there too if you got permission/clarification from the government and they later prosecuted you for it.
You can throw in all the stuff you want about not having special equipment or a (non-existent) training license, but those facts alone would not be enough to prove guilt, especially when the majority of the people training don't have a training license nor special equipment. Remember, you're supposed to be innocent and the prosecution has the burden of proof. It seems that's not really the case when they take irrelevant facts that apply to the majority of people field training and use them to prosecute people.
I am looking really hard to see a solid defense here. I do not have it yet.
I think the law is shit, frankly.
But, all I really have here is your game of fetch should be permitted because it is part of some other training regimen.
So, what is that training and how does a game of fetch contribute to it?
Put more simply, how is your game of fetch not like any other game of fetch?
Now, you should also know I could give two shits about what you do with your dog. Probably a lot of the stuff I do with mine :D
What you are battling here is people are not so inclined to see your position as genuine. I did not, and thought it worth a go as a general exercise.
So far it has been interesting!
Your actions and framing align most strongly with what I stated earlier, depending on ambiguity to get an off leash game of fetch in the on leash park.
If I were stuck with that scenario and in need of the park, I would just make sure some meaningful and unambiguous training happened, because fuck 'em. No joke. Someone, somewhere may just be wound up two clicks too tight.
Most of what you appear to be depending on falls under wide discretion however, others wound up or not, your path is clear:
Maximize their ability to defend discretion in your favor and it will land in your favor more than not.
You appear to be trading on good will as if it were entitlement. That kind of thing fails often.
Good will is not an entitlement. It can be garnered, cultivated, encouraged, but not expected or demanded.
A nod to the spirit of the laws, some charm and consideration go a loooooong way.
If you have to explain the technicality? Doomed.
All this has gotten me out of a ton of vehicle and outdoors type citations and scenarios.
I could revise my position given you can tell me either:
Something other than fetch happened in the park
how your game differs from the norm, or otherwise was necessary to do in that park and contribute meaningfully to a training regimen of some kind, any articulable kind.
Otherwise a person would likely see you playing with your dog off leash.
You gotta flesh something like this out a LOT more for it to play out favorably in my experience. I am not talking about licenses or any of that either.
I am talking about giving them something solid that speaks to training. It is missing from this discussion and it absolutely should not be. I asked for it multiple times too:
I had similar discussions with others many years ago. A few tweaks in how I go about things like this did result in a dramatic shift in good will coming my way.
Did anything other than fetch happen?
The reason I didn't answer the other question is that it is not relevant.
Another dog barked at and chased our dog after their owner couldn't hold onto the leash.
That's not really important since not being able to hold the leash is a violation of the law. The other party didn't even have a dog license. They decided to call the police because that makes them the "victim" and protects them from their infractions.
Not to mention, it's irrelevant. The law is stated in an absolute liability/immunity fashion. If you are involved in field training, then you are not in violation of the law (civil issues could still be brought if your property caused damages).
Something other than fetch happened in the park!
That still leaves the question of whether your activities outside the animal conflict were anything besides fetch and how those activities contribute to some articulable training regimen.
The conflict changes the entire thing and it is relevant!
Now someone has to answer the question about why your dog is not on a leash to the other dog owner.
In my other comment, I spoke about fleshing this out some. That is what they need.
Secondly, yeah path of least resistance is to not step up for you, report the call, and go through the motions.
What incentive do they have to step up for you?
How do they sell it to the other dog owner?
After considerable (and entertaining, by the way) discussion, how your game is differentiated from any game of fetch remains unclear. In my view, this is likely the primary reason it went the way it did for you.
That officer needed a clear, compelling reason to step up in your favor. Did not have it.
I don't and have been looking for it, again as a general exercise.
Nothing personal here man. Just found how this played out interesting.
You have said, "the actions we take" meaning either you and the dog, or you and others and the dog do something that is permitted, yes?
I did not get clarity on what those actions were.
All of this is moot anyways. After being subjected to pretrial restrictions under a charge they knew to be incorrect and was amended contrary to code, the case should have been dismissed.
All I can say is your scenario does not include much that bolsters your case.
Very surprised it went to trial. Was anyone, or any animal, injured?
> we had a letter from a game warden saying the activities would be acceptable under the law
State laws vary but in mine, Game Wardens have identical powers to any other police officer. They have all the same arrest powers and have state-wide jurisdiction. Assuming it's the same in your state, and assuming you received some sort of citation or arrest that landed you in court in the first place, the judge at best has contradictory information from two equally relevant officers of the court. It's not exactly a slam dunk acquittal.
> The judge also misapplied facts that had nothing to do with the law, such as if we have a license or training on how to train dogs, yet there is no state license nor does the law mention any requirement to be trained (it's customary to self-train). And saying that we didn't have any special equipment with us, which again is nowhere in the law and isn't required.
The judge wasn't trying to determine whether you were following the law; this all goes to - as earlier in the thread - intent. If you just want to play fetch with your dog, you won't have any of this. But if you come loaded for bear with a bunch of training implements, treats, balls, whistles, etc., it's hard to argue that you're just playing fetch and not actually engaged in training. Any licensure or equipment would have supported your case.
> There were other issues and misapplication of law related to rights violations, a motion to dismiss
Almost everyone who has ever appeared in front of a court has claimed their rights were violated to the point where it's practically a meme. It's almost never the case.
> Also, at the lowest level the magistrates aren't even required to be lawyers.
Same in my state - they're elected, and mainly hear traffic infractions, zoning disputes, and very small summary offenses. Thankfully, you can appeal to Common Pleas from a magistrate court for something like $30. But I know some states it can be hundreds for an appeal.
> They can violate rights, misapply law
I mean, there are (again, in my state) censure and impeachment proceedings for magistrates, and while not super common they do happen often enough that it's not a huge scandal or anything. Misapplication of law is exactly what the appeals process is for.
> even yell at you
> There's zero accountability for the police, DA's office, and the courts.
Simply not true.
> We witnessed multiple rights violations, documented lies, and gross misapplication of law, but nobody cares.
What's more likely? 1. Every police officer, attorney, and court house, as well as every politician and all the media, doesn't care about widespread systemic violations of basic rights, documented wrongdoing, and misapplication of the law. 2. You're misinformed about the law.
A trooper knowingly held an incorrect charge, resulting in pretrial restrictions specific to that charge. It's a violation of both the federal and state constitutions to deprive anyone of liberty or property except by the law of the land, and there's nothing in the law allowing one to knowingly hold an incorrect charge. The ADA on the case had this information too and allowed the charge to continue - a violation of the Bar's professional standards. The trooper eventually amended the charge, but lied to the judge, saying it was out of leniency, when I even have an IAD report saying it was because he had an incorrect charge and knew it. But they determined it was just a "misunderstanding". Even the rules of criminal procedure prohibit amending the charge at that point due to the circumstances.
"Misapplication of law is exactly what the appeals process is for."
That may be, but I think it's negligence to put an unknowledgeable person in a position of power like that. States have laws about those practicing law needing to have a degree and pass the Bar, yet they don't care if a judge understands basic legal terminology. How dumb does one have to be to think that a request to dismiss with prejudice is the person calling you prejudiced? This system design flaw results in delays and costs to innocent people, not to mention undermining the integrity of the system. In my state the filing fee for an appeal is non-refundable, so you can be "fined" just to get a trial with a real judge even if you're innocent.
"What's more likely? 1. Every police officer, attorney, and court house, as well as every politician and all the media, doesn't care about widespread systemic violations of basic rights, documented wrongdoing, and misapplication of the law. 2. You're misinformed about the law."
Considering that an investigative journalist is pitching this story to their editor, that a civil rights lawyer says we have a case but the system doesn't view it favorably unless there's a lot of money involved, and that the statutes and case law supports my interpretation, then I'm leaning towards #1 (but your choices are flawed due to the use of absolutes. It should say that the system will protect the bad members as a means of protecting itself and because they don't want to deal with issues they see to be small).
"'even yell at you'
That is a violation of judicial ethics and conduct...
"'There's zero accountability for the police, DA's office, and the courts.'
Simply not true."
This case seems to demonstrate a lack of accountability.
You've done that in the exchange above, not actually speaking directly to intent. The result, when that information is needed, is it will be obtained indirectly, and if it cannot be determined at all, some other decision will be made.
Physically, in the way humans work, we are that authority. Should we undermine ourselves, yeah that's an issue.
Conflating that state of affairs with what we require courts to do as some proof we do not own our intent isn't helpful.
The software described would fall into that category.
The article also makes the claim that some of these technologies are based on simplistic and largely disproved psychological assumptions, but again, its opposition is to their use rather than mere existence.
Why not both? Both the tool and the activity?
Is boredom an emotional state? It's probably worth including, considering the setting.
Coworkers I've worked with for years will still occasionally ask me if I'm stressed or angry. Apparently that's what I look like in deep concentration, even if I'm totally relaxed and otherwise happy. If a mild case of RBF can throw humans off, how easy would it be for an AI to misjudge?
Which raises the question, why is anyone even considering AI to judge emotional state? For what purpose?
The misattribution of anger you're describing is pretty common, I think. I'm guessing it's because you furrow your brow in deep concentration (and possibly tense your lower eyelids), which is also common in anger faces. The real giveaway here should be the missing upper eyelid raise (and possible pupil dilation) that you won't be doing but should be there if you're angry. People and AIs that are trained with good visibility of your face shouldn't make these errors very often.
Our biology is hardwired to communicate the emotions we're feeling via our face. We want other people to know how we're feeling because it's important. It could be misused, though (Ekman's FACS work with airport security to detect lies was a failure).
Misuse of psychometric tests ("Do you make friends easily?") has resulted in payouts in the UK and Canada, and probably elsewhere. This sort of emotion-detection seems analogous.
> every conference had those emotional recognition tv screens/camera gizmos and people had so much fun playing with those, mainly because of how easy it is to fool them
> the same stimulus can trigger every single possible emotion, depending on context and personal patterns
> emotional responses are both instinctual and learned (Inuit seldom show anger, while other cultures make a spectacle out of it)
This whole AI story is a cold path. Without context, AI is an idiot savant. Wanna see what is actually going on?
I do behavioural research and inform messaging. For this, I need to understand what makes people behave one way or another. Turns out, every emotion is like a little app that tells you what to do. People churn from your app onboarding process? They encounter an unexpected barrier and their emotional response is mediated by anger, making them ragequit. My intervention is to manage expectations better, and the emotional response is diminished. Onboarding rate improves 21%.
All I need is a small randomised sample (200 for every 100k users); I recognise emotional patterns and grade them with something similar to Bayes factors.
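For readers unfamiliar with the grading idea: a Bayes factor is just a likelihood ratio between two hypotheses given the observed data. A minimal sketch, with all rates and counts invented for illustration, comparing "the intervention raised the onboarding rate" against "nothing changed":

```python
# Hedged sketch: Bayes factor between two point hypotheses about an
# onboarding rate. H1: rate improved to 0.60; H0: rate stayed at 0.50.
# The sample numbers are invented, not from the study described above.
from math import comb

def binom_lik(k: int, n: int, p: float) -> float:
    """Binomial likelihood of k successes in n trials at rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def bayes_factor(k: int, n: int, p1: float, p0: float) -> float:
    """Likelihood ratio P(data | H1) / P(data | H0) for point hypotheses."""
    return binom_lik(k, n, p1) / binom_lik(k, n, p0)

# Suppose 121 of a 200-user sample completed onboarding after the change.
bf = bayes_factor(121, 200, p1=0.60, p0=0.50)
print(f"Bayes factor: {bf:.1f}")  # values well above 1 favour H1
```

With a sample of 200 per cohort this kind of ratio can already be decisive, which is why small randomised samples suffice for grading patterns.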
I share the feeling of others here: this AI story is just another application of surveillance for the purpose of manipulation. So if it's not stupid, it's evil. And of course our collective response is to avoid or deceive.
How about empathy? Things would be much better if we would feel actually understood not manipulated.
I swear when I worked for BT it took 6 months for people to get over those ghastly pdp 121's - The system was fiddled to give everyone as low a pay rise as possible.
"Time to regulate AI that interprets human emotions"
"Halt the use of facial-recognition technology until it is regulated"
"There is a blind spot in AI research"
> We believe that a fourth approach is needed.... It also engages with social impacts at every stage — conception, design, deployment and regulation.
If AI is made explainable, then confidence in AI should more closely map to actual capability (that is, bad AI will not be trusted because its flaws are on display, while good AI will be trusted because its strengths are on display). It will also provide AI developers with a better grasp of what they need to work on.
AI, by contrast, is engineered by humans, so why is it that humans can't explain it?
If this were true, the phenomenon of feral children would not exist. Human brains are socially engineered by an array of informal (family, friends, social context) and formal (schooling, governments) institutions.
You can show the weights of the connections between the nodes, but that wouldn't really help anybody to understand it.
If I develop an application that can tell if you're gay or not, is it my imperative to report you to the authorities so you can be rounded up and disposed of?
I say you can very well develop such applications if you want to; using them against society at large is a privilege afforded to you, and one that can be revoked. Lots of other pseudo-scientific bullshit has been foisted on the public, such as eugenics, causing massive amounts of damage, and I'm not looking for the next generation of this that will trend the same way.
You and I alike exist at the whims of society, and when our behaviour displeases enough people you will find yourself imprisoned or worse.
"People don't think what they feel, don't say what they think, and don't do what they say."
I think the reality is a bit more cynical. People are looking to filter on who will provide the right social performance. But that's not as comfortable to admit when you're on the side of using or selling such a system.
"Hey GOOOGLLLE, play! NPR!"
-- me annoyed that Google Home like to switch it up sometimes --
Google: (recognizing the irritation in my voice) here is an ad for therapy, medication or an impulse purchase.
Like making them want to vote for someone who will stab them in the back when in the office.
Or using AI for population control.
There are huge issues our modern technology creates. We as a species are not ready for all of that. Our institutions are like 100 years away from the state we need them to be in right now.
Good luck with that.
If you cry for regulation, it is a sign that government has too much power. But more regulation gives governments even more power.
Make sure that your citizens have basic rights. Obviously being judged by an algorithm cannot make for fair laws. Governments can use algorithms for their assessments, but there has to be a line of appeal.
The only thing that guarantees fair treatment is competition (so that for example a worker can quit and work somewhere else). Government rules just breed corruption and inefficiency that harms everybody.
I'll take more powerful democratic government over more powerful unchecked privatized (authoritarian) power any day of the week. I'd be happy if both had less concentrated power and citizens had significantly more power.
The way to give citizens power is through law that is clearly and explicitly designed to empower average citizens against both government and private wealth. We have a fair amount of such protection built into our governmental system that does just this for citizens with respect to government (after all, this sort of authoritarian tyranny is one of the reasons we have the US now). We have significantly less clear and explicit law that protects citizens (consumers) from concentrated private wealth (businesses), giving private wealth a massive grey zone of unscrupulous space to play around in.
You are right, citizens need to be empowered against their governments. But regulations give governments more power, not less.
And if you say the majority of consumers doesn't care about some issue, what makes you think democracy could provide a solution?
You also don't need that many feet to create an alternative school. You just need enough children to justify an extra building and teachers, or supplement the home schooling.
It's not really a realistic option for most people.
Many folks just can't change schools at will. They simply don't have that kind of flexibility in their life / options.
I feel like we're going down one of those "I'm not sure how to tell you that other people's lives aren't as simple as you make them out to be." kind of conversation.
I never said it is simple. You still live in a free country, though. What on earth could deprive you of all options?
Your idea here about what options are available to people is strange to me. You know people's options in life are limited by a variety of factors, right?
To change schools, you might have to move anyway, so you might as well look for a job somewhere else.
If you really have no other options, then maybe AI that detects emotions is the least important of your worries.
Yes there may be some people who are too sick to work. Everybody else can go look for another job. And the ones who are too sick to work probably have other people taking care of their kids already. So those other people can go look for another job.
Even goats can read human emotions from faces to some degree https://royalsocietypublishing.org/doi/10.1098/rsos.180491
And to the point: I've worked with Ekman. This sub-field is not scientifically possible.
Not denying the snake oil currently in the sector as a whole, but I think the tech should eventually be able to do anything our minds can do.
It would be fraud if and only if it's no better than chance - if it has no correlation whatsoever with the true emotional state. It's perfectly possible (for both humans and machines) to make an "educated guess" about the emotional state that would be informative for all kinds of purposes - especially in aggregate across many humans - even if it can be easily cheated by a professional actor. For example, you probably can detect if a particular section of an online class turns people unusually frustrated or unusually sleepy, even if it doesn't affect all people, half of the affected people don't show it or aren't detected, and there's a professional actor in the class faking the exact opposite signal. Also, there are many scenarios where it's not even worth considering antagonistic behaviour, where people have no motivation to put any effort into misleading it.
The argument that it's impossible to determine the inner state of an individual with certainty is irrelevant, because no one is claiming that, and it's not a requirement for the described use cases. After all, surveys of "do you like product/person X" provide valuable information even if it's trivial for anyone surveyed to lie. All the system needs to do (which needs to be properly validated, of course) is provide some metric that in practice turns out to be reasonably correlated with the inner state of the individual, even if it doesn't work in all cases and for all individuals, and that IMHO is definitely achievable.
Perhaps it's more a difference in semantics - what do we call a system (or human) for identifying some status or truth that is halfway between "no better than chance" and "can identify the true status 100% of the time"? I would say it's a feasible system for identifying that thing, and that the system works (though it's not perfect); it seems you would say that such a system does not work - but what would you call such a halfway-accurate system?
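The "halfway-accurate" point can be made concrete with a toy simulation: a detector that is right only 70% of the time on any individual still separates groups cleanly in aggregate. The accuracy figure, the frustration rates, and the sample size below are all illustrative assumptions, not measurements.

```python
import random

random.seed(0)

def noisy_detector(true_state, accuracy=0.70):
    # A "halfway" detector: reports the true state 70% of the time,
    # the opposite state the other 30%
    return true_state if random.random() < accuracy else not true_state

def estimate_rate(true_rate, n=10_000, accuracy=0.70):
    # Fraction of a population of n people the detector flags as frustrated
    flagged = sum(
        noisy_detector(random.random() < true_rate, accuracy) for _ in range(n)
    )
    return flagged / n

# Two class sections with genuinely different frustration rates (10% vs 40%)
low = estimate_rate(0.10)
high = estimate_rate(0.40)
print(low, high)
```

The raw readings are biased toward 0.5 (a true rate p shows up as roughly 0.4*p + 0.3 here), but the gap between the two sections survives the noise, which is exactly the aggregate-level signal the parent comment describes.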
Do you really not think it's possible to read emotions from facial expressions? If humans can do it, a machine can do it better. The claim from the article that machines can't read emotions is pure motivated reasoning so distorted and disconnected from reality that it amounts to fraud. It's obviously the case that faces convey emotions. Look around.
Facial expressions are a terrible indication of emotional state because humans are multi-layered: what's to say the person with a grimace doesn't just have a toothache or a bad back, yet is otherwise in a completely normal state for them - if asked, they'd say they're good.
I perceive the majority of people considering this situation to be considering only first-order effects. I have seriously considered this as a professional ambition, scientifically investigated the situation, and discarded the concept as unreliable at best and a fraud engine in reality.
Think about this a bit more, use your scientific-method training. You'll come to the conclusion that this is not science; this is pseudo-science and fraud.