As someone working on a similar project (specifically, emotion recognition), I'm very interested to hear what such a product would need to look like to not be considered unethical. So far, from the comments, I see that:
- it should be made clear that you are being analyzed, e.g. by a big yellow sticker near the camera
- no raw data should be stored
- it should be used to collect statistics, not identify individuals (?)
Is that sufficient to consider such software fair use? What else would you add to the list to make it reasonable? (A rough sketch of what I mean by the last two points is below.)
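To make the "statistics only, no raw data" points concrete, here is a minimal sketch of the kind of data handling I have in mind (hypothetical Python; the names and structure are my own, not taken from any existing product): each frame is classified in memory and immediately discarded, and only anonymous counts per expression are kept.

```python
# Hypothetical sketch: keep only anonymous, aggregate expression counts.
# Frames are analyzed in memory and never written to disk; no identity,
# timestamp, or image data is retained -- just per-label counts.
from collections import Counter

# The 6 basic emotions mentioned below in the thread.
BASIC_EMOTIONS = {"happy", "sad", "angry", "surprised", "disgusted", "fearful"}

class ExpressionStats:
    def __init__(self):
        self._counts = Counter()

    def record(self, label: str) -> None:
        """Count one detected expression; the frame itself is discarded by the caller."""
        if label in BASIC_EMOTIONS:
            self._counts[label] += 1

    def summary(self) -> dict:
        """Percentages per expression, e.g. {'happy': 65.0, 'sad': 20.0, ...}."""
        total = sum(self._counts.values())
        return {k: round(100.0 * v / total, 1) for k, v in self._counts.items()} if total else {}
```

Whether even this minimal form of collection is acceptable without opt-in consent is exactly what I'm trying to find out.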
The ethics are simple: If you don't get opt-in consent, many subjects are going to feel violated. Even if you assure them you anonymize the data.
It's not enough to put a warning next to the camera, because you've already captured them at that point and it's too late. If anywhere, it would need to be at the entrance to the store.
If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.
If a store has a warning label on the device engaging in this, it's bad because it's too late not to enter the store. I'm gonna complain right there to the store manager, maybe call the cops or sue. I'll be vocal about actively hating the store, the brand, the manager, the employees.
If I went to a store engaging in this without telling and I later learn about it, then I'm calling Keyser Söze and it's pitchforks and beheading time.
I suppose it will take a couple more generations of brainwashing to have the population ready to accept this kind of highly invasive technology. IIRC, about 10 years ago the Big Brother Awards went to a French industry group for their blue book describing how to condition a population, over a few generations, to accept surveillance and control technology.
In some countries, like Sweden [1], this type of camera deployment is strictly regulated. A quick reading of the rules in Sweden tells me that you are unlikely to get permission for this easily.
Sure, but I would bet money that GP isn't in a country that has those kinds of regulations. I'm addressing their emotional overreaction to something that requires rational action (such as the law you mention).
Thanks for the detailed comment. A couple of clarifying questions, if you don't mind:
- do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? Do you consider that bad/dangerous/unethical, or does it sound OK to you?
- if instead of a camera there was a person looking at customers and recording their observations, would you feel bad about it?
> do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? Do you consider that bad/dangerous/unethical, or does it sound OK to you?
Yes, I do know that loyalty cards are used to collect data; I think most people do. I don't take loyalty cards for this reason, and I'm glad that they are opt-in, although there is some financial pressure to take them.
> if instead of a camera there was a person looking at customers and recording their observations, would you feel bad about it?
I would feel bad about it and I think the person should ask my permission first.
> I would feel bad about it and I think the person should ask my permission first.
But if that person just memorizes customer reactions to understand how people on average react to particular products or actions, that's OK, right? Because this is what sellers and business owners do to improve their product. So is it about human-to-human interaction, or some more subtle detail? I'm biased here, so sorry if I'm missing something obvious in this situation.
It's not subtle. If there were an employee standing next to you or following you around the store with a clipboard, taking notes on you and your facial expressions, only then would you have something approaching an apples-to-apples comparison. Stop pretending that's normally "what sellers and owners do" and you're just automating it. Customers Do Not Want.
Loyalty cards are opt in. Security surveillance can be unsettling but customers understand its purpose and limited scope. What you're proposing is more invasive, and most people would not appreciate it if they knew about it.
Look, give up trying to justify it. Customers don't want it. You should find another application for this technology.
Well, I definitely do have other applications for it. For example, I know that similar software has been used in labs to gauge people's reactions to videos and game features, in mobile applications to improve interaction with the user, etc.
My interest in offline applications comes from personal experience: recently we demonstrated our product (not emotion recognition, but it also captures the user's face) at an exhibition. People came to our stand, used the product (so they clearly opted in), asked questions, etc. After two days, we asked the woman at the stand, "What do people think about the product?" "Well, in general, they are interested," she answered. Not much info, right? Definitely less informative than "65% expressed mild interest, 20% had no reaction, and 5% found it disgusting, especially this feature".
So I'm not trying to justify this use case - my life doesn't depend on it - but I find it stupid not to try to understand your clients better when it doesn't introduce a moral conflict.
Loyalty cards are opt-in, and it's common knowledge that their explicit purpose is to track information about you -- so I think a lot more people find them (or at least their existence) acceptable.
This is true; the vast, vast majority of stores have surveillance of some kind. Advertising's impact on the human psyche shouldn't be underestimated. In the last ten years alone this has become increasingly apparent, whether it's photoshopped images that manipulate our conception of beauty or dating apps that make us feel lonely enough to install them.
I don't mind being recorded at a checkout.
Recording me to decipher my thoughts instead of my actions crosses a line.
> If a store has a warning at the door that this happens inside, it's good because now I can avoid getting inside the store and silently hate and boycott the brand.
Given how widespread this kind of monitoring is, this approach is basically "I will punish the honest stores and reward a sneaky store by spending my money there instead".
It's actually pretty simple - don't use it on people.
Advertising? No. Sales? Definitely no.
Augmenting that single-player video game so that it adjusts content depending on emotions and gaze of the player? Ok. Better if the player is explicitly told the game will track their reactions though.
EDIT:
Also, another angle. Even for advertisers / "sales optimization", I'd forgive you if that was a local, on-site system. But if it's meant as a SaaS, with deployments connected to vendor's butt, then I am gonna actively try to screw with it if I learn there's one installed anywhere I frequent. Hopefully new EU laws will curb that, though.
I had it on for so long that, for my brain, the two words are basically the same now :). I keep forgetting about it when I edit a post (the substitution happens on display, not on submit).
The only ethical possibility, in my view, is for it to not exist. I don't like having my emotions manipulated to make me buy more stuff, regardless of whether I am anonymized or not. But then again, I think similarly of a lot of non-targeted advertising; the recognition just adds a whole new level of disgusting.
What about collecting statistics to make better decisions? Let's say you go to your favorite jeans store but find that the current collection is disgusting. Would it sound OK to you if some sort of system analyzed your attitude toward the product in order to improve it in later versions?
You can do controlled user-testing sessions with that system, with specific people who have consented and are potentially compensated in some way. You will most probably also get more useful information out of that.
But being recorded en masse in a shop for that purpose is, I would think, invasive. I would totally avoid that shop if I knew that system was in place.
Also, I am not convinced that statistics lead to better design, so that would most probably be just wasteful, but that is another discussion :p
But isn't A/B testing doing the same thing? If it's different, what's the key difference between analyzing facial expression in a shop and analyzing user interaction on a site (given that both have a warning about data collection)?
The former has very weak (if any) consent, is indiscriminate, is easy to abuse, and creates unnecessary conflict (e.g. I really like those pants sold in that shop, but I don't like being tracked; OK, I'll go in just this time...).
In A/B testing there is a clear context and purpose, and it is normally negotiated between actual humans.
There might be middle grounds (A/B testing can be done online, use facial recognition, and run at relatively large scale), but for me it has to be opt-in (as in, you have to fill out a form to join), not opt-out (as in, leave this webpage/shop if you don't want my tracking). This is more challenging for the organization proposing the tracking, because they need to provide some value in exchange so people actually sign up. But in the long term, being founded on the principle of consensual, mutually beneficial relationships can only be good for your organization/brand, right? As in: at last, a company that treats people like humans!
I've watched the same hysteria and concerns play out for all kinds of privacy-invading systems. Social Security Numbers, credit cards, computer IDs, camera GPS, search queries, and piles of other tech all started popularization with "OMG, evil people can do evil things with that data to hurt you!" Save for a few holdouts (usually much older folks), society at large has completely accepted all that tech as normal. It just takes about a decade of the convenience overwhelming the fear. I despise SSNs, but cutting my taxes by $1500/yr (child tax credit) is motivating; credit cards suck for a zillion reasons, but swipe-and-done is so damn convenient; no question Google has an impressive model of me, but those search results are enormously useful; etc.
I have a question: do you do trials in a controlled environment, where you actually have proper feedback and a distinct comparison between self-described state and machine analysis? Because in my opinion, systems like these are a modern-day version of astrology (at least when they are based only on vision and not on things like fMRI imaging or proper psychological analysis). I know seriously depressed people who always had a smile on their face (maybe a social coping mechanism), as well as "angry"-looking coworkers who were in a very good mood most of the time. It is very easy to misinterpret a person's mood when the only "interaction" is looking at them and analyzing their facial features.
When these things are used outside a controlled environment, things could get even more complicated: weird beards, squinting because of excessive sunshine, reflective glasses, etc.
1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.
2. Correlation between real and detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less common expressions like depression behind a smiling face. This topic is definitely interesting, and I'm pretty much sure it's possible to implement given enough training data, but right now we are concentrating on other things.
> I'm pretty much sure it's possible to implement given enough training data
No, the point of the comment you are replying to is that there are emotions that are impossible to detect using external information. We can hide our emotions very well. The question is to what extent does external emotional information provide monetizable value?
> there are emotions that are impossible to detect using external information. We can hide our emotions very well.
This is an assumption which I'm not convinced holds true. Just because we can hide our emotions well enough to fool other people doesn't necessarily imply that it's impossible to detect them using external information.
I've seen some pretty convincing expressions of emotion from actors who were obviously not, at the time, in love, in pain, in anger, etc. I'm pretty certain that any system that takes only your facial appearance and no other information (e.g. that you are an actor currently on a movie set) would have no way to distinguish genuine from false emotion.
If we are talking about professional actors trying to trick the tracker, then yes, it would be pretty hard to design software to overcome that. But most people aren't that good, and although they can mislead their friends or colleagues, they still leave clues that give away a fake emotion. If you are interested, Paul Ekman has quite a lot of literature on the topic, e.g. see [1].
But humans are notoriously bad at picking up on details, and things like music and scenery can have a big impact on our perceptions. I'm not saying that you're wrong, I'm just saying that in the absence of any evidence to the contrary I don't think we can just assume that you're right.
The fact that you are already working on this says something about your willingness to do something distasteful to earn a paycheck. A slightly bigger paycheck would probably mean you would relax your morals even further. Even if your product starts out with stickers and no logging, I bet it doesn't stay that way for long. Not if the paycheck can be bigger.
To me, the only way this could be ethical is if the project is limited to a private space (a lab, a room in your house). No data is ever recorded, it runs on an air-gapped computer, it doesn't try to identify people, and everyone subjected to it has to be fully aware of what it is about and the implications it can have.
The opinion of most people here is that facial recognition technology is for the most part creepy if used in a commercial setting. Mine is slightly different. I think it's fine if you want to show me a different advertisement or sign based on an interpretation of my expression. I also think it's ok if you track my position within a mall and see which shops I visit and when. I would draw the line at attaching personally identifiable information to that data such as a name or a photo of my face. Anyone who decides to do that is probably going to cause harm/inconvenience to me (I don't want junk email from shops I happened to visit but didn't buy in).
I should also state that I think the first use of my data is ultimately unprofitable. Will the extra cents you make by advertising Cinnabon to depressed-looking people, or hairdressers to long-haired people, really offset the cost of developing such a system? Applied to a broad population, any customisation effects will be marginal.
I also believe that a non-anonymous tracking system is much more likely to produce value for companies, and it would be very tempting, when gathering anonymous data, to cross-reference it with actual individual information. My concern about any tracking system is that, motivated by profit, it could easily shift from an ethical to an unethical space.
I'm kind of surprised that it didn't have some sort of Data Protection warning near it already, but I'm not sure if the EU data protection directive covers Norway as well.
We have pretty strong laws regarding this. It has generated several news articles over the past week, and the Norwegian Data Protection Authority has already commented and said they don't believe this is legal. Stickers were added after the initial discovery.
I'm curious where the boundary between ethical and unethical lies. People constantly analyze each other's moods, and it's perceived positively. But doing the same thing at scale using automated tools is often considered inappropriate. So is it because of the technology, the massiveness, the purposes? I hope there's a way to make such things both efficient and not unethical.
It is unethical because there's no "opt-out" option. You have taken a photo of me without my consent and used it with the intent to increase profit, again without my consent; furthermore, attaching personal data to it is Orwellian and a complete invasion of my privacy. I can go out in public and have nobody know who I am. A retailer should not have access to my identity (since they can cross-reference other data sets to deanonymize me) unless I interact with them and hand over information of my own volition.
As far as I know, storing personal data - including photos, names, emails, and sometimes even IP addresses - without explicit and clear consent is strictly forbidden in most countries, at least in the EU.
The only way this could possibly be considered ethical is if you get informed consent from every single person the system is analysing. If you provided each person with a detailed explanation of what the data would be used for, and required them to opt-in before collecting it, that would be fine.
That's at the heart of it - is examining a person in public with automated tools unethical? Just saying it is isn't a compelling argument.
The FBI can use automated tools for surveillance - which doesn't speak to ethics directly but indirectly, as we hope ethics drove those rules.
I can sit in my private store and observe people out the window all day, even take notes. That's not unethical; that's a sociological experiment or some such, and it's done millions of times a day.
It may be jarring or creepy to imagine that an advertisement is sizing me up. Again, ethics is more than "does it make people uncomfortable".
Manipulating people on a mass scale without their informed consent has always been considered unethical; it's on you if you're trying to argue that it's not.
And your "but what if a person does it" arguments are irrelevant - there's a clear difference of scale between the massively automated systems we're discussing and a single person with a pencil and paper.
It was one billboard ad - not really 'massively automated'. Would have been cheap to hire an intern to stand behind the billboard and make notes. Probably cheaper.
I think terms of service have something to do with it. In most profiling scenarios, the average consumer has no idea what's going to happen to the data collected on them. I'd be much more comfortable participating in a value-exchange involving your product if I knew precisely what information would be collected, how long it would be stored, who would have access to it during that time frame, what would happen to it at the end of that term, and precisely how that information would be capitalized upon. That probably seems ridiculous to you, but from my perspective, it represents a precise definition of the value I'm yielding to you, and a reasonably precise definition of the risk I'm incurring by doing so.