Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.
I feel a little bad about calling out one API provider specifically, so here's a bunch more:
Face tracking, emotion analytics, and vision-based demographics analysis are a pretty huge industry. There's an entire spectrum of uses for this tech, from the altruistic (psychology labs, human factors research) to the, well, not.
We need HIPAA for all personal information, not just medical. We have an expectation of privacy in being "lost in the crowd" when we're out and about. Our physical and online whereabouts, who we're physically with, who we're communicating with, our personal contact information, and obviously our payment information are all private data that can be harmful if not kept private (false positives in automated legal systems, identity theft, plus all the same arguments for securing medical information).
Anybody who chooses to hold such information must treat it with a high level of respect and care. Since nobody is doing so, there are no penalties for violating privacy, and this touches on fundamental rights and the proper functioning of society, it seems like an appropriate subject for federal law.
Sure, paper medical records suck and aren't inherently more or less secure, but no one breaks into a car and runs away with 500 patients' medical histories when each patient's record fills pages, folders, or filing cabinets rather than bytes on a hard drive (or, even better for the thief, slips away through a network connection that no one in the hospital even knew existed, thanks to a back door on a piece of medical equipment).
HIPAA largely means that your medical information has been outsourced to whatever software/network/hardware provider claimed they could do the job (and whoever they outsourced the job to in some cases). If you don't sign whatever HIPAA agreement(s) your provider puts in front of you, chances are they can't treat you, so what choice do you really have?
The UK has the Data Protection Act which does some of this.
One radical option would be to grant people copyright over their own PII (with about a billion caveats to allow journalism etc.)
Then you'd just see setups like the financial industry has. You get analysts, call them journalists, and have people subscribe to your publication. The journalists go get insider-ish tips and 'publish' them to a select group of followers.
With laws like that you'd just hire a full time business analytics journalist to cover your store.
I thought Minority Report was still a few years away...
Thanks for the links, this stuff is both fascinating and scary.
Sentiment analysis for everyone!
The cameras retailers use with their surveillance systems are coming with facial recognition built in now. 
And lots of retailers, banks, etc, are using systems that track people's visits across multiple locations. 
You'll see a lot of these systems being sold as fraud/loss prevention solutions. The reason for this is that it's a relatively easy sell this way - customers can count how many thieves they've caught this way to easily determine the ROI they're getting on the system. Once the systems are in place, it's relatively easy to start using them for marketing related purposes.
Not all uses of systems like these are necessarily unethical. Consider a case where you want to set up a rule like 'if the average lineup length at the checkouts exceeds 5 people, call backup cashiers'. The problem is that once you have something like this in place, it's very tempting for company execs to want to use the data for legal but less than ethical purposes.
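That kind of rule is trivial to express once the camera system exposes per-lane people counts; here's a minimal sketch (all names and thresholds are hypothetical, not from any real product):

```python
# Hypothetical staffing rule driven by camera-derived queue counts.

THRESHOLD = 5  # average queue length that triggers a call for backup

def average_queue_length(queue_counts):
    """queue_counts: number of people waiting at each open checkout lane."""
    if not queue_counts:
        return 0.0
    return sum(queue_counts) / len(queue_counts)

def needs_backup(queue_counts, threshold=THRESHOLD):
    """True when the average lineup exceeds the threshold."""
    return average_queue_length(queue_counts) > threshold

# Three lanes with 7, 6 and 5 people waiting -> average 6.0, call backup
print(needs_backup([7, 6, 5]))  # True
print(needs_backup([2, 3]))     # False
```

The point is how little logic sits between "benign operational metric" and "marketing data source": the same per-lane counts feed both.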
Note that some ethical consensus is key---without it, companies can just price "Well, some customers think image recognition is creepy" into the risk model and do it anyway. Compare privacy concerns---people talk big about their concerns over privacy, but in practice, we're still in a world where a survey-taker can get very personal information from a random individual at a mall by offering a free candy bar. Until and unless people arrive at a common consensus that their personal information---including their face---has value or they have a proprietary right to that information, even in public, there's no real tractable solution to this problem.
... because there's no real agreement that there's a problem to solve.
The department of commerce tried to facilitate talks about establishing a voluntary standard. The surveillance industry was so terrified of the idea that they should be held to a principled position that they wouldn't even budge on one of the weakest possible protections: A voluntary-participation standard that said people must opt-in to be identified by name through facial recognition when they are on public property.
https://www.eff.org/document/privacy-advocates-statement-nti... (and previous HN discussion on negotiations falling apart: https://news.ycombinator.com/item?id=9729696 )
My local gas station upgraded its pumps recently to allow it to play video ads on the screen used to do the credit card transaction. I don't doubt it's partially the reason that gas station is still operational when similar non-franchise vendors in town have gone under.
I would rather have no content than ad-supported content. Of course, nobody will ever offer that! You can't sell ads if people can opt out, and too many big players think they're the only way.
That gas station should have charged more, or folded, rather than sell you shit you don't want, won't want, and will never spend money on.
Meanwhile, there are some inroads into financial support alternatives to ads everywhere. Google has a "contributor" product (https://contributor.google.com/v/marketing) where you can basically bid against the ads they'd vend to you; instead of an ad running, you pay a microtransaction to buy the privilege of no ad.
It's an interesting idea, but it only works with Google's ad network.
Frankly, I don't mind Google ads; I mind wasting 20 seconds to load a page with about two paragraphs of content and 3 MB worth of ads. But this is all ignoring the broader point: why are we basing our revenue on patterns many recognize as toxic, consumerist, and negative-value? People AT GOOGLE will happily admit this while working to build it.
I do my own part by supporting AdNauseam and actively punishing sites that serve ads, particularly Facebook and Google. It's also decent as a (very shallow, for now) layer of noise for your ad profiles. Offer me a flat fee and convince me to spend; don't trick me into viewing ads.
Even the pay-for-no ads model doesn't hold up, because if you pay for content, why wouldn't they just double-collect and make you pay for ads served with the content? I purchased my phone and my phone service, but I still get ads in my notifications. Because I didn't pay "enough" to avoid it.
It's like paying off a blackmail ransom. You give them $100 and they come back next week and say "how about another $100?"
Your source is the marketing material of an IP camera manufacturer.
We research that space, and I can guarantee that less than 0.1% of IP cameras have facial recognition built in or running. Manufacturers like Axis, whom you cite, would love to offer such capabilities, but they are still very uncommon.
While I'm sure this is true (since the majority of IP cameras in the world are cheap things little more than webcams), do you have a number for retail stores specifically? I know many of the larger chains spend a lot of money on their cameras and movement detection and other intelligence has been onboard those for at least 15 years.
Consider Amazon Go (https://www.theverge.com/2016/12/5/13842592/amazon-go-new-ca...): after setting up an account with the store, users enter, grab what they want, and leave. The system of cameras and biometric trackers observing the store figures out after the fact what you grabbed and charges it automatically to your account, using sensor fusion that includes face recognition. That's a level of convenience rivaling ecommerce for things people want to grab by hand (often produce and small items, for example), and it's completely enabled by this category of technology.
I get your basic point and I don't disagree that we need more privacy protection. But, no, we do not "need HIPAA" for all personal information.
This is a wonderful app. I will use it every day!
It also picked up the colors in my aloha shirt perfectly. (Anyone who knows me knows that I am to aloha shirts as Steve Jobs was to black turtlenecks.)
When I want to feel young and go shopping for shirts, now I know what to do!
But it also scores me high for anger and sadness, despite (what I thought to be!) a rather neutral expression. Perhaps it knows more than we think :)
Though I'm guessing it gets a lot more accurate when it can take and average multiple shots.
(Anyone else with glasses, a beard, or other non-typical facial features want to comment? I'm curious how well their system handles these.)
It thinks he is 45 years old. He is 64! Not calibrated for the superior Russian genetics.
I don't think glasses and beard are non-typical facial features!
The second time it thought I was 28, which increased my happiness even more.
My partner tried it and it took 10 years off her age, and found an angry 31 year old man hiding in the folds of her clothing!
(edit)... on the other hand they probably were just trying to sell product so thought flattery was the right approach...
Covered up my receding hairline a bit and it said 29. I reckon if I shaved I could get it down to about 22 since that's how old people usually think I am.
Pulled a disgusted face and it said 47. Hmm.
Rekognition guesses a wider age range - but gets a "correct" answer - the sightcorp one guesses me as 7 years younger than I am.
Applied Science has a good video on how they work:
We used a Cisco Meraki router once for a client and rigged it up to know who was in the office (for fun, to show that it could be done). It'd be nice to know whether the iPhone/iPad scramble their MAC addresses, if possible.
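That office-presence trick boils down to matching observed Wi-Fi client MAC addresses against a roster. A toy sketch (the addresses and names are made up; a real setup would pull the client list from the router's dashboard or API rather than a hard-coded list):

```python
# Hypothetical roster mapping known device MAC addresses to people.
ROSTER = {
    "aa:bb:cc:dd:ee:01": "alice",
    "aa:bb:cc:dd:ee:02": "bob",
}

def who_is_here(observed_macs, roster=ROSTER):
    """Return the set of people whose known devices were seen on the network."""
    return {roster[mac] for mac in observed_macs if mac in roster}

# The router reports two associated clients, one of them unknown to us.
print(who_is_here(["aa:bb:cc:dd:ee:01", "ff:ff:ff:ff:ff:ff"]))  # {'alice'}
```

For what it's worth, as far as I know recent iOS and Android versions do randomize MAC addresses while probing for networks (and optionally per network once associated), which degrades exactly this kind of tracking.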
I'm not sure if the same can be done to tags, but considering the size of the tiny electronics, and the fact that they are manufactured under the assumption they'll never need to be touched (aka, no CMOS spike tolerance), it might be trivially...
...wait. I just remembered about RFID alarm barriers in retail stores.
Well this is annoyingly difficult to discuss, then...
Though I concur, might be used for evil purposes, couldn't resist posting this. I just love disposable cameras hacks.
Why? Is there a law against public discussion about how to disable an anti-theft device or something?
"Wore these heels to six bars, didn't get hit on once. Please repair or refund."
I suppose reading the paper is one option.
EDIT: Link to the paper seems to be broken. Here's the PDF: http://www.cs.vu.nl/~ast/Publications/Papers/lisa-2006.pdf
I tried variations of the standard expressions and pulled off sad, disgust, anger quite easily.
I knew binge-watching Lie to Me before my psychology midterm would come in handy at some point!
Someone recently thought I was 30. People aren't any better than computers.
I am not 91.
"Understand how your customers feel. Detect and measure facial expressions like happiness, surprise, sadness, disgust, anger and fear."
(There's an old joke that sometimes shows up on HN about augmenting an automated bugtracker to snap a photo when a crash is detected or a bug is reported, so developers can be reminded that bugs tie down to real people who are actually sad / angry that the software failed them ;) ).
I'd rather not give them my facial image so they can optimize for me.
Here, instead, there is no indication that you're being watched, analyzed, and recorded for indefinite amounts of time.
Privacy is not black and white.
There is a world of difference between someone seeing you for a moment as they pass you in the street and forgetting you a moment later, and automated systems that permanently record everything, analyse it, correlate it with other data sets, make it searchable, and ultimately make automated decisions or provide information that will be used by others to make judgements about the affected individuals, all without the knowledge or consent of those individuals and therefore without any sort of reciprocity.
The idea that you have no reasonable expectation of privacy in a public place dates from a time when you could also expect to pass through town in relative anonymity, go about your business without anyone but your neighbours and acquaintances being any the wiser, and would probably change those neighbours and acquaintances from time to time anyway so the only people who really knew much about you would be your chosen long-time friends and colleagues. I think it's safe to say that that boat sailed a while ago, and maybe what privacy means and how much of it we should expect or protect aren't the same in the 21st century.
Preventing the misuse of Blunt Instrument Technologies™ is literally what laws are for. Surveillance is just a club we don't have laws about yet, but should.
It's just creepy.
The salesperson doesn't know in what shops you have been before.
The salesperson might also not know you talked to his colleague the day before.
This is about trust and privacy. You can't trust what they do with your data.
1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
A temporary problem solved by natural selection, technological augmentation, and increasing incentives. Perfect performers in any profession are hard to come by. Ambitious people still strive to get there.
>2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
Because people don't understand technology or sales. In your reality, people should be on guard all the time, because sales and marketing were already continuous even before hidden cameras. In actual reality, most people don't care that much about being sold to as long as the sale itself is not abusive.
> 3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
Sure, but that omits the necessary step of justifying this behavior as being either mildly bad on an individual level or terrible on a mass scale, much less both. It is neither.
Also, I would add item 0: advances in technology mean that surveillance devices will only become smaller, cheaper, and more connected over time. The future you fear so much is, in fact, inevitable.
It is true that technology (both social and digital) continues to progress, and that the genie can't be put back in the bottle once it escapes. However, you don't have to put it back in the bottle. Speed limits don't stop speeding, and laws against murder don't stop homicide. The legal and regulatory system exists not to fully prevent undesired behavior, but rather to reduce it to a manageable level.
In short: I agree with one part of your premise. Technology will continue to evolve and will continue to challenge human society in this area. Unlike you, however, I don't believe that we have to roll over and accept the implications and consequences of unregulated privacy invasions, neuromarketing and whatnot.
I'm sure that in the future, we will also create cheaply available opaque faraday cages that you can roll around in if you wish. And that most people will not care to do so.
Does not protect your exposed face
>the de facto privacies of anonymity, free association, and predictable rules of social engagement
Are outdated illusions with no basis in fact
>>the de jure "reasonable expectation of privacy"
> Does not protect your exposed face
Yeah, that's why it is "reasonable expectations" not "absolute enforcement."
Stalking per se is mostly only illegal because it becomes harassment and bothers the victim. This kind of monitoring is entirely unobtrusive. As the response to the original tweet illustrates, most people aren't even aware that it is happening.
Being subjected to constant sensory input and trickery from dozens of teams of experts on consumer psychology is bad enough when they haven't also been stalking and recording your every move.
Caveat emptor becomes an absurd position when the power imbalance is so great. Massive data collection and mining needs to be reined in. The fact that it's not obvious that people seeking to trick you by any means necessary are recording you everywhere you go does not make it OK, at all. Surveillance capitalism is way, way over the line, has been for some time, and just keeps going farther. That they're good at keeping you from realizing you're under surveillance is no defense whatsoever.
Consumers do try to aggregate data for the equivalent of "massive data collection and mining". Most just don't care to pay for something that is not wholly controlled by a storefront. Generally, producers are more likely to understand the ROI.
Well, you never know what dystopia you are heading into...
There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.
I don't believe this. This kind of advertisement in public would be illegal in Germany, as it is mass surveillance.
Not that this is a bad thing---being able to think differently like this is one of the positives of having countries!---but relative to the rest of the world, what Germany considers "surveillance" is unusual and sometimes surprising.
I also tried it on a few internet porn images. It looks like it is definitely only relying on the face for determining gender, or it thinks that there are a lot of women with hairy flat chests and large penises...
Runs on CPU only, realtime 1280x720 @ 15 fps.
Is it creepy? Sure. But anyone can run it. I was looking at a rewrite to work from the CLI with a web interface instead. But the core loop is the magic part that makes everything work nicely.
What's the worst that could happen? The local advertising regulator will order you to meet the regulatory standards or remove the cameras, but give you x number of weeks to act per unit installed.
Problematic parts: sourcing these images (I know there sure aren't many photos of pre-transition me, and I don't let the ones in my possession go much of anywhere), lots of ethical issues around having a system that can say "I am 97% sure this person is gonna want to transition". Also probably lots of other ethical issues I'm not thinking of.
Maybe I should try to sell it...
The idea of collecting who looked at your display is invaluable. It would be beneficial for both government and ad agencies. The ad agency case is obvious, but a government could learn whether displays present information people want, and whether it was presented in a manner that catches their attention. The negative aspects of government use could be limited through privacy laws and the like.
Microsoft's service is much more consistent and accurate (I've tested the same images...): https://azure.microsoft.com/en-us/services/cognitive-service...
"error_code": 0, <--so i guess no error
"age": 26, <--- they need to re-calibrate that and make it x3
"african": 17, <--- wow!!!
"asian": 8, <--- wow!!!
"caucasian": 65, <--- for real!!!!!
"hispanic": 7 <--- wow!!!
"surprise": 4, <--- even HE can't believe he's POTUS
"anger": 76, <--- not surprised at all
"sadness": 4 <--- #SAD
They guessed 39 for me (I'm 25). The age bit seems not so accurate.
"WorkSmart can track workers' keystroke activity and take webcam images to ensure they're doing their jobs."
I worked for the U.S. Postal service for a time doing data entry (here as a matter of fact... http://www.sltrib.com/news/3445651-155/the-first-and-last-of...).
That job measured the number of keystrokes per hour of each employee. You had to maintain a minimum data entry rate of 10,000 keystrokes per hour, and they also spot-checked for accuracy, capturing the data you were supposed to enter (a scan of a piece of mail), what you entered, and what should have been entered.
While there was no question of goofing off... they used commodity hardware, but nothing else was general purpose (no internet, no email, no solitaire, no obviously general-purpose OS), and no phones, talking, etc. They were very much watching for speed and accuracy during the entire time you were clocked in.
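That 10,000 keystrokes/hour minimum is just a rate check over the clocked-in interval; a toy version (the function and variable names are made up for illustration):

```python
def keystrokes_per_hour(keystrokes, seconds_clocked_in):
    """Average data entry rate over a shift."""
    if seconds_clocked_in <= 0:
        raise ValueError("shift length must be positive")
    return keystrokes * 3600 / seconds_clocked_in

MINIMUM = 10_000  # required keystrokes per hour

def meets_minimum(keystrokes, seconds_clocked_in):
    return keystrokes_per_hour(keystrokes, seconds_clocked_in) >= MINIMUM

# 85,000 keystrokes over an 8-hour shift -> 10,625/hour, passes
print(meets_minimum(85_000, 8 * 3600))  # True
print(meets_minimum(70_000, 8 * 3600))  # False (8,750/hour)
```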
If they knew what should have been entered, why couldn't they just automate the data entry?
OCR was pretty bad back in 1995 (which is why the Palm Pilot had Graffiti, a special handwriting alphabet for users to learn).
See this NPR piece:
Also "Act Three" of Episode 70 of "This American Life":
I did the data entry job for (as I recall) about a year and a half: it paid the bills and gave me a better life than I would have had without it, and I didn't have the right qualifications or work experience for anything better. For those reasons I was appreciative of the work, while at the same time I looked to improve my working situation... something I can say of my work today (though what counts as improvement in working situation is way different now).
There are many jobs out there that need doing. Many of them are boring, or dirty, or dangerous. I don't think that necessarily makes for any more or less of a "sad life". I'm completely thankful to those that do those boring, dirty, or dangerous jobs. Some of them, like me, did it as an early first job and used the work experience to get a better one: we paid our dues, as it were. Others like what I find unfulfilling or uninteresting. Some want to work outside, some want to work with their hands, some want to work with their minds, and some simply want to be a bit financially better off than they would be without the work.
"There is no such thing as a lousy job - only lousy men who don't care to do it."
We had labs of iMacs and if anything happened to a machine, kids would (more often than not) just yank the power cord. Occasionally this would foobar the machine entirely and create unnecessary work. We couldn't catch the culprits.
So, if the machine managed to boot up successfully, after an unexpected power loss, we would take a photo using the built in camera and send it along with a ticket to the job queue, as well as do a complete re-image of the machine (automatically).
The number of funny photos we collected of kids just staring at the computer with WTF looks on their faces was remarkable. But from these photos we at least had the opportunity to educate the individuals about how to look after the computers a bit better.
I think this is the original story, you can follow the "related stories" links at the bottom to see how it developed: