Check out the video here http://sightcorp.com/ for an ultra creepy overview. You can even try their live demo: https://face-api.sightcorp.com/demo_basic/.
I feel a little bad about calling out one API provider specifically, so here's a bunch more:
Face tracking, emotional analytics, and vision-based demographics analysis make up a pretty huge industry. There's an entire spectrum of uses for this tech, from the altruistic (psychology labs, human factors research) to the, well, not.
We need HIPAA for all personal information, not just medical. We have an expectation of privacy in being "lost in the crowd" when we're out and about. Our physical & online whereabouts, who we're physically with, who we're communicating with, our personal contact information, and obviously payment information are all private information that can be harmful if not kept private (false positives in automated legal systems, identity theft, plus all the same reasons we protect medical information).
Anybody who chooses to hold such information must regard it with a high level of respect and privacy. Since nobody is doing so, there are no penalties for violating privacy, and this gets into fundamental rights and the proper functioning of society, it seems like a matter for federal law.
Sure, paper medical records suck and aren't inherently more or less secure, but no one breaks into a car and runs away with 500 patients' medical histories when each patient's record fills pages, folders, or filing cabinets, rather than bytes on a hard drive (or even better, it slips away through a network connection that no one in the hospital even knew existed thanks to a back door on a piece of medical equipment).
HIPAA largely means that your medical information has been outsourced to whatever software/network/hardware provider claimed they could do the job (and whoever they outsourced the job to in some cases). If you don't sign whatever HIPAA agreement(s) your provider puts in front of you, chances are they can't treat you, so what choice do you really have?
The UK has the Data Protection Act which does some of this.
One radical option would be to grant people copyright over their own PII (with about a billion caveats to allow journalism etc.)
Then you'd just see setups like the financial industry has. You get analysts, call them journalists, and have people subscribe to your publication. The journalists go get insider-ish tips and 'publish' them to a select group of followers.
With laws like that you'd just hire a full time business analytics journalist to cover your store.
I thought Minority Report was still a few years away...
Thanks for the links, this stuff is both fascinating and scary.
Sentiment analysis for everyone!
The cameras retailers use with their surveillance systems are coming with facial recognition built in now. 
And lots of retailers, banks, etc, are using systems that track people's visits across multiple locations. 
You'll see a lot of these systems being sold as fraud/loss prevention solutions. The reason for this is that it's a relatively easy sell this way - customers can count how many thieves they've caught this way to easily determine the ROI they're getting on the system. Once the systems are in place, it's relatively easy to start using them for marketing related purposes.
Not all uses of systems like these are necessarily unethical. Consider a case where you want to set up a rule like 'if the average lineup length at the checkouts exceeds 5 people, call backup cashiers'. The problem is that once you have something like this in place, it's very tempting for company execs to want to use the data for legal but less than ethical purposes.
Note that some ethical consensus is key---without it, companies can just price "Well, some customers think image recognition is creepy" into the risk model and do it anyway. Compare privacy concerns---people talk big about their concerns over privacy, but in practice, we're still in a world where a survey-taker can get very personal information from a random individual at a mall by offering a free candy bar. Until and unless people arrive at a common consensus that their personal information---including their face---has value or they have a proprietary right to that information, even in public, there's no real tractable solution to this problem.
... because there's no real agreement that there's a problem to solve.
The department of commerce tried to facilitate talks about establishing a voluntary standard. The surveillance industry was so terrified of the idea that they should be held to a principled position that they wouldn't even budge on one of the weakest possible protections: A voluntary-participation standard that said people must opt-in to be identified by name through facial recognition when they are on public property.
https://www.eff.org/document/privacy-advocates-statement-nti... (and previous HN discussion on negotiations falling apart: https://news.ycombinator.com/item?id=9729696 )
My local gas station upgraded its pumps recently to allow it to play video ads on the screen used to do the credit card transaction. I don't doubt it's partially the reason that gas station is still operational when similar non-franchise vendors in town have gone under.
I would rather have no content than ad-supported content. Of course, nobody will ever offer that! You can't sell ads if people can opt out, and too many big players think they're the only way.
That gas station should have charged more or folded rather than sell you shit you don't want, won't want, and will never spend money on.
Meanwhile, there are some inroads into financial support alternatives to ads everywhere. Google has a "contributor" product (https://contributor.google.com/v/marketing) where you can basically bid against the ads they'd vend to you; instead of an ad running, you pay a microtransaction to buy the privilege of no ad.
It's an interesting idea, but it only works with Google's ad network.
Frankly, I don't mind Google ads; I mind wasting 20 seconds to load a page with about two paragraphs of content and 3 MB worth of ads. But this is all ignoring the broader point: why are we basing our revenue on patterns many recognize as toxic, consumerist, and negative-value? People AT GOOGLE will happily admit this while working to build it.
I do my own part by supporting AdNauseam and actively punishing sites that serve ads, particularly Facebook and Google. It's also decent as a (very shallow, for now) layer of noise for your ad profiles. Offer me a flat fee and convince me to spend; don't trick me into viewing ads.
Even the pay-for-no ads model doesn't hold up, because if you pay for content, why wouldn't they just double-collect and make you pay for ads served with the content? I purchased my phone and my phone service, but I still get ads in my notifications. Because I didn't pay "enough" to avoid it.
It's like paying off a blackmail ransom. You give them $100 and they come back next week and say "how about another $100?"
Your source is the marketing material of an IP camera manufacturer.
We research that space and I can guarantee that less than 0.1% of IP cameras have facial recognition built in or running. These manufacturers, like Axis, whom you cite, would love to offer such capabilities, but they are still very uncommon.
While I'm sure this is true (since the majority of IP cameras in the world are cheap things little more than webcams), do you have a number for retail stores specifically? I know many of the larger chains spend a lot of money on their cameras and movement detection and other intelligence has been onboard those for at least 15 years.
Consider Amazon Go (https://www.theverge.com/2016/12/5/13842592/amazon-go-new-ca...): after setting up an account with a store, users enter, grab what they want, and leave. The system of cameras and biometric trackers observing the store figures out after the fact what you grabbed and charges it automatically to your account, through sensor fusion that includes face recognition. That's a level of convenience rivaling ecommerce for things people want to grab by hand (often produce and small items, for example), and it's completely enabled by this category of technology.
I get your basic point and I don't disagree that we need more privacy protection. But, no, we do not "need HIPAA" for all personal information.
This is a wonderful app. I will use it every day!
It also picked up the colors in my aloha shirt perfectly. (Anyone who knows me knows that I am to aloha shirts as Steve Jobs was to black turtlenecks.)
When I want to feel young and go shopping for shirts, now I know what to do!
But it also scores me high for anger and sadness, despite (what I thought to be!) a rather neutral expression. Perhaps it knows more than we think :)
Though I'm guessing it gets a lot more accurate when it can take and average multiple shots.
(Anyone else with glasses, a beard, or other non-typical facial features want to comment? I'm curious now how well their system handles these.)
Thinks he is 45 years old. He is 64! Not calibrated for the superior Russian genetics.
I don't think glasses and beard are non-typical facial features!
The second time it thought I was 28, which increased my happiness even more.
My partner tried it and it took 10 years off her age, and found an angry 31 year old man hiding in the folds of her clothing!
(edit)... on the other hand they probably were just trying to sell product so thought flattery was the right approach...
Covered up my receding hairline a bit and it said 29. I reckon if I shaved I could get it down to about 22 since that's how old people usually think I am.
Pulled a disgusted face and it said 47. Hmm.
Rekognition guesses a wider age range - but gets a "correct" answer - the sightcorp one guesses me as 7 years younger than I am.
Applied Science has a good video on how they work:
We used a Cisco Meraki router once for a client and rigged it up to know who was in the office (for fun, to be aware that it could be done). It'd be nice to know whether the iPhone/iPad scramble their identifiers, if that's possible.
I'm not sure if the same can be done to tags, but considering the size of the tiny electronics, and the fact that they are manufactured under the assumption they'll never need to be touched (aka, no CMOS spike tolerance), it might be trivially...
...wait. I just remembered about RFID alarm barriers in retail stores.
Well this is annoyingly difficult to discuss, then...
Though I concur it might be used for evil purposes, I couldn't resist posting this. I just love disposable camera hacks.
Why? Is there a law against public discussion about how to disable an anti-theft device or something?
"Wore these heels to six bars, didn't get hit on once. Please repair or refund."
I suppose reading the paper is one option.
EDIT: Link to the paper seems to be broken. Here's the PDF: http://www.cs.vu.nl/~ast/Publications/Papers/lisa-2006.pdf
I tried variations of the standard expressions and pulled off sad, disgust, anger quite easily.
I knew binge-watching Lie To Me before my psychology midterm would come in handy at some point!
Someone recently thought I was 30. People aren't any better than computers.
I am not 91.
"Understand how your customers feel. Detect and measure facial expressions like happiness, surprise, sadness, disgust, anger and fear."
(There's an old joke that sometimes shows up on HN about augmenting an automated bugtracker to snap a photo when a crash is detected or a bug is reported, so developers can be reminded that bugs tie down to real people who are actually sad / angry that the software failed them ;) ).
I'd rather not give them my facial image so they can optimize for me.
Here, instead, there is no indication that you're being watched, analyzed and kept recorded for indefinite amounts of time.
Privacy is not black and white.
There is a world of difference between someone seeing you for a moment as they pass you in the street and forgetting you a moment later, and automated systems that permanently record everything, analyse it, correlate it with other data sets, make it searchable, and ultimately make automated decisions or provide information that will be used by others to make judgements about the affected individuals, all without the knowledge or consent of those individuals and therefore without any sort of reciprocity.
The idea that you have no reasonable expectation of privacy in a public place dates from a time when you could also expect to pass through town in relative anonymity, go about your business without anyone but your neighbours and acquaintances being any the wiser, and would probably change those neighbours and acquaintances from time to time anyway so the only people who really knew much about you would be your chosen long-time friends and colleagues. I think it's safe to say that that boat sailed a while ago, and maybe what privacy means and how much of it we should expect or protect aren't the same in the 21st century.
Preventing the misuse of Blunt Instrument Technologies™ is literally what laws are for. Surveillance is just a club we don't have laws about yet, but should.
It's just creepy.
The salesperson doesn't know in what shops you have been before.
The salesperson might also not know you talked to his colleague the day before.
This is about trust and privacy. You can't trust what they do with your data.
1) These people are rare in the general population, and demand for their time is likely to be incredibly high. Therefore, they cannot be deployed everywhere, unlike machines.
2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
A temporary problem solved by natural selection, technological augmentation, and increasing incentives. Perfect performers in any profession are hard to come by. Ambitious people still strive to get there.
>2) When confronted with a human being in a sales scenario, people have a chance to be on guard against potential manipulative behavior. When the sales scenario becomes ubiquitous and invisible, it is much harder for people to avoid being taken advantage of.
Because people don't understand technology or sales. In your reality, people should be on guard all the time because sales and marketing were already continuous, even before hidden cameras. In actual reality, most people don't care that much about being sold to as long as the sale itself is not abusive.
> 3) Ethics are not so absolute. Something that is only mildly bad at an individual level can have terrible results when thousands are doing it. (Littering, for instance, or illegal hunting/fishing.) This is known as a social trap, and it leads to negative outcomes for everyone involved.
Sure, but that omits the necessary step of justifying this behavior as being either mildly bad on an individual level or terrible on a mass scale, much less both. It is neither.
Also, I would add item 0: advances in technology mean that surveillance devices will only become smaller, cheaper, and more connected over time. The future you fear so much is, in fact, inevitable.
It is true that technology (both social and digital) continues to progress, and that the genie can't be put back in the bottle once it escapes. However, you don't have to put it back in the bottle. Speed limits don't stop speeding, and laws against murder don't stop homicide. The legal and regulatory system exists not to fully prevent undesirable behavior, but rather to reduce it to a manageable level.
In short: I agree with one part of your premise. Technology will continue to evolve and will continue to challenge human society in this area. Unlike you, however, I don't believe that we have to roll over and accept the implications and consequences of unregulated privacy invasions, neuromarketing and whatnot.
I'm sure that in the future, we will also create cheaply available opaque faraday cages that you can roll around in if you wish. And that most people will not care to do so.
Does not protect your exposed face
>the de facto privacies of anonymity, free association, and predictable rules of social engagement
Are outdated illusions with no basis in fact
>>the de jure "reasonable expectation of privacy"
> Does not protect your exposed face
Yeah, that's why it is "reasonable expectations" not "absolute enforcement."
Stalking per se is mostly only illegal because it becomes harassment and bothers the victim. This kind of monitoring is entirely unobtrusive. As the response to the original tweet illustrates, most people aren't even aware that it is happening.
Being subjected to constant sensory input and trickery from dozens of teams of experts on consumer psychology is bad enough when they haven't also been stalking and recording your every move.
Caveat emptor becomes an absurd position when the power imbalance is so great. Massive data collection and mining needs to be reined in. The fact that it's not obvious people seeking to trick you by any means necessary are recording you everywhere you go does not make it OK, at all. Surveillance capitalism is way, way over the line, has been for some time, and just keeps going farther. That they're good at keeping you from realizing you're under surveillance is no defense whatsoever.
Consumers do try to aggregate data for the equivalent of "massive data collection and mining". Most just don't care to pay for something that is not wholly controlled by a storefront. Generally, producers are more likely to understand the ROI.
Well, you never know into what dystopia you are heading...
There's a huge double standard in place that makes it somehow wrong for computers to do what humans have been doing without objection for decades or millennia.
I don't believe this. This kind of advertisement in public would be illegal in Germany as it is mass surveillance.
Not that this is a bad thing---being able to think differently like this is one of the positives of having countries!---but relative to the rest of the world, what Germany considers "surveillance" is unusual and sometimes surprising.
I also tried it on a few internet porn images. It looks like it is definitely only relying on the face for determining gender, or it thinks that there are a lot of women with hairy flat chests and large penises...
Runs on CPU only, realtime 1280x720 @ 15 fps.
Is it creepy? Sure. But anyone can run it. I was looking at a rewrite to work in CLI with a web interface instead. But the core loop is the magic part that makes everything work nicely.
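For anyone curious what "anyone can run it" looks like in practice, here's a minimal sketch of such a CPU-only detection loop using OpenCV's bundled Haar cascade. This is my illustration, not the commenter's actual code; the camera index, resolution, and cascade file are assumptions:

```python
# Minimal sketch of a CPU-only realtime face-detection loop (illustrative only).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # default webcam (assumption)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                 # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A loop like this comfortably reaches double-digit frame rates at 720p on an ordinary laptop CPU; swapping in a DNN-based detector or adding age/gender/emotion classifiers trades speed for the kind of analysis discussed elsewhere in this thread.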
What's the worst that could happen? The local advertising regulator will order you to meet the regulatory standards or remove the cameras, but give you x number of weeks to act per unit installed.
Problematic parts: sourcing these images (I know there sure aren't many photos of pre-transition me, and I don't let the ones in my possession go much of anywhere), lots of ethical issues around having a system that can say "I am 97% sure this person is gonna want to transition". Also probably lots of other ethical issues I'm not thinking of.
Maybe I should try to sell it...
The idea of collecting data on who looked at your display is invaluable. It would be beneficial for both government and ad agencies. The ad agency case is obvious, but government could learn whether displays present information people want and whether it was presented in a manner that catches their attention. The negative aspects of government use could be limited through privacy laws and such.
Microsoft's service is much more consistent and accurate (I've tested the same images...): https://azure.microsoft.com/en-us/services/cognitive-service...
"error_code": 0, <--so i guess no error
"age": 26, <--- they need to re-calibrate that and make it x3
"african": 17, <--- wow!!!
"asian": 8, <--- wow!!!
"caucasian": 65, <--- for real!!!!!
"hispanic": 7 <--- wow!!!
"surprise": 4, <--- even HE can't believe he's POTUS
"anger": 76, <--- not surprised at all
"sadness": 4 <--- #SAD
They guessed 39 for me (I'm 25). The age bit seems not so accurate
"WorkSmart can track workers' keystroke activity and take webcam images to ensure they're doing their jobs."
I worked for the U.S. Postal service for a time doing data entry (here as a matter of fact... http://www.sltrib.com/news/3445651-155/the-first-and-last-of...).
That job measured the number of keystrokes per hour of each employee. You had to maintain a 10,000 keystrokes per hour minimum data entry rate, and they also spot-checked for accuracy, capturing the data you were supposed to enter (a scan of a piece of mail), what you entered, and what should have been entered.
While there was no question about goofing off... they used commodity hardware, but nothing else was general purpose (no internet, no email, no solitaire, no obvious general-purpose OS), and no phones, talking, etc... they were very much watching for speed and accuracy during the entire time you were clocked in.
If they knew what should have been entered, why couldn't they just automate the data entry?
OCR was pretty bad back in 1995 (which is why the Palm Pilot had a special handwriting for users to learn).
See this NPR piece:
Also "Act Three" of Episode 70 of "This American Life":
I did the data entry job for (as I recall) about a year and a half: it paid the bills and gave me a better life than I would have without it and I didn't have the right qualifications or work experience for anything better. For those reasons I was appreciative of the work while at the same time I looked to improve my working situation... something I can say of my work today (though what counts as improvement in working situation is way different now).
There are many jobs out there that need doing. Many of them are boring, or dirty, or dangerous. I don't think that necessarily makes for any more or less of a "sad life". I'm completely thankful to those that do those boring, dirty, or dangerous jobs. Some of them, like me, did it as an early first job sort of thing and used the work experience to get a better job: we paid our dues as it were. Others actually like work that I find unfulfilling or uninteresting. Some want to work outside, some want to work with their hands, some want to work with their minds, and some simply want to be a bit financially better off than they would be without the work.
"There is no such thing as a lousy job - only lousy men who don't care to do it."
We had labs of iMacs and if anything happened to a machine, kids would (more often than not) just yank the power cord. Occasionally this would foobar the machine entirely and create unnecessary work. We couldn't catch the culprits.
So, if the machine managed to boot up successfully, after an unexpected power loss, we would take a photo using the built in camera and send it along with a ticket to the job queue, as well as do a complete re-image of the machine (automatically).
We collected a number of funny photos of kids just staring at the computer with WTF looks on their faces. But from these photos we at least had the opportunity to educate the individuals about how to look after the computers a bit better.
I think this is the original story, you can follow the "related stories" links at the bottom to see how it developed:
Edit: Here is what the buttons look like. Gender and age.
The cash register had a matrix of buttons
They'd just push a button as they punched in your order.
Now, would they press a random demographic button after that (instead of the same button every time)? Maybe, however there are numerous other ways to increase compliance in that case as well. If the logs kept coming back sketchy, well the cashiers are on tape - bam, another firing (note from the video tape: cashier intentionally hits male button when it's obviously a female, then repeats the behavior multiple times). Eventually the example gets across to the other workers to at least make an effort.
No idea if the operator is also recording gender and perceived age group etc., but I do know that on most occasions, you can opt not to answer the post code question.
I actually did some experiments - for different properties in roughly the same area/price range, I told different real estate agents different postcodes. It is beyond reasonable doubt that the code you tell them plays a huge role in how they rank you as a potential buyer. When you tell them a random north shore post code, you are guaranteed to receive a nice & friendly follow-up on the coming Monday; however, if you tell them that you live in the west (while mostly inspecting north shore properties), they would smile and immediately end the whole conversation.
The sample size here is ~50, which I believe is big enough to draw some reasonable conclusions.
You know about the anonymized and aggregated data that one can buy? Well, it can be easily de-anonymized and de-aggregated.
The good part is that you don't even have to do it yourself and go directly to data brokers that have done the hard work for you.
Not at all out of anything that might be categorized as malice, just to add this datapoint to my mental map.
Companies amalgamate that data, then sell lookups of varying degrees.
Screwfix in the UK gather a lot of personal data as part of their sales process, they're the least covert about data gathering I've seen.
I've been annoyed by this stuff long before most people were ready to consider privacy concerns anything more than paranoia.
My safeway card is in someone else's name... one day they had to pull it for some reason and I got a "Have a nice day.... Mr.... Soprano." and a big smile.
"Thanks good to know, from now on I'll go buy at a business where this artificial limitation does not exist."
In our case, the average big ticket buyer travelled an average of 30 miles, which was awesome in that it made the end caps more valuable.
Just a generic teen drama
I.e., they keep the last 4 digits of your credit card; combined with a post code, you have a unique ID.
Googling for "why stores ask for the zip code" brings up a lot of press coverage, e.g. https://www.forbes.com/sites/adamtanner/2013/06/19/theres-a-...
Seriously, I mostly shop in the same neighbourhoods. And when not, there's often something on the counter with their address...or I can give a mate's address and let him get the junk mail....
99.99% of the time, when you're asked for a piece of personal data, it is for cross-referencing or some other privacy-invading purpose.
It's really handy to see how many people the activities attract, and who they appeal to most. You're tracked everywhere!
Facial recognition is a unique identifier but cashiers have access to almost nothing they can record... [Edit] What was it?
Edit: clarified that I am asking about the history here, what information was manually collected by cashiers as parent stated
The tweeted advertisement system also looks like it's only recording demographics. Not individual personal IDs.
Give me an example of what they used to record by hand. All I can think of is "male, adult". I am specifically interested in what else you say they used to record.
I wasn't asking about the present status quo, only your historical statement about cashiers recording by hand.
Today advertisers that phone home (spyware) often lie and claim only aggregate data is produced - but this manual example really is the kind of data that is okay. It's far less detailed than something like facial recognition. Thanks for adding the link!
You could make a note of a person's seeming affect—positive/negative/neutral emotion.
Keep in mind this has to be done while doing usual cashier things, so not much attention can be taken up by it.
Best part - we got a first-gen Raspberry Pi to crunch all the data locally at 2-5 fps. Gender, age group (child, youth, teen, young adult, middle age, senior), and approximate ethnicity were all recorded and logged. Everyone had a unique profile, and we could track people between cameras and across days (underlying facial features do not change).
Next time you look at digital signage, just be aware that it is probably looking back at you.
For me I knew how our data was anonymized. So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,z", we had absolutely no way of knowing who 1234 was or anything about them; even the unique identifier was just a hash (sketched below).
Obviously it depends on how much data you collect/store, personally I don't think the things shown in OP are all that onerous (sex, age group, gender, rage, time spent looking at ad).
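To make the "identifier was just a hash" idea concrete, here's a toy sketch (my own illustration, not that system) of how a face embedding might be reduced to an opaque per-person ID so the logs never store reversible face data. The embedding values, rounding scheme, and salt are all assumptions:

```python
# Toy illustration only, not the system described above: reduce a face embedding
# to an opaque per-person ID so logs never contain reversible face data.
import hashlib
import numpy as np

def anonymous_id(embedding: np.ndarray, salt: bytes = b"per-deployment-secret") -> str:
    # Coarsely quantize so small frame-to-frame variation lands in the same bucket,
    # then hash; only the short hex digest is ever stored.
    quantized = np.round(embedding, decimals=1).tobytes()
    return hashlib.sha256(salt + quantized).hexdigest()[:16]

# Two near-identical captures of the same face collapse to one ID.
capture_a = np.array([0.12, -0.33, 0.87])
capture_b = np.array([0.11, -0.34, 0.88])
print(anonymous_id(capture_a) == anonymous_id(capture_b))  # True
```

Real deployments match embeddings with nearest-neighbour search rather than exact hashing (captures near a bucket boundary would get different IDs), and, as the replies point out, a persistent per-person ID is pseudonymization rather than anonymization: it limits what the logs can leak, but doesn't remove re-identification risk.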
Minor nitpick, but giving someone a nickname isn't the same as anonymization.
"Hey Bob, thanks for logging on. Did you know we've been calling you 1234 these past five years!"
When a passive recognition system _uniquely_ tracks & identifies a person, it just takes time before that gets cross-referenced.
(different story if the data gets aggregated, or you scrub the uid completely after some window)
What's the difference between this and an anonymised dataset? No PII is tracked, it's just looking at you and calculating what emotion you're likely feeling to show more targeted advertising.
I mean, I'm personally against it but we've got to prove a higher and higher ROI to justify the cost of digital signage, this leads to just that.
Inadvertently spill the name of the company making this and those wanting to use it to the public so they can receive their well deserved backlash.
If you want to start a war, have a quick Google for the big players, they'll have this tech in and will be proudly advertising it on their site.
The thing is: People don't care. Not your HN reader (evidently), but your Joe Bloggs. Hell, Snowden told them the Five Eyes are reading their email and they barely gave a damn.
That's exactly what bigbugbag meant. You don't really care and are in for the job. That's okay but just be honest here.
I could sell useless products to old people and make some money on the side. But I don't because I'm against that.
And some I sacrifice things for. A person can't die on every single hill they happen to fancy :)
"Hi. I am the original taker of the photo. There is a screen that normally shows peppes pizza advertisements in front of peppes pizza in Oslo S. The advertisements had crashed revealing what was running underneath the ads. As I approached the screen to take a picture, the screen began scrolling with my generic information - That I am young male (sorry my profile picture was misleading, not a woman), wearing glasses, where I was looking, and if I was smiling and how much I was smiling. The intention behind my original post on facebook was merely to point out that people may not know that these sort of demographics are being collected about them merely by approaching and looking at an advertisement. the camera was not, at a glance, evident. It was merely meant as informational, maybe to point out what we all know or suspect anyway, but just to put it out in the open. I believe the only intent behind the data collected is engagement and demographic statistics for better targeted advertisements."
Minority Personalized Advertising https://www.youtube.com/watch?v=7bXJ_obaiYQ
If a stranger on the street started following me, taking pictures without permission, and taking notes about my appearance or actions and storing it all in their database, I would say he was behaving unethically.
Ask street photographers - it's a delicate balance. Many people really dislike having their pictures taken without their permission.
I mean, this is literally the "global village" coming to fruition. The online shopkeeper knows you just as well as a shopkeeper in a real village - it knows who you are, it remembers all your previous visits, it knows your hobbies (even if you didn't tell it about them yourself, someone else in the village did), and it can make suggestions based on that.
When you buy flowers, the village shopkeeper knows not only who's buying them, but also has a good idea for whom these flowers are intended. That's where we're heading.
This is the level of (non)privacy that we historically had, living in much smaller communities than modern cities. The trend of more anonymity brought by urbanization is reversing, but it's not something new or horrible, if anything, the possibility of being just another face in the crowd is an anomaly that existed for a (historically) short time and is slowly coming to an end once more.
ethical != opinion
Ethical has weight and you can lose your job, and even go to jail for being unethical. RMS actually has a very strong academic ethical mind (even though I disagree with him more than agree). BUT ethics isn't easily defined.
Here is a decent link to defining Ethics. https://www.scu.edu/ethics/ethics-resources/ethical-decision...
Mind explaining where you think I implied otherwise? Or why you keep repeating this despite the fact that I haven't?
>Ethical has weight and you can lose your job, and even go to jail for [what your boss thinks is] unethical.
For example, doctors who get fired for performing abortions, something that most of hn doesn't find unethical.
That is itself, merely what Manuel Velasquez, Claire Andre, Thomas Shanks, S.J., and Michael J. Meyer find ethical.
Do we really want to commoditize the simple act of walking down the boulevard? Make every moment in public space (and private digital space!) sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order to compel as much thoughtless consumer spending as possible, long-term consequences be damned? Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life? And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored. As the tech and richness of the data increase, the temptations will as well. Well-meaning people can do nefarious things in certain contexts.
I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high resolution data on the entire populace.
Granted, I don't think things would get too terrible without overwhelming protest, but I don't see why we should bet on that.
>Do we really want to commoditize the simple act of walking down the boulevard?
It's not a boulevard you are walking down, but a bazaar. The only difference is that modern technology allows you to visit the bazaar to be "sliced, diced, and scrutinized by God knows how many data munchers, middlemen, analytics brokers, and ethically challenged people in order to compel as much thoughtless consumer spending as possible" without physically travelling there.
>Allow incredibly detailed profiles to be built up on every person, spanning the decades of their life?
Sure. It's called a relationship. Or a memory.
>And of course, there is always the danger of governments co-opting and abusing this information years or decades in the future, after administrations have come and gone, and laws have been overturned, drastically altered, or ignored.
You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages? I would say either of those is far more dangerous than data. Yet we recognize that the power exists regardless, and the government can at least put it to good use.
>I believe our societal institutions and corporate entities are not mature enough to safely handle the power granted by unrestrained, high resolution data on the entire populace
Then the obvious answer is to improve societal institutions and corporate entities, which is useful in and of itself, rather than futilely trying to impede the progress of technology.
Fair point, I could have dropped that sentence. I stand by my gross mischaracterization statement, though. Programmatic surveillance is very different from a stranger looking at someone.
> Sure. It's called a relationship. Or a memory.
The profile built up on people by ad brokers and spy agencies is a relationship? I don't think that's how most people would describe it.
> You can safely replace "this information" with virtually anything useful and get the same effect. Do you feel the same about, say, nuclear weapons? Or legal authority to lock people in cages?
Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.
Important examples would be restrictions on free speech and suppression of dissent. Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data.
Those websites you visited 12 years ago? It's gonna cost you on your next car loan. And don't even think of running for city council -- the dirt will really come out then. Etc etc.
Obviously, technology brings a lot of great benefits. I'm all for that. I think we should just be aware of new pitfalls it brings as well, and try to account for them.
Most people use language woefully imprecisely. The relationship I have with the barista at the cafe near my office isn't the same as the relationship I have with my sister but it is a relationship of the kind that's relevant here. Knowing what I order and when, recognizing me, etc.
>Uh, a core part of the problem is this information being coupled with the ability to lock people in cages (or exert power in other ways). Obviously the data by itself is inert and useless. It's what people might do with it that matters.
A nice thought, but in practice, when we try to fragment this power by privatizing police, prisons, military, firefighting, etc, all of which have many modern examples, things do not turn out well. As unreasonable as it may sound, the evidence suggests it's better to put all the eggs into one poorly run basket.
>Imagine something like a credit score 2.0, created by analyzing a lifetime of private communication, online activity, and transactional data....
Oh, I imagine.
Yes but that is a very different type of relationship with quite different characteristics. I hope it isn't too difficult to infer I'm arguing not everyone wants these types of relationships. To call it "just another relationship" is not very helpful for the discussion.
This type of relationship may have significant extended and unforeseen side effects. It's not well constrained, and the preserved artifacts could easily be hijacked for countless unknown purposes decades in the future. It's a fundamentally new paradigm that we don't fully understand yet, and given humanity's historical tendency to abuse new mechanisms of power as they become available, I think some caution is very reasonable.
Perhaps to make my position a little more clear, a key point on why detailed data profiles could be quite dangerous is their scalable and programmatic nature. Never before could a single click of a button identify every individual who has been discussing topic X in the last year, or spit out a list of everyone with 2 degrees of connection to some targeted individual. The same unlimited possibilities that make this stuff exciting to technologists are also why it may be quite dangerous.
These powers are unprecedented. You would need a rotating team of investigators inside every home and every place of business in order to gather this data in previous eras, not to mention even trying to collate and process it. It's equivalent to someone in previous eras standing over your shoulder and writing down every newspaper article you read, taking notes on every conversation you have, etc. Because it is invisible, it doesn't feel this way, but that is what's happening.
> when we try to fragment this power by privatizing police, prisons, military, firefighting, etc, all of which have many modern examples, things do not turn out well
I never suggested we do that?
I am equally surprised by the comments asking how engineers can implement such systems, how they find it ethical, etc. I'm sorry, but it sounds just a bit out of touch with the real world, or just outside of the HN bubble. Given the things that money motivates people to do, it's probably one of the least unethical things that has been done.
I am not judging that this is right or wrong, I am simply stating the fact that nothing about this should be surprising. Yes, this is slightly sad, but that's simply the reality of technological advancement. It's not really possible to expect the rest of the world to use the technology only for things considered 'right', etc.
Looking at things from a little distance, the whole thing is abhorrent, and paints a really sad state of our society. I wrote this many times, and will keep writing it: if you did the same things personally to your friend that people in advertising industry do to everyone, you'd most likely get punched in the face. And yet somehow marketing became a respectable occupation.
There isn't really much consensus---even on HN---that passive demographic data collection is a bad thing alone. People claim it is, and I believe they feel it is---then they turn around and do things that compromise their stated beliefs because it's convenient.
I liken it to the gap between the rhetoric around open source and free software and the reality that Windows and Mac OS make up approximately 90% of OS marketshare. You can believe what you want to believe, but from a business standpoint you'd be putting yourself at a disadvantage if you structure your business requiring FOSS operating systems to climb to even 25% of marketshare; there's a similar situation, probably, for customer data tracking and advertising preference tracking.
It feels straight out of Black Mirror.
OTOH, what is so interesting in simple face recognition, innit? That future became the past quite fast; meanwhile the human rights never get old. (smile.jpg)
- it should be made clear that you are being analyzed e.g. by big yellow sticker near the camera
- no raw data should be stored
- it should be used to collect statistics, not identify individuals (?)
Is that sufficient to consider such software fair use? What else would you add to the list to make it reasonable?
It's not enough to put a warning next to the camera because you've already captured them at that point and it's too late. If anywhere it would need to be at the entrance to the store.
But guess what? Customers HATE this stuff. The backlash and lost business is not worth it. See: http://abc7news.com/business/philz-to-stop-tracking-customer...
If a store has a warning label on the device engaging in this, it's bad because by then it's too late to choose not to enter the store. I'm gonna complain right away to the store manager, maybe call the cops or sue. I'll be vocal about actively hating the store, the brand, the manager, the employees.
If I went to a store engaging in this without telling and I later learn about it, then I'm calling Keyser Söze and it's pitchforks and beheading time.
I suppose it will take a couple more generations of brainwashing to have the population ready to accept this kind of highly invasive technology. IIRC, about 10 years ago the Big Brother Awards were given to a French industry group for their blue book describing how to condition a population to accept surveillance and control technology over a few generations.
What exactly are you going to call the cops for let alone sue for?
While I don't agree with this kind of technology you seem to be overly emotional about it which really doesn't help the issue.
- do you know that loyalty cards are often used in stores to collect customer data (a kind of offline cookie)? do you consider it a bad/dangerous/unethical or does it sound ok for you?
- if instead of a camera there was a person looking at customers and recording his observation, would you feel bad about it?
Yes I do know that loyalty cards are used to collect data. I think most people do. I don't take loyalty cards for this reason and I'm glad that they are opt in although there is some financial pressure to take them.
- if instead of a camera there was a person looking at customers and recording his observation, would you feel bad about it?
I would feel bad about it and I think the person should ask my permission first.
But if that person just memorizes customer reactions to understand how people on average react to particular products or actions, that's ok, right? Because this is what sellers and business owners do to improve their product. So is it about human-to-human interaction or some more subtle detail? I'm biased here, so sorry if I miss something obvious in this situation.
Look, give up trying to justify it. Customers don't want it. You should find another application for this technology.
My interest in offline applications comes from personal experience: recently we demonstrated our product (not emotion recognition, but it also captures the user's face) at an exhibition. People came to our stand, used the product (so they clearly opted in), asked questions, etc. After 2 days, we asked a girl at the stand, "What do people think about the product?" "Well, in general, they are interested," she answered. Not much info, right? Definitely less informative than "65% expressed mild interest, 20% had no reaction and 5% found it disgusting, especially this feature".
So I don't try to justify this use case - my life doesn't depend on it - but I find it stupid not to try to understand your clients better when it doesn't introduce a moral conflict.
Don't many shops have a generic "you are under video surveillance"? Wouldn't that also cover something like this?
I don't mind being recorded at a checkout.
Recording me to decipher my thoughts instead of my actions crosses a line.
Given how widespread this kind of monitoring is, this approach is basically "I will punish the honest stores and reward a sneaky store by spending my money there instead"
Advertising? No. Sales? Definitely no.
Augmenting that single-player video game so that it adjusts content depending on emotions and gaze of the player? Ok. Better if the player is explicitly told the game will track their reactions though.
Also, another angle. Even for advertisers / "sales optimization", I'd forgive you if that was a local, on-site system. But if it's meant as a SaaS, with deployments connected to the vendor's cloud, then I am gonna actively try to screw with it if I learn there's one installed anywhere I frequent. Hopefully new EU laws will curb that, though.
What about collecting statistics to make better decisions? Let's say you go to your favorite jeans store, but find the current collection disgusting. Does it sound OK to you if some sort of system analyzed your attitude toward the product to improve it in later versions?
You can do controlled user-testing sessions with that system with specific people that have consented and are potentially compensated in some way. You will most probably also get more useful information out of that.
But being recorded "en masse" in a shop for that purpose I would think is invasive. I would totally avoid that shop if I knew that system was in place.
Also, I am not convinced that statistics lead to better design, so that would most probably be just wasteful, but that is another discussion :p
In A/B testing there is a clear context and purpose, and is normally negotiated between actual humans.
There might be middle ground (A/B testing can be done online, use facial recognition, and run at relatively large scale), but for me it has to be opt-in (as in, you have to fill out a form to join), not opt-out (as in, leave this webpage/shop if you don't want my tracking). This is more challenging for the organization proposing the tracking, because they need to provide some value in exchange so people actually sign up for that. But in the long term, being founded on the principle of consensual, mutually beneficial relationships can only be good for your organization/brand, right? As in: at last, a company that treats people like humans!
When these things are used outside a controlled environment, things could get even more complicated: weird beards, squinting because of excessive sunshine, reflective glasses, etc.
1. Accurate collection of facial features. Illumination, occlusions, head rotation, etc. may seriously affect accuracy, but this is exactly our main focus right now. We are at the very start of the process, yet early experiments and some recent papers show that it should be doable.
2. Correlation between real and detected emotional state. At the moment we concentrate on the 6 basic emotions and don't detect less common expressions like depression with smiling face. This topic is definitely interesting and I'm pretty much sure it's possible to implement given enough training data, but right now we try to concentrate on different things.
No, the point of the comment you are replying to is that there are emotions that are impossible to detect using external information. We can hide our emotions very well. The question is to what extent does external emotional information provide monetizable value?
This is an assumption which I'm not convinced holds true. Just because we can hide our emotions well enough to fool other people doesn't necessarily imply that it's impossible to detect them using external information.
Facial recognition is by essence creepy.
I should also state that I think the first use of my data is ultimately unprofitable. Will the extra cents you make by advertising cinnabon to depressed looking people or hairdressers to long haired people really offset the cost of developing such a system? If applied to a broad population any customisation effects will be marginal.
I also believe that the non-anonymous tracking system is much more likely to produce value for companies and it would be very tempting when gathering anonymous data to cross reference with actual individual information. My concern about any tracking system is that by the motivation of profit it could easily shift from an ethical to non ethical space.
For it to not be used without opt-in consent. And to not hold any other benefits/privileges/incentives behind the wall of opting in.
Otherwise, what you're doing is deeply unethical.
The FBI can use automated tools for surveillance - which doesn't speak to ethics directly but indirectly, as we hope ethics drove those rules.
I can sit in my private store and observe people out the window all day, even take notes. That's not unethical; that's a sociological experiment or some such, and it's done millions of times a day.
It may be jarring or creepy to imagine an advertisement is sizing me up. Again, ethics is more than 'does it make people uncomfortable'.
And your 'but what if a person does it' arguments are irrelevant - there's a clear difference of scale between the massively automated systems we're discussing and a single person with a pencil and paper.
Edit: there is more information in the article. Not going to read it, though.
The screen uses a software called Kairos to analyze faces. It can estimate age, gender, and whether you are "white, black, hispanic, asian, or other".
According to the marketing manager at Peppe's Pizza, he thought there would be a label on the screen saying what's going on, but in fact there is just a small sticker on the back of it, which is quite hard to see.
The company making the screen, ProtoTV, says that people should be okay with this because ads on the internet are even more targeted. A government representative says that the system might violate laws about surveillance cameras.
I guess using such a system just to analyze people in real-time might already violate some surveillance laws like you said (in my country all such cameras must be warned about, even traffic cameras). But do you have any idea if those things also record data on customers? I could see how keeping such a database on customers might be dancing on the fine line of creating a "registry", which is pretty heavily regulated by laws at least here. Even more so if there is a possibility to identify real persons from that data.
What a terrible argument considering how universally hated internet ads are.
(If the "Am" part of the name seems familiar it's because it's Alan Sugar's son)
They installed one at a local petrol station so they lost my business. Perhaps I should be more vocal about it.
I'm not saying there isn't an absolute right and wrong, but I certainly don't find this as abhorrent as most people here, apparently.
I suppose a banner saying "you are being tracked" on tom of such a billboard could have quite an interesting effect. Come to think of it I've seen "Smile, you're being recorded!" in some shops.
IIRC the countermeasure is quite easy and requires something like a LED and a battery, but I may be wrong it's been 10+ years.
There are laws and customs around public places and what may be done there. E.g. depending on your local laws, if you're in public, people can usually take pictures of you and there's also nothing you can do about that.
Don't like it? Petition to have the laws changed. This is how we deal with such things in a democracy.
Trying to guilt the engineers who build this system is, IMO, both wrong and completely pointless in terms of real-world effect.
Quite the opposite: in France, "le droit à l'image" is a privacy right that allows anyone to request that any picture taken of them be deleted.
If you're not sharing this position it could be that you are younger or have been subjected to the conditioning of population by industries to make intrusive surveillance technology acceptable to them which has been going on for at least 15+ years afaik.
"if you don't agree with me you're either a kid or brainwashed" - nice.
From what I can tell this is anonymous analysis and classification - this kind of info is useful and I don't mind one bit that it's being collected - in fact if the data is accurate then I like it - I can provide feedback without effort. I prefer it much more than being spammed by pollsters or a service tracking and associating behavior with my profile.
I don't think that's true. Most of us tech-geeks are worried about privacy way more than the average person. I personally don't see this particular use case as too problematic, depending on what is done with the data - as others in the thread have pointed out, you're in a public place, other people could be taking pictures of you or writing down information about you, and I don't think most people are worried about that either.
My view is that as long as the technology exists or can exist, it will be developed and used, so complaining about the people building it is completely fruitless. If you really dislike how it's used, help pass laws against it! Don't go around guilting people for building this stuff.
The difference is scale. It would be prohibitively expensive for every pizza shop to hire someone to collect demographics of passersby. These systems can run on a Raspberry Pi.
It's fine to speculate about an individual's reasoning for why they believe what they believe, but it's entirely useless for determining what the majority believe.
Correction: Someone will use it for evil. This kind of tech is hot stuff for people engaged in evil.
As an ethical programmer, I'd be sure to incorporate security, anonymization, and be able to draw the line so that I can help businesses make more money (since that's what they pay me for), and advance technology at the same time.
This is only unethical when it's used in a system that infers more information aside from general demographics (which, BTW nearly everywhere collects), and makes them vulnerable to interception outside of the pizza company.
and from the same guy:
"Don’t ask how you’re going to pay your rent working ethically. Ask why you’re open to behaving unethically in the first place." https://deardesignstudent.com/ethics-and-paying-rent-86e972c...
The movie was released in 2002, so fairly prescient.
Spielberg: "The Internet is watching us now. If they want to, they can see what sites you visit. In the future, television will be watching us, and customizing itself to what it knows about us. The thrilling thing is, that will make us feel we're part of the medium. The scary thing is, we'll lose our right to privacy. An ad will appear in the air around us, talking directly to us."
Ethical vs. Unethical, Pro-Privacy vs. Against Privacy are the two common discussion points. I, however, think the bigger problem here is that there's a very non-zero probability that this technology may cause unintended consequences simply by relying on false/inaccurate data.
For one, I work in analytics (a loaded catch-all occupation) and I work with people who would marry their "data skills" if they could. In my industry, a false-positive rate of 80% is acceptable, and openly admitted errors in "machine-learning" logic (quoted to highlight my company's buzzword usage; the actual machine learning is practically non-existent) are made daily. People create algorithms, and people make errors.
Let's let our imagination run wild here for a second: it's 2030, and this technology has become ubiquitous to the point where no one objects. Businesses take all the data on sentiment, gender, age, etc. to optimize for their target demographic, and price accordingly. In other words, let's assume this tech is used for perfect price discrimination. Economic theory dictates this is a win-win for everyone, since everyone starts paying their willingness to pay. But let's assume there's a catastrophe and medicine is in dire need. Price discrimination works fine assuming perfect competition, and is a useful framework, but it breaks down empirically because we live in a society that doesn't behave so rationally. Who survives? Those willing to pay the most, and the algorithm worked flawlessly here. But it was not intended to dictate who survives.
What I'm trying to say is that we should be cognizant of the fact that we don't live in a perfect bubble, and technology like this should be scrutinized exhaustively for its effects, including any unintended consequences. We live in a society (duh), and as a society, it is up to us, with the help of policy makers, to determine the fate of this technology.
However, there is one situation where monopolies achieve market-efficiency: when the monopolist can perfectly price-discriminate. This eliminates the dead-weight loss. But, crucially, this also means that all of the gains from trade accrue entirely to the monopolist, as consumers are all paying their own individual 'indifference' prices.
It's a value judgement, but I don't see this as a socially optimal outcome even if it is the market efficient one.
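To put toy numbers on that (my own arithmetic, not anything from the thread): take linear demand P = 100 - Q and a constant marginal cost of 20, and compare a single-price monopolist with a perfectly discriminating one.

    # Toy surplus comparison for a linear demand curve (hypothetical numbers)
    a, c = 100.0, 20.0                            # demand P = a - Q, marginal cost c

    # Single-price monopoly: MR = a - 2Q = c  =>  Q = (a - c) / 2
    q_mono = (a - c) / 2                          # 40 units sold
    p_mono = a - q_mono                           # at price 60
    producer_mono = (p_mono - c) * q_mono         # 1600 goes to the seller
    consumer_mono = 0.5 * (a - p_mono) * q_mono   # 800 stays with buyers
    # total 2400, versus an efficient total of 3200, so 800 is deadweight loss

    # Perfect price discrimination: sell to every buyer with valuation >= c
    q_pd = a - c                                  # 80 units sold
    producer_pd = 0.5 * (a - c) * q_pd            # 3200, i.e. the entire surplus
    consumer_pd = 0.0                             # buyers keep nothing

    print(producer_mono, consumer_mono, producer_pd, consumer_pd)

Same "efficiency" in the textbook sense, very different distribution, which is exactly the value judgement at issue.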
Furthermore, the scenario you highlight is price discrimination of the first degree, where the monopolist captures all the "surplus." Economists generally claim this outcome to be "unrealistic," but it helps us understand the more traditional outcome: https://courses.byui.edu/econ_150/econ_150_old_site/images/8...
As you can see, there is still consumer surplus from a monopoly price discriminating, but at the cost of a deadweight loss.
Economists generally claim this outcome to be "unrealistic" but it helps us understand the more traditional outcome
I agree with this. Just to add, pretty much every simplified market model you would find in an undergraduate-level textbook won't correspond to any market in reality. As you suggest, they're just very simplified models designed to 'kinda point you in the right direction', rather than be taken as a description of reality. The most dangerous people tend to be those who took micro 101 but were never told the latter :)
But I would love to come back to this post in 2050 and be proven completely wrong!
from: "(..) What’s interesting with regards to the book while thinking of this stuff is that the garment that sort of helps to save the day is a t-shirt. You call it the ugliest t-shirt in the world."
for some expansion on the reference.)
a) You're assuming that the AI is looking to find the first face it sees, rather than all faces in view - both your shirt and your real face will be picked up as two separate individuals. Even if it's "one face at a time", why would you assume the shirt gets picked up instead of your face?
b) It really would not be difficult to teach a neural net to detect one real face located above a face on a shirt, and ignore the lower one. The only potential false positives would happen with a taller individual walking with a shorter person in front of them, whereby the shorter person may be filtered out as a shirt.
So no, shirts with faces on them are not a countermeasure. You're adding additional noise, but you're not eliminating your own face from being picked up as well.
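For what it's worth, (b) doesn't even need a neural net; a dumb post-filter on the detector output gets you most of the way there. Here's a rough sketch (my own heuristic, not anything a vendor actually ships): given face boxes as (x, y, w, h) tuples with y growing downwards, drop any detection that sits roughly in the same column as, but below, another detection.

    # Hypothetical post-filter: suppress "shirt faces" detected below a real face
    def drop_shirt_faces(boxes):
        kept = []
        for i, (x, y, w, h) in enumerate(boxes):
            cx = x + w / 2
            below_another = any(
                j != i
                and abs((bx + bw / 2) - cx) < max(w, bw)  # roughly the same column
                and by + bh < y                           # the other face is higher up
                for j, (bx, by, bw, bh) in enumerate(boxes)
            )
            if not below_another:
                kept.append((x, y, w, h))
        return kept

    # A real face at y=50 and a shirt face at y=250 in the same column:
    print(drop_shirt_faces([(100, 50, 60, 60), (95, 250, 70, 70)]))
    # -> only the upper box survives

And, as noted in (b), the same filter would happily discard a shorter person standing directly in front of a taller one, which is exactly the false positive described above.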
That way a camera blackout/missing footage wouldn't signal a covert action, while at the same time one could go on defending democracy(tm) without worrying about creating a media storm about the tyrannical methods used to ensure Freedom(tm).
Also IR LEDs:
I don't really think that's the case (here, yet), but I do think it's scary that it's so easy to do that it's not just done as a proof of concept but actually used in production in a low-tech industry.
Gathering demographic or sentiment data without storing it, cross-referencing it (has this person been here before, etc.), or otherwise using it for anything such as targeting ads is kind of acceptable. I mean, it wouldn't be hard to do that manually via a camera if you wanted to test the engagement of an ad. I'm sort of hoping this is just some tech project from a university or something, and not an actual product you can buy and hook into some adtech service.
Edit: as someone else pointed out, it's not a proof of concept, it's an off-the-shelf adtech product. Because of course :(
Until and unless those ratios reverse, it's going to be that way and you'll have fewer and fewer places to shop. (I'd happily make the case to my boss that he makes more money without retargeting tactics like this if such were the truth.)
How to ZAP a Camera: Using Lasers to Temporarily Neutralize
Google translate is readable, if not super-mega-accurate: https://translate.google.com/translate?sl=auto&tl=en&js=y&pr...
Five years ago it didn't seem so sinister. A lot has happened since then, I guess.
I think this is what they did, anyway.
Maybe in a different political context where one category is sent to death camps, another to forced labor camps, etc.
It's likely hard to legislate against software that attempts to detect if there is a person, what their expression is, and guesses at their gender.
You could imagine that job being done by a person (just noting how many people stopped at the advertisement, and what their expression was). I don't think there's really a way to make that illegal.
I suppose I think it's something that people should be aware of, though.
If you enter most of our stores with a phone in your pocket, you're being tracked. They track where you went, in front of which shelves you stopped and for how long, whether you went to the cashiers or just left...
And if we track people here in the third world, you can be sure you are being tracked much more in first-world stores.
"...peppes pizza in Oslo S."
Quote from their website:
"Audience Measurement included
The information and statistics needed in order to realize audience targeting in DOOH is gathered through livedooh’s integrated anonymous video analysis, which collects information about gender, age and length of view. Audience metrics are used by the ad server’s decision engine to optimize advertisement delivery and increase performance."
The problem is that this is a facility owned by a government agency.
Every time I've visited it's been quite accurate on me, my friends, and on the other visitors.
EDIT: here is a pic of what I'm describing: http://fredly.fhs.no/dancemixbloggen/wp-content/uploads/site...
I think they zeroed in on their demographic, good job!
Seems like you would either need a collapse of the economy to such a degree that cameras and computers aren't affordable... or some kind of extremely aggressive regulation.
I can't see how the latter would ever come about or be effective.
Try picturing a future where having access to clean drinking water is a privilege that only some have, with no cheap energy available and an unstable climate. This is the future we built for ourselves, and admittedly the internet and computers are useless when you don't have electricity.
Now that you have failed to do that, try doing it for one day 5 years ago, a whole week, a whole month, every day since you were born.
A public surveillance apparatus such as the one featured here records everyone, every day, not forgetting a single thing or person.
Try thinking a bit and free yourself from the backfire effect before making claims that make you look bad.
Just because I am not in my house does not mean I consent to being tracked everywhere I go. We make the rules in our society, and with enough political will we can restrict this stuff.
Norwegian law disagrees with you.
If it is hidden (no clear warning) and there's no way to opt-out, then yes, it is unethical imho.
Not as advanced as this, obviously, but industry has been thinking about ways to do automated visual analysis like this for a long time.
Ubiquitous surveillance is only going to get more, uh, ubiquitous. It's the end of privacy but also the end of crime...
It would be interesting to see the ad and how/if it changes based on who is watching.
I wouldn't be alarmed by this; they probably don't even know the accuracy of the algorithm they are using or how to interpret the collected data correctly.
What is absurd is there is someone like you in every privacy-related thread claiming that everyone who is outraged is also a Facebook user, or somehow is fine with what Facebook does.
But here's the thing: you can choose not to use facebook, or deploy some kind of mitigating strategies for online surveillance. On the other hand, ublock origin is useless to protect you from meatspace surveillance and tracking.
Understand that surveillance of public space in the physical world is different from its online counterpart: it is way harder to detect and counter, and as such it ought to draw bigger outrage before it becomes generalized.
If this is true, it's all stuff that they're taking by stealth.
I've chosen not to be on facebook, and I've been shown an account in my name made by someone else, pictures where I'm tagged, public posts and comments mentioning me, and private messages mentioning me.
This is the tip of the iceberg; I have not been shown the facial features facebook has associated with me, the "social graph" they have linked to me, and countless other internal facebook stuff the general public is not supposed to know about.
You can choose not to use the Internet, or only use hardened devices over tor - but it's not exactly the same as "you can choose not to drink coca cola" (incidentally, it might be difficult in places to completely avoid products by the coca cola company, as opposed to just "coke, the coca cola soft drink").
Added to this, the network effect makes fb hard to avoid - people use fb/messenger for a lot of communications, volunteer groups, political groups, education... You are free to argue it's ill advised (and I agree). But wishful thinking aside, "choosing not to use Facebook" may still mean missing social, educational and work opportunities.