Possessing a picture of someone's face is OK, but creating a model that represents that face is not OK if that model can be compared to another picture to categorize it.
Possessing a picture of someone's face is OK, and having a human create a mental model that represents that face is OK as well, even if that mental model can be compared to another picture to categorize it.
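For concreteness, the "model" at issue is typically just a fixed-length vector (an embedding) that can be compared numerically against the embedding of any other photo. A minimal sketch of the idea, with a fake deterministic projection standing in for a real learned network (the `embed` function here is purely illustrative, not any actual face-recognition API):

```python
import numpy as np

# Toy sketch of what a "face model" is: a fixed-length vector (embedding).
# Real systems learn the embedding; here we fake one with a fixed random
# projection, for illustration only.

def embed(image: np.ndarray, dim: int = 128) -> np.ndarray:
    """Map an image to a unit-length vector (stand-in for a face model)."""
    rng = np.random.default_rng(42)                 # fixed projection, not learned
    proj = rng.standard_normal((dim, image.size))
    v = proj @ image.ravel().astype(float)
    return v / np.linalg.norm(v)

def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.8) -> bool:
    """Categorize: do two embeddings point in (nearly) the same direction?"""
    return float(np.dot(a, b)) >= threshold

photo1 = np.ones((8, 8))            # placeholder "photos"
photo2 = np.ones((8, 8)) * 0.9      # scaled copy: identical direction
e1, e2 = embed(photo1), embed(photo2)
print(same_person(e1, e2))
```

The legally salient point is that once `e1` exists, it can be compared against embeddings of any other photo at essentially zero cost, which is exactly the capability the possession-of-a-picture case never raised.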
The argument seems to center around how easy/hard or expensive/cheap the process is.
If we had somehow ended up inventing cameras after computers, it's conceivable that we might have regulated them.
Road laws were changed when we switched from horses to cars. The same framing as the parent comment might result in the argument: "Cars are exactly the same thing but faster, why change things?"
But this time the machine processes themselves are controlling the way we talk about them.
> without consent ... invades an individual’s private affairs and concrete interests
Consent, privacy, and concrete interests (possibly includes systematically influencing politics) are all central to this debate.
Secondly, the law is not alien to punishing the same outcome differently based on the tool used. Punching someone, knifing someone, shooting someone, or throwing a bomb at someone will carry different punishments - even if the resultant harm to the victim is roughly the same.
If FB trains to recognize the face of some friend of an FB user, but that friend is not on FB, and never previously signed up for FB, then FB would be clearly in violation.
I think FB does this when people tag friends? I'm not sure though?
You upload a pic of a night out. Tag all your friends in it. And one of those friends is not, and never has been, on FB. Well, clearly, that guy never gave FB consent. And FB is doing it all without his knowledge.
It is about scale of the violation of privacy rights.
Take murder: it's not criminal to think about murder. It's not even criminal to discuss a murder (so long as no "substantial step" is taken). It's not murder if you kill someone but did not intend to; that would be manslaughter. Intending to kill someone and doing it, that is murder and is criminal. If one shouts a racial, ethnic, or religious slur while committing murder, then it is both murder and a hate crime. If one organizes the mass killing of groups of people based on race, ethnicity, and/or religion at a large enough scale, it is genocide.
The law is very capable of distinguishing intent, the acts themselves, and the scale of acts. It's only these tech companies that wish to muddy the waters by spinning this decision into the court outlawing your right to take photos.
Other examples include:
* Trading card game booster packs vs. video game loot boxes
* Taking notes about people who visit your physical store vs. taking notes about people who browse to your website
That said - I do realize I'm in the minority opinion here re: GDPR, etc. But the inconsistency really disturbs me, I don't agree with a ban on facial recognition: who is to tell someone what they can and can't do with a bucket of bits? What about other automated recognition for content moderation - surely automatically detecting nudity is OK by the law? But what if those recognition models wind up doing some form of emergent facial recognition via unsupervised learning? How could someone even verify that?
I would be similarly bothered if this was somehow done manually by a bunch of government spies dispersed all over cities, using pencils and paper. Of course this is impossible, but it shows that neither computers, nor cyberspace, nor the entity doing it are essential components for it to be bothersome. Computers only play a role in this insofar as they make it possible to scale this.
Which might seem wild, but it's actually pretty consistent with the amount of trust we put in companies right now. Facebook could arguably do some nebulous bad thing with their facial recognition data. AT&T can definitely know who you are calling on any call that goes through their POTS system. But we don't have a ban on diagnostic tools for large telephone providers out of fear that they'd abuse their position - I see no reason to ban a technology of large social network providers for the same fear. 
It comes down to trust - as soon as one of these companies did abuse their position to do something widely evil, the market consequences would be dire (to say nothing of the reactionary legislation).
 Possibly a poor example, considering anti-trust w/r/t Bell growing too large - but that kind of regulatory intervention I would hope could be applied equally to either class of company should they be determined a monopoly.
Look at Nestle/Coca Cola/Pepsi sucking the earth dry of precious drinking water.
Or the fallout for bankers over the 2008 recession.
Or Equifax making merry with your social security number.
Or Deepwater horizon.
Or Cambridge Analytica.
Or... you get the point.
I'd argue the market has precious few fucks to give about outright evil deeds. There are countless examples of corporations pillaging and raping the earth for short term gains, and the market thanks them for it.
Free market efficiency holds only for a very narrow range of operating parameters, and regulations are very much needed for tackling the big picture.
The bottom line being: given a fuzzy outline of the line in the sand, I'd still rather the government shoot for it. If they fall short by too much, we'll know soon enough because symptoms will persist. If they go too FAR, well, that's perhaps even better, because I'd rather the default stance on new things be conservative skepticism and caution, and then we slowly allow more things once we deem them acceptable.
But even with the Apple example, aren't they the most valuable company on Earth? Despite having anti-consumer policies like removing the 3.5mm jack or making their laptops nigh-on irreparable. Clearly the market shrugs at those 'tradeoffs' in exchange for Apple shifting more product more quickly. This is what I mean by the market not being the perfect device to self-regulate. Perhaps on a VERY long term scale, but not on human time scales.
They already did a bad thing with their picture data by applying facial recognition to it! I never consented to this use of my pictures and am explicitly against it. Yet they did it anyway. Hence the lawsuit.
Taking notes about people who visit your physical store vs. taking notes about people who browse to your website
We're not talking about a shopkeeper in a physical store taking notes (or even using sinister tech like tracking your cell phone, or facial recognition to track you) while you're in his store.
If you really want to draw an analogy, we're talking about said shopkeeper putting a very sophisticated GPS tracking device on you, one which not only reports which shops you visit and what you buy, but also looks over your shoulder to track what you read in the library.
But the inconsistency really disturbs me
All these new technologies (facial recognition, automated photo editing, etc) are good things when they become widely available to everyone instead of just being the domain of institutions, corporations, and governments. All legal decisions like this do is prevent the spread of abilities to individuals and make the growing power imbalance worse.
It's not even close to equivalency. The contents of booster packs are predetermined before the moment of purchase. Whether you buy them or your buddy buys them, you'll get the same cards. With video game loot boxes, you can modify the contents of the loot box at the moment of purchase depending on who the buyer is. Perhaps it's a popular streamer, so you'll boost his win rate so that people watching the stream can go "wow, that was so worth it, I'll buy some boxes too". Perhaps you'll boost the win rate for a whale, only to get him hooked and then put him on a dry period with no good loot at all. The online nature of such transactions offers far more capability for abuse.
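A toy sketch of that difference (all names and odds made up for illustration): a physical booster's contents are fixed when it's printed, while an online loot box can consult the buyer's profile at the moment of purchase.

```python
import random

# Physical booster: contents fixed when the pack is printed/sealed.
def print_booster(seed: int) -> list[str]:
    rng = random.Random(seed)          # sealed at the factory; same for any buyer
    return rng.sample(["common"] * 5 + ["rare", "mythic"], k=3)

# Online loot box: contents decided at purchase time, and the server
# is free to peek at who the buyer is (streamer boost, whale drought, ...).
def open_lootbox(buyer: dict) -> list[str]:
    rare_odds = 0.05                   # baseline
    if buyer.get("is_streamer"):
        rare_odds = 0.50               # make it look worth it on stream
    if buyer.get("is_whale"):
        rare_odds = 0.01               # dry spell to keep them chasing
    return ["rare" if random.random() < rare_odds else "common"]
```

`print_booster(7)` returns the same cards no matter who opens it; `open_lootbox` can't even be audited from outside, since the odds live server-side and can change per buyer.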
Imagine entering an electronics store in a mall and looking at a fridge. You leave the store, but an employee of the store had spotted you and what you looked at. They follow you out the door and into another store. There they whisper to an employee of that store "psst, give me $5 and I'll tell you what this guy's after". The other store's employee then proceeds to offer you a discount on a fridge.
If some store had been doing that in meatspace people would have been up in arms, but for some reason because it's a "re-targeting ad" on a computer people have mostly been blind to what's going on.
Facial recognition is the same. If you were doing it on the scale Facebook is doing it but in meatspace it would be illegal.
There's no "to me"; the argument is there in the clear and you can respond to it in terms of its content instead of the unknowns that go on in your head.
> I don't agree with a ban on facial recognition: who is to tell someone what they can and can't do with a bucket of bits?
Classic reductionist argument. They're a "bucket of bits" insofar as your physical body is a "bucket of atoms".
It's not "but do it on a computer", rather it's "but when it becomes unprecedented"...
We are facing a completely new type of society where these types of entities are very likely to exist, we need to figure out how to deal with this.
> We are facing a completely new type of society where these types of entities are very likely to exist, we need to figure out how to deal with this.
We are dealing with them through laws and regulations.
But, there's a lot of unprecedented technology invented every day. And for the most part we do just carry on as usual. That's part of what made the original internet such a beautiful place - we didn't try to waterlog it with proactive legislation because of potential bad things that could happen. Instead, that freedom created the wonderful ecosystem we have today.
The (US) law eventually reacted, but for the most part did so in a measured and reasonable fashion to form things like DMCA and CFAA.
Conversely, it's my opinion that proactive lawmaking leads to disastrous and overly broad, unenforceable, and burdensome laws like SOPA and CALEA. And I see this kind of ban on facial recognition to be firmly in the proactive category.
I think that rejecting proactive legislation per se is a dangerous attitude. For example, see climate change. Proactive legislation could have made us avoid all of the discussions we are now working through during crunch time...
If we have reasonable evidence that there is a high likelihood of us creating worlds that we don’t want to live in, we should take reasonable action proactively to avoid those scenarios.
Thus, I agree with you that not all unprecedented technologies need to be proactively legislated but as soon as there is reasonable evidence for possible negative consequences we should start reasonable processes to avoid those consequences. There is no black or white situation here, we need to have evidence based discussions and work our way through this collectively.
In other words, for violating the spirit of the law, but on a computer.
What about a picture I took myself of a public place in Chicago with people's faces in it? Those are certainly my bits.
Also loot boxes - imagine if Wizards of the Coast walked around and gave away MTG booster packs on the street for free but told you "pay 10 cents to open it and you could get this cool stuff". That sets a completely different mental model in your head.
Would that be illegal though? I think my point stands - you could do that in real life and no one is really disturbed by it to campaign for it to be outlawed. You can even see it happen if you ever ride the metro, panhandlers will give you a book/trinkets/snacks and ask for a "donation" for you to keep it (most people refuse and give back the token).
This is more like standing in front of a kindergarten with an ice cream truck before lunch on a hot summer day and giving away free ice creams, but you can't open them, just look at them, unless you pay. It's abusing human psychology without you asking for it - you just want to play the game and they are pushing your brain's buttons to get you to buy stuff. Maybe it won't work on you, but imagine how it's "brainwashing" the younger generations.
But it's kind of "exit through the gift shop," right? All you wanted to do was ride the roller coaster, but now you've got all of these trinkets and refreshing beverages that you can touch and pick up, but you just can't have without paying. And to me that just seems part and parcel of modern capitalism, so I'm not really in favor of outlawing a particular flavor of it.
Net neutrality is also threatened by capitalism. Do you also want that gone?
So I can have two pictures and look at them by hand, and use my own brain to categorize them. But the minute I use a computer to do so I have committed a crime.
It's this kind of stance which is going to make machine learning algorithms that casually identify people for convenience's sake illegal. What the people in the court system have decided, is that they are frightened of being found out as hypocrites - as people who take bribes, and people who do bad things in public.
You can always do bad things in private. No one can stop you from doing such things, especially when they come short of murder.
There is essentially no difference between this technology, and the technology used for license plate recognition, but license plate recognition is totally legal.
In many places, if you were pulled over by a cop, you would pay a hefty fine, and the automated license plate readers will simply fine you a bit less or let you go to court. What's wrong with recognizing people that are committing crimes and sending them a notice, "We were about 1% sure this was you doing something wrong, so here's a 1% fine." Everyone agrees such systems will be initially imperfect, but everyone is also totally agreed that present law enforcement is imperfect. I fail to understand the problem.
Unless of course, the problem is that everything is really run by criminals, and we're getting way too close to knowing the truth. That could be a problem.
Edit: I appreciate honest debate more than downvotes. If I have a shitty idea, please tell me why. Unless of course, you are afraid of honest discussion. Cowardice is always acceptable here.
Well then why not skip the whole imperfect recognition part and simply fine the whole population each a fraction of the fine? We are 100% sure that the criminal is within the population. So let's just fine everybody a 1/$population_size fraction of the fine. The adequate fine will be paid, the criminal will be hit, and hey, the system is not perfect, but what is?
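The reductio's arithmetic, spelled out (all numbers made up):

```python
fine = 500.00            # hypothetical fine for the offence
population = 1_000_000   # everyone we are "100% sure" contains the culprit

share = fine / population            # what each person pays
print(f"${share:.4f}")

# The full fine is collected, and the actual criminal pays the same
# negligible share as everyone else -- which is exactly the absurdity.
assert abs(share * population - fine) < 1e-9
```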
If someone picked me up based on fingerprints, I would much rather them say "We found prints matching yours at this crime scene. Statistically, there are 5 other people with prints like that in the area. We have circumstantial evidence that points to you as well. Since we aren't certain, but we have your prints, we'll fine you 1/5th of the full amount for the regular violation." Currently, we make mistakes that lock people up for years based upon uncertain accusations - it's always all or nothing. This puts way too much pressure on investigators to get it right and, in the case of murders, to put someone away no matter what.
I'll tell you what - if they're going to make the mistake anyway, I'd rather have 1/5th to 1/20th of the time to serve.
Edit: This is the 21st century. We're also not restricted to simply imprisonment. Other factors can be affected. It would be interesting if labels could be placed upon those who are suspect, like "suspicious". This could prevent certain purchases and types of travel which would make the kinds of crimes they are suspected of committing more difficult.
That sounds so dystopian
And gun violence in America has already shown what happens when we don't take common sense measures such as this seriously. If you knew someone had a 50% chance of shooting up a Walmart, would you casually say "Well we don't know that for sure." or would you make a list of all of those suspicious people and track them?
We already have what is a dystopian credit scoring system, and I don't see people in a panic, or protesting in the streets, excepting for an occasional person writing that their data is used unfairly.
I'm in agreement really. If people don't want their information used for a credit score, they should be able to refuse to have their info used for it, but of course that comes at the cost of not having good credit.
Social credit scores could function quite similarly.
Move over, mesothelioma TV commercials. We have a new target for the bottom-feeding lawyers.
It seems to be the case that a lot of companies instead focus their resources on tort reform bills to neuter the laws.
When are we going to start standing up to companies who violate our privacy like this?
The ruling is specifically about the facial recognition part of FB. They could end up paying $5 billion just because of facial recognition stuff.
Imagine if you put the Konami Code on the Google homepage, and got sued and lost for it. Then got fined $1 billion for it. You'd feel like you made a pretty bad decision there.
Obviously facial recognition is closer to the core of what FB is doing in general but it's still a lot of money for an incidental part of their system! They can pay it, but that's $5 billion they can no longer use to buy like... 10 startups or whatever.
I robbed this house, but it was only the house-robbing part of me, go easy.
But no, the unfair jerks put all of me in jail.
Facebook is likely making a lot of money off of, as you claim, something that is close to the core of what Facebook is doing. It makes $5 billion closer to a small cost of doing business.
As a means to increase user engagement (who then will volunteer more information by posting, liking and clicking around, especially on ads), I'd guess the same, that this also doesn't add much.
What's more, it's not just a one time fine. It's an order to stop certain actions. If they do not comply, then they will get fined again and again.
Can I sue them for them to stop?
Screenshot of the relevant settings: https://i.imgur.com/Xg4wkPV.png
According to their relevant help page about it (https://support.google.com/photos/answer/6128838?co=GENIE.Pl...), turning the setting off also deletes:
- Face groups in your account
- Face models used to create those face groups
- Face labels you created
I never signed it. I just included the unsigned piece of paper (among others I disagreed with) in amongst the pile of papers I turned in to them. Nothing ever came of not signing but I suspect I could sue them now.
She was allowed to have a private photo site that all of the parents in the class had access to, and upload photos there. The parents were not supposed to re-share those, but some did.
SchoolBench allows you to match profiles with media consent, so you can work out what photos are able to be published or not.
> SchoolBench allows you to match profiles with media consent, so you can work out what photos are able to be published or not.
How does SchoolBench get a picture of my kid if I did not consent for the school to use pictures of my kid?
From there you can identify whether the photo overall is publishable, and if not, what students aren't. You can then crop/edit the photo to remove students from then on.
All of this happens in an on-premise VM, rather than calling out to a cloud service. Obviously you can run up an instance in AWS, etc.. if you want to cloud host, but that is the school's volition.
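The matching logic described above might look something like this (a rough sketch under my own assumptions; these names are hypothetical, not SchoolBench's actual API):

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    media_consent: bool   # from the signed (or unsigned) waiver

def photo_publishable(students_in_photo: list[Student]) -> tuple[bool, list[str]]:
    """A photo is publishable only if every identified student has consent.

    Returns the overall verdict plus the students who would need to be
    cropped or edited out before publishing.
    """
    blocked = [s.name for s in students_in_photo if not s.media_consent]
    return (len(blocked) == 0, blocked)

photo = [Student("Alice", True), Student("Bob", False)]
ok, to_crop = photo_publishable(photo)
print(ok, to_crop)
```

Note this sketch assumes the students in the photo have already been identified, which is precisely the step the downstream comment questions: identifying a non-consenting student arguably requires processing their image in the first place.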
> From there you can identify whether the photo overall is publishable, and if not, what students aren't. You can then crop/edit the photo to remove students from then on.
> All of this happens in an on-premise VM, rather than calling out to a cloud service. Obviously you can run up an instance in AWS, etc.. if you want to cloud host, but that is the school's volition.
But if the parents don't sign the waiver allowing their kids photo to be used, how do you process the photo to determine if any kid in it has not had the waiver signed? Isn't the act of processing the image a violation of the agreement (which is effective since the waiver was not signed)?
It's an invasion of privacy of all cars and drivers in the vicinity and should be illegal.
2019: Are you scummy enough to be a Facebook engineer?
In other words, this is ineffective. I hope EU cripples them, not even $5B FTC fines scare them.
Does anyone know if other states have similar laws? Wonder what type of momentum would have to manifest for other state legislatures to get a similar bill into committee for debate.
It looks to me like momentum on facial recognition is building now. Call your state reps, tell your friends, find folks who feel the same way and make some noise.
I deleted my Facebook a few years ago, so if some class-action suit comes out for people who were users in 2018+, where's my payout? How is such a system fair to people who had the sense to either delete before whatever time horizon is used in a case or people who never created an account? None of these people who could win the lottery in court suffered a real loss.
Yet the legal system gives people an incentive here to sign up for free services so they can one day reap the rewards when Free Service X slips up and breaks State Law Y. Admittedly, the rewards will be small. But nonetheless, it is an incentive and aggregated across society it isn't nothing.
This sort of litigious behavior just slides us further and further into a culture of dishonesty and makes a mockery of the justice system.
Also, let me be more specific, given there's another thread on here about this. What makes anyone think they're entitled to "$5,000" for signing up for a free service and uploading their photos to it? This is absurd. Please explain specifically how running a facial recognition algorithm on a person's photos is equivalent to "$5,000" worth of damages. Where did that number come from? Why not $1 or $1,000,000?