I had the strangest interaction with the Messenger app a few months ago.
I was spending time with friends, and I took a few pictures of all of us (didn't send them through either FB or Messenger). A few hours later, Messenger popped up a notification telling me something along the lines of "Hey, I see you took pictures today of <friend>. Want to send them to her?"
Made me feel incredibly creeped out that FB would take my photos and (presumably upload and) analyze them even when I hadn't given them to FB.
"By recognizing your Facebook friends in the photos you take (just like when tagging or sharing photos on Facebook), Messenger can create a group thread for you to share the photos with those friends in just two taps."
I am still shocked by that feature. The Messenger app is installed by millions of people. Does anyone know how the app does the friend recognition? Locally on the phone, or on FB's servers?
Facebook's servers. Phones couldn't handle that much facial recognition that quickly. Plus the bandwidth to grab all your friends' images to compare against would be a metric ton of images, given some selfie addicts.
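FWIW, the matching step itself wouldn't need to ship raw images around: these systems typically compare small precomputed embeddings, so the expensive part is running the detection and embedding models over every photo, which is exactly what you'd want beefy servers for. A minimal sketch of what the matching might look like (everything here, the names, the 128-dim size, the threshold, is my assumption, in the style of FaceNet-like systems):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, friend_embeddings, threshold=0.6):
    """Return the best-matching friend name, or None below the threshold.

    friend_embeddings maps name -> precomputed embedding (e.g. a 128-dim
    vector from a FaceNet-style model). Only these small vectors, not the
    friends' photo libraries, are needed at match time.
    """
    best_name, best_score = None, threshold
    for name, emb in friend_embeddings.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical usage with random stand-in embeddings:
rng = np.random.default_rng(0)
friends = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = friends["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(match_face(query, friends))  # -> alice
```

Against a contact list of a few hundred friends the comparison itself is trivial; it's the per-photo model inference that needs the data center.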
The other day tinfoil me finally gave in and installed the Instagram app. Marshmallow gives me more granular permission control, but naturally I had to give it access to the file system to be able to upload pictures. Knowing about their ways with Messenger, I guess it's only a matter of time before Instagram scans my whole phone for images and videos to perform biometric analysis.
You don't need to take it to the congress; just stop using the services of a company whose practices you don't agree with. If more people would do this, maybe the company will get the message and change its practices.
This information is too valuable to dissuade companies from collecting and using it. You can't do much as a user if everyone is doing it. In many situations you might not even know that you're being tracked; there are already technologies that track you in public places.
Congress is broken, and the tech companies know it, optimize against it, and will continue to make hay while the sun shines (i.e. before the US produces any effective consumer privacy legislation).
Strangely enough, Elizabeth Warren was able to put together comprehensive legislation to work towards ending Intuit's (to name the largest offender) corporate welfare [1] in the tax prep software space. Or is there an anti-private corporation/pro IRS tax prep lobbyist I'm missing?
I don't follow the sausage-making process in DC, so I don't know precisely in that instance, but my guess would be a public advocacy group.
In my state legislature, which I follow, it's really common for various groups to write sample legislation. Everything from duck habitat preservation to open source software.
I just saw that too. Opt-in isn't enough here; the user of the Messenger app should get consent from everyone in those pictures before auto-uploading anything.
Facebook lets you opt out of being tagged, which is about all they can do. The photos don't belong to you, plus they're not going to know that you're in a photo until after it gets uploaded.
People also used to think putting your real name on the internet was creepy. Little by little, the privacy erosion has brought us to where we are today - standing naked in front of a corporate monster profiting off our data.
Wow I never heard of that before, that's insane...
The worst thing is you can't "boycott" the app: even if you don't use it, your friends do, and the pictures of you WILL be analyzed. Even if you explicitly refused their ToS...
Are these kinds of things possible with WhatsApp? (On iOS specifically, and now that it even uses e2e encryption.) I don't know if I should trust them with Facebook behind the wheel...
Facial recognition is really scary. This was recently demonstrated for Russia's Facebook, VKontakte, when a service appeared that let you look up people on that site by photo. So people started looking up the profiles of random subway passengers, outing sex workers to their families and friends, etc.
The scary part is that people are feeding the facebook DB with pictures, and many more data. They did it, they are doing it, and will keep it up, despite everything we told them.
That's the entirely expected part. I don't know exactly which "we" you're referring to, but the tech scene in general absolutely did not warn people about the dangers of uploading these photos, on the contrary, they encouraged it.
I don't think it's reasonable to expect "everyday" people to know the dangers here. There hasn't been any education, so you can't say "nobody cares". Nobody is aware.
He has it right - nobody cares. Even when people are made aware, they still don't care. Look at the security debacle of Snapchat, not to mention its lack of ethics. Even when the issues were exposed, the majority of users didn't care. Same with FB privacy. It's not just a matter of awareness. You have to almost throw a wrench at people to wake the majority up.
If you love React, like most React lovers out there, you love the Facebook team that makes it, and you think everything they do is awesome and super cool: Flux, Flow, React Native, GraphQL, and so on.
If not directly responsible, these people are at least accomplices in all the horrible things Facebook may be doing.
I really don't like having my picture taken anymore. You don't know where it's going to end up.
I also do wonder how much infrastructure would need to be added in a country that already has a lot of video surveillance (like the UK) to implement a "find this person" feature, where you could just feed it a photo and it would go looking at all camera feeds.
Getting your picture taken kind of puts you on the defensive, but that probably isn't going away. You have to be able to go outside, go to the bank, go to work, and go to the grocery store. Each of those places is going to have cameras up for its own peace of mind.
The alternative would be to not go out and enjoy life, which is the worse of the two options.
There are all kinds of ways to allow security cameras while still protecting one's privacy.
But we are not even talking about security cameras, which by default I have given permission for, by shopping in the store, so the store can take my photo for the purpose of security.

We are talking about one third party taking your photo without your permission (or with it), then, again without your permission, transferring that photo to Facebook and other companies. Those companies then, still without your permission, use that photo to build unknown amounts of biometric and other data, and combine it with other photos to build a database of unknown location, shopping, and other data, which is then combined with unknown quantities of other data about you from other sources.
We need to establish the concept of PII ownership. If a company like Facebook, Google, or anyone else has collected information on me, I should be able to submit a request to them, and they should be required to tell me what they have collected and with whom they have shared it. I should also have the option to tell them to permanently purge all records of my PII from their databases.
In terms of sociological development, Middle Eastern countries are considered backwards with their stone-age laws, ingrained religion, and treatment of women as lesser beings, and yet we may soon be wearing burka-like clothing to protect our identities. Of course, apart from the physical look, there is no connection between the two.
Now this is my type of dystopian future; those are very cool, like something straight out of Blade Runner. Fashion with a function; all that's missing is a clear raincoat.
The interesting thing about the Snowden releases was just how good GCHQ were technically. It goes to show that when a government wants to do IT projects, it can. I guess that shows where their priorities lie, at least.
As technology improves, it's hard to imagine regulations keeping pace to prevent this sort of thing ... And even if they did, the technology would then only be used by criminals and governments (while I may call it criminal, they would likely exclude themselves from any such regulation), and I'm not sure that situation (criminals and governments being the only users of the technology) is better than the technology being available to all.
If you have ever watched the TV show Black Mirror, it explores what it would be like to have instant access to everything about a person just by looking at them. The technology becomes a commodity and results in the destruction of personal relationships rather than a dystopian big-brother society.
It seems to me that is a much more likely future than a criminal and government oppression future. Is one of them inevitable? Probably, but not any more than our present is someone else's future.
That is similar to the outcome that Asimov explores in The Dead Past[1] and like you he lets the government off the hook very easily, like it is a benevolent entity that won't exploit the technology already in the hands of the masses when we know this to be untrue. If technology reaches a point where it disrupts all personal relationships the government wouldn't be far behind creating the dystopian "big brother" society you speak of (which depending who you ask may have already been built).
And that, in turn, could eventually be countered by the technology that allows anyone to easily modify their appearance any way they want, however many times they want.
But then, of course, all such services will be required by governments to keep a complete record of a person's "appearance history." Enter the black market, a la Minority Report.
For the scramble suit to work, publicly available 'mixing' stations would have to be available, which would work like a black box, you could only surveil input and output.
And exits from your house would have to somehow be connected to those stations, or have your house change locations randomly.
The latter could work if houses were mostly in the form of larger, modular, worldwide-compatible shipping containers. Also no windows; we would have VR.
I was not familiar with those; perhaps they will. I'm fascinated reading about them now. I guess I don't follow the specifics ... to give an example of my confusion, a journalist can't livestream a protest without blurring out the faces of the protestors?
That's fine, because a face alone is not at the moment considered to be personal data, there's a general exemption for news reporting, and livestreaming is not 'data retention' (and I think not 'data processing' either, for the purposes of the directive).
> As technology improves, it's hard to imagine regulations keeping pace to prevent this sort of thing ...
I don't really see why. The main risks with modern big data storage and mining technologies mostly come from the "big" part. A single random person passing you in the street and seeing you out and about for a few seconds probably isn't much of a threat to you. A casual photo in someone's family holiday album where you were in the background probably isn't much of a threat to you either. Nor is a snapshot from a CCTV camera in a store where you shopped.
On the other hand, a few thousand such records in electronic form, all uploaded to the same site that is systematically scanning them, extracting all the metadata that went with each of them, and then correlating all of that with other data that can identify you from your face... That is a threat to you in all kinds of ways. However, the only organisations capable of posing such a threat are those that have access to sufficiently large amounts of data. There aren't actually very many of those.
Governments and essential services that you use often are one category. In a way, this is the trickiest area ethically, because there is obvious scope for abuse of large data sets and there are obvious security risks to keeping such data at all, but sometimes these organisations also have legitimate reasons for processing such data. The parts of governments that operate in the public eye, which is most of them, probably aren't going to break whatever rules we decide are ethically appropriate and codify through laws or official regulations, though.
Another major category is services you deal with regularly. If you shop at a store and they use surveillance technology for legitimate security purposes, that's one thing. If you shop at a store that is part of a larger organisation and the members of that organisation pool their footage and analyse it for other purposes such as customer tracking and marketing as well, that's a bit different. But again, there are relatively few organisations with the ability to collect, store and process large enough volumes of data to pose a significant general threat to privacy. It seems unlikely that these organisations would try too hard to push the boundaries on what is permitted, if the rules are reasonably clear and the penalties for breaking them significant.
The really shady category, IMHO, is the organisations that collect large amounts of data about you by getting other people to provide it. The moment your friend installs a social network's app on their phone and gives it (intentionally or otherwise) permission to upload and scan the data from their phone, the social network also has your phone number and so on. Of course, in places like the EU where there are stronger data protection and privacy laws, the legality of doing so has been challenged several times.

But the greatest con the likes of Facebook ever pulled in building their massive databases was solving the scalability and jurisdictional problems by getting almost everyone to spy on each other for them. Even if you choose not to participate yourself, at their scale there will be dozens if not hundreds of people helpfully providing a very detailed set of data about you to Facebook anyway, with Facebook actively encouraging them to do so. Moreover, unlike the case with your own government or services you physically visit or use in person, social networks and the like operate internationally and aren't necessarily subject to the same data protection controls, as long as they can rig it so that they aren't actually the ones transferring personal data out of a controlled area, which of course they also solve by getting your friends to do it for them.
What all of these cases have in common is that they are big organisations dealing with lots of people. The data mining and mass surveillance aspects typically only work as a significant threat to privacy at large scales. And if they're big enough to do that, and not secretive enough to literally hide behind special laws in the way that government security services do, they're also big enough to be actively monitored for compliance and if necessary penalised for non-compliance by regulatory authorities.
Sounds like a slightly suboptimal compromise situation that one could grudgingly give in to. But thanks to 3rd party ad networks (trans-national, etc), we can't just not have nice things, we can't even have slightly suboptimal compromise things.
To be clear: the article claims that Facebook can recognize anyone just by seeing them. They haven't demonstrated this ability.
It's far easier to judge who's in your picture among your 20 closest friends than it is to find that face among all two billion Facebook users.
State of the art recognition systems still have a few orders of magnitude of accuracy improvements to go before they can solve that problem.
See the Iarpa Janus project for some recent government+academic-sponsored work on this front: http://www.nist.gov/itl/iad/ig/facechallenges.cfm The task is to recognize terrorists in airport surveillance pictures and such.
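A quick back-of-the-envelope shows why gallery size dominates: with a per-comparison false-match rate p, the chance of at least one false match among N candidates is 1 - (1 - p)^N. The value of p below is made up purely for illustration:

```python
# Chance of at least one false match as the candidate gallery grows.
p = 1e-6  # hypothetical per-comparison false-match rate

for n in (20, 2_000_000_000):  # your closest friends vs. all of Facebook
    print(f"gallery of {n:>13,}: P(any false match) = {1 - (1 - p) ** n:.3g}")
```

At the twenty-friend scale a false match is vanishingly rare; at the two-billion scale it's a near certainty, hence the orders-of-magnitude gap.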
Facial recognition alone cannot identify a person out of everyone in the world. However, combined with one or two GPS points (photo geotags), you can get pretty damn close.
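To make that concrete, here's a rough sketch of how a couple of geotags could shrink the candidate set before any faces get compared (the user ids, coordinates, and radius are all made up):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def candidates_near(photo_geotag, last_known_locations, radius_km=1.0):
    """Keep only the users whose last known location is near the photo.

    last_known_locations maps user id -> (lat, lon). One or two such
    filters turn "everyone in the world" into "whoever was plausibly
    there", after which even an imperfect face matcher can finish the job.
    """
    lat, lon = photo_geotag
    return [uid for uid, (ulat, ulon) in last_known_locations.items()
            if haversine_km(lat, lon, ulat, ulon) <= radius_km]

# Hypothetical usage:
locations = {"u1": (40.7484, -73.9857), "u2": (48.8584, 2.2945)}
print(candidates_near((40.7480, -73.9855), locations))  # -> ['u1']
```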
If you don't want your face to be recognized, you can sometimes prevent it from being detected in the first place.
Our research group recently looked into how to hide from Facebook's face detector. If you're uploading photos, adding white bars around the eyes of the subjects in the photo is the best way to prevent Facebook from finding the face. However, if you're out and about in the real world, even scarves and masks aren't enough - Facebook sometimes finds these occluded faces too. There aren't very many easy answers. https://arxiv.org/abs/1602.04504
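For anyone who wants to experiment, here's a rough approximation of the white-bar occlusion using OpenCV's stock Haar cascades (a stand-in detector, not whatever Facebook actually runs; the filenames are hypothetical):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("photo.jpg")  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes):
        # Draw one white bar spanning all detected eyes, slightly padded.
        top = y + min(ey for (_, ey, _, eh) in eyes) - 5
        bottom = y + max(ey + eh for (_, ey, _, eh) in eyes) + 5
        cv2.rectangle(img, (x, top), (x + w, bottom), (255, 255, 255), -1)

cv2.imwrite("photo_occluded.jpg", img)
```

Whether this defeats a modern CNN detector is exactly what the paper measures, so don't take this toy as a guarantee.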
Years ago, I read about a hat that some Hollywood stars had found useful for fighting paparazzi. Mind you, I don't recall the source, and it may have been tabloid nonsense.
The hat had a circle of IR LEDs (or something of the sort) emitting 360 degrees of invisible-to-the-naked-eye light. Photos of anyone wearing the hat had an effect similar to using a flash in a mirror: the subject was completely washed out by light.
Again, I'm not sure if this was bogus or not, but it seems the demand for such a product is only going to become greater. Hell, I'd buy one.
Does anyone with expertise in this area know if this type of device is technically possible? I'm voting no, otherwise we'd be seeing these devices everywhere, and there would be laws against wearing them.
I had seen the exact same thing but sadly I have no more information on it. From what I saw it caused a wide glare issue around the area with the lights.
Near-red IR is hard to filter without also accidentally making red colors look off, because getting the frequency response to perfectly match human eyes is extremely hard. You don't want the sensor to detect less than the eye does, and detecting just a bit more usually won't hurt.
Not only will it store biometric data; if you happen to use their Messenger to share photos, those photos will remain stored on their servers even after you've deleted the associated conversation. Facebook's disregard for privacy and content control for its users is by far the biggest problem with its platform. And with its latest profit reports, it's bound to get worse.
I think technologists need to help the public understand that if they are going to continue to fully integrate technology into their lives, they will be moving ever closer to a "privacy-free" future.
The trade-off for having tailored services and "smart" systems is privacy from machine systems.
The technology community pines for futuristic technology like amazing machine personal assistants, forgetting that human personal assistants (see: professional executive secretaries) know basically everything about the person they are assisting.
In the case of the human personal assistant the governing terms originate with the person being assisted, and so the data and the ways it can be used remain in control of the owner.
Tech companies are inverting this control, which has very dangerous implications for those they assist.
There are ways to keep control with the user, however these require a level of architecture inversion the consumers haven't fought for yet.
> In the case of the human personal assistant the governing terms originate with the person being assisted, and so the data and the ways it can be used remain in control of the owner.
Well, the data and the ways it can be used are also in the control of the user now; they just all agree to give that control up, often without even knowing it. My question is: how do we talk to consumers to show them the benefits while also informing them of how it works?
> There are ways to keep control with the user, however these require a level of architecture inversion the consumers haven't fought for yet.
I'd be curious to know how you see this working - especially in the case of providing a benefit from network effects. The benefit from these systems is using the inputs of other people, and those by definition release control.
Beyond that case, making each user train their own CNN is an impossible consumer demand. The way Facebook does it with face tagging and Google does it with number/letter recognition, everyone is training the CNN - crowd-sourced neural net training.
You seem to be arguing as though the current situation is preferable and consumers just need to be convinced of that.
Firstly, the user is not in control of the contract. They have no realistic means of modification or exerting pressure, nor resisting or requiring changes. Take it or leave it is not control.
Secondly, you are likely aware of the permissions systems being introduced on modern smartphones. This is an interesting step towards inverting control of the device back towards the user.
Similar steps can be made in other areas. CNN training is certainly a different action from usage, and has different privacy implications, so they should obtain permission independently.
Note how these systems are not permission forever, but permission for now. Handling this sort of data access is a pain, but it isn't actually hard.
> You seem to be arguing as though the current situation is preferable and consumers just need to be convinced of that.
My whole stance is that it's unknown if it's preferable because people don't know what the long term costs are - as no one has told them about it.
Clearly they are gaining value from utilizing these systems or they wouldn't be using them. What isn't understood is if they would still use them if they understood the potential long range privacy implications.
> Firstly, the user is not in control of the contract. They have no realistic means of modification or exerting pressure, nor resisting or requiring changes. Take it or leave it is not control.
In the case of private services, it most certainly is control. Irrespective of that however, these systems aren't basic services or things that people have a right to, so your control argument is in the wrong context. Facebook et al. are frivolities - not needs.
Staying off Facebook is not enough. Facebook could easily write a web crawler to find off-site photos of people tagged on Facebook. They could even create shadow accounts to track tagged people who don't have Facebook accounts. Even people not tagged could have shadow accounts seeded from Wikipedia or news photos.
Writing web crawlers and astroturfing to get all that data is hard. Facebook would more likely just buy that data straight from the source, since most other tech companies are similarly unscrupulous and happy to sell.
The second piece of information I see having potential is the EXIF data of the pics, along with the off pixels and other identifying characteristics of the camera (flare, blur, watermark), artificially introduced or not.

Not only does it prove who was with whom at the same time; it also links profiles. You can create a whole new identity, but if you didn't throw away all your phones and cameras, then Facebook can link the profiles. Or if you sell something on Craigslist and upload to Snapchat with the same camera, it's tied to your identity.
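To see how much linkable metadata a single photo carries, here's a small sketch with Pillow (the filename is hypothetical; some cameras also write serial numbers into EXIF, which makes the linking even easier):

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def camera_fingerprint(path):
    """Pull the EXIF fields that most readily link photos to one device.

    Sensor-level fingerprints (hot pixels, lens flaws) need pixel analysis,
    but plain EXIF alone often carries the camera make and model plus a
    GPS fix, which is enough to start tying uploads together.
    """
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    gps_ifd = exif.get_ifd(0x8825)  # the GPS sub-directory
    gps = {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}
    return {"Make": named.get("Make"), "Model": named.get("Model"), "GPS": gps}

print(camera_fingerprint("photo.jpg"))  # hypothetical input file
```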
Facebook was great when it was a way to connect with close friends and people in your immediate community (e.g. university) and see what was happening around you. Now that it's trying to evolve into this platform that targets you so it can make an insane amount of ad revenue, it has become less and less worth it. Especially with respect to our privacy.
Maybe photos should carry a "robots.txt"-like permissions tag explicitly banning social networks from using them, plus a cryptographic watermark embedded in the picture that is hard to remove, so the networks are not tempted to just delete the tag. Cameras (and apps) should auto-tag files if the user so wishes.
I was thinking along the lines of a "Do Not Track" flag that is voluntarily observed by large social networks like FB. It would be a "Do Not Share" flag.
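As a sketch of what the honor-system version could look like today, here's the flag stuffed into a JPEG's EXIF UserComment field with the piexif library (the tag string is my own invention; nothing stops a network from stripping it, which is where the hard-to-remove watermark would have to come in):

```python
import piexif

def set_do_not_share(path):
    """Stamp a voluntary 'Do Not Share' flag into a JPEG's EXIF, in place."""
    exif_dict = piexif.load(path)
    # UserComment requires an 8-byte character-code prefix ("ASCII\0\0\0").
    exif_dict["Exif"][piexif.ExifIFD.UserComment] = b"ASCII\x00\x00\x00DO-NOT-SHARE"
    piexif.insert(piexif.dump(exif_dict), path)

set_do_not_share("photo.jpg")  # hypothetical input file
```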
Humans have evolved for millions of years on the expectation that other people will recognize our faces. I'm sure we'll come up with a better solution than "file lawsuits to hopefully make computers recognizing our faces illegal".
To be honest, most of these are surface-level traits of an individual. There are deeper traits which are much more personal, and they have even been touched on in popular culture in recent times, like in the new James Bond movie (think gait recognition). But even gait, although highly individual, is still only touching the surface. I was thinking of 'trimming the bloom filter' to such a degree that we can recognize a person not only on sight, but by cue words and individual dictionaries alone.
It is no secret that our world is divided by language alone, so as analysts we can attach certain words to certain behaviors, and this has been proven countless times to expose a person. If I speak English I probably respond to 'pizza' the same way other English speakers do: pizza means food, and therefore pizza evokes a pleasure response. But 'bomb' and other words must evoke a different response, then?
Marketers copped onto this early and frequently use talismanic phrases to elicit positive responses to products, so why not Facebook, and any other tech harvester of data such as Google et al.? Last time I checked, it is not a crime to elicit responses using personal, individualized key phrases.
Which reminds me of the researcher at Berlin's Humboldt University who ended up in jail because he was the only one at the time who used the word 'gentrification'.
Him, and this left-wing group setting cars on fire.
Photos permission access on iOS isn't granular enough -- they should add a third access state for photos that grants an app permission to write new photos to the photo album but prevents it from scraping the photos library ... Or maybe that should be the behavior of the current permission grant ...
Assuming that the concentration and mining of huge personal datasets is inevitable, can we design systems (other than law) that prevent these data to be misused?
You'd have to be pretty vigilant, and it would require others to do the same. There'd have to be an equal, or greater, number of photos tagged as another person. Who is this other person, anyhow? Where's their account? All the data says it's Jill_the_Pill, so she must use pseudonyms. Flag her account for further investigation.
It's best to just stay off of Facebook if you care at all about privacy. At least, that's the underlying message that I get from this, and every other story I read about the site.
> It's best to just stay off of Facebook if you care at all about privacy
Anyone who cares about privacy already does not use Facebook. It's the fact that Facebook still gets your photos/info that is the problem. It's a difficult problem to solve. Simply avoiding Facebook sadly does not cut it.