These data are being released in hopes of furthering the development of these techniques - which ultimately benefits future use by Facebook.
In turn, people like "pesenti" have pointed out several times that the data were collected in Facebook offices or other places where all participants consented. The implication is clear: while the present data are legal, they would not be if shot in the "wild".
In many jurisdictions, there is no legal use case for these methods. It is literally not possible to have a widely-used implementation of these glasses that would not violate the privacy rights of "bystanders".
Of course, Facebook will work hard to make it legal, on all fronts. Either laws need to be changed, or, as pointed out by pesenti, some innovation may arise to make you at least an anonymous blob, separate from name and address. And if you do not have an account - so the argument goes in this thread - what is there to worry about?
Well, even if the association were only indirect, any data collected from you (with increasing sophistication) would become part of Facebook and its models. Even as part of a group of anonymous users, you would become less and less anonymous and more and more explainable.
If you are so far unconvinced that this matters, please also be aware that the only thing standing between businesses and the extraction of the entire surplus (or consumer welfare) is asymmetry of information.
As soon as your needs and wants are sufficiently modeled, companies will use these data to maximize the profit from all their interactions with you. You will, in a sense, pay monopoly prices without facing monopolies.
Again, read the replies carefully. See how legal constraints are certainly something that is "to be solved in the future". However, whether you should ever be afforded privacy or anonymity in the face of facebook's algorithm is implicitly answered, with (frankly) a worrying amount of arrogance and dismissal: no, you and everything about you should no longer be any unexplained variation for facebook. One way or another, facebook plans to uncover you, and there is no negotiating this point.
You must now realize that Facebook is not interested in your name and birthdate; they are interested in being able to predict everything about you without these personal data. Facebook wants a model, one that is fundamentally opposed to your welfare. And this objective will be realized.
Take heed of this, better now than later.
It's pretty simple to see, but a lot of people don't want to see it.
Granted, many of us already collect and give away significant amounts of very personal data, some of which regularly gets leaked or stolen; but first person video and audio recordings?... scary.
And then with Facebook... I trust Zuckerberg with my life data as much as I trust his choice in hairstylists.
They took a truly golden opportunity to connect people and ravaged it, then turned to acquiring every possible competitor they could, as their main product turned into hot garbage.
Even AOL garners more of my respect than FB, how is this even possible?! FB needs to die like AOL died. It'll only take the boomer generation to die out. Unfortunately, Instagram is still a decent property, probably because they didn't let Zuckerfuck fuck it up. Expect Instagram to become cancer too once FB is no longer the main profit center.
Just imagine the possibilities of large-scale manipulations, tho!
Remembering my shell history for commands I run locally on a single computer, stored in a format I can control/edit, that is never transmitted to other people or used for advertising, that is fed into a simplistic history algorithm I have complete control over, and that I can toggle on and off at will even for individual commands -- that's the same as a 3rd party analyzing on remote servers everything I look at while I'm walking around the real world?
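The local-history properties the comment above lists (plain-text storage you can edit, recording you can toggle globally or per command, nothing ever transmitted) can be sketched in a few lines. This is a hypothetical toy recorder for illustration, not any real shell's implementation; the class name `LocalHistory` and the leading-space convention (borrowed from bash's `HISTCONTROL=ignorespace`) are assumptions of the sketch.

```python
import os
import tempfile
from pathlib import Path

class LocalHistory:
    """Toy command-history store: a local plain-text file, fully user-controlled."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.enabled = True  # toggle recording on/off at will

    def record(self, command: str) -> None:
        # Skip recording when disabled, or per command via a leading space
        # (mirroring bash's HISTCONTROL=ignorespace convention).
        if not self.enabled or command.startswith(" "):
            return
        with self.path.open("a") as f:
            f.write(command + "\n")

    def entries(self) -> list:
        # The format is just lines of text: trivially inspectable and editable.
        if not self.path.exists():
            return []
        return self.path.read_text().splitlines()

# Fresh file so the demo is deterministic:
demo = os.path.join(tempfile.mkdtemp(), "history.txt")
hist = LocalHistory(demo)
hist.record("ls -la")
hist.record(" secret-command")  # leading space: not recorded
hist.enabled = False
hist.record("also-not-recorded")  # recording toggled off: not recorded
hist.enabled = True
print(hist.entries())  # only "ls -la" was kept
```

The point of the sketch is the contrast: every knob (storage location, format, on/off state) sits in the user's hands, which is exactly what a third party analyzing your field of view on remote servers does not offer.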
> Forecasting: What am I likely to do next (e.g., “Wait, you’ve already added salt to this recipe”)?
> Hand and object manipulation: What am I doing (e.g., “Teach me how to play the drums”)?
> Audio-visual diarization: Who said what when (e.g., “What was the main topic during class?”)?
> Social interaction: Who is interacting with whom (e.g., “Help me better hear the person talking to me at this noisy restaurant”)?
Now consider the fact that Facebook is actively suppressing some subjects from popping up in their users' feed.
If users become reliant on this AI to initiate, guide, and execute most day-to-day tasks, then Facebook will literally have the power to make certain aspects of their users' lives disappear.
For example, you have a contact who criticizes Facebook or their business partners too much? Don't remind them of their birthday.
Just as a reminder, Cambridge Analytica also started at a university. And many politicians seek advice from academics who are directly or indirectly incentivized by Facebook - through grants and/or through access to data others don't have.
> What are you all doing to allow those of us that don’t want to be a part of your utopia to opt out?
Answer from Facebook VP of AI:
> I encourage you to read the details of the news that doesn't actually answer your question but gives you an idea of how we think
I'm glad to see that even on an industry-specific forum like Hacker News, we can't get straight answers from people like this. Giving people "an idea of how we think" seems like a great way to stay vague enough that your thinking can change however it needs to over time to fulfill your business objectives, health or safety of the community be damned. Facebook doesn't need to give us vague "ideas", it needs to provide straightforward answers to straightforward questions.
> AI-powered wearables will have stringent privacy guarantees, both for the wearer and the people around.
Reading a sentence like this from a Facebook exec completely stretches credulity for me. I have zero confidence that your definition of "stringent privacy guarantees" is anything close to something that actually benefits the community as a whole.
We don't want HN to be a hostile place that tars-and-feathers people who show up to explain another side of a story—especially not on a topic they know about. For most of us, our work is what we know the most about. Therefore, people showing up to discuss something related to their work are among the highest-value contributors HN can have. We don't want to disincentivize that, and attacking them for it goes directly against the mandate of the site. You don't have to agree, obviously, but you do have to stick to the site guidelines. (Btw, that also means that you shouldn't be posting generic-indignant rants here.)
Edit: I just noticed that you got much nicer later in the thread. That's way better—thanks.
Facebook just released Ray Ban smart glasses that don't look any different from standard Ray Bans and don't provide a solid design affordance to communicate to others when they're active, so I don't really get the feeling that Facebook cares at all about who is being swept up in their systems.
What I can assure you though is that if you are not a user, we are not recognizing you in these pictures and videos. We won’t do that without explicit consent from users.
I do have a follow-up on this if you'll entertain me: why is it that Facebook stores content of people that it doesn't recognize?
Granted, I'm but a lowly web developer, but it seems like creating business logic that automatically removes content with people that aren't Facebook users would be pretty straightforward to implement. You've already solved the hardest part of that problem, the facial recognition, so why not go all the way?
Moving forward, you'd have a really simple approach to privacy that's transparent and that people understand without needing to get into the weeds.
Receiving assurance that I'm not being recognized in photos and videos isn't very comforting when I see Facebook releasing products like the "smart" Ray Bans. Recognizing people in images is only one of many types of data that Facebook gleans from that content, and I don't want anything involving me being processed in any way by that company, whatsoever.
Which is a basic right under GDPR.
No private entity is allowed to store or process data about you for their purposes without your consent.
Didn't FB just say they are doing it, though?
Facebook trains its models on your data, in particular your pictures and (in the future) your video - your behavior, voice, face, etc. But fear not: your name will not flow directly into the algorithms (until you sign up for an account - and the rest of your data is already there!)
And as pesenti openly (and quite arrogantly) stated just one post up: there is no conceivable way for you to ever do anything about it, except if FB stops operations of this kind.
Which they can't, of course, because it is the very basis of their business model.
In several countries of the EU, if not all, you have the right to decide whether anyone takes picture material of you, and how it is used. This, of course, includes video.
As I wrote elsewhere, there is no way for such glasses (that are uploading to facebook) to ever be used legally in these countries. Even if they merely feed an anonymous algorithm, they are still based on illegal actions. And what facebook likes about it, is that it's the people wearing the glasses who break the law. And the small details about uploading and processing the data... well who's gonna sue (successfully)?
So, the replies in this thread indicate that facebook will develop this technology, knowing full well it will profit from people breaking the law with it. They slightly cloud the issue here (open, data, yada yada) in hopes countries do not disallow such tech from the get go ... but only slightly. No, they are cocksure about this scheme working out.
In the US, of course, there's no issue in the first place.
The lack of data-privacy regulations is likely rooted in the fact that the US never had an experience like the EU's with the Nazis, who literally killed people on the basis of data.
Sure, the US famously "kills people based on metadata"¹, but not US people, so most US people don't care.
But I've also heard that some people in the US start to recognize the value of data-privacy regulations. That's positive to note.
Do you gather data about those "unknown" persons?
Do you try to match information about "unknown" persons from different sources?
This part is easy to answer: if a user uploads a photo of you and there are other people in the photo or a recognizable location, then that's data about you. Even if it's just stored as a photo now, it'll take a neural network a millisecond to turn that into feature vectors that can be used to build a knowledge graph. This goes for Dropbox as well as Facebook.
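To make the parent's point concrete, here is a toy, stdlib-only sketch of the linking step: once two photos embed to nearby feature vectors, they can be merged into the same identity node of a knowledge graph. The three-dimensional vectors and the 0.95 threshold are invented for illustration; a real face-embedding network produces much larger vectors, but the comparison logic is the same.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Pretend these came out of a face-embedding network (hypothetical values):
photo_from_user_a = [0.9, 0.1, 0.4]     # you, in one friend's upload
photo_from_user_b = [0.88, 0.12, 0.41]  # you again, in someone else's upload
unrelated_photo   = [-0.2, 0.9, 0.1]    # a different person entirely

THRESHOLD = 0.95  # link two photos to one identity if similarity exceeds this

same_person = cosine_similarity(photo_from_user_a, photo_from_user_b) > THRESHOLD
different   = cosine_similarity(photo_from_user_a, unrelated_photo) > THRESHOLD

print(same_person, different)  # prints: True False
```

The two uploads of "you" link into a single identity even though neither uploader tagged you; that is why a photo stored "just as a photo" today is still data about you.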
It's the usual trick to be able to say afterwards: "I didn't say that! You just made up some interpretation. I'm not responsible for your interpretation."
pesenti - I respect that you come forward like this, attempting to push mankind's boundaries. Unfortunately, FB is probably the second-worst company ever (first could be Palantir) to take this on. I don't trust them an attometer, and never will. Any attempt to change this attitude always ends badly; see e.g. Carmack.
Very mixed feelings is probably the best description I can give.
I don't care to hear about the consent you received for this controlled study. I care to hear about how I/we can opt-out from something like this when it is eventually deployed in the wild en masse as part of an actual product. I'm pretty sure that's what OP was also asking about. Can you elaborate on that?
Here are the principles we are using when developing these products:
In practice, it will mean that some things you will know about but don't necessarily give consent to (e.g., someone taking a picture or a video of you), while others will likely require consent (e.g., recognizing you in these pictures/videos).
This would be outright illegal in the EU.
Even if someone is taking photos / videos of me, this person is not allowed to share those with third parties without my consent.
If this content is going straight to FB this wouldn't be legal in the first place.
Of course FB is fine with that as they just shift the responsibility for the illegal actions onto the person who is uploading things without consent. (Same "trick" as with phone contacts upload).
FB is using a legal loophole here. Nobody will sue his / her friends. And even if someone tried, it's after the fact anyway: FB can't be forced to "unlearn" the gathered information.
Is that really true? Does that mean every tourist taking photos with an iCloud connected phone is violating some European law?
This makes me and so many others upset. I hate that people can upload photos of me to social media without my consent. And with the new glasses, it’s disingenuous to even say I’ll even know it’s happening. The little light on it is a bad solution.
Coming from a VP, that doesn't surprise me. The problem is trust, and it is continuously eroding further and further the more we get to know how your company operates.
For example, can FB provide a list of recording locations and times, so I can request my image/audio be removed?
How good is facebook's anonymisation system? What % of faces can be removed (on average)?
Has every single piece of footage captured with Aria been anonymised?
"Participants will only record in either Facebook offices (once they reopen), wearers’ private homes (with consent from all members of the household), or public spaces, and won’t record in private venues without written consent from such places. Before any information gathered in a public place is made available to our researchers, it will be automatically scrubbed to blur faces and vehicle license plates."
So we anonymize all the content collected in public. I don't have the stats for the face blurring algorithm, and while you can ask for your data to be removed, we don't provide locations/times. These are good suggestions though.
I see some contradiction here.
You don't know "the stats for the face blurring algorithm" but you're saying you "anonymize all the content collected"?
If the stats don't say it's 100% (which is impossible afaik if done by machines), you obviously don't anonymize all the content collected upfront.
1. You don't think upfront about the consequences your products have for people's privacy.
2. You don't care about the consequences your products have for people's privacy, and leave the particular details for the lawyers of how such products could still be distributed legally.
Which of these explanations should we prefer?
So just to be sure: Is this feature one of the goals of this research, yes or no?
From the same post:
> How would such other people who don't want to be recorded for these purposes opt out?
To make the obvious very explicit: The previous question (which you just praised as a "good question") is about this feature. Wasn't this clear to you until now? I'm wondering. This is a simple conversation thread not hard to follow.
Then how about you try again and answer it directly this time?
I think in the coming years FB should invest more in a public conversation on these issues. How can we meet the future in a way that our data is, in actuality, our data, even though it's stored by a third party such as yourself?
Especially in light of the FB model, where the user seems to be the product, and advertisers the client.
And I agree completely on the public conversation, this is why we released this dataset and why we are doing project Aria.
How do we prevent becoming "walking surveillance machines"? How can we control the scope and the issue of consent? These are more rhetorical questions, but hopefully they are part of the dialogue internally at FB.
but I don't expect you to take it at face value yet, we have a long way to go.
Nobody should believe Facebook's privacy promises when the largest penalty it has ever received for profiting from user privacy violations over many years equates to one month's revenue. The incentives are simply not strong enough for Facebook to be a good actor in this area.
Wouldn't it be nice if selected external researchers too could collect data on the Facebook platform with the explicit consent of users?
Who knows it with greater accuracy, you, or FB?
Second, you need to take the "argh, it's Facebook, boo hiss" hat off. Then apply some critical thinking. An AR world that is connected to the physical one _requires_ this kind of thing. Your device needs to anticipate what you are going to do so that it can work out the probability of your need and act on it. Like Jeeves, but less able, and more annoying. As it's ML, it needs a massive dataset to train on; this is <0.5% of that dataset.
Depending on how things are done, if facebook are first to market with a usable AR system, they will be forced to have anonymisation built in (as in removing faces at the sensor level, unless you have permission to remember). Apple will have a showy "pixelation" layer that is mostly ineffectual, but will PR it out to make it seem like they've cured cancer. Google, if they ever manage to get back into AR, will just make things cheap and let the shitty Android marketplace spy on whatever it likes.
You also have to remember that the power budget of these glasses is absolutely fucking tiny. All-day screen, SLAM, AI, and possibly music will all need to fit into ~2-5 watt-hours. This means that virtually everything will need to be on-device (dropping to the cloud eats a boatload of power).
Now that's not to say that 100% accurate (if that's even possible) "diarisation" won't rip society apart.
There are lots of questions that need to be answered; the problem is, tech journalists are ill-equipped to ask them. Most of the teams designing the glasses are well-meaning, but their targets and life experience have not equipped them to do a good job at ethics.
This is an antipattern of change. It does not need to be perpetuated, because it reproduces the permanence of the constructs we want to dismantle. Even more importantly, these entities are fully aware of this and craftily guide the masses toward individual responsibility as a default, knowing that it won't effect real change. The antidote to this is critical collective action, which recognizes that the impossibility of change is itself an impossible stance to take.
Collective action doesn't do itself, and the words "collective action" don't constitute a plan. Individual evasions at least make things more expensive to do.
Also, individual action doesn't crowd out collective action. Not protecting yourself doesn't create collective action. A concrete plan with concrete individual actions to take creates collective action. Individual sacrifice to create institutions creates collective action. IMO you should be ready to tell people where to show up before you tell them not to help themselves.
My point is that "If you can't stop Facebook snooping" is a self-limiting assumption. Of course we can stop Facebook from snooping. Facebook's ability to snoop relies on social, political, legal, and economic assumptions, all of which are constructed by people and therefore aren't immutable constants. Our progress toward changing that starts with realizing that all of these things are things we made and therefore can change.
- You could factor in the environmental cost of each recommendation and have it drive down the recommendation's score: I think the "fast-fashion" idiotic anti-environmental trends would die out faster then.
- You could factor in, based on some overarching ethos, that each person needs to eat healthy, and recommend the right healthy food at the right time, instead of recommending the next best McDonalds burger to an already obese person.
- Take into consideration, based on studies, what actual healthy behaviour looks like, and try to fit that to a person - with respect to their actual persona and state of health (which are obviously known already, given that every bit of you is pretty much on the street already).
I could go on, but in truth, a personal recommender that would know me better than even I do, and would optimise towards happiness and greatness instead of # of shit sold like it currently is, would already be a tremendous improvement. Surely this idea is also far from original, but I hope that we someday get to such a thing. At least a thing that stops recommending environmentally destructive behaviours like buying so much stuff I don't really need.
I know that there are categories, even auto-generated categories on YouTube, but they aren't that great or all encompassing. My biggest critique of present personalized ML content streams is that there's no two-way communication. I can't tell IG that yes I wanna see big booties right now, but no I don't want to see them five hours from now. Or I want to see aquaponics right now since I'm on a learning binge, but later tonight I don't need more technology and science, I just need to laugh and unwind.
Our actions, our content consumption, are not allowed to be one and done. Every move you make is seen as a signal to the algorithm that you want more of that all the time. The closest thing we have is incognito mode, or maintaining multiple accounts for different purposes. But why can't you look at an ex on instagram one time without them showing you that person every time you open the app for 5 years? It's really toxic and unsustainable that all of your actions are seen as 'yes, please more of this all the time'.
i once pitched this type of user experience (for the general case, not tiktok, which didn't exist at the time) at a hackathon competition, and it was popular enough then to win us some free stuff. the demo we hacked together was very rudimentary, but had we gotten a little further along with a proof of concept before falling apart, who knows what might have happened...
the biggest challenge was actually licensing costs and rights management, not the recommendation engine (though that was challenging too).
I suspected Facebook did the kind of work you referenced, but I had no idea they actually had - and had caught flak for it. I'd like to think I'm fairly tech-savvy and an informed consumer (especially around issues like this); maybe I'm not. But if I am, what hope do you have of the general population caring, when awareness is the first hurdle before the other massive hurdles of getting them to care enough to act?
The fact is that this type of work isn't regulated. Funding agencies regulate human-subject work for your typical publicly funded research and require a lot of informed consent. The leverage they use is that should you violate these terms, you may have current and future funding pulled, may be blacklisted across multiple agencies, and may find it difficult to pursue any future research career involving human subjects. There are lots of disincentives, all premised on that leverage over your resources.
Meanwhile, entities like Facebook have piles of such resources and have a vested interest in all sorts of this questionable work. Pretty much nothing prevents them from doing it since they're self-funding. Unless you can remove their self-fundability, it will continue. This goes back to the problem of needing consumers to vote with their wallet in an effective manner.
The other option is to create the explicit policy protections in law to regulate these activities but as a society, capital has convinced everyone that all regulation is evil and will only hinder consumer/citizen progress. You need not only policy but real enforcement teeth as well that create disincentives that are catastrophic enough not to tempt risk for the potential gains.
Perhaps the public consciousness around the unhealthy design of existing social media is approaching a point where a new Facebook-like social network could actually achieve a critical mass of users. On the order of years I think it'll happen (like rates of smoking cigarettes), but how long that will be who knows. I can only hope it comes in the next few years as I fear the destructive ripple effects through culture and society are only getting worse.
Whose? Individual or group wellbeing? Optimizing for one can lead to very suboptimal outcomes for the other.
As for the answer to your question, just go for best wellbeing for the individual. Happier, healthier, less obese, fitter, non-smoking people are also a net benefit to society.
Assuming the technology works at all.
Also other tech companies benefit, as FB is sucking up all the media/political/public energy and the rest can cruise the current wave unharmed.
I think it's for two reasons: 1) it's scarier to think that Big Media and Big Tech are getting along swimmingly despite a few minor conflicts and necessary kayfabe for the plebs, and 2) people who do horrible things for money in Big Tech want to feel better about themselves, and want people to think well of them (not like they think of people who worked at AIG, Arthur Andersen, or Countrywide.)
I don't need to go into the reasons why it's bad. For anyone who's aware of the last 15 years of social media critique or novels like Brave New World, it's obvious that this would be a dystopia multiplier. Ad tech on steroids. If you think the manipulation is bad now just wait until this data is fed into the ML models.
If you must have everything you see, do and say recorded then please, for the love of god, let's use non-profit, open source / free software platforms where we own our own data.
iMessage, WhatsApp, Gmail, Facebook Messenger, Instagram: all of these are stored on the server effectively unencrypted (in the case of iMessage and WhatsApp, via the chat backups, a backdoor to the e2e encryption they promise). The state has access to all of them at any time. To top it off the carriers and a million different apps are constantly logging your location history for all time, for every member of society.
If you think that isn't one of the most useful and powerful databases in the history of mankind, I question your imagination.
The massive damage, which may just be existential, is not apparent to the USA just yet, but it will be eventually.
The balance of power has shifted more dramatically than just about ever before, in any country in history. Alarm bells should be going off left and right, but nobody seems to care and it's just business as usual.
As for this specific issue, of course there are the usual massive issues around data privacy. The major issue I was alluding to here is ad tech AI processes being fed an unimaginably rich source of intimate data. It's a horrifying prospect.
FWIW, I think telephone company billing records have been doing this for nearly a century. Sure, it wasn't the official government policy, but the data was recorded. ECHELON has been around quite a long time as well, and that was indeed government monitoring of large amounts of international communications.
I'm not trying to normalize it, just adding some context.
There may well be earlier extant data. It was probably tabulated either on punch-card or magnetic tape, and certainly was used for billing purposes. Prior to the 1980s, large-scale data preservation was quite expensive.
If anyone has information of earlier comprehensive call history data surviving to the present, I'd appreciate being illuminated.
More citations of Hemisphere and call history data in an earlier comment: https://news.ycombinator.com/item?id=22208434
Facebook Messenger, WhatsApp chat backups, iMessage chat backups (incl. all geotagged photo attachments), etc.
Though metadata is often more useful than content. And the point is that telco providers, government agencies, and all those who've hacked into them, know of phone calls made from numbers you no longer remember you had.
FB effectively still controls the encryption keys.
Coincidentally nobody of the usual suspects complained that "the terrorists are going dark".
Putting these facts together, I'm assured the crypto can be broken at will.
If you honestly believe them then I've got a bridge you might wanna take a look at...
Does FB invest also in those?
Just imagine, all the possibilities!
[Do I need to mark this as sarcasm?]
I fear we're far past the point of no return, and there is no escape from the disaster we've created even for those of us who have avoided participating ourselves.
yes "Dear $diety please protect this machine and save this lowly process from the OOM killer" > /dev/null