Facebook is researching AI systems that see, hear, remember everything you do (theverge.com)
189 points by coldcode 42 days ago | 131 comments

I suggest everyone carefully read the replies of the Facebook employees in this thread.

These data are being released in the hope of furthering the development of these techniques - which ultimately benefits their future use by Facebook.

In turn, people like "pesenti" have pointed out several times that the data were collected in Facebook offices or other places where all participants consented. The implication is clear: while the present data are legal, they would not be if shot in the "wild".

In many jurisdictions, there is no legal use case for these methods. It is literally not possible to have a widely-used implementation of these glasses that would not violate the privacy rights of "bystanders".

Of course, Facebook will work hard to make it legal, on all fronts. Whether laws need to be changed or, as pointed out by pesenti, some innovation arises to make you at least an anonymous blob, separate from name and address. And if you do not have an account - as asked in this thread - what is there to worry about? Well, even if the association were only indirect, any data collected from you (with increasing sophistication) would become part of Facebook and its models. Even as part of a group of anonymous users, you would become less and less anonymous and ever more explainable.

If you are so far unconvinced that this matters, please also be aware that the only thing standing between businesses and the extraction of the entire surplus (or consumer welfare) is asymmetry of information. As soon as your needs and wants are sufficiently modeled, companies will use these data to maximize the profit from all their interactions with you. You will, in a sense, pay monopoly prices without facing monopolies.
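The surplus-extraction argument above can be spelled out with a toy model - first-degree price discrimination in miniature. All numbers here are made up for illustration:

```python
# Toy model of the surplus-extraction point (all numbers hypothetical).
# With one posted price, buyers keep some consumer surplus; with perfect
# knowledge of each buyer's willingness to pay, the seller keeps it all.
willingness_to_pay = [10.0, 7.0, 5.0, 3.0]  # hypothetical buyers
cost = 2.0                                   # seller's cost per unit

def profit_at(price: float) -> float:
    """Profit from a single posted price: only buyers with WTP >= price buy."""
    return sum(price - cost for w in willingness_to_pay if w >= price)

# Best uniform price (here, only the WTP values themselves can be optimal):
best_uniform = max(willingness_to_pay, key=profit_at)
uniform_profit = profit_at(best_uniform)
consumer_surplus = sum(w - best_uniform for w in willingness_to_pay if w >= best_uniform)

# Perfect information: charge each buyer exactly their willingness to pay.
# All surplus moves from consumers to the seller - "monopoly prices
# without facing monopolies".
discriminating_profit = sum(w - cost for w in willingness_to_pay)
```

With these numbers the best posted price is 7, leaving buyers some surplus; with full knowledge of each buyer, the seller's profit rises from 10 to 17 and consumer surplus drops to zero.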

Again, read the replies carefully. See how legal constraints are certainly something that is "to be solved in the future". However, whether you should ever be afforded privacy or anonymity in the face of facebook's algorithm is implicitly answered, with (frankly) a worrying amount of arrogance and dismissal: no, you and everything about you should no longer be any unexplained variation for facebook. One way or another, facebook plans to uncover you, and there is no negotiating this point.

You must now realize that Facebook is not interested in your name and birthdate; they are interested in being able to predict everything about you without these personal data. Facebook wants a model, one that is fundamentally opposed to your welfare. And this objective will be realized. Take heed of this, better now than later.

I’ve been hanging around HN for nearly a decade. This is the first comment I’ve felt compelled to mark as Favorite. Precision perception and behavior modification at scale is a dream of autocrats and marketers alike. The fact that it is private/anonymous is entirely irrelevant.

I don't think humans are smart enough to avoid this fate. The future will be exactly like the dystopian movies we are watching because that's what the megacorps want.

It's pretty simple to see, but a lot of people don't want to see it.

Regardless of who provides this, and even irrespective of marketing concerns, there most likely will be data leaks/theft. Imagine having audio+video recordings of what you see and what you say in the hands of the wrong people. And "the wrong people" could be a wide array of different parties, from banks to police to political enemies to nosy neighbors and beyond.

Granted, many of us already collect and give away significant amounts of very personal data, some of which regularly gets leaked or stolen; but first-person video and audio recordings?... scary.

And then with Facebook... I trust Zuckerberg with my life data as much as I trust his choice in hairstylists.

It's so strange how every time I get a Facebook link from someone, it requires me to log in to read more than 1/4 of it (and it's cut off at just the right place to entice), and then before I know it, I'm looking at a bunch of silly unread notifications and terribleness in my feed. It's strange, like realizing I was hypnotized for a second. I immediately log out and close the window as I snap out of it. Facebook is truly... a cancer. Do not give me the whole song and dance about how their targeted ads help small businesses. 99% of the stuff I've seen on FB, business-wise, is ads for shady products, scams or schemes.

They took a truly golden opportunity to connect people and ravaged it, then turned to acquiring every possible competitor they could, as their main product turned into hot garbage.

Even AOL garners more of my respect than FB, how is this even possible?! FB needs to die like AOL died. It'll only take the boomer generation to die out. Unfortunately, Instagram is still a decent property, probably because they didn't let Zuckerfuck fuck it up. Expect Instagram to become cancer too once FB is no longer the main profit center.

It just seems like the people at FB have watched Black Mirror and decided they want to do all the things they saw.

That "social credit" episode was really creepy.

or they became DARPA lifelog 2.0

No, thank you. But I suspect people would still accept and use that tech just like they have accepted always-online/always-listening digital assistants.

Just imagine the possibilities of large-scale manipulations, tho!

Do you use history in your shell?

What exactly is the comparison here?

Remembering my shell history for commands I run locally on a single computer, stored in a format I can control/edit, that is never transmitted to other people or used for advertising, that is fed into a simplistic history algorithm I have complete control over, and that I can toggle on and off at will even for individual commands -- that's the same as a 3rd party analyzing on remote servers everything I look at while I'm walking around the real world?

I do, but I don't upload it to any other parties.

Yes, and you can lead a command with a space for it to not be remembered in history. You can also delete it, and nobody else can access it without privileged access to your computer.

Only for the current session. When I close the terminal it is all gone
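For reference, this is how the mechanics mentioned above look in bash (a sketch; `ignorespace` is a common but not universal default, and other shells behave differently):

```shell
# bash history controls (sketch; defaults vary by distro and shell)
set -o history                  # enable history even in a non-interactive shell
export HISTCONTROL=ignorespace  # required for the leading-space trick

echo "recorded" > /dev/null     # saved in the in-memory history list
 echo "sensitive" > /dev/null   # leading space: never saved at all

unset HISTFILE                  # and nothing gets written to disk on exit
```

You can also prune entries after the fact with `history -d <number>`, which is the "delete it" option mentioned above.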

> Episodic memory: What happened when (e.g., “Where did I leave my keys?”)?

> Forecasting: What am I likely to do next (e.g., “Wait, you’ve already added salt to this recipe”)?

> Hand and object manipulation: What am I doing (e.g., “Teach me how to play the drums”)?

> Audio-visual diarization: Who said what when (e.g., “What was the main topic during class?”)?

> Social interaction: Who is interacting with whom (e.g., “Help me better hear the person talking to me at this noisy restaurant”)?

Now consider the fact that Facebook is actively suppressing some subjects from popping up in their users' feeds. If users become reliant on this AI to initiate, guide and execute most day-to-day tasks, then Facebook will literally have the power to make certain aspects of their users' lives disappear.

For example, you have a contact who criticizes Facebook or their business partners too much? Don't remind them of their birthday.

Anyone mad at the universities that actually collected this data? Or just Facebook?

I am mad at both. Universities should not participate in such unethical activities!

Just as a reminder, Cambridge Analytica also started at a university. And many politicians seek advice from academics who are directly or indirectly incentivized by Facebook - through grants and/or through access to data others don't have access to.

I support Facebook AI. The actual news is that we are releasing an extensive egocentric dataset with associated tasks for the research community. Collecting such a dataset in a private and responsible way is a significant challenge. Happy to answer any questions.


I support a free and open web. We are at a defining point in time, in which a possible tech dystopia is upon us. No paycheck or "advancements" will remove the dangers and damages done by the greed and machinations of a few too-big-to-fail companies. No dataset or "business" existence is a rationalization for creating a system of data-hoarding and data exploitation.

What are you all doing to allow those of us that don’t want to be a part of your utopia to opt out?

I encourage you to read the details of the news, in particular how the data was collected with full consent from all (something that most research datasets don’t do well). It doesn’t answer your question directly but gives you an idea on how we think about these issues. AI-powered wearables will have stringent privacy guarantees, both for the wearer and the people around.

HN user asks a simple and direct question to Facebook's VP of AI:

> What are you all doing to allow those of us that don’t want to be a part of your utopia to opt out?

Answer from Facebook VP of AI:

> I encourage you to read the details of the news that doesn't actually answer your question but gives you an idea of how we think

I'm glad to see that even on an industry-specific forum like Hacker News, we can't get straight answers from people like this. Giving people "an idea of how we think" seems like a great way to stay vague enough that your thinking can change however it needs to over time to fulfill your business objectives, health or safety of the community be damned. Facebook doesn't need to give us vague "ideas", it needs to provide straightforward answers to straightforward questions.

> AI-powered wearables will have stringent privacy guarantees, both for the wearer and the people around.

Reading a sentence like this from a Facebook exec completely stretches credulity for me. I have zero confidence that your definition of "stringent privacy guarantees" is anything close to something that actually benefits the community as a whole.

Please don't attack people like this when they post to HN, regardless of how you feel about their employer. You may not owe $BigCo better, but you owe this community better if you're participating in it. When something remains unanswered, it's enough to ask substantive questions respectfully.

We don't want HN to be a hostile place that tars-and-feathers people who show up to explain another side of a story—especially not on a topic they know about. For most of us, our work is what we know the most about. Therefore, people showing up to discuss something related to their work are among the highest-value contributors HN can have. We don't want to disincentivize that, and attacking them for it goes directly against the mandate of the site. You don't have to agree, obviously, but you do have to stick to the site guidelines. (Btw, that also means that you shouldn't be posting generic-indignant rants here.)



Edit: I just noticed that you got much nicer later in the thread. That's way better—thanks.

The question was about opting out of our "utopia", I wouldn't call that a "straightforward question". If you do have a straightforward question, especially about this or another specific project, I can try to provide a straightforward answer.

Great! How do I make sure that nothing about me ever passes through a Facebook server/ETL job/datalake/neural net/whatever? Can that even be guaranteed? Is Facebook generating a shadow profile on me or using data I've generated either myself or through friends that have taken pictures/videos/etc of me, and if maybe, how do I opt out?

Facebook just released Ray Ban smart glasses that don't look any different from standard Ray Bans and don't provide a solid design affordance to communicate to others when they're active, so I don't really get the feeling that Facebook cares at all about who is being swept up in their systems.

No, that’s not a reasonable assumption. Photos or videos of you are likely on Facebook’s servers, uploaded by friends or others when you are in public.

What I can assure you though is that if you are not a user, we are not recognizing you in these pictures and videos. We won’t do that without explicit consent from users.

Thanks Jerome, I really do appreciate your answer and I know I'm probably not the most fun person to talk to.

I do have a follow-up on this if you'll entertain me: why is it that Facebook stores content of people that it doesn't recognize?

Granted, I'm but a lowly web developer, but it seems like creating business logic that automatically removes content with people that aren't Facebook users would be pretty straightforward to implement. You've already solved the hardest part of that problem, the facial recognition, so why not go all the way?

Moving forward, you'd have a really simple approach to privacy that's transparent and people understand without needing to get into the weeds.

Receiving assurance that I'm not being recognized in photos and videos isn't very comforting when I see Facebook releasing products like the "smart" Ray Bans. Recognizing people in images is only one of many types of data that Facebook gleans from that content, and I don't want anything involving me being processed in any way by that company, whatsoever.

> Recognizing people in images is only one of many types of data that Facebook gleans from that content, and I don't want anything involving me being processed in any way by that company, whatsoever.

Which is a basic right under GDPR.

No private entity is allowed to store or process data about you for their purposes without your consent.

Did FB just say they are doing it, though?

Of course. FB openly ignores the GDPR and will be happy to pay a fine if the EU manages to ever show enough courage to levy one of significance.

Facebook trains its models on your data, in particular your pictures and (in the future) your video - your behavior, voice, face, etc. But fear not, your name will not flow directly into the algorithms (until you sign up for an account - and the rest of your data is already there!)

And as pesenti openly (and quite arrogantly) stated just one post up: there is no conceivable way for you to ever do anything about it, except if FB stops operations of this kind. Which they can't, of course, because it is the very basis of their business model.

I'm not a lawyer, but my understanding is that in the United States a person in a public place has no legal expectation of privacy. If you are in public and someone takes a picture in which your face is visible and then uploads it to Facebook, I wouldn't expect that to be a violation of your privacy, nor would I expect Facebook to allow an unknown person to request the picture's removal.

Like I said, this is a point where the EU and the United States differ.

In several countries of the EU, if not all, you have the right to decide whether anyone takes picture material of you, and how it is used. This, of course, includes video.

As I wrote elsewhere, there is no way for such glasses (that are uploading to facebook) to ever be used legally in these countries. Even if they merely feed an anonymous algorithm, they are still based on illegal actions. And what facebook likes about it is that it's the people wearing the glasses who break the law. And the small details about uploading and processing the data... well, who's gonna sue (successfully)?

So, the replies in this thread indicate that facebook will develop this technology, knowing full well it will profit from people breaking the law with it. They slightly cloud the issue here (open, data, yada yada) in hopes countries do not disallow such tech from the get go ... but only slightly. No, they are cocksure about this scheme working out.

In the US, of course, there's no issue in the first place.

I'm not from the US and also not a lawyer, but I've heard that this goes even to the point that someone using a public communications network has no legal expectation of privacy in the US.

The lack of data-privacy regulations is likely rooted in the fact that the US never had an experience like the EU had with the Nazis, who literally killed people on the basis of data.

Sure, the US famously "kills people based on metadata"¹, but not US people, so most US people don't care.

But I've also heard that some people in the US start to recognize the value of data-privacy regulations. That's positive to note.

¹ https://en.wikiquote.org/wiki/Michael_Hayden_(general)

What does "we are not recognizing you" mean? Of course you can't "recognize" someone you don't know, so what are you trying to say here?

Do you gather data about those "unknown" persons?

Do you try to match information about "unknown" persons from different sources?

> Do you gather data about those "unknown" persons

This part is easy to answer: if a user uploads a photo of you and there are other people in the photo or a recognizable location, then that's data about you. Even if it's just stored as a photo now, it'll take a neural network a millisecond to turn that into feature vectors that can be used to build a knowledge graph. This goes for Dropbox as well as Facebook.
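To make the feature-vector point concrete, here is a toy sketch of what "turning a photo into feature vectors" means. A random projection stands in for a real trained embedding network (whose features would be far more meaningful); all shapes and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained embedding network: any function mapping an image
# to a fixed-length feature vector. A random projection is enough to show
# the mechanics.
projection = rng.normal(size=(64 * 64, 128))

def embed(image: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale image to a unit-norm 128-d feature vector."""
    v = (image - image.mean()).reshape(-1) @ projection
    return v / np.linalg.norm(v)

photo_a = rng.random((64, 64))                   # some uploaded photo
photo_b = photo_a + 0.01 * rng.random((64, 64))  # near-duplicate of it
photo_c = rng.random((64, 64))                   # unrelated photo

# Cosine similarity of unit vectors is just a dot product. Near-duplicate
# content lands much closer than unrelated content, which is what makes
# linking photos across uploads cheap once embeddings exist.
sim_same = float(embed(photo_a) @ embed(photo_b))
sim_diff = float(embed(photo_a) @ embed(photo_c))
```

The point of the sketch: once images are stored, converting them to comparable vectors (and thus graph edges) is a single cheap matrix multiply per image.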

Thanks for answering questions. In regards to this. If I have a Facebook account, and then close it, is my existing data in Facebook's systems erased including image recognition / tracking based preferences, etc?

> "gives you an idea of how we think"

It's the usual trick to be able to say afterwards: "I didn't say that! You just made up some interpretation. I'm not responsible for your interpretation."

This reminds me of Squid Game, when they say “You signed a disclaimer of physical rights today, didn’t you?”

I mean, what do you expect? This is a public forum; he comes, honestly and unanonymously, in front of a huge crowd already in a very negative mood towards this. You can't be a VP of such a thing at one of the richest companies ever and not know how to play politics and lawyer safely.

pesenti - I respect that you come forward like this, attempting to push mankind's boundaries. Unfortunately, FB is probably the second-worst company ever (the first could be Palantir) to take this on. I don't trust them an attometer, and never will. Any attempt to change this attitude always ends badly; see e.g. Carmack.

Very mixed feelings is probably the best description I can give.

I expect that those who have power should bear the responsibility of being asked hard questions and answering them honestly, because a society which discourages us from asking difficult questions of those in power is doomed to fail.

>I encourage you to read the details of the news, in particular how the data was collected with full consent from all (something that most research datasets don’t do well).

I don't care to hear about the consent you received for this controlled study. I care to hear about how I/we can opt-out from something like this when it is eventually deployed in the wild en masse as part of an actual product. I'm pretty sure that's what OP was also asking about. Can you elaborate on that?

It's hard to elaborate on something that's not developed yet.

Here are the principles we are using when developing these products:


In practice, it will mean that some things you will know about, but don't necessarily give consent to (i.e., someone taking a picture or a video of you), while others will likely require consent (i.e., recognizing you in these pictures/videos).

> In practice, it will mean that some things you will know about, but don't necessarily give consent to (i.e., someone taking a picture or a video of you)

This would be outright illegal in the EU.

Even if someone is taking photos / videos of me, this person is not allowed to share those with third parties without my consent.

If this content is going straight to FB this wouldn't be legal in the first place.

Of course FB is fine with that, as they just shift the responsibility for the illegal actions onto the person who is uploading things without consent. (Same "trick" as with phone contact uploads.)

FB is using a legal loophole here. Nobody will sue his / her friends. And even if someone tried, it's after the fact anyway: FB can't be forced to "unlearn" the gathered information.

> Even if someone is taking photos / videos of me, this person is not allowed to share those with third parties without my consent.

Is that really true? Does that mean every tourist taking photos with an iCloud connected phone is violating some European law?

> In practice, it will mean that some things you will know about, but don't necessarily give consent to (i.e., someone taking a picture or a video of you)

This makes me and so many others upset. I hate that people can upload photos of me to social media without my consent. And with the new glasses, it’s disingenuous to say I’ll even know it’s happening. The little light on them is a bad solution.

> I support Facebook AI.

As VP, that doesn't surprise me. The problem is trust, and it is continuously eroding further and further the more we get to know how your company operates.

One of the features outlined in the article for this post was for the glasses to remember things that other people said. How would such other people who don't want to be recorded for these purposes opt out?

This is a static dataset. You'll want to ask that question about "Project Aria".

for example, can FB provide a list of recording locations and times, so I can request my image/audio removed?

How good is Facebook's anonymisation system? What % of faces can be removed (on average)?

Has every single piece of footage captured with Aria been anonymised?

From https://about.facebook.com/realitylabs/projectaria/

"Participants will only record in either Facebook offices (once they reopen), wearers’ private homes (with consent from all members of the household), or public spaces, and won’t record in private venues without written consent from such places. Before any information gathered in a public place is made available to our researchers, it will be automatically scrubbed to blur faces and vehicle license plates."

So we anonymize all the content collected in public. I don't have the stats for the face blurring algorithm, and while you can ask for your data to be removed, we don't provide locations/times. These are good suggestions though.

> So we anonymize all the content collected in public. I don't have the stats for the face blurring algorithm […]

I see some contradiction here.

You don't know "the stats for the face blurring algorithm" but you're saying you "anonymize all the content collected"?

If the stats don't say it's 100% (which is impossible AFAIK if done by machines), you obviously don't anonymize all the content collected upfront.
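To put rough numbers on that point - illustrative figures only, since Facebook hasn't published recall stats for the blurring model - even an excellent detector leaves many faces exposed at scale:

```python
# Illustrative arithmetic: anything short of 100% recall leaves a lot of
# faces unblurred once the corpus is large. Both numbers are assumptions.
assumed_recall = 0.99          # hypothetical: detector finds 99% of faces
faces_captured = 10_000_000    # hypothetical corpus size
faces_missed = faces_captured * (1 - assumed_recall)
print(round(faces_missed))     # 100000 faces left unblurred
```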

That’s a good question and I don’t think we have all the answers yet. But I expect that many functionalities will need to request and get consent from 3rd parties.

So, will I have to sign a consent agreement every time I talk to a d*** wearing these glasses? Is this the "irl" equivalent of the cookie consent banner in the web browser? Heaven forbid. I want to be positive about technological advancements but I'm frankly flabbergasted that anyone, including you, thinks this is a good idea for the world. And in my heart of hearts I have to think that your salary and prestige are coming before your integrity and common sense.

If you can't answer this particular question after you've already advertised that feature there are only two possibilities to explain this knowledge gap:

1. You don't think upfront about the consequences your products have for people's privacy.

2. You don't care about the consequences your products have for people's privacy, and leave the particular details for the lawyers of how such products could still be distributed legally.

Which of these explanations should we prefer?

Which feature are you talking about? This is a research project, not a product. The point of research projects is to investigate and figure out answers we don't have today.

> One of the features outlined in the article for this post was for the glasses to remember things that other people said.


So just to be sure: Is this feature one of the goals of this research, yes or no?

From the same post:

> How would such other people who don't want to be recorded for these purposes opt out?

To make the obvious very explicit: the previous question (which you just praised as a "good question") is about this feature. Wasn't this clear to you until now? I'm wondering. This is a simple conversation thread, not hard to follow.

>> It doesn’t answer your question directly

Then how about you try again and answer it directly this time?

What I mean is, once this product goes live, how can I opt out of being “incidentally” included in your data collection by everyone dumb enough to buy this product?

Do you think that most people believe Facebook privacy guarantees are worth the bits they're written in?

Facebook got a penalty, but the main gripe is probably that it shouldn’t have been necessary to give one in the first place. I think what is on everyone’s mind is: “how can we trust a company that has a bad track record?”

I think in the coming years FB should invest more in a public conversation on these issues. How can we meet the future in a way that our data is, in actuality, our data, even though it’s stored by a third party such as yourself?

Especially in light of the FB model, where the user seems to be the product, and advertisers the client.

My point in linking to the settlement wasn't about the fine, it was about the guarantees, oversight, and accountability that came with it. Read the FTC news release, and you'll see that these are pretty extensive.

And I agree completely on the public conversation, this is why we released this dataset and why we are doing project Aria.

Understood, and I reread the article just in case. My point is/was that it must come as a natural impulse to be privacy-sensitive. Privacy hopefully becomes part of the culture/DNA of Facebook, not just because the FTC and the privacy board are looking over your shoulders, so to say :)

How do we prevent becoming “walkable surveillance machines”? How can we control the scope and the issue of consent? These are more rhetorical questions, but hopefully it is part of the dialogue internally at FB.

These are great points. And yes, we'll only get there if it becomes part of the culture, which this is trying to communicate:


but I don't expect you to take it at face value yet, we have a long way to go.

That is a terrible example. Not only did Facebook's stock price increase after the FTC settlement,[1] which suggests that Facebook paid too small of a penalty for violating user privacy, but Facebook shareholders also filed a lawsuit last month alleging that the $5 billion payment in the settlement was structured to shield Zuckerberg from being deposed.[2] Had there not been this alleged quid pro quo, Facebook's penalty would have been even smaller.

Nobody should believe Facebook's privacy promises when the largest penalty it has ever received for profiting from user privacy violations over many years equates to one month's revenue. The incentives are simply not strong enough for Facebook to be a good actor in this area.

[1] https://www.theverge.com/2019/7/12/20692524/facebook-five-bi...

[2] https://www.politico.com/news/2021/09/21/facebook-paid-billi...

This comment needs a serious disclaimer that you are the VP of AI at Facebook. It's implied in the above, but such a serious COI requires acknowledgement.

I support Facebook shutting down this program.

Glad for you that you collected such a nice dataset.

Wouldn't it be nice if selected external researchers too could collect data on the Facebook platform with the explicit consent of users?

Here is a link to all the tools we provide: https://research.fb.com/data/

Maximizing a manipulation engine is not what we as a species need to be investing any time or effort into.

What is your price?

Who knows it with greater accuracy, you, or FB?

Wait, are such devices even legal? Looks like a walking spy-cam to me. I want to know if someone records video or takes pictures of me. With these glasses you can't tell if you are being filmed.

Yes, there is a "right to panorama" in most countries. However, this dataset has been recorded with the _express permission_ of all the people involved. Which is different from ImageNet, where they just crawled a whole bunch of images and called it a day.

Second, you need to take the "argh, it's Facebook, boo hiss" hat off. Then apply some critical thinking. An AR world that is connected to the physical one _requires_ this kind of thing. Your device needs to anticipate what you are going to do so that it can work out the probability of your need and act on it. Like Jeeves, but less able, and more annoying. As it's ML, it needs a massive dataset to train on; this is <0.5% of that dataset.

Depending on how things are done, if Facebook is first to market with a usable AR system, they will be forced to have anonymisation built in (as in, removing faces at the sensor level, unless you have permission to remember). Apple will have a showy "pixelation" layer that is mostly ineffectual, but will PR it out to make it seem like they've cured cancer. Google, if they ever manage to get back into AR, will just make things cheap and let the shitty Android marketplace spy on whatever it likes.

You will also have to remember that the power budget of these glasses is absolutely fucking tiny. All-day screen, SLAM, AI and possibly music will all need to fit into ~2-5 watt-hours. This means that virtually everything will need to be on-device. (Dropping to the cloud eats a boatload of power.)
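A quick back-of-the-envelope check on that power budget, assuming 16 hours counts as "all day" (the battery range is from the comment; the wear time is an assumption):

```python
# Average draw allowed by a 2-5 Wh battery over an assumed 16-hour day.
battery_wh = [2.0, 5.0]
hours_per_day = 16  # assumption for "all day"
avg_draw_mw = [wh / hours_per_day * 1000 for wh in battery_wh]
print(avg_draw_mw)  # [125.0, 312.5] milliwatts
```

A few hundred milliwatts total for screen, SLAM and inference is why on-device processing (rather than constant radio uplink, which can cost on the order of a watt) dominates the design.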

Now that's not to say that 100% accurate [if that's even possible] "diarisation" won't rip society apart.

There are lots of questions that need to be answered; the problem is, tech journalists are ill-equipped to ask them. Most of the teams designing the glasses are well-meaning, but their targets and life experience have not equipped them to do a good job at ethics.

Is it illegal to shoot videos in a public place without the permission of all present? Private places may have their own rules. Also, this is R&D; it's a long way before such questions matter.

In most western countries it is legal. I'm not sure it should be, given recent technological advances.

Nah, these questions matter now.

Don't you think answering these questions should guide the R&D?

And this is why you need ad blockers and privacy protection plugins for your web browser. If you can't stop Facebook snooping on you at least make it hard for them. As my grandfather taught me, "if you can't win then make it hard for the other guy to win."

I don't like this pattern in calls to action. The pattern starts with an implicit hopelessness of affecting any external change and results in all responsibility being assumed by the individual.

This is an antipattern of change. It does not need to be perpetuated, because it reproduces the permanence of the constructs we want to dismantle. Even more importantly, these entities are fully aware of this and craftily guide the masses toward individual responsibility as a default, knowing that it won't effect real change. The antidote to this is critical collective action, which recognizes that treating change as impossible is itself an impossible stance to take.

Change isn't caused by strength of belief; it isn't magic that only works based on how hard you believe in it.

Collective action doesn't do itself, and the words "collective action" don't constitute a plan. Individual evasions at least make things more expensive to do.

Also, individual action doesn't crowd out collective action. Not protecting yourself doesn't create collective action. A concrete plan with concrete individual actions to take creates collective action. Individual sacrifice to create institutions creates collective action. IMO you should be ready to tell people where to show up before you tell them not to help themselves.

Of course not. We both agree that organizing doesn't appear out of nowhere and that it takes real work to mobilize people. The call to action of installing ad blockers in this particular case makes people feel like they are taking real action, but it's the equivalent of "thoughts and prayers", and that doesn't work either.

My point is that "If you can't stop Facebook snooping" is a self-limiting assumption. Of course we can stop Facebook from snooping. Facebook's ability to snoop relies on social, political, legal, and economic arrangements, all of which are constructed by people and therefore aren't immutable constants. The first step toward changing that is realizing that all of these things are things we made and can therefore change.

Sometimes I dream of a Facebook machine, e.g. the eerily creepy recommender for your next best action, but with an objective function that doesn't optimise the recommender's revenue (by being as effective an ad seller as possible) but instead optimises for health, wellbeing, and happiness. Sure, defining these is far from trivial, but at least you can make a start somewhere:

- You could factor in the environmental cost of each recommendation and have it drive down the recommendation's score: I think the idiotic anti-environmental "fast fashion" trends would die out fast then.

- You could factor in, based on some overarching ethos, that each person needs to eat healthily, and recommend the right healthy food at the right time, instead of recommending the next McDonald's burger to an already obese person.

- You could take into consideration, based on studies, what actually healthy behaviour looks like, and try to fit that to a person, with respect to their actual persona and state of health (which are obviously known already, given that every bit of you is pretty much on the street already).

I could go on, but in truth, a personal recommender that knows me better than I know myself and optimises for happiness and greatness, instead of the amount of stuff sold as it currently does, would already be a tremendous improvement. This idea is far from original, but I hope we someday get such a thing. At least a thing that stops recommending environmentally destructive behaviours like buying so much stuff I don't really need.
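As a rough illustration, the objective function described above could be sketched like this. Everything here is invented for the sake of the example: the `Candidate` fields, the weights, and the factor names are hypothetical, not anything any platform actually uses.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    engagement: float  # predicted click/watch probability, 0..1
    health_fit: float  # fit with the user's health goals, 0..1
    env_cost: float    # estimated environmental cost, 0..1

def wellbeing_score(c: Candidate,
                    w_engage: float = 0.2,
                    w_health: float = 0.5,
                    w_env: float = 0.3) -> float:
    """Higher is better; environmental cost drives the score down."""
    return w_engage * c.engagement + w_health * c.health_fit - w_env * c.env_cost

def rank(candidates: list[Candidate]) -> list[Candidate]:
    # Re-rank by wellbeing instead of pure engagement.
    return sorted(candidates, key=wellbeing_score, reverse=True)
```

With these made-up weights, a highly engaging but unhealthy, high-footprint item can score below a less engaging but healthier one, which is the whole point of swapping the objective.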

I find it absolutely painful that our ML-based recommender content streams like YouTube, TikTok, and Instagram don't have multiple 'hats' you can put on and take off, in the sense of saying 'hey TikTok, I'm feeling frisky, feel free to send me those thirst trap videos now', but then 30 minutes later you can say 'OK TikTok, I know I watched a bunch of those dancing videos, but can you send me the educational stuff now'. Ditto IG, YouTube etc. Like 'hey YouTube, engage education mode', 'hey YouTube, engage self-improvement mode', 'hey YouTube, I just wanna relax and laugh'.

I know that there are categories, even auto-generated categories, on YouTube, but they aren't that great or all-encompassing. My biggest critique of present personalized ML content streams is that there's no two-way communication. I can't tell IG that yes, I want to see big booties right now, but no, I don't want to see them five hours from now. Or that I want to see aquaponics right now since I'm on a learning binge, but later tonight I don't need more technology and science; I just need to laugh and unwind.

Our actions, our content consumption, are not allowed to be one-and-done. Every move you make is seen as a signal to the algorithm that you want more of that, all the time. The closest thing we have is incognito mode, or maintaining multiple accounts for different purposes. But why can't you look at an ex on Instagram one time without it showing you that person every time you open the app for 5 years? It's really toxic and unsustainable that all of your actions are read as 'yes, please, more of this, all the time'.
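To make the 'hats' idea concrete, here is a minimal sketch: the user declares a mode, and the feed is filtered and re-ranked by per-mode category weights. The modes, categories, weights, and the `Video` type are all hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

# Per-mode weights over content categories; a weight of 0 hides the category.
MODE_WEIGHTS = {
    "education": {"science": 1.0, "diy": 0.8, "comedy": 0.1, "dance": 0.0},
    "unwind":    {"comedy": 1.0, "dance": 0.7, "science": 0.2, "diy": 0.1},
}

@dataclass
class Video:
    title: str
    category: str
    base_score: float  # the platform's usual relevance score

def feed(videos: list[Video], mode: str) -> list[Video]:
    """Re-rank the stream for the user's declared mode; drop zero-weight items."""
    weights = MODE_WEIGHTS[mode]
    scored = [(weights.get(v.category, 0.0) * v.base_score, v) for v in videos]
    return [v for s, v in sorted(scored, key=lambda p: p[0], reverse=True) if s > 0]
```

The key point is that the mode is explicit, per-session input from the user, rather than an inference from watch history, so switching hats takes effect immediately and leaves no lasting trace on the profile.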

> "...in the sense of saying 'hey TikTok, I'm feeling frisky, feel free to send me those thirst trap videos now'..."

I once pitched this type of user experience (for the general case, not TikTok, which didn't exist at the time) at a hackathon competition, and it was popular enough then to win us some free stuff. The demo we hacked together was very rudimentary, but had we gotten a little further along with a proof of concept before falling apart, who knows what might have happened...

The biggest challenge was actually licensing costs and rights management, not the recommendation engine (though that was challenging too).

Anyone remember StumbleUpon? You could easily edit your interests and get different types of content. It was the pinnacle.

You can use YouTube anonymously and also clear your history (which I frequently do). In my experience the algorithms are good when they know just a little about you and make good recommendations once in a while. Once you watch too much of something, it becomes a rabbit hole of the same stuff, but dumber.

FB has gotten roasted for doing emotional manipulation in the past: https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook.... I'm not confident that the company specifically trying to target emotions (even, ostensibly, happiness) would be received that well.

I do not believe Facebook as a company would be capable of creating such a thing. And if they could, I could only be cynical about it. I'd rather put my faith in the open source community, where people tinker with their own algorithms and provide their own data. I am sure the first initiatives toward personal recommenders are already on their way, but I don't know any. If anyone does, please recommend :)

I'm not sure it would matter. The general population that composes Facebook's revenue stream doesn't seem to care. We keep pointing at these ethical conundrums as if they invisibly regulate massive businesses when they simply don't. It's not just tech; it's big business in general. The ethics only matter if a large enough share of the consumer and product base both cares and acts. Only demand matters, and if demand doesn't care, or you can manipulate demand into not caring, you can get away with just about anything that isn't explicitly illegal.

I suspected Facebook does the kind of work you referenced, but I had no idea they actually did it and caught flak for it. I'd like to think I'm fairly tech savvy and an informed consumer (especially around issues like this); maybe I'm not, but if I am, what hope do you have of the general population caring when they aren't even aware, as a first hurdle, before the other massive hurdles of getting them to care enough to act?

The fact is that this type of work isn't regulated. Funding agencies regulate human-subject work for typical publicly funded research and require a lot of informed consent. The leverage they use is that should you violate these terms, you may have current and future funding pulled, may be blacklisted across multiple agencies, and may find it difficult to pursue any future research career involving human subjects. There are lots of disincentives against doing this, all enforced by controlling your resources.

Meanwhile, entities like Facebook have piles of such resources and a vested interest in all sorts of questionable work like this. Pretty much nothing prevents them from doing it, since they're self-funding. Unless you can remove their self-fundability, it will continue. This goes back to the problem of needing consumers to vote with their wallets in an effective manner.

The other option is to create explicit policy protections in law to regulate these activities, but as a society, capital has convinced everyone that all regulation is evil and will only hinder consumer/citizen progress. You need not only policy but real enforcement teeth that create disincentives catastrophic enough that the potential gains aren't worth the risk.

I certainly wish this were possible, and people who left would probably go back to the platform if it reformed itself in this direction. But as the years go on, all I see are more and more catastrophic impacts from short-term thinking and near-term profit generation, whether it's Facebook, climate, VC-funded startups or the current supply chain crisis.

Perhaps the public consciousness around the unhealthy design of existing social media is approaching a point where a new Facebook-like social network could actually achieve a critical mass of users. On the order of years I think it'll happen (like rates of smoking cigarettes), but how long that will be who knows. I can only hope it comes in the next few years as I fear the destructive ripple effects through culture and society are only getting worse.

> changing the objective function to optimise for health, wellbeing and happiness.

Whose? Individual or group wellbeing? Optimizing for one can lead to very suboptimal outcomes for the other.

This feels like the argument about whether the self-driving car should hit two grandmas or the guy in the car. The discussion is not productive, because the answer that saves the most people is to get self-driving cars on the streets as soon as possible.

As for the answer to your question, just go for best wellbeing for the individual. Happier, healthier, less obese, fitter, non-smoking people are also a net benefit to society.

Assuming the technology works at all.

Unfortunately I think we live in a society where the people who could use this the most are the people least able to afford the costs of such a service.

Apparently whoever paid for the most recent "Facebook is Bad" media hate package, only bought the 3 day ticket. Cheap bastards.

Facebook is consciously aware of the fact that they're optimizing a manipulation engine that has outsized net-negative effects on societies worldwide.

Who would benefit from that service? Maybe PR services companies? "See what is happening to Fb, we can help you avoid that Mr other dodgy Tech"

Who else?

Traditional Media. Politicians. Competing Social Networks.

Also other tech companies, as FB is sucking up all the media/political/public energy and the rest can cruise the current wave unharmed.

What social networks are you referring to?

Not worth asking. It's part of a bizarre theory that Big Media is not only a single cabal, but that cabal is constantly whiteboarding attacks on the tech companies that have bought large pieces of it.

I think it's for two reasons: 1) it's scarier to think that Big Media and Big Tech are getting along swimmingly despite a few minor conflicts and the necessary kayfabe for the plebs, and 2) people who do horrible things for money in Big Tech want to feel better about themselves, and want people to think well of them (not like they think of people who worked at AIG, Arthur Andersen, or Countrywide.)

If you use FB, unfollow everything and everyone, and then either selectively follow what you like or never follow anyone again: https://gist.github.com/renestalder/c5b77635bfbec8f94d28#gis... It will greatly reduce your time on FB.

When I read articles like this I experience a mini existential crisis that we could be heading to some very dark places technologically.

I don't need to go into the reasons why it's bad. For anyone who's aware of the last 15 years of social media critique or novels like Brave New World, it's obvious that this would be a dystopia multiplier. Ad tech on steroids. If you think the manipulation is bad now just wait until this data is fed into the ML models.

If you must have everything you see, do and say recorded then please, for the love of god, let's use non-profit, open source / free software platforms where we own our own data.

In one generation in the US we went, as a whole society, from most private p2p conversations not being logged, monitored, or recorded by the government, to most private p2p conversations not only being logged, monitored, and recorded, but available to the state at any time without probable cause or a warrant (FISA Amendments Act).

iMessage, WhatsApp, Gmail, Facebook Messenger, Instagram: all of these are stored on the server effectively unencrypted (in the case of iMessage and WhatsApp, via the chat backups, a backdoor to the e2e encryption they promise). The state has access to all of them at any time. To top it off the carriers and a million different apps are constantly logging your location history for all time, for every member of society.

If you think that isn't one of the most useful and powerful databases in the history of mankind, I question your imagination.

The massive damage, which may just be existential, is not apparent to the USA just yet, but it will be eventually.

The balance of power has shifted even more dramatically than just about ever before in history, in any country in history. Alarm bells should be going off left and right but nobody seems to care and it's just business as usual.

Agree. We desperately need something like Tim Berners-Lee's Solid: a way for us to take control from the centralised data silos owned by big tech. I know it's almost mission impossible to turn the tide, but if the alternative is giving in, then engineers worried about this must fight for alternatives, however grim the odds.

As for this specific issue, of course there are the usual massive issues around data privacy. The major issue I was alluding to here is ad tech AI processes being fed an unimaginably rich source of intimate data. It's a horrifying prospect.

> most private p2p conversations not being logged and monitored

FWIW, I think telephone company billing records have been doing this for nearly a century. Sure, it wasn't the official government policy, but the data was recorded. ECHELON has been around quite a long time as well, and that was indeed government monitoring of large amounts of international communications.

I'm not trying to normalize it, just adding some context.

I'm ... "only" ... aware of US call history data being available dating to the 1980s, through the "Hemisphere" programme:


There may well be earlier extant data. It was probably tabulated either on punch-card or magnetic tape, and certainly was used for billing purposes. Prior to the 1980s, large-scale data preservation was quite expensive.

If anyone has information of earlier comprehensive call history data surviving to the present, I'd appreciate being illuminated.

More citations of Hemisphere and call history data in an earlier comment: https://news.ycombinator.com/item?id=22208434

I meant content, not metadata.

Facebook Messenger, WhatsApp chat backups, iMessage chat backups (incl all geotagged photo attachments), et c.


Though metadata is often more useful than content. And the point is that telco providers, government agencies, and all those who've hacked into them, know of phone calls made from numbers you no longer remember you had.

WhatsApp just fixed that by encrypting its backups too. https://faq.whatsapp.com/general/chats/how-to-turn-on-and-tu...

The key handling is (as always with this big corp "crypto") murky to say the least.

FB effectively still controls the encryption keys.

Coincidentally, none of the usual suspects complained that "the terrorists are going dark".

Putting these facts together, I'm assured the crypto can be broken at will.

It's Facebook! How can you trust them?

If you honestly believe them then I've got a bridge you might wanna take a look at...

Is it opt-in? If so that means ~0% of people will be using it, so all your chats will still be backed up non-e2e by the other side of every conversation.

AFAIK the chat backup is optional. I've never stored a backup.

Everyone you chat with does (because it's on by default), making your choice irrelevant.

In addition to being a "dystopia multiplier", I predict this is going to cause a lot of social problems going forward. This is what the Black Mirror episode "The Entire History of You" warned us about.

Social media have become Bradbury’s parlour walls.

I'm already looking forward to brain-internet interfaces.

Does FB invest also in those?

Just imagine, all the possibilities!

[Do I need to mark this as sarcasm?]

As a society, we've been sleepwalking into a very dark place technologically for well over a decade now, and those of us who have tried to point that out have been routinely shouted down and called luddites for it.

I fear we're far past the point of no return, and there is no escape from the disaster we've created even for those of us who have avoided participating ourselves.

There is no way that this will ever be legal in the EU.

All I need is to see people’s names right above their heads.

It should only cost you half of your remaining life span.

WW3 might be against Facebook.

This reminds me of the Ted Chiang story The Truth of Fact, the Truth of Feeling.

... and don't forgive, unless you buy the "premium" version.

I hope it remembers why I deactivated my FB account.

So you are saying that you hope it remembers why you went to https://www.facebook.com/deactivate where you deactivated your account without deleting your profile?

When you deactivate your FB account they keep all your data. I hope you know that. Deleting your account should delete your data, though. (But I don't 100% trust that they do. And maybe some national agencies keep a copy.)

You will be replaced internally by the same AI impersonating you. To keep the ads running.

I wish you so many upvotes for this comment.

upvotes: thoughts and prayers of the internet

Maybe Facebook can create an AI that also sends thoughts and prayers to whoever needs it. It's also probably easier because most people remember why they need the thoughts and prayers.

Only God can read the inputs to /dev/null

  yes "Dear $deity please protect this machine and save this lowly process from the OOM killer" > /dev/null

You don't need Facebook for that.

Maybe, but they make people feel a little better and cause no lasting harm in small doses, so why not?


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact