The thing that surprises me about the facial recognition news is that I was expecting it to be the other way around: for Meta to outlaw it.
The reason for the rename is to separate Zuck and his new project from the toxic reputation that Facebook has. So the logical thing to do is to have Meta outlaw facial recognition, as a first indication that it's going to do business differently from how Facebook was operated.
Instead, this is really interesting. Zuck has 'moved' from Facebook to Meta and apparently plans to bring his complete lack of ethics along with him (or, to put it generously, his ignorant view of the world). Could the next evolution be that Facebook becomes a social media platform the company has decided is the past, and one they can therefore afford to run more ethically, more considered in their abuse of their users? Become the warm and fuzzy thing that some parts of Microsoft have become? Meanwhile Zuck moves to the "metaverse", which is him and a few others desperately trying to carry on the exploitation he succeeded with in the heyday of Facebook.
Or to put it another way - it would be genuinely surprising if Zuck has the self-awareness to realise that Facebook's brand can be rehabilitated but that his brand is what's poisoning it.
Works at Facebook, opinions are my own.
I find this article highly speculative. Facebook/Meta has not said anything about future uses, but that's true of a lot of things; the company hasn't said anything about not making weapons in the future, either.
In my opinion facial recognition technology has genuine uses, but from a privacy perspective I would want all of these models to run on my device. Google does it, and so can other companies, including Meta. I find that completely fine as long as it's relevant to me and I have the option to turn it off.
They seem to have firsthand sources to support the article's title:
> While Meta says that facial recognition isn’t a feature on Instagram and its Portal devices, the company’s new commitment doesn’t apply to its metaverse products, Meta spokesperson Jason Grosse told Recode.
Then, further down, the article links to other articles that go deeper:
> Facebook Vice President Andrew Bosworth told employees that the company is evaluating the legal and privacy issues around facial recognition for its upcoming wearable gadget.
Not sure why you think it's speculative when it's based on Meta/Facebook employees own statements.
By the way, thanks for leaving a comment here and being public about working at Facebook. It can't be easy to receive so much hate, especially here on HN, but we all need everyone's perspective, so thanks a lot for providing yours!
As a Facebook employee, you should know firsthand that your employer lies to its own workforce without flinching. The only thing I will trust Facebook on is its earnings reports.
The Facebook brand has become utterly toxic. Meta is a clean slate, but there is almost zero chance they will not load it up with toxins again. They have no incentives to be a good citizen.
Meta wants to capture your biometric data so that your facial expressions and body language can be mapped onto a 3D avatar in virtual reality. Is this really the future of the metaverse, or is it just what Mark Zuckerberg thinks the metaverse should look like?
I am a little skeptical of this vision. I imagine most of the "metaverse" will be experienced through flat screens, and text, voice, and video will remain dominant. VR will grow, but do we really need to map our physical expressions onto an avatar to play games or collaborate? Why go through all this complexity when we can just have controls that let our 3D avatars emote?
If you really want to convey facial expressions and a personable experience, just enable video.
> If you really want to convey facial expressions and a personable experience, just enable video.
I hate to take Meta's side, but enabling video is inviting discrimination and hate speech. Enabling audio alone has proven to be enough to bring out the worst in people online.
Voice mixers and avatars will result in a much more even experience for everyone involved. Emoting through an avatar will make that experience richer.
Enabling audio makes it easy to find a target based on their inherent characteristics. The bar is low to find, for example, a woman to harass in a videogame over voice comms (an exceedingly common problem).
FB, Reddit, et al. require the target to actually exhibit a belief or opinion to become a target. Ironically, it raises the bar for being an asshole to a specific person when you can't hear them.
I do not think the solution to hate speech and discrimination is to hide the traits that make us different.
If the future of the metaverse is on the blockchain, each digital identity will have a reputation to uphold. I think transparency in people's actions, a reputation system, and a simple moderation system are sufficient to manage toxic behavior.
> Its primary feature is to strongly tie someone's identity to their physical body.
I am just being skeptical here. Again, why can't I just have a control panel with a wide range of avatar emotes, the same way we currently have a wide range of emojis?
My physical facial expressions are limited. What if I wanted my jaw to literally drop on the floor? Or my heart to beat out of my chest?
Insert a "why not both" meme here. Having both available is very powerful, especially for non-power users who would rarely use the advanced list of emotes.
> I hate to take Meta's side, but enabling video is inviting discrimination and hate speech. Enabling audio alone has proven to be enough to bring out the worst in people online.
Virtual avatars already work in video conferencing apps, and any voice modulation software will work there as well. They're even easier to use, because tracking facial expressions and mapping them to a virtual avatar can happen with just a webcam; you don't need a complicated extra set of hardware.
I really just don't believe that privacy/anonymity/anti-discrimination is the reason Facebook is making any decision with VR. I base that mostly on the entire history of the company.
This is the company that has pushed harder than any other mainstream social network to force people online to use their real names. They've repeatedly had micro-targeting bugs in their ad system that allowed targeting people based on race, and there have been multiple instances of people having parts of their identity leaked to friends and family members through advertising. The whole facial recognition system they're now planning to get rid of would tag your real face and correlate it with your real identity online, outside of your control, via other people's accounts.

I know people online whose literal first experience with Facebook was getting doxed/outed as soon as they signed up, by automated systems that sent suggested friend requests from their alt accounts to other contacts based on their address book. This is not a system or a company that in any way cares about digital autonomy, anonymity, or user-controlled identity; it really never has. And I don't believe Mark when he suddenly starts claiming that the company has done a 180 on user autonomy.
I'm not anti-VR; I don't think it has the workplace value that Facebook is claiming, but it does have some benefits, and there is some potential for self-expression in VR that is genuinely exciting (see platforms like VRChat, though notably, Facebook is doing nothing in that space). But VR is definitely not a technology designed to decrease the risk of abuse. When you strap a VR headset on, you're putting yourself in a vulnerable situation, and Facebook has no controls I can see that try to mitigate the fundamental problem: they don't give users much control over avatars, and they don't have voice masks.
If they're claiming that the lack of video passthrough for faces is a privacy feature, that's an excuse; I don't believe it for a single second. None of the rest of their setup reflects a company that honestly cares about any of that stuff. You bring up audio as being a problem on its own -- Facebook isn't helping that problem; it's moving away from text communication and not enabling audio filters by default in VR. It's making that problem straightforwardly worse.
I'm not the least bit skeptical of this vision, but that's because I see direct use along Facebook's (Meta's) business model.
If you are selling advertising (influencing) access to a population through heightened metrics and more ability to target amenable customers, there are many useful things you can learn and correlate through things like delay before click-through, where the mouse pointer hovers, and so on. This is done on a massive scale and it's extremely revealing, allowing for deep and fascinating connections into the human subconscious.
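As a concrete (and entirely hypothetical) illustration of the kind of signals being described here, deriving "delay before click-through" and "hover dwell" from a raw event stream takes only a few lines; the event names and schema below are my assumptions, not Facebook's actual telemetry:

```python
from dataclasses import dataclass

@dataclass
class AdEvent:
    user_id: str
    ad_id: str
    event: str   # "impression", "hover", "unhover", or "click" (assumed vocabulary)
    t_ms: int    # client-side timestamp in milliseconds

def engagement_features(events: list[AdEvent]) -> dict:
    """Per-impression features: hesitation before clicking, and total hover dwell."""
    impression_t = click_t = hover_start = None
    dwell_ms = 0
    for e in sorted(events, key=lambda e: e.t_ms):
        if e.event == "impression":
            impression_t = e.t_ms
        elif e.event == "hover":
            hover_start = e.t_ms
        elif e.event == "unhover" and hover_start is not None:
            dwell_ms += e.t_ms - hover_start
            hover_start = None
        elif e.event == "click":
            click_t = e.t_ms
    delay = (click_t - impression_t) if (click_t is not None
                                         and impression_t is not None) else None
    return {"delay_before_click_ms": delay, "hover_dwell_ms": dwell_ms}

events = [
    AdEvent("u1", "ad9", "impression", 0),
    AdEvent("u1", "ad9", "hover", 800),
    AdEvent("u1", "ad9", "unhover", 2300),
    AdEvent("u1", "ad9", "click", 4100),
]
print(engagement_features(events))
# {'delay_before_click_ms': 4100, 'hover_dwell_ms': 1500}
```

Multiply features like these by billions of impressions, and the aggregate patterns are what make the targeting so revealing.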
All of that is NOTHING compared to what you'd get by tracking micro-expressions on the human face: largely involuntary signals, on a whole different scale.
This is like mapping the human genome, but for human behavior. It's absolutely revolutionary. Even with the most primitive recognition, the big data generated will be transformative.
The inevitable result, WHOEVER does this (if it isn't Facebook, it'll be somebody else; if it isn't the soulless commercial sphere, it'll be the government; or 'all of the above'), will be this: you can pay to manipulate entire populations to your whims. All you have to do is have a desired outcome, a plausible means of influencing people, and the ability to micro-target everyone who is susceptible to your message while skipping over anyone who might blow the whistle or raise a coordinated objection.
This… changes the conditions under which democracy is practiced, and under which anti-democracy can be consistently maintained. It's a technology that is easily wielded by just plain old humans that are no smarter than the 'masses' to be manipulated.
The really interesting question is what happens when AI or larger-than-human frameworks get involved and have this technology. If there is a Singularity, there will be no objections. If AI rules us, it'll be as the domesticated animals we essentially are, and there will be no call for tyranny and widespread violence: if anything, this suggests that violence and horror become artifacts of fallible human implementation of these things. The more sophisticated use of it will be bloodless, with any desired outcome 'willingly' adopted by the populations targeted.
> If you are selling advertising (influencing) access to a population through heightened metrics and more ability to target amenable customers, there are many useful things you can learn and correlate through things like delay before click-through, where the mouse pointer hovers, and so on. This is done on a massive scale and it's extremely revealing, allowing for deep and fascinating connections into the human subconscious.
Counterpoint: no it isn't, and no it doesn't. All of these models presume they can infer what has the user's primary focus, when in fact users are distracted by all sorts of things that are not connected to the computer in front of their face. This is all just the daily Sturm und Drang of selling to marketing departments, which is Facebook's real business model (lying to advertisers [1]).
> This… changes the conditions under which democracy is practiced, and under which anti-democracy can be consistently maintained. It's a technology that is easily wielded by just plain old humans that are no smarter than the 'masses' to be manipulated.
I'm all for the presumption of the worst when it comes to tech corporations, but some consideration of likely outcomes based on historical trends is prudent. The fact is the only corporations who have a vested interest in what you're saying here are military contractors, and they are content to charge the DoD, FAA, CIA, etc $10k per line of custom code to run on their Access 2000 databases and IBM AIX systems.
At the end of the day, there's more money in lying to the Ford/GM marketing departments than there is trying to organize a fake coup for Elon Musk in Bolivia. After all, if they were willing to pay for the lithium, they'd just pay for it. The whole point is trying to get it on the cheap. Car marketing departments, on the other hand? They'll pay, and pay a lot.
> do we really need to map our physical expressions onto an avatar to play games or collaborate
It's potentially great.
I use Horizon Workrooms to chat socially with friends I might otherwise FaceTime. When there's more than one person, having the spatial audio and the hand gestures is great. I don't mind not seeing my friends' real faces. Adding facial gestures that are real-time and natural would be huge, IMO, and a lot more immersive.
Would I use that with my grandma? No. Online friends? Yes.
"A calm technology will move easily from the periphery of our attention, to the center, and back."
It seems a big assumption that a technology occupying "the periphery of our attention" is calm. A lot of apps/sites are starting to do this 'dot where you have something you need to process', and although they are less obnoxious than notifications and popups, they are still a third party deciding that I need to do something, and worse, they hang around with their 'you have left something undone' vibe until you click on them. This is not calming.
Frankly, I think 'use as little of my attention as possible' should be the major principle, not 'be polite in how you dispose of my attention'.
Basically, it creates an AI model of your face and sends that to the other party once; then, when you move, it only sends key face points. The other party uses the AI model to render a virtual hi-res picture of you, thispersondoesnotexist-style, except it looks perfectly like you, and uses the face points to move it the way you move.
The result: it consumes almost no bandwidth and you get the experience of a hi-res video call.
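A back-of-the-envelope sketch of why that's true. The numbers here are assumptions (68 landmarks is the classic dlib facial-landmark count, not necessarily what NVIDIA uses): once the face model has been sent, each frame is just a handful of floats.

```python
import struct

NUM_KEYPOINTS = 68  # assumed landmark count (the classic dlib 68-point layout)
FPS = 30

def keypoint_payload(points: list[tuple[float, float]]) -> bytes:
    """Pack one frame's (x, y) landmarks as float32: the only per-frame
    data transmitted after the one-time face-model upload."""
    flat = [coord for point in points for coord in point]
    return struct.pack(f"<{len(flat)}f", *flat)

frame = [(0.5, 0.5)] * NUM_KEYPOINTS        # dummy landmarks
per_frame = len(keypoint_payload(frame))    # 68 * 2 * 4 = 544 bytes
kbps = per_frame * FPS * 8 / 1000
print(f"{per_frame} B/frame -> {kbps:.0f} kbit/s, before any further compression")
# ~131 kbit/s, versus roughly 1,000+ kbit/s for a typical 720p video call
```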
Of course, then I realized the trap: it's a perfect honey pot to get everyone to scan their face.
This is, IMO, what's going to happen with VR. Big companies like Facebook/Meta are going to use the online avatar as a motivator to let them scan everything that can identify you. It will make the virtual you so realistic! Scan your face, your cat, your whole flat!
People will jump on that without thinking about the consequences whatsoever. After all, they are crazy about the way services like Snapchat can change your face on a video.
It's already game over. Like Ken's victims in Fist of the North Star, we don't know it yet, but we are already owned.
The latest iPhones come with a LiDAR scanner built in: people literally paying for technology that can scan a room and report back to (who knows) anyone with access to the device's telemetry. In the future, apps like TikTok, Facebook, Meta, and whatever else can simply access the scanner, or the data stored on the device, and have access to a wealth of information on any user they want.
Facial recognition is just the surface of a festering violation of individual privacy. Once these companies manage to capture data, it retains value for years into the future, even if collection is shut down at some later point.
Indeed, our privacy rights have already been violated and owned... Privacy should be protected by law at the source of data collection -- THE DEVICE LEVEL -- not just at the app level: phones, cameras, ATMs, cars, etc. The devices and apps that we pay for and use (respectively) should not enable data collection that can be used against us in any way for extortion or manipulation, especially in incriminating circumstances. Current law provides some protections here, but courts are still not educated and active enough, and the laws are too weak in specificity to properly enforce those standards.
I obey the law, but even the simplest data gathered on us can become a means of manipulation or extortion that companies can wield covertly over time. Think about private/personal data being used in bank loans, job opportunities, court cases, and health insurance. Possibly the biggest crime is that we're eagerly lining up to pay for the devices, and sign up for the apps, that aid this corruption, at a higher overall cost than ever.
Another advantage of the NVIDIA system is that it can "fix" things about your face in the reconstruction. It can synthesize eye contact, cover for you when you're looking at your phone, or synthesize a little smirk at a bad joke even if you just rolled your eyes.
I already met someone in person whom I'd been talking to on Zoom for two years, and he's just fucking massive. I expected him to be a normal-sized guy, but he's like 6 foot 8, and the whole dynamic would have been very different if we had started in person.
This wouldn't work for a lot of people, especially visual ones like Deaf people. I have learnt to read cues in faces. This was useful to understand my children, for example.
> Of course, then I realized the trap: it's a perfect honey pot to get everyone to scan their face.
This isn't a massive database containing scanned faces. This is really just the next generation of compression. It was only logical that neural nets would be involved in the next gen of image/video compression.
Something they talk about a lot on the Vergecast (The Verge's weekly podcast) is that the killer app for AR is identifying people in real life for you. I think that's probably true for a lot of people who have many professional in-person interactions with many different people, which importantly includes journalists. They semi-jokingly say they'd forgive a product many shortcomings if it could reliably do that. And Facebook seems to be in by far the best position to have the data to build that product, as long as they hang on to their facial recognition data set somewhere.
> they talk about a lot on the Vergecast (weekly podcast from The Verge) is that the killer app for AR is identifying people in real life for you.
> I think that's probably true for a lot of people which importantly includes journalists.
I am TERRIBLE with faces. Like confuse-my-uncle-with-my-father bad, and I really don't see this feature being great. Like, will I really look for a notification over everyone's head telling me their name? Will I have to populate it from my contact book? Will contacts give me their photo?
It seems like this is their plan, but they will be forced to do it all locally. They are deleting all their face templates so that people can't accuse them of doing it remotely.
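If it really does all happen locally, the core of the feature could be as small as a nearest-neighbour lookup over embeddings that never leave the device. A minimal sketch, where the contact embeddings are faked with random vectors standing in for a real face-embedding model's output (nothing here is Meta's actual API):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, contacts, threshold=0.6):
    """Return the best-matching contact name, or None if nothing clears the threshold."""
    best_name, best_sim = None, threshold
    for name, emb in contacts.items():
        sim = cosine_sim(query, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Dummy 128-d embeddings standing in for a real model (e.g. something FaceNet-like).
rng = np.random.default_rng(0)
contacts = {"Alice": rng.normal(size=128), "Bob": rng.normal(size=128)}
query = contacts["Alice"] + rng.normal(scale=0.1, size=128)  # a noisy re-sighting
print(identify(query, contacts))  # -> Alice
```

The privacy question then reduces to whether the templates and the matching really do stay on-device, which is exactly the part we have to take on trust.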
I am curious: what will Meta do with shadow profiles? Does your shadow-meta just sit idly in the middle of some public space for eternity? Do you get vandalized/ridiculed? Can your shadow-meta be identified by others? Will it use the previous facial recognition data to guide how you look?
Have they actually admitted to having these shadow profiles? "What do you want us to do with these things you say we have, that we totally do not have, pinky swear?"
It probably is an unethical technology. Its primary feature is to strongly tie someone's identity to their physical body. This alone is not inherently unethical, but it opens the door for so many abuses that it is hard to consider it as ethical. These abuses come from the ability to use a centrally-controlled technology to systematically isolate and excommunicate people from society, no matter where they go.
It becomes even more unethical when you consider the secondary, less publicly acknowledged feature: to provide plausible deniability for pursuing false positives. When facial recognition has the legal weight to drive searches and arrests, the "fuzziness" of false positives becomes an asset to those performing searches and arrests.
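To make the false-positive point concrete, here is the standard base-rate arithmetic, with illustrative numbers I'm assuming rather than taking from the article: even a very accurate matcher scanning a large population for rare targets produces mostly false matches.

```python
population = 1_000_000  # faces scanned (assumed)
true_targets = 10       # people on the watchlist who actually pass by (assumed)
tpr = 0.99              # true-positive rate of the matcher
fpr = 0.001             # false-positive rate, i.e. 99.9% specificity (optimistic)

true_hits = true_targets * tpr                  # ~10
false_hits = (population - true_targets) * fpr  # ~1000
precision = true_hits / (true_hits + false_hits)
print(f"{false_hits:.0f} false matches per {true_hits:.0f} real ones; "
      f"only {precision:.1%} of flagged people are actual targets")
# ~1000 false matches per ~10 real ones: about 1% precision
```

That roughly 99% of flags point at innocent people is precisely the "fuzziness" that becomes an asset when a match is treated as grounds for a search.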
What makes facial recognition so dangerous is that it's so trivial and so cheap to deploy. You literally walk into a supermarket and tills have facial recognition built in to automatically bind your shopping to "you" even if you don't have an account or pay with cash. It's abused to hell basically and you will be facially recognized pretty much everywhere you go in public. It's definitely an "unethical" technology because of how it's generally used.
DNA testing isn't, because the technology is far too expensive and time consuming to be used this way. The second supermarkets start scanning my DNA to give me special offers, I'll complain about it too.
My point is, this sort of "well, it could be used this way!" argument isn't helpful at all. Retina/fingerprint scanning could be used this way too, but it isn't. That doesn't make the argument against facial recognition invalid, or make it any less unethical.
This analogy is somewhat valid, since facial and DNA data are mostly "public," given how often we show our face or incidentally shed some DNA (IIRC the police collected the golden state killer's DNA from a cup at a restaurant).
But it also relies on a shallow comparison – unlike a facial scan, DNA is not a passively collectible identifier. Collecting and processing DNA requires active, physical surveillance that downloading and faceprinting a photograph does not.
This physical limitation eliminates most practical risk of a global adversary exploiting DNA matching to build a dragnet surveillance regime. Meanwhile, facial recognition has already wrought such a state of affairs.
Incidentally, passive DNA surveillance does happen, but only in aggregate, like when sampling wastewater for toxins or other biomarkers. I'm not aware of any technology that separates the distinct DNA of individuals from mixed wastewater. If that were possible, then maybe DNA dragnet surveillance would be, too.
You can't scan someone's DNA from 20 feet away in tens of milliseconds with <$100 of hardware.
We're able to do facial recognition on microcontrollers costing less than $5. Commercial facial classification on that class of hardware is half a decade out.
In a sane world, there would be sane laws with stiff penalties (as in, jail). For instance, "any identifying data about individual users may only be used for authentication and shall be completely under the control of the user. 'Under control' means the user is allowed to delete it. Examples of activity not compatible with 'under control of the user' include training AI models. Illegal use of authentication data will represent a felony punishable by XYZ."
And even then, if there is jail, it's not beyond a corporation to appoint a lesser VP to serve the jail time and just "take care of" him when he gets out. The American agricultural products company you would think it is + their American bank + some British banks did so years ago when they got caught trying to offset the cost of R&D farmland in Africa by leasing it out to those warlord / death squad types (and their arms dealers) who hand machetes, cocaine, and AK47s to 10 year olds.
To me, philosophically, a tool by itself isn't inherently good or bad, but the way in which it is applied is. Unfortunately, people couldn't resist the temptation to do evil that large-scale, (relatively) rapid, programmatic identification of humans (faulty or otherwise) enables.
No, it's great to track and record every human around the clock, completely automated. That way, we make sure everyone stays in line and doesn't try to organize to get rid of the dictatorship.
No no, it's for their safety. We can now watch as you get mugged at gunpoint while not doing anything about it. See, you're safer now that we've facially recognized and identified you getting mugged.
What I'm trying to say is that they are in fact not backing away from facial recognition as the title suggests - unless they stop using it internally as well.
From what I read, they only talked about the photo-tagging feature, more or less - I didn't see any references to use in internal systems or research.
I might be mistaken, but unless explicitly mentioned I will not give them the benefit of the doubt - we are way past that.