
I've seen a genius use of face analysis lately: NVIDIA's new video codec for chat.

https://developer.nvidia.com/ai-video-compression

Basically, it creates an AI model of your face, sends that to the other party, and then when you move, it only sends key face points. The other party uses the AI model to create a virtual hi-res picture of you, thispersondoesnotexist style, except it looks exactly like you. Then it uses the face points to animate it the way you move.
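
To make the idea concrete, here is a rough sketch of the data flow. Every function name below is a hypothetical placeholder of my own, not NVIDIA's actual API; the point is only how little data crosses the wire after the first frame:

    import numpy as np

    def detect_keypoints(frame):
        # Stand-in for a facial-landmark detector (e.g. 68 (x, y) points).
        return np.zeros((68, 2), dtype=np.float16)

    def reenact(reference, keypoints):
        # Stand-in for the generative model that animates the reference face.
        return reference  # a real model would synthesize the new pose/expression

    # Sender: one full reference frame up front, then only keypoints per frame.
    reference_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    payload = detect_keypoints(reference_frame)
    print("reference frame:", reference_frame.nbytes, "bytes (sent once)")
    print("keypoints:", payload.nbytes, "bytes (sent every frame)")

    # Receiver: reconstructs a hi-res frame from the reference + keypoints.
    frame_for_display = reenact(reference_frame, payload)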

The result: it consumes almost no bandwidth and you get the experience of a hi-res video call.
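
Rough numbers (my own illustrative assumptions, not NVIDIA's published figures) show why the bandwidth claim is plausible:

    fps = 30
    h264_bits_per_sec = 1.5e6                  # assumed typical 720p call bitrate
    keypoint_bits_per_frame = 68 * 2 * 16      # 68 landmarks, 2 coords, 16 bits each
    keypoint_bits_per_sec = keypoint_bits_per_frame * fps

    print(f"conventional stream: {h264_bits_per_sec / 1e3:.0f} kbit/s")
    print(f"keypoint stream:     {keypoint_bits_per_sec / 1e3:.1f} kbit/s")
    print(f"rough reduction:     {h264_bits_per_sec / keypoint_bits_per_sec:.0f}x")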

Of course, then I realized the trap: it's a perfect honey pot to get everyone to scan their face.

This is, IMO, what's going to happen with VR. Big companies like Facebook/Meta are going to use the online avatar as a motivator to let them scan everything that can identify you. It will make the virtual you so realistic! Scan your face, your cat, your whole flat!

People will jump on that without thinking about the consequences whatsoever. After all, they are already crazy about the way services like Snapchat can change your face in a video.

It's already game over. Like Kenshiro's victims in Fist of the North Star, we don't know it yet, but we are already owned.



The latest iPhones come with a LIDAR sensor built in. People are literally paying for technology that can scan a room and report back to anyone who has access to the device's telemetry. In the future, apps like TikTok, Facebook, Meta and whatever else can simply access the scanner, or the device's storage, and have a wealth of information on any user they want.

Facial recognition is just the surface of a festering violation of individual privacy. Once these companies manage to capture data, it retains value for years into the future, even if the collection is shut down at some later point.

Indeed, our privacy rights have already been violated and owned... Privacy should be protected by law at the source of data collection -- THE DEVICE LEVEL -- not just at the app level. Phones, cameras, ATMs, cars, etc. The devices and apps that we pay for and use (respectively) should not enable data collection that can be used against us in any way for extortion or manipulation, especially in incriminating circumstances. Current law provides some protection here, but courts are still not educated and active enough, and the laws are too unspecific to enforce those standards properly.

I obey the law, but even the simplest data gathered on us can become a means of manipulation or extortion that companies use covertly against us over time. Think about personal data being used to decide bank loans, job opportunities, court cases, and health insurance. Possibly the biggest crime is that we're eagerly lining up to pay for the devices and sign up for the apps that enable this, at a higher overall cost than ever.


Another advantage of the NVIDIA system is that it can "fix" things about your face in the reconstruction. It can synthesize eye contact, cover for you when you're looking at your phone, or synthesize a little smirk at a bad joke even if you just rolled your eyes.


Since iOS 14, FaceTime can make you look like you are looking at the camera instead of the screen below it.

Settings -> FaceTime -> Eye Contact


It's also going to normalize it.

Removing imperfections will become the norm, the way photoshopping magazine covers is, with all the social consequences that entails.

In fact, people will confuse what the machine thinks we look like with what we actually look like, and I'm bracing for the bias that will create.


I already met someone I’ve been talking to on Zoom for 2 years and he’s just fucking massive. I expected him to just be a normal size guy but he’s like 6 foot 8 and the whole dynamic would have been very different if we had started in person.


This wouldn't work for a lot of people, especially visual ones like Deaf people. I have learnt to read cues in faces. This was useful to understand my children, for example.

So not so genius after all I am afraid.


> Of course, then I realized the trap: it's a perfect honey pot to get everyone to scan their face.

This isn't a massive database containing scanned faces. This is really just the next generation of compression. It was only logical that neural nets would be involved in the next gen of image/video compression.


Think how many people already gave this data to Apple to unlock their phones.



