This tech is key for personalized advertising, where consumers are inserted into still and video advertising in place of current spokespersons, or side by side with celebrities. Advertising is about to get surreal, and fake-news consumers are about to be exploited to an unbelievable degree. "Deep fakes" for porn are kid stuff compared to what this tech opens up: Pandora's box, if you ask me.
I wonder what other technological breakthroughs are locked away behind patents.
What a boring dystopia.
Seriously though, if a company ever did this without my permission, I would sue the pants off of them.
There's currently a class-action lawsuit in progress against Facebook's use of facial tagging of Illinois residents: http://www.chicagotribune.com/business/ct-biz-facebook-taggi....
Regardless of how deep Facebook's pockets are, I could see another class-action lawsuit taking them on over recruiting its users into becoming uncompensated spokespeople in deepfake ads hawking products to their friends.
Also, the legal situation in other jurisdictions may be less friendly to Facebook's use of this technology, to the point where their deep pockets won't help them. I'm no international lawyer, but I think that's definitely a possibility.
If this becomes "a thing", I fully intend to use my UK citizenship and send GDPR boilerplate deletion requests to all the data brokers, social networks, and digital advertising services I can find.
Better hope it becomes a thing before March.
There's nothing wrong with buying a little time while society catches up with the technology.
* Automated injury detection. You've got a warehouse, you've got cameras, now you've got an instant alert when one of your workers appears to be injured but out of sight of other workers. You've got street cameras, now you've got automatic detection of someone having a heart attack and lying down on a sidewalk. (Dystopian application: "homeless person detected, deploying zap-drones") Hospitals and old folks' homes could use this, too.
* Lifeguard Assist programs - automatic detection of drowning-like behaviors. (Of course, over-reliance on this would be bad...)
* Children separated from parents might be easier to detect in places like malls, etc. (I'm going to stop listing obvious parenthetical dystopian applications)
I’ve been eyeing things like the Kinect and iPhone X face tracking for this kind of task (for a fun side project I’m working on), but it would be great if I could track at least position and pose of multiple actors in a scene using just a standard webcam or camcorder.
Technology is inherently usable both for good and evil. You get both by default. It takes active countermeasures - usually non-tech, like regulations - to limit evil applications without sacrificing the good ones. As a society, we do that to some extent, but unfortunately we're not as successful as I'd like (e.g. if it were up to me, I'd seriously curtail the advertising industry).
I don't think it's right to just shrug and say "everything can be used for good or bad". The details matter. If you're talking about a Yo-yo, yeah, you can hit people with it or just amuse yourself; nothing catastrophic is likely to happen. This tech though has greater implications.
If you take a bunch of photos involving people doing things and extract pose information, I'd imagine it would be helpful in figuring out what's going on in other situations that are otherwise dissimilar.
I'd go with limiting how competitors can use it as the main deciding factor.
It's a bit like allowing your scientists to publish their research, but only in prohibitively expensive and thus exceedingly niche journals.
> DensePose should automatically download the model from the URL specified by the --wts argument
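That auto-download behavior might look roughly like the sketch below: if the weights argument is a URL, fetch the file once and cache it locally. The function name, cache path, and logic are hypothetical illustrations, not DensePose's actual code.

```python
import os
import urllib.request

def resolve_weights(wts, cache_dir="/tmp/detectron-cache"):
    """If the weights argument is a URL, download the file once and
    cache it locally; otherwise treat it as a local path.
    (Hypothetical sketch, not DensePose's actual implementation.)"""
    if not wts.startswith(("http://", "https://")):
        return wts  # already a local file path
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, os.path.basename(wts))
    if not os.path.exists(local_path):
        urllib.request.urlretrieve(wts, local_path)
    return local_path
```

So a run with a URL for `--wts` only hits the network the first time; later runs reuse the cached file.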
The power comes when you combine the different databases and build a profile of the person. It's very similar to how advertising companies like Google build up profiles of customers (Gmail, search behavior, DNS name resolution tracking, cookie tracking), only in a different field.
It is probably even more powerful when you combine physical behavior with online behavior.
Yes, but if those individuals were actually more likely to commit crime, the AI would learn that anyway, leaving us with the question: if a specific demographic is considerably more likely to commit crime, and the AI picks up on it, is the AI 'racist'? Racism is a moral judgment; moreover, the intersectionalists would say it also requires the notion of 'power'.
This is not some novel issue, I think, and it will fast become a real ethical dilemma.
Sorry I don't have a link. Saw it on a business news program.
Now I wonder how much of that tech is already deployed...
What's crazy to think about is gait analysis from orbit.
* Generating avatars in Facebook's VR land from photos you're tagged in
* Recognizing a person IRL from photos they're tagged in
Basically, imagine the current Oculus Go headset, but with cameras on the front, and instead of showing you the actual world, it shows you a game based on the real world around you, morphed to look like Starship Troopers or something.
I wonder how much "your brain actually believes it's all real instead of merely perceiving it" matters... I know there have been a few times where I've gotten so sucked into a movie in a darkened theater that when it ends it's incredibly jarring to be brought back to the real world, and the times that's happened to me haven't even been 3D movies, let alone interactive like AR.
Each human pixel in the image is labeled with an index and two coordinates (u and v are the traditional names; think of them as 2D coordinates within a patch).
The index specifies which patch the pixel is on, and the u, v coordinates specify where on that patch the pixel lies. This is for a pre-specified set of patches that cover a human mesh. See https://github.com/facebookresearch/DensePose/blob/master/no... for more detail.
So, no, it does not extrapolate the full mesh, but for all human pixels you are getting a correspondence to the 3D surface.
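The per-pixel labeling described above can be sketched like this. The array layout, part count, and function name are illustrative assumptions, not DensePose's actual output format:

```python
# Hypothetical DensePose-style per-pixel output: each pixel gets a part
# index I (0 = background, 1..24 = a body patch) plus (u, v) coordinates
# locating it within that patch's chart on the human mesh.

def pixel_to_surface(i, u, v):
    """Map one labeled pixel to its (patch index, u, v) surface coordinate,
    or None for background pixels that are not on a person."""
    if i == 0:
        return None
    return (i, u, v)

# A tiny 2x2 "image": one human pixel on patch 3, the rest background.
I = [[0, 0], [0, 3]]
U = [[0.0, 0.0], [0.0, 0.25]]
V = [[0.0, 0.0], [0.0, 0.75]]

for row in range(2):
    for col in range(2):
        coord = pixel_to_surface(I[row][col], U[row][col], V[row][col])
        if coord is not None:
            print((row, col), "->", coord)  # (1, 1) -> (3, 0.25, 0.75)
```

Every human pixel thus maps to a fixed spot on the body surface, which is what lets you compare poses across otherwise unrelated images.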