This seems to basically just be the mmWave airport body scanner tech scaled up to cover a whole room, and then using AI to make it look like a human instead of a simple 3D model. I'm sure there was a ton of complexity involved in all of that, but am I missing anything else here?
Doesn't the GAN training mean that the resulting image will look like the people they trained on? You could probably have a dog walk around and you'd still "see" a human in the synthesized image.
The idea of this being used for, say, security would carry obvious, dangerous bias. "Hey, every robber is a middle-aged black man."
>Doesn't the GAN training mean that the resulting image will look like the people they trained on?
FaceGAN-style networks generate faces that don't look like anyone in the training set. In this case it all depends on how varied their training data was.
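The distribution-bias worry can be shown with a toy sketch (not the paper's actual model): pretend the "generator" is anything that maps a sensor signal onto the manifold of its training data, here crudely approximated by nearest-neighbor projection onto a cluster of "human" feature vectors. Any out-of-distribution input (the dog) still comes out looking like a training example. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator trained only on humans:
# the "training set" is a tight cluster of 2-D feature vectors.
train = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(100, 2))

def generate(signal):
    """Project an arbitrary sensor signal onto the training
    manifold -- here, simply the nearest training sample."""
    dists = np.linalg.norm(train - signal, axis=1)
    return train[np.argmin(dists)]

# An out-of-distribution input (a "dog" radar signature)
# still maps to a point inside the "human" cluster.
dog = np.array([-3.0, 2.0])
out = generate(dog)
print(np.linalg.norm(out - [1.0, 1.0]))  # small: output lies in the human cluster
```

A real GAN interpolates rather than snapping to stored samples, but the effect is the same: it can only emit things that resemble its training distribution, which is exactly why training-set diversity matters.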
The next logical step would be to create 3D models via interferometric triangulation. My question is: are these developments better or worse for privacy in physical spaces than optical/infrared recognition?