Nvidia's DG-Net: Dress up people with different clothes/use as training data (github.com)
87 points by zhedong 6 days ago | 34 comments





So the purpose of this model is to be able to identify a person despite their changing of clothes?

Not a sarcastic question


Or, just as valuable to society: online shopping. ;}

Hilarious how pants in the source image turn the later shorts-wearing iterations into giant cankles.

Sadly, I don't have 16GB GPU memory...

I suspect NVIDIA strategically releases models that just barely don't fit into their gaming-grade GPU RAM sizes.

Clever market segmentation.


I suspect it’s convenience that has a side effect of being strategic. Research is always interested in what is only just now possible with the newest available hardware. The fact that it encourages you to pay for the most recent hardware motivates funding the research.

> NVIDIA strategically releases

You're giving their competence too much credit if you think they deliberately size models to just exceed consumer RAM.



GPU cluster? You might need to tweak the PyTorch code, but you can spread work across multiple GPUs very easily with PyTorch.
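For what "very easily" means in practice: PyTorch's `nn.DataParallel(model)` scatters each input batch across the visible GPUs and gathers the outputs. A minimal sketch of the split arithmetic (the chunk-style division below is an assumption modeled on `torch.chunk`, not PyTorch's exact code):

```python
import math

def chunk_sizes(batch_size, num_gpus):
    # torch.chunk-style split: each device gets ceil(n/k) samples,
    # and the last device takes whatever remains.
    per_device = math.ceil(batch_size / num_gpus)
    sizes = []
    remaining = batch_size
    while remaining > 0:
        take = min(per_device, remaining)
        sizes.append(take)
        remaining -= take
    return sizes

# A batch of 10 across 4 GPUs lands unevenly on the last device:
print(chunk_sizes(10, 4))  # [3, 3, 3, 1]
```

Note this only splits the *batch*; each GPU still holds a full copy of the model, so data parallelism alone doesn't help when the model itself exceeds one card's memory.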

Perhaps if GPUDirect [1] were available on consumer devices, it would be possible to DMA training/model data in from an SSD/Optane/hypothetical DRAM-based PCIe drive and scale up to bigger problems as a result.

Alas, not even AMD allows access to their GPUDirect equivalent, and I can't imagine anyone being able to drum up enough pressure for those vendors to flip the toggle to make them available on their consumer lines.

[1]: https://developer.nvidia.com/gpudirect


GPUDirect is one of the last differentiators between pro and consumer cards. The 2080 Ti brought tensor cores to consumer cards, so I'd bet they'll never enable GPUDirect there, since they need some differentiator left.

I know it's going to become an even bigger bottleneck moving forward, but it really raises the barrier for us hobbyists.

I own a 1080ti (12GB RAM) - and I consider this "high-end" for many people who aren't actively employed in machine learning (college kids and younger especially). I know you can "use the cloud" but I would really prefer not to...


Yea some state of the art results are just inaccessible without large budgets, simply because models can scale (and because some orgs have a lot of money to train those scaled models).

You can always just use smaller models and/or lower resolutions; the results won't be on par, but they may be qualitatively useful (for research and experimentation) or good enough (for personal applications). E.g. hobbyists don't need AlphaGo-level go-playing AI (which I'm sure had aggregate training costs in 5 figures or more); reduced versions all play far above our level -- although in this case there's the interesting effort of pooling hobbyist resources to indeed reach SOTA, see Leela Zero [1] and LCZero.

Some kinds of research are only feasible at large orgs; that's always been true. There was indeed a brief period, when people realized GPUs could unleash deep learning/CNNs, in which you could do anything with one good GPU, but that was very much an exception. To borrow from another field: you can't do certain kinds of car-engine research without the infrastructure to fabricate and test engine prototypes (though you can still do some theoretical analysis).

[1] http://zero.sjeng.org/home


There's some work on enabling larger models to run (slowly) via memory oversubscription. Maybe PyTorch will get this capability eventually.

https://developer.download.nvidia.com/video/gputechconf/gtc/...


The 1080ti is 11GB, not 12. The Titan is 12.

This would be a good test for AMD ROCm - their Radeon VII has 16GB of memory, supports PyTorch, and costs $700. Would the pretrained model load on that GPU?


Wow, yeah, looks like you'd need a Titan RTX, which is $2500. Crazy expensive.

Can you do the fp16 trick and use an 8GB card?

Edit: > GPU Memory >= 10G (for fp16)
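Rough arithmetic on why fp16 only gets you so far: halving bytes per parameter halves the *weight* memory, but activations, gradients, and optimizer state also occupy the card, which is consistent with the repo still requiring >= 10GB even for fp16. A back-of-the-envelope sketch (the 1-billion-parameter count is a hypothetical, not DG-Net's actual size):

```python
def param_memory_gb(num_params, bytes_per_param):
    """Memory for the weights alone, in GiB."""
    return num_params * bytes_per_param / 2**30

n = 1_000_000_000          # hypothetical parameter count
fp32 = param_memory_gb(n, 4)  # ~3.7 GiB for weights in fp32
fp16 = param_memory_gb(n, 2)  # ~1.9 GiB for weights in fp16
```

So fp16 buys a factor of two on the weights, but total training memory shrinks by less than that, since the other buffers don't all halve.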


Appreciate the plain-English title for the project.

The results are awful even in a tiny thumbnail image.

I believe the results are for augmenting training data, not for humans to look at.

At this point you might as well just add noise around the face, then.

Your standards are ridiculously high

[flagged]


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html


[flagged]


Please don't post ethnic or national slurs to HN, including ones cloaked in insinuation. It's not legit to cast shade on someone or their work for that reason alone.

https://news.ycombinator.com/newsguidelines.html


That's a "guns don't kill people" argument applied to facial-recognition tech. China is building mass surveillance with every tool available. That is not a secret or a national slur. It is a fact that they use mass surveillance against Muslim ethnic minorities to justify sending them off to re-education camps.

Shame on you. Authors of AI software have a responsibility to act ethically. Just because it might be used for something harmless doesn't justify something that will be applied in a harmful manner to many with certainty.


On HN, it's not ok to ethnically interpret someone's name, map it to their national or racial origin and cast aspersions on them. People can't do that here no matter how strong their political passions are, and we ban accounts that use HN for nationalistic or racial or ethnic battle.

Speaking of that, your account's submission history clearly shows an agenda of nationalistic battle. That is not what this site is for, so please don't do that, regardless of which nation you're for or against. Also, please keep the language of the online rage culture ("Shame on you") off HN, even when addressing moderators.

https://news.ycombinator.com/newsguidelines.html


Calling out China for mass surveillance is racist because everyone in China has the same ethnicity because China purges minorities using surveillance tech because calling out China for mass surveillance is racist...

You got me Dang. Impenetrable logic.


What I wrote seems clear, and this doesn't remotely resemble it.

Individuals are not "China" and deserve the benefit of the doubt regardless of their national origin. That seems like a platitude and I can't quite believe I need to repeat it here.


Zhedong Zheng works at the University of Technology Sydney; Xiaodong Yang works at NVIDIA Research, Santa Clara, CA. So you are in fact absolutely right that they work in countries, Australia and the USA, that as members of the Five Eyes are involved in illegal mass surveillance. But I'm sure you don't want to imply that anyone in these 2 countries working for any company is writing software solely with the intention of having it used for mass surveillance.

I don't agree with basing motivations on the appearance of people's names, but it's almost for surveillance by definition, no?

Are there non-surveillance use cases for identifying persons of interest across cameras?


Smart doorbells can automatically open doors. Although you end up with weird stuff like this.

https://geekologie.com/2018/09/nest-security-camera-mistakes...


I was just being cheeky.

But I'm sure it would be used that way no matter where the authors are from. It's unfortunate. Those kinds of groups don't often care for "minor details" like ownership, patents, copyright, and the like.





