
If they used another dataset with images of larger dimensions, could they generate larger and less blurry images?



A DCGAN as usually implemented (e.g. Soumith's Torch DCGAN implementation) can produce arbitrarily large images by upscaling. The quality won't be good, though, unsurprisingly, because it was only trained on images of 32px or less. This also means it's hard to evaluate DCGAN improvements, because you're stuck squinting at 32px thumbnails trying to guess whether one blur looks more semantically meaningful than another.
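
To make the "arbitrarily large by upscaling" point concrete, here's a minimal sketch of how output resolution falls out of the generator architecture. This assumes PyTorch rather than the original Lua Torch, and the layer widths are illustrative, not taken from any particular implementation: each fractionally-strided conv block doubles spatial resolution, so you get 32px or 128px output just by stacking more blocks.

    import torch
    import torch.nn as nn

    def make_generator(n_blocks=3, z_dim=100, base_ch=512):
        # Project the latent vector z to a 4x4 feature map, then double
        # the resolution with each block: output size is 4 * 2**n_blocks.
        layers = [
            nn.ConvTranspose2d(z_dim, base_ch, 4, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(base_ch),
            nn.ReLU(True),
        ]
        ch = base_ch
        for _ in range(n_blocks - 1):
            layers += [
                nn.ConvTranspose2d(ch, ch // 2, 4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(ch // 2),
                nn.ReLU(True),
            ]
            ch //= 2
        # Final block maps to 3-channel RGB in [-1, 1].
        layers += [nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1, bias=False),
                   nn.Tanh()]
        return nn.Sequential(*layers)

    g32 = make_generator(n_blocks=3)   # 4 -> 8 -> 16 -> 32
    g128 = make_generator(n_blocks=5)  # 4 -> 8 -> 16 -> 32 -> 64 -> 128
    z = torch.randn(1, 100, 1, 1)
    print(g32(z).shape)   # torch.Size([1, 3, 32, 32])
    print(g128(z).shape)  # torch.Size([1, 3, 128, 128])

Nothing stops you from setting n_blocks higher than the resolution you trained at; the point is that the extra pixels don't come with extra learned detail.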

One nice thing about this paper is that, as I've been suggesting for a while, they up the resolution to 128px for the ImageNet thumbnails, and if you look at those, it immediately pops out that while the DCGAN has in fact successfully learned to construct vaguely dog-like images, the global structure has issues.



