A Neural Algorithm of Artistic Style (github.com)
95 points by Russell91 on Sept 2, 2015 | 17 comments



The HN discussion of the mentioned paper has a link to a video of the authors:

https://news.ycombinator.com/item?id=10141516

https://www.youtube.com/watch?v=-R9bJGNHltQ&list=PLujxSBD-JX...

It's nice to see an(other) implementation of this paper. I looked through the paper's references and didn't find any links to source code.


Unfortunately, AG Bethge has not released the original code (and most likely will not), so this is very nice to have! The results from karpathy's implementation definitely feel a little "off", but it is very good nonetheless. Also see Kai Sheng Tai's implementation: https://github.com/kaishengtai/neuralart

It seems like the original implementation still has a few undocumented tricks up its sleeve for improving accuracy that have yet to be figured out.
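
For reference, the style representation in the paper is built from Gram matrices of conv-layer activations, and the under-documented details (layer choice, normalization, per-layer weights) are exactly where implementations tend to diverge. A rough numpy sketch of those pieces, not taken from any of the linked repos:

    import numpy as np

    def gram_matrix(features):
        # features: activations of one conv layer, shaped (channels, height, width)
        C, H, W = features.shape
        F = features.reshape(C, H * W)
        # Channel-to-channel correlations; the normalization constant varies
        # between implementations and visibly changes the output.
        return F.dot(F.T) / (C * H * W)

    def style_loss(generated_feats, style_feats):
        # Squared Frobenius distance between the Gram matrices at one layer;
        # the total style loss sums this over several layers.
        G = gram_matrix(generated_feats)
        A = gram_matrix(style_feats)
        return np.sum((G - A) ** 2)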


Whenever I see code for neural networks in Caffe/Torch/Theano, it bothers me a lot that it's not easy to get them up and running on Windows. I can't believe MS is missing this boat. This field is exploding right now and the only one that seems to be aligned with it is NVIDIA. MS has sponsored development of Node.js for Windows before; I'm hoping they will do something similar for these frameworks soon.


Intel is playing catch-up with Xeon Phi and OpenCL through open source projects like https://01.org/beignet and https://github.com/01org/idlf .

It's not as fast as cuDNN, nor is it integrated into the major deep learning libraries, but at least we can expect some competition in the coming years.


That doesn't sound like a worthwhile approach to me.

Xeons are server CPUs, so whoever bothers buying them with scientific computing in mind may as well go all the way and buy nVidia GPUs instead.

So instead of making that framework available to all Haswell and newer Intel families and trying to dissuade customers from buying nVidia GPUs, they are cutting themselves short.


The Xeon Phi is not a Xeon. It's a co-processor. https://www-ssl.intel.com/content/www/us/en/processors/xeon/... The main advantage is that you can run almost-normal x86 code on it. Each core gets its own cache, so it's not really the same as programming for GPUs.


Theano should work on Windows. And the community is very friendly, so in case of install trouble, just ask them. Or better: help make it simpler (if it is not a fundamental Windows issue, like a missing package manager or so).

Of course, many Theano-based scripts you find out there are probably only tested in one very specific setup and may assume a Unix-like environment. But that is something you cannot really solve, other than contributing to the script and fixing it.

About the related Nvidia CUDA discussion: OpenCL support for Theano is in the works, though I'm not sure how far along it is.
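
In the meantime, a quick way to check which backend a Theano install actually picked up (CPU vs. CUDA today, OpenCL once that support lands) is to print its configuration; this is a generic snippet assuming only a standard Theano install:

    # Prints the device ("cpu", "gpu0", ...) and default float type Theano is configured with.
    import theano
    print(theano.config.device)
    print(theano.config.floatX)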


NVidia could do a lot better. Getting cuDNN working on OSX (or Linux) isn't exactly a trivial task.


They should start by making the library downloadable without a registration form, so that people could provide scripted installations for it.

The registration form is the reason it's not included in the dl-machine Amazon EC2 image for instance: https://github.com/deeplearningparis/dl-machine


This is really cool. I especially love what it came up with when it combined the M.C. Escher "Hand with Reflecting Sphere" print with the Golden Gate Bridge.


Would it be possible to run the iterations in parallel with the GPU and CPU for the same input to improve time efficiency?


Not easily.


Would it be possible to make a Python implementation of this, or is it dependent on a unique data structure?


The "Scream" version of the Golden Gate bridge looks amazing.


The Scream versions are the only ones I find particularly interesting. All the others seem to fall victim to prominent cross-hatch patterns.


I don't know - the second Picasso/Brad Pitt would make for a good Instagram filter.

I think if there were some level of semantic tagging and weighting of different aspects of an artist's technique - identifying sky, water, buildings, faces, plants, etc. (none of which is particularly beyond the capability of current image classifiers) - it might produce better results. I could easily imagine this turning into a 'Rembrandtize Me' selfie-filtering app.

Then it's really only a matter of time before the extension of these techniques moves to 'make my vacation photos look like they were taken by Ansel Adams', then 'show me Star Wars as if Alfred Hitchcock had directed it', or 'play me Smells Like Teen Spirit as if it had been sung by Elvis'. Neural Remixing.


Author here. The examples with the Golden Gate Bridge were generated using an earlier version of the code; later additions (L-BFGS vs SGD, TV regularization) tend to clean up the cross-hatch pattern quite a bit.
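
For anyone wondering what the TV regularization does: it penalizes differences between neighbouring pixels, which suppresses exactly the kind of high-frequency cross-hatch noise discussed above. A minimal numpy sketch (the exact formulation and weighting in the repo may differ; the weight here is just a placeholder):

    import numpy as np

    def tv_loss(img, weight=1e-4):
        # img: (height, width, channels) array of pixel values.
        dx = img[:, 1:, :] - img[:, :-1, :]   # horizontal neighbour differences
        dy = img[1:, :, :] - img[:-1, :, :]   # vertical neighbour differences
        return weight * (np.sum(dx ** 2) + np.sum(dy ** 2))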



