Deep Painterly Harmonization (github.com)
340 points by isp 5 months ago | 41 comments



Congrats. This is very cool.

This is probably less a critique of your project and maybe more a celebration of cubism: the clock in the Picasso portrait is not "harmonized" in terms of the cubist motifs at play in the composition. The contours appear merely dilapidated. Maybe this passes for someone with little concern for painting or art history, but it stuck out like a sore thumb for me. Maybe this is a good study reference on the gap between pattern recognition and human perception. Maybe not. I'd be interested in other thoughts.


Plenty of them fail to stylistically harmonize even if they visually harmonize. Of course, in some cases it's debatable whether stylistic harmonization is even possible at the human semantic level. The bicyclist on the bridge in The Scream was quite bad, and the Eiffel Tower in the cityscape was noticeably degraded by the fact that the painting didn't have the colors the artist would probably have actually used, so it's rather bluish.

What's more impressive is that several of them work quite well, and barring anachronism, at the sizes given in the samples you might not know what was added if they didn't point it out. The park bench I found particularly impressive, for instance. I suspect if we increased the resolution it would break down, but at this size it works pretty well.


I was very impressed by the one with the large book added next to the person in the grass. The curvature of the pages changed to be more gentle.


Yeah good point about the resolution. So I wonder what effect lowering the resolution would have?

In general, it’s definitely impressive how well they work.


I've measured the "realism" of graphics in terms of resolution for many years now, because nowadays we generally have realistic graphics, as long as the resolution is small enough. (Compare with, say, an NES game, where no matter what you do to that image, nobody will ever find it realistic.) There are a lot of techniques/artists/etc. I've seen that look perfectly realistic at 320x200 (say), but couldn't sustain the illusion of realism at 1920x1080. This way I get a reasonable metric of whether or not graphics are "photorealistic", or in this case, whether they match the style of the painting.

It would be interesting to see a higher-resolution version, if they have one. It's possible it would indeed fit right in even so; now that I think about it, the microscale of these paintings is probably very stereotypical and the algorithm might be able to reproduce it well. It would be especially interesting to see if it would correctly reproduce brush strokes, which have a lot of context to them. Given that "deep learning" can do things like reproduce the structure of a TV script accurately, it doesn't seem out of the question.


I think the gap in this case is not as wide as you think, given that it is only looking at this one image. Had the network been trained on one art style in particular, it might be able to pick up on that detail, much as a person who had studied many paintings in that style would.


Way cool. I would personally mistake many of the results for a real painting if I didn't have the original in front of me.

Getting the colors right is half the problem, and I guess the histogram loss function I saw mentioned at the bottom of the page does that. A couple of the results had strange-looking color transformations, though: the little girl whose red shirt turns blue, even though the bed has some red in it, and the red rose that turns yellow even though there are orange flowers in the background. So it's not always choosing colors that are closest to the source while being available in the target's palette. Anyone know why the colors sometimes go so far off course?
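For the curious: the paper's histogram loss operates on network feature statistics, which I won't reproduce here, but the underlying idea of matching one distribution to another is easy to sketch in plain NumPy. This is just the classic per-channel histogram matching technique (an illustration of the concept, not the paper's actual loss):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` values so their empirical distribution
    matches that of `reference` (single channel, any shape)."""
    src = source.ravel()
    ref = reference.ravel()
    # Rank each source pixel, then map ranks onto the
    # reference's sorted values (an inverse-CDF lookup).
    src_ranks = np.argsort(np.argsort(src))
    ref_sorted = np.sort(ref)
    quantiles = src_ranks / max(len(src) - 1, 1)
    matched = np.interp(quantiles,
                        np.linspace(0, 1, len(ref_sorted)),
                        ref_sorted)
    return matched.reshape(source.shape)

# Toy example: a bright patch remapped toward a darker palette.
patch = np.array([[200., 210.], [220., 230.]])
palette = np.array([[10., 20.], [30., 40.]])
print(match_histogram(patch, palette))
```

Applied per channel, this forces the inserted object to reuse only values present in the painting's palette, which is presumably why colors sometimes jump to a "wrong" hue: the nearest available color in distribution terms isn't always the perceptually closest one.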

Also, this is one of the first style transfer papers I've seen with a pretty obvious built-in and seemingly plausible business idea. I'm sure poster stores in malls everywhere could sell versions of your favorite painting or poster with your own face, or something else of your choice, added to the composition. It's like the new version of the picture board painting with face cutouts, but way better.


Mr Bean as Mona Lisa or Einstein on the $10 bill are perfect examples of that.


With these examples it always feels like a very good use case would be automatic rotoscoping. A movie like "A Scanner Darkly" looks like it could've used these techniques, instead of having all the original footage redrawn manually.


That’s a better case for typical style transfer than this.


Looks like someone did just that!

https://www.youtube.com/watch?v=Rw0hZ_-tztk


Why would they need/want to add the disclaimer? ("This software is published for academic and non-commercial use only.") Out of fear of some patent violation? Or do they plan for commercial use (e.g. licensing to Photoshop) and this way they keep the license non-OSS (a.k.a. the "shared source" euphemism https://en.wikipedia.org/wiki/Shared_source)? Is it some kind of well-known legalese in academic circles?


Seeing as two of the authors are from Adobe Research, I'd imagine this IP will be used for a Photoshop filter in the future.


They use VGG-19 pretrained weights, which are derived from ImageNet, which consists mostly of copyrighted images. It is currently untested whether this is a legal problem or not.


Wow, ok, this is seriously insightful and mindblowing for me! I'd never have thought of such a danger vector, but with your hint I can now totally understand the reason for caution. However painful it feels. Thanks for the reply! :)


Maybe formally their institute owns all IP they produce and they can't make it available unless it's for academic use?


Maybe they simply don't have the rights to the whole source code, but were allowed to make it public if they added that disclaimer. Usually the university will have ownership of code produced in academia, and if they find a commercial product based on their research code, they most likely want some money from it.


Is the disclaimer clearly just saying commercial use is prohibited?


Great examples (after I realised they weren't all going to be McDonald's ads)


My favourite: adding a Star Destroyer to A Starry Night. ("What should we add to A Starry Night?" "What else begins with 'A Star'... aha!")


No kidding! After the first couple I started only looking at the final image, trying to guess the inserted object.


These are the same authors as this post from a year ago: https://news.ycombinator.com/item?id=13958366. Really interesting work they're doing.


If there were a prize for best examples, this would win first place!



Knowing that mankind's scientific record now contains a Gioconda with the face of Benedict Cumberbatch has very much brightened my day.


How long until it becomes cannibalized into a frivolous Snapchat selfie filter?


As opposed to these totally not frivolous examples?


The easy use cases come first.


Consider, for example, a picture of a street with a rendered car overlaying it. It would be interesting to see how well the technology bakes in the car so that it matches the environment. This would go in the direction of realistic rendering.


Anybody have any idea why gen_all.py is Python while filt_cnn_artifact.m is written in Matlab? The latter seems easy enough to write in Python as well.

Is there something about what this file does that is more easily accomplished in Matlab, or is it just two different people preferring different languages?
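For what it's worth, if the Matlab file really is just simple image filtering, a Python port would likely be only a handful of lines. A hypothetical sketch, assuming it boils down to a per-channel median filter (the function name mirrors the .m file, but the body is purely a guess at what it does):

```python
import numpy as np
from scipy.ndimage import median_filter

def filt_cnn_artifact(img, size=3):
    """Suppress isolated high-frequency CNN artifacts with a
    per-channel median filter. This is a guess at what the
    repo's filt_cnn_artifact.m does, not a faithful port."""
    return np.stack([median_filter(img[..., c], size=size)
                     for c in range(img.shape[-1])], axis=-1)

# Toy check: a single bright "artifact" pixel gets replaced
# by the median of its neighborhood.
noisy = np.random.rand(8, 8, 3)
noisy[4, 4, :] = 10.0
clean = filt_cnn_artifact(noisy)
print(clean[4, 4])  # spike pulled down toward its neighbors
```

Matlab's `medfilt2` maps almost one-to-one onto `scipy.ndimage.median_filter`, so if that's all the file does, language preference seems the more likely explanation.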


Does this contain pretrained weights? Is there a way to train it yourself (e.g. to allow a broader set of styles)?


This is really amazing. Very impressive. Would love to see this published on the web somewhere to try it out quickly without requiring folks to have the necessary components to build it.


It would be nice to have some more controls.

For example, often the color of the inserted object changes radically, which might not be what was intended.


The first example that's really abstract is awesome.

I'd love to see more things like this with Rothko, Pollock, etc.


Guess it could be used to create whole movies or video games in that painter's style.


I guess that would be more difficult, because there has to be coherence between subsequent frames (i.e., it has to avoid wildly changing random effects between frames).
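Video style transfer work typically handles this with a temporal consistency loss: penalize differences between the current stylized frame and the previous one warped into its coordinates via optical flow, masked where the flow is unreliable. A toy sketch of that term (illustrative only, not from this paper):

```python
import numpy as np

def temporal_loss(stylized_t, stylized_prev_warped, mask):
    """Mean squared difference between the current stylized frame
    and the previous stylized frame warped into this frame's
    coordinates. `mask` is 1 where the optical flow is reliable
    (no occlusion/disocclusion), 0 elsewhere."""
    diff = (stylized_t - stylized_prev_warped) ** 2
    return np.sum(mask * diff) / max(np.sum(mask), 1)

# Toy check: identical consecutive frames incur zero loss.
frame = np.random.rand(4, 4, 3)
mask = np.ones((4, 4, 1))
print(temporal_loss(frame, frame, mask))  # 0.0
```

Adding such a term to the optimization suppresses exactly the frame-to-frame flicker described above, at the cost of needing optical flow (and an occlusion mask) between every pair of frames.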


So I assume no CUDA (wrong graphics card) = no possibility to play with this tool? :/


You could use a server, I guess?


This is amazing! Wow, well done.


So has anyone implemented a web-interface for it yet?


wow



