
Automatic Image Colorization with Simultaneous Classification - timlod
http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/en/
======
argonaut
For context, a different colorization model with about the same results:
[http://richzhang.github.io/colorization/](http://richzhang.github.io/colorization/)

Another model previously posted on HN, with (IMO) worse results than these two
models: [http://tinyclouds.org/colorize/](http://tinyclouds.org/colorize/)

------
neom
This is amazing to me. I majored in Digital Imaging Technology in 2005, and I
remember doing this by hand in Photoshop, wondering if one day there would be
a button for it.

~~~
BunnyRubenstein
Same with vectorizing rough pencil sketches:
[https://news.ycombinator.com/item?id=11583368](https://news.ycombinator.com/item?id=11583368)

I remember doing web animation back in 2000, and having to ink each penciled
frame by hand, vectorize the inked drawing, and then color it in Flash. Now
you can train a CNN to vectorize pencil drawings. You could probably train a
CNN to color everything, too!

While I can see a lot of animation tasks being eliminated, I would like to
imagine that studios would hold onto some artists to spend some of that saved
time embellishing and strengthening the quality of the final work.

But most of the heavy lifting will now be done with the press of a button.
It's like Clarke said, "Any sufficiently advanced technology is
indistinguishable from magic."

Magical times.

~~~
mintplant
As with most deep learning papers, though, the results aren't independently
reproducible. The dataset is private, the source is closed, and it hasn't been
turned into a product. The magic inking button is still a ways off yet.

~~~
joshuak
Are the source and dataset not open? Isn't this the source:

[https://github.com/satoshiiizuka/siggraph2016_colorization](https://github.com/satoshiiizuka/siggraph2016_colorization)

And is this not the dataset:

[http://places.csail.mit.edu](http://places.csail.mit.edu)

I'm really asking, because I've downloaded the project and skimmed the paper,
but haven't had time to vet these assumptions. On the face of it, it seems
everything is provided, but you've vetted it further and learned that isn't
true?

~~~
mintplant
I'm talking about the auto-inker project that BunnyRubenstein linked, not the
auto-colorization project from the parent submission.

------
babo
I tried it with some historical photos and my own B&W images. It misses the
global image prior for most images except vegetation, and it has a hard time
even with people. I've seen similar problems with local features; my guess is
that they trained it on too small a dataset and the spectacular samples come
from overfitting. While the idea looks promising, the current implementation
is far from general.

------
AdamJacobMuller
A small sampling of how this performs on some B&W images I had lying around
(all my family)....

My Grandfather & Brothers:
[http://adam.gs/v/IMG_0090.jpg](http://adam.gs/v/IMG_0090.jpg)
[http://adam.gs/v/IMG_0090.color.jpg](http://adam.gs/v/IMG_0090.color.jpg)

My Grandfather, My mother and my Aunt
[http://adam.gs/v/IMG_4629.jpg](http://adam.gs/v/IMG_4629.jpg)
[http://adam.gs/v/IMG_4629.color.jpg](http://adam.gs/v/IMG_4629.color.jpg)

My grandfather and my grandmother:
[http://adam.gs/v/IMG_6868.jpg](http://adam.gs/v/IMG_6868.jpg)
[http://adam.gs/v/IMG_6868.color.jpg](http://adam.gs/v/IMG_6868.color.jpg)

From my perspective, these are decent results considering what it has to work
with; I think it did a very good job.

~~~
argonaut
Really? Those are quite atrocious. Are you sure you ran it through their code
with the right input format? It just looks like a sepia filter right now.

------
astrosi
Impressive stuff. I especially like the style transfer that can be done by
using the global features of one image and the local features of another
(Fig. 7).

What I find somewhat annoying is that while they show some examples from
their validation set, and a couple of examples of model failures, they don't
appear to show a _random_ selection of cases from their validation set.

~~~
babo
Why not try it out yourself? They provide the model for download and Torch
code to run it on any photo.
[https://github.com/satoshiiizuka/siggraph2016_colorization](https://github.com/satoshiiizuka/siggraph2016_colorization)
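If I remember the repo's README correctly, the workflow is roughly the
following (exact script and file names are from memory and may differ
slightly; check the README before running):

```
# Fetch the code, grab the pretrained model (~670MB), then colorize a photo.
git clone https://github.com/satoshiiizuka/siggraph2016_colorization.git
cd siggraph2016_colorization
./download_model.sh            # fetches the trained model (colornet.t7)
th colorize.lua input.png output.png   # requires a working Torch install
```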

------
myth_buster
I'm surprised by how apt Lua is for this kind of algorithm. From the
architecture diagram I expected to be hit by a large blob of code, but found
that most things are taken care of by the language/framework itself!

~~~
babo
It's using a trained Torch model, available as a separate download. That's a
large blob of code and data, around 670MB.

~~~
myth_buster
I saw that, though I couldn't download it because it kept timing out... I
thought it was just the weights and values for the trained model; I wasn't
sure it included code.

------
drkrab
Can someone please put the file colornet.t7 on the torrent network or a high-
volume service somewhere? I'm probably not the only one having a hard time
downloading that file.

------
MelmanGI
I would really like to see approaches like these applied to movie scenes,
especially to see how differences between individually colorized frames of the
same scene would be handled.

~~~
aab0
They (I believe, although it might be the other guys) have a video up on
YouTube demonstrating it on a Vietnam movie trailer. It looks appalling: aside
from the usual averaged colors and saturation, the coloring is highly unstable
and changes from second to second.

~~~
MelmanGI
Thank you for the hint!

I found a video [1] colorized with these guys' approach, showing the unstable
coloring.

There are some other videos [2][3] colorized using different approaches that
don't seem to have as much color instability, though their color in general
appears more off.

It could, of course, also just be a property of the underlying source.

[1]
[https://www.youtube.com/watch?v=__kcHbzSNC4](https://www.youtube.com/watch?v=__kcHbzSNC4)
[2]
[https://www.youtube.com/watch?v=_MJU8VK2PI4](https://www.youtube.com/watch?v=_MJU8VK2PI4)
[3]
[https://www.youtube.com/watch?v=qQSViqdd0tU](https://www.youtube.com/watch?v=qQSViqdd0tU)

------
hartator
I wonder how companies doing this for money will be impacted. It's amazing
what a time saver this will be.

------
strictnein
Not sure if the title would be too long, but I'll be honest and say that I
thought this was about the news organization for a minute.

CNN = convolutional neural network in this context.

~~~
dang
We changed the title from "Automatic Image Colorization of Greyscale Images
Using CNN" to a substring of the article's title.

