

Show HN: Neural network color recognition - megalodon
https://github.com/mateogianolio/hopfield-color-filter

======
shazam
Interesting, but why is a NN necessary instead of just examining pixel values?

~~~
zamalek
I would assume it's a practical (although not entirely useful) example of a
Hopfield network. If you look at the Wikipedia page for it [1], there is little
to no information on what you would use it for; seeing an example of practical
usage is great for figuring out other practical applications for a Hopfield
network.

[1]:[http://en.wikipedia.org/wiki/Hopfield_network](http://en.wikipedia.org/wiki/Hopfield_network)

~~~
megalodon
You're right, it's just a practical application.

For those interested, I came across this chapter [1], which is great for
learning about Hopfield networks.

[1]:
[http://www.cs.toronto.edu/~mackay/itprnn/ps/506.522.pdf](http://www.cs.toronto.edu/~mackay/itprnn/ps/506.522.pdf)
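To give a feel for what the chapter describes, here is a minimal from-scratch sketch (my own toy code, not the network in this repo): a Hopfield net stores bipolar (+1/−1) patterns with the Hebbian rule and recalls them by repeatedly updating each unit with the sign of its weighted input.

```javascript
// Store patterns with the Hebbian rule: w[i][j] = sum over patterns of p[i]*p[j].
function trainHopfield(patterns) {
  const n = patterns[0].length;
  // symmetric weight matrix with zero diagonal
  const w = Array.from({ length: n }, () => new Array(n).fill(0));
  for (const p of patterns) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (i !== j) w[i][j] += p[i] * p[j];
      }
    }
  }
  return w;
}

// Recall: repeatedly set each unit to the sign of its weighted input
// until the state (hopefully) settles into a stored pattern.
function recall(w, input, steps = 10) {
  const s = input.slice();
  for (let t = 0; t < steps; t++) {
    for (let i = 0; i < s.length; i++) {
      let sum = 0;
      for (let j = 0; j < s.length; j++) sum += w[i][j] * s[j];
      s[i] = sum >= 0 ? 1 : -1;
    }
  }
  return s;
}

// Store one pattern, then recover it from a copy with one flipped bit.
const pattern = [1, -1, 1, -1, 1, -1];
const w = trainHopfield([pattern]);
const noisy = [1, -1, 1, -1, -1, -1];
console.log(recall(w, noisy)); // → [ 1, -1, 1, -1, 1, -1 ]
```

This is the associative-memory behavior the chapter is about: corrupted input converges back to the nearest stored pattern.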

------
ilzmastr
Good stuff.

quick q: why do you call this a Hopfield network? I see you have a fully
connected 2-layer neural network (0 hidden layers):
[https://github.com/mateogianolio/hopfield-color-recognition/...](https://github.com/mateogianolio/hopfield-color-recognition/blob/master/network.js#L5)

instead of a bunch of circularly connected perceptrons:
[http://en.wikipedia.org/wiki/Hopfield_network#mediaviewer/Fi...](http://en.wikipedia.org/wiki/Hopfield_network#mediaviewer/File:Hopfield-net.png)

p.s. it looks like the library you're using has a built-in Hopfield network,
`new Architect.Hopfield(10)` (at the bottom here:
[https://www.npmjs.com/package/synaptic](https://www.npmjs.com/package/synaptic)),
why didn't you use that?

~~~
megalodon
Thanks.

To answer your first question, I (perhaps naively) assumed that the synaptic
library used correct naming for its network prototypes [1].

My implementation contains a few modifications to the one defined as
'Architect.Hopfield' [2], which is why I decided to put it in a separate file.
It also lets a visitor see how the network is defined without needing to
browse the source of the synaptic library.

[1]:
[https://github.com/cazala/synaptic/blob/master/README.md#hop...](https://github.com/cazala/synaptic/blob/master/README.md#hopfield)

[2]:
[https://github.com/cazala/synaptic/blob/master/src/architect...](https://github.com/cazala/synaptic/blob/master/src/architect.js)

------
kowdermeister
I don't get it. If it was trained to recognize black and white, why did it
find such a complex pattern where there clearly was none in the first example?
Isn't that a sign that it failed?

~~~
ilzmastr
Interesting stuff, nice work!

The complex pattern is a result of the network doing the equivalent of fitting
3 Gaussians (one for each RGB color channel) to the intensity of the pixel in
that channel. I was curious, so I recorded a tiny demo of doing the equivalent
piecewise in Photoshop: [http://cl.ly/183Z3w1V1B0F](http://cl.ly/183Z3w1V1B0F)

Posterization in Photoshop is a k-means technique, and each cluster center is
the center of a Gaussian, hence my motivation for the analogy video.
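For the curious, the posterize-as-k-means idea can be sketched in a few lines of toy code (my own illustration; the intensity values and k are made up). Posterizing maps every pixel intensity to its nearest cluster center:

```javascript
// Toy 1-D k-means on pixel intensities (0..255).
function kmeans1d(values, k, iters = 20) {
  // spread the initial centers evenly over the intensity range
  let centers = Array.from({ length: k }, (_, i) => (255 * i) / (k - 1));
  for (let t = 0; t < iters; t++) {
    const sums = new Array(k).fill(0);
    const counts = new Array(k).fill(0);
    for (const v of values) {
      // assign each value to its nearest center
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (Math.abs(v - centers[c]) < Math.abs(v - centers[best])) best = c;
      }
      sums[best] += v;
      counts[best]++;
    }
    // move each center to the mean of its assigned values
    centers = centers.map((c, i) => (counts[i] ? sums[i] / counts[i] : c));
  }
  return centers;
}

const intensities = [10, 20, 30, 120, 130, 240, 250];
console.log(kmeans1d(intensities, 3)); // → [ 20, 125, 245 ]
```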

The triangles in the thresholded image make me think that the network is doing
the equivalent of predicting a probability that a certain pixel is white/black
per channel; these channels then collude on the final decision, so we see
triangles where the intersections of the 3 triangularly spaced Gaussians
yield a value above some threshold.

Pic of the different channels:
[http://cl.ly/image/1w371V3U0N02](http://cl.ly/image/1w371V3U0N02)
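A toy sketch of that per-channel reading (illustrative only; the fixed cutoff is made up, whereas the real network's per-channel decision is learned):

```javascript
// Threshold each RGB channel independently, then combine the three
// binary decisions into the output pixel. Artifacts appear where the
// channels' decision boundaries overlap.
function thresholdChannel(value, cutoff = 128) {
  return value >= cutoff ? 255 : 0;
}

function quantizePixel([r, g, b]) {
  return [thresholdChannel(r), thresholdChannel(g), thresholdChannel(b)];
}

console.log(quantizePixel([200, 40, 130])); // → [ 255, 0, 255 ] (magenta)
```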

~~~
megalodon
Great analysis! I hope you don't mind me linking to this comment in the
readme as an explanation of the fractal patterns.

------
bsaul
Is the purpose of this project to simulate the color perception paradox (the
blue or gold dress)?

~~~
megalodon
Nope, but feel free to do whatever you want with it.

