

Neural Network Visualisation in Clojure - c-oreills
http://clojurefun.wordpress.com/2013/04/10/neural-network-visualisation/

======
saintx
Cool, with caveats. Although this is interesting for people who know how
neural network structures are built and generally how backpropagation and its
successor training algorithms work, it isn't particularly _informative_ as a
visualization. It does show how easy it is to encode information visually,
compared with how difficult it can be for the viewer to _decode_ that same
information. This is a common problem with "information" visualizations, as
opposed to "scientific data" visualizations (such as volumetric scan data or
vector maps): with abstract information there's no obvious physical correlate
that we can use to help us decode the information as viewers.

~~~
invalidOrTaken
This is a distinction worth making (informative visualizations vs. ...well,
other ones), but there is a corner case for whom this is a very helpful viz:
the newbie playing around with NNs who could use a visual aid beyond
Clojure's pprint. As a member of said corner case, this would be very helpful
to me. All the same, thank you for reminding me of the filter all new viz
projects must pass: "does this communicate meaning?"

------
saosebastiao
I have a question for the author, but please do not interpret this as the
typical HackerNews-esque pessimistic attack, as it is a sincere question.

Do you really feel like visualizing Neural Networks helps to understand them
better? I have yet to find one that has helped me understand it any better
than a textual explanation or pseudo-code of the algorithm.

~~~
Mikera
I'm the author. Yes I find it useful in various ways.

If the learning rate is too high, you can visibly see weights flicker
between different colours. In simple nets you can use it to identify the
"meaning" of feature detectors by observing positive and negative links (green
and red). You can debug learning algorithms by immediately seeing if something
unusual is happening to the weights or activations.

As always caveats apply, but it is a useful technique (when used alongside a
variety of other tools).

------
dave_sullivan
I think it's a useful visualization, but I prefer matrix plots to observe the
weights. You can see the weights start differentiating themselves as training
proceeds, and you'll notice that some layers tend to learn a lot faster than
others. The unit activations (neuron outputs) are similarly useful to
visualize.

Example of weights on matrix plot: <http://imgur.com/T48Wal1>

------
mark_l_watson
I wrote a commercial NN simulator in the late 1980s and I used a different
approach to visualizing weights (that many others also use): if two connected
layers are viewed as 1-dimensional vectors, then the connection weights
between them form a 2-dimensional grid. Each weight grid cell is color coded.
This is a much more information-rich display.
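A minimal sketch of that idea in Python (my own illustration, not mark_l_watson's original code; the red/green tanh colouring is borrowed from the linked Clojure project, and all names here are hypothetical):

```python
import numpy as np

def weight_grid_rgb(weights):
    """Turn an (m, n) weight matrix connecting an n-unit layer to an
    m-unit layer into an (m, n, 3) RGB image: red for negative weights,
    green for positive, black near zero."""
    w = np.asarray(weights, dtype=float)
    rgb = np.zeros(w.shape + (3,))
    rgb[..., 0] = np.clip(np.tanh(-w), 0.0, 1.0)  # red channel
    rgb[..., 1] = np.clip(np.tanh(w), 0.0, 1.0)   # green channel
    return rgb                                     # blue stays 0

# Weights between a 3-unit layer and a 2-unit layer, one cell per connection.
img = weight_grid_rgb([[ 0.0,  1.5, -1.5],
                       [-0.2,  0.0,  2.0]])
# Display with e.g. matplotlib: plt.imshow(img, interpolation="nearest")
```

One cell per connection makes saturated rows/columns (a unit that dominates) and dead regions (weights stuck near zero) jump out at a glance.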

------
aswanson
What is the color scale for the connection weight strength?

~~~
c-oreills
From github:

    (defn weight-colour
      ([^double weight]
        (Color. (clamp-colour-value (Math/tanh (- weight)))
                (clamp-colour-value (Math/tanh weight))
                0.0)))

So when the weight is 0, it's black. As the weight gets more positive, the
colour gets greener, and as the weight gets more negative, the colour gets
redder. clamp-colour-value makes sure each colour component stays inside the
interval [0, 1].
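For readers unfamiliar with Clojure, a line-by-line port of that snippet to Python (a sketch; the function names just mirror the Clojure ones):

```python
import math

def clamp_colour_value(x):
    """Keep a colour component inside [0, 1], like clamp-colour-value."""
    return min(1.0, max(0.0, x))

def weight_colour(weight):
    """(red, green, blue): the red channel rises for negative weights,
    green for positive ones; blue stays at 0."""
    return (clamp_colour_value(math.tanh(-weight)),
            clamp_colour_value(math.tanh(weight)),
            0.0)

weight_colour(0.0)   # black
weight_colour(2.0)   # green: tanh(2) is about 0.96, red clamps to 0
weight_colour(-2.0)  # red: the mirror image
```

Because tanh saturates, weights beyond roughly +/-3 all look equally bright; the visible colour gradient lives in the small-weight range around zero.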

tanh curve shown here:
[http://upload.wikimedia.org/wikipedia/commons/7/76/Sinh_cosh...](http://upload.wikimedia.org/wikipedia/commons/7/76/Sinh_cosh_tanh.sv)

~~~
mikegioia
I'm getting a 404 on that URL, I think you forgot the g in .svg :P

~~~
c-oreills
Ah, yes. >_<

[http://upload.wikimedia.org/wikipedia/commons/7/76/Sinh_cosh...](http://upload.wikimedia.org/wikipedia/commons/7/76/Sinh_cosh_tanh.svg)

