Each of the channels (filters, in this case) is visualized separately. The user selects a layer, and each of the image maps shown represents the activations of a single channel.
You can verify this by looking at the `nb_filters` key in the JSON description of the layer on the left and counting the number of image maps on the right.
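If you want to reproduce the same per-channel view outside the tool, here's a minimal sketch in Keras. The model, layer name, and random input are placeholders; swap in your own network and a properly preprocessed image.

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Placeholder model and layer; substitute your own network.
model = keras.applications.VGG16(weights="imagenet", include_top=False)
layer_name = "block1_conv1"  # any convolutional layer

# Sub-model that outputs the chosen layer's activations.
activation_model = keras.Model(
    inputs=model.input,
    outputs=model.get_layer(layer_name).output,
)

# Placeholder input; a real image batch of shape (1, H, W, 3) goes here.
image = np.random.rand(1, 224, 224, 3).astype("float32")
activations = activation_model.predict(image)  # shape (1, h, w, n_filters)

# One image map per channel, like the per-filter grid the tool shows.
n_filters = activations.shape[-1]
cols = 8
rows = int(np.ceil(n_filters / cols))
fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
for i, ax in enumerate(axes.flat):
    ax.axis("off")
    if i < n_filters:
        ax.imshow(activations[0, :, :, i], cmap="viridis")
plt.show()
```

The number of maps plotted should match the filter count in the layer's JSON description (`nb_filters` in older Keras, `filters` in current versions).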
As slick as this kind of thing looks, I don't think it helps much to see raw filters on an image-by-image basis. The insights I think matter more are the abstract ones, like the recent paper on universal adversarial perturbations and the geometry of the decision boundary.
Visualizations can help, but more in the sense of rapidly iterating through bespoke visualizations to generate and test hypotheses. And to do that efficiently you don't need a slick tool that does one kind of visualization; you need a slick 'grammar' for quickly building new visualizations, Bret Victor style.
I created an account just to say that this looks really fantastic. Thanks for creating this! I'm looking forward to steady improvements in neural network tooling, and it's people like you who keep that train moving forward.
What is it visualizing in a convolutional layer? Does it average all the channels, or select just one?