So, this is definitely good work, and I don't have any suggestions on how to do it better, but I'm ultimately not sure how useful I find it. I have no intuition for what the histograms for `t` should look like, and so seeing the different histograms for `ծ` vs `not ծ` doesn't tell me much except that they're different (which is trivially true!).
Does anyone else find this sort of visualization useful? Maybe I'm just misunderstanding it. I would love to develop more of an intuition for neuron activations in deep nets; right now, the only thing I do is feed in inputs and look at the outputs, which is wildly inadequate.
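For what it's worth, the mechanics of producing these histograms are simple, which is part of why I wish I got more out of them. Here's a minimal sketch (PyTorch, with a toy model and made-up input groups standing in for the post's `ծ` / `not ծ` split) of capturing a hidden layer's activations with a forward hook and histogramming them per group:

```python
# Minimal sketch: pull a hidden layer's activations with a forward hook
# and compare their histograms for two groups of inputs. The model and
# data here are toy stand-ins, not anything from the post.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 2))

captured = []
def hook(module, inputs, output):
    # Stash a flat copy of this layer's output for later inspection.
    captured.append(output.detach().flatten())

# Hook the Tanh layer -- the "neuron activations" we want to look at.
handle = model[1].register_forward_hook(hook)

group_a = torch.randn(256, 16) + 1.0  # stand-in for inputs with the pattern
group_b = torch.randn(256, 16)        # stand-in for inputs without it

with torch.no_grad():
    model(group_a)
    model(group_b)
handle.remove()

plt.hist(captured[0].numpy(), bins=50, alpha=0.5, label="with pattern")
plt.hist(captured[1].numpy(), bins=50, alpha=0.5, label="without pattern")
plt.legend()
plt.title("Hidden-unit activation histograms")
plt.show()
```

Getting the two histograms is easy; knowing what shapes to expect, and what a given difference means, is the part I still have no feel for.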
I have the same criticism of the CNN visualizations that Karpathy et al. [1] came up with. They're cool, but I don't find them that useful.
I hope I don't sound too critical. I'm really glad people are doing this sort of work; I think it's incredibly important to the advancement of deep learning, and this particular work is well done. I just don't find it personally useful.
[1]: https://arxiv.org/abs/1506.02078