Hats off to the authors; this is no small feat, and something that has been attempted for years. But IMHO it's time the field moved on from chasing matrix accelerators and focused on the real advantages of event-based computing: asynchronous, low-latency, event-based signal processing.
Even for small-network tasks, training spiking networks has been non-trivial. This paper provides a way to compute exact gradients, which will likely mean faster optimisation than surrogate gradients or other approximation methods for SNNs.
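To make the contrast concrete: the usual workaround is a surrogate gradient, where the undefined derivative of the spike threshold is replaced by a smooth stand-in during the backward pass. A minimal sketch (my own illustration, not the paper's method; the fast-sigmoid-style surrogate and its slope parameter are common choices, not prescribed by the paper):

```python
import numpy as np

def spike(v, threshold=1.0):
    """Heaviside spike function: fires when membrane potential crosses threshold.
    Its true derivative is zero almost everywhere, which stalls backprop."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, slope=10.0):
    """Smooth stand-in for the Heaviside's derivative (fast-sigmoid style).
    Exact-gradient methods avoid this approximation entirely."""
    x = slope * (v - threshold)
    return slope / (1.0 + np.abs(x)) ** 2

v = np.array([0.2, 0.95, 1.05])
print(spike(v))           # forward pass: [0. 0. 1.]
print(surrogate_grad(v))  # nonzero pseudo-gradient even away from threshold
```

The surrogate injects a bias that exact gradients, by definition, don't have.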
Personally I think that way too many resources were wasted on trying to make better deep networks with spikes. In my opinion it is much more promising to apply spiking networks on problems that are inherently event-based.
Having a functional backpropagation algorithm such as the one provided can help with that, obviously.
I applaud this team's efforts. A real breakthrough.
There you get the full dose of hype for neuromorphic computing, but without any critical reflection (naturally, since it’s a press release advertising a product).
Unfortunately I am not aware of literature that provides a critical review of neuromorphic computing. You have to read between the lines of the research papers to find out that the field has failed to live up to the promise of lower-energy deep learning (which was a misguided promise from the outset, IMHO).
Many researchers have been trying hard to shoe-horn deep ANNs into spiking networks for the last 10 years. But this doesn’t change the fact that linear algebra is best accelerated by linear algebra accelerators (i.e. GPUs/TPUs).
Generally, spiking networks will likely have an edge when the signals they are processing are events in time: for example, when processing signal streams from event-based sensors, like silicon retinas. There's also evidence that event-based control has advantages over its periodically sampling equivalents.
Sparse activations that don't also have a time component (i.e. are sparse in space but not in time) can be implemented very well without events.
Granted, SNN processors can handle sparse activations better than matrix accelerators. But then again, SNN accelerators might carry lots of SNN overhead that is not required for sparse activations alone.
Edit: A good example of a non-spiking sparse-activation accelerator is the NullHop architecture.
However, I think the MNIST and Yin-Yang datasets, using latency coding, are not ideal examples to demonstrate its performance.
These datasets are useful to demonstrate nonlinear classification, and it's certainly great to see that the spiking network performs competitively. However, the transformation into a latency code costs time, in terms of computation, and also in terms of representation, before even one item is classified. Perceptron-based ANNs with continuous outputs don't require this step and will always have an edge over spiking networks in such scenarios.
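To illustrate that conversion cost, here is a toy sketch of latency (time-to-first-spike) encoding, where stronger inputs fire earlier; the function name, the linear mapping, and the 100 ms window are my own assumptions for illustration:

```python
import numpy as np

def latency_encode(x, t_max=100.0):
    """Map normalized intensities in [0, 1] to spike times (ms):
    stronger inputs spike earlier; zero inputs never spike (inf)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    times = np.full_like(x, np.inf)
    nonzero = x > 0
    times[nonzero] = t_max * (1.0 - x[nonzero])
    return times

pixels = np.array([0.0, 0.25, 1.0])
print(latency_encode(pixels))  # times: inf (no spike), 75.0, 0.0
```

A continuous-valued ANN consumes `pixels` directly in one forward pass; the spiking network has to wait up to `t_max` before its latest informative spike even arrives.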
I think what the field is really lacking is an ML problem that can leverage spiking networks directly, that does not require costly conversion of data into a representation that is suitable for spiking networks.
Personally I think SNNs are a very exciting research field, both from a neuroscience and a computer science angle. The work we are discussing here is deeply impressive in its rigour, and it addresses an important problem in spiking network research.
Whether spiking networks will provide lower-energy deep learning is a totally different question.
I have many ideas and questions regarding your paper:
- How do you adjust weights between different spikes?
- Do you use or implement a kind of wavelet for wave propagation, for example for spike interference?
- What neuromorphic hardware can I buy to run your code/ the SNN?
- We only consider one kind of model system in this paper, but the method would work for any kind of hybrid dynamical system, so also for other physical substrates (a lot of exciting work to do there).
- We used to sell a neuromorphic hardware system, Spikey, for ~3000 Euro (basically at cost), and we've recently completed a similar project; we also provide access to remote users via the ebrains collaboratory (https://ebrains.eu/service/collaboratory/). There are a number of commercial offers in the works (SynSense, Innatera). You can also buy SpiNNaker boards or access them via ebrains. Loihi and TrueNorth either don't sell or are pretty expensive, but they have "research agreements" in place.
Current neuromorphic hardware is not easily accessible, but you can simulate spiking neural networks. Check out, e.g., https://brian2.readthedocs.io/en/stable/ or Nengo.ai
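If you just want a feel for the dynamics before reaching for those simulators, a leaky integrate-and-fire neuron fits in a few lines of NumPy. This is a toy forward-Euler sketch with made-up parameter values; Brian2 and Nengo handle units, solvers, and full networks properly:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.
    Returns the spike times (in seconds) produced by the input current."""
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration toward i_in
        if v >= v_th:                  # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                # hard reset after the spike
    return spike_times

# A constant suprathreshold drive produces regular spiking.
spikes = simulate_lif(np.full(1000, 1.5))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

Event-based simulators (like the one underlying the paper) instead jump directly between threshold crossings rather than stepping a fixed `dt`.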
Also, will you release your method as code?
My aim is to release the method as part of Norse https://github.com/norse/norse. There is some subtlety involved in implementing it for a given integration scheme, though. The event-based simulator underlying the paper will also be released in due time.
See also our tutorial on neuron parameter optimization to understand how it's useful for machine learning: https://github.com/norse/notebooks#level-intermediate
There's also a great book on the topic by Gerstner available online: https://neuronaldynamics.epfl.ch/
Disclaimer: I'm a co-author of the library Norse
Regarding the target audience, it's actually not entirely clear to me. This work lies at the intersection of computational neuroscience and deep learning, which isn't a huge set of people. So I think your question is highly relevant, and we (as researchers) have a lot of work in front of us to explain why this is interesting and important.