Spikes in silicon are perhaps the most power-inefficient way to represent numbers. You dissipate power every time you change a wire from high voltage to low, or back again. Think about representing numbers at 8-bit precision. With a spiking neural network, you need to charge and discharge the wire 128 times on average, assuming the numbers are uniformly distributed between 0 and 255. With a standard 8-bit representation, you need to charge and discharge a wire only 4 times on average.
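A quick back-of-the-envelope check of those numbers, under the same toy model the comment uses: one charge/discharge cycle per spike for rate coding, and one cycle per set bit for a plain 8-bit binary word (switching activity between successive words is ignored).

```python
# Toy model: count charge/discharge cycles per value under rate coding
# vs. a plain 8-bit binary encoding, for uniformly distributed values 0-255.
import random

N = 1_000_000
values = [random.randrange(256) for _ in range(N)]

# Rate coding: a value k is sent as k spikes, i.e. k charge/discharge cycles.
spike_cycles = sum(values) / N

# Binary coding: one cycle for each bit that is set in the 8-bit word.
bit_cycles = sum(bin(v).count("1") for v in values) / N

print(f"rate-coded cycles per value:   {spike_cycles:.1f}")  # ~127.5
print(f"binary-coded cycles per value: {bit_cycles:.1f}")    # ~4.0
```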
Every time people make analogies to biological systems, they seem to overlook the truth: we don’t know how to engineer biological systems. It’s true brains are amazing, but they are not built from silicon. Trying to emulate a crude model of a brain in silicon is like making an airplane flap its wings.
You'd be right if we needed 8-bit precision on the activations of a neural network to get reasonable performance. But we don't. In fact, binarized neural networks (i.e. 1-bit precision) are an area of very active research as well as HW development.
Our own HW represents weights with a few bits, and neural states with a few bits. Only actual events transmitted between neurons are single-bit.
Also, because SNNs operate in real time, "time" itself is a variable that can carry values. Several architectures use event timing to represent real values.
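As an illustration of temporal coding (a generic latency-coding sketch, not any specific chip's scheme), a real value in [0, 1] can be encoded as the time of a single spike within a fixed window, so one charge/discharge cycle can carry several bits of precision:

```python
# Latency coding sketch: larger values spike earlier within a fixed window.
def encode_latency(value: float, window_us: float = 100.0) -> float:
    """Map a value in [0, 1] to a spike time in microseconds."""
    assert 0.0 <= value <= 1.0
    return (1.0 - value) * window_us

def decode_latency(spike_time_us: float, window_us: float = 100.0) -> float:
    """Recover the value from the spike time."""
    return 1.0 - spike_time_us / window_us

t = encode_latency(0.73)   # one spike at ~27 us
print(decode_latency(t))   # ~0.73 (up to float rounding)
```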
There are several much more power-efficient ways of representing information than the one you suggested.
I'm one of those people who research low-precision NNs. There's a trade-off between network size and network precision, so to get binarized networks to work "reasonably well" you need to make them much larger in terms of weight count.
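For readers unfamiliar with the idea, here is a minimal sketch of weight binarization (loosely in the spirit of BinaryConnect/XNOR-Net, not a faithful reimplementation of either): keep full-precision weights, but compute with their signs scaled by a per-layer factor. The output correlates with the full-precision result but is not identical, which is why binarized networks typically need more weights.

```python
# Minimal weight-binarization sketch: sign(W) scaled by the mean |W|.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))   # full-precision weights
x = rng.normal(size=128)          # input activations

alpha = np.abs(W).mean()          # per-layer scale factor
W_bin = np.sign(W)                # 1-bit weights in {-1, +1}

y_full = W @ x                    # full-precision output
y_bin = alpha * (W_bin @ x)       # binarized approximation

print(np.corrcoef(y_full, y_bin)[0, 1])  # high, but below 1.0
```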
SNNs are not well suited to existing silicon-based process technologies. It's much more power efficient to perform GEMM computation in the analog domain, using non-spiking voltage or current levels (e.g. memristor or floating-gate transistor crossbars). A brain uses spikes because of very specific biological constraints. We use silicon because it's extremely reliable, extremely fast, and very cheap. And it does not have the constraints that shaped brain evolution.
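A rough numerical model of that analog matrix-vector multiply (idealized: no device noise, nonlinearity, or ADC/DAC cost): weights are stored as conductances in a crossbar, inputs are applied as row voltages, and Kirchhoff's current law sums the products on each output line, so the multiply-accumulate happens in the analog domain.

```python
# Idealized crossbar GEMV: output current I[j] = sum_i G[j, i] * V[i].
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(64, 32))  # cell conductances (siemens)
V = rng.uniform(0.0, 0.2, size=32)          # input voltages (volts)

I = G @ V                                   # output-line currents (amperes)
print(I.shape, I[:3])
```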
Brain simulation research is welcome, but the current SNN algorithms are only slightly less crude simplifications of the brain than regular ANNs. Using spikes in silicon to do ML computations does not make much sense, imo. I'd rather see neuroscientists focus on fundamental understanding of core brain algorithms, like what Numenta is trying to do.
I'm surprised SNNs don't get more attention. The technology to make efficient hardware for training them exists. The key problem is that current DNNs just take too much power to train. They are fundamentally inefficient and none of the incremental improvements in recent years has changed this.