The reason we ended up with digital logic is noise. Hysteresis in digital gates was the only way to make such large systems from such small devices without all of our signaling turning into a garbled mess. Analog processing has its niches, and I suspect the biggest niche will be where that noise is a feature rather than a hindrance, something like continuous-time neural networks.
Neuromorphic hardware is an area where I encountered analogue computing [1]. Biological neurons were modeled by a leaky integration (resistor/capacitor) unit. The system was 1*10^5 times faster than real time (too fast to use for robotics) and consumed little power, but it was sensitive to temperature (much like our brains). If I recall correctly, the technology has been used at CERN, as digital hardware would have required clock speeds that were too high. I have no clue what happened to the technology, but there were other attempts at neuromorphic, analogue hardware. It was very exciting to observe and use this research!
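For anyone who hasn't seen the model: a leaky integrator is just an RC circuit whose voltage decays toward rest and jumps on each input spike, firing when it crosses a threshold. A minimal software sketch (plain Python, purely illustrative; the hardware does this with physical resistors and capacitors, and every constant below is made up):

    # Leaky integrate-and-fire neuron, the software analogue of the
    # resistor/capacitor unit described above. All constants are illustrative.
    R, C = 1e6, 1e-9            # resistance (ohm), capacitance (farad)
    tau = R * C                 # membrane time constant = 1 ms
    dt = 1e-5                   # simulation step (s)
    v_rest, v_thresh = 0.0, 1.0
    w = 0.3                     # voltage jump per input spike

    v = v_rest
    spikes_in = {100, 150, 200, 205, 210}   # input spike arrival steps
    for step in range(1000):
        v += dt * (v_rest - v) / tau        # exponential leak toward rest
        if step in spikes_in:
            v += w                          # instantaneous synaptic kick
        if v >= v_thresh:                   # fire and reset
            print(f"spike at t = {step * dt * 1e3:.2f} ms")
            v = v_rest

Closely spaced input spikes push the voltage over threshold before the leak pulls it back down, which is the temporal integration the hardware exploits.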
I worked on a similar project, the Stanford Braindrop chip. It's a really interesting technology; what happened is that most people don't really know how to train those systems. There's a group called Vivum that seems to have a solution.
I work with QDI (quasi-delay-insensitive) systems, and I've long suspected that it would be possible to use those same design principles to make analog circuits robust to timing variation. QDI design is about sequencing discrete events using digital operators - AND and OR. I wonder if it is possible to do the same with continuous "events" using the equivalent analog operators, mix and sum.
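For anyone unfamiliar with the discrete side of this: the workhorse QDI primitive is the Muller C-element, which only produces an output event once all of its input events have arrived (roughly an AND over rising transitions that holds its state otherwise). A toy behavioural sketch in Python; what the continuous mix/sum counterpart would look like is exactly the open question above:

    # Toy Muller C-element, the canonical QDI event-sequencing primitive.
    # Output rises only after all inputs have risen and falls only after
    # all inputs have fallen; otherwise it holds its previous value.
    # Behavioural sketch only, not a circuit model.
    class CElement:
        def __init__(self, n_inputs):
            self.inputs = [0] * n_inputs
            self.out = 0

        def set_input(self, i, value):
            self.inputs[i] = value
            if all(self.inputs):
                self.out = 1      # every input event has arrived
            elif not any(self.inputs):
                self.out = 0      # every input has withdrawn
            return self.out       # otherwise: hold state

    c = CElement(2)
    print(c.set_input(0, 1))  # 0: still waiting on the other input
    print(c.set_input(1, 1))  # 1: both events arrived, output event fires
    print(c.set_input(0, 0))  # 1: holds until both inputs withdraw
    print(c.set_input(1, 0))  # 0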
We got some nice results with spiking/pulsed networks, but the small number of units limits the applications, so we usually end up in a simulator or using more abstract models. There seems to be a commercial product, but it also has only 0.5K neurons; that might be enough for 1D data processing, though, and fill a good niche there (1 mW!) [2]
Just read about it, and there are familiar names on the author list. I really wish this type of technology would gain more traction, but I am afraid it will not receive the focus it deserves given the direction of current AI research.
> I wonder if it is possible to do the same with continuous "events" using the equivalent analog operators, mix and sum.
I actually worked for a startup that makes tiny FPAAs (field-programmable analog arrays) for the low-power battery market. Their major appeal was that you could reprogram the chip to synthesize a real analog network and offload the signal-processing portion of your product for a tiny fraction of the power cost.
The key thing is that analog components are influenced much more by environmental fluctuations (think temperature, pressure, and manufacturing variation), which impacts the "compute" of an analog network. Their novelty was that the chip can be "trimmed" to offset these impacts using floating-gate MOSFETs, the same kind used in flash memory, as an analog offset. It works surprisingly well, and I suspect that if they can capture the low-power market we'll see a revitalization of analog compute in the embedded space. It would be really exciting to see this enter the high-bandwidth control system world!
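Conceptually, trimming boils down to a one-point calibration: measure the block against a known reference, store the residual error on-chip, and subtract it from then on. A hand-wavy sketch of that idea (the linear-offset model and all names and values below are my assumptions, not the startup's actual flow):

    # Sketch of one-point offset trimming. Assumes the block's error is a
    # roughly constant offset (from process/temperature drift), which a
    # stored floating-gate charge can cancel. All names/values are hypothetical.
    def measure_block_output(known_input):
        """Stand-in for reading the analog block's response on a tester."""
        true_gain, drift_offset = 1.0, 0.042   # made-up process/temperature error
        return true_gain * known_input + drift_offset

    def compute_trim(reference_input, expected_output):
        # The trim is whatever correction brings the measured output back
        # to the expected value; in hardware it would be programmed into
        # the floating gate.
        return expected_output - measure_block_output(reference_input)

    trim = compute_trim(reference_input=0.5, expected_output=0.5)
    corrected = measure_block_output(0.5) + trim
    print(f"trim = {trim:+.3f} V, corrected output = {corrected:.3f} V")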
Well, it's not just about noise. There is also loss. It's easier to reconstruct and/or amplify a digital signal than an analogue one. It's also easier to build repeatable digital systems that don't require calibration than it is to build analogue ones.
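The regeneration point is easy to see numerically: push a value through a chain of lossy/noisy stages, and a digital repeater re-thresholds the bit at every hop so errors don't accumulate, while an analog chain re-amplifies whatever noise it has picked up along the way. A rough simulation sketch (the noise level and stage count are arbitrary):

    import random

    random.seed(0)
    STAGES, NOISE = 20, 0.05   # 20 hops, noise of 5% of full scale per hop

    def noisy(x):
        return x + random.gauss(0.0, NOISE)

    analog = 0.7               # analog: the value itself is the signal
    digital = 1                # digital: only "above/below 0.5" matters
    for _ in range(STAGES):
        analog = noisy(analog)                              # errors accumulate
        digital = 1 if noisy(float(digital)) > 0.5 else 0   # re-thresholded each hop

    print(f"analog value after {STAGES} hops:  {analog:.3f} (started at 0.700)")
    print(f"digital bit after {STAGES} hops:   {digital} (started at 1)")

The analog value random-walks away from 0.7, while the digital bit survives because each stage restores it to a clean level, which is exactly the noise-margin argument at the top of the thread.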
Worth noting the exception here, which is current loops (e.g. the 4-20 mA loops used in industrial signaling).
This is a rare misfire from Quanta. No, there is no practical way to model anything non-trivial - especially not ML - with analog hardware of any kind.
Analog hardware just isn't practical for models of equivalent complexity. And even if it were practical, it wouldn't be any more energy-efficient.
Whether it's wheels and pulleys or electric currents in capacitors and resistors, analog hardware has to do real work moving energy and mass around.
Modern digital models do an insane amount of work. But each step takes an almost infinitesimal amount of energy, and the amount of energy used for each operation has been decreasing steadily over time.
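To put "almost infinitesimal" in perspective, a back-of-the-envelope calculation (the per-operation energy here is an assumed order-of-magnitude figure for a modern digital accelerator, not a measured one):

    # Illustrative arithmetic only; ~1 pJ per multiply-accumulate is assumed
    # as a rough order of magnitude for recent digital accelerators.
    energy_per_mac_j = 1e-12      # assumed: 1 picojoule per MAC
    macs_per_inference = 1e12     # assumed: a trillion MACs (a large model)

    joules = energy_per_mac_j * macs_per_inference
    print(f"{joules:.1f} J per inference")   # about 1 J, i.e. roughly 0.3 mWh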
> No, there is no practical way to model anything non-trivial - especially not ML - with analog hardware of any kind.
Are you aware that multiple companies (IBM, Intel, others) have prototype neuromorphic chips that use analog units to process incoming signals and apply the activation function?
IBM's NorthPole chip has been provided to the DoD for testing, and IBM's published results look pretty promising (faster and less energy for NN workloads compared to Nvidia GPU).
Intel's Loihi 2 chip has been provided to Sandia National Laboratories for testing, with presumably similar performance benefits to IBM's.
There are many others with neuromorphic chips in development.
My opinion is that AI workloads will shift to neuromorphic chips as fast as the technology can mature. The only question is which company to buy stock in; I'm not sure who will win.
EDIT: The chips I listed above are NOT analog; they are digital, but with an alternative architecture that reduces memory access. I had read about IBM's test chips that were analog and assumed these "neuromorphic" chips were analog too.