My favorite part is when Boahen says of binary computing, "It was so brute force." I hadn't thought of it like that before, but it makes sense, and highlights how the perspectives of different people can shed light on a problem.
A place to look for innovation is in the shadow of the fears of earlier (or original) innovators.
Most of the would-be inventors of flying machines were afraid of instability, so many of the failed attempts at heavier-than-air flight were encumbered by very large stabilization surfaces. It took the Wrights' insight that dynamic stability could be provided by the pilot to make it work.
The same insight resulted in highly maneuverable fly-by-wire fighters like the F-16, which were a departure from aircraft with huge stabilizers for high-speed flight like the MiG-23. (Some versions of the MiG-23 actually had a vertical stabilizer that extended below the aircraft and had to fold up prior to landing.)
If you're seeking to innovate, think: "What were my predecessors afraid of?" Find those rocks and look under them!
Prof. Boahen gave a talk at TED a couple of years ago. This article doesn't really add any new information about what they are doing. The video is really interesting and contains a simple demo: http://blog.ted.com/2008/07/kwabena_boahen.php
Ever since encountering the concept of artificial neural networks, I have wondered a few times why people use them the way they do.
A one- or two-layer perceptron trained with back-propagation has about as much to do with how the brain works as anything else.
Why don't people just build a network of the simplest possible silicon-based components, with bonds that remember whether they were recently active and that are strengthened if, a moment later, the network's reaction to a given stimulus turns out to be proper, but weakened if it was improper?
I bet you could teach this kind of network anything, provided it consists of enough elements, the density of connections is sufficient, and you factor in periods of random component discharges to simulate sleep and thus avoid overlearning.
Brain architecture and the chemistry of synapses are just implementation details of this general idea when you have to build it out of biological cells.
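For what it's worth, the scheme described above is close to what the literature calls reward-modulated Hebbian learning with eligibility traces: bonds remember recent activity and a scalar "proper/improper" signal gates the weight change. Below is a minimal sketch of that reading of the proposal; the class names, the toy task, the exploration noise, and the learning rate are all my own illustrative choices, not a claim about any particular system.

```python
import random

random.seed(0)

class Bond:
    """A 'bond' that remembers whether it was recently active."""
    def __init__(self):
        self.weight = random.uniform(-0.1, 0.1)
        self.trace = 0.0  # recent-activity memory (eligibility trace)

class Network:
    """Single threshold unit trained by reward-modulated Hebbian updates."""
    def __init__(self, n_inputs, noise=0.5):
        self.bonds = [Bond() for _ in range(n_inputs)]
        self.noise = noise  # random jitter so firing can be discovered

    def react(self, stimulus):
        total = sum(b.weight * x for b, x in zip(self.bonds, stimulus))
        fired = total + random.uniform(-self.noise, self.noise) > 0.0
        # A bond counts as "recently active" when its input was on
        # and the network actually fired.
        for b, x in zip(self.bonds, stimulus):
            b.trace = float(x) if fired else 0.0
        return fired

    def reinforce(self, proper, rate=0.05):
        # Strengthen recently active bonds if the reaction was proper,
        # weaken them if it was improper.
        sign = 1.0 if proper else -1.0
        for b in self.bonds:
            b.weight += sign * rate * b.trace

# Toy task: the network should fire exactly when input 0 is on.
net = Network(2)
for _ in range(200):
    stimulus = [random.randint(0, 1), random.randint(0, 1)]
    fired = net.react(stimulus)
    desired = stimulus[0] == 1
    net.reinforce(proper=(fired == desired))
```

After training, the bond from the task-relevant input has been steadily strengthened, while the irrelevant one is held down by punished misfires; the exploration noise is what lets a weak bond occasionally fire and earn its first reinforcement.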
"A one- or two-layer perceptron trained with back-propagation has about as much to do with how the brain works as anything else."
I take the above to mean that they are not related. I agree.
"Why don't people just build a network of the simplest possible silicon-based components, with bonds that remember whether they were recently active and that are strengthened if, a moment later, the network's reaction to a given stimulus turns out to be proper, but weakened if it was improper?"
Your proposal dismisses just about all we know about neurons, and ignores all we don't yet know about neurons. That we neuroscientists make preliminary simplifications (as a sort of working hypothesis) and that some computer scientists run with them and find use for them in a variety of signal-processing situations doesn't mean we have the faintest idea how neuron ensembles compute. Reproducing such simplifications in hardware, as you propose, may result in products that find a technological application, but it doesn't advance our understanding of brain computation.
"I bet you could teach this kind of network anything, provided it consists of enough elements, the density of connections is sufficient, and you factor in periods of random component discharges to simulate sleep and thus avoid overlearning."
You'd be hardcoding a battery of special cases. While we expect the brain to be hardwired in some regards, its agility and flexibility are mighty. Your model falls short of emulating even our current limited notions of brain computation.
> Your proposal dismisses just about all we know about neurons, and ignores all we don't yet know about neurons.
Yes. That's because I think that the things we know about neurons are a mixture of a recipe for a great adaptive control system and the implementation details of that recipe in biological hardware. In fact, I think that most of the things we know are implementation details (although valuable from a medical point of view and intrinsically interesting).
> Reproducing such simplifications in hardware, as you propose, may result in products that find a technological application, but don't advance our understanding of brain computation.
I agree. I don't claim that such a neural network can help us understand how the brain works. But I bet such a simplified artificial neural network could learn how to walk, run, and jump, and maybe even see, as well as a child does.
> You'd be hardcoding a battery of special cases.
I think that you are referring to my statement about sleep. I believe that sleep is not a special case of anything but an essential part of any neural-network learning. I have not heard of a single organism with a neural network that does not dream (perhaps with the exception of one human being, Ngoc Thai). I interpret the hallucinations resulting from sleep deprivation as symptoms of neural-network over-learning. I think that periodically disconnecting a neural network from its sensors and actuators and letting the neurons discharge randomly is essential for proper learning.
> While we expect the brain to be hardwired in some regards, its agility and flexibility are mighty.
I think that the specialization of some parts of the brain for particular tasks is just a minor optimization. I draw that conclusion from the fact that a young human with large parts of the brain missing can grow up to be perfectly fine, because other parts of the brain train themselves to replace the functionality of the missing specialized parts.
> Your model falls short of emulating even our current limited notions of brain computation.
True. But I am not striving to model the biological brain. I just want to build a new brain in silicon.
Evolution is a simple idea obscured in its details by the strange biochemical hardware it has to run on. I believe the same is true of neural networks.
Good article. Questioning everything that's 'known' is essential to coming up with new paradigms. And so far, computing certainly isn't an 'elegant' solution.
"doubling your signal-to-noise ratio demands quadrupling your energy consumption"
Interesting assertion. Makes sense intuitively; the more we dither about an unobvious/complex choice we're facing, the longer it takes to make it. I'm wondering how broadly that statement applies.
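In the narrow setting of averaging away independent noise, the quadratic cost follows from basic statistics: if each sample costs one unit of energy, averaging N samples shrinks the noise only by a factor of sqrt(N), so SNR grows as the square root of the energy spent, and doubling the SNR takes four times the energy. A quick numerical sketch (the function name and all numbers are mine, purely illustrative):

```python
import random
import statistics

random.seed(1)

def snr_of_average(n_samples, signal=1.0, noise_sigma=1.0, trials=20000):
    """Estimate the SNR of an n-sample average of a constant signal
    corrupted by independent Gaussian noise."""
    estimates = [
        statistics.fmean(signal + random.gauss(0.0, noise_sigma)
                         for _ in range(n_samples))
        for _ in range(trials)
    ]
    return statistics.fmean(estimates) / statistics.pstdev(estimates)

snr_1 = snr_of_average(1)  # baseline: one sample, one unit of energy
snr_4 = snr_of_average(4)  # 4x the samples (energy) buys only ~2x the SNR
```

Of course the article's claim is about analog circuit physics rather than sample averaging, but the same square-root scaling shows up whenever independent noise contributions add up.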
The field of neuromorphic engineering is rather big. It started off with Carver Mead's book "Analog VLSI and Neural Systems" (Addison-Wesley, Reading, Massachusetts, 1989) and a very cool paper titled "A Silicon Neuron" by Mahowald and Douglas (http://www.nature.com/nature/journal/v354/n6354/abs/354515a0... , Nature 354, 515-518, 26 December 1991). The field has grown to include amazing devices that are becoming standard engineering practice, like silicon retinas and cochleas ( http://siliconretina.ini.uzh.ch ).