
The Future of Neuromorphic Computing - anthotny
http://www.newyorker.com/tech/elements/a-computer-to-rival-the-brain
======
bcatanzaro
The reason AI has been so successful recently is that the research community
has assumed a ruthlessly empirical philosophy: no idea, no matter how
beautiful or interesting, is considered truly useful until it bears measurable
results on some dataset. The reason neuromorphic computing gets such
skepticism from AI researchers is that so far it has resisted any attempts at
this kind of empiricism. No neuromorphic implementation has shown
state-of-the-art results on any important problem.

If/When neuromorphic computers show groundbreaking results, the community will
pivot quickly to using them. But expecting AI researchers to show deference to
neuromorphic computing because it "mimics the brain" is to ignore the
empirical philosophy that has led to AI's success.

~~~
andreyk
To be fair, this whole Deep Learning renaissance was made possible and kicked
off only after decades of research in multi-layer neural nets (going back to
the 80s) by Hinton, LeCun, etc. They stuck to their chosen method despite it
not having great empirical results (the research community shunned NNs in the
90s for SVMs because they worked better), because they believed it should and
would work - and it did, eventually. So a similar argument for 'basic research'
could be made for neuromorphic computing.

~~~
bcatanzaro
Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and others
did unpopular work for a long time. And they deserve tons of credit for their
perseverance which paid off.

Similarly, I think it's great that there are AI researchers working on
techniques which are currently out of favor. It's important to have diversity
of viewpoint.

What irritates me about neuromorphic computing is that much of the work I see
publicized (including the work in this article) isn't being presented as basic
research on a risky hypothesis. Instead it's presented as the future of AI,
despite the current lack of any demonstrated utility, and the almost complete
disconnect between the AI researchers building the future of AI and the
neuromorphic community.

The burden of proof is always on the researcher to show utility, and if the
neuromorphic computing community can do that, I'll be super excited! Until
then, I'll be waiting for something measurable and concrete, and rolling my
eyes at brain analogies.

~~~
Russell91
> Yes, I totally agree. Yann LeCun, Geoff Hinton, Jurgen Schmidhuber and
> others did unpopular work for a long time.

...

> Until then, I'll be ... rolling my eyes at brain analogies.

Maybe you don't realize this, but these guys made more brain analogies than
you can count over the same period to which you attribute their greatness.
Meanwhile, they were attacked year after year by state-of-the-art land
grabbers saying the same things you just did.

> isn't being presented as basic research on a risky hypothesis.

It is basic research, but it's not a risky hypothesis. Existing neuromorphic
computers achieve 10^14 ops/s at 20 W. That's 5 Tops/W. The best GPUs
currently achieve less than 200 Gops/W. Where is the risk in saying that a
man-made neuromorphic chip can achieve more per dollar than a GPU? There is no
risk, and suggesting that this field somehow has too much risk for its advances
to be celebrated is absolutely crazy.
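
As a quick back-of-envelope check of the ratio being claimed (just plugging in
the figures quoted in this thread, not independently verified):

    # Efficiency figures as quoted in this thread (not independently verified).
    neuromorphic_ops_per_s = 1e14      # claimed synaptic ops per second
    neuromorphic_watts = 20.0          # claimed power draw

    neuro_ops_per_watt = neuromorphic_ops_per_s / neuromorphic_watts
    print(neuro_ops_per_watt / 1e12)   # 5.0 Tops/W

    gpu_ops_per_watt = 200e9           # ~200 Gops/W, the rough GPU figure above
    print(neuro_ops_per_watt / gpu_ops_per_watt)   # 25.0x, before accounting
                                                   # for differences in op precision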

~~~
deepnotderp
Non-neuromorphic (analog) deep learning chip startup here. We're forecasting
AT LEAST ~50 TOPS/watt for inference.

~~~
Russell91
Sure - I guess it's productive for me to answer why this doesn't disagree with
my comment. By the time you get the software to hook up that kind of low bit
precision (READ: neuromorphic) compute performance with extreme communication-
minimizing strategies (READ: neuromorphic), which will invariably require
compute-colocated, persistent storage (READ: neuromorphic) in any type of
general AI application, you're not exactly making the argument that
neuromorphic chips are a bad idea.

We literally have to start taking neuromorphic to mean some silly semantics
like "exactly like the brain in every possible way" in order to disagree with
it.

Edit: also, to ground this discussion, there is an extremely concrete reason why
current neural net architectures will NOT work with the above optimizations.
That's the primary motivation for talking about "neuromorphic", or any other
synonym you want to coin, as fundamentally different hardware. AI software ppl
need to have a term for hardware of the future, which simply won't be capable
of running AlexNet well at all, in the same way that a GPU can't run CPU code
well. I think the term "neuromorphic" to describe this hardware is as
productive as any.

~~~
p1esk
Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you
compare them to GPUs, those "ops" better be FP32 or at least FP16.

Also, you forgot to tell us what is that "extremely concrete reason why
current neural net architectures will NOT work with the above optimizations".

~~~
Russell91
> Which existing neuromorphic computers achieve 10^14 ops/s at 20 W? If you
> compare them to GPUs, those "ops" better be FP32 or at least FP16.

The comparison is of 3-bit neuromorphic synaptic ops against FP8 Pascal ops.
That factor is important (as it means that the neuromorphic ops are less
useful), but it turns out to be dwarfed by the answer to your second question:

> Also, you forgot to tell us what is that "extremely concrete reason why
> current neural net architectures will NOT work with the above
> optimizations".

This is rather difficult to justify in this margin. But the idea is that
proposals such as those above (50 Tops) tend to be optimistic about the
efficiency of the raw compute ops, while they really don't have much
to say about the costs of communication (e.g. reading from memory,
transmitting along wires, storing in registers, using buses, etc.). It turns
out that if you don't have good ways to reduce these costs directly (and there
are some, such as changing out registers for SRAMs, but nothing like the 100x
speedup from analog computing), you have to just change the ratio of ops to
bit*mm of communication per second. There are lots of easy ways to do that
(e.g. just spin your ops over and over on the same data), but the real
question is how to get useful intelligence out of your compute when it is
data-starved. This is an open question, and (sadly) very few ppl are working on
it, compared to say low-bit-precision neural nets. But I predict this
sentiment will change over the next few years.

Edit for below: no one is suggesting 50 Tops/W hardware running AlexNet
software to my knowledge (though I would love to hear what they are proposing
to run at that efficiency). Nvidia, among others, is squeezing efficiency for
computer vision applications with current software, but this comes at the cost
of generality (it's unlikely the communication tradeoffs they're making on that
chip will make sense for generic AI research), and further improvements will
rely on broader software changes, especially ones revolving around reduced
communication. There are a lot of interesting ways to reduce communication
without sacrificing performance, such as using smaller matrix sizes, which
would reverse the state-of-the-art trends.
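
To make the ops-per-bit-of-communication ratio concrete, here is a minimal
sketch (the shapes and the one-byte-per-element assumption are mine, purely
illustrative, and the distance term in bit*mm is ignored):

    # Arithmetic-intensity sketch: ops performed per byte fetched.
    # Shapes and 1-byte elements are illustrative, not from any real chip/network.
    def matmul_intensity(n):
        ops = 2 * n ** 3           # multiply-accumulates in an n x n matmul
        bytes_moved = 3 * n ** 2   # read A, read B, write C
        return ops / bytes_moved

    def elementwise_intensity(n):
        ops = n ** 2               # one op per element
        bytes_moved = 2 * n ** 2   # read input, write output
        return ops / bytes_moved

    print(matmul_intensity(1024))       # ~683 ops/byte: lots of data reuse
    print(elementwise_intensity(1024))  # 0.5 ops/byte: communication-bound

A matmul reuses each fetched value many times, while an elementwise op does
not; that ratio is what has to shift once raw compute gets dramatically cheaper
than data movement.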

~~~
deepnotderp
Our hardware can run AlexNet...

~~~
Russell91
In an integrated system at 50 Tops/W? How are you even going to access
memory at less than 20 fJ per op? Like, you're specifically trying to hide the
catch here. If we were to take you at face value, we'd have to also believe
that Nvidia is working on an energy-optimized system that is 50x worse for no
good reason.

For reference, reading 1 bit from a very small 1.5 kbit SRAM, which is much
cheaper than the register caches in a GPU, costs more than 25 fJ per bit you
read.
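
A minimal sketch of the budget implied (the 50 Tops/W target and the 25 fJ/bit
SRAM figure are the numbers quoted in this thread, not measurements of any
particular chip):

    # Energy budget implied by a 50 Tops/W target vs. the quoted SRAM read cost.
    # Both numbers come from this thread, not from any datasheet.
    target_ops_per_joule = 50e12                      # 50 Tops/W = 50e12 ops/J
    fj_per_op_budget = 1e15 / target_ops_per_joule    # 20 fJ available per op

    sram_fj_per_bit = 25.0       # reading 1 bit from a small 1.5 kbit SRAM

    print(fj_per_op_budget)                     # 20.0
    print(fj_per_op_budget / sram_fj_per_bit)   # 0.8 -> less than one SRAM bit
    # read fits in the per-op budget before spending anything on the op itself,
    # so operands have to be colocated with the compute (or the access avoided).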

~~~
deepnotderp
So this is locked up in "secret sauce". But as a hint, the analog aspect can
be exploited.

~~~
Russell91
Look, it sounds like you're implying compute-colocated storage in the analog
properties of your system (which is exactly what a synaptic weight is, btw), on
top of using extremely low bit precision. So explicitly calling your system
totally non-neuromorphic is a little misleading. But even then I find this idea
that you're going to be running the AlexNet communication protocol to pass
around information in your system to be a little strange. If you're doing
anything like passing digitized inputs through a fixed analog convolution then
you're not going to beat the SRAM limit, which means that instead you have in
mind keeping the data analog at all times, passing it through increasingly
long analog pipelines. Even if you get this working, I'm quite skeptical
that by the time you have a complete system, you'll have reduced communication
costs by even half the reduction you achieve in computation costs on a log
scale. It's of course possible that I'm wrong there (and my entire argument
hinges on the hypothesis that computation costs will fall faster than
communication - which is true for CMOS but may be less true for optical), but
this is really the only projection on which we disagree. If I'm right, then
regardless of whether you can hit 50 Tops (or any value) on AlexNet, you'd be
foolish not to reoptimize the architecture to reduce communication/compute
ratios anyway.

~~~
p1esk
Oh, I see what you meant now. Yes, when processing large amounts of data (e.g.
HD video) on an analog chip, DRAM-to-SRAM data transfer can potentially be a
significant fraction of the overall energy consumption. However, if this
becomes a bottleneck, you can grab the analog input signal directly (e.g.
current from a CCD), and this will reduce the communication costs dramatically
(I don't have the numbers, but I believe Carver Mead built something called a
"Silicon Retina" in the 80s, so you can look it up).

Power consumption is not the only reason to switch to analog. Density and
speed are just as important for AI applications.

------
jjaredsimpson
I never understand the odd advantage that brains are assumed to have over
machines when comparing power consumption.

>... AlphaGo ... was able to beat a world-champion human player of Go, but
only after it had trained ... running on approximately a million watts. (Its
opponent’s brain, by contrast, would have been about fifty thousand times more
energy-thrifty, consuming twenty watts.)

A human brain has a severe limitation though. It can't consume more or less
energy even if it wanted to. AlphaGo could double, triple, etc., its power
consumption and expect to improve its performance.

The brain also took decades to train. Computers also have the advantage of
being identical. You can't train just any brain to be a master-level Go player.

I just don't see brains as the high watermark of intelligence. They occupy a
very specific niche in what I assume is a vast unbounded landscape of possible
intelligences.

~~~
pron
> The brain also took decades to train.

The brain of an insect doesn't take decades to train, and we're currently
unable to match its capabilities, either.

> I just don't see brains as the high watermark of intelligence. They occupy a
> very specific niche in what I assume is a vast unbounded landscape of
> possible intelligences.

That is a hypothetical claim because we don't know what intelligence _is_.
Surely, some algorithms are much better at some tasks than the human brain,
but that has been the case since the advent of computing, and it does not make
them intelligent.

Intelligence, or how we would currently define it colloquially and
imprecisely, is an algorithm or a class of algorithms with some specific
capabilities. Could those capabilities be taken further than the human brain?
We certainly can't say that they cannot, but it's not obvious that they can,
either. The only kind of intelligence we know, our own, comes with a host of
disadvantages that may be due to the particular algorithm employed by the
brain and/or to limitations of the hardware, but they could possibly be
essential to intelligence itself. Who knows, maybe an intelligence with access
to more powerful hardware would be more prone to incapacitating boredom and
depression or other kinds of mental illness. This is just one hypothetical
possibility, but given how limited our understanding of intelligence is, there
are plenty of possible roadblocks ahead.

Even if a higher intelligence than humans' is possible, its hypothetical
achievements are uncertain. Some of the greatest problems encountered by
humans are not constrained by intelligence but by resources and observations,
and others (e.g. politics) are limited by powers of persuasion (that also
don't seem to be simply correlated with intelligence). For example, what's
limiting theoretical physics isn't brains but access to experiments, and
what's limiting certain optimization problems are computational limits, for
which our own intelligence, at least, does not give good approximate solutions
at all.

~~~
return0
> The brain of an insect doesn't take decades to train, and we're currently
> unable to match its capabilities, either.

It's not particularly useful to simulate insects. We can far surpass some of
their capabilities, but the goal is not to make an insect-robot, just like we
didn't care to make a mechanical horse.

~~~
nharada
Both of these are under active development:

Robotic insects:
[https://en.wikipedia.org/wiki/RoboBee](https://en.wikipedia.org/wiki/RoboBee)

Robotic horses:
[http://www.bostondynamics.com/robot_bigdog.html](http://www.bostondynamics.com/robot_bigdog.html)

~~~
return0
Those are mostly interested in biomimetic movement rather than intelligence.
They do have some applications, but I don't think they've convinced the world
that mimicking organisms is necessarily optimal.

------
JackFr
"Neuromorphic" = "Ornithopter of the mind"

Giving up on flapping wings was the first step to flight.

~~~
varjag
Indeed. Imagine that the Wright Flyer never happened, but some time in the
1940s progress in engines' specific thrust made a wing-flapping machine able
to take off. That's where we are with machine learning.

------
deepnotderp
It bugs me that when people talk about "neuromorphic computing" and explore
crazy ideas that never work, everyone looks on in awe, but when anyone brings
up a somewhat novel architecture for deep learning (nets that are being used
today, successfully...) people say "that'll never work".

For example, our startup uses analog computing to achieve accuracy roughly
equivalent to digital circuits, yet we're told that we're crazy? Meanwhile
people dreaming about memristors are showered with grants and money....

~~~
petra
You're from Isocline, right? Your GPS chip was really good.

But your SIMD chip will be much more impressive, right?

~~~
Quanticles
No, they are not from Isocline...

There are groups at UCSB and U-Tenn working on analog neural network
technologies as well.

~~~
petra
Could you please share a bit more about your chip and when it would be ready?

------
lend000
There's a lot of backlash and/or dismissiveness on HN every time someone
brings up neuromorphic architectures, and I think it has a lot to do with the
same defensiveness that people display when their political beliefs are
challenged. _When_ neuromorphic architectures start bearing fruit, programmers
will no longer be so in-demand for configuring the machines, as it will shift
the balance of power towards hardware engineers and hard scientists.

~~~
return0
Computational neuroscientists have been using simplified models like these for
decades, and in principle the operation of these 'neuromorphic' neurons can
already be simulated in large numbers in 'ordinary' computers. So, it's not
clear at all what is to be gained. AFAIK, most of the neuroscience community
considers TrueNorth a marketing ploy.

I don't think programmers should wait for these chips before they panic. They
should already panic now, because deep learning works.

------
startupdiscuss
If these articles got into the math behind it, I think readers would realize
that, currently, the brain is just a metaphor for a style of computation.

The article does state this towards the end: "Given the utter lack of
consensus on how the brain actually works, these designs are more or less
cartoons of what neuroscientists think might be happening."

We don't really know how the brain does what it does.

------
pron
> the recent success of A.I.

I guess they mean the recent success, due mostly to modern hardware, of 1960s
statistical clustering and classification algorithms that for PR and
historical purposes some people call "AI", but which are not currently known
to have any significant relationship with what we call intelligence.

When we achieve the capabilities of an insect, we'll be able to call our
algorithms "AI" without getting red in the face, as we'd know there's a decent
chance we're at least on the path to intelligence. Until then, let's just call
them statistical learning. That wouldn't make them any less valuable, but it
would represent them much more realistically and fairly.

It's funny how statistics was once considered the worst kind of lie, and
now for some it's becoming synonymous with intelligence.

------
partycoder
In the movie Terminator 2, a futuristic robot with advanced AI was developed
by reverse engineering a futuristic chip.

In reality, we do not need to reverse engineer a chip. We can just reverse
engineer our own brains.

------
return0
I see nothing in these "neuromorphic" architectures but hogwash trying to
bullshit governments into giving them money. There's no conceptual advancement
offered by these computers that can't be simulated with MATLAB. Until the day
we actually learn how neurons work, these will just be extremely premature
optimizations.

~~~
p1esk
These designs are advances in the field of computer architecture. They look at
how the brain processes information for ideas to make hardware more efficient for
some applications (such as pattern matching). Did you expect something more?

~~~
return0
They use very rudimentary sketches that have little to do with real neurons.
ANNs have been mimicking these things at a slightly lower level of detail since the
60s. We can do better pattern matching with ANNs.

~~~
p1esk
I think you might be confused about terminology.

Neuromorphic computing is running some known ANN model directly in hardware.
Why do we want it? Because ANN models in software work well for pattern
matching, and we want to speed them up / make them more efficient.

~~~
return0
Nope, ANNs and deep learning are not used by these boards (Neurogrid, Zeroth,
TrueNorth).

~~~
p1esk
[https://arxiv.org/abs/1603.08270](https://arxiv.org/abs/1603.08270)

They have been designed, and are being used either for more efficient pattern
matching, or to speed up brain simulations (again, using known neuronal
models).

You seem to expect something else from neuromorphic computing. Why?

~~~
return0
I stand with Yann LeCun's criticism of the article:

[https://m.facebook.com/yann.lecun/posts/10152184295832143](https://m.facebook.com/yann.lecun/posts/10152184295832143)

> [the truenorth team had] to shoehorn a convnet on a chip that really wasn't
> designed for it. I mean, if the goal was to run a convnet at low power, they
> should have built a chip for that. The performance (and the accuracy) would
> be a lot better than this.

They used their 'neuromorphic' chip in an explicitly non-neuromorphic way,
basically mapping deep learning processes approximately onto their chip. There
is very little neuromorphicity (brain-likeness) about it (plasticity rules out
of their ass, for starters). And they still get less than state-of-the-art
performance in most tasks!

I expect 'neuromorphic' to be used when sound neuroscience is used in
large-scale implementations that allow us to actually simulate parts of the
brain. Anything else we should call what it is: ANNs.

~~~
p1esk
Well, none of those chips are brain-like at all. For example, TrueNorth is
fully digital; it uses separate compute/memory blocks, signal multiplexing,
signal encoding, routing protocols, an instruction set, etc., none of which is
in any way related to what the brain is doing. What makes you think it's
"neuromorphic"?

Whether you like it or not, get used to people calling their hardware ANN
implementations "neuromorphic".

~~~
return0
Nope, neuromorphic means the hardware would simulate the neurobiology, not
ANNs. More practically, they would never publish in Science if their title was
"printing ANNs in hardware".

~~~
p1esk
TrueNorth hardware, as I illustrated, does not resemble neurobiology at all.
There are no brain-like components there, on any level. Moreover, it can run
ANN algorithms just as easily as more "neuromorphic" algorithms.

Pointing to how they chose to name it for publication is not exactly a very
convincing argument to support your view, is it? :)

~~~
return0
My view is they're useless. I don't get your point, sorry.

~~~
p1esk
My point is architectures like TrueNorth are very impressive from the point of
view of a computer engineer, and they are very efficient when running their
intended applications (neural network algorithms). The fact that they are not
"brain-like" does not make them any less impressive.

~~~
return0
> very impressive from the point of view of a computer engineer

Maybe, I suppose as much as a Bitcoin ASIC is.

