
IBM has built a digital rat brain that could power tomorrow’s smartphones - ottoborden
http://qz.com/481164/ibm-has-built-a-digital-rat-brain-that-could-power-tomorrows-smartphones/?utm_source=sft
======
the8472
> a system has about 48 million digital neurons, which is roughly as many as
> found in the brain of a rodent.

Neuron count alone does not make a (rat) brain. The degree to which the
neurons are connected, the higher-level layout, the fidelity of the individual
neurons, and many other things are also necessary.

It's like filling the world's smallest bowl with a soup of short DNA strands
totaling 3 billion base pairs and saying your bowl is similar to the human
genome.

~~~
kbenson
The first sentence literally says why it's not just about the neuron count.

 _In August 2014, IBM announced that it had built a “brain-inspired” computer
chip—essentially a computer that was wired like an organic brain, rather than
a traditional computer._

That's not to say it's equivalent to a rat brain; there will obviously be
differences in the efficiency of components, and portions that aren't
replicated correctly or are just plain done differently. But your analogy is
no better than what you accused the article of.

~~~
_yosefk
...so the phrase "wired like an organic brain" is enough to brush off any
skepticism?

It'd be rather impressive if it actually did a large number of things that
rats do, but a testing environment for that is rather hard to arrange. In the
absence of a "rat benchmark" you can build anything and say it's wired like a
rat's brain.

~~~
kbenson
I never said it didn't deserve criticism, just that the criticism is not
absolved of the responsibility to make useful, supported arguments as well.

I thought the GP post had a point worth making, but they went about it
sloppily, and the analogy used in the criticism was hyperbolic in the opposite
direction from the article. The article's title was _mostly_ link-bait, but
there were some weak assertions in the article that tried to back it up. The
sibling comment at your level by avoid3d actually does a good job of
addressing some specific problems with the claim, and if something similar to
that had been at the top level, I would either not have bothered to reply, or
would actually have looked into the criticisms by researching the chip if my
interest was piqued (assuming a useful reply from someone else hadn't beaten
me to it). That's much more useful for discussion.

------
mkaziz
How does IBM keep doing cool stuff like this when every impression I have of
them is of a slow-moving, lumbering IT services monster that's lunging to
provide solutions for Big Business?

~~~
modeless
This project is actually an example of how they _are_ slow-moving and
lumbering. If they had implemented state-of-the-art learning algorithms in
silicon (that would be deep convolutional neural nets), it would be amazing
and potentially revolutionary, and they'd have tons of customers. Instead
they've gone with spiking neural nets, which have approximately nothing to do
with the current state of the art in artificial neural networks.

Furthermore, you might be misled by their PR storm. In fact this chip doesn't
implement learning at all. The learning is the important part! This chip is
merely an accelerator for running pre-trained neural networks, and because of
the spiking architecture those neural networks are doomed to perform poorly.
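
To make the distinction concrete, here's a toy sketch (my own, nothing to do
with IBM's actual toolchain): the learning phase updates weights on a
conventional machine, and the "accelerator" phase just runs forward passes
with those weights frozen.

    # Toy sketch of the train-offline / infer-on-chip split (not IBM's API).
    import numpy as np

    def train_offline(x, y, lr=0.1, steps=500):
        # The 'learning' part: gradient descent on a single linear layer.
        w = np.zeros((x.shape[1], y.shape[1]))
        for _ in range(steps):
            grad = x.T @ (x @ w - y) / len(x)
            w -= lr * grad  # weight updates exist only in this phase
        return w

    def accelerator_inference(w, x):
        # What an inference-only chip does: forward passes, weights fixed.
        return x @ w  # no gradients, no plasticity

    x = np.random.randn(64, 3)
    y = x @ np.array([[2.0], [-1.0], [0.5]])
    w = train_offline(x, y)                 # done off-chip
    print(accelerator_inference(w, x[:2]))  # deployed with frozen weights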

~~~
firebones
This cool research (just like Watson) seems like a loss leader for selling
consulting services that tailor the tech to very narrow, specific AI purposes.
They're all breathlessly announced as though revolutionary, but none seem
generalizable. Still, investing in so many small bets increases their exposure
to a big win in the long run...

------
varelse
IMO it would really inspire confidence to see one of these chips beat
traditional neural networks at _something_...

Speech recognition, image recognition, tumor detection, whatever. I don't
care, but _something_...

Right now it seems like a cheese shop that doesn't actually have any cheese.
If this is such a superior super mega fantastic processor, one would think it
could at least run AlexNet* or some sort of superior variant, no?

*[http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf](http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf)

~~~
avoid3d
One reason these nets are not there yet is that they are sticking to the
'firing' analogue of natural brains: binary outputs over time that average out
to a rate, a technique that has so far been much slower than neural networks
with continuous outputs.
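
A rough sketch of what I mean (illustrative only; real SNN simulators are
more involved): a rate-coded spiking unit has to integrate binary events over
many timesteps to approximate what a continuous unit computes in a single
multiply-accumulate.

    # Rate coding, illustrated: average many 0/1 spikes to approximate
    # the value a conventional unit computes exactly in one step.
    import numpy as np

    rng = np.random.default_rng(0)

    def continuous_unit(x):
        # Conventional ANN neuron: one shot, exact output.
        return 1 / (1 + np.exp(-x))  # sigmoid activation

    def spiking_unit(x, timesteps):
        # Fire with probability equal to the target activation, then
        # average the binary spike train to recover an estimate.
        p = continuous_unit(x)
        spikes = rng.random(timesteps) < p
        return spikes.mean()

    x = 0.7
    print("continuous:", continuous_unit(x))
    for t in (10, 100, 10000):
        print(f"spiking over {t} steps:", spiking_unit(x, t))
    # The estimate only converges as timesteps grow, which is why
    # rate-coded spiking nets tend to be slower at the same precision.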

------
ay
I found an interesting comment from Yann LeCun about the TrueNorth chip which
this seems to use:
[https://www.facebook.com/yann.lecun/posts/10152184295832143](https://www.facebook.com/yann.lecun/posts/10152184295832143)

------
deciplex
Not totally relevant to the article, but:

> _Modha said his team’s goal is to build a “brain in a shoebox,” with over 10
> billion synapses, consuming less than 1 kilowatt hour of power—the minuscule
> amount of power the human brain requires to work._

Can anyone even _guess_ what information they're trying to convey here? On the
face of it, it makes no sense - it's like saying the brain consumes 90000
kcal, but over what period? A day? A month? A lifetime?

But I can't even figure out what they're _trying_ to say. The human brain
consumes about 20 W (certainly not 1 kW!), so it would take about two days to
use up 1 kWh of energy. I don't think that's what they're going for, but I
can't think of any other reading that doesn't amount to basically a guess.
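
For what it's worth, the arithmetic:

    # 1 kWh at the brain's ~20 W power draw:
    brain_watts = 20
    hours = 1000 / brain_watts       # 1 kWh = 1000 Wh
    print(hours, "hours =", hours / 24, "days")  # 50 hours, ~2.1 days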

~~~
avoid3d
I imagine they meant kW, and that 20 W is close enough to 1 kW. (BTW, I
thought the human brain uses a lot more than 20 W; maybe 20 W in chemical
signaling, but a lot more as heat?)

~~~
deciplex
The human body is about 100-120 W, which is about 2000-2500 kcal/day. 20 W
sounds right.
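
Sanity check on the conversion:

    # Daily food energy to average power:
    kcal_per_day = 2000
    watts = kcal_per_day * 4184 / 86400  # joules per day / seconds per day
    print(round(watts), "W")             # ~97 W, matching the 100-120 W figure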

------
ottoborden
Here is the link to the paper on the programming language:
[https://dl.dropboxusercontent.com/u/91714474/Papers/020.IJCN...](https://dl.dropboxusercontent.com/u/91714474/Papers/020.IJCNN2013.Corelet.pdf)

~~~
mcbuilder
I have to disagree with the paper's claim that OOP is the ideal language
paradigm to be programming in. I worked for years in computational
neuroscience, where most tools are OOP-based. I left academia after drinking
the pure functional programming kool-aid; now my work focuses on getting more
mainstream adoption of FP. I hope that with a large enough paradigm shift, we
will be able to write software for neuromorphic chips.

My point is that when programming SNNs, the primary concern should not be
_encapsulation_ but rather _composition_ and _declaration_. In my opinion,
most of the computational neuroscience programming field is in the imperative
dark ages when it comes to actually writing software that can account for
biological behavior.
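
As a toy illustration of composition over encapsulation (hypothetical names,
not Corelet or any real SNN library), networks can be declared as pure
functions that compose:

    # Networks as declarations: pure functions composed together,
    # rather than objects hiding state behind encapsulation.
    from functools import reduce

    def compose(*layers):
        # Right-to-left composition: compose(f, g)(x) == f(g(x))
        return lambda x: reduce(lambda acc, f: f(acc), reversed(layers), x)

    def weighted(weights):
        # Declare a fixed synaptic weighting stage.
        return lambda inputs: [w * i for w, i in zip(weights, inputs)]

    def threshold_layer(theta):
        # Declare a layer of simple fire/no-fire units.
        return lambda inputs: [1 if i >= theta else 0 for i in inputs]

    # The whole network is one declarative expression.
    network = compose(threshold_layer(0.5), weighted([0.9, 0.2, 0.7]))
    print(network([1, 1, 1]))  # [1, 0, 1]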

~~~
FrankenPC
This is really interesting to me. I make money with OOP in the commercial
arena, but I'm considering getting out and changing careers. Can you summarize
WHY FP is superior? An answer in this context is fine. If I'm asking too much,
do you have a good paper that explains the necessity of FP to an OOP-head?

------
zopf
These guys are using technology similar to the Boahen lab's:

[http://web.stanford.edu/group/brainsinsilicon/index.html](http://web.stanford.edu/group/brainsinsilicon/index.html)

In fact, I think some of the alumni now work at IBM, on this very project.

