
New chip reduces neural networks’ power consumption - kjhughes
http://news.mit.edu/2018/chip-neural-networks-battery-powered-devices-0214
======
Invictus0
Correct me if I'm misunderstanding something, but won't this chip need a ton
of ADCs to process these computations?

~~~
arcticbull
DACs on one end and ADCs on the other, yeah. I guess, though, it may be
possible to hook analog inputs directly to it in some way, such as a camera or
mic.

~~~
neolefty
But if it's one-bit values, they're simple and fast -- ADC is just a
threshold, and DAC is a step function.
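To make the point concrete, here's a minimal sketch (all values and names invented) of why 1-bit conversion is cheap: the ADC is a single comparator against a reference voltage, and the DAC just maps a bit to one of two levels.

```python
# Hypothetical 1-bit converters: the ADC is a threshold comparison,
# the DAC is a two-level step function.

THRESHOLD = 0.5  # assumed comparator reference voltage


def adc_1bit(voltage):
    """1-bit ADC: a single threshold comparison."""
    return 1 if voltage >= THRESHOLD else 0


def dac_1bit(bit, low=0.0, high=1.0):
    """1-bit DAC: a step function mapping a bit to one of two voltages."""
    return high if bit else low


samples = [0.1, 0.7, 0.4, 0.9]          # made-up analog readings
bits = [adc_1bit(v) for v in samples]   # [0, 1, 0, 1]
volts = [dac_1bit(b) for b in bits]     # [0.0, 1.0, 0.0, 1.0]
```

No successive-approximation logic, no resistor ladder — which is why the conversion overhead neolefty mentions stays small at one bit.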

~~~
ky738
This guy studied signal processing.

------
darknoon
Uses less power to do less useful work (binary weights)…

~~~
phkahler
Right, they need to compare it to the same algorithm running on a conventional
computer (or GPU or whatever they compared to). Math like that can run faster
and more efficiently on a regular CPU too.

------
mtgx
It seems to target mobile phones. Does it come anywhere close to the ML
processor IP Arm has just announced? (4.6 TOPs at 2W).

------
nairboon
Where is the paper? How can you write about research news without at least
linking to the actual research?

------
tziki
Compared to what? Seems like the fairest comparison point would be Google's
TPUs.

~~~
synthos
It's not really comparable to the TPU. The MIT chip uses analog tricks to
perform binary neural network operations. It's probably most directly
comparable to FPGA DNN implementations.

------
jaredhansen
Do read the whole thing, but here's a quick summary of the most important parts:

 _[neural net] “nodes” are just weights stored in a computer’s memory.
Calculating a dot product usually involves fetching a weight from memory,
fetching the associated data item, multiplying the two, storing the result
somewhere, and then repeating the operation for every input to a node...

In the [new] chip, a node’s input values are converted into electrical
voltages and then multiplied by the appropriate weights. Summing the products
is simply a matter of combining the voltages. Only the combined voltages are
converted back into a digital representation and stored for further
processing.

The chip can thus calculate dot products for multiple nodes — 16 at a time, in
the prototype — in a single step, instead of shuttling between a processor and
memory for every computation...

One of the keys to the system is that all the weights are either 1 or -1...
Recent theoretical work suggests that neural nets trained with only two
weights should lose little accuracy — somewhere between 1 and 2 percent.

Biswas and Chandrakasan’s research bears that prediction out. ... Their chip’s
results were generally within 2 to 3 percent of the conventional network’s._
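A toy sketch of the scheme the quote describes (all numbers invented): with weights constrained to ±1, every multiply in the dot product collapses to an add-or-subtract, which is what the chip's "summing voltages" step exploits across 16 inputs at once.

```python
import random

random.seed(0)
inputs = [random.uniform(-1, 1) for _ in range(16)]    # one node's 16 inputs
weights = [random.choice([-1, 1]) for _ in range(16)]  # binary weights

# Conventional path: fetch weight, fetch input, multiply, accumulate.
dot = sum(w * x for w, x in zip(weights, inputs))

# Binary-weight path: multiplication reduces to a sign flip, so the
# whole dot product is just a signed accumulation -- the digital
# analogue of combining voltages on a wire.
dot_binary = sum(x if w > 0 else -x for w, x in zip(weights, inputs))

assert abs(dot - dot_binary) < 1e-12
```

The accuracy question is separate: the sketch shows the ±1 arithmetic trick is exact, while the 2-3 percent loss the article reports comes from training the network under that weight constraint in the first place.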

~~~
newen
If the weights are only 1 or -1, and each neuron can only get 16 inputs, then
this is more similar to neuromorphic systems like IBM's True North than
regular neural networks.

And True North consumes on the order of 70-100 milliwatts.

