
The Next Wave of Deep Learning Hardware Architectures
http://www.nextplatform.com/2016/09/07/next-wave-deep-learning-architectures/
======
vonnik_2
The article is a little dated. It was written in the wake of Intel acquiring
Nervana and Movidius, which it used as a hook to talk about Wave Computing.
Wave was founded a few months before Nervana in 2014, and raised a similar
amount of money (~$24M), which is just enough to get to your first chip if you
don't waste resources. There are other companies tackling this (Cerebras
Systems) and other technologies that can get you acceleration (FPGAs).

~~~
tlarkworthy
And the elephant in the room is Google's chip
[https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html](https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html?m=1)

------
WalterBright
I'd like to see AI applied to handwritten letters. (I have thousands of them,
and want to transcribe them.)

~~~
dougabug
Handwritten digit recognition / OCR was one of the first practical uses of
CNNs in the late '80s / early '90s (check reading, address reading,
[http://yann.lecun.com/exdb/lenet/](http://yann.lecun.com/exdb/lenet/)).
Integrating an LSTM would be useful for providing higher level sequence
recognition.
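
For a sense of what that looked like, here is a minimal LeNet-style CNN in
Keras (a hypothetical sketch with assumed layer sizes, not LeCun's exact
network; the LSTM stage for sequence-level recognition is left out):

    from tensorflow import keras
    from tensorflow.keras import layers

    # LeNet-style CNN for 28x28 grayscale digit images, 10 classes.
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        layers.Conv2D(16, kernel_size=5, activation="tanh"),
        layers.AveragePooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])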

~~~
WalterBright
I know just when my bank "upgraded" their check reading software last year. I
was taught to write checks in the form:

    $1234 66/100

in the amount column. I've done this for decades. Suddenly, my account was
only debited $12.34, and I received dunning letters, interest, and penalties.
Showed the bank the check image, they fixed it. (They also admitted that the
OCR software made no attempt to read the handwritten amount, nor the
signature.) I thought it was a fluke, but it happened again for the next
two checks.

So I started writing the amount as:

    $1234.66

and things started working again. So no, I am not impressed with the OCR used
to read checks, and it is clearly not remotely ready to read general
handwriting.

~~~
danieltillett
The sad thing is you are still writing checks. The last check I wrote was
about 15 years ago.

~~~
WalterBright
I get tired of the fees charged for electronic transfers. It should be the
other way around. Many places will tack on 3% if you use a credit card. Paypal
is what, 1.5%? Wiring money is expensive, Western Union is even worse. Added
up over a year, all those "convenience fees" and crap can be a hefty bill.

Checks still have zero transaction costs for me and the depositor. When people
claim I didn't pay them, it's nice to show them the cancelled check with their
signature on it.

When bargaining with someone, showing them a signed check made out to them can
clinch the deal :-) Of course, cash is even more persuasive, but I don't care
to carry around cash and again, I like having a cancelled check as a receipt.

~~~
danieltillett
I am not criticising you for using checks. As you rightly point out,
electronic transfers should be the no-fee option - the idea that pushing
around bits of handwritten paper is cheaper than pushing electrons is crazy.

~~~
WalterBright
My bank wants me to use electronic billpay, but they want to charge for it. I
say no thanks, you can keep dealing with my free paper checks which cost you
more.

I've been using ATMs since 1979, and I picked a bank that did not charge for
using them. It's just nuts to charge for an ATM when a free teller visit
costs the bank far more.

------
alistproducer2
Is there anyone who can put into practical terms what these domain-specific
processors mean to the future of AI/deep learning?

~~~
bmh100
A few of the basic concepts:

Smarter memory design keeps data close to the compute units, so it does not
need to be constantly reloaded from off-chip memory and less time is wasted
on data transfer.
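
In software terms the same idea shows up as blocking/tiling. A hypothetical
numpy sketch (not Wave's actual memory design; the tile size is an assumed
parameter):

    import numpy as np

    # Blocked (tiled) matrix multiply: each tile of A and B is fetched
    # once and reused for a whole tile of C, instead of being re-fetched
    # for every output element.
    def matmul_tiled(A, B, tile=64):
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m), dtype=A.dtype)
        for i in range(0, n, tile):
            for j in range(0, m, tile):
                for p in range(0, k, tile):
                    C[i:i+tile, j:j+tile] += (
                        A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile])
        return C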

Using lower-precision, fixed-point arithmetic is more efficient than
standard-precision floating-point arithmetic, so less time and less silicon
are wasted on unnecessary precision.
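
For instance, a hypothetical numpy sketch of symmetric 8-bit fixed-point
quantization (illustrative only, not Wave's actual scheme):

    import numpy as np

    # Quantize float32 weights to signed 8-bit fixed point with one
    # shared scale factor for the whole tensor.
    w = np.random.randn(4, 4).astype(np.float32)
    scale = np.abs(w).max() / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

    # Dequantize to see how much precision was traded away.
    w_hat = w_q.astype(np.float32) * scale
    print(np.abs(w - w_hat).max())  # roughly bounded by scale / 2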

Someone more knowledgeable about processors can better describe the benefits
of the Wave architecture.

In terms of benchmarks, Wave claims[1] that a single Wave machine is 25x more
efficient than 100 GPUs for Google's Inception network.

[1]: [http://wavecomp.com/technology/](http://wavecomp.com/technology/)

~~~
varelse
100 of which GPU? And how were they connected to each other?

They almost _never_ give enough details to really understand these bold
claims. If this is 100 K40s in a 10 Gb/s datacenter, this is old hat.

If this is 100 Titan XPs in Big Sur boxes connected by 100+ Gb/s InfiniBand,
it's really interesting. I doubt it, though.

I suspect the only immediate threat to NVIDIA would be underestimating AMD
GPUs. 2+ years out, who knows?

~~~
bmh100
Wave seems to be referencing this[1] Google Research blog post. I couldn't
find any reference to what GPU cards Google uses. It still provides a basis of
comparison to Google Cloud at least. But like you say, a specialized data
center deployment probably wouldn't show such a dramatic difference.

[1]: [https://research.googleblog.com/2016/04/announcing-tensorflow-08-now-with.html](https://research.googleblog.com/2016/04/announcing-tensorflow-08-now-with.html)

