
Intel to buy deep-learning startup Nervana Systems for at least $350M - flinner
http://www.recode.net/2016/8/9/12413600/intel-buys-nervana--350-million
======
p1esk
Here are some details about their upcoming chip:
[http://www.nextplatform.com/2016/08/08/deep-learning-chip-up...](http://www.nextplatform.com/2016/08/08/deep-learning-chip-upstart-set-take-gpus-task/)

Summary:

- 28nm

- looks similar to P100 (interposer with HBM)

- 55 teraops/s performance

- custom number format (variable length fixed point?)

- simplified memory architecture (no cache?)

- no info about power consumption
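For readers unfamiliar with the idea, one common flavor of custom fixed-point format is a shared-exponent scheme: a whole tensor stores small integer mantissas plus a single exponent. This is only a guess at what "variable length fixed point" might mean here; the function names and details below are illustrative, not Nervana's actual design.

```python
import math

def to_flex(values, bits=16):
    # Hypothetical shared-exponent fixed-point quantizer: every value in
    # the tensor shares one exponent, chosen so the largest magnitude
    # still fits in a signed `bits`-bit integer mantissa.
    max_int = 2 ** (bits - 1) - 1
    exp = math.ceil(math.log2(max(abs(v) for v in values) / max_int))
    mantissas = [round(v / 2.0 ** exp) for v in values]
    return mantissas, exp

def from_flex(mantissas, exp):
    # Reconstruct approximate floating-point values from the mantissas
    # and the single shared exponent.
    return [m * 2.0 ** exp for m in mantissas]

m, e = to_flex([0.5, -1.25, 3.0])
print(from_flex(m, e))  # values close to the originals
```

The appeal for dense multiply-accumulate hardware is that the arithmetic inside a dot product becomes pure integer math, with exponent handling amortized over the whole tensor.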

~~~
modeless
Wow, really important details here; very interesting. Now imagine this chip,
but shrunk to Intel's latest 14nm process. If they make that chip I have no
doubt that it will revolutionize the entire field.

~~~
PeCaN
> If they make that chip I have no doubt that it will revolutionize the entire
> field.

Ha, ha. Yeah… no. For one thing, there's no mention of its power consumption,
no CUDA support, questionable memory design (sure, you can get a million
TFlops without cache, now try to get that chip to do anything useful), etc.

Intel probably bought 'em to work on integrated GPUs or Xeon Phi or something.

~~~
modeless
> Intel probably bought 'em to work on integrated GPUs or Xeon Phi or
> something.

Ha, ha. Yeah… no.

> there's no mention of its power consumption

Can't be tremendously higher than Pascal for reasons of physics.

> sure, you can get a million TFlops without cache, now try to get that chip
> to do anything useful

"Without cache" is certainly an exaggeration. It won't have a _globally
coherent_ cache hierarchy in the style of CPUs, but it certainly will have
various on-chip memories to hold intermediate results. Neural net workloads are
incredibly predictable and homogeneous, essentially the perfect scenario for
hand-optimized data flows to beat automatic caching.
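To make the "hand-optimized data flows" point concrete, here is a toy Python sketch (not Nervana's kernels) of an explicitly blocked matrix multiply. The tiling decides up front which operand tile is "loaded" and fully reused before moving on, which is the kind of software-managed data movement a kernel author can schedule when the workload is known in advance, instead of relying on a hardware cache to guess.

```python
def tiled_matmul(A, B, tile=2):
    # Multiply row-major matrices A (n x k) and B (k x m) in tiles.
    # Each (i0, j0, k0) block touches one tile of A and one tile of B,
    # reusing them across the whole tile of C before moving on, the way
    # a hand-scheduled kernel would stage data through on-chip memory.
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = C[i][j]
                        for kk in range(k0, min(k0 + tile, k)):
                            acc += A[i][kk] * B[kk][j]
                        C[i][j] = acc
    return C
```

On real hardware the inner loops would be fixed-function MAC arrays and the "tiles" would live in scratchpad SRAM, but the scheduling idea is the same.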

> no CUDA support

You're just being silly now. CUDA isn't a standard, it's proprietary to NVIDIA
and this isn't a general purpose processor anyway.

~~~
mastazi
> You're just being silly now. CUDA isn't a standard, it's proprietary to
> NVIDIA

I would love to agree with you, but the reality is that CUDA is the de facto
industry standard, e.g. [1].

> and this isn't a general purpose processor anyway.

The "GP" part in GPGPU would like to disagree... [2]

[1]
[https://en.wikipedia.org/wiki/Comparison_of_deep_learning_so...](https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software)

[2]
[http://www.nvidia.com/object/cuda_home_new.html](http://www.nvidia.com/object/cuda_home_new.html)

~~~
redcalx
What happened to OpenCL?

[https://en.wikipedia.org/wiki/OpenCL](https://en.wikipedia.org/wiki/OpenCL)

~~~
dagw
Nothing per se. It's just that it lags far enough behind CUDA that unless you
have a specific reason to avoid NVIDIA hardware there are very few pragmatic
reasons to use it.

------
cs702
Finally it looks like Intel is getting serious about competing with Nvidia
GPUs in the nascent deep/machine learning market!

As good as this could be, it would be even better if we also get an open-
source software stack that can compete with Nvidia's proprietary CUDA stack,
which currently dominates everywhere (except maybe in Google's data centers).

~~~
p1esk
_it would be even better if we also get an open-source software stack that can
compete with Nvidia's proprietary CUDA stack_

You've already got that. It's called OpenCL.

~~~
mastazi
Parent said "that can compete" so no, in practice, we don't have that. (Of
course I hope the situation will improve!!)

[https://en.wikipedia.org/wiki/Comparison_of_deep_learning_so...](https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software)
(see the "OpenCL support" column)

~~~
p1esk
Oh, so you (or OP) want Intel/Nervana to create yet another programming model
which will succeed where OpenCL fails (e.g. beat CUDA)? Seems unlikely...

------
mgreg
Likely a good move for Intel to get in on one of the faster-growing areas of
computing. Lets them compete with the GPU folks and offer something to the
cloud players eventually (a la Tensor Processing Unit).

Also lets Nervana scale and potentially get their Neon deep learning framework
out there in the face of bigger players (a la TensorFlow).

All in all it's good to see the competition in this space.

~~~
dharma1
Neon has been fast in benchmarks because of hand-optimised kernels, but their
kernel guy (Scott Gray) recently left for OpenAI. I wonder why.

~~~
scott-gray
I was aware of this deal before leaving. I made out pretty well with my vested
shares but was more interested in working on the cutting edge of research than
in continuing on with pure hardware optimization and design. I wish
Nervana/Intel the best and I'm looking forward to seeing their hardware come
to market. I'm mainly working on TensorFlow now, but would love to see Nervana
finish the graph backend they've been working on.

~~~
ericjang
Love your work. We welcome open source contributions to TensorFlow :)

------
scottlegrand
I figured Nervana was mostly dead if they were stuck at 28 nm.

I figured Intel was down and out in Santa Clara without a strong deep learning
play.

That all changed today.

Intel has been bouncing all over the place trying to break into deep learning.
If they don't screw this up, they just found their way in IMO.

~~~
nibnib
>I figured Nervana was mostly dead if they were stuck at 28 nm

Why? Is there a competitor that has leveraged a more modern process
technology?

~~~
scottlegrand
Yep, NVIDIA. Nervana's dedicated ASIC will deliver 55 (mostly) int16 TOps in
2017. In contrast, the two Titan XP GPUs I bought last week for a total of
$2400 deliver 44 such TOps. Next year, a single Volta GPU will deliver at
least 36, so I saw no way for them to win on their own with NVIDIA's GPU
roadmap merrily marching along since 2007.

However, getting access to Intel's fabs makes them a lot more interesting and
competitive. It's not a slam dunk for Intel yet, because they still have to
incorporate this into their product line (anyone seen Altera's Stratix 10 yet?
Because that was supposed to be 2014's 10+ TFLOP GPU killer), but it's a
fantastic acquisition and I wish them the best.

------
modeless
Very exciting that Intel sees a need to compete with NVIDIA here in a way that
isn't just more x86. NVIDIA certainly needs the competition. Now can AMD get
in on the action? They should be in a pretty good place to compete but so far
they seem to have missed the boat, with approximately zero software support.

~~~
flybirdx101
Unfortunately, I am afraid that AMD is too far behind Intel technologically at
the moment. What is so powerful about this deal is that Nervana will be able to
leverage Intel's capabilities and expertise in chip making, and Intel is
definitely a leader in this area.

------
lettergram
Having done a technical review of what Nervana has to offer... Why Intel?

My assumption is that Intel will lease the chips + software, similar to the
way HP, Dell, etc. lease GPU machines.

Nervana honestly would have been obsolete and dead pretty soon without this. I
see this as an acquire-hire.

~~~
byebyetech
>> an acquire-hire.

At $350 million, it is not an acquire-hire.

~~~
confiscate
I thought so too. Until I saw the HN thread on the recent acquisition of Quip
by Salesforce

------
rdlecler1
Now compare this to Walmart's $3B acquisition of Jet. This seems like a
brilliant move and maybe a very good price given what it could do for Intel.

------
jeffwass
Addendum from the bottom of the article:

Update: An earlier version of the story indicated the purchase price was more
than $350 million, according to a source. Multiple investors told Recode the
purchase price was significantly above that price, with one pegging it at $408
million.

------
atemerev
Their Neon platform is awesome. I hope it will not be abandoned.

Congratulations!

------
mastazi
@OP: the article's title has changed (to reflect the fact that the sum to be
paid will be in excess of $400M)

------
etrautmann
Congrats Naveen! Very nice work

------
zump
I'm suffering from FOMO!

------
singularity2001
@OP: that's Nervana with 'e' ;)

~~~
flinner
Thank you! Updated

