
Intel axes Nervana just two months after launch - rbanffy
https://fuse.wikichip.org/news/3270/intel-axes-nervana-just-two-months-after-launch/
======
0xff00ffee
Could it be they bought Habana for US$2B?

But this is very intel: get psyched about a technology, dump billions into it
for a few years, flake out. See: cloud services ca. 1997 and IoT ca. every 3
years since 2013.

~~~
y-c-o-m-b
> But this is very intel: get psyched about a technology, dump billions into
> it for a few years, flake out

Hit the nail on the head! I worked for Intel nearly a decade ago, as a full-
time employee for a few years. They saw Android and the Apple App Store
capturing mobile users and decided to dump $$$ into an app store for the PC
(Intel AppUp). The effort lasted a few years before it crashed and burned
HARD. During this time they also tried to spin up MeeGo/Moblin for mobile and
tablet, then a partnership with Samsung to work on Tizen, and bought up
McAfee.

The amount of money Intel flushes down the toilet on fruitless endeavors is
sad, especially considering the number of people who end up jobless (like
many of my good friends did) because of it.

~~~
throwawa66
I've noticed this recurring theme. It seems like all companies eventually go
down this path if they become mega rich. At that point money alone doesn't
matter as much as capturing new markets does, so lots of money goes down the
drain. It's a good thing for the companies that get acquired and get their
paycheck.

I see it as rich people who buy expensive things and let them rot because
they forgot they bought them.

~~~
eric-hu
Sometimes bets like these turn out to be huge wins, like Microsoft going into
the gaming platform space with the Xbox or Amazon building out cloud services.
The Intel app store doesn't seem like such a good idea to me, but I can see
how the board could have been pushing for "outside the box" big bets.

~~~
backupcavalry
I'd argue that Microsoft's and Amazon's new branches were at least tangential
to what they were already doing on the back end, and that they already had
most of the required talent to do it right.

I have no idea how Intel could work their way into running an app store beyond
"we have tech and marketing dudes, good enough, right?"

~~~
sitkack
With their new fabric/modular system, the modules are effectively "apps".
Though possibly sold by app developers, you could install the Adobe module or
the Autodesk module and it would have hardware/RAM/storage tuned to those
applications. It will start with DL and graphics accelerators but then morph
into application-specific modules.

------
m0zg
This is unbelievably dumb. Folks at NVIDIA HQ are no doubt breaking out their
best champagne over this news: their Tensor Cores got a few more years of life
thanks to this decision, and they retain the pricing power as the only
provider of practical deep learning acceleration that actually works and has
good tooling. I was looking forward to their TPU-like "N" offering in
particular. Roughly the throughput of Google's TPUv3, better silicon
expertise, _and_ I'd be able to buy one. As an applied scientist, I'd pay
good money for 100 TFLOPS of bfloat16 throughput; better yet, if I could
stick 4 of those things into each machine, I could train over lunch things
that would normally take a week. And it was pretty simple: just a systolic
array with some extras to do convolutions efficiently, nothing super
complicated, which means that unlike more DSP-like approaches such as
Habana's, you don't have to spend years writing and debugging a specialized
compiler. You get this throughput _now_. Dumb, dumb move on Intel's part.

~~~
tachyonbeam
AMD support for PyTorch is getting pretty good from what I understand. They're
going to become serious competition soon.

~~~
m0zg
Just in time for NVIDIA to blow them out of the water completely on both the
included RAM (which they're increasing) and performance (+40% if rumors are to
be believed) with Ampere. AMD has a looong way to go.

~~~
tachyonbeam
AMD might still be able to offer competitive price/performance. Even if Nvidia
is faster, I'm glad there's competition. For a while, Intel was really
dominant, and they really stagnated.

------
nnq
I'd bet _most successful hardware advances in ML/AI will come from companies
that also push the field's edge in software_, because all the others would be
at a disadvantage: their "better" hardware will mismatch the software, and
that mismatch will increase development costs A LOT.

 _Google's TPUs, Tesla's whatever-those-are, etc. Bet on them!_

NOT on whatever Intel or IBM or AMD or Arm are doing in this field. Heck,
even Nvidia will probably start losing at this game in the long run. The
field is too dynamic and volatile; companies need to "eat their own dogfood"
by vertically integrating hardware + software before pushing hardware to sell
to others too. This is not the old game of general-purpose hardware anymore...

~~~
hydroreadsstuff
I think NVIDIA is in a good spot with its strong software ecosystem and an
army of devtechs. Their architecture is pretty general, so if the ML
algorithms change they adapt better than, say, a TPU. I can't find the video
of the talk anymore, but IIRC, late in the TPUv3 design process Google had to
rebalance their hardware to account for algorithm changes.

Second, most companies that want to deploy AI cannot afford to build their own
custom hardware. Even most carmakers partner with Mobileye or NVIDIA.

These big companies also have research teams whose job it is to stay on the
ball, research and develop AI techniques and influence how the hardware has to
change.

As for eating your own dog food, I think NVIDIA does just that with their
autonomous driving software stack, robotics kit, devtechs for customers and
library optimization/porting.

With sufficient focus, funding and execution I think AMD and Intel can reach a
similar spot.

That said, my hunch is that the legacy and compatibility burden that the
general-purpose hardware makers have to carry forward will become a problem.
But until that happens (i.e. until specialized HW delivers better bang/$),
which may take 2-10 years, they will likely figure out how to alleviate it
and/or develop more specialized hardware.

~~~
DennisP
It seems that if Tesla was able to build its own custom hardware and have it
working in 2019, other carmakers, which are much larger and more profitable,
could easily do the same if they cared to.

~~~
radiorental
Domain expertise is a finite resource; couple that with the fact that
automotive manufacturers are not adept at higher-level software development
(as opposed to embedded).

I'm not surprised companies like Ford have continued to push out dates.

------
fallous
Just a quick reminder when considering Intel products... if it ain't Mama x86,
it won't be around very long.

~~~
Rafuino
Counterpoint: Intel SSDs have been going strong since at least 2008.

~~~
mark-r
Define "strong". I've never actually seen one.

~~~
freeone3000
They exist on Newegg. You can buy one. I have one.

~~~
astrodust
I have a few. For certain applications they're excellent.

~~~
mark-r
For which applications would they be better than any other brand?

~~~
astrodust
Datacentre-type applications where you need a relatively inexpensive server-
grade drive.

For example, the P4610 is a great deal for a mixed-use drive.

------
archi42
Not surprised: 100GbE interconnects vs. a proprietary interconnect. A complex
architecture for which a hell of a lot of work has to be done in the compiler
(anyone remember Itanium?) vs. something that's barely more complex than
NUMA. And lower TDP per node is relevant for large-scale datacenters.

Yes, Nervana looks much sexier, but I'd expect Habana to be more useful/usable
overall.

~~~
m0zg
A hell of a lot has to be done for Habana, too. And a hell of a lot had to be
done just for their edge SKU. My understanding is that their "N" SKU was
basically TPUv3 for the masses - it's not even a question that there was
massive product-market fit; if they integrated it into TF or PyTorch, it'd
take off like a rocket. Their "I" SKU was more along the lines of Habana:
more Movidius-like, but scaled up. That thing (Movidius) has a specialized
VLIW architecture (SHAVE) for which they had to write a compiler and roll it
into LLVM, and if you read the architecture docs you'll see why. 12 VLIW
cores, a specialized memory architecture - it's a bear to program for. Nobody
will ever touch this shit with a 10-foot pole if they can avoid it. Systolic
TPU-like stuff is more constrained in what it can do, but it's much easier to
use and write drivers for (not to mention shrink the lithography with). And
it's nowhere near its peak potential yet, from either a throughput or an
energy-efficiency standpoint. _And_ you can scale it down to the edge as
well, as Google has shown. It's just a better approach, at least for the
foreseeable future.

~~~
archi42
> Hell of a lot has to be done for Habana, too.

Yeah, "barely more complex than NUMA" was a slight exaggeration, and only
meant in comparison to the other ;-) Judging by the architectural differences
laid out in the link, Habana is still hard as hell to develop a toolchain
for; but Nervana is the kind of stuff that can drive people insane.

------
anthony_doan
I didn't even realize they launched it.

I thought it was vaporware. I went to their party where they talked about it
and it was underwhelming. They did not give any performance comparisons to
Nvidia and such.

------
nimish
Probably couldn't find customers for Nervana, so they bought Habana...

