
Intel Lines Up ThunderX ARM Against Xeons - baazaar
http://www.nextplatform.com/2016/05/31/intel-lines-thunderx-arms-xeons/
======
ryao
Intel might find that these benchmarks achieve the opposite of what they
intended. I had no idea that viable ARMv8 alternatives to Intel's architecture
were on the market. There are likely others who were similarly ignorant. Intel
has just informed all of us that these options are out there.

~~~
emn13
Although given the relative performance, I can't imagine who'd pick the ARM
option currently, unless the pricing difference is truly extreme. There are
just too many downsides to ARM servers at this time: it's not just overall
performance, but particularly single-threaded performance (which affects e.g.
the latency your readers experience), power usage (which affects costs), and
of course availability of software.

It's getting close, that's for sure - and there may well be some niche where
it already makes sense, but I wouldn't know which one.

Probably the best value proposition is that the gap is likely to shrink, and
investing now may help you prepare for a future switch when it makes more
sense.

~~~
ryao
> It's getting close, that's for sure - and there may well be some niche where
> it already makes sense, but I wouldn't know which one.

Proprietary blobs such as Intel's ME, which cannot be removed from Intel Xeon
systems, pose a security risk. If the ARM systems are found to lack them and
prove good enough, security-minded individuals and organizations will buy
them for that reason.

As long as that happens, Cavium will be able to use the revenues to develop
the next generation of chips and better compete on traditional metrics, as
Intel was able to do against the RISC vendors of the past. As a
security-minded individual myself, I am well aware of that, and I look
forward to supporting such developments by purchasing this kind of hardware
for applications where it can be used.

Others are thinking this way too. Some of them are watching the Talos
workstation motherboard come to market with that in mind. The people behind
it even say IBM's executives are willing to make the POWER9 processor's
microcode open source if the POWER8 model succeeds.

[https://www.raptorengineering.com/TALOS/prerelease.php](https://www.raptorengineering.com/TALOS/prerelease.php)

~~~
gpderetta
"Proprietary blobs such as Intel's ME that cannot be removed from Intel Xeon
systems pose a security risk. If the ARM systems are found to lack them and
they achieve the status of being good enough, security minded individuals and
organizations will buy them because of that."

Do you expect the above to be anything more than a rounding error in the
total server market? I doubt Cavium is going to finance its development from
that.

~~~
nitrogen
If you want it to be so, then make it so. There's plenty of ammunition to use
in all of the leaks.

------
ThenAsNow
There seems to be at least one incorrect statement in the article.

"The point Intel is making in the chart above is that it has mastered NUMA
scaling, which is no surprise since the company has been at it for decades."

SGI was one of the pioneers of ccNUMA (Origin 2000?) in the late '90s. That
would constitute decades.

Around the same time, Intel delivered the ASCI Red supercomputer using P6
processors with a snooping coherency scheme.

I don't think Intel had significant NUMA experience (at least in any
commercial products) prior to AMD K8 (2003), and their first NUMA product
would have been IA64 (E8870 chipset?) or the Nehalem microarchitecture chips.
That is not quite a decade, let alone decades.

~~~
creshal
I was under the impression that Itanium had had it for a while longer, but
you're right. Itanium got it even later (2010) thanks to tape-out delays.

~~~
ryao
The SGI Altix 3000 had ccNUMA in 2003, but that used SGI's ccNUMA technology
to connect Intel's microprocessors, rather than anything Intel developed. I
believe other UNIX vendors that adopted Intel's chips used their own ccNUMA
technology to link them together too.

Anyway, the article's author likely made the mistake of thinking that
multisocket implies NUMA. It is an easy mistake to make for those who do not
know the details of how systems are implemented.
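
(Tangent: on Linux you can check at runtime whether the kernel actually
exposes more than one memory node. A minimal, hypothetical sketch using
libnuma, assuming it is installed; compile with gcc numacheck.c -lnuma:)

    /* numacheck.c - query what the kernel exposes (Linux + libnuma). */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            printf("Kernel exposes no NUMA support.\n");
            return 0;
        }
        int nodes = numa_num_configured_nodes();
        printf("Configured memory nodes: %d\n", nodes);
        if (nodes > 1)
            printf("Genuinely NUMA: memory placement matters.\n");
        else
            printf("Single node: multisocket does not imply NUMA.\n");
        return 0;
    }

A multisocket machine with uniform memory (e.g. a snooping-bus design)
reports a single node; a ccNUMA system like the Altix reports several.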

~~~
ThenAsNow
SGI did it with MIPS processors in the 90s before they used Intel chips. For
example:
[http://dl.acm.org/citation.cfm?doid=264107.264206](http://dl.acm.org/citation.cfm?doid=264107.264206)

This was the system in which the directory logic was formally verified:
[http://www.sgidepot.co.uk/origin/compcon97_dv.pdf](http://www.sgidepot.co.uk/origin/compcon97_dv.pdf)

I agree the author probably made a simple mistake, but it is misleading to
people at the "enthusiast" level who then parrot things like this as flamewar
fodder. A lot of respect is owed to computer engineers at corporations that
are defunct or a fraction of their former greatness (e.g., SGI, DEC, HP).

As a further example, Bob Colwell's book, "The Pentium Chronicles", is a
great read and teaches some powerful lessons about how to do engineering
successfully in the kind of large organizations that can scale production of
a design. But if you take it at face value, the book makes no meaningful
attempt to inform the reader that other companies had done out-of-order
execution well before P6. It leaves a naive reader with the idea that Intel
pioneered OoO as well, which is in no way true.

I guess I'm just (maybe overly) sensitive to sloppiness and omission leaving
an inaccurate impression of the history, especially when it's so easy these
days to fact-check.

~~~
setpatchaddress
My recollection is that "The Pentium Chronicles" said explicitly that OoO had
been done, but not for x86 CPUs, and that there was a significant question at
the time as to whether it was actually viable for x86.

------
jakozaur
With Moore's Law slowing down, semiconductors may become commoditized. It
happened to the DRAM business in the 1980s:
[https://en.wikipedia.org/wiki/Intel#From_DRAM_to_microproces...](https://en.wikipedia.org/wiki/Intel#From_DRAM_to_microprocessors)

Maybe CPU margins are going to drop too? ARM is the likely successor. ARM has
already won the mobile market; maybe it will start eating some of the cloud
market? Do we really care which CPU architecture S3 servers and the like run
on? That gives cloud providers huge leverage in negotiations. E.g., AWS can
demand bigger discounts on Intel CPUs and, if Intel won't comply, slowly but
surely move some portion of its infrastructure to ARM.

~~~
gpderetta
The problem with commoditization is that margins become razor-thin, which in
turn means less money to invest in R&D, which means it is hard for a company
that isn't already there to come up with high-performance (or even
entry-level, really) server solutions.

Intel ate the server market in the late '90s/early '00s thanks to the fat
margins it had on the desktop.

That's not currently the case in the ARM world, where only Apple and maybe
Qualcomm have the needed margins, and Apple currently doesn't seem interested
in the server market.

~~~
bhouston
If ARM is providing the expensive R&D-based IP, then any licensee can be a
competitor.

~~~
gpderetta
But AFAIK ARM hasn't provided any server-specific design. Most ARM-based
server attempts so far have been custom designs (e.g. AppliedMicro X-Gene).

~~~
bhouston
Give it time; why wouldn't ARM start offering server building blocks? There
is obviously a market for it. ARM went to 64-bit, and it has just moved to
advanced GPU designs. It supports extreme core counts now too. Server-centric
designs cannot be far off.

~~~
gpderetta
While ARM Holdings' reference processors are reasonably good, the company
specializes in low-power designs. Apple, for example, had to do a custom
design to get the high-performance ARM cores it needed. It is not obvious
that ARM has the knowledge, the money, or even the will to pursue a server
design.

