
Intel 28-core fantasy vs. AMD 32-core reality - BlackMonday
https://www.techspot.com/news/75009-intel-28-core-fantasy-vs-amd-32-core.html
======
hardwaresofton
I'm excited for what this will do to the cost of dedicated servers in ~1 year.

Also, as a person who used to work at Intel, I don't know whose idea this was,
but that person should probably have a long hard look at themselves --
hardware people are exactly the people that this kind of shit wouldn't fly
with, because they'll almost always ask for details and can spot a hack from a
mile away.

On the one hand I can sympathize with Intel -- seeing how tough it is to stay
on top of the market year over year, trying to predict and start developing
the next trend in hardware. But on the other hand... why in the world would
you do this? Intel basically dominates the high-end market right now; just
take your time and make a properly better thing.

~~~
dragontamer
> I'm excited for what this will do to the cost of dedicated servers in ~1
> year.

This is the opposite though?

The dedicated servers are turning into HEDTs. AMD 32-core EPYC has been
available since last year, and Intel's 28-core Skylake (although $10,000) has
been also available for a year.

So dedicated servers got this tech first, then HEDT got it a bit later. I
guess Threadripper 2 is Zen+, so technically HEDT gets the 12nm tech first,
but the 32-core infrastructure was in EPYC first.

~~~
x0f1a
The problem IMO is that Intel HEDTs don't support ECC (as far as I know), so
they're not a great choice when you're working with workloads that need 64GB -
128GB of RAM (video editing, etc.).

------
lbill
I get that Intel feels threatened by AMD. They are trying to impress the
consumers... but bullshitting a demo is a very bad move! When a consumer
decides to build a new PC, the characteristics of the product matter, but so
does the reputation of the company that manufactures it. Right now Intel is
putting too much effort into sketchy marketing practices: it undermines the
actual work being done on their processors by some very talented people.

Presenting it as an extreme overclocking demo would have been a much wiser
option.

~~~
jacob019
Unfortunately it might work. With today's news cycles, an average consumer may
have noticed the headline about Intel's 5GHz 28-core monster, and that's it.
Follow up articles aren't as interesting.

~~~
esturk
The average consumer may just buy a smartphone over a PC. The average
tech-savvy consumer may build their own PC rather than buy one off the shelf.
In either case, most people won't be buying these 28-core chips unless they
are reps for enterprises, in which case they will most definitely have done
their HW before buying this.

~~~
craftyguy
>reps for enterprises in which case they will most definitely have done their
HW before buying this.

Don't assume they would. Plenty of purchasing in large companies is associated
with some higher up hearing about something, wanting it, then buying it to
'help' in some obscure way.

------
dingo_bat
It's awesome that TR2 will be a drop in replacement on existing motherboards!
Props to AMD for delivering real value to consumers.

~~~
havemylife
I recall reading that Threadripper 2 is supposed to draw more power, so early
motherboards with that socket may not be able to handle it. Just something to
keep in mind.

Regardless, damn good on AMD.

~~~
42656e
2nd Gen TR will draw ~250W. X399 boards have at least 1x 4-Pin and 1x 8-Pin
ATX-12V connector, so at least 225W. The rest is supplied by the ATX 24 pin
connection. Some X399 boards use 2x 8-Pin ATX-12V connectors, that works out
to 300W, which should be more than enough for Threadripper 2.
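The arithmetic above can be sketched out. Note the per-connector 12V budgets
below (75W for a 4-pin ATX12V, 150W for an 8-pin) are the figures implied by
this comment, not spec-guaranteed limits:

```python
# Rough 12V power-budget check for a ~250W Threadripper 2 on X399 boards.
# The per-connector wattages are illustrative assumptions from the comment
# above, not values guaranteed by the ATX specification.
CONNECTOR_BUDGET_W = {
    "atx12v_4pin": 75,   # assumed ~75 W
    "atx12v_8pin": 150,  # assumed ~150 W
}

def cpu_power_headroom(connectors, cpu_draw_w=250):
    """Return (total supply, headroom vs. CPU draw) for a connector list."""
    supply = sum(CONNECTOR_BUDGET_W[c] for c in connectors)
    return supply, supply - cpu_draw_w

# Typical X399 board: one 4-pin + one 8-pin -> 225 W, 25 W short of 250 W
# (the ATX 24-pin makes up the difference).
print(cpu_power_headroom(["atx12v_4pin", "atx12v_8pin"]))  # (225, -25)

# High-end X399 board: two 8-pin connectors -> 300 W, 50 W of headroom.
print(cpu_power_headroom(["atx12v_8pin", "atx12v_8pin"]))  # (300, 50)
```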

~~~
havemylife
I'm just relaying what I read when they announced TR2, which seems to have
been copy-pasted amongst the tech news sites.

Unfortunately they just state "... some first-generation X399 motherboards may
not be able to deliver enough power..." and not specifically which ones.

[https://arstechnica.com/gadgets/2018/06/amd-unveils-threadri...](https://arstechnica.com/gadgets/2018/06/amd-unveils-threadripper-2-up-to-32-cores-64-threads-for-an-enthusiast-chip/)

If their statement is false then that's on them.

------
chrisper
I just recently replaced my old i7 920 in the homeserver with an AMD Ryzen 5
2600. Really like it so far. Price / performance is great. This is my first
AMD since probably ever....

Two things I don't like: their CPUs are pin-based, which seemed kind of
old-fashioned after Intel CPUs -- but that's really a minor thing. The other
is that memory compatibility is a bit finicky. Maybe it has to do with the
CPU being so new; not sure.

~~~
AndrewDavis
> don't like is that their CPUs are pin based

To me that's a win. If a CPU pin is bent, it's typically fairly easy to
straighten it. Fixing a bent pin in a socket is a massive pain.

But it's much easier to protect socket pins with the cover. So there are pros
and cons either way.

~~~
dpedu
True, though it is much easier to damage a pin-based CPU in the first place.

~~~
zaarn
Usually, I'm very careful with the CPU, more than the motherboard, so I think
that balances out.

------
yomritoyj
As an outsider to 'enterprise-grade' computing, I'm curious about situations
where a high number of cores in a single processor would be superior to
multiple processors with the same total energy draw sitting on a single
motherboard?

I can understand HPC applications where the high-speed interconnect on the
chip would make a big difference.

But in business applications where the cores are dedicated to running
independent VMs, or are handling independent client requests, what is really
gained? There would still be some benefits from a shared cache, but how large
quantitatively would that be?

~~~
wmantly
It has to do with memory. In server-grade computers, each socket has local
memory slots that it can read and write very fast. Read this:
[https://en.wikipedia.org/wiki/Non-uniform_memory_access](https://en.wikipedia.org/wiki/Non-uniform_memory_access)
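On Linux you can inspect the NUMA topology without any extra tools by reading
sysfs. A minimal sketch (assumes the standard Linux `/sys/devices/system/node`
layout; returns an empty list elsewhere):

```python
import os

def numa_nodes(sysfs_root="/sys/devices/system/node"):
    """List NUMA node IDs exposed by the Linux kernel, in order."""
    if not os.path.isdir(sysfs_root):
        return []  # non-Linux or sysfs not mounted
    return sorted(
        int(d[len("node"):])
        for d in os.listdir(sysfs_root)
        if d.startswith("node") and d[len("node"):].isdigit()
    )

# e.g. [0] on a single-node machine, [0, 1] on a two-socket server
print(numa_nodes())
```

`numactl --hardware` (as shown further down the thread) reports the same
information along with per-node memory sizes and inter-node distances.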

~~~
Kubuxu
It is already the case with Threadripper processors. They have multiple NUMA
nodes inside one socket.

~~~
devttyeu
It actually presents itself to the system as a single node. On a TR 1920x
system:

      $ numactl --hardware
      available: 1 nodes (0)
      node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
      node 0 size: 32107 MB
      node 0 free: 20738 MB
      node distances:
      node   0
        0:  10
~~~
throwaway2048
Windows is spectacularly poor at dealing with NUMA CPUs, so Threadripper is
not presented to the OS as NUMA by default.

~~~
pixelpoet
Really? I've been thinking of getting a TR for some NUMA coding experience,
and if Windows can't see that then it really sucks.

~~~
my123
It's togglable in the BIOS/UEFI.

------
rocky1138
Which one of these companies does a better job with free/libre software? I've
always had a soft spot for AMD because it's the underdog, but I want to make
sure that they are free, too.

~~~
Senderman
If the answer turned out to be Intel, would you overlook the misleading
28-core demo the article discusses?

~~~
bayindirh
Intel provides a lot of patches for the Linux kernel, for their hardware and
performance counters, and their graphics drivers are open source and in
mainline. That's not even mentioning their wired Ethernet and various other
drivers.

Being one of the top contributors to Linux/OSS/FLOSS doesn't let them come
away clean from their 28-core, inadvertently but conveniently miscommunicated
demo.

------
solomatov
AMD did a great job with Threadripper, making high end CPUs much more
affordable. It's interesting that Intel doesn't lower their prices. What's the
logic behind it?

~~~
jpalomaki
Is there some maintained "bang for buck" chart available that would show how
many units of certain performance metric one $ buys with different CPUs?

~~~
throwawaymath
Yes, you're looking for the Passmark "Best Value" ranking:
[https://www.cpubenchmark.net/cpu_value_available.html](https://www.cpubenchmark.net/cpu_value_available.html)
It's also helpfully represented as a scatter plot:
[https://www.cpubenchmark.net/cpu_value_available.html#multic...](https://www.cpubenchmark.net/cpu_value_available.html#multicpu_xy_scatter_graph).

Unsurprisingly, older-generation models tend to dominate that list. Keep in
mind that the thermal efficiency of Xeon v2 and v3 models is lower than v4;
Passmark does not include power draw in this ranking. If you can afford the
extra power usage and don't mind DDR3 RAM, high-clockspeed, high-core-count
CPUs can be had relatively cheaply by going with v2.
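A "bang for buck" metric like Passmark's is just benchmark score divided by
price. An illustrative sketch -- the CPU names, scores, and prices below are
made-up placeholders, not real Passmark data:

```python
# Perf-per-dollar ranking with placeholder data (not real benchmark results).
cpus = [
    {"name": "CPU A", "score": 20000, "price": 900.0},
    {"name": "CPU B", "score": 15000, "price": 350.0},
    {"name": "CPU C", "score": 12000, "price": 200.0},
]

def value_ranking(cpus):
    """Sort CPUs by benchmark score per dollar, best value first."""
    return sorted(cpus, key=lambda c: c["score"] / c["price"], reverse=True)

for c in value_ranking(cpus):
    print(f"{c['name']}: {c['score'] / c['price']:.1f} points/$")
```

A more complete version would also fold in power draw over the machine's
lifetime, which, as noted above, Passmark's ranking omits.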

------
jejones3141
For a long time I saved a copy of a publication by Motorola about how Intel
played fast and loose with benchmarks in comparisons of the 80386 with the
68020. (I lost it in a move, alas.) Can't say I was surprised to read about
the 28-core fiasco.

~~~
NullPrefix
link?

~~~
manual
Not specific to the Motorola case, but there have been a number of allegations
of Intel doing this over the years:

[https://www.pcworld.com/article/2842647/intel-will-pay-you-1...](https://www.pcworld.com/article/2842647/intel-will-pay-you-15-to-settle-claims-it-fudged-pentium-4-benchmarks.html)

[http://www.agner.org/optimize/blog/read.php?i=49](http://www.agner.org/optimize/blog/read.php?i=49)

------
jopython
I believe Intel's designs are based on a single die, compared to AMD's
Threadripper, which is multi-chip.

~~~
bluedino
Could Intel stick more than one on a single chip?

~~~
CarVac
Their first dual-core processors were multi-chip modules.

~~~
zeusk
However, they did not have a proper on-die interconnect and instead
communicated over the FSB, which made them quite bad at scaling.

------
vbezhenar
In the end Intel got bad PR and AMD got good PR. Is it a major PR failure from
Intel?

------
ben509
There was an interview with an Intel engineer on this, it was quite revealing:
[https://www.youtube.com/watch?v=ozcEel1rNKM](https://www.youtube.com/watch?v=ozcEel1rNKM)

------
orev
This is a short term loss for Intel, but could end up being a long term win as
an attack on AMD. Making this announcement forced AMD to advance their plans
for the 32-core, possibly faster than they really wanted to right now. That
depletes their product pipeline faster, making it more difficult to keep pace
with future advances.

Edit: initial reports said that AMD was only planning to announce the 24-core
CPU, and may have advanced the announcement of the 32-core chip due to Intel's
stunt. TFA doesn't mention that, so possibly the initial reports were not
accurate.

~~~
BlackMonday
They maxed out the number of cores they can ship in a single CPU for now but
that doesn't seem like a problem.

AMD is already set to launch their 7nm EPYC processor based on Zen 2 in 2019
(skipping the Zen+ used by the new Threadripper and Ryzen 2xxx), which is
expected to have 48 cores (some rumors even suggest 64 cores, but that seems
more likely for 7nm EUV than for the first 7nm processes). So they will have
no problem releasing more cores with Threadripper 3 next year (if they keep up
the yearly releases).

On top of that, to my layman's eyes AMD's approach of using Infinity Fabric to
connect dies seems better suited to reacting to change than Intel's monolithic
design.

------
glonq
It appears that Intel wanted to trump AMD. ...and also wanted to Trump AMD.

------
Phylter
This article repeats itself many times. Are they trying to hit a word count
quota? Is this high school again?

------
hyperrail
I think of AMD's current approach - a microarchitecture with slower cores, but
more cores, than Intel - as very similar to what Sun/Oracle tried to do from
2005 to 2010 with the Niagara family (UltraSPARC T1-T3).

Each core in those chips was seriously underclocked compared to a Xeon of
similar vintage and price point (1-1.67 GHz, versus 1.6 GHz to 3 GHz or more),
and lacked features like out-of-order execution and big caches that are almost
minimum requirements for a modern server CPU. Sun hoped to make up for the
slow cores in server applications by having more cores and multiple threads
per core (though with a simpler technology than SMT/hyper-threading).

However, Oracle eventually decided to focus on single-threaded performance
with its more recent chips - it turns out that no OoO and < 2 GHz nominal
speeds look pretty bad for many server applications. My suspicion is that even
though the CPU-bound parts of games are becoming more multi-threaded, AMD will
be forced to fix its slower architecture or lose out to Intel again in the
server AND high-end desktop markets in a few years.

~~~
wmf
It's not similar at all. AMD's cores are 10-20% slower while Niagara was
80-90% slower. And AMD isn't intentionally slower; they designed the Zen core
for maximum single-thread performance but they just didn't do as good a job as
Intel because their budget is vastly smaller.

~~~
snuxoll
Zen does a decent job clock-for-clock; the huge killer on the desktop is raw
clock speed. When you pit a Ryzen chip maxing out around 3.8-4.2GHz (depending
on generation and silicon lottery) against an i7-8700K that boosts to 4.7GHz,
it's pretty obvious which is going to come out on top.

From what I can tell, most of that clock deficit comes from the 12nm LPP
process that AMD is currently using; a low-power process typically equates to
lower clocks (see mobile chips), so it's not surprising -- and it's why Zen 2,
based on GloFo's 7nm process, will hopefully close that gap.

~~~
nottorp
Is it such a huge killer outside gaming benchmarks? And even then, the extra
power is useful only if you jumped on the 4K bandwagon.

Personally, considering the amount of random crap that I run, I'd rather have
more cores. And more importantly, 80% of the performance for 50% of the price
is just fine(tm).

~~~
zaarn
The gaming benchmarks are a bit meh since in most realistic builds the GPU
will be the bottleneck, not the CPU (unless you buy a 1080Ti with a Ryzen 3 or
i3, though in that case all help is lost).

More cores do benefit if you run stuff besides the game, which most people do.

~~~
nottorp
I'm into VM abuse, so for me more cores would be a no-brainer...

Sadly for AMD, that's only IF I needed a new machine. My 2012 Core i7 still
seems to be enough for my needs. (Except the GPU, which I changed recently.)

~~~
zaarn
I feel you there; I used an i5 2500 from when it came out until last March,
when I switched to Ryzen. A very good CPU indeed.

~~~
nottorp
At this time the only reason I'd upgrade would be to go from 32GB of RAM to
64GB. 32 is barely enough if I happen to need a couple of VMs up and the usual
100 tabs of docs in the browser :(

