
AMD Ryzen Threadripper 1920X and 1950X CPUs Announced - zdw
http://www.anandtech.com/show/11636/amd-ryzen-threadripper-1920x-1950x-16-cores-4g-turbo-799-999-usd
======
ChuckMcM
I really hope the ECC carries through. It irritates me to have to buy a
"server" CPU if I want ECC on my desktop (which I do), and it isn't that many
gates! It's not like folks are tight on transistors or anything. And on my 48GB
desktop (currently using a Xeon CPU) I'll see anywhere from 1 to 4 corrected
single-bit errors a month.

For things like large CAD drawings, which are essentially one giant data
structure, silently flipping a bit somewhere in the middle can leave the file
unable to be opened. So I certainly prefer not to have those bits flip.

~~~
fcanesin
I have a Ryzen 5 1600 with a pair of Crucial ECC UDIMMs. It works perfectly on
the ASUS Prime B350M-A/CSM uATX board. Stress tested. Errors are logged and
halt behavior is confirmed (Ubuntu 17.04).
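
For anyone who wants to check for themselves, here is a minimal sketch of reading the kernel's EDAC counters on Linux. The paths are assumptions about your platform; the appropriate EDAC driver has to be loaded for /sys/devices/system/edac/mc/ to be populated at all:

    #!/usr/bin/env python3
    # Sketch: print corrected/uncorrected ECC error counts exposed by the
    # Linux EDAC subsystem. Assumes an EDAC driver is loaded and counters
    # appear under /sys/devices/system/edac/mc/mc*/.
    from pathlib import Path

    EDAC_ROOT = Path("/sys/devices/system/edac/mc")

    for mc in sorted(EDAC_ROOT.glob("mc*")):
        if not (mc / "ce_count").exists():
            continue
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")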

~~~
arcaster
Interesting, could you possibly explain how "halt behavior" indicates that ECC
is fully functioning? I've read a lot about ECC being compatible, but haven't
seen tests explicitly showing that it works.

~~~
ChuckMcM
On my system, if an uncorrectable error were to happen, the system would go
into a 'machine check' state (a halt). In theory, the BIOS supports 'chip
kill', meaning it reboots with a flag set so that specific chip in the DIMM is
no longer used. I've never seen that happen on my desktop, but in the data
center, when it happened, a system that should have had 128GB of memory would
reboot and the BIOS would report less than the full amount expected from the
DIMMs plugged in.

~~~
fulafel
Chipkill is an IBM name for the feature that corrects multiple bit errors in
DRAM. Are you sure about the reboot thing, or are there two similarly named ECC
techs that do different things?

~~~
ChuckMcM
Intel discussion of Chipkill
([https://www.intel.com/content/dam/www/public/us/en/documents...](https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-e7-family-ras-server-paper.pdf)).
The servers in question were Supermicro "Jupiter" chassis with Westmere
processors; Supermicro's discussion of their implementation is here
([https://www.supermicro.com/support/faqs/faq.cfm?faq=2642](https://www.supermicro.com/support/faqs/faq.cfm?faq=2642))

When systems rebooted with less memory than the system configuration database
said they should have, most of the time there would be a multi-bit error
detection, a machine check, and a 'memory update' entry in the IPMI buffer.

------
walkingolof
The best thing about this is that competition is back (in the high-end x86
market) and the winner is the consumer; the CPU market has been stale for a
while.

~~~
le-mark
I kinda missed the boat on this story. I was of the mindset that the CPU game
was over, that Intel had an insurmountable advantage in 14-nanometer
fabrication, and that basically their scale meant AMD could never compete
again. How has AMD managed a comeback like this?

Edit: reading Wikipedia, it seems like it's the fact that AMD could outsource
14nm FinFET fabrication to GlobalFoundries that let them do this, since they
didn't have to build their own fabrication facility. Or could they have done
their own fab?

~~~
deaddodo
Because AMD doesn't handle its fabrication any longer. It's all done through
third parties (GloFo and Samsung, particularly) which are competitive with
Intel. Intel still has the advantage in that they are vertically integrated;
however, Samsung is pushing out far more chips per day than Intel and is the
lead fab for cutting-edge ARM processors. This makes it much more capable of
multiparadigm fabbing.

~~~
B1FF_PSUVM
AMD founder Jerry Sanders used to say "Real men have fabs" (i.e. ownership of
the whole production chain), and is probably sad that it had to be given up for
financial reasons.

Looking up that phrase gets a history of upsides and downsides, e.g.
[https://webcache.googleusercontent.com/search?q=cache:Oxfqym...](https://webcache.googleusercontent.com/search?q=cache:OxfqymS01qAJ:https://www.bloomberg.com/news/articles/1994-04-10/real-men-have-fabs+&cd=2&hl=en&ct=clnk)

------
SCdF
How do people with many CPU cores find it helps their day-to-day, excluding
people who run VMs or do highly parallelisable things as their 80% core job
loop (i.e. you run some form of data.parallelMap(awesomeness) all day)?

Does it help with general responsiveness? Do many apps / processes parallelise
nicely? Or is it more "Everything is 99% idle until I need to run that
Photoshop filter, and then it does it really fast"?

~~~
sqeaky
Build Time!

I will pay a great deal of money to reduce my build times. Building is readily
parallelizable, and with good enough build scripts and system design there are
several smaller linking steps instead of one huge serialized one at the end. On
my current system, building the code I care about only takes about 2 minutes
for roughly 5 MLOC of C++.
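
If you want to see where your own project stops scaling (or where memory becomes the limit), here is a rough sketch. It assumes a Makefile-based build in the current directory and that `make clean` is cheap; adjust the commands for your build system:

    #!/usr/bin/env python3
    # Sketch: time a clean parallel build at several -j levels to see where
    # the speedup flattens out (or where you start swapping).
    import os
    import subprocess
    import time

    for jobs in (1, 2, 4, 8, os.cpu_count()):
        subprocess.run(["make", "clean"], check=True, capture_output=True)
        start = time.monotonic()
        subprocess.run(["make", f"-j{jobs}"], check=True, capture_output=True)
        print(f"-j{jobs}: {time.monotonic() - start:.1f}s")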

Anything more than 10 seconds is enough to get stuck in the time waste that is
Reddit... or HN... I am getting back to work now; my build is likely done.

~~~
lfowles
Oof, I'm limited to building at -j4 due to memory constraints, not CPU
constraints.

~~~
sqeaky
How much memory do you have, and what are you building? The last time I was
memory constrained, I was trying to build Clang on a Raspberry Pi.

~~~
lfowles
16 gigs working on internal code, but apparently the culprit is
boost::units....

------
jokoon
A CPU that large reminds me of the famous remark made by Grace Hopper about
how light can move about 30cm in one nanosecond, which I guess theoretically
means a CPU could have some kind of maximum size.

Of course, since current CPUs contain multiple cores, it doesn't apply.
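
Back-of-the-envelope version of that remark, as a quick sketch (vacuum speed of light; real signals in silicon and copper are a good deal slower, so the practical limit is tighter):

    # Sketch: how far light travels in one clock cycle, in vacuum.
    c = 3.0e8  # speed of light, m/s
    for ghz in (1, 2, 4):
        cycle = 1.0 / (ghz * 1e9)  # seconds per cycle
        print(f"{ghz} GHz: {c * cycle * 100:.1f} cm per cycle")
    # 1 GHz -> 30.0 cm, 2 GHz -> 15.0 cm, 4 GHz -> 7.5 cm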

~~~
Aardwolf
Instead of placing the dies next to each other on a flat surface, the dies
could be stacked on top of each other, or the CPU made spherical, for optimally
close distances. And the RAM on the motherboard could also be arranged in a
circle or sphere around it.

Is there any concern other than cooling with this?

~~~
ihsw2
Well, for one thing, _light doesn't bend_. There's something to be said for
the flat rectangular shape.

~~~
d3ckard
Really? I'm pretty sure Einstein would disagree.

~~~
db48x
If your CPU is dense enough to gravitationally bend light by any significant
amount, then you might have other problems.

But we bounce light off of electric fields all the time, so a flat chip is not
really required (even if you made an all-optical chip). As others have said,
it's all about heat distribution.

------
shmerl
I'm still waiting for this bug to be fixed:
[https://community.amd.com/message/2796982](https://community.amd.com/message/2796982)

Note: this isn't a bug in gcc; it looks like a hardware bug related to
hyperthreading.

~~~
InTheArena
Just a question (this is how out of touch I am): I thought most Linux
compilation had moved over to an LLVM toolchain?

~~~
unmole
Not even remotely close. No major distribution uses the LLVM toolchain.

~~~
shmerl
If you are talking about the kernel itself, I think Debian did some work to
support building Linux with clang. But last time I checked, it wasn't ready.

See also: [https://wiki.debian.org/llvm-clang](https://wiki.debian.org/llvm-clang)

~~~
cyphar
Distributions compile more than just a kernel. That's just one of several
thousand packages we build. Usually the problem (even with something as simple
as PIC/ASLR) is that old projects might have problems compiling.

We just switched openSUSE Tumbleweed to GCC7 a few months ago, and it was a
very large amount of work. I can't imagine how much work it would take to
switch to LLVM.

~~~
DannyBee
Counterpoint: We compile more than one entire Linux distribution (one is
Gentoo-based, others vary) with LLVM with essentially no issues (the number of
local patches is incredibly small, and most are actually just fixes for buggy
code that clang happens to detect, which we are waiting for upstream to
accept).

Is it work? Sure. But it's not like "years of real work". It's about 5-10
people for a quarter or two.

~~~
cyphar
It is definitely possible, but I was saying that switching to LLVM would take
a while, and that even GCC updates are painful. 5-10 people for 6 months is a
fairly serious time investment for a community project (especially since we
have a lot of other things on our plate).

------
InTheArena
What I am going to be interested in is this versus the EPYC parts. I think the
higher clocks are mainly there to achieve some of the more insane (and useless)
FPS counts for games. If you are willing to ramp the FPS down to a number that
your monitor can actually display, it may be better to find a general-purpose
EPYC motherboard and chipset, and use that. Especially if homelab / big data /
compiling Linux / occasional gaming is your cup of tea.

~~~
minikites
There's still plenty of room to grow into VR and/or 120Hz/240Hz monitors at
4k/5k.

~~~
onli
Resolution has nothing to do with it. To the contrary: you need less CPU power,
since your FPS limit will come from the GPU, not the CPU. Similar for VR:
current CPUs are strong enough to hit the 90 FPS current VR headsets want.
240Hz, maybe, but those processors won't be faster in games than an Intel Core
i7-7700K, as games don't use that many cores.

~~~
minikites
So then what is the original comment referring to?

~~~
onli
I think it is comparing Threadripper to other workstation CPUs, or rather, it
explicitly compares it to EPYC. Since Threadripper has a high clock, it might
work better in games as well. The parent is right in saying that, once you
adjust your expectations, that additional gaming performance is unlikely to be
necessary.

Reminds me of using one of the Xeon E5-2670 for gaming, as in
[https://www.techspot.com/review/1155-affordable-dual-xeon-
pc...](https://www.techspot.com/review/1155-affordable-dual-xeon-pc/).

~~~
dis-sys
Posting from my dual Xeon E5-2670 here: it is a great machine for gaming; look
at those average FPS benchmarks in your link.

------
johnbellone
It's been a while since I've built a computer with my own two hands, but either
that man's hands are really small or, hot damn, AMD Ryzen CPUs are huge.

~~~
losvedir
For what it's worth, I interpreted that image to be a still from the
announcement video, which from the article:

> _Today’s announcements, accompanied by a video from the CEO of AMD Dr. Lisa
> Su_

So, given that she's an Asian woman, it might be a smaller hand than you were
thinking. Nevertheless, it does seem to be a very large component (hence the
joke image at the bottom of the article).

~~~
floatboth
Of course it's her, no one else publicly showed a Threadripper yet :)

It is large — 4094 pin socket!

------
arcaster
I'm still waiting for a more diverse set of synthetic and real-world
benchmarks. It'll be interesting to see how IPC performance holds up with
Threadripper; however, I think the most interesting debate will be whether the
1920X or the lowest-end EPYC CPU is the better buy.

Unfortunately, even as an enthusiast, $799 is more than I'm willing to spend on
a CPU. I'm also still hard pressed to build a Ryzen 1700 system, since I can
purchase an i7-7700 from MicroCenter for about $10 less than the Ryzen part
(and get equal or better general performance with notably better IPC).

~~~
tothepixel
Cinebench scores are fantastic for speculating on how these CPUs will work in
the context of 3D rendering/simulations. That personally has me very excited.

~~~
LeifCarrotson
Agreed - 3062 for the TR-1950X and 2431 for the TR-1920X are colossal results.
2167 for the i9-7900X is still huge!

At first I thought that (given it's AMD running the test) they'd hamstring the
Intel CPU with an underpowered cooler, but it looks like they gave it a big
Corsair H100i 240mm water cooler. If that's not enough for a 140W CPU, I don't
know what is - it was probably on turbo the entire time. They gave their own
CPUs an "EK Prototype SP3" cooler, though - is this it?
[https://www.ekwb.com/shop/ek-kit-s360](https://www.ekwb.com/shop/ek-kit-s360)

Regardless, the message seems to be that if you want top performance, get a
big cooler.

~~~
pzduniak
The "EK Prototype SP3" cooler was probably a prototype of a water block
designed for the X399 platform. The chip is massive, so the existing solutions
probably don't fit it.

------
strong-minded
A simple formula: the 1920X beats the 7920X by a few hundred points in
Cinebench and by a couple of hundred dollars in the pocket.

I wonder if the 'Number Copy War' (started with the X299 vs. X399 Chipset)
will continue throughout the year.

------
thoughtexprmnt
Since the article does refer to these as desktop CPUs, I'm curious what kind
of desktop workloads people are running that could benefit from / justify
them?

~~~
IanCutress
Intel likes to market these CPUs to people who 'mega-task'.

Video Editing while Gaming while Streaming while Hosting a server while
Dealing with encrypted data while Multi-monitor while Contributing to science
while while while

etc

AMD hasn't really declared which market they are after yet. It's likely going
to be a big part of the launch event.

~~~
Theodores
Mega-task - as in having a hundred tabs open, a dozen Excel spreadsheets, plus
Outlook. Maybe there will be lots of cores since there is no more Moore's law:
32-core computing will be the new Core i5, with 64-core computing being the new
i7. Some software might run, albeit slowly, on older computers that only have
16 cores.

In this future the choice of 64-core computing is one of those things like car
choice. Obviously we need cars electronically restricted to 155; we would not
consider a car that only did 120 or so. There is a theoretical scenario where
that extra power is needed, but it is not a rational decision based on
cost-benefit analysis.

Therefore you can expect a change over time to where core count becomes
marketing.

------
drewg123
It is great that they're announced for an August release, but when can I
actually _BUY_ one?

Given that Naples (aka Epyc) was "released" in June, I went looking to
actually buy one, and I could not find a single place selling them. Not
Newegg, nothing local, nothing in Google shopping, etc.

~~~
bryanlarsen
On July 27, you'll be able to buy an Alienware Area-51 with Threadripper.

~~~
IanCutress
You'll be able to pre-order. Still no official word on when it'll ship.

~~~
a012
With the recent flood of CPU SKUs, I suggest waiting until at least the end of
this year for all of them to be widely available.

~~~
mizzack
All three R7 SKUs were readily available at launch, and so were the R5s at
their launch. Now mobos, on the other hand...

------
dis-sys
A $999 list price translates to a $1100-$1150 retail price in countries where
you have a GST-style tax; then you factor in an expensive motherboard plus heat
sink and 64GB of RAM, and the upgrade is something like $2k.

The problem is that, with this confirmed return of competition between Intel
and AMD, I am no longer sure whether it is a good idea to upgrade now, as this
is basically the first iteration between the two. Are they going to release
something even better in 6-12 months' time?

~~~
musha68k
Well, the new iMac Pro (8/10/18 core) will _start_ at $5000. I also feel my
development experience getting ever slower on my MBP, and that will be pretty
much a solved problem on a 16-core Threadripper workstation.

I feel like I can't wait for another iteration (and actually don't need to), or
for the _planned_ December release date of the new iMac Pro...

[https://www.macrumors.com/roundup/imac-
pro/](https://www.macrumors.com/roundup/imac-pro/)

~~~
dis-sys
Wow, that 18-core iMac Pro with a 4TB SSD and 128GB of ECC RAM could shoot up
to $15k. I could buy an army of Ryzens at that price.

------
crb002
AMD needs to come out with a few AVX-1024 instructions for vector ops.
Essentially make one core into a GPU that doesn't suck at branching.

~~~
opencl
A "GPU that doesn't suck at branching" is basically what the Xeon Phi is
intended to be, with 72 AVX512-enabled Atom cores per chip. However it costs
over $6000.

~~~
dis-sys
and it is export controlled - the Chinese are not allowed to buy it.

~~~
arcanus
And it is not competitive with GPUs for truly parallel workloads

------
eemax
The comparisons in this article are mostly against the high-end Intel core
line, but these CPUs support server / enterprise type features like ECC
memory, lots of PCI-E lanes, and virtualization features (I think?).

Shouldn't Threadripper be compared to Xeons?

EDIT: Or rather, what I'm really wondering is what these CPUs lack that AMD's
server line (EPYC) have.

~~~
IanCutress
EPYC has double the PCIe lanes, double the DRAM channels, and will have
enterprise level support. Threadripper is classified by AMD as a Ryzen family
product, and is consumer focused (or super high-end desktop focused) rather
than enterprise focused. TR will be on shelves, EPYC will not.

AMD's 16-core EPYC part (the 1P 7351P) is around $750, but supports 2TB/socket
and 128 PCIe lanes in exchange for a good chunk of frequency (2.4G base, 2.9G
Turbo). Threadripper is also single socket only - most of EPYC is 2P.

Though given Intel's pricing, if AMD has the ecosystem, then the mid-range of
the Xeon line might migrate to TR/EPYC.

------
sergiotapia
I'm waiting for these to launch so I can build a great multi-threaded
computer. My Elixir apps are waiting for all these threads! :)

Does anyone know if Plex is going to see much benefit transcoding video files
on the fly?

~~~
bhouston
> Does anyone know if Plex is going to see much benefit transcoding video
> files on the fly?

Does the underlying avcodec / ffmpeg support huge thread counts?

~~~
Veratyr
It's not ffmpeg that matters here but libx264, which does support
multithreading. I ran a quick test on my PC (workstation class Xeon with
6c/12t) and found that with 1/2/4/8/12t it took (wall time)
28.5/14.7/9.7/7.2/6 seconds to encode 10s of video at 1080p. So it is able to
take advantage of multiple cores but there are diminishing returns.
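
Something along these lines will reproduce that kind of test, as a rough sketch (assumes `ffmpeg` built with libx264 is on your PATH and `input.mp4` is a short 1080p clip; `-threads` caps x264's thread count here and the null muxer discards the output):

    #!/usr/bin/env python3
    # Sketch: time libx264 encodes of the same clip at increasing thread
    # counts to see the diminishing returns.
    import subprocess
    import time

    for threads in (1, 2, 4, 8, 12):
        start = time.monotonic()
        subprocess.run(
            ["ffmpeg", "-y", "-i", "input.mp4",
             "-c:v", "libx264", "-preset", "medium",
             "-threads", str(threads),
             "-f", "null", "-"],
            check=True, capture_output=True)
        print(f"{threads} threads: {time.monotonic() - start:.1f}s")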

For Plex I'd really focus on using hardware encoders more than anything else.
NVENC and QuickSync are both widely available and able to do decent H.264/HEVC
encodes far faster than any CPU, and on a home network you're not really
bandwidth constrained, so the degree of compression doesn't matter too much. On
a mobile network I'd go for Plex's offline optimization.

------
gigatexal
As soon as I have some funds I will be getting one, but only if ECC is
supported -- what would be even better is if one could do a mild OC on the part
but still have ECC.

------
balls187
Quad channel, so you have to install RAM as 4 matched sticks at a time?

~~~
jhasse
If you don't want to run in Dual or Single channel mode - yes.

~~~
blattimwind
Kinda.

Modern (newer than ~10 years) memory controllers are quite flexible, so you
can pretty much populate any number of DIMMs in any slots (apart from wrong
ordering, though that is not an IMC restriction but rather a firmware one, for
stability).

However, disregarding the recommended populations usually means that the MC
can't interleave the channels properly and must also choose other settings in a
lowest-common-denominator way. This severely reduces the performance of the
memory subsystem, _which in Zen includes cache coherency and all inter-core
communication_.

~~~
jhasse
Interesting, thanks :)

------
api
I did a lot of work with artificial life and evolutionary computation in the
early 2000s. Wish we had these chips back then.

------
DonHopkins
How long does it take to drip a threa?

------
jhoutromundo
Opteron feels o//

------
mrilhan
I recently tried to go the AMD/Ryzen route. I like an underdog comeback story
as much as the next guy.

But be warned: Motherboards that "support" Ryzen do not in fact support Ryzen
out of the box. You have to update the BIOS to support Ryzen. How do you POST
without a working CPU, you ask? Who knows? Magic, possibly.

I still don't understand how AMD expects their customers to have more than one
CPU (and possibly DDR4-2133 sticks) to be able to POST and update the BIOS.

I returned everything AMD and went back to safe, good ole Intel. Worked on
first try. I'm never getting sucked into AMD hype again.

Also, when I went back to return the AMD components to Fry's, the manager said
they were aware/used to getting Ryzen returns because of this.

~~~
floatboth
What?!

That sounds totally bogus. You don't have to update the BIOS to support Ryzen.
Ryzen is the first CPU on AM4.

You don't need "DDR4-2133 sticks". Ever. Any DDR4 sticks can run at 2133,
that's literally the DDR4 standard, everything above is overclocking. 2133
rated sticks are the cheapest (and worst).

I got my R7 1700 and mainboard yesterday. Everything worked on first try.
Speaking of DDR4, my 2400 rated (Hynix) sticks overclocked to 3200, with
decent timings, even :) (Well, decent for Hynix.) My previous system
(overclocked non-K Skylake) couldn't run these sticks above 2450.

~~~
mrilhan
Why... why would I lie? What could I possibly have to gain from lying about
this? I only have HN karma to lose, which I don't have much of to begin with.

You can, before simply defending AMD, research and see for yourself that
required BIOS updates are indeed an issue.

I tried it with an MSI Tomahawk and an Asus B350 Prime. I had a Ryzen 5 1600
and a Ryzen 7 1700, as well as a pair each of DDR4-2133 and DDR4-3000 modules,
tried every combination, and was never able to POST.

I switched to an Intel CPU and an Asus Intel motherboard with the same
DDR4-3000 RAM and POSTed on the first try. I'm saying that to 'prove' that the
other components were fine.

I'm glad that it worked out for you, and I wish it had gone smooth for me too.

[http://www.tomshardware.com/answers/id-3387164/am4-motherboa...](http://www.tomshardware.com/answers/id-3387164/am4-motherboard-ship-ryzen-support.html)

[https://www.reddit.com/r/Amd/comments/66b7vu/to_all_ryzen_5_...](https://www.reddit.com/r/Amd/comments/66b7vu/to_all_ryzen_5_owners_did_you_have_to_flash_bios/)

... so many of these.

~~~
cbraz
You messed something up. My 1700 and Asus B350 Prime were able to POST without
issues and without any BIOS updates, and I got both a couple of weeks after
release. Most BIOS updates have only been required for overclocking
improvements.

