
Ryzen is for Programmers - old-gregg
http://timothypratley.blogspot.com/2017/03/ryzen-is-for-programmers.html
======
nxc18
I just recently completed a build myself. It was pre-Ryzen so AMD wasn't an
option at the time. I went with the Intel i7-7700 (quad core) at 3.6 GHz.

I was surprised by two big things: high-performance devices are extremely
affordable nowadays and processors have gotten a lot better despite minimal
changes in clock speed.

I went on PC Part Picker and built my 'budget' device and my 'fantasy' device.
It turned out they were just about the same. I couldn't really spend more than
$1500 for anything better without jumping to server-grade parts.

Yesterday I played Halo Wars 2, listened to Spotify, kept Word, Excel, Edge,
Chrome, & WEKA open, and was running an embarrassingly inefficient data mining
program in Python via WSL, while running a Jenkins build server in an Ubuntu
Server VM, and running TeamCity in a Windows Server VM.

Even on the Intel chip, which is presumably less good at multitasking than
Ryzen, I just can't throw enough at it to experience anything resembling
slowdown. 8 logical cores still seems to be plenty for most users, but I could
see with even more VMs that one might need more. As an aside, it is incredibly
nice to be able to run VMs without even thinking about their resource
consumption - it's like having my own little cloud.

I wonder if maybe Intel's marketing for their chips could use improvement. On
paper, my MacBook Pro with 16GB RAM and a 3GHz i7 shouldn't be that much worse
than my desktop with 32GB RAM (most of which is unused) and a 3.6GHz i7. The
difference in practice is night and day. I'm losing faith in the concept of a
laptop as a developer's primary device (especially with SSH & Remote Desktop
being so good these days).

~~~
jrockway
My impression is that Moore's law is totally dead and CPUs have not changed in
any meaningful way in 4 years. My old i7-4771 feels just as fast as my new
i7-6950x. The 10 cores are nice when you need 10 cores though.

~~~
tormeh
I have an FX 8350, and I really want to upgrade. Per-core performance of the
newest processors is approaching 2x of mine. Trouble is, except for
Civilization, no game actually needs a faster processor. It's really hard to
justify. Processors just last a really long time these days.

~~~
distances
Cities: Skylines with large cities benefits greatly from a higher core count,
as the citizen/vehicle simulation parallelizes nicely. I have no experience
with how well it behaves on Ryzen, but at least it utilizes all cores on my
trusty old Sandy Bridge i7.

~~~
tormeh
The 8350 has no shortage of cores, at least as long as there's little floating
point calculation going on.

------
dis-sys
I bought my Q6600 quad core processor 10 years ago. 10 years on, Intel still
refuses to make 8 core processors mainstream on PCs. Luckily, I don't have to
listen to Intel's stories about why 8 cores aren't useful for PCs; I can pay
AMD $300 and get Ryzen.

I just bought Intel Optane memory this morning, installed it in my AMD Ryzen
dev machine, and it works perfectly without any issue. Interestingly,
according to Intel, this is not supposed to happen - you are expected to use
both their latest processor and chipset to run Optane memory!

Now you can tell me who is preventing people from accessing the best tech at
reasonable prices.

~~~
maksimum
> Just bought Intel Optane memory this morning, installed it in my AMD Ryzen
> DEV machine and it works perfectly without any issue. Interestingly,
> according to Intel, this is not supposed to happen - you are expected to use
> both of their latest processor/chipset to run Optane memory!

How are you measuring whether Optane is working? AFAIK it's supposed to
improve boot times and application load times as it learns over a few runs,
without configuration. Are you observing this improvement, or does the Optane
just show up as a regular storage drive?

~~~
wtallis
The Optane Memory hardware is an NVMe SSD. The caching is done in software,
and Intel's caching software is locked to their most recent consumer-grade
platform (and Windows 10 64-bit, and only caching the boot volume). Using the
caching is optional; the drive also makes for a very fast primary storage
drive, if you can survive on a mere 32GB. Or you can use non-Intel caching
software for Windows or Linux, and then the only system requirement for
Optane Memory is that you have PCIe lanes to connect it to.

~~~
dis-sys
You mean Intel doesn't want to tell the _full_ story? I have to agree with you
on that. Official statement from Intel included below -

"A system that is Intel® Optane™ memory ready includes: a 7th Gen Intel® Core™
processor, an Intel® 200 series chipset, M.2 type 2280-S1-B-M connector on a
PCH Remapped PCIe* Controller and Lanes in a x2 or x4 configuration with B-M
keys that meet NVMe* Spec 1.1 and System BIOS that supports the Intel® Rapid
Storage Technology (Intel® RST) 15.5 driver."

[http://www.intel.com/content/www/us/en/architecture-and-technology/optane-memory.html](http://www.intel.com/content/www/us/en/architecture-and-technology/optane-memory.html)

Did they mention things like

"it also makes for a very fast primary storage drive, if you can survive on a
mere 32GB"

or

"you can use non-Intel caching software for Windows or Linux and then the only
system requirement for Optane Memory is that you have PCIe lanes to connect it
to"

~~~
floatboth
They use "Intel® Optane™" to refer to the whole "solution" including their
awful Windows-specific hardware-supported caching hack (Rapid Storage
Whatever), because that's how they're going to sell it to consumers. Of course
the drives are normal NVMe and you can use them as L2ARC+ZIL in ZFS :D

~~~
dis-sys
I wouldn't call it a normal NVMe drive. Companies don't make/sell NVMe SSDs
with such awful sequential R/W performance. On the other hand, its random
write performance at low queue depths is not matched by any competitor.

Its $45 unit price is another interesting factor for me - I only need
16GBytes, but I need 16GBytes each on many machines. I am not really aware of
any other tier-1 brand NVMe SSDs available at such a size/price. ;)

------
slackingoff2017
As someone who's been near the server space, seriously fuck Intel. Blocking
ECC memory from consumer CPUs is monopolistic bullshit; my next PC will be AMD
even if it's a bit slower. At least AMD doesn't switch sockets every 6 months
and cripple my hardware on purpose.

~~~
thomastjeffery
Have you actually experienced enough memory errors that you need ECC?

~~~
slackingoff2017
You can't run a production database without ECC. Well, you could, but you
probably wouldn't have a job very long.

Memory gets less reliable with each process shrink. They keep raising the DRAM
refresh rate with various tricks but we're always right on the edge of having
memory with so many errors that it's unusable. The size of memory cells has
decreased to the point that a stray subatomic particle could cause multiple
bit errors.
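
For context, this is the failure mode ECC is designed to catch: a few extra
parity bits let the memory controller detect and correct a single flipped
bit. A toy sketch using a Hamming(7,4) code - real ECC DIMMs use a wider
SECDED code over 64-bit words, but the principle is the same:

```python
# Toy illustration of single-bit error correction, the core idea behind ECC:
# a Hamming(7,4) code protects 4 data bits with 3 parity bits.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Correct up to one flipped bit, return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                            # a stray particle flips one bit
assert decode(cw) == word             # the code recovers the original data
```

Without the parity bits, that flipped bit would silently corrupt the data.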

Just wait for the next big solar storm. We don't know how often they strike
the Earth, but it seems to happen every 20 years or so. When it does, we're
going to have a massive flash of corrupt data from machines not using ECC.
Probably worse than Y2K issues.

~~~
thomastjeffery
> Just wait for the next big solar storm.

So when that happens, it will be important to have the option.

Until then, since I have no reason to run a "production database" on my
_workstation_, why would I want ECC?

> we're always right on the edge of having memory with so many errors that
> it's unusable.

That's fine by me. Living here on "the edge", even with _overclocked memory_,
my system is the _most stable_ system I have ever owned.

I really can't see any reason to use ECC on my desktop apart from fear of
hand-wavy potential issues.

Is my personal experience so unique? Are any of you _actually_ experiencing
instability that you are sure is caused by physical memory errors that are
fixed by using ECC memory?

------
xuejie
While the solution here works, an alternative is to find a used Intel S2600CP
from some random data center together with one or two E5 CPUs. You get many
more upgrade possibilities for the rig this way.

I recently built such a PC for roughly $950. It includes a used E5-2650c2, a
used S2600CP board, a 750W power supply, 32GB of DDR3 ECC memory (I bought
the CPU, motherboard, power supply and memory from one vendor, so I don't
know if the memory is new), a brand new GTX 1070 and a brand new 128GB SSD.
The good part is the board has 2 CPU slots, 16 memory slots and 6 PCIe slots,
so if I feel the need, I can easily buy another E5-2650 CPU, or 12 8GB memory
sticks to bring total memory to 128GB (or more if I choose to upgrade the
current 4x8GB memory sticks as well), or add more graphics cards for machine
learning purposes.

Of course this won't cover all use cases, since server CPUs tend to have more
cores rather than high frequencies (mine has 8 cores, and 16 threads with HT,
but only at 2.00GHz), which might be bad for high-end games. But for
programming tasks as described in the article, this might be a better choice.

~~~
luca_ing
Nice build. I've been considering something similar every now and then.

The argument that makes me pull back though is my fear that its power draw
would be considerable.

How much power does your system draw? Idle, and/or under load?

~~~
xuejie
TBH I haven't tested it yet since the GTX 1070 card is still being shipped,
but I guess ihattendorf has provided detailed information :) If I can
complete the setup before HN locks me out of editing, I will edit this post.

~~~
xuejie
Okay, so I managed to complete the build. At idle it draws about 80W, but
note that I have a 1070 card, which draws around 37W according to nvidia-smi,
so if you don't need a graphics card, I'd say 40W - 50W is a fair estimate
for idle.

~~~
luca_ing
Thanks for taking the time to reply :-)

Very interesting.

------
kaosjester
This article reads like a rewrite of the wiki over at /r/buildapc, and I'm
not sure I see how much of it leads to his conclusion: any modern $1500
desktop is going to outperform a two-generation-old laptop with a quarter as
much RAM. That doesn't make the Ryzen `for programmers.` I came in expecting
some crazy assembly insights, but instead this is just a PC build log.
Nothing really supports the title of this article: it's just clickbait
without any real support. And, at this point, most people reading Hacker News
know it's cheaper to build a PC than buy one; those that still don't are
doing it because a thousand dollars usually buys you a lot of warranty and
convenience.

~~~
codinghorror
The multi-core angle is the main "programmer" bit of Ryzen. If you are writing
heavily multithreaded code you will get 2x the cores for your $.
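The kind of code that benefits is anything that spreads CPU-bound work across
a pool of workers - a minimal sketch (the workload function here is purely
illustrative, not from the article):

```python
# Spreading CPU-bound work across a pool of workers: with 2x the cores,
# wall-clock time for this kind of workload roughly halves.
from multiprocessing import Pool
import os

def busy_work(n):
    """A stand-in CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [200_000] * 16             # 16 independent pieces of work
    with Pool(os.cpu_count()) as pool:  # one worker per logical core
        results = pool.map(busy_work, chunks)
    print(f"{len(results)} chunks done")
```

The same logic applies to parallel builds (`make -j`), test runners, and
container-heavy dev setups: if the work divides cleanly, core count is what
you're paying for.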

~~~
breul99
I still don't really see the advantage of Ryzen in this case. It would be
cheaper/better to get a couple of used Sandy Bridge era (or newer in some
cases) Xeons off eBay with a dual socket motherboard for the multithreaded
case, or get an i7-7700 for single threaded performance while still having a
reasonable number of cores+threads to play with.

~~~
hhandoko
For me Ryzen brings a lot of benefits: easier-to-find parts, consumer-level
parts pricing, and lower TDP.

I'm running a dual Xeon as you mentioned, bought as ex-fleet parts at less
than half the price of new ones. Several issues I experienced:

\- Lack of motherboard options. I had to purchase a new motherboard at a high
price, since the ones that support dual Xeons are either in an incompatible
form factor or simply out of stock. I settled on an Asus Z9PE-D8 WS with an
SSI-EEB form factor.

\- Outdated BIOS. I had to order a new, pre-flashed BIOS chip since the BIOS
that came with the motherboard refused to boot with my CPU and memory combo.

\- Hard-to-find suitable ECC RAM. The motherboard only supports limited RAM
(speed + latency) configs, and finding those is becoming harder. Availability
looks seasonal at times.

\- Needs a capable power supply. One thing that people often overlook is the
need for a proper PSU. I had to upgrade to one that supports two CPU power
connectors.

~~~
roel_v
I'm running a similar setup to yours (IIRC I even have the same mobo), and
I'm quite happy with it. I got new Xeons (E5-1630 @ 3.7GHz); RAM
compatibility was not something I found an issue (we're talking +/- 5 months
ago here). Power supply - I got a 1000W PSU anyway to power several GPUs; I
guess at that level they come with several connectors standard, so I just
didn't run into it as an issue. To be honest though, I went for Xeon to get
ECC, so if these new AMDs support that, then maybe next time...

What is SIMD support like for Ryzen? Does it do avx2 or something similar?

It's true that server components are generally loud. If you have the room, I
recommend my setup: a (home-built) rack in the basement, with long
DisplayPort cables (and USB extension cables) run to the desk. Or build a
closet around a rack in an office, which can be soundproofed. This does push
it to the next level in terms of work involved, obviously (and cost as well,
if you don't have the tools or time to DIY most of it).

~~~
AlphaSite
It does AVX2 at half speed, AVX1 at full speed, but in return it runs at full
clock rates with AVX, so it's not as bad as it sounds.

~~~
emn13
Most programming-related workloads I can think of hardly benefit from AVX2.
Also, the additional power draw while using AVX on Intel is considerable,
despite the clock rate drop, so perf/watt may not be as far behind as one
might initially think.

This downside is likely to become slightly more serious as time goes on and
more software uses AVX2, but it's certainly not crippling.

------
mirekrusin
Funny, I just finished building mine - same Ryzen 7 1800X, switching from a
MacBook Pro/iMac to Ubuntu as well, also got 3200MHz RAM, which likewise
booted at 2333MHz (I will need to wait for a motherboard update to get full
speed, I guess), M.2 SSD... I wonder if it's becoming a trend (moving from
macOS to Linux)?

~~~
kogepathic
> got 3200MHz RAM also, which booted into 2333MHz as well (will need to wait
> for motherboard update to get full speed I guess)

The highest official DDR4 frequency is 2400MHz. Anything beyond that is
technically overclocking.

Testing has shown that there is very little performance benefit above 2666MHz.
[0]

The long and short is that manufacturers are happy to sell you 3200MHz RAM,
but you're paying for speed you'll likely never use: your CPU memory
controller needs to be stable at those overclocked speeds, and the performance
gains are minimal.

[0]
[https://www.youtube.com/watch?v=D_Yt4vSZKVk](https://www.youtube.com/watch?v=D_Yt4vSZKVk)

~~~
onli
While it may be technically overclocking, the default RAM clock of Ryzen
boards is DDR4-2666. If you look at the right benchmarks you will see that
faster RAM can bring enormous performance benefits, even in games, an area
where the contrary was claimed over the last few years. Some examples are
[https://www.youtube.com/watch?v=G5ejBlynOV8](https://www.youtube.com/watch?v=G5ejBlynOV8)
and
[https://www.reddit.com/r/buildapc/comments/5agh8f/skylake_cp...](https://www.reddit.com/r/buildapc/comments/5agh8f/skylake_cpu_and_ram_gaming_impact_benchmarked/).
That's for Intel, but the same is true for AMD and Ryzen.

~~~
kogepathic
_> you will see that faster ram can bring enormous performance benefits_

Okay, so looking at the spreadsheet of results from the Reddit thread [0]:

\- Overclocking the RAM from 2133MHz to 3000MHz resulted in an average 8%
increase in FPS. That's a 40% higher frequency netting, on average, 8% more
performance.

\- They only tested two speeds: 2133MHz and 3000MHz

I would imagine that the difference between 2400MHz (officially the top speed
of DDR4) and 3000MHz would be even less than the 8% they found.

Let's assume, based on no evidence whatsoever, that the performance gains are
linear. For 40% increase in RAM clock, you get 8% performance gain.

So from:

\- 2400MHz->3000MHz: 5% increase in performance

\- 2666MHz->3000MHz: 2.5% increase in performance

You also have to account for the fact that:

\- Higher speed RAM costs more

\- Overclocking will consume more power and generate more heat

Based on the above, where the average benefit from a 40% RAM overclock was a
mere 8% performance gain, I'm just not seeing how "enormous" the performance
benefits are.

[0]
[https://docs.google.com/spreadsheets/d/1LKmt9FDEjFXeu3USK1bz...](https://docs.google.com/spreadsheets/d/1LKmt9FDEjFXeu3USK1bzjzRidhg3frgwWx7mPUlOWEc/edit#gid=1267098471)
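The extrapolation above can be written out explicitly (assuming, as stated
with no evidence, that the gain is proportional to the fractional clock
increase - a generous assumption):

```python
# Scale the Reddit result (40% higher RAM clock -> +8% average FPS)
# down to smaller overclocks, assuming gain is proportional to the
# fractional clock increase.

def fps_gain(base_mhz, target_mhz, scale=0.08 / 0.40):
    """Estimated FPS gain for a RAM overclock from base_mhz to target_mhz."""
    return scale * (target_mhz - base_mhz) / base_mhz

print(f"2400 -> 3000 MHz: +{fps_gain(2400, 3000):.1%}")  # +5.0%
print(f"2666 -> 3000 MHz: +{fps_gain(2666, 3000):.1%}")  # +2.5%
```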

~~~
onli
8% is huge for something that was said to be of no relevance at all in that
specific workload. And it is 8% for a small price difference. RAM prices are
in flux currently, but right now I see 16GB of DDR4-2400 for 119€ and 16GB of
DDR4-3000 for 139€. Some time ago the prices were almost identical.

Also, please watch the video as well. The practical effect of faster RAM on
frametimes is immense and sometimes a lot bigger than you'd expect. Moving
from just barely reaching 60 FPS in the Crysis train sequence to a
comfortable 80, on an i3, is very nice. There is also the Ryse sequence
(around minute 7) where performance of the i3 doubles, just because of the
faster RAM.

------
danielparks
It's ridiculous how well the 2013 MBP does.

Of course, the last four years have had much less focus on core performance,
or even on putting more cores into pro-sumer machines. Apple in particular is
all about thinness, battery life, and passive cooling.

That's not a complaint. For most people, including me, those are good trade-
offs.

~~~
old-gregg
Hm... not sure I agree. I have a 2015 MBP and I was blown away by the nearly
2x difference vs. a comparable desktop with an identical amount of RAM and
the same number of CPU cores. My workload is compiling Golang code. The
desktop is twice as fast, and without the heat + fan noise drama.

Laptops, or at least the MBP, seem to be built to mostly run at idle, IMO.

~~~
look_lookatme
I use Chrome, JetBrains IDEs and emacs all day and recently made the switch to
a beefy Linux desktop after 13 years of Apple laptops and I'm blown away by
how much faster everything is. I guess I was living in denial about the
performance differences.

------
kondbg
I also built a Ryzen machine for development. It's great when it works, but
I've found that Ryzen is unstable on Linux (Ubuntu 16.04). Every once in a
while, I get kernel errors like

    NMI watchdog: BUG: soft lockup - CPU#9 stuck for 23s!

which requires a hard reset. This behavior doesn't occur on Windows, though,
so if you use Windows for development, you should be good.

~~~
Pengwin
Did you try a later kernel? Out of the box, Ubuntu 16.04 is on the longterm
4.4 kernel. It looks like some Ryzen features and patches were added in 4.10,
and they probably were not backported to the longterm kernels.

~~~
look_lookatme
Not OP, but I just built a Ryzen 1600 box. I had _more_ instability when
running 17.04 and settled on 16.04, which has been mostly fine, but it still
hard crashes occasionally.

~~~
bdcravens
> hard crashes occasionally

This sounds disturbingly unacceptable yet accepted

~~~
dom0
Nothing out of the ordinary for a new platform. While Linux often takes
longer to work this stuff out, Windows has often had similar issues in the
past as well.

~~~
shady-lady
Not sure if intentional, but this is an AMD issue.

Having experienced AMD driver instability first hand on both Windows & Linux,
AMD have lost my custom for the next 10 years.

I had multiple recurring driver crashes using their mainstream graphics card
line (RX 380) on Windows 10 - drivers crashing even during non-intensive
tasks like web browsing.

For the record, I'm not sure Nvidia is any more stable either. The only
consistently stable graphics provider over the years has been Intel's
on-board graphics.

------
dman
Am waiting for Naples to ship before assembling my workstation. Hope it shows
up soon.

~~~
KSS42
32 cores / 64 threads?

~~~
maksimum
8 memory channels too - just over 150GB/s with 2400MHz RAM.
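
That figure is just channels x transfer rate x bus width (assuming 8 DDR4
channels at 64 bits each, as described for Naples):

```python
# Peak theoretical bandwidth = channels x transfer rate x bytes per transfer.
channels = 8                 # DDR4 channels per Naples socket
transfers_per_sec = 2400e6   # DDR4-2400: 2400 MT/s
bytes_per_transfer = 8       # each channel is 64 bits wide

bandwidth_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 153.6 GB/s
```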

------
minhajuddin
I have a Lenovo Y510p with a 3rd generation Core i7. I wish I could get a new
laptop/desktop, but my laptop performs just so damn well. The only changes I
made were adding a good SSD, putting in more RAM, and replacing the CD drive
with a 1TB hard drive. For my workloads (typical web dev stuff) it performs
quite well.

~~~
tgb
I have the same laptop. In the future I'd never buy one like it again - I
think it sits at a bad point in the power-mobility trade-off for me. But like
you said, it's just too good to warrant replacing anytime soon. Everything is
extremely fast since I put in an SSD, and I get super confused by the posts
we see all the time saying that as computers get faster, software gets
proportionally more bloated. Using this machine is way smoother than anything
I grew up with.

But in the future I want a desktop and a highly mobile laptop, not an
awkwardly immobile laptop. These large laptops were nice for LAN gaming in
college but I just don't do that anymore.

~~~
minhajuddin
Yeah, I've had the same experience mobility-wise. It is a pretty heavy
laptop. I always have it plugged in and on a table, never in my lap :)

------
kevin_thibedeau
> Cooling $100 Cooler Master MasterLiquid Pro 280

99% of programmers don't need a machine with liquid cooling. Even if they are
targeting high performance hardware, development work can be run from a more
modest machine.

~~~
mikekchar
He appears to be trying to build a fairly silent machine as well, so the
liquid cooling will help.

Does he need that much power, though? I certainly wouldn't, but for what he
is doing I can see it. He's running 12 containers in VirtualBox. Avoiding
VirtualBox would help a lot, but depending on what he's doing, he may not be
able to (and on Mac, I think you have no choice; he may not realise that
he'll get tons better performance without it). On top of that he's doing
Clojure (presumably with power-hungry development toys) and lots of video
conferencing. I often pair program remotely with video conferencing and it
can eat half of your CPU(s). Mumble and tmux is a _much_ better way to go,
but again, setting up and getting used to the tooling is not trivial.

$2K and building your own machine vs. doing some pretty serious work trying
to lighten the development environment - I can see an argument for it, even
if it's not the way I would choose to go.

~~~
ferongr
All-in-one watercooling systems are not more silent. The pump is audible and
the lack of radiator surface means you'll hear the fans when they ramp up.

~~~
egeozcan
My motherboard has a liquid cooling setting for managing fans. I can rarely
hear the pump, and the fans are very quiet. It's an AIO system from Corsair
with 2 fans (I'm out and can't remember the model name now).

~~~
glenneroo
Is it anecdotal evidence time? I have Noctua CPU coolers in multiple Fractal
R4/R5 cases; all 4 of these machines run at 100% CPU for weeks on end and you
can't hear anything. I considered going the water cooling route, but what I
gathered from various reviewers is that, like the parent you replied to said,
the pump makes noise, as do the fans when working overtime (not to mention
it's a lot more expensive and time-consuming to install). Maybe it has
something to do with the case? What case do you have? Did you do anything
extra to silence it?

~~~
egeozcan
I have a mid-range Cooler Master case; I don't have the model name in mind,
but I don't remember it having anything specific for sound isolation.

When it comes to these things, I can't find anything but anecdotal evidence.
Even the most respected reviewers usually have a sample size of one.

------
cyrusaf
I don't see this as a comparison between Ryzen and Intel. He compares a
custom-built PC with a 2017 3.6GHz 8-core Ryzen to an older 2013 2.3GHz
4-core i7. He could have replaced his Ryzen CPU with a similarly priced Intel
processor and seen the same performance.

~~~
Sorreah
Not really, as 8 core Intel chips cost significantly more.

------
sundvor
Interesting article, and nothing surprising. With an 8-core CPU and NVMe
storage, the more interesting test would be to run docker-compose _whilst_
profiling a page or whatever. The author hints at it, but hard numbers would
be even more fun. I would fully expect the laptop to grind to a halt in
scenarios where the Ryzen just chugs on as if nothing special was going on.

I built my own 6850K system last June (4.2GHz, 32GB RAM, Samsung 950 Pro
NVMe) and find it great for programming - well, everything, really.

------
cagenut
As much as this isn't for me (my last build was back in the Celeron 300A
era), I really appreciate the detailed writeup and the specific workflow
example benchmarking.

------
partycoder
Building a PC is fun, but there's stuff that can go wrong.

In case you are not familiar with what can go wrong, here is a vastly
incomplete list:

\- You can damage the PSU by selecting the wrong voltage, or you might have a
PSU that doesn't supply enough wattage.

\- You can buy incompatible components that won't work with each other.

\- You can zap a component with static electricity, and you won't notice
until you turn the computer on and hear a little spark sound... that means
something died.

\- You can cause a short circuit by leaving a metallic object lying around
(e.g. a screw).

\- You can get a thermal problem by applying the thermal paste wrong or
mounting a fan so it blows the wrong way.

\- You can crush the processor by installing the heatsink with too much
muscle.

\- You can connect light cables to pins that are not for lights, causing
unintended behavior.

In short, there's a lot you can get from building your own system, but it can
also go wrong... and when it does: no warranty, you are on your own.

~~~
ryan-allen
For a small fee, shops will usually build the PC for you, and you can pick
from a set of their builds. Some shops will do fully custom builds.

Also there is this:
[http://www.logicalincrements.com/](http://www.logicalincrements.com/)

~~~
clarry
A few years ago, my little sister decided to buy herself a gaming PC, built
to spec. The shop provided two tiers of building service, and she went with
the more expensive "Pro" service. When the machine arrived, it did not boot.
The CPU error LED on the board was lit. Peeking through the back IO panel, I
caught a glimpse of CPU pins. Yes, AMD. The CPU wasn't seated properly
_sigh_. They had still managed to force the heatsink onto it, and the metal
bracket was a little bent, so after reseating the CPU properly, the sink sat
on rather loose. I'm surprised the pins were intact and the CPU still worked
just fine.

~~~
jquery
Just out of curiosity, which shop was this? In my opinion, what they did is
basically fraud; any legitimate "Pro" service should run benchmarks and tests
to ensure the hardware is in 100% working condition. Unfortunately, a lot of
places prey on students and the budget-conscious and do a slapdash job no
matter what kind of build you select (speaking from personal experience).

Storytime: I bought my newest computer a couple weeks ago from Puget Systems.
Along with the PC, they shipped a 30-page custom binder that included the
results of all the benchmarks and tests they ran, plus thermal images they
took during the process to ensure optimal airflow. When I hit the power button
for the first time, my desktop appeared within 5 seconds, with all Windows
updates and the latest drivers/firmware installed already. The cabling job
inside the PC was immaculate, and the PC is whisper quiet even though it's
overclocked.

Moral is, if you're going to pay a premium for a pre-built custom PC, do it
from a high quality vendor.

~~~
clarry
SystemaStore OY. Cost of building service was 70 eur.

------
shmerl
8 core / 16 threads CPUs for sane prices are tempting. Building big projects
in less time is useful.

------
Cieplak
I was a bit disappointed to hear that Libreboot will likely never support AMD
hardware :(

[https://libreboot.org/faq.html#amd](https://libreboot.org/faq.html#amd)

~~~
sb057
There's still hope!

[https://www.reddit.com/r/linux/comments/5xvn4i/update_corebo...](https://www.reddit.com/r/linux/comments/5xvn4i/update_corebootlibreboot_on_amd_has_ceo_level/)

------
mamon
What really blows my mind is that we seem to be stuck in terms of single-core
performance. The Passmark score of the Ryzen 1800X is only 25% higher than
the midrange, low-power i5-6260U[1], despite over 1GHz difference in maximum
frequency. Given that, it is really surprising that 8 core processors didn't
become popular earlier.

[1]
[https://www.cpubenchmark.net/compare.php?cmp[]=2966&cmp[]=26...](https://www.cpubenchmark.net/compare.php?cmp\[\]=2966&cmp\[\]=2671)

------
MichaelBurge
My only worry is they don't seem to have very many PCIe lanes. But they're
cheap enough that you might be able to get a motherboard that supports
multiple CPUs and still save money compared to one high-end Intel CPU. And it
might perform better too, if the workload has mixed computation & IO.

Web development, like the article mentions, is probably the perfect fit
there. Analytics is probably good too (R, Python, SQL). Careful if you need a
video card in there, though.

~~~
floatboth
Having a fuckton of PCIe lanes only really matters for multi-GPU setups. You
can run two GPUs completely fine on an X370 board though.

------
anorphirith
I'm pretty sure the read/write speeds of a newer SSD make a much greater
difference to performance than the CPU; a PCIe SSD would make an even greater
difference.

------
JepZ
I am wondering if the benchmarks were Linux vs. macOS...

I mean, the author writes that "The Ryzen remains fully responsive and
completely usable," and some years ago there was this magical patch that
greatly improved the responsiveness of Linux systems:
[http://marc.info/?l=linux-kernel&m=128979084506774&w=2](http://marc.info/?l=linux-kernel&m=128979084506774&w=2)

------
intrasight
My new build has the same case, PSU, and SSD. I went with Xeon and ECC,
however. My final price was very close as well. That case is HUGE - but I do
like having room to maneuver in there. And it's quiet and has air filters.
For reasons of stability, my "desktop" is a Windows 10 VM. I moved all my dev
VMs over from my now-retired machine, which I'll keep as a backup for a year.

------
codinghorror
True! Basically 2x the cores Intel will sell you, at the same price. IPC is
"only" at Broadwell levels, but that's still pretty solid.

------
philliphaydon
The author states he runs his Docker images as if they were production. But
in production the load on those images is larger than in development. Isn't
there more incentive to run lower-spec images in development, to find
noticeable performance issues? I've always thought that dev/test should be a
smaller environment than prod.

~~~
PetahNZ
You don't really wanna limit your daily grind. When you need to performance
test, you spin up a staging environment and throw benchmarks at it.

~~~
noir_lord
Absolutely this. I want to throw as many resources at the development machine
as I can fit into a single box.

I hate any kind of latency between "this should work" -> "does it work?".

I've even considered buying beefy server hardware for home, but electricity
is expensive and it's a rented place, so I can't exactly start installing
server racks.

------
floatboth
The 1800X is bad value though. It's the "I'm rich and afraid of overclocking
for some silly reason" option. It's only _slightly_ better binned. Most 1700s
will reach 3.9 GHz easily (77% of them, according to siliconlottery.com). The
R7 1700 is the real deal.

~~~
hvidgaard
That's fine if you want to overclock. I need performance for work and I'm not
going to dick around with overclocking - one work day lost to overclocking,
or to an error caused by overclocking, costs more than just buying
top-of-the-line from the beginning.

~~~
mrmidnite
Guys, the X-model Ryzens are not just overclocking parts - sure, they're good
for it, but what they actually do is increase clock speed as much as possible
on the fly, meaning that if you cool them well, they run faster (read: liquid
cooling). This is true even at stock settings. So yes, the Ryzen 1700 is
great with its 65W TDP, but a well-cooled 1700X is even better, also WITHOUT
overclocking. Not picking on anyone, just wanted to make this obvious.

------
natch
What is the video card in this setup? I missed it if he mentioned it. I
suppose he's just using something built into the motherboard? Yes, I
understand he's using it for editing text.

~~~
sb057
Ryzen doesn't have an iGPU, so he has to be using a dedicated GPU of some
sort.

~~~
mrmidnite
Ryzen may have no APU (iGPU), but if the mobo has onboard graphics (as many
Ryzen mobos do) there is no need for a PCIe GPU unless a powerful one is
needed. Indeed not for editors and compiling.

------
bdcravens
My understanding is that the workload was more memory-bound than CPU-bound?
Since the author only went with 32GB, wouldn't a laptop with 32GB result in
comparable gains?

~~~
AstralStorm
No, most laptops will throttle down or reduce boosting quickly under a real
workload. This also reduces effective memory bandwidth.

------
Warp__
Interesting.

I have a 1700X/16GB/SSD build and a Dell XPS 15 9560, both of which are very
fast, but the Ryzen just flies!

~~~
StavrosK
Is the XPS 15 good? I went on Amazon to buy one, but the reviews are 50%
1-star, and I got scared.

~~~
Warp__
I have had zero issues with it except some audio stuttering due to DPC
latency issues.

I'm getting a replacement (a model up, actually - Dell did a 12% off offer),
so we'll see how it is. I look forward to the UHD touch screen & fingerprint
reader.

~~~
StavrosK
Thank you, I got the smaller (not base) model but added the fingerprint
reader, as it was only $20ish and it might prove useful. I'm glad you had no
issues.

~~~
Warp__
Have fun! They are really nice tbh.

------
peeyek
I second this. I built a PC with a Ryzen 1500X + GTX 1070 and it works
flawlessly with TensorFlow and Torch7.

------
faragon
I would love a laptop with an 8-core Ryzen and a decent integrated GPU in the
SoC :-)

------
throwit2mewillU
What I've learned tuning computers at several companies that had a lot of
money for faster turnaround times:

1\. IO is not as important as your gut feeling tells you (e.g. 5x faster IO
often led to <2x faster compiles).

2\. Programming languages tout the parallelism horn, but their compilers and
tools are really not optimized for many cores.
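
Point 1 is Amdahl's law in action: if IO is only a fraction of compile time,
speeding up IO alone caps the overall gain. A sketch with an illustrative
(not measured) 30% IO share:

```python
# Amdahl's law: overall speedup when only one part of the work
# (here, IO) gets faster.

def overall_speedup(io_fraction, io_speedup):
    """Total speedup when IO (io_fraction of runtime) becomes io_speedup x faster."""
    return 1 / ((1 - io_fraction) + io_fraction / io_speedup)

# If IO were 30% of compile time, 5x faster IO gives only ~1.3x overall:
print(f"{overall_speedup(0.30, 5):.2f}x")    # 1.32x
# Even infinitely fast IO caps out at 1 / (1 - 0.30) ~ 1.43x:
print(f"{overall_speedup(0.30, 1e9):.2f}x")  # 1.43x
```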

~~~
marcosdumay
For #1, why would it be? Programs are a bunch of text files, read from
beginning to end, that almost always fit in a small part of RAM.

2x faster compiles is still too big an improvement; something there is badly
optimized.

