Hacker News
AMD Q4:16-core Ryzen 9 3950X, Threadripper Up To 32-Core 3970X (anandtech.com)
263 points by pella 5 days ago | 232 comments

This is fantastic. I'm most interested in the eventual arrival of Zen 2-based APUs for upgrading my tiny home server. Getting a (hopefully) 8 core, 16 thread part with integrated graphics at the end of 2020 would be a fantastic value if prices stay similar to the current announcement (sub $200). Seeing AMD continue to make incredible progress on their chips makes me very excited!

What are your use cases for the tiny home server and the APU? I built a smallish FreeNAS box with an i3 earlier this year, before this year's AMD announcements. I like the i3 because it supports ECC memory and can fit into some Supermicro server boards. IPMI makes it easy to set up over LAN and I don't ever have to plug in a monitor or keyboard. It would be nice to see more AMD boards with it besides the ASRock X470D4U.

My desktop/plex server is due for an upgrade next year. Maybe the threadripper price will go down.

> IPMI makes it easy to setup over LAN and I don’t have to ever plug in a monitor or keyboard.

Hardware BMCs have their place (e.g. low-overhead compute-cluster nodes, where free cores = profit.)

But, for most workloads—and especially consumer workloads—there’s no reason that the concept of a “Baseboard Management Controller” needs to be instantiated as hardware; you can just as well set the system up with a hypervisor OS (e.g. a minimal Linux KVM install; or an appliance-OS designed for this, like VMWare’s ESXi), set your regular workload up as one VM guest (and pass through to it all the nice hardware you have, like GPUs), and then set up another “control plane” guest VM that exposes IPMI management of your regular guest and of the hypervisor itself. As they say, “there’s no problem that can’t be solved with another layer of indirection.” ;)

(I should note, this is exactly the setup you get by default if you install ESXi [hypervisor] plus a free home license of vCenter Server [the BMC-equivalent appliance] onto a box. I was happily using this exact setup for quite a while, though I eventually moved to Linux+KVM just because I wanted the host to be able to create guest volumes from a thin-provisioned storage pool and then serve them out to the guests over iSCSI, as if I had a teeny-tiny SAN.)
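For the KVM flavor of this, the GPU-passthrough piece mentioned above boils down to a `<hostdev>` entry in the guest's libvirt domain XML, along these lines (the PCI address below is a placeholder for wherever your GPU actually sits, not a value from the comment):

```xml
<!-- Illustrative libvirt domain XML fragment: hands the host PCI device at
     0000:01:00.0 (e.g. a GPU) to the guest via VFIO passthrough.
     The address is a placeholder for your actual hardware. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the device from the host driver and binds it to VFIO automatically when the guest starts.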

Of course, this has only become a viable approach for IoT integrators very recently, which is why we don't see any IoT appliances (e.g. NASes) coming set up this way from the manufacturer just yet. Until recently, your choices for building IoT devices were microcontrollers at the low end; old ARM cores in the middle; and Intel's most "power-efficient", feature-stripped cores at the high end. None of these were particularly suited to hosting virtualization. But Ryzen is! While it may only be affordable to home-builders today, I expect to see AMD chasing Intel into its "power-efficient embedded" market segment quite soon, with Ryzen-based, high-core-count, virtualization-capable equivalents to the Intel Atom line sold cheaply enough to get system integrators excited.

FreeNAS does not recommend running in a VM, and I've heard about problems with iSCSI :-). I could easily pick up used Dell servers with dual E5 Xeons, 128GB of ECC RAM, and whatever SATA/SAS controllers I want off Craigslist. ESXi costs money (a yearly cost, at that), but I have played around with the trial version.

But! The FreeNAS community is a bunch of grumpy sysadmins. I'm considering going down the Linux-and-ZFS route. I'd be able to do more with VMs (I feel more comfortable in Linux vs FreeBSD). I'm building some IoT Pis to collect data, and having a Linux box would be nice.

The UniFi USG handles dynamic DNS and my VPN.

Raspberry Pis haven't quite got there yet but I'm hoping the next iteration will have an NVME or SATA implementation. Although to be honest it doesn't have to be the Pi. Any small board that'll run Linux, has at least gigabit ethernet and a fast path to disk will do. At that point it'll be possible to make a ceph cluster with one Pi per disk.

For some time, I've been thinking of making a ceph cluster out of ODROID-HC1 or 2 (https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two).

A few years ago now, Western Digital demonstrated an onboard controller with two 1Gb NICs and a mini Linux distribution with a single Ceph OSD installed. Unfortunately it never made it out of the lab. I would gladly pay a $50 premium per device to have that onboard for spinning rust. Perhaps the issue is that with NVMe-connected devices it could be a much costlier device to build? Or maybe there's no standard for housing network-connected storage devices in a rack?

You can do that, but they're not really replacements for one another, and there are lots of things that can pull one way or another for that use case.

- You generally don't want to run storage servers virtualized.

- Tooling matters. There are multiple reasons I generally do things the same way at home as I do at work (within reason).

- Probably a niche concern, but I have some hardware that is only configurable during early boot.

- Virtualization costs performance. Not a huge issue at home, granted, and you have to quantify it for your specific workload. (It is usually going to be IO.) But it certainly can matter with home workloads; home theater video processing is probably the most common.

I use both for what they're good at. IPMI is for managing hardware. Virtualization is for not needing more of it.

Why not a networked KVM (keyboard/video/mouse) device? There is lots of hardware out there that can do this, and it doesn't limit you to boards with integrated IPMI hardware.

I still have a Haswell i5 quad core as my Plex/Pi-hole/OpenVPN box. It's never CPU-pegged. I do have hardware transcoding enabled, which looks like crap but teaches people not to have such low bitrates on their gigabit connections. Silly Plex defaults. The most CPU is used by the deluge/OpenVPN combo out to PIA as it, uhhh, acquires new content. That and rclone as it pulls off Google Drive. But I see no need to upgrade. What do you use yours for that its CPU needs refreshing?

Intel QSV does degrade quality. The i7-7700K can't handle a 4K HEVC transcode to 1080p very well on the fly with Plex. Plex optimizing a 4K video takes about as long as the movie itself to produce a 1080p version. I'd like something that's not dependent on QSV. My Asus motherboard also has a Bluetooth issue from time to time that requires unplugging the motherboard power connection.

Not sure I want to spend the money on a 10Gb backbone, or wait and upgrade the desktop instead. My UniFi switches and Supermicro board can do link aggregation, but my Asus mobo only has one port.

If your UniFi switch has SFP+ you can get 10Gb copper SFP+ modules these days for under $60, and a dual-port 10Gb PCIe NIC is another $180-200. Probably not worth the upgrade just for Plex, but most people don't realize how cheap copper 10Gb PHYs have gotten. And if you have SFP+ on both ends and just need something to fill the gap in between, you can get fused 10Gb DAC cables for under $20, which is a drop in the bucket for those SFP+ ports sitting there collecting dust.
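Taking the prices above at face value, a rough back-of-the-envelope for a minimal point-to-point 10Gb link between two SFP+ hosts (figures are the ones quoted in the comment, not independently checked):

```python
# Rough cost sketch for a point-to-point 10Gb copper link, using the
# prices quoted above: two copper SFP+ modules plus one dual-port NIC.
sfp_copper = 60   # per 10Gb copper SFP+ module, upper bound quoted
dual_nic = 200    # dual-port 10Gb PCIe NIC, upper end of the quoted range

total = 2 * sfp_copper + dual_nic
print(total)  # 320
```

Around $320 all-in, i.e. well under the cost of most dedicated 10Gb switches, which is the commenter's point.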

The SFP+ ports on my 150W PoE switch are only 1Gb. The MikroTik CRS305 is the most cost-effective 10Gb switch I've found, at ~$130. UniFi has an option, but it's $599.

DAC is the most sensible option considering I need to go from my basement, up 2 more stories to my attic, and drop into my office.

It's a weird time with UniFi gear. Is the USG getting replaced, or is it now the Dream Machine only? And where is 10Gb?

True that. Between the weird "phone home" stuff that Ubiquiti just stopped responding to publicly and some of the odd overlap in their product line. That being said, if you own their stock today you were smiling, because it's up over 30% at the time of this writing.

With regard to the USG, the Dream Machine looks to be a kind-of-sort-of successor to the really old USG. I would guess a USG replacement is coming, but the Dream Machine Pro will have 10Gb as I understand it. They do product rollouts pretty horribly, IMO.

I don't have any 4K TVs, so I guess I'm lucky I don't need much horsepower.

If you're up for a migration project on deluge/OpenVPN, you can probably cut out quite a bit of overhead by moving to WireGuard with Mullvad, WireVPN, etc. I thought PIA donated a bunch to WireGuard development, but it doesn't look like they support it in GA yet.

The thing I love about deluge and PIA is that it's all in a Docker container, so I didn't have to muck with any settings on the host machine. Works great. I didn't see anything like that for WireGuard, sadly.

> What are your use cases for the tiny home server

Not the OP, but small-form-factor home servers are so excellent. Pi-hole, UniFi controller, Home Assistant (coffee machine warming up for 30 minutes before I get up!), some testing VMs, and a load of Docker containers for various chores.

AMD added the Athlon 3000G today, but it's Zen+ based. Personally I'm looking forward to their Mobile Zen 2 based APUs. I believe they are expected in early 2020.

Yeah, and finally with LPDDR4x support which should, in addition to 7nm and the improvements Zen 2 has, further help increase battery life.

Not to mention that the better bandwidth (if they use a 128 bit interface like the MacBooks) should help increase their iGPU advantage.

Ditto... running a mid-2014 rMBP with Nvidia 750M graphics that is showing its age. Hoping to make the jump to a nice Linux environment on next year's hardware.

Some interest in tiny home servers on this thread so I thought I'd share mine, and why I went Intel.

I used the InWin Chopin case, which is too small for a video card. Here it is next to a NUC: https://imgur.com/l7dFKCl

This server is used for development and has an i7-8700. Unfortunately AMD does not offer a similarly fast chip with integrated graphics.

There are rumors of AMD building a NUC competitor, which means we may have more options soon.

I like that idea and I'd like a chip like that too, though compared to say a current 3400G I'd rather have more memory bandwidth, much more cache and a couple of extra Vega cores. Don't care much about doubling the core count, four cores with SMT is enough for what I do with it (mostly gaming from the couch). It depends on the use case I guess. Maybe AMD will surprise us with further segmentation to server APUs and gaming APUs.

If you don't mind me asking what do you use your home server for?

A couple uses, but the primary purpose is as a VM host and file server. Hyper-V for the VMs, and Storage Spaces with ReFS for the storage.

My current build uses a core i5-4570s, but I'd love to have more cores & threads for running additional VMs. My use cases don't require _real_ server hardware (IPMI is overkill), just the ability to run a good amount of test and lab environments.

I'm not the OP, but running a Tor node, some Minecraft servers, and some web apps will occasionally bog down my old FX-6300 server. Having a low-power part that can do the same work is appealing.

AMD appears in a trending section almost every other day.

How can OEMs still ignore AMD? I mean, it's obviously very popular. They have the best offering and no one can match their price.

How long until Intel's monopoly falls, given consumer demand and AMD's seemingly (almost) perfect long-term execution:

- 2016 - 8.1% market share

- 2019 - 18% market share

According to Intel, AMD has a free run until 2021. My guess is that they will have a free run long after 2023 (when TSMC's 3nm process is released).

I think the market share gains so far are mostly because of DIY builders. Am I wrong? The only serious AMD OEM offering I have seen was from Microsoft's Surface.

Side note: AMD is mostly interesting for desktops for now (-> battery life), until next quarter.

But how come OEM desktops lack an AMD alternative? Any articles/information on OEMs' partnerships with Intel?

PS: Please upvote a more on-topic comment concerning the product itself. I didn't want this to be the top comment.

It's been an interesting year for AMD in the DIY builder space. The everyman chip that seems to get recommended at every turn is the 3600X (or the 3600 if you're partial to Gamers Nexus). Most big-ticket builds seem to be opting for a 3900X these days. That, however, is a fickle market, subject to the kind of fandom where lines are drawn on brand loyalty that no manner of contrary evidence will shift. As an enthusiast market, let 'em have at it. The saltiness in comment threads can be comedy gold. The day that Intel launches a 12-core+ CPU on 10nm, it'll be the belle of the ball for all of the next-gen big-ticket builds. Swings and roundabouts.

AMD are starting to make some waves in the hyperscale DC with EPYC Rome. The real fun starts if that progress translates into your corporate workhorses starting to opt for EPYC over Xeon in their data centres. Intel are a big company to take down a peg, and have had little motivation to innovate any more than needed. AMD also have a bit of a history of doing amazing things and then dropping the ball in spectacular fashion, which is a candle no corporate buyer wants to be holding when they've pumped a ton of capex into a 5-year deal pinned on fleet maintenance. Intel might not be dynamic, but at least you know what you're buying into. I think the same applies to those making purchasing decisions for corporate users. For enthusiasts, what your machine runs on is a big deal. For everyone else, as long as it switches on and lets you get through a working day without going tits up, who cares? And if it does go down, it needs to be fixed or replaced before you start falling behind on work. That also trends toward buying patterns that focus on known-good configs.

It's a fun market to watch. If I were back in my old role, I'd be looking for EPYC solutions in my servers and attempting to wrestle a few test laptops with Ryzen 4000 CPUs next year, even if only to worry my boss.

> The day that Intel launches a 12-core+ CPU on 10nm, it'll be the belle of the ball for all of the next-gen big-ticket builds

But it's based on Skylake. I'm not convinced that the game will shift sides that fast.

I only mentioned the 3nm because it seems like a big difference. But I think the architecture is more important, and that seems to be a home game for AMD right now.

Most OEM desktops are designed under large contracts far in advance, and AMD has had a fairly poor reputation for the past decade or so thanks to Bulldozer et al. In addition, Intel's marketing budget is in the neighborhood of AMD's gross revenue, so AMD can't compete on volume deals. Obviously scandals are a thing (see the Intel antitrust cases), but I tend to assume it's just a slow-moving industry rather than pure shady dealings.

Relatedly, AMD's consumer Zen 2 chips support ECC, which Intel has always segregated to its enterprise-grade parts. Typically ECC is a requirement for the volume business OEMs (Dell, mostly), so AMD's new products obliterate any value proposition Intel could even have.

Apologies for odd caps, mobile auto correct fights me.

> How can OEM's still ignore AMD

Intel has been mired in an antitrust action in Europe for a decade now, based on charges that they have engaged in

> two types of conduct by Intel vis-à-vis its trading partners, namely conditional rebates and so-called ‘naked restrictions’, intended to exclude a competitor, AMD, from the market for x86 CPUs. The first type of conduct consisted in the grant of rebates to four OEMs, namely Dell, Lenovo, HP and NEC, which were conditioned on these OEMs purchasing all or almost all of their x86 CPUs from Intel. The second type of conduct consisted in making payments to OEMs so that they would delay, cancel or restrict the marketing of certain products equipped with AMD CPUs.

(quoted from http://curia.europa.eu/juris/document/document.jsf?text=&doc...)

This resulted in the levying of a €1bn fine against Intel in 2009, which was sent back to a lower court for review by the Court of Justice of the European Union in 2017. (Not on the grounds that Intel didn't do these things, but that the actions by themselves didn't automatically break the law until someone could demonstrate they actually had anticompetitive effects.)

There was also a very visible effect of what is assumed to be deals with large retail chains, where even if the OEMs offered AMD products the retailers wouldn't carry them.

> How can OEM's still ignore AMD, I mean.

Most of them got burned very, very badly by Opteron. They are likely waiting to see whether this is the new AMD or the good old one that cost them millions in the past. Also, AMD has no production capacity to displace Intel in any meaningful volume.

Out of interest, what happened with the Opterons?

I've read somewhere (can't find it) that they signed some long-term contracts with OEMs and then cancelled them in the middle (probably to avoid bankruptcy).

Best guess: They were only relevant for like 4 years. As soon as Conroe Xeons hit the scene it was game over.

One reason is that the newest-gen Ryzens don't have mobile chips yet. Backroom agreements with Intel are another.

They're also making headway in the server market. AWS has quite a few AMD-powered offerings with hourly costs lower than the Intel equivalents.


Yes I know.

Azure too, and Netflix is considering changing its CPU portfolio :). One of the latest supercomputers was AMD-only.

But that is not the OEM market.

My exact question. Lenovo or HP should come out with a small-business desktop; you cannot buy anything current from either now. I don't think Dell ever will in their OptiPlex line.

Rumor has it that Lenovo will do an M75q machine with AMD. Seeing is believing.

> How can OEM's still ignore AMD?

In the last 45 years or so there were many times where Intel's status as king of the hill looked vulnerable: Opteron, Athlon, Cyrix, PowerPC, Motorola 68k, Z80, 6502, ... They've always come back, and anybody who bet against them has been burned badly.

Intel actually most recently admitted that it will take them until 2023 before they can grow their margins again.

I have an HP EliteBook with AMD.

I would bite AMD's hand off for a Mac mini equivalent with a reasonable graphics card and AMD's latest chips in it. Everything in JavaScript/Docker land on projects of scale takes an absolute age to run all the tests on a laptop. Fans are constantly on, and the CPU is throttling. It's crazy that JS dev is this hardware-inefficient...

"It’s crazy that JS dev is this hardware inefficient..."

That's the whole point of cloudy businesses. They can masquerade it under whatever pretense, but the end result is that they want you to use more and more hardware resources, where each abstract hardware unit earns a healthy profit margin for the infrastructure suppliers.

Setting aside Facebook-, Google-, or Netflix-scale cases, most real-life businesses can chug along just fine on dedicated servers with software written in "normal" high-performance languages. You would be surprised what $10,000 worth of modern hardware running high-performance software can achieve.

You are right. At one point this was Java's territory. But now we have systems 10x more inefficient than Java.

Rapid prototyping will make a lot more money than cost savings on hardware in most cases

1) Where did you get the idea that rapid prototyping is not possible with high performance languages?

2) You are making an uber-generic claim. Where is the substantiation? I'll make another claim of the same nature: with this never-ending rapid prototyping, along with whatever they sell under the Agile/SCRUM/etc. methodology, the end result is an unbelievable mess of patchwork that after a while becomes impossible to maintain and add new features to. It literally becomes a house of cards.

I think the above poster may be right, but in a sense that I do not prefer. E.g. most of this Electron-based stuff looks like rapid prototypes, and it made billions for companies like Slack etc.

Meanwhile, as a user, I suffer from crappy and slow software and am told to upgrade perfectly reasonable hardware otherwise.

It is one thing when crappy software is being pushed on victims like yourself. They fill their pockets and you suffer ;) But when companies produce the same stuff for themselves (internal operations)...


I do a lot of work with databases and I find it absolutely insane the amount of money you have to spend in the cloud to get the performance equivalent of an 8 core bare metal server with 64 gigs of ram and a couple NVMe hard drives.

Absolutely. From my prior calculations, buying hardware yourself pays for itself within 2.5 months. The cloud is expensive as hell; you pay dearly for not having to manage your own hardware and scalability...
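The payback claim above is easy to sanity-check; the figures below are illustrative assumptions (not numbers from the thread), chosen to show how a ~2.5-month payback arises:

```python
# Hypothetical payback calculation: one-off hardware purchase vs. renting
# comparable cloud capacity. Both prices are illustrative assumptions.
hardware_cost = 10_000   # one-off purchase, e.g. the $10k box mentioned upthread
cloud_monthly = 4_000    # assumed monthly bill for comparable cloud instances

payback_months = hardware_cost / cloud_monthly
print(payback_months)  # 2.5
```

At a 4:1 ratio of hardware cost to monthly cloud spend, the box pays for itself in two and a half months; real ratios vary with workload, egress, and ops overhead.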

Scalability which you only need because it’s slow as hell...

This is what you want: https://www.asrock.com/nettop/AMD/DeskMini%20A300%20Series/

I have the Intel version (running Hackintosh). It's very fast and quiet. Only downside is no Thunderbolt.

I've actually been looking at those vs the Intel NUC. The DeskMini looks great on paper, but ASRock seems to have some quality issues with those boards, including the Intel version of the box. More than 25% bad reviews on Newegg/Amazon, last I looked.

On the other hand, they're so cheap you can just buy two. That way you have another one on hand when the first one craps out in six months.

My impression is that the NUC has better quality these days - if you can find one. The i5 and i7 models seem to have virtually disappeared from store shelves within the last month and nobody knows when they're getting more. Otherwise the NUC is amazing bang for the buck if you want a small form factor. The power of a Mac Mini for roughly half the price. And apparently you can Hackintosh it fairly easily.

If you don't care about size, you can build a high-quality 8-core/16-thread Mini-ITX AMD server with a mid-range video card for ML applications at less than $800.

If you're pondering a NUC, hang tight for a bit. The next generation seems close and looks pretty good. One of the more powerful options (and I believe not the top spec) is https://wccftech.com/intel-ghost-canyon-nuc-element-pc-revie...

The reddit sub has a steady trickle of leaks. https://www.reddit.com/r/intelnuc/

My home desktop is a NUC + eGPU and it's been working great for me for the past year or so. I love that I can just turn the power to the eGPU off when I'm not using it and run my whole computing setup for something like ~40W.

Uh maybe use better webdev stacks?

I know I'm arguing against 20+ years of software practice, but Moore's law is over.

I'm tempted to be very, very sarcastic here, but instead I'll just say that clearly I'm not able to move a whole 3,000-person organisation onto a better stack, because convincing people they would get no features for 3-6 months would be impossible.

Can't vouch for every situation, but I think in general it's actually pretty much achievable through slow attrition. From what I've observed while consulting, corporate systems are collections of hundreds of components. Start with one with clearly defined functionality and slowly chip away. It does not prevent delivering new features.

The only thing is that it will introduce a certain overhead and has to be carefully managed. One also has to make sure it has measurable ROI.

Okay, fine. I will try for Elixir, Phoenix Live View and as little JS as possible and see how we go.

I am not sure Elixir is a performance beast. But then again, I do not know much about it other than that it runs on a VM.

I wouldn't call Elixir/Phoenix a "top performer" in the league of Java/Vert.x or Rust/Actix (see the TechEmpower benchmarks), but it does VERY well at chugging along and not falling to pieces. Its dev experience is really nice. It's missing type safety, but, honestly, I quite enjoy messing with it.

Everyone thinks Elixir is slow until they use it and then realise throughput is often more important than absolute performance.

As I said, I do not know much about Elixir, so you be the judge. I do, however, have doubts about the throughput of well-designed code running on a VM being faster than that of native code. My understanding is that on the Erlang VM this throughput is achieved by using async and message-passing patterns backed by internal thread pools, all perfectly available to "low level" languages as well.

I understand; my preference is Elixir, but while fast enough and scalable, it is a memory and CPU hog.

That's possibly a trend from the places I've worked: the larger the organization, the better the odds there are little niches of a few users using specific software and website services. Those places ranged from around 2x-5x the GP's size.

Corporate America, am I right?

But this is a safe place for us devs.

Moore's law is still alive and kicking. Sure, it hit a midlife crisis, but it got divorced and bought a bitchin' corvette. Chiplets are the future and will ensure continued scaling.

> bitchin' corvette

Is it bad that I read it as bitcoin corvette at first?

The biggest jumps I've seen for building Node projects are a fast pipe for `npm ci` and a fast disk (NVMe preferred).

Why is it crazy? Javascript is an interpreted language built for browsers, it was never meant to even know about hardware.

It's interpreted on first run; I'm pretty sure most JS engines (JSCore, SpiderMonkey, and V8) JIT that code. And while JS was built for browsers, there have been server-side implementations since the beginning. Currently, Node and Deno are leading the pack (but Rhino and GraalVM do exist).

Yes... a Mac Mini Pro with a workstation processor (doesn't have to be this particular beast) would be nice. Heck, even an i9 would be a welcome addition since it currently tops at i7 only.

Note that the mini and even the NUC use pretty much laptop processors; an i7 laptop part is nowhere near an i7 desktop CPU. The issue is that the form factor doesn't allow for good cooling. There are options in the ITX space, though they aren't cheap. Asus has a nice mini-ITX X570 board if you want a relatively small form factor with power.


They are - but even then the Hades Canyon and coming Ghost Canyon have some beef to them. Dual nic too, so options abound.

There are some small form factor cases that work pretty well. Not mac mini size but also not desktop size - maybe slightly larger than a console. They tend to be fairly expensive because they're niche but if you're using your computer for business it might be worth it.

The single biggest boost to Node/JS projects I've had has been going to a good NVMe drive, even from an M.2 SATA SSD.

Put all your build files on a RAM drive.

if you have enough ram to fit all your build files, your OS's block cache should take care of keeping them cached.

I will never understand actual real world software developers using Macs for anything.

Because they’re the right tool for the job. MacBooks aren’t perfect but I can’t imagine a productive workflow without them in my experience. Perhaps your requirements are different than others?

The first computer I ever built had a 133MHz Cyrix CPU, and over time I tried Celeron and Pentium CPUs, up to the current day, which is a 3.2GHz i5 from ~6 years ago.

I still don't feel the need to upgrade but when I do this might be the first time where I really consider an AMD CPU. Things are looking really really solid for them lately. I could totally see using one for an all purpose development, video editing / recording and gaming box to replace this i5 eventually.

If you do lots of compiling and run the typical suite of things a developer might (IDE, terminal apps, compilers, your backend, emulators, web browser), these kinds of CPUs can be a large benefit.

for the average user, not as much.

>> for the average user, not as much

Actually there are workloads where they will heavily benefit the average user as well. Image processing and video editing come to mind especially. You may argue that few users do this on a PC nowadays, but that's mostly an oversight by OS developers. MS should revive Movie Maker; I used it a lot 10 years ago when my kid was little. Apple already ships Photos and iMovie with every new Mac, and both are pretty great for what they were designed for. Then there's also more and more 4K content on YouTube by the day. My 5-year-old iMac does spin up its fans quite a lot nowadays.

I think it's also a good time to start moving some AI workloads to the edge. It's ridiculous that we have near-instantaneous on-device speech recognition on phones now, but PCs still have to dial back home and incur perceptible latency. I want local speech recognition out of the box in Windows and macOS (and ideally Linux as well), with automatic punctuation, robust to background noise.

Microsoft's video editor today is Photos app. But it's a very unconventional one: it has motion tracking but no layers.

It also sucks pretty badly as a photo management app, especially when compared to Apple Photos.

In regards to compiling, I'm under the impression that "it depends" [1]

I think the biggest perk is that AMD's new CPUs allow for faster drives, which many people argue are more important. [2] This is my main reason for looking at one.

[1] - https://www.youtube.com/watch?v=CVAt4fz--bQ

[2] - https://weblogs.asp.net/scottgu/Tip_2F00_Trick_3A00_-Optimiz...

I'd say the average user spends most of their time inside a web browser, so they should see some benefit. Not nearly as much as a developer, but certainly something.

I replaced my i7 4790k with an AMD Ryzen 5 3600 two weeks ago and am extremely happy with it so far. It was only $195!

Same here; my old system started acting up, and I was (and still am) waiting for the R9 3950X to drop, but even the jump to a 3600 was significant, to say the least. I'll swap in the 3950X on release, which should keep me going for the next 5+ years. That said, the 3600 is no slouch.

Aside: I did a mid-cycle upgrade to a good NVMe drive (Samsung 860) and a GTX 1080 when that video card came out, which was a pretty big bump on the old box.

> I did a mid-cycle upgrade to a good NVMe drive (Samsung 860) and a GTX 1080 when that video card came out, which was a pretty big bump on the old box.

In your day to day, how big of a difference is that drive vs a SATA SSD you might have bought 5-6 years ago?

I know going from a HDD to an SSD was a mind blowing experience but now things open so fast even on an old SATA SSD that I find it hard to imagine things can feel that much faster.

Unless the parent made a typo, the Samsung 860 is a SATA-over-M.2 drive, so it's nowhere near as fast as a PCIe NVMe drive.

I have a Samsung 970 EVO and I spend most of my time compiling with g++. Copying large files is nice but overall I find the disk cache in Linux mostly makes the drive performance irrelevant for compiling if you have enough RAM.

Also be aware that if you have an older motherboard with PCI-E 2.0, you'll be limited to 1GB/sec transfer rates anyways.

860 comes in both SATA and M.2 form factors.

I also have 960 and 970 EVOs; they are mostly fine, but there's a drop in write speed when the SLC cache becomes full. The 960 is also starting to have longer TRIM times than it used to.

The M.2 form factor 860 uses the SATA protocol and has SATA performance characteristics. Not all M.2 drives are NVMe, and different M.2 keys mean not all M.2 slots support both SATA and NVMe.

I meant 960.

960, my bad.

There is very little practical difference going from a slow SATA SSD to a fast top-of-the-line NVMe SSD. If you are doing huge sequential reads and writes as part of some workflow, the difference will be big, but boot times, game load times, and compile times are not much different. Maybe you boot in 29 seconds instead of 30.

So don't spend too much! Just get something known to be reliable with a decent cache on it.

The claims in a sibling thread about a given web build going from 30 seconds on a SATA SSD to 5 seconds on NVMe are not likely true; I think that poster may be assuming the differences would be that large.

Not an assumption... those (large web/node projects) and database work (crunching long data files) are literally the only two places I really notice the difference. That project in particular took over 3 minutes to build on an old office machine (HDD), around 30 seconds at home on a SATA SSD, and under 5 seconds on the NVMe drive.

For day-to-day use, I really don't notice the difference much... but building node/web projects is as big a night-and-day difference as going from HDD to SSD was. Say a web project takes 3+ minutes on HDD, then 30 seconds on SSD; think <5 seconds on NVMe. App load times are slightly better, going from noticeable to blink-of-an-eye for most apps.

Not too much difference for most tasks, but man if you're doing a lot of work in node (or anything else touching lots of files) it's pretty significant. But ymmv on this.

Database work is significantly faster as well.

Replaced my i5-4670k with a ryzen 3900x a couple of months ago and demoted my old desktop to a home server.

But I'm now considering upgrading that server to an R5 3600 anyway, as I've been using VS Code's remote feature for development on the train, and it would still benefit from faster compiles.

If you aren't already, drop in an nvme drive (may need a pcie x4 adapter)... depending on what you are using, that would be a huge boost as well.

When my older computer (i7-4790K) started acting up a couple of months ago, I made the jump to an X570 motherboard build, using an R5 3600 while waiting for the R9 3950X to drop. Even the R5 3600, along with going to 64GB of RAM (Docker databases, development, etc.), has been a phenomenal leap for day-to-day work. Even when I'm just browsing it's much faster.

The 3950X should make my Blu-ray rip/encodes go much faster as well. I have a stack of series waiting to even be ripped; with the older i7, it was just about painful how long CPU encodes took, and GPU encodes came out either really crappy or too large (H.265/HEVC). I can't overstate how much I've been looking forward to a 3950X (since this time last year, actually). It should do well for me for the next 5+ years.

Intel has Quick Sync, which really helps with video transcoding. However, it's getting to the point where the extra cores might make the switch worthwhile, and software transcoding is reportedly superior (not that I would be able to tell). The rumoured AMD NUC offerings look promising too. Interesting times after so much stagnation.

"amd" "nuc" seems a contradiction, "NUC" is an Intel trademark, right ?

Assuming you're using "nuc" as a generic term, is there an AMD-specific search term I can use to find these rumors? I've been in the market recently for a nuc form-factor machine and I've wanted to go with AMD, but haven't found much.

From the replies, these two might be interesting... no idea if they're worth it vs. paying a little more for an ITX build. The second one looks like it has a higher-level CPU, but I'm not sure of the real differences.

[1] https://www.newegg.com/p/N82E16856158066 [2] https://www.neweggbusiness.com/Product/Product.aspx?Item=9B-...

It's definitely a contradiction; it's just that Intel seems to have defined the category now. I'm struggling and can't find the link I thought I'd found in the r/intelnuc sub. It was unreleased hardware at this stage, though, more powerful than the one below, but larger. http://linuxgizmos.com/worlds-first-amd-based-nuc-mini-pc-sh...

IIRC the 4790K was before quicksync.

I had to upgrade my perfectly fine i7-2700K because of my new camera. Those raw files slowed Lightroom to unusable within minutes.

I wonder how fast I could compile my code base with this. On my hexa-core i7-8850H, it often takes more than 4 hours to build everything at full throttle, and I do this quite often, so the pain is definitely present. Given that network and disk I/O aren't the bottleneck, having more than 5 times the cores should theoretically reduce the build time at least threefold, conservatively?

Phoronix does compilation benchmarks (for the Linux kernel and LLVM), the existing Ryzen chips do perform quite well on them. The i5-8400 is probably the closest thing on the chart to your 8850h.

But there are diminishing returns to adding more cores past a certain point which will depend on your codebase and compiler. If your builds are at 100% CPU utilization most of the time then you will probably see pretty large gains, but sometimes a significant chunk of the time ends up being bottlenecked by single threaded performance.


> But there are diminishing returns to adding more cores past a certain point which will depend on your codebase and compiler. If your builds are at 100% CPU utilization most of the time then you will probably see pretty large gains, but sometimes a significant chunk of the time ends up being bottlenecked by single threaded performance.

You should check out Phoronix's Rome benchmarks. Compilers seem to love L3 cache, and the new Threadripper parts have 128MB of it. https://www.phoronix.com/scan.php?page=article&item=amd-epyc...

The Epyc 7502 in that chart is going to be roughly equivalent to the 32-core Threadripper 3 announced today. Both are 32 cores with 128MB of L3, but the Threadripper part has a much higher base & turbo clock speed so it'd compile even faster. Probably.

Does linux compilation take a few minutes? The chart there says Ryzen 3 2200G takes 242 seconds to compile the whole kernel. I find that difficult to believe.

Phoronix tests compiling the upstream default config which is pretty barebones. A normal kernel build for a desktop machine will take much longer because there are more modules enabled.

Thanks for the comparison. How do you know the 8850h is desktop equivalent to the i5-8400? I can't seem to find it in the chart.

Passmark benchmarks are usually a good indicator [0]. They're also from the same generation and have the same number of cores.

[0] https://www.cpubenchmark.net/compare/Intel-i5-8400-vs-Intel-...

Ah great, thanks. It's actually very helpful. I was actually looking at the Passmark score a week ago, but couldn't tell if it's trustworthy. Nice to get an endorsement.

Please don't mind me asking, but why do you compile your entire codebase so often? What's the challenge in splitting it into units and compiling each of them independently when code changes?

I suppose I should've provided more context. My builds usually take only a few minutes during the day, as the changes are small and the code is broken into multiple packages/units. Once or twice a week, some change triggers a chain reaction and takes 4+ hours to build. This is painful because it eats up a chunk of my productivity. And perhaps once a week, I build from scratch just to prove everything can still be built from scratch.

Just in case you aren't aware of it, "ccache" helps with this, and you might find it trustworthy enough for your typical weekly rebuild.

ccache has been huge for me -- can't recommend it enough. The next big win for me was externing common templates into their own translation unit: http://gameangst.com/?p=246

It's a cool little trick few people seem to know.

I read the article, but I'm still a little confused. Do you put common std templates into their own translation unit, or are you putting only your own user-defined templates into their own translation unit?

Both, although most of the heavy ones in my projects are from the application-layer/user-defined.

Having explicit instantiations for strings and for the common vector/map/unordered_map/set/unordered_set specializations helps a bit (i.e. basic_string<char>, uint64_t, int64_t, int, and uint).

My methodology wasn't very scientific: when I found a template being specialized at a low level, I added it to my list. Another heuristic is that anything templated on std::string (basic_string<char>), char, uint64_t, int64_t, int, or uint is a pretty good candidate, as the likelihood of it being reused everywhere is high.

Are you building like FPGA bitstreams or something? 4 hours seems insanely long for software unless it is literally millions upon millions of lines of code

Maybe he #included boost.

Maybe it's an aggressively optimizing and correctness-checking C++ compiler. GCC, clang can take a long time depending on complexity of code and flags provided.

I've built bitstreams, but these days I build software for consumer electronics with screens running ARM Linux.

In large C++ projects it's really hard to avoid the situation where almost every file includes a couple of the same key headers. Change one character in an important header and you have to rebuild the whole project.

Probably header only C++

I’ve experienced very substantial improvements in compilation times jumping from the 2700X to the 3900X. I suspect the 3950X should be even better, as long as there are not frequency issues.

Just curious, why do you have to "build everything" quite often? Normally it's enough to just build the parts that changed. Perhaps you can improve your build process instead of investing in new hardware?

Not the OP, but nightly build to prove it all still works is a pretty common thing.

Reasons for mysterious breakage:

- Compiler updates
- Dependencies getting lost
- Code changes (breaking things into parts doesn't mean they still work together)

Yes. I don't do nightly, but weekly. Some changes still trigger long builds that last multiple hours.

Binary stability in C++ is hard, especially with dynamic plugins. You need to do a daily, or at least weekly, build from scratch to confirm everything works. Otherwise, for example, things might seem to work because you are changing a field in class X through an API that uses a different name for it, and it won't work until you use the new version on both sides. These kinds of bugs are the worst.

For what it's worth, Linux kernel compilation took about 6~7 minutes with -j25 on Ryzen 3900X (12C/24T).

Huh? I can do a full kernel rebuild in ~40s on a 3900X.

Maybe because Arch has more modules enabled than the default config?

without running clean?

cause that's not even in the right ballpark for a stripped kernel config

Yes, in a clean tree that just got cloned, and the result does boot.

20 years ago, it took me only around 10 minutes on a single-core 32-bit CPU, with 1/1024th the RAM, and spinning rust for storage. How is it not even twice as fast today?

Maybe better and more time-consuming optimizations? 2 million LOC vs. 26 million LOC probably doesn't help either. Maybe there is some bottleneck somewhere in the software or hardware?

I have no sense of how long that is. What would it take on a 4 thread Macbook Pro?

I have not tried installing Linux on a more recent MacBook Pro (USB-C generation) so I couldn't tell, but I remembered it took around an hour with Arch Linux's default config (e.g. "let's compile this and go have a breakfast & make a coffee and hope it's done")

Went to lunch and timed a compile of the linux kernel (v3.19) (default options) at 14 minutes on my 13" 2015 macbook pro. (3.1 GHz Intel Core i7, 16GB RAM).

depends which four thread MacBook pro...

as a rough reference, it took about 35 minutes to build the linux kernel on my xps 13 a few years ago. that computer has a 2C/4T kaby lake processor. your macbook pro might be a little faster if it doesn't have one of the ultra low power CPUs.

I think it depends a lot on the compiler and codebase. I got myself a 3900X, and for compiling Rust code it's actually not as much of a speed-up as expected. A lot of things are done serially, e.g. compiling single crates and linking. Average utilization while compiling a large project was maybe between 40 and 60%.

When compiling LLVM, however, all cores were churning along at 100% utilization, so I expect a big speed-up there.

gcc tends to compile quite a bit faster on the new Ryzens vs Intel due to the large l3 cache, so you may get quite an improvement.

I don’t know your codebase but my experience with slow compiles when using ninja or make -j or whatever is that it has always been the overuse of code-generators or templates or something like that. A bit of strategic de-templating usually works wonders.

Modern processor design is mind-boggling in its capacity. A coworker has a modern Alienware... it was able to host a VR session, play a 1440p game, and transcode video, all at once.

Goodness. That really is incredible.

I would like to know if somebody has good experiences to share about AMD video cards on Linux (with the latest driver AMDGPU).

I bought a Ryzen (Zen 2) for a workstation, where I need to run a few VMs, a local k8s cluster, builds, some browser tabs, and Slack. I have everything running smoothly on top of a Linux 5 kernel, and so far I'm pleased with the results.

But I kept an older NVIDIA card, and the drivers always had a bit of trouble with desktop Linux support (like Wayland, plymouth bootsplash, etc).

I ran a Radeon 380 between 2015 and this year, it worked flawlessly.

I bought a 5700 XT in July; it was not usable out of the box, but all the pieces are at least upstreamed now. Desktop stability is great, gaming performance is great, and all the basic stuff (Wayland, Plymouth) is solid.

On Linux, AMD has a MAJOR advantage over Nvidia simply due to the fact that the driver is FLOSS and built into the kernel itself. This means you get full GPU support out of the box and fixes/improvements are delivered through the same update channel as the kernel.

The userland tools aren't ported to Linux, however, so you don't get access to the fancy social-media-augmented gamer stuff. If you want to overclock etc., you have to rely either on a /sys filesystem interface (which wasn't stabilized when I tried it but could very well be now) or on third-party tools of varying quality.

As for the actual experience itself, I've owned GPUs from multiple architectures (Polaris, Raven Ridge, Vega) and I've noticed a common pattern. When the hardware is new, it's unstable. A few kernel updates later (typically over a month) they run flawlessly. To be fair a lot of the crashes/freezes I've experienced could be traced down to Mesa and LLVM. I still would give new AMD hardware time to mature though.

Performance is on par with the Windows driver package (probably because they share a lot of code). You get your money's worth. Some of the games I run on DXVK offer near-native performance.

tl;dr: there's never been a better GPU driver on Linux, but it's not quite ready for your grandma yet

I've been using AMD GPUs since they first stabilized the radeonsi driver ~6 years ago.

7870 -> 290 -> 580 -> just got a 5700 XT yesterday.

They are good. It generally takes 6 months after a card is announced for the drivers to work properly, but I'm currently on linux-mainline 5.4r6 and mesa-git and the 5700 XT is working nicely. On 5.3 and Mesa 19.2 / LLVM 9 there were a lot of graphical glitches and crashes, so that series should be in place within a few months.

The other 3 just keep chugging along, working nicely. The 7870 is too old to get AMDGPU/Vulkan support unless it's turned on manually, but that has worked in light testing.

My only complaint is that hardware video encoding is awful - it hogs enough resources to substantially hamper game performance if used concurrently, enough that it makes more sense to software encode on a beefier CPU than to try to use the hardware encoder on the GPU.

I've got an RX580; it works almost flawlessly, including for games emulated via Proton/etc.. The only significant problem I've had was when I (unwittingly) received a new Mesa installation (I'm on NixOS/unstable, so, rolling release) and everything I'd already played stopped working. Took me a while to figure out I had to delete shaders that'd been cached for the older version of Mesa. I imagine most non-rolling distros wouldn't have that problem.

Bought an RX 570; it worked great. Undervolted it for more performance; would recommend it.

Meanwhile, I'm still waiting for an AMD-based NUC-size PC with Zen 2 cores and a more powerful Radeon iGPU.

It's not a NUC, but the ASRock DESKMINI A300 is quite small and might cover some of your needs...

The A300 is wonderful, but there aren't any Zen 2 APUs available. When the next-gen APUs with Zen 2 cores come out, I'm hoping to build an A300 with a Ryzen 5 APU and 32GB RAM. Seems like a dream dev machine for me.

As an alternative to the ASRock A300, though again a bit bigger, there's also the beautiful but expensive Cirrus7 Incus A300, which is completely fanless[0].

[0] https://www1.cirrus7.com/en/produkte/cirrus7-incus/

Using the ASRock A300 today and already pretty happy with it (small, quiet, reasonable cost, gets the work done), though of course agree the next APUs should be a noticeable step up.

It'd've been nice if ASRock's micro-STX formfactor had taken off and gotten an AM4 variation (http://www.asrock.com/nettop/Intel/DeskMini%20GTXRX/index.as... ). It's a mini-tower bigger than NUC or the A300 but smaller than mini-ITX. Compared to mini-ITX, it takes an MXM GPU instead of having a full-sized PCI slot and uses an external brick PSU. The extra benefit on the AM4 side would've been that even a weak dGPU allows using non-APU chips, at least up to the supported TDP.

Throw in passive cooling please! Still waiting for the iBOX-R1000 to become available. Meanwhile more options are very welcome.

Abandoning X399 is silly; AMD has effectively cut off entry-level HEDT folks (24 cores is the minimum now, and 8-12c is plenty for e.g. deep learning researchers) and upset previous enthusiasts who invested a lot of money for the possibility of a future upgrade (boards in the range of $350-$650). Given that EPYC Rome up to 32 cores can run in 1st-gen EPYC boards with just a BIOS update, it's difficult to understand the decision (unless they cut corners on the TR1/TR2 boards). I don't see any reason not to release backwards-compatible TR3 parts, even if they had to limit frequency/TDP on older boards...

I had planned on buying a 64c TR3, but I'll be skipping this and the next gen and buying a TR5 with DDR5 in 2021 instead.

They changed the interface between the CPU & the chipset, that appears to be the main breakage. Specifically they doubled the width of it, not just bumped from PCI-E 3.0 to 4.0. Instead of an x4 connection it's now an x8 connection.

It does suck, but since they also seem to have dropped any of the lower-cost SKUs it's probably not a motherboard upgrade that's going to stop you from dropping $1400 on a CPU.

And if you really were considering a 64C one, it's hard to believe a price difference of ~$400 will matter on what's going to likely be a ~$4000 CPU. It's a ~10% price difference.

They could have just released X399-compatible TR parts that were, e.g., frequency/TDP/voltage limited on the older PCIe, or made it configurable. They didn't need to do that for the new EPYCs anyway, so there obviously was a way (the new EPYC socket was needed only for >32c parts).

I had planned to bump my Zenith Extreme TR with 128GB ECC RAM to 32c from this gen and use it for e.g. gaming, while investing in a TRX80/WRX80 64c TR. Now I'm actually pretty upset; I'll rather invest in a bunch of RTX 8000s. They went from something I was looking forward to this past year to something I'd like to forget about ASAP, like the final GoT season... I might even become an Intel fanboy now.

> They didn't need to do that for new EPYCs anyway so there obviously was a way (new EPYC socket was needed only for >32c parts).

Epyc has a different PCI-E layout from Threadripper and always did.

> I planned to bump my Zenith Extreme TR with 128GB ECC RAM to 32c from this gen and use it for e.g. gaming

I mean, you still can; it's just slightly more expensive than it otherwise would have been. And instead of selling 1 used part you now sell 2?

Like I said I agree it sucks, but you seem to be really blowing this out of proportion. I'm far more annoyed at the missing lower-end SKUs than the motherboard cost. Where's the update for the 12-core where the platform IO is more valuable than raw core counts?

> I might even become an Intel fanboy now.

You're going to become a fanboy of the company that never does backwards compatibility just because you didn't get 3 generations of backwards compatibility on 1 out of 3 platforms?

> blowing this out of proportion

The best way/time to express disappointment is right away and in full force. If AMD were on IMDb, they'd get 1/10 for handling this. I have every right to behave emotionally instead of rationally anyway.

Used TRs sell for peanuts, and the same goes for mobos (there's no demand for used stuff; look at what actually sells instead of what sits listed for months). It would mean going from $1.5k to $400, writing off about $1.1k in the process. And there will be plenty on eBay soon, putting even more downward pressure on prices (both the AM4 3900X and 3950X now beat all TR1/TR2s up to 16 cores, sometimes even 24c). The missing low-core parts are another thing that wasn't well thought out in all this, I agree.

As for Intel, they were always upfront about the need to change mobos with almost every new generation (the last few were exceptions). I also never had as many issues with any Intel pro board as I had with the ASUS Zenith Extreme, their "flagship" TR mobo, which can't even run 2x Titan RTX properly...

> As for Intel, they were always upfront about the need to change mobos with almost every new generation

AMD never said TR4 was forwards-compatible. They did say that for AM4 & for Epyc SP3.

Hindsight is 20/20 yada yada but the lack of forwards-compatibility promises should be treated as rolling the dice on that.

I think everybody expected it, as people even booted EPYCs in TR4 boards.


I somewhat agree. I don't need 24 cores and 12 would be more than enough for me, but proper ECC support, workstation-grade motherboard, 4 memory channels, big L3 cache are nice to have.

FWIW, ECC support and workstation-grade motherboards are both features you'll find on the AM4 platform -- "workstation-grade" being practically a standard feature across the entire X570 lineup, as PCIe 4.0 signaling requirements basically forced it. TR3 does have a larger L3, but the 3900X is no slouch either, with 64MB of it.

Here's a bad question from someone who doesn't really understand the differences of CPU architectures and is therefore apprehensive about making related decisions:

Will I be hurting myself if I buy a computer with an AMD chip, in that I might end up in a situation where certain programs won't work for me? E.g., if I do fancy 3D modeling (Cinema 4D, fancy renderers), multi-threaded programming (in MATLAB), physical simulations (in COMSOL), etc.?

Nope, with one possible caveat: AMD doesn't support AVX-512 instructions in any of their CPUs as far as I know, but AVX-512 is very uncommon, and most Intel chips don't support it either. This won't be an "it won't work" scenario, but if a developer has written code that uses AVX-512, any CPU that supports it will run those operations substantially faster.

The only real negative one is that rr ( https://github.com/mozilla/rr ) is not available on Ryzen for now.

Other than that, you are safe.

No, programs do not interact with the CPU directly so there is no difference from that perspective. Rendering and anything multithreaded will definitely favor Ryzen as you will have more cores/threads at the same price point.

As someone who just tried to run TensorFlow and found out that for my specific CPU I can't use the prebuilt Docker images and have to build my own from scratch: yes, it's certainly a headache.

The only time I could imagine this being the case is if there was code already compiled for a specific architecture. In these cases you would have compatibility issues between different intel CPUs as well so I do not think it is a reason to avoid AMD.

Sadly my FX 8350 is still cranking away just fine for my needs - really looking forward to upgrading to Ryzen when the time comes.

Can't wait for the 3950x. Hope launch supplies are adequate.

Back when Ultima 9 was released around 2000 my PC had 128 MB of RAM. Now Threadripper has the same amount of L3 cache in total (Epyc twice as much).

I've been away from AMD since the Athlon days. Now I finally felt secure getting an AMD processor with the Ryzen family, and ended up getting a Ryzen 5 3600X since it was on sale (otherwise I would've gotten a regular 3600). I ended up paying less on my new build.

> AMD today has lifted the covers on its next generation Threadripper platform, which includes Zen 2-based chiplets, a new socket

Hopefully this socket change is for Threadripper CPUs only.

AFAIK they already had their own socket, distinct from the "regular" AM4 socket.

AMD has confirmed that AM4 will be supported at least through next year. https://hothardware.com/news/amd-confirms-am4-socket-support... They didn't make any such promises about TR sockets.

Who are the best PC builders to get a Ryzen 9 through with a fair amount of RAM for a dev box?

If your work pays for it, just get a prebuilt box.

If you're buying it for yourself, it's easy to DIY. I just built a 3900x with 32gb (you could easily add more), super easy, cost me £800 total for cpu, ram and mobo. The only thing was availability of the 3900x, had to wait for a bit.

I already had:

- old case

- 1080ti gpu

- nvme & ssd drives

I bought:

- 3900x (comes with a good heatsink+cooler) - £480

- 32gb (2x16gb 3200mhz DDR4, also runs at 3600mhz without issues) Ballistix Sport AES ram (micron e-die) - £140

- Asus TUF x570 Gaming Plus mobo - £180

Took about 2 hours to build and configure + about 3-4 hours researching what to buy! which you can skip if you buy the same :)

Very happy, it's super duper fast for workloads that use multiple cores.

Is there some reason you're not interested in building it yourself?

Though to answer your question, I don't think you can go wrong with https://system76.com/desktops

The answer to why not diy is usually predominantly available time.

I recently did a tear down of my main desktop to upgrade the SSD mounted on the back side of the motherboard. Due to being a small form factor case, I had to pretty much tear the whole computer apart to take the motherboard out. This whole process of tear down to the point of reinstalling the OS was probably 45 minutes. It shouldn't take too much time to build a desktop, but I'll admit it doesn't always go perfectly to plan.

There are plenty of places to find lists of parts picked by people who just love building computers, so getting a list of parts to order isn't too difficult even without a lot of knowledge into components.

That said, for people who really want to just get a fully assembled computer most Ryzen 9's are going to be sold by boutique builders. It seems like most of the big name OEMs aren't building many units with Ryzen. This is especially true for the highest end parts such as the Ryzen 9's and Threadripper. System 76 is probably the best place to go to get a professional looking machine, otherwise try the smaller gaming rig places such as Cyberpower or Alienware if those kind of aesthetics are acceptable (or wanted, some people need RGB LEDs everywhere :) )

With how easy it is to slip up and destroy the pins on these things, I'm amazed how comfortable everyone seems to be with handling a $750 chip with absolutely no recourse for damaging it besides spending another $750.

> With how easy it is to slip up and destroy the pins on these things

Every socketed CPU I've ever seen had a graphical mark or a little cutout on the top of the chip which lined up with the motherboard's socket.

Unless you go out of your way to ignore that marking and jam it in, the odds of bending pins are really, really low / close to impossible.

I'd be more concerned about mounting an after market heat sink on the CPU. I don't know how much has changed in the last few years but the amount of force you need to use to lock them down makes you think you're going to snap your motherboard in half.

You really have to be unlucky or do something stupid to bend the pins. You just have to pick up the CPU and put it down in place without any pressure. If you drop it or try to force it somewhere, sure, you can kill it. But how often do you drop something every day? For me it's the same probability as damaging my car while parking in a spacious parking lot, which would be even more costly. Or dropping my laptop.

I recently watched my friend attempt to remove the stock cooler from his 2200G; the CPU yanked clean out of the socket, bending about 50 pins in the process.

Unlucky, sure. Dropping things is unlucky, and that is all it takes. People drop things all the time.

You're supposed to twist the cooler off to break the thermal compound. Pulling it straight off is a really bad idea.

> With how easy it is to slip up and destroy the pins on these things

Is it any easier to destroy than any other CPU? Just as easy to crash a Lambo as a BMW but people still drive those.

With Intel CPUs the pins are on the motherboard socket. You can easily destroy the motherboard instead, but when dealing with high end CPUs the motherboard is probably the less expensive component.

Start to finish build time(including OS install) is maybe two hours if you're a complete newbie. Everything is color coded and keyed to fit into only one spot.

Putting the computer together is typically the fastest step in the process. However, you have to factor in trouble-shooting time if some component doesn't work. More wait time if you need to ship it back and get a new component. Then there's all the time up front to research desired parts and find the best prices. Filling out rebates and then having to think about getting the rebate check back for the next 4 to 8 weeks.

There's a lot more to time and cognitive load to account for than just plugging in RAM sticks.

> you have to factor in trouble-shooting time if some component doesn't work.

> More wait time if you need to ship it back and get a new component.

> Then there's all the time up front to research desired parts and find the best prices.

These three downsides are not avoided by purchasing a pre-built, either.

Aren't they?

1. The computer has been tested before shipping (at least enough to install an OS on it), so the odds that it's got a DoA part are virtually nil.
2. Sure, but see above: less risk that any components will be broken.
3. You don't have to do nearly as much research -- you know they're shipping you a working configuration, and that the motherboard socket fits the CPU and the RAM in the listed configuration. You might still want to do research to find the best bargains or performance, but the time is way less.

What you described is not a normal experience. Sure, problems crop up, but they can and definitely do happen with a prebuilt machine too. If you're already working on an existing machine, wait time doesn't really factor into any of it.

I've never built a machine without at least one DOA part. I've never had one just turn on and work the first time either.

As someone who has built at least a hundred PCs, I have to assume that you are either breaking the parts yourself or buying from the shadiest resellers. Or, only built one and gotten very unlucky.

I build a PC every few years. You've built at least 100, so you're much better at it. You have the experience to avoid problems and solve problems that come up that the average builder doesn't.

The advice that building is the way to go is not so obviously the best advice for someone like me, who just needs a new computer once in a while.

You said that you always receive DOA parts. This shouldn't be affected by experience, unless you are damaging them.

Or for work. I ain't building something myself for work: if it breaks, I fix it. I'm going to buy it from someone, and if it breaks, they fix it.

If you're going prebuilt, you can afford to throw around money. If you can afford to throw around money, you can skip thinking about the parts, buy the most expensive a la carte components, and end up better off.

Then, putting them together takes maybe thirty minutes?

Going prebuilt for a desktop is never worth it.

> Then, putting them together takes maybe thirty minutes?

Getting the cabling in neat order takes that 30 mins alone. So does CPU+cooler installation. I always reserve a whole afternoon+evening for full computer rebuilds. Then again, that happens every 5-8 years so I'm always a bit rusty when starting.

Also, never had a prebuilt so could be they aren't as clean with the cabling of course.

They're definitely not that clean. The wires are just shoved in there, except maybe with some high-end Dell servers. You can do the same in your own builds, too; the only time it matters is if you're ricing a uATX case.

It could take over 30 minutes for sure, but at most an hour or two unless you've never done it and have no idea what the parts do. Just from looking at the parts on Amazon or wherever, most people can figure out in their head where everything goes.

Having a computer that's already ready to go, and any problems are dealt with via a single supplier? I'd pay a couple hundred extra at least for that.

That's kind of optimistic. The only service I know of that does this is Dell's enterprise agreement: the computer breaks, and they fix it within a few hours or give you a loaner.

With anything else you're looking at at least two weeks. Typically it's better to just buy another machine or buy more parts, then repurpose the fixed one.

Does this require liquid cooling?

Not at all. I've got an air-cooled 1920X, which works just fine. Noctua makes good fans.

The downside is, a fan capable of cooling a 200W CPU is going to be huge; mine only has a few millimeters of clearance, and that's in an EATX case. AIO water-cooling is easier to fit.

Same here with a 1950X. I use a Noctua NH-U12S and it usually operates around 45-50C with a few VMs and an IDE running. Mine is crammed into an old Sun Ultra 24 case with a few mm to spare.
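For what it's worth, on Linux you can spot-check those temperatures without extra tooling by reading the kernel's hwmon sysfs interface (the AMD CPU die sensor is typically exposed by the k10temp driver); exact paths vary by board and driver, so treat this as a sketch:

```shell
# Convert the kernel's millidegree-Celsius readings to whole degrees.
millideg_to_c() { echo $(( $1 / 1000 )); }

# Walk every hwmon sensor the kernel exposes; temp1_input is in
# millidegrees C, and the AMD CPU sensor usually reports as "k10temp".
for f in /sys/class/hwmon/hwmon*/temp1_input; do
    [ -r "$f" ] || continue
    name=$(cat "$(dirname "$f")/name")
    echo "$name: $(millideg_to_c "$(cat "$f")")C"
done
```

(The `sensors` command from lm-sensors is the friendlier front end for the same data.)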

You should definitely share pictures of that setup. At least, I would like to see it!

Agree - would love to see the build

Can the motherboard break from the cooler's weight? Those huge coolers are seriously heavy.

These days there's usually a back bracket which helps distribute the weight of the cooler over a large part of the board as opposed to earlier coolers which would only mount to a few screw holes. Extra large aftermarket coolers usually come with extra large mounting brackets. I wouldn't be too worried about any name brand cooler causing damage to the board. Follow the directions from the kit and it will probably be fine.

If you get a giant cooler it's usually recommended to uninstall it before transporting the computer (which means having rubbing alcohol and fresh thermal paste on hand to reinstall it afterward), but stationary it's perfectly fine. Motherboards also have to hold up high-end gaming GPUs, which weigh just as much (and are also recommended to be removed for transport).

The TR4 socket is extremely sturdy, and can take a great deal of weight - far more than you may be used to for more home desktop oriented parts. Given that the new socket needs even more cooling I think it's a reasonable assumption it'll be at least as sturdy as before.

Maybe at sufficient g-forces.

I might unmount the cooler before shipping the box somewhere, but if it's just sitting there you will generally be alright.

Is that at base speed? I just recently got the parts for a 3900X build, and while I originally went for air cooling, I switched to a liquid cooler at the last minute, worried about overclocking temps.

I care about reliability a lot more than speed, so I'm not overclocking.

There's a few threadripper aircoolers on the market that perform fairly well. Here's a year-old guide to them: https://www.tomshardware.com/news/air-liquid-cooler-threadri...

I assume anything rated for 250 W will work fine with these chips. Probably not ideal for overclocking, but if your case has decent airflow you'll be fine.

AMD suggests using a liquid cooler with the 3950X, but I would be amazed if one of the dual-tower coolers like the Dark Rock Pro 4 or Noctua NH-D15 didn't work fine in a case with good airflow.

Threadrippers don't require liquid cooling; they have a larger heat spreader and lower peak clocks, so heat is more manageable.

The 3950X has the same TDP as a 3900X. Both should be fine with good air coolers. I recently built my first new desktop system in years: A 3900X with a Dark Rock Pro 4 - in a slightly more spacious Mini ITX case (Lian Li TU150). Works great!

AMD's TDP numbers are much closer to real-world figures than Intel's, which are pretty much imaginary numbers only valid at base clock.

Just a word of caution. Although this is generally true at the moment, AMD's definition of TDP has nothing to do with electrical power or heat output. It's just a marketing number that happens to come close to electrical power. Here's a very good source if you want details: https://www.gamersnexus.net/guides/3525-amd-ryzen-tdp-explai...
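If I recall the linked article correctly, AMD's TDP is defined entirely from cooler thermal properties rather than measured power draw, roughly:

```
TDP (W) = (tCase°C - tAmbient°C) / (HSF θca)
```

where tCase is the maximum allowed temperature at the heatspreader, tAmbient is the intake air temperature, and θca is the assumed cooler-to-ambient thermal resistance in °C/W. Note that nothing on the right-hand side is an electrical measurement, which is the point: it's a cooling spec that merely happens to land near real power figures.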

Drool. I want one! Make -j64 ;)
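Small aside on that flag: hardcoding -j64 only fits one particular machine. A sketch that scales to whatever box you're on, using nproc from GNU coreutils:

```shell
# nproc reports the number of logical CPUs currently online;
# feed it to make so the build uses every thread the box has.
jobs=$(nproc)
echo "make -j$jobs"   # on a 32-core/64-thread Threadripper this is "make -j64"
```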
