We sort of make fun of the multi-colored CPU coolers that come with the CPU though. It seems weird to have these in business machines.
(We have also invested in another 16 AMD Threadrippers; they are just amazing power for the price. It is sort of weird that in less than 6 months, we have become an AMD shop after being Intel for as long as I can remember.)
This is also the case for GPU cards, and their "gamer" marketing approach.
The big thing I usually look for is whether the PCB layout matches the reference layout. Otherwise liquid cooling (which I want mostly for how much quieter it is under load) becomes much more of a pain.
How do you do that?
I mean, where do you look for the PCB layout of a potential card you will be buying? Do you physically open the box and take a picture?
Also, why would the PCB layout make any difference?
I mean, I've been buying 3D cards since the original Voodoo but I just never heard of this metric. Sounds interesting but would be nice to have some more details :)
Unfortunately, it's a true pain to determine whether or not a card matches reference. The reason is simple: if the layout changes, there's a good chance the water cooling block won't correctly align with the high points on the card. When that happens, one of three things follows: the block won't make contact at some points, you'll have to bend the PCB, or a component will go uncooled. All really bad things. This can even be affected by the thermal pads the OEM uses.
I mostly check to see if they're using the reference cooler design. I haven't bothered with AMD/ATI cards in ages, so this only really applies to Nvidia products. Cards with a blower-style cooler (a single fan towards the rear of the card, exhausting near the video outputs) are almost always reference. Cards with OEM-designed coolers often are too, but sometimes are not; that's mostly where things get annoying. Thankfully, water cooling OEMs often publish specs from the start as to which cards will match if you're planning on buying day one.
An example: with reference layouts, the 980 and 980 Ti ought to be compatible. But some EVGA cards have memory mounted on the rear of the card, which makes backplate installation impossible depending on which backplate you use (either a 980 backplate or one meant to fit both the 980 and the 980 Ti), as it'll teeter. Worse, that can mean having to find a different set of screws to mount the water block you wish to use.
A graphics card is composed of much more than the GPU itself. They basically stuff all the components of a mini computer (power management, memory, processor) onto a single board, and each of those subcomponents needs cooling to achieve optimal performance. A GPU heatsink that neglects the VRMs or memory will not overclock as far as one where those parts are actively cooled, and in a liquid-cooled system there is typically very little airflow within the case. Slap an all-in-one water cooling system intended for a CPU onto a GPU and you'll often get memory issues right off the bat even at stock clocks, and overheated VRMs will lead to throttling after a relatively short time under heavy use. The fans on a GPU provide enough direct airflow to keep those components within spec; remove the fans and replace them with nothing and you lose that necessary airflow.

Since each card manufacturer uses their own PCB layout (once they figure out what to improve or cut costs on versus the reference design), there is no way to guarantee a heatsink will fit anything but the reference design. Card manufacturers place their VRMs all over the place and sometimes even stray from spec in memory placement, so heatsink manufacturers can't standardize on anything but the one design guaranteed to have a market presence: the OEM reference PCB.
[1] Here's an article covering an old Nvidia reference cooler design: https://techbuyersguru.com/founders-edition-vs-open-air-vide...
[2] Here's a GPU water heatsink: http://www.phanteks.com/PH-GB1080.html
It does leave someone building a "serious" budget PC with Ryzen with the feeling that they're doing something wrong. Maybe that's the intent.
Depends on what you need. In the general case, no. If, however, you need something 'special' (the very fastest CPUs, lots of RAM, and/or multiple GPUs, for example), then Dell/HP etc. tend to greatly overcharge for those configurations.
This has changed somewhat with the decline of Moore's Law. Companies are keeping equipment longer, past the point that OEMs typically provide warranties or even parts.
When you know that's going to happen you want to start with something that uses standard parts rather than vendor-specific ones.
For the small or midsize companies that need that extra power, though, building can work if you have the talent in house. The parts are cheaper, more standard, and more powerful, and everything is very reliable these days, so it's no longer a major support issue.
Setting "typical current idle" in "Power Supply Idle Control" in the BIOS seems to solve the problem here. Comments in the kernel bug report regarding this also show that this apparently solves the problem.
I've seen about a dozen reports of faulty post-week-25 processors crop up in the AMD Community Forum and on Reddit from different users (links below; the batch codes are decoded in the snippet after them). That is of course a lot better than pre-week-25, where the overwhelming majority of units were faulty. Then again, most users aren't testing for it in the first place, especially since the newer processors were widely (and incorrectly) reported to be "fixed", so it's hard to say.
https://community.amd.com/thread/215773?start=1635&tstart=0 (user jcoiner, 1726PGT)
https://community.amd.com/thread/215773?start=1725&tstart=0 (user scorpio810, 2x 1728SUS)
https://community.amd.com/thread/215773?start=1770&tstart=0 (user ryzlin, 1733PGT)
https://community.amd.com/thread/215773?start=1785&tstart=0 (user flyinryzen1700, 1737SUS)
https://community.amd.com/thread/215773?start=1830&tstart=0 (user skimba, 1725PGT; user xtronom, 1728PGT)
https://community.amd.com/thread/215773?start=1860&tstart=0 (user jc_yang, 1742SUS)
https://community.amd.com/thread/215773?start=1875&tstart=0 (user fedor_s, 1726SUS; user karabojkov, 1741SUS+1737SUS; user spiffy, 1748PGS)
https://www.reddit.com/r/Amd/comments/76q7ne/got_a_defective... (user grosbof, 1733SUS)
https://www.reddit.com/r/Amd/comments/7ar15o/psa_amazon_is_s... (user triplesal, 1733PGS)
https://www.reddit.com/r/Amd/comments/7ar15o/psa_amazon_is_s... (user istockporno, 1726PGT)
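(For anyone decoding the batch numbers above: the leading four digits are a year/week date code, so e.g. 1726PGT reads as week 26 of 2017; that's how pre- and post-week-25 parts are being distinguished. A trivial illustration, assuming that common community reading of the code:)

    def parse_batch(code: str) -> tuple[int, int]:
        """Split a Ryzen batch code like '1726PGT' into (year, week).

        Assumes the usual community reading: digits 1-2 are the year
        (20xx) and digits 3-4 the production week.
        """
        return 2000 + int(code[:2]), int(code[2:4])

    for code in ["1726PGT", "1733SUS", "1748PGS"]:
        year, week = parse_batch(code)
        status = "post-week-25" if (year, week) > (2017, 25) else "pre-week-25"
        print(f"{code}: year {year}, week {week} ({status})")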
I am guessing that this was finally squashed for good in the 2000-series stepping.
Incidentally, there have been reports of the latest patch of Battlefield One causing strange random crashes on Ryzen 1000-series but not on 2000-series or Intel processors, reportedly fixed by disabling SMT, and I'm wondering if that's another manifestation of the segfault bug. It's hard to say, and the reality is the first-gen silicon will never be 100% trustworthy. Buy 2000-series processors instead; it's worth it for the peace of mind.
How does one do the test?
There is also a Windows variant that may be more sensitive than the Linux variant, but fewer people have used it, so YMMV.
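The usual Linux route is the community kill-ryzen script, which, as I understand it, runs GCC builds across all cores in an endless loop and flags any segfault. A stripped-down sketch of the same idea in Python (the trivial source file is my own placeholder; the real script rebuilds all of GCC, which stresses the chip far harder and triggers the bug much sooner):

    #!/usr/bin/env python3
    """Crude segfault smoke test: saturate all cores with parallel gcc
    runs and flag any compiler crash (SIGSEGV shows as returncode -11)."""
    import concurrent.futures as cf
    import os
    import signal
    import subprocess
    import tempfile

    SOURCE = "int main(void) { return 0; }\n"  # placeholder translation unit

    def compile_once(src: str) -> int:
        """Compile once, discard the output, return gcc's exit status."""
        return subprocess.run(
            ["gcc", "-O2", "-o", os.devnull, src],
            capture_output=True,
        ).returncode

    def main() -> None:
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(SOURCE)
            src = f.name
        workers = os.cpu_count() or 8
        jobs = [src] * (workers * 500)  # arbitrary run length
        failures = 0
        with cf.ProcessPoolExecutor(max_workers=workers) as pool:
            for i, rc in enumerate(pool.map(compile_once, jobs)):
                if rc != 0:
                    failures += 1
                    sig = " (SIGSEGV!)" if rc == -signal.SIGSEGV else ""
                    print(f"run {i}: gcc exited with {rc}{sig}")
        os.unlink(src)
        print("no failures" if failures == 0 else f"{failures} failed runs")

    if __name__ == "__main__":
        main()

A clean pass on something this light proves little; the value of the real test comes from the heavy, memory-pressure-laden workload of a full GCC build.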
I get lockups once every 2-3 days. I was convinced the lockups were caused by a Logitech wireless keyboard, but the lockups persisted even after trying several different keyboards.
I got my Ryzen 5 soon after release. I really want to love this chip, but the lockups are killing me and are so painful to debug. I will try disabling C6 and see if this helps!
Intel stock: https://i.imgur.com/egFD543l.jpg
Ryzen stock: https://i.ytimg.com/vi/Y0efXDSgSPU/maxresdefault.jpg
For first-generation Ryzen, the -X models were often sold without a bundled stock cooler, which limited uptake of the Wraith Max. Some had Wraith Spires with LEDs, others without.
For second-generation Ryzen, all CPUs come with stock coolers. The Ryzen 7 2700X has Wraith Prism and the 2700 non-X has Wraith Spire w/LED. The stock Wraith Spire and Wraith Stealth coolers for other Ryzen 2000-series CPUs do not have LEDs.
Of course, you can always turn the LEDs off.
But right, OP was asking only for a 15W CPU and the GPU level of a 650 Ti, which is weaker than this.
It's smaller than a Mac Mini, yet you can do serious gaming, deep learning, and cryptomining with it; that's not possible with Hades Canyon. It's quiet as well, and power consumption should be quite a bit lower than the 1080 version's.
But Thunderbolt is going to become an open standard, so hopefully that means a lot of people will be designing or selling Thunderbolt IP for SoCs. And I hope it happens in time for AMD to include it in their Zen 2 products.
"On 24 May 2017, Intel announced that Thunderbolt 3 would become a royalty-free standard to OEMs and chip manufacturers in 2018, as part of an effort to boost the adoption of the protocol."
If you're going to get one of those, go for a 35W TDP CPU, as I had some issues with overheating on a Ryzen 2400G using this case -- https://www.newegg.ca/Product/Product.aspx?Item=9SIA93K72F62...
Here's the complete parts list for my build -- https://ca.pcpartpicker.com/user/avolkov/saved/#view=mYmM8d
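If you're keeping an eye on thermals in a cramped case like that, the kernel's hwmon sysfs tree has what you need (on Ryzen the CPU sensor shows up under the k10temp driver). A quick sketch that dumps every temperature sensor it can find:

    #!/usr/bin/env python3
    """Dump all hwmon temperature sensors (Linux); values are reported
    in millidegrees Celsius."""
    from pathlib import Path

    for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
        chip = (hwmon / "name").read_text().strip()   # e.g. k10temp, nvme
        for temp in sorted(hwmon.glob("temp*_input")):
            label_file = temp.with_name(temp.name.replace("_input", "_label"))
            label = label_file.read_text().strip() if label_file.exists() else temp.name
            print(f"{chip}/{label}: {int(temp.read_text()) / 1000:.1f} °C")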
When do you see the lag? While playing games? Development work? I am looking to buy an NUC, and I do lots of photo editing and development.
- Morex 557 Universal Mini-ITX Case
- Mini Box PicoPSU-150-XT 12V
- ASRock Mini-ITX Motherboard - AB350 GAMING-ITX/AC
- AMD Ryzen 3 2200G
- Noctua NH-L9a-AM4 low-profile
- 12V/10A power supply with a 5mmx2.5mm plug
- your choice of DDR4 and M.2 SSD
Amazing performance, to say the least; the GPU makes a huge difference.
About driver support: in my experience it's the other way around. Custom-designed hardware decreases reliability. I have never encountered issues caused by driver support in the desktop PCs I've built from commodity components, but I do remember issues caused by non-standard hardware design in e.g. Sony VAIO laptops.
They said 2nd quarter but there has been nothing but silence since CES.
By breaking Intel's hold on the NUC form factor, it should bring prices down as well.
edit: As AMD already throttles DRAM speed dynamically on Ryzen Mobile to conserve power, it would be an interesting experiment to see whether they could throttle ECC DRAM even lower, so that it consumes less power than normal DRAM.
If you don't need or want your employees to have a portable machine, it offers all the traditional benefits. Specifically:
- Lower purchase price/higher performance for a given price.
- Lower failure rate/easier to repair. They have to make far fewer design compromises: they don't have to worry about battery size or life, they have plenty of space for proper cooling so they aren't pushing thermal limits, and with all that space they can design for easy servicing. Open up a modern corporate desktop machine and it's probably not going to require any tools to swap out anything but the PSU or CPU/motherboard. It's also not going to have many proprietary parts that are hard to find or expensive to purchase.
- Upgradeable. No soldered-on RAM or flash/SSD, expansion slots, etc. Plenty of places don't have the budgets for "ideal" upgrade cycles, so that can become important for keeping what they have tolerable for longer.
The sensitive-information-leakage concern is covered by full disk encryption in all those cases.
At least once a year I have to explain to the auditors that no, we do not clamp our desktop boxes down. Every system is set up with full-disk encryption.
Auditor: "What happens if someone steals one of those boxes?"
Me: "We lose perhaps 1k worth of kit. An annoyance. Someone has to spend time reinstalling their setup. But the data on the disk is illegible garbage. Useless."
Auditor: "Have you documented that as an accepted risk?"
Not everyone (as in "almost nobody") has demands that stress any modern processor. I'm a developer, and the small laptop I carry to hackathons is a Celeron-based Acer. Emacs runs fine on it, and it's very unlikely I'll ever do more than run Python web apps and a browser with one or two tabs open.
To put that in context, one desktop PC was shipped for each 2.2 iPhones. That's pretty impressive.