My desktop/Plex server is due for an upgrade next year. Maybe the Threadripper price will go down.
Hardware BMCs have their place (e.g. low-overhead compute-cluster nodes, where free cores = profit.)
But, for most workloads—and especially consumer workloads—there’s no reason that the concept of a “Baseboard Management Controller” needs to be instantiated as hardware. You can just as well set the system up with a hypervisor OS (e.g. a minimal Linux KVM install, or an appliance OS designed for this, like VMware’s ESXi), set your regular workload up as one VM guest (and pass through to it all the nice hardware you have, like GPUs), and then set up another “control plane” guest VM that exposes IPMI-style management of your regular guest and of the hypervisor itself. As they say, “there’s no problem that can’t be solved with another layer of indirection.” ;)
(I should note, this is exactly the setup you get by default if you install ESXi [hypervisor] + a free home license of vCenter Server [BMC-equivalent appliance] onto a box. I was happily using this exact setup for quite a while, though I eventually moved to Linux+KVM+Xen just because I wanted the host to be able to create guest volumes from a thin-provisioned storage pool and then serve them out to the guests over iSCSI, as if I had a teeny-tiny SAN.)
Of course, this has only become a viable approach for IoT integrators very recently, which is why we don’t see any IoT appliances (e.g. NASes) coming set up this way from the manufacturer just yet. Until recently, your choices for building IoT devices were microcontrollers at the low end; old ARM cores in the middle; and Intel’s most “power efficient”, feature-stripped cores on the high end. None of these were particularly suited to hosting virtualization. But Ryzen is! While it may only be affordable to home-builders today, I expect to see AMD chasing Intel into its “power-efficient embedded” market segment quite soon, with Ryzen-based, many-core, virtualization-capable equivalents to the Intel Atom line being sold for cheap enough to get system integrators excited.
But! The FreeNAS community is a bunch of grumpy sysadmins. I’m considering going down the Linux-and-ZFS route. I’d be able to do more with VMs (I feel more comfortable in Linux than FreeBSD). I’m building some IoT Pis to collect data, and having a Linux box to land it on would be nice.
The UniFi USG handles dynamic DNS and my VPN.
- You generally don't want to run storage servers virtualized.
- Tooling matters. There are multiple reasons I generally do things the same way at home as I do at work (within reason).
- Probably a niche concern, but I have some hardware that is only configurable during early boot.
- Virtualization costs performance. Not a huge issue at home, granted, and you have to quantify it for your specific workload. (It is usually going to be IO.) But it certainly can matter with home workloads; home theater video processing is probably the most common.
I use both for what they're good at. IPMI is for managing hardware. Virtualization is for not needing more of it.
Not sure I want to spend the money on a 10GbE backbone or wait and upgrade the desktop. My UniFi switches and Supermicro board can do link aggregation, but my Asus mobo only has one port.
DAC is the most sensible option considering I need to go from my basement, up 2 more stories to my attic, and drop into my office.
With regard to USG the Dream Machine looks to be a kind-of-sort-of successor to the really old USG. I would guess a USG replacement is coming, but the Dream Machine Pro will have 10Gb as I understand it. They do product rollouts pretty horribly IMO.
Not the op, but small form factor home servers are so excellent. Pihole, UniFi controller, Home Assistant (coffee machine warming up for 30 mins before I get up!), some testing VMs and a load of docker containers for various chores.
Not to mention that the better bandwidth (if they use a 128 bit interface like the MacBooks) should help increase their iGPU advantage.
I used the InWin Chopin case, which is too small for a video card. Here it is next to a NUC: https://imgur.com/l7dFKCl
This server is used for development and has an i7-8700. Unfortunately AMD does not offer a similarly fast chip with integrated graphics.
There are rumors of AMD building a NUC competitor, which means we may have more options soon.
My current build uses a core i5-4570s, but I'd love to have more cores & threads for running additional VMs. My use cases don't require _real_ server hardware (IPMI is overkill), just the ability to run a good amount of test and lab environments.
How can OEMs still ignore AMD? It's obviously very popular. They have the best offering and no one can match their price.
How long until Intel's dominance falls, given consumer demand and AMD's seemingly (almost) perfect long-term execution:
- 2016 - 8.1% market share
- 2019 - 18 % market share
According to Intel, they have free rein until 2021. My guess is that they will have free rein long after 2023 (when TSMC's 3nm process node arrives).
I think the market share now is mostly because of DIY builders. Am I wrong? The only serious AMD OEM offering I have seen was from Microsoft's Surface.
Side note: AMD is mostly interesting for desktops for now (battery life is the concern on laptops), until next quarter.
But how come OEM desktops lack an AMD alternative? Any articles/information on OEM's partnerships with Intel?
PS. Please upvote a more on-topic comment concerning the product itself. I didn't want this to be the top comment.
AMD are starting to make some waves in the hyperscale DC with EPYC Rome. The real fun starts if that progress translates into your corporate workhorses starting to opt for EPYC over Xeon in their data centres. Intel are a big company to take down a peg, and have had little motivation to innovate any more than needed. AMD also have a bit of a history of doing amazing things and then dropping the ball in spectacular fashion, which is a candle that no corporate buyer wants to be holding when they've pumped a ton of capex into a 5 year deal pinned on fleet maintenance. Intel might not be dynamic, but at least you know what you're buying into. I think that the same translates to those making the purchasing decisions for corporate users. For enthusiasts, what your machine runs on is a big deal. For everyone else, as long as it switches on and lets you get through a working day without going tits up, who cares? And if it does, it needs to be fixed or replaced before you start falling behind on work. That also trends toward buying patterns that focus on known good config.
It's a fun market to watch. If I were back in my old role, I'd be looking for EPYC solutions in my servers and attempting to wrestle a few test laptops with Ryzen 4000 CPUs next year, even if only to worry my boss.
But it's based on Skylake. I'm not convinced that the game will shift sides that fast.
I only mentioned 3nm because it seems like a big difference. But I think the architecture is more important, and that seems to be a home game for AMD right now.
Related: AMD's consumer Zen 2 chips support ECC, which Intel has always segregated to its enterprise-grade parts. Typically ECC is a requirement for the volume-business OEMs (Dell, mostly), so AMD's new products obliterate any value proposition Intel could even have.
Intel has been mired in an antitrust action in Europe for a decade now, based on charges that they have engaged in
> two types of conduct by Intel vis-à-vis its trading partners, namely conditional rebates and so-called ‘naked restrictions’, intended to exclude a competitor, AMD, from the market for x86 CPUs. The first type of conduct consisted in the grant of rebates to four OEMs, namely Dell, Lenovo, HP and NEC, which were conditioned on these OEMs purchasing all or almost all of their x86 CPUs from Intel. The second type of conduct consisted in making payments to OEMs so that they would delay, cancel or restrict the marketing of certain products equipped with AMD CPUs.
(quoted from http://curia.europa.eu/juris/document/document.jsf?text=&doc...)
This resulted in the levying of a €1bn fine against Intel in 2009, which was sent back to a lower court for review by the Court of Justice of the European Union in 2017. (Not on the grounds that Intel didn't do these things, but that the actions by themselves didn't automatically break the law until someone could demonstrate they actually had anticompetitive effects.)
Most of them got burned very, very badly by Opteron. They are likely waiting to see whether this is the new AMD or the good old one that cost them millions in the past. Also, AMD has no production capacity to displace Intel in any meaningful volume.
Azure too, and Netflix is considering changing its CPU portfolio :). One of the latest supercomputers was AMD-only.
But that is not the OEM market.
Rumor has it that Lenovo will do an M75q machine with AMD. Seeing is believing.
In the last 45 years or so there were many times where Intel's status as king of the hill looked vulnerable: Opteron, Athlon, Cyrix, PowerPC, Motorola 68k, Z80, 6502, ... They've always come back, and anybody who bet against them has been burned badly.
That's the whole point of cloudy businesses. They can masquerade it under whatever pretense but the end result is that they want you to use more and more hardware resources where each abstract hardware unit gets healthy profit margin for infrastructure suppliers.
Setting aside Facebook-, Google-, and Netflix-scale cases, most real-life businesses can chug along just fine on dedicated servers running software written in "normal" high-performance languages. You would be surprised what $10,000 worth of modern hardware with high-performance software can achieve.
2) You are making an uber-generic claim. Where is the substantiation? I'll make another claim of the same nature: with this never-ending rapid prototyping, along with whatever they sell under Agile/SCRUM/etc. methodology, the end result is an unbelievable mess of patchwork that after a while becomes impossible to maintain and add new features to. It literally becomes a house of cards.
Meanwhile, as a user, I suffer from crappy, slow software and am told to upgrade perfectly reasonable hardware.
I have the Intel version (running Hackintosh). It's very fast and quiet. Only downside is no Thunderbolt.
On the other hand, they're so cheap you can just buy two. That way you have another one on hand when the first one craps out in six months.
My impression is that the NUC has better quality these days - if you can find one. The i5 and i7 models seem to have virtually disappeared from store shelves within the last month and nobody knows when they're getting more. Otherwise the NUC is amazing bang for the buck if you want a small form factor. The power of a Mac Mini for roughly half the price. And apparently you can Hackintosh it fairly easily.
If you don't care about size, you can build a high-quality 8-core/16-thread Mini-ITX-based AMD server with a mid-range video card for ML applications for less than $800.
The reddit sub has a steady trickle of leaks. https://www.reddit.com/r/intelnuc/
I know I'm arguing against 20+ years of software practice, but Moore's law is over.
The only catch is that it introduces some overhead and has to be carefully managed. One also has to make sure it has measurable ROI.
But this is a safe place for us devs.
Is it bad that I read it as bitcoin corvette at first?
I still don't feel the need to upgrade but when I do this might be the first time where I really consider an AMD CPU. Things are looking really really solid for them lately. I could totally see using one for an all purpose development, video editing / recording and gaming box to replace this i5 eventually.
for the average user, not as much.
Actually there are workloads on which they will heavily benefit the average user as well. Image processing and video editing come to mind especially. You may argue that few users do this on a PC nowadays, but that's mostly oversight by OS developers. MS should revive Movie Maker. I used it a lot 10 years ago when my kid was little. Apple already ships Photos and iMovie with every new Mac, and both of them are pretty great for what they were designed for. Then there's also more and more 4K content on YouTube by the day. My 5-year-old iMac does spin up its fans quite a lot nowadays.
I think it's also a good time to start moving some of the AI workloads to the edge as well. It's ridiculous that we have near instantaneous on-device speech recognition on phones now, but PCs still have to dial back home and incur perceptible latency. I want local speech recognition out of the box in Windows and MacOS (and ideally Linux as well), with automatic punctuation and robust to background noise.
I think the biggest perk is that AMD's new CPUs allow for faster storage (PCIe 4.0 NVMe drives), which many people argue matters more. This is my main reason for looking at one.
 - https://www.youtube.com/watch?v=CVAt4fz--bQ
 - https://weblogs.asp.net/scottgu/Tip_2F00_Trick_3A00_-Optimiz...
Aside: I did a mid-cycle upgrade to a good NVMe drive (Samsung 860) and a GTX 1080 when that video card came out, which was a pretty big bump on the old box.
In your day to day, how big of a difference is that drive vs a SATA SSD you might have bought 5-6 years ago?
I know going from a HDD to an SSD was a mind blowing experience but now things open so fast even on an old SATA SSD that I find it hard to imagine things can feel that much faster.
I have a Samsung 970 EVO and I spend most of my time compiling with g++. Copying large files is nice but overall I find the disk cache in Linux mostly makes the drive performance irrelevant for compiling if you have enough RAM.
Also be aware that if you have an older motherboard with PCI-E 2.0, you'll be limited to 1GB/sec transfer rates anyways.
I also have 960 and 970 EVOs. They are mostly fine, but there's a drop in write speed once the SLC cache fills up. The 960 is also starting to show longer TRIM times than it used to.
So don't spend too much! Just get something known to be reliable with a decent cache on it.
The claims in a sibling thread about a given web build going from 30 seconds on a SATA SSD to 5 seconds on NVMe are likely not true; I think that poster may be assuming the differences would be that large.
Not too much difference for most tasks, but man if you're doing a lot of work in node (or anything else touching lots of files) it's pretty significant. But ymmv on this.
Database work is significantly faster as well.
But now I'm considering upgrading that server to an R5 3600 anyway, as I've been using VS Code's remote feature for development on the train, and that would still benefit from faster compiles.
The 3950X should make my Blu-ray rip/encodes go so much faster as well. I have a stack of a few series waiting to even rip; on the older i7 I was using, it was just about painful how long CPU encodes took, and GPU encodes were really crappy or too large (h.265/HEVC). I can't overstate how much I've been looking forward to a 3950X (since this time last year, actually), which should do well for me for the next 5+ years.
Assuming you're using "nuc" as a generic term, is there an AMD-specific search term I can use to find these rumors? I've been in the market recently for a nuc form-factor machine and I've wanted to go with AMD, but haven't found much.
But there are diminishing returns to adding more cores past a certain point which will depend on your codebase and compiler. If your builds are at 100% CPU utilization most of the time then you will probably see pretty large gains, but sometimes a significant chunk of the time ends up being bottlenecked by single threaded performance.
You should check out Phoronix's Rome benchmarks. Compilers seem to love L3 cache, and the new Threadripper parts have 128MB of it. https://www.phoronix.com/scan.php?page=article&item=amd-epyc...
The Epyc 7502 in that chart is going to be roughly equivalent to the 32-core Threadripper 3 announced today. Both are 32 cores with 128MB of L3, but the Threadripper part has a much higher base & turbo clock speed so it'd compile even faster. Probably.
It's a cool little trick few people seem to know.
Having common vector/map/unordered_map/set/unordered_set template specializations over strings and integer types helps a bit (i.e. basic_string<char>, uint64_t, int64_t, int, and uint).
My methodology wasn't very scientific: when I found a template being specialized at a low level, I added it to my list. Another heuristic: anything that templates off of std::string (basic_string<char>), char, uint64_t, int64_t, int, or uint is a pretty good candidate, as the likelihood of it being reused everywhere is high.
Reasons for mysterious breakage:
- Compiler Updates
- Dependencies getting lost
- Code changes (you break things into parts, doesn't mean they work together now).
Because that's not even in the right ballpark for a stripped kernel config.
As a rough reference, it took about 35 minutes to build the Linux kernel on my XPS 13 a few years ago. That computer has a 2C/4T Kaby Lake processor. Your MacBook Pro might be a little faster if it doesn't have one of the ultra-low-power CPUs.
When compiling LLVM, however, all cores were churning along at 100% utilization, so I expect a big speed-up there.
I bought a Ryzen (Zen 2) for a workstation, where I need to run a few VMs, a local k8s cluster, builds, some browser tabs, and Slack. I have everything running smoothly on top of a Linux 5.x kernel, and so far I'm pleased with the results.
But I kept an older NVIDIA card, and the drivers always had a bit of trouble with desktop Linux support (like Wayland, plymouth bootsplash, etc).
I bought a 5700 XT in July; it was not usable out of the box, but all the pieces are at least upstreamed now. Desktop stability is great, gaming performance is great, and all the basic stuff (Wayland, Plymouth) is solid.
The userland tools aren't ported to Linux however, so you don't get access to the fancy social-media-augmented gamer stuff. If you want to overclock/etc you have to rely either on a /sys filesystem interface (which wasn't stabilized when I tried it but could very well be now) or third party tools of varying quality.
As for the actual experience itself, I've owned GPUs from multiple architectures (Polaris, Raven Ridge, Vega) and I've noticed a common pattern. When the hardware is new, it's unstable. A few kernel updates later (typically over a month) they run flawlessly. To be fair a lot of the crashes/freezes I've experienced could be traced down to Mesa and LLVM. I still would give new AMD hardware time to mature though.
Performance is on par with the Windows driver package (probably because they share a lot of code). You get your money's worth. Some of the games I run on DXVK offer near-native performance.
tl;dr: there's never been a better GPU driver on Linux, but it's not quite ready for your grandma yet
7870 -> 290 -> 580 -> just got a 5700 XT yesterday.
They are good. It generally takes 6 months after a card is announced for the drivers to work properly, but I'm currently on linux-mainline 5.4r6 and mesa-git and the 5700 XT is working nicely. On 5.3 and Mesa 19.2 / LLVM 9 there were a lot of graphical glitches and crashes, so that series should be in place within a few months.
The other 3 just keep chugging along, working nicely. The 7870 is too old to get AMDGPU/Vulkan support unless it's turned on manually, but that has worked in light testing.
My only complaint is that hardware video encoding is awful - it hogs enough resources to substantially hamper game performance if used concurrently, enough that it makes more sense to software encode on a beefier CPU than to try to use the hardware encoder on the GPU.
It'd've been nice if ASRock's micro-STX form factor had taken off and gotten an AM4 variant (http://www.asrock.com/nettop/Intel/DeskMini%20GTXRX/index.as... ). It's a mini-tower bigger than a NUC or the A300 but smaller than mini-ITX. Compared to mini-ITX, it takes an MXM GPU instead of having a full-sized PCI slot and uses an external brick PSU. The extra benefit on the AM4 side would've been that even a weak dGPU allows using non-APU chips, at least up to the supported TDP.
I had plans buying 64c TR3, but I'll be skipping this and next gen and buy TR5 with DDR5 in 2021 instead.
It does suck, but since they also seem to have dropped any of the lower-cost SKUs it's probably not a motherboard upgrade that's going to stop you from dropping $1400 on a CPU.
And if you really were considering a 64C one, it's hard to believe a price difference of ~$400 will matter on what's going to likely be a ~$4000 CPU. It's a ~10% price difference.
I planned to bump my Zenith Extreme TR with 128GB ECC RAM to 32c from this gen and use it for e.g. gaming, while investing into a TRX80/WRX80 64c TR. Now I am actually pretty upset; I'll rather invest into a bunch of RTX 8000. They went from something I was looking forward to in the past year to something I'd like to forget about ASAP, like with final GoT season... I might even become an Intel fanboy now.
Epyc has a different PCI-E layout from Threadripper and always did.
> I planned to bump my Zenith Extreme TR with 128GB ECC RAM to 32c from this gen and use it for e.g. gaming
I mean, you still can; it's just slightly more expensive than it otherwise would have been? And instead of selling 1 used part you now sell 2?
Like I said I agree it sucks, but you seem to be really blowing this out of proportion. I'm far more annoyed at the missing lower-end SKUs than the motherboard cost. Where's the update for the 12-core where the platform IO is more valuable than raw core counts?
> I might even become an Intel fanboy now.
You're going to become a fanboy of the company that never does backwards compatibility just because you didn't get 3 generations of backwards compatibility on 1 out of 3 platforms?
The best way/time to express disappointment is to do it right away and in full force. If AMD were on IMDB, they would get 1/10 for handling this. I have all rights to behave emotionally instead of rationally anyway.
Used TRs sell for peanuts, the same for mobos (no demand for used stuff; look at what actually sells instead of what is listed for months), it would be going from $1.5k to $400, writing off like $1.1k in the process. And there will be plenty on eBay soon, putting even higher downward pressure (both AM4 3900x and 3950x now beat all TR1/2s up to 16 core, sometimes even 24c). The missing low-core parts is another thing that wasn't well thought out in all this, I agree.
As for Intel, they were always upfront about the need to change mobos with almost every new generation (the last few were exceptions); I also never had so many issues with any Intel pro board that I had with ASUS Zenith Extreme, their "flagship" TR mobo that can't even run 2x Titan RTX properly...
AMD never said TR4 was forwards-compatible. They did say that for AM4 & for Epyc SP3.
Hindsight is 20/20 yada yada but the lack of forwards-compatibility promises should be treated as rolling the dice on that.
Will I be hurting myself if I buy a computer with an AMD chip, in that I might end up in a situation where certain programs won't work for me? E.g., if I do fancy 3D modeling (Cinema 4D, fancy renderers), if I do multi-threaded programming (in MATLAB), if I do physical simulations (in COMSOL), etc.?
Other than that, you are safe.
Hopefully this new socket change is for Threadripper CPUs only.
AFAIK they already had their own socket, distinct from the "regular" AM4 socket.
If you're buying it for yourself, it's easy to DIY. I just built a 3900X system with 32GB (you could easily add more); super easy, cost me £800 total for CPU, RAM, and mobo. The only issue was availability of the 3900X, I had to wait for a bit.
I already had:
- old case
- 1080ti gpu
- nvme & ssd drives
- 3900X (comes with a good heatsink+cooler) - £480
- 32GB (2x16GB 3200MHz DDR4, also runs at 3600MHz without issues) Ballistix Sport AES RAM (Micron E-die) - £140
- Asus TUF Gaming X570-Plus mobo - £180
Took about 2 hours to build and configure, plus about 3-4 hours researching what to buy, which you can skip if you buy the same :)
Very happy, it's super duper fast for workloads that use multiple cores.
Though to answer your question, I don't think you can go wrong with https://system76.com/desktops
There are plenty of places to find lists of parts picked by people who just love building computers, so getting a list of parts to order isn't too difficult even without a lot of knowledge into components.
That said, for people who really want to just get a fully assembled computer most Ryzen 9's are going to be sold by boutique builders. It seems like most of the big name OEMs aren't building many units with Ryzen. This is especially true for the highest end parts such as the Ryzen 9's and Threadripper. System 76 is probably the best place to go to get a professional looking machine, otherwise try the smaller gaming rig places such as Cyberpower or Alienware if those kind of aesthetics are acceptable (or wanted, some people need RGB LEDs everywhere :) )
Every socket based CPU I ever saw had a graphical mark or a little cut out on the top of the chip which lined up on the motherboard's socket.
Unless you went out of your way to ignore that marking and jammed it in the odds of bending pins are really really low / close to impossible.
I'd be more concerned about mounting an after market heat sink on the CPU. I don't know how much has changed in the last few years but the amount of force you need to use to lock them down makes you think you're going to snap your motherboard in half.
Unlucky, sure. Dropping things is unlucky, and that is all it takes. People drop things all the time.
Is it any easier to destroy than any other CPU? Just as easy to crash a Lambo as a BMW but people still drive those.
There's a lot more to time and cognitive load to account for than just plugging in RAM sticks.
> More wait time if you need to ship it back and get a new component.
> Then there's all the time up front to research desired parts and find the best prices.
These three downsides are not avoided by purchasing a pre-built, either.
1: The computer has been tested before shipping (at least enough to install an OS on it), so the odds that it's got a DoA part are virtually nil.
2: Sure, but see above: less risk that any components will be broken.
3: You don't have to do nearly as much research - you know that they're shipping you a working configuration, that the motherboard socket fits the CPU and the RAM in the listed configuration. You might still want to do research to try to find the best bargains or performance, but the time is way less.
The advice to someone like me, someone who just needs a new computer once in awhile, that building is the way to go, is not so obviously the best advice.
Then, putting them together takes maybe thirty minutes?
Going prebuilt for a desktop is never worth it.
Getting the cabling in neat order takes that 30 mins alone. So does CPU+cooler installation. I always reserve a whole afternoon+evening for full computer rebuilds. Then again, that happens every 5-8 years so I'm always a bit rusty when starting.
Also, never had a prebuilt so could be they aren't as clean with the cabling of course.
It could be over 30 minutes for sure, but at most an hour or two unless you've never done it and have no idea what the parts do. But just looking at the parts on Amazon or whatever, I see a lot of people figure out where they go in their head.
Anything else and you're looking at at least 2 weeks. Typically it's better to just buy another machine, or buy more parts and then repurpose the fixed one.
The downside is, a fan capable of cooling a 200W CPU is going to be huge; mine only has a few millimeters of clearance, and that's in an EATX case. AIO water-cooling is easier to fit.
I assume anything that's rated for 250W will work fine with these chips. Probably not ideal for overclocking, but if your case has decent airflow you'll be fine.
threadrippers don't require liquid cooling, they have a larger heat spreader and lower peak clocks so heat is more manageable.