I would really like to be more enthusiastic about using something like this for my next build, but all my computers are presently trustable in a way that new platforms with proprietary coprocessors lacking me_cleaner support cannot achieve.
It really sucks to give Intel money - it's not as if they support the me_cleaner project; they're actively antagonistic toward third parties disabling their backdoors - but at some point it stops being a matter of principle and becomes one of practicality. I can disable the unwanted parts of the hardware on one platform and not on the other.
AMD could drop Arm and move to a RISC-V based secure enclave. Google is developing OpenTitan as open hardware based on RISC-V.
Meanwhile, AMD seemed open to that in the past.
Hence why it's a matter of practicality. Neither respects user freedom, but the community has reverse engineered the ability to disable the backdoor of one company and not the other.
I haven't really seen that mentioned much; I wonder why that is. I do love the potential of Zen 2 + 7nm. The 65 W TDP of the 3700X and the high frequencies of the 3900X both suggest interesting potential for the future. The six-core Ryzen 5 parts might end up having higher overclocking headroom.
Then there's of course Navi, the first new GPU core in a long long time.
AMD just aren't selling many CPUs to business desktop system integrators, partly due to dubious tactics by Intel to keep them out of the market. If you sell most of your CPUs to enthusiasts, it just doesn't make sense to squander die area on a crappy iGPU. Gamers obviously want a fast GPU, but so do most creative professionals - Photoshop is heavily GPU accelerated, as is Premiere and Resolve, not to mention essentially all 3D modelling and CAD packages. Scientific computing is also rapidly moving towards the GPU. GPU performance has a surprisingly large impact on day-to-day responsiveness, because all the major browsers use GPU compositing.
The market for fast chips with crappy iGPUs just isn't as big as it used to be, nor is it particularly accessible to AMD. The Athlon and Ryzen APUs make a great deal of sense for the current market, offering a good balance of performance between CPU and GPU. I expect to see 6 and 8 core Ryzen chips with Vega GPU cores as part of the Ryzen 3000 generation, which will further close the gap.
I'm a developer, and I want fast build times. I don't need a dedicated GPU for that.
Right now I'm squandering money and power on a dedicated GPU which is probably idling at 0.0000001% rendering a composited 2D desktop in its sleep.
Still a better deal than Intel and a superior desktop experience.
Implying devs don't use dedicated graphics cards...
Then again, I'm an i3/sway-user, so I guess I don't exactly represent the average (Linux) user.
For AMD, what they've done makes the most sense, as fighting Intel in the mobile space is the toughest market to break into. You can get a cheap GeForce 1050 for $130 or so, with the perk of great OpenGL/DX drivers to keep anything you do end up using it for nice and snappy. I'm in your same boat and use a 2700X and a 1060.
AMD should really have their motherboard vendors add some sort of basic functionality, like the old IGPs.
I also considered going with a USB-DisplayPort adapter, but it was more expensive and I wasn't sure how well it would work.
I have a separate gaming machine, and would have rather just used the integrated for my Linux machine. It doesn't look like AMD is refreshing their APU lineup at all in this release. Did I miss it, or are there no APUs in the list?
The only market that doesn't care about CPU price is the market that doesn't care about price at all, i.e. people building systems with an i9-9900K or the X299 platform plus RTX 2080 Tis.
In the $50 range you can get an R7 350, which promises 4K support (although I suspect that will be at 30Hz) and would certainly be enough to light a monitor with enough oomph to put shadows and transparency effects on your windows.
It supports 4k@60. It looks like everything with a GCN core does, so most 7xxx, most 2xx, and all 3xx.
A Radeon R7 240 from 6 years ago will still set you back $50, but will not give you modern video outputs or video codecs (although it at least still has relatively prime driver support). It's probably even slower than Intel's current integrated graphics too. Might as well go for the upsell then.
The prevalence of internal GPUs unfortunately seems to have killed the market for up-to-date very low end discrete GPUs. The "budget stuff" starts at $80, which is quite steep for something that barely has added value over an integrated GPU.
What? No you don't. They literally have an entire line dedicated to the exact use case you mentioned (CPU w/ iGPU for business use):
The 2400G, which I think is currently the top desktop APU, launched in Feb. 2018 and has four cores.
Regardless, I do expect G parts will be announced soon, too. 3 CPUs definitely isn't a full lineup.
Though the fact that temps are higher as a result, meaning more work for the cooler, is not ideal for a CPU that is otherwise great for an HTPC.
For the vast majority of professional use cases, 8 cores / 16 threads running at 4.5+ GHz boost is going to be more than enough for a while. AMD sure has spoiled the market in just two years by commoditizing 8-core desktop chips; in 2016 the conversation would have been about which flavor of $400+ quad-core or $800+ hexa-core from Intel you were going to get.
Hardly the latest technology but you can plug a monitor into it.
Here we are, 8 years after Apple started shipping "Retina" displays, and PC software hasn't caught up. It's embarrassing.
Edit: Found a few AMD R5 230 cards in the same $30-40 range. Assuming the drivers are also good, that seems like a good option.
Edit2: Researching some more, the R7 240 has a similar price and is probably the first card already supported by the new amdgpu drivers, so it may be a better bet.
NVE0 here: https://nouveau.freedesktop.org/wiki/FeatureMatrix/ Seems like everything but power management is fully supported. I need to do some more digging, but if the incomplete power management just means it doesn't throttle up as much, that's fine for the application.
The RX550 is low power enough to not need a PSU connector but for some reason nobody has made a passively cooled version.
But 64MB of L3 cache? In a consumer CPU, at a price that I would hardly call expensive (I would even go on to call it a bargain). We used to talk about performance enhancements and cache misses; now we have 64MB to mess with, we could have a whole language VM living in cache!
When dual-core processors came out, someone said you could now have one core run your stuff and another run the anti-virus. That was widely joked about. This feels a little close to that: having more CPU cache than we recently had RAM, and it ending up being used for programming language overhead.
Looking back to the 8M total RAM I had on a Mac SE/30, running A/UX Unix and a MacOS GUI comfortably, the sloppiness of modern productions is a disgrace and an embarrassment.
What galls is not the wasteful extravagance. It's the failure of imagination that makes such meager, pitiable use of such extravagance. We make, of titanium airframes and turbojet engines, oxcarts.
It is a trade-off between cross-platform support, time to market, and development resources. And unlike any other scientific and engineering industry, software development doesn't even agree on a few industry standards; instead everything is hyped up every 2 years, something new comes around and becomes a new "standard", and we keep wasting resources reinventing the flat tire.
For the curious reader, I've got two machines: a 3MB-L3 i5 and a 20MB-L3 Xeon. So I'd be looking at roughly 20x and 3x improvements - without taking into account other architectural improvements, like not-underclocked AVX2, and the GHz count.
And just in case that somebody missed it, here is again the link to the PDF "What Every Programmer Should Know About Memory" by Ulrich Drepper...
...which was posted here sometime earlier this year and which talks about every detail of RAM and L1/2/3 cache access times, architectures, etc. Very heavy going for me (I've read about 25% so far) but also very interesting.
(There's also the extra complexity around MiB vs MB for base 2 vs base 10 prefixes. In this case it would actually be MiB as RAM and cache is normally base 2 sized. But not everyone uses that, relying on convention from context instead.)
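(To make that distinction concrete, a quick back-of-the-envelope; the 64MB figure above is almost certainly a binary 64 MiB:

    # 64 MiB (binary prefix) vs 64 MB (decimal prefix)
    mib = 64 * 1024**2   # 67,108,864 bytes
    mb  = 64 * 1000**2   # 64,000,000 bytes
    print(mib, mb, f"{(mib - mb) / mb:.1%}")   # difference is about 4.9%

Small here, but the gap grows with every prefix step: at TB/TiB it's roughly 10%.)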
1. playing some background music
2. running a local database
3. running a local webserver
4. running a browser
5. running an ide
6. running all of that stuff concurrently while testing the backend code
7. doing builds which are multi threaded in near linear speedup fashion in many languages/environments.
I don't know if 12 cores 24 logical is going to make that scenario feel overall better than 4 cores 8 logical, but I do know that 4x8 feels much much better than 2x4 in my own use cases.
#7 alone can be a really, really big win for long compiling projects.
Electron apps do manage to make a mockery of my PC's specs though.
(For some reason, when building projects like LLVM with -j 16, 32 GB without swap may not be enough. With -j 2 it is enough, but it takes an eternity.)
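A crude way around that is to cap the job count by available memory as well as cores. A minimal sketch, assuming Linux (/proc/meminfo), GNU make, and a rough guess of ~2 GB peak per job, which for LLVM's link steps is arguably still optimistic:

    import os, subprocess

    def pick_jobs(gb_per_job=2):
        """Cap parallel build jobs by both core count and available RAM."""
        cores = os.cpu_count() or 1
        with open("/proc/meminfo") as f:
            kib = next(int(l.split()[1]) for l in f if l.startswith("MemAvailable"))
        by_mem = max(1, int(kib / 1024**2 // gb_per_job))   # KiB -> GiB -> job count
        return min(cores, by_mem)

    subprocess.run(["make", f"-j{pick_jobs()}"])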
Things speed up rather linearly up to -j28, but once I get past -j28 (say -j32) it levels off, and -j56 starts being counterproductive.
Same thing with the 160 thread POWER8 machine I have access to. That one runs 8 threads per core, and CPU-limited -jN tops out at around -j20.
All of this is very workflow and CPU specific, but generally speaking don't blindly trust what things like "htop" show you as the number of available CPUs; under the hood many of them aren't "real".
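One way to see the gap between what htop reports and the "real" cores - a Linux/x86-only sketch that counts unique (package, core id) pairs from /proc/cpuinfo:

    import os

    def physical_cores():
        """Count unique (physical package, core id) pairs on Linux."""
        cores, pkg = set(), None
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    pkg = line.split(":")[1].strip()
                elif line.startswith("core id"):
                    cores.add((pkg, line.split(":")[1].strip()))
        return len(cores)

    print(os.cpu_count(), "logical /", physical_cores(), "physical")

On an SMT machine the first number is typically 2x (or 8x on that POWER8 box) the second, which is why -jN stops scaling well before N reaches the logical count.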
When you start two unrelated processes one after the other (such as multiple compile steps) that operate on some of the same in-memory assets (files, things being sent over a pipe etc.) they're not just going to be in the main RAM, but also L1-3 cache, and the RAM itself may be segmented under the hood (even if it's presented to you as one logical address space).
Thus you can benefit from pinning certain groups of tasks to a given CPU/memory space if you know the caches can be re-used without the OS having to transfer the memory to another CPU's purview, or re-populate the relevant caches from RAM.
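A minimal sketch of that kind of pinning on Linux: os.sched_setaffinity restricts the current process, and anything it spawns, to the given CPUs. Which CPU numbers share an L3/CCX depends on your topology, and the make targets here are just placeholders:

    import os, subprocess

    # Pin ourselves (and all child processes) to CPUs 0-3,
    # e.g. one CCX worth of cores on a Zen part.
    os.sched_setaffinity(0, {0, 1, 2, 3})

    # Two compile steps run back to back now stay on the same cores,
    # so whatever survives in L1-L3 between them can actually be reused.
    subprocess.run(["make", "-C", "libfoo"])   # placeholder targets
    subprocess.run(["make", "-C", "app"])

Tools like taskset and numactl do the same thing from the shell if you'd rather not touch the build scripts.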
Examples: multiple VMs, big editors/IDEs, local databases, local k8s clusters, local network simulators - and don't even get me started on AI or big analytics stuff.
Will also be getting a Ryzen if the singlecore benchmarks show it's reasonable. Bunch of games I play at home tend to absolutely trash single core perf.
And then there is the hassle of setting it up for all the possible projects one might work on.
Just some examples: instead of developing an Electron application, think of making changes to Chromium. Instead of developing a Qt application, think of developing Qt itself. Etc., etc.
Most of the time build systems work, including the one in Visual Studio. But I have never encountered a system that always works flawlessly. From time to time you get things like changes not being detected, or something else going haywire, and you have to do a make clean and redo the whole thing.
Another thing is when you're doing profile-guided builds, you have to do a full rebuild after each profiling run.
Also I always connect it to a large monitor + mechanical keyboard both at work and at home for any serious work... so not sure why you mentioned the neck/hand position.
I get the advantage of having nice large monitors, proper clicky keyboards and mice, and the ability to charge/power the laptop - all over one cable. When I want to move away from the desk - unjack and keep going.
Having 12 cores without Intel's hyperthreading issues, and boost to 4.6, is going to rock.
This is going to be a solid 75%+ boost to performance, given I regularly max out my machine's threads. Pretty amazing improvement in 2 years.
Ryzen runs them tight to spec, whereas Intel is more relaxed. Here tight is better, but it also means exposing lies in manufacturers' specs on RAM.
Cheaping out on RAM is never a good idea, but there's always the option of configuring more relaxed timings in the firmware settings if the RAM isn't up to its advertised spec.
It's also possible your CPU is one of the first iteration of 1800Xs, which had the issue that famously caused segfaults when compiling software on Linux. This was only seen in the first months of 1800X production, and AMD offered free replacements to those affected. It's likely a bit late for that, but you're better off upgrading to Ryzen 3xxx anyway.
I backed the memory clocks down from 3200 (which it's supposed to be rated for) to 3000 and it passed with no errors.
This is the issue they're certainly talking about: https://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Se...
And it is assuredly not a problem with any Ryzen processors manufactured after the very first few months.
After applying workarounds, I still see some strange crashes, not sure if at least some of those are still related to the CPU hangs from the bug above. TBF this might not be the CPU's fault. This is all quite annoying to me and time intensive to investigate (where do I even start?). Even though I really like AMD's tech I am quite frustrated and I haven't had these problems with my previous Intel builds so far...
I never experienced it myself but in my BIOS there's an option about "power on idle" that's suggested to not turn off for compatibility (I don't recall the correct words but I could check).
It usually depends on MB manufacturers and BIOS/AGESA versions.
sudo ZenStates-Linux/zenstates.py --c6-disable
Although they may save it for a "Zen2+" or something similar, like they did with 32 core Threadripper
I think the 3900x might be a happy middle ground. I'm guessing we would probably see (with the increased IPC, core count, and core clock) like 70-80% increases over a 2700x in these kinds of multithreaded workloads. So probably slightly more than half way to a 2970x or 2990wx?
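Rough back-of-the-envelope behind that guess - a sketch only, since the ~15% IPC figure is AMD's own claim and the boost clocks won't hold across all cores under load, so treat it as optimistic:

    # 2700X -> 3900X, naive scaling estimate for well-threaded workloads
    cores = 12 / 8        # 1.50x core count
    ipc   = 1.15          # AMD's claimed ~15% IPC uplift for Zen 2
    clock = 4.6 / 4.3     # boost clocks, 3900X vs 2700X (roughly)
    print(f"{cores * ipc * clock:.2f}x")   # ~1.85x, i.e. roughly the 70-85% range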
Tradeoffs: threadripper boards officially support ECC; Ryzen boards are hit or miss. TR boards tend to be priced around $300 whereas you can get a Ryzen board for $100ish. TR had (prior generations) twice the DRAM channels and way more PCIe lanes than Ryzen, so if you're doing GPU-intense work or something else with use for lots of PCIe, that's a plus. Not to mention, additional core count over Ryzen, although with greater inter-die latency. Not sure what that will look like with TR3.
Is 3900X worth $500 at list over $400 3800X at list? Actually, yeah, it looks at least 25% better to me (esp. the doubled L3) if you can use the cores. The 3800X is overpriced; they probably are learning from the 1700<->1800 dynamic in gen1. Is it worth it over the 3700X at $330? Maybe not.
For me the question is really, how long will Ryzen 3000 be on the market before those better IPC/clocks/core densities show up in TR3? PCIe 4.0 support is huge; AMD wasn't anemic on PCIe channels on Zen and Zen+, and PCIe 4.0 doubles bandwidth from 3.0. Hopefully those IPC gains do not come attached to Spectre/Meltdown-like vulnerabilities. I'm excited for Zen 3 TR! That might be worth an upgrade from the 1950X. Meanwhile, it doesn't seem like Intel will get to PCIe 4 until 2020 (although that's reasonably soon).
I think the 3900x is in a great position to provide the best of both gaming and productivity. Extremely aggressively priced at $500 for the horsepower it seems to give you.
I suspect there is going to be a 16 core 3950x later in the year. Maybe with slightly lower single core frequencies. But maybe 20-25% greater multicore performance.
I bet they are delaying that to keep something up their sleeves when Intel responds. And to not totally cannibalize TR prior to releasing TR3.
Do any existing AM4 mobos / chipsets have support for full PCIe 4.0 bandwidth (64Gbps)?
Or will the existing mobos be limited to PCIe 3.0 (~5-6Gbps)?
All of the five processors will be PCIe 4.0 enabled, and while they are being accompanied by the new X570 chipset launch, they still use the same AM4 socket, meaning some AMD 300- and 400-series motherboards can still be used.
Would be awesome, because transfer rates when moving terabytes across SSD RAID arrays would see a 3-10x increase, from ~500-600 MB/s to ~1.5-6 GB/s+. Fantastic!
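For reference, the per-lane math - a sketch of theoretical numbers; real-world NVMe throughput lands a bit below these because of protocol overhead:

    # PCIe per-lane raw rate and usable bandwidth (128b/130b encoding for Gen3/Gen4)
    def lane_gbps(gt_per_s):
        return gt_per_s * 128 / 130        # usable Gbit/s per lane, one direction

    for gen, gt in (("3", 8), ("4", 16)):
        per_lane = lane_gbps(gt)
        print(f"Gen{gen}: {per_lane:.2f} Gb/s/lane, "
              f"x4 ~ {per_lane * 4 / 8:.1f} GB/s, x16 ~ {per_lane * 16 / 8:.1f} GB/s")
    # Gen3: ~0.98 GB/s per lane -> x4 ~ 3.9 GB/s; Gen4 doubles that to ~7.9 GB/s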
AMD now has 256-bit AVX2 units, but unlike Intel, they don't need to downclock due to 7nm TSMC's lower power requirements compared to Intel's 14nm process. This should also affect 128-bit AVX instructions. It should be possible to reorder and push 2 through the pipeline at the same time in a lot of circumstances.
To be clear, the existing chips had two 128-bit vector units, and the new ones have two 256-bit vector units. So that would get you 4 total.
Also each unit, at least on existing chips, is capable of either a single FMA or a completely independent multiply and add at the same time. I don't think Intel chips can do this?
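Back-of-the-envelope for what two 256-bit FMA-capable units mean for peak single-precision throughput - a sketch only; sustained numbers will be lower and the 4.6 GHz boost won't hold on all cores:

    # Peak FP32 FLOPS per core: units * lanes * 2 ops per FMA * clock
    units       = 2            # two 256-bit vector units per core
    lanes       = 256 // 32    # 8 single-precision lanes per unit
    ops_per_fma = 2            # multiply + add counted separately
    clock_ghz   = 4.6
    print(f"~{units * lanes * ops_per_fma * clock_ghz:.0f} GFLOPS/core peak FP32")  # ~147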
I am hopeful for the actual third party benchmarks.
People have been banging their heads against the 'rewrite this software to take advantage of multiple cores' wall for decades. The lack of progress is telling. For a straightforward example, look at the second half of this Factorio update blog: https://www.factorio.com/blog/post/fff-215
Factorio is a sim game that you would think on first consideration would hugely benefit from a multithreaded design. It turns out that doing so is actually slower(!). And although this example is pulled from a game, it is essentially the same story again and again, no matter what the subject.
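A toy illustration of how that happens - a Python sketch, so the mechanism (pickling/IPC overhead) differs from Factorio's C++ engine, where synchronization and cache traffic are the cost, but the shape is the same: when the per-entity work is tiny, coordination overhead swamps any speedup:

    from multiprocessing import Pool
    import time

    def tick(entity):
        return entity + 1                 # trivially small per-entity update

    if __name__ == "__main__":
        entities = list(range(1_000_000))

        t = time.perf_counter()
        serial = [tick(e) for e in entities]
        print("serial:  ", time.perf_counter() - t)

        t = time.perf_counter()
        with Pool(8) as p:                # shipping work to workers costs more than the work
            parallel = p.map(tick, entities)
        print("parallel:", time.perf_counter() - t)

On most machines the "parallel" version loses badly, despite using 8 cores.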
Now consider the second half of your statement- that the main benefit of multi-core processing is that it provides more CPUs, so that if any one gets choked, the general environment continues to operate.
(Which is true, and a great advantage of having a multi-core CPU.)
But consider a little deeper, too. If the first, best defense we have regarding multi-core designs is that they are simply more single-cores to have on hand, what does that say about the relative value of parallel processing vs. single-thread performance? Inherently serial workloads dominate across the board, in every field. The few parallel problems we have, we have because people have put a lot of brain sweat in to figuring out what, exactly, we can even do with all these cores lying around.
Meanwhile, there are entire classes of problems that are simply waiting for better single-thread performance before we can move ahead.
This is a very real problem, and it isn't going away.
The people who would benefit from more cores have the server specific lines of CPUs to choose from, so that makes consumer grade CPUs a compromise between core count and single core performance.
And some games (like some Source-engine titles) crash if you have a high core count.
I certainly benefit from the higher core count because I usually have a VM, 40 browser tabs, Slack, and a bunch of other stuff open at any given time, but my parents would see no benefit with their 5 tabs + iTunes + Word usage.
I have been running 4 players with one of the most taxing modpacks on a mid-tier Digital Ocean VPS with no hitches. Not many players, I guess, but in case you were curious whether you could use a VPS: even when we had multiple excavators sending thousands of entities through sorting pipelines, it was still doing surprisingly well.
It's a vanilla server and I also get around 4 players. The real problem occurs when people are generating new terrain while flying on an elytra, sometimes causing the server to crash altogether. When not exploring, it will frequently report "Can't keep up!" messages even when hanging around spawn, which I think might be due to the truly insane amount of hoppers we have (although haven't seen this as much in the recent update).
If you're curious, the CPU is a i5-3570K @ 3.40GHz. The game is certainly playable, but it struggles under load like I described.
This is why when looking at iMacs, I'd rather get the iMac than the iMac Pro. Multiple cores just aren't as important to me as is single-core performance.
I'm sold, my desktop is most likely going to be a Ryzen (although not the 8/16 monster, come on, it's a desktop, if I need high core count, I have stuff at work for that).
"Medium" is really good.
Not very surprising given the difference in manufacturing process alone, though; since Intel's next desktop generation is still 14nm, that won't be changing soon.
It's a debugger from Mozilla that works by recording and replaying program executions.
That's why there is an entire range of such weaknesses that doesn't affect AMD while it hurts Intel very strongly.
Hear it here.
I guess AM4 also means no real improvements in PCIe lane count. Would love to see real PCIe and IF switches to give a bit of flexibility, and to see what they plan for a new Threadripper.
The new Ryzen 3000 CPUs support PCIe Gen4, so while the number of lanes will remain the same, their bandwidth could be doubled. The just-announced Navi GPUs also support Gen4.
Those dual CCDs mean the 16-core Ryzen could be ready for release (that's what I have my eye on!). It's funny: the jump from 12 to 16 cores is basically adding the whole i7 6700K in my desktop.
Plus, it's great that Intel is being aggressive by releasing the 9900KS (which is a pretty good CPU for gamers). It's been a while since we've seen any real competition between AMD & Intel.
I think the cooler is a forgotten value add over Intel chips, since any chip you buy from them, whether it comes with a stock cooler or not, is going to require an aftermarket cooler in order to get decent temps/noise levels.
NVIDIA redefined TDP to their convenience, to mean something more like "averages". So their numbers can't be compared directly.
Always look at third party measurements. I'm looking forward to Navi's, while on the topic, as they've announced large improvements in power efficiency.
Their dominance and tendency to push for proprietary features seems quite bad for the industry as a whole.
I'm expecting some new, crazy high binned, extremely expensive Intel CPU with sky-high TDP to be announced soon.
Apart from that, I think you meant two-and-a-half, not one?
And their interconnect tech seems like it's going to be of huge importance in the server space - I can only imagine the yields on 8x4 core modules will be far higher than on a monolithic 32 core chip.
The next EPYC's going to be up to 8 chiplets x 8 cores, competing against Intel's current 28-core monolithic die (or their dual-die non-socketed 56-core). How many cores remain enabled on a hypothetical next-generation Threadripper is an open question, but they would probably go beyond 32 cores total. And a 32-core Threadripper would probably not have 8 chiplets but rather four fully-enabled active chiplets and four mechanical spacers.
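The yield intuition, using the standard Poisson defect model - a sketch with illustrative numbers: the ~74 mm² chiplet area is roughly what's been reported for Zen 2, and the defect density is purely made up:

    import math

    def yield_rate(area_mm2, defects_per_cm2=0.5):
        """Poisson yield model: P(no defect on die) = exp(-D * A)."""
        return math.exp(-defects_per_cm2 * area_mm2 / 100)

    chiplet    = yield_rate(74)       # ~69% of small dies come out clean
    monolithic = yield_rate(8 * 74)   # ~5% for an equivalent die with 8x the area
    print(f"chiplet: {chiplet:.0%}, monolithic: {monolithic:.0%}")

And that's before binning: a chiplet with one dead core can still ship in a 6-core part, while a monolithic die eats the whole defect.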
To my knowledge, those $40 silent air coolers perform well but still can't match a big liquid radiator.
The best air coolers can match AIOs. Specifically Noctua's air coolers can go toe to toe with most commercial AIOs, with one fewer point of failure (pump). At least according to Gamer's Nexus and similar sites.
The biggest arguments for AIOs are:
- Space around the CPU block (air coolers in this league are HUGE)
- Improved small form factor build flexibility
In terms of raw performance, only custom loops really challenge high end air cooling.
With air cooling you typically save some money (even a high end Noctua is often 40% or more cheaper than a branded AIO). No leak issues. Fewer points of failure.
I recently purchased an NH-U12A and can confirm it is excellent.
It makes researching and comparing a bunch of brands and parts as well as waiting for the best times to buy a breeze.
Thanks for the recommendation, though; it really is an awesome resource.