There are already reports of Intel CPUs getting price cuts, so this looks good for now at least.
The multicore advantage for gaming is starting to materialize with Vulkan, Mantle, etc.
A quote from the linked article:
> To summarize:
> DirectX 11: Your CPU communicates to the GPU 1 core to 1 core at a time. It is still a big boost over DirectX 9, where only 1 dedicated thread was allowed to talk to the GPU, but it's still only scratching the surface.
> DirectX 12: Every core can talk to the GPU at the same time and, depending on the driver, I could theoretically start taking control and talking to all those cores.
> That's basically the difference. Oversimplified to be sure, but it's why everyone is so excited about this.
Nvidia actually implements Vulkan on top of their OpenGL driver, but Nvidia's drivers were already relatively well optimized in terms of CPU usage. AMD is the bigger winner here.
Vulkan fixes this big time, by allowing apps to construct GPU workloads for a single GPU in parallel. Only the final submission step (which is supposed to be very low overhead if the driver design is decent) is single-threaded per GPU context. And even for that Vulkan is better: It allows you to allocate different contexts for separate engines (e.g. rendering vs. compute vs. copy engine for data up/download to/from the GPU vram).
The lower CPU overhead is just the icing on the cake, the real deal is that Vulkan fixed the threading/locking model.
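To make that threading model concrete, here's a minimal sketch of the pattern in Python. record_commands() and submit() are hypothetical placeholders standing in for a real Vulkan binding (vkBeginCommandBuffer/vkEndCommandBuffer and vkQueueSubmit), just to show the shape of the parallelism:

    # Sketch of the Vulkan threading model described above. record_commands()
    # and submit() are hypothetical stand-ins, not a real Vulkan binding.
    from concurrent.futures import ThreadPoolExecutor

    def record_commands(scene_chunk):
        # CPU-heavy work; in real Vulkan each thread records into its own
        # VkCommandPool, so no locking is needed during this step.
        return f"command-buffer({scene_chunk})"

    def submit(queue, command_buffers):
        # The only single-threaded step per queue: one cheap submission call.
        print(f"submitting {len(command_buffers)} buffers to {queue}")

    scene_chunks = ["terrain", "characters", "particles", "ui"]

    # Construct all GPU workloads in parallel across cores...
    with ThreadPoolExecutor() as pool:
        buffers = list(pool.map(record_commands, scene_chunks))

    # ...then submit once from a single thread (low overhead if the driver is decent).
    submit("graphics-queue", buffers)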
Competitive on price, sure. But it never held the performance crown. AMD hasn't held that since the Athlon 64 era.
The first-gen Phenoms were outpaced by the Core 2 lineup, and the second-gen Phenoms that became quite popular were handily beaten by the first-gen i7s. The Phenom II was a great bang-for-the-buck chip though, no doubt.
I've owned all the relevant players here: the Phenom 9500, Core 2 Quad Q6600, Phenom II 955BE, and a Core i7-920 (which is still in my main work machine that I'm typing this from right now).
The Q6600 and 955 were comparable in performance (955 much better at stock, slightly better when both chips were OC'd), but the i7-920 was out by the 955's release, essentially leaving AMD a generation behind in performance. The 9500 was a dog, well known for an erratum whose microcode fix hampered performance, while the i7 was leaps and bounds ahead of anything else at the time, and is still pretty usable today. My little brother has the 955 and it's showing its age in games (KotK being the worst offender).
Even 20% higher memory bandwidth or computational performance per socket matters for the time to solution, since adding more nodes won't reliably make up the difference due to communication overhead. So does performance per watt (cooling costs).
Furthermore, the biggest news for me regarding Ryzen is what they're doing with APUs. Affordable APUs with unified memory supporting HBM (stacked memory) could be a game changer. HBM means an order of magnitude higher memory bandwidth than what we're used to from CPUs, matching what the high-end Nvidia Pascal cards are currently offering. Coupling this to a capable multicore x86 CPU with fat cores (as opposed to Knights Landing's larger number of slow cores) could mean tremendous speedups without any programming work for bandwidth-bound applications - which includes pretty much any stencil application, e.g. atmospheric models, ocean models, earthquake prediction, etc. By 'tremendous' I mean 5-10x per socket, which is a big deal in these fields.
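To see why stencil codes are bandwidth-bound, consider a 5-point stencil sweep (a rough numpy sketch; the grid size is chosen arbitrarily to exceed any cache):

    # A 5-point stencil does ~4 flops per grid point but streams ~5 reads and
    # 1 write of 8-byte doubles, i.e. well under 1 flop/byte. Runtime is then
    # roughly (bytes moved / memory bandwidth), which is exactly what HBM multiplies.
    import numpy as np

    def five_point_sweep(u):
        out = u.copy()  # keep boundary values
        out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        return out

    u = np.random.rand(4096, 4096)  # ~134 MB of doubles, far larger than any cache
    u = five_point_sweep(u)         # throughput set by memory bandwidth, not flops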
Yes that's a fact, and if you're trying to evaluate a solution based on performance/$ then the comparison is made with regards to this factor, and not others.
> In practice, the cost efficiency of the system should be seen as 1 / (time to solution for typical applications) / (cost of ownership per year).
If you believe that criterion is relevant then you should know that Ryzen's advertised TDP is around 95W while some Opteron 61XX processors have a TDP of 85W.
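For what it's worth, TDP feeds straight into the cost-of-ownership term of the quoted metric. Plugging in made-up placeholder numbers, just to show the shape:

    # Cost efficiency = 1 / (time to solution) / (cost of ownership per year),
    # per the quote above. All figures below are invented for illustration.
    def cost_efficiency(time_to_solution_h, cost_per_year):
        return 1.0 / time_to_solution_h / cost_per_year

    fast_but_pricey = cost_efficiency(time_to_solution_h=10, cost_per_year=5000)
    slow_but_cheap = cost_efficiency(time_to_solution_h=18, cost_per_year=2500)
    print(fast_but_pricey, slow_but_cheap)  # higher is better under this metric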
> Furthermore, the biggest news for me regarding Ryzen is what they're doing with APUs. Affordable APUs with unified memory supporting HBM (stacked memory) could be a game changer.
For $40 you can buy 4 Opterons with 12 cores each. This means that for less than $1,000 you can put together a 48-core system that supports up to 512 GB of RAM. For around $500, anyone can put together a 24-core system that also supports a couple of GPUs.
Anyone that cares about HPC on a budget knows quite well that used Opterons are where the optimal price/performance ratio can be found, particularly as Opterons already outperform Xeons in BLAS-based work.
I'm not an AMD hater (actually own AMD stock), just cautious.
As a result it does look pro-AMD, because there is significantly more hype (the infamous hype train) in the AMD community (just compare /r/amd and /r/nvidia on Reddit). I can't say whether it's something that emerged organically in those communities and was embraced by corporate, or the other way round.
If Ryzen is anywhere near as good as the benchmarks suggest, then Intel have got a lot of work to do. They'll need massive performance increases, huge price cuts or both to remain relevant. That's good news for everyone except Intel.
Right now the hot topic is this gem:
Any insight for that? Why do you think AMD is good investment (or speculation).
I absolutely hate how they cap maximum RAM on consumer machines, so that you need to pony up for a Xeon workstation if you need more than 64 GB.
Less sarcastically, what do people need >64 gigs for on a consumer desktop? I'm a dev and work on a pretty memory-intensive service and don't need 128 gigs. Holding an entire dual-layer Blu-ray in memory would only require 50 gigs.
I use mine as a terminal server for some super cheap Raspberry Pi machines that are basically just monitors running VNC software.
For good performance you need about 4 to 8GB per user. I personally don't have enough users to need 64GB, but if you have a bunch of kids, terminals are much cheaper than a real computer for each user, plus you only need to keep one computer updated.
You could easily afford a computer in each room that way.
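Rough sizing arithmetic for a setup like that (the user count and OS overhead below are assumptions):

    # 4-8 GB per user, per the estimate above, plus some base overhead.
    users = 8
    gb_per_user = 6     # middle of the 4-8 GB range
    base_gb = 8         # assumed OS/services overhead
    print(users * gb_per_user + base_gb)  # 56 -> a 64 GB box comfortably serves ~8 users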
VMs are my biggest memory hog, particularly when they are running lots and lots of separate Java processes.
It's usually foolish to say that we don't have need for more X in computing because nothing today warrants more than X; the thing is, if we have more of X, we find uses for it. Most of what we do can trade space for time, or incorporate things in cache hierarchies; and sometimes quantitative change adds up to a qualitative change because new architectural approaches become possible.
If I could have 1TB in my laptop, it would change the way we at my company develop software. In fact we'd rewrite our product to take advantage of that much memory, the case would be overwhelming; we'd introduce a new tier in our architecture for keeping a copy of the customer's working set (typically several GB) in memory. As it is, we rely on the database and its caching; it's OK, but it can't support all the operators we'd like to support.
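A minimal sketch of what such a tier could look like (Database and load_working_set are hypothetical names for illustration, not our actual API):

    # Keep each customer's working set resident in RAM; hit the database only
    # on first touch. Eviction/budgeting omitted for brevity.
    class WorkingSetTier:
        def __init__(self, db):
            self.db = db
            self.resident = {}                    # customer_id -> working set

        def get(self, customer_id):
            if customer_id not in self.resident:  # slow path: hydrate from the DB
                self.resident[customer_id] = self.db.load_working_set(customer_id)
            return self.resident[customer_id]     # fast path: memory-speed access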
You can get commercial software that is a bit better tuned for running on "ordinary" desktop hardware, but it's so expensive that you don't really come out ahead on total cost.
The clocks are a little lower than the retail Xeons, and you do need to pay attention to compatible boards/BIOS versions, but you can get a big Xeon E5 for like $200-300.
If you follow this very thread, you'll notice that another user already complained that "you need to pony up for a Xeon workstation if you need more than 64 GB."
I was pointing out that there were other processors already on the market that met his requirements: Xeon engineering samples, which most people don't think about when they are talking about the costs of high-end Xeon workstations.
(This is partially because they are not "finished" products like retail chips. Clocks are lower, and you need to use motherboards/BIOSs from a specific compatibility list, and they are not something you can source from Intel. Thus, they slip many people's minds because they aren't something you would consider for an actual server deployment - but they are perfect for a workstation with specialty needs that needs to hit a tight budget)
So yes, I did actually read the thread. You just didn't grasp enough of the context to make a sensible response and figured it was the perfect time for a superslam "did you even read bro?" gotcha response.
Typically this sort of response is not welcome on HN. If you don't have something substantive to post, please don't post at all.
(Tangentially: I wouldn't bother with ECC RAM for this application unless that were required by the rest of the hardware. Memory errors may slightly affect time to a converged solution, but it's rare that a flipped bit would actually cause a job to converge to a different solution.)
Disclosure: I work on Google Cloud.
The reason I don't use cloud computing is that you constantly have to worry about moving data around, turning stuff on and off, API changes, data-movement costs, product-lineup changes (should I store this stuff on S3 or a local HD?), and recompiling everything locally or containerizing it, with the associated time overhead.
A large local box makes way more sense if you're doing solo projects. And if traveling, it would make way more sense to rent a VPS from Hetzner. Flat cost structure and tons of bang for the buck.
BTW, you can get 128 GB LGA2011-v3 motherboards.
and it seems like it works:
even though the spec says max 64 GB:
Absolutely no one that owns one of these professional cameras uses them with non-Xeon CPUs.
Really the only use for greater than 64 GB for non-Xeon CPUs would be student animation or machine-learning projects.
Which brings up a slightly unrelated note: people seem to be blissfully unaware that you need under 10 Mpix for a 12x9" 300 dpi print, or that the zoom lens they bought with the camera has sharpness that effectively limits the resolution to 5-10, maybe 12 Mpix if they are lucky. But megapixel count is easy to sell -> race for more megapixels -> smaller physical pixels -> more signal amplification needed -> more noise / generally lower photo quality.
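The arithmetic is simple enough to check:

    # Megapixels needed for a print: (width * dpi) * (height * dpi).
    def print_megapixels(width_in, height_in, dpi=300):
        return (width_in * dpi) * (height_in * dpi) / 1e6

    print(print_megapixels(12, 9))  # ~9.7 Mpix for a 12x9" print at 300 dpi
    print(print_megapixels(6, 4))   # ~2.2 Mpix for a standard 6x4" snapshot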
64 GB is completely unnecessary for photography. Photographers don't even go over 16 GB.
You don't buy/rent a camera you can't afford to process images from just like you don't buy a car you can't afford to service and you don't buy a house you can't afford taxes for. Or if you do, I have little to no sympathy.
The computers often are owned by the company renting the gear, where the end customer pays for the camera rental.
So it's not so simple. And one thing - go easy on using the word idiot... I work with some very smart people who fit your description of idiot on a regular basis.
If it's really true, then your pricing is clearly wrong.
All of your hypothetical examples are also for very niche high-resource usage professionals. It's absurd that these cases need extremely high memory support from consumer processors. Intel is not obligated to subsidize anyone, especially not people who can afford to pay for high end systems.
1. Are they actually doing that? I'll be honest, I don't know for sure, but I do know that more capacity rarely comes for free. I assume that supporting 256GB of memory efficiently relative to 64GB requires either a faster clock speed or some more pins. Is Intel "gimping" the consumer product or simply not building in the extra capacity?
2. Is it necessarily a bad thing if they are? If consumer demand peaks at, say, 32GB and professional demand reaches, say, 512GB, Intel could develop two entirely different architectures, which seems wasteful and more costly to everyone. They could ship just one chip with professional capacity supported, which drives up consumer costs effectively subsidizing professionals (because professionals are no longer paying the premium for the "pro" chip; they're just buying the consumer one). Or they could ship a consumer chip that doesn't support professional needs and ship a professional chip at a price that makes pros pay for the capacity Intel had to engineer for them.
The last option seems like a good option for everyone except the people who think everyone else should pay a premium for unnecessary pro support so that a few people can get cheaper pro chips.
AFAICT nobody on this sub-thread is claiming that consumer desktops should cater to our niche use cases. We're just offering examples of application domains where a high RAM-to-CPU ratio is useful.
Additionally, as for your original question: For hobby or even 'normal' professional photography, I'd guess none. But rigs like the ones used for Gmaps Street View, the recent CNN gigapixel, etc. would probably have the capability to approach that size.
And yes, I also want to know what turns a 500MB photo into >16GB in memory. That's 32 full-res copies of the photo in memory. Just "a couple layers and a couple undo steps", really?
Layer size: Layers add more data on top of this: alpha channel (8-16 bits/pixel), clipping mask and maybe half a dozen other things, easily bumping the whole thing from 6 to 9 or 10 bytes per pixel.
Layer count: It's fairly common to have multiple layers that are copies of the original photo, with different processing applied, blending them into the final one. I'm very much an amateur, and when I'm serious about just making a photo look good (not even creating something new) I end up with around 10 layers.
Undo step memory: a lot of work in the photo processing workflow is global: color correction, brightness, contrast, layer blending settings and modes, filters (including sharpening or blur) apply to every pixel of a layer. Each confirmed change (by releasing mouse button / lifting stylus / confirming dialog) is likely to have to store an undo step for entire layer.
Of course you can just persist some of that to disk, but if a single layer/undo step can be 800MB this will hurt productivity - only very recently did we get drives that can write this much fast enough, which is why just a couple of years ago, when having enough RAM was not really an option, a lot of pro photographers had 10 or 15k RPM HDDs running in RAID 0 in their workstations.
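Putting those numbers together for an 80 Mpix source (the layer and undo counts below are the illustrative figures from this comment, not measurements):

    MPIX = 80e6
    BYTES_PER_PIXEL = 9   # 48-bit RGB plus alpha/masks etc., per the estimate above
    LAYERS = 10           # a typical serious edit
    UNDO_STEPS = 15       # each global operation can snapshot a whole layer

    layer_bytes = MPIX * BYTES_PER_PIXEL
    working_set = layer_bytes * (LAYERS + UNDO_STEPS)
    print(f"{layer_bytes / 2**30:.2f} GiB per layer")    # ~0.67 GiB
    print(f"{working_set / 2**30:.1f} GiB working set")  # ~16.8 GiB, before scratch space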
For the layers/undo steps, the "couple" of each was not my phrase. It was yours. If by "a couple" you actually meant dozens, then sure, maybe an 80 mpix photo really needs that much memory to process.
Presumably that's 16 bits for each channel of Red, Green, Blue. Since every pixel is composed of the three RGB components, that's how you get 48 bits for every pixel (and internally, Photoshop refers to 16 bits per color channel as 'RGB48Mode').
Do you really need this much headroom, e.g. for producing music professionally? (If so, why are you arguing over the $50-100 extra for a Xeon?)
Because for listening, it's a proven fact that even professional musicians and mixing engineers using equipment costing >$100k can't tell 24-bit from 16-bit in a proper double-blind A/B test. Presumably you use a high sample rate as well (maybe 192 kHz), which is equally useless. Drop both of those, and you're within 64GB easily with no noticeable SQ loss.
It's the same rationale for doing calculations in full 80-bit or 64-bit FP even if you don't need the full bits of precision -- if you start your calculations at your target precision, your result will have insufficient precision due to truncation, roundoff, overflow, underflow, clipping, etc. in intermediate calculations.
In theory, if you knew what your editing pipeline was, you could mathematically derive the starting headroom required to end up with acceptable error, but in practice that's probably a very risky proposition because 1) you don't know every intermediate calculation going on in every piece of software and 2) errors grow in highly non-linear ways, so even a small change in your pipeline might cause a large change in required headroom.
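A toy demonstration of the headroom argument (float16 standing in for "target precision", float64 for the wide intermediate format):

    import numpy as np

    x = np.full(100_000, 0.1, dtype=np.float16)   # data already at target precision

    naive = np.float16(0.0)
    for v in x:                        # accumulate *in* float16: each step rounds
        naive = np.float16(naive + v)

    wide = x.astype(np.float64).sum()  # accumulate with headroom, round at the end

    print(naive)  # ~256.0 -- the running sum stalls once each step falls below 1/2 ULP
    print(wide)   # ~9997.6 -- correct up to the error of storing 0.1 in float16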
Also, you don't cause errors when you compress and decompress using a lossless compression format (as the name sort of implies).
That's not how it works. You can totally hear clipping or noise even as an amateur. Just as an amateur photographer first learns to look out for overlit skies and highlights that aren't really recoverable at all unless they perhaps have access to the camera's RAW files. But unlike in photography, in sound work you typically need to process as well as sum multiple recordings, all while not blowing out the peaks that might just happen to line up.
If anything, dealing with the large dynamic range of 24-bit is easier for the amateur. An experienced pro would probably have a good bunch of tricks in his bag should he really have to mix music in 16 bits, enough to produce something that doesn't sound horrible. The amateur would be more likely to struggle.
 - https://en.wikipedia.org/wiki/Fixed-point_arithmetic
In practice, whether fixed-point or floating-point is better depends on the operations you plan to use and on whether you have a good idea ahead of time that you'll be able to stay within a fixed range.
The loss of 'footroom bits' corresponds to the extra resolution lost whenever the FP exponent increases - http://www.tvtechnology.com/audio/0014/fixedpoint-vs-floatin...
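A minimal fixed-point sketch (Q16.16 format, chosen arbitrarily) showing the uniform-resolution property:

    # Values are integers scaled by 2**16, so the step size is the same across
    # the whole range -- unlike floating point, where the step grows with the exponent.
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def to_fix(x):
        return round(x * ONE)

    def from_fix(f):
        return f / ONE

    def fix_mul(a, b):
        # Q16.16 * Q16.16 gives Q32.32; shift back down, truncating the low bits.
        return (a * b) >> FRAC_BITS

    a, b = to_fix(1.5), to_fix(0.0625)
    print(from_fix(a + b))          # 1.5625 -- addition is exact, no rounding at all
    print(from_fix(fix_mul(a, b)))  # 0.09375 -- exact here; in general the low bits truncate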
For mixed and mastered music, the ~120dB perceived dynamic range of properly dithered 16-bit audio is more than sufficient. If you're listening at sufficient volume for the dither noise floor to be higher than the ambient noise floor of the room, you'll deafen yourself in minutes.
For production use, the extra dynamic range of 24 bit recording is invaluable. You're dealing with unpredictable signal levels and often applying large amounts of gain, so you'll run into the limits of 16-bit recording fairly often. In a professional context, noise is unacceptable and clipping is completely disastrous. Most DAW software mixes signals at 64 bits - the computational overhead is marginal, it minimises rounding errors and it frees the operator from having to worry about gain staging.
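The rule of thumb behind those figures, for anyone who wants to check (before dither/noise shaping, which can raise the perceived number):

    # Dynamic range of an ideal N-bit quantizer: ~6.02*N + 1.76 dB.
    def dynamic_range_db(bits):
        return 6.02 * bits + 1.76

    print(dynamic_range_db(16))  # ~98 dB  -- tight when applying large recording gain
    print(dynamic_range_db(24))  # ~146 dB -- headroom for unpredictable signal levels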
You can probably get away with 16 bit recordings for a sample library, but it's completely unnecessary. Modern sampling engines use direct-from-disk streaming, so only a tiny fraction of each sample is stored in RAM to account for disk latency. The software used by OP (Hauptwerk) is inexplicably inefficient, because it loads all samples into RAM when an instrument is loaded.
Digital samples can and will generate intermodulation distortion, quantization noise and other audio artifacts when mixed. Using 24-bit lessens or eliminates the effect versus 16-bit.
Experiments and results on this software are here: http://www.sonusparadisi.cz/en/blog/do-not-load-in-16-bit/
While people here like to quote that infamous looks-like-science-but-isn't Xiph video, the reality is that a lot of professional engineers absolutely can hear the difference.
If you have good studio monitors and you know what to listen for the difference between a 24-bit master and a downsampled and dithered 16-bit CD master is very obvious indeed, and there are peer reviewed papers that explain why.
Dither was developed precisely because naively interpolated or truncated 16-bit audio is audibly inferior to a 24-bit source.
Many people certainly can hear the effect of dither on half-decent hardware, even though it's usually only applied to the LSB of 16-bit data. From a naive understanding of audio a single bit difference should be completely inaudible, because it's really, really quiet.
From a more informed view it isn't hard to hear at all - and there are good technical reasons for that.
For synthesis, you don't want dither. You want as much bit depth as possible so you can choose between dynamic range and low-level detail. So 24-bit data is the industry standard for orchestral samples.
Dither was developed because it's technically better. Audibly inferior though? Perhaps, with utterly exceptional source material (e.g. test tones) and when the output gain is high enough for peaks levels to be uncomfortably loud.
In reality, many recording studios have a higher ambient noise level than the dither, making it redundant in practice — the lower bits are already noise, so audible quantisation artefacts weren't going to happen anyway. That said, dithering is pretty much never worse than not dithering, and almost all tools offer it, so everyone does it.
24 bits is important because it gives the recording engineer ample headroom, and it gives the mixing and mastering engineers confidence that every last significant bit caught by the microphone will survive numerous transformation stages intact. Once the final master is decimated to 16 bits per sample, you know that your 16 bits will be as good as they could have been.
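If you want to see what dither buys you, here's a quick numpy experiment; 4 bits is used purely to exaggerate the effect:

    import numpy as np

    fs, f, bits = 48_000, 1_000, 4
    t = np.arange(fs) / fs
    x = 0.05 * np.sin(2 * np.pi * f * t)    # a sine quieter than 1 LSB at 4 bits

    step = 2.0 / 2**bits                    # quantizer step (full scale = +/-1)
    plain = np.round(x / step) * step       # undithered quantization
    tpdf = (np.random.uniform(-0.5, 0.5, fs) +
            np.random.uniform(-0.5, 0.5, fs)) * step   # triangular (TPDF) dither
    dithered = np.round((x + tpdf) / step) * step

    print(np.abs(plain).max())                       # 0.0 -- the quiet sine is truncated away
    print(round(np.corrcoef(x, dithered)[0, 1], 2))  # ~0.5 -- signal survives, buried in benign noise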
Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.
24-bit audio is used because you want as clean a source as possible.
This also applies to mastering for output at the other end. With the exception of CD and MP3, most audio is delivered as 24-bit WAV at either 44.1, 48, 88, or 96k.
Even vinyl is usually cut from a 24-bit master. Here's a nice overview of what mastering engineers deliver in practice:
Dither is noise. Well chosen noise, very quiet noise, but noise nonetheless. Whether the signal is noisy for one reason or another, the consequences at the point of decimation/quantization are the same. Either way the least significant bits are filled with stochastic values and the desired signal isn't plagued with quantization noise artifacts.
> Have you actually heard the difference between simple dither, noise-shaped dither, and non-dithered 16-bit audio? A test tone is the worst possible way to hear what they do.
...is the sort of thing someone who hasn't done a blinded listening test would say. Stop assuming the commercially successful "experts" are also technical experts, because few are. I doubt more than a tiny fraction could describe what a least significant digit is.
(Cutting vinyl, a hilariously lossy process that requires compression and EQ to avoid the needle jumping the groove, doesn't need a 24 bit master. Barely needs a 14 bit master. But since an extra hundred megabytes of really accurate noise floor doesn't make anything worse, they do it anyway.)
Their words mean very little.
I totally get going for 24 bit over 16 for the noise floor, though.
Such audible differences constitute a testable claim. So far, the number of claims that have withstood reasonable test conditions is zero.
Nobody is arguing that 16 bits is enough during the recording and mixing process. If they are claiming that 16 bits is insufficient for consumer distributed music, they need to go back to remedial science class.
I'm all in favor of higher-resolution samples, but raising the price for everyone by 10-20% so that less than 1% can take advantage of it is a bit ridiculous.
It can be quickly done on a consumer workstation, but you do need 128 GB in many scenarios.
The complaint is not about adding the capability to all chips. It's about smoothing out the step function between consumer machines and professional workstations.
Which is what confuses people. I am actually not sure what that max memory refers to. Possibly max per module?
I see an i7 6950X should also support 128GB:
I wish it had ECC, but you can't have everything.
So far, none of the modern ones are confirmed to support ECC.
Looks like AMD has made a real performance breakthrough here. When is general availability expected?
If Intel slash prices it will presumably be at the bulk "tray" level and reflected on their ARK website.
Not saying this is predictive of an Intel price cut, but I do think that MicroCenter knows its audience.
AMD Ryzen™ Processors
4 x DIMM, Max. 64GB, DDR4 3200(O.C.)/2933(O.C.)/2666/2400/2133 MHz Non-ECC, Un-buffered Memory
I'm guessing the demand isn't there and won't be there unless some company can market it as some new gimmick with more mainstream appeal.
AMD Ryzen™ Processors
4 x DIMM, Max. 64GB, DDR4 3200(O.C.)/2933(O.C.)/2666/2400/2133 MHz Non-ECC, Un-buffered Memory *
AMD 7th Generation A-series/Athlon™ Processors
4 x DIMM, Max. 64GB, DDR4 2400/2133 MHz Non-ECC, Un-buffered Memory *
Dual Channel Memory Architecture
Refer to www.asus.com for the Memory QVL (Qualified Vendors Lists).
AFAIK, Ryzen won't support ECC, sadly.
I'm not holding my breath for ECC support. My thinking is that if it could run ECC RAM with the actual corrections enabled, they'd have talked about it as a selling point. I'd sure like it with the 32GB I have going into this build, but I just went with "G.SKILL F4-3000C15D-32GTZ TridentZ Series 32GB (2 x 16GB) 288-Pin DDR4 SDRAM DDR4 3000 (PC4 24000)" for now, with the idea that I'll get a second of the same kit in a couple of years when it's half the price of today.
Even if you can run with ECC RAM, I doubt you can enable the ECC functionality, it will just be running in non-ECC mode.
No, they don't.
> We did ask about a potential single socket Ryzen/ Zen part with ECC memory support and were told that AMD was not announcing such a product at this time alongside the Ryzen/ Zen launch.
If you were on the verge of getting a 7700k or something, it looks like now is the time to buy (or give Ryzen a shot!).
Personally I'm willing to pay a bit more to give Ryzen a chance, because hopefully it will lead to sustained competition in this space again. A $500 8-core CPU is more than I need for gaming right now, but given that my 2500k has been great for 6 years, I don't have a problem spending $2k on a PC every 4-6 years for really great performance.
Those are Microcenter's normal prices - they have always sold Intel processors significantly under normal pricing as a loss-leader (not sure if it's an actual loss). They will also knock another $30 off if you buy a motherboard at the same time.
For example, they have been running 6700Ks for $260-280 since before Christmas. They sold Kaby Lake 7700Ks for $330 once those launched, now they're down to $300. This is just the normal Microcenter pricing life-cycle.
But, that's the kind of "journalism" you come to expect from WCCFTech. They will literally publish any random garbage someone writes up.
Namely: do these processors support ECC, and what virtualisation capabilities do they have (for KVM with full GPU access)?
But lack of ECC support is a problem if you want to use virtualization for security, as Rowhammer can be used for attacking other VMs and ECC is often the last line of defense against Rowhammer.
Those 4 cores are stronger than the first 4 cores of the 1700X or the 1800X. If you are a gamer, then most if not all of your games will use 4 or fewer cores. Why pay more money for worse gaming performance?
- One retailer's sale price on one chip is a poor way to judge the relative value of 2 entire lines of products.
- The Intel i7-7700K is normally north of $350, which is what a lot of retailers are selling it for. There is no reason to suppose that one retailer's temporary sale right now ought to be compared to the projected sale price of AMD chips.
- This price is without cooler/paste; the AMD 1700 includes both for $329, so most would end up paying almost $400 for Intel, $70 more if you include the next item.
- Historically, Intel motherboards have been more expensive and have been worthless as far as upgrading even to the very next generation of CPU. AMD motherboards are often upgradable.
- It would be more reasonable to compare it to the 1700, which uses 2/3 the power to run twice the cores, with substantially better multi-core performance, as much as 40% better in fact.
While single-core performance isn't quite as good, if you followed PC gaming you would know that either chip is going to be more than enough to keep up with any mid-range and most high-end GPU setups you could throw at it.
In summary, going Intel on your next mid-range to lower-high-end gaming rig will probably cost you slightly more, use a lot more power, give you substantially worse multi-core performance, and no better gaming performance.
BTW: the 65 W TDP Ryzen R7 1700, at $329, is matched against the Intel i7-7700K and has double the cores. Yes, it will be slower in single-threaded applications like ARMA 3 (having only Broadwell-level IPC and being clocked lower), but even at stock you will have better performance in modern games (where the GPU is more important anyway).
“With our overclocked 1800X sample cooled by the Noctua unit AMD provided in the reviewer’s kit we managed to surpass the 7700K in single threaded performance and the temperatures were great."
The 7700k might be a reasonable value choice for gaming today, but I have little doubt that the extra four cores of the 1800X will come into play quite soon.
Not so long ago, the same argument applied to dual vs quad core processors. Game developers didn't start optimising for four cores until there were a significant number of users with quad core processors.
We're also yet to see the performance of the hex-core and quad-core Ryzen parts. At $199, the 4c/8t 1400X could demolish the 7700k.
AMD mentions some AI technology to improve performance. If one runs the same software many times, will the performance change? It could be good if it learns and improves performance, but results might not be reproducible. Is it like the Pentium 4, whose long pipelines ideally resulted in better performance but meant costlier misses?
Good that AMD has something up its sleeve to compete with Intel again.
While the performance of Ryzen is pretty good, the overall power consumption and efficiency (idle, semi-load and load) is still nowhere near Intel's. They won't give up 2-3 hours of battery life on MacBooks for an AMD CPU.
The Raven Ridge chips, which are meant for laptops, would have the added advantage of removing the need for a discrete GPU in the 15" MacBook Pro. Switching to AMD seems unlikely but I'd be interested if they did.
So let's wait for real benchmarks (on March 2) before saying Intel is better in power consumption. But keep the following in mind: https://news.ycombinator.com/item?id=13736809