No. I do not want a web page to be able to eat up more than one core. Frankly, I'd like to try a browser extension that locks any web page to at most 20% of one core's performance.
Sure, it's tempting to blame "lazy programmers" and bloated frameworks, but bad engineering, tight schedules and tech debt have always existed, on all platforms, all the time. There's just something fundamentally broken about apps built on asynchronously loading dozens of UI components from a remote location and assembling them into a coherent UI screen as the pieces arrive.
An in-browser photo editor or 3D modeling software would be expected to run much better and apply complex effects or render models much faster on high-end hardware. It sort of works this way with WebGL content, but even there you're limited to feeding the render pipeline with a single thread.
I want the web as a platform to succeed. It is the only truly universal runtime, and with some care it could offer very few performance trade-offs. I sincerely hope tools like Servo and others unlock this potential.
But looking from the other end, somehow they definitely do. And we're probably wasting trillions on this problem. Wasted time, wasted electricity, wasted batteries, wasted computers and phones.
> An in-browser photo editor or 3D modeling software would be expected to run much better and apply complex effects or render models much faster on high-end hardware.
Isn't that kind of a tautology?
The DOM and other elements of browsers also have single-threaded assumptions or behaviors.
More multi-threaded capability is a first step, and then we'll need to take advantage of that capability.
Contrary to what the sibling comment says, Ryzen is nowhere near Intel's performance here. Even in very parallel tasks like x264 video encoding, the 7800X (6 cores) beats the highest-clocked 8C Ryzens by almost 6%. Comparing apples to apples, the 7820X (8C) is 33% faster than the 1800X.
That's a very good place for Intel to be overall, even if their power consumption is higher than I'd like to see. Downclocked Xeons will probably beat Ryzen's efficiency. And there may be some UEFI/microcode gains to be made here as well, both in efficiency and total performance.
The bigger problem (for the enthusiast market) is thermals. Intel just switched from solder to TIM on the HEDT processors, and it's very clear that they cannot move heat out of the package fast enough. That limits the total overclock, which would otherwise likely reach 5 GHz a good chunk of the time.
veryslow is where the magic happens. As far as I can tell, that's where you start getting decent quality/compression. If I increase CRF any higher (decreasing quality), shit gets super blocky super fast. Unfortunately I haven't found a way to crush high-quality streams like that in anything close to realtime, so you need to record at 1440p (and dump an archival stream to disk) and then transcode to 1080p, since that's doable.
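To make that concrete, here's the offline second step as a rough sketch, with Python driving ffmpeg's libx264 (assumes ffmpeg is installed; the filenames and CRF value are placeholders, not recommendations):

    import subprocess

    # Step 2 of the pipeline: take the archival 1440p recording that was
    # dumped to disk and crush it to 1080p offline, where realtime doesn't
    # matter and the veryslow preset is affordable.
    subprocess.run([
        "ffmpeg",
        "-i", "archive_1440p.mkv",     # archival stream recorded earlier
        "-vf", "scale=1920:-2",        # downscale to 1080p, keep aspect ratio
        "-c:v", "libx264",
        "-preset", "veryslow",         # where the magic happens
        "-crf", "18",                  # raise this and blockiness creeps in fast
        "-c:a", "copy",                # leave audio untouched
        "output_1080p.mkv",
    ], check=True)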
This will not turn out good for the GPU unless you constrain it to very specific parameters. I know this for a fact; I've done it.
Edit: Having just seen that the i7-7800X is priced at $389, I'm thinking the i7-6850K is still overpriced, unless you need 40 PCI-E lanes.
I've seen comments on their platform being "poor" due to software not tuning for the platform specifically, which is definitely possible. But unfortunately I can't bank on that for now, so we had to stay with Intel for our batch transcoding processes. Still very excited to monitor this product line; it's been a long time coming.
You should look for x264 (H.264/AVC encoding) or x265 (H.265/HEVC encoding) tests. HandBrake is often used as a frontend for these, so the test may be named Handbrake H.264 HQ encoding or something like that.
x265 benefits a bit more from Intel's faster AVX instructions, but Ryzen (or Threadripper/Epyc) is still a very valid choice for high quality HEVC encoding. The GPU alternative would be Intel Skylake's GPU-accelerated encoder through Media Server Studio, which has scored well in MSU's test: http://compression.ru/video/codec_comparison/hevc_2016/
As for H.264 encoding, there are no GPU encoders that could approach x264 quality.
>x265 (H.265/HEVC encoding)
Not something our services can reasonably support or use currently, so even if true, Ryzen/Threadripper is still probably not something we can go with.
I look forward to switching in 2-3 years when we re-visit this conversation, or adopting it for other services soon. I'm very excited for AMD's competition and fully willing to switch; I just haven't seen the evidence that supports that it would be a good idea for our particular use case.
It may not have made it down the pipe to everyone's package managers yet but it's out there and should improve performance on Skylake-X and related Xeons.
The CPU to GPU speedup is so massive that it's really not worth it. I don't recall the exact numbers, but I got an order of magnitude boost going from HEVC encoding on CPU to GPU with no noticeable quality difference. It works out to several times more efficient too, FPS/watt-wise.
I mostly use CRF for GPU encoding, though, which should remedy the issue at the possible cost of file size (the increase didn't seem significant, but there was a slight one).
I'm also primarily talking about HEVC, not H.264, and using NVENC, not a CUDA-based encoder like the one your link mentions. I've also seen it mentioned that things have improved a lot on the newer GPUs, so it might be worth a try again if you've got a 10xx card around.
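For reference, this is roughly what I mean, as a sketch using ffmpeg's hevc_nvenc encoder (assumes an ffmpeg build with NVENC support and an NVIDIA card; the QP value is just an example):

    import subprocess

    # Hardware HEVC encode on the GPU via NVENC. Constant-QP mode is the
    # closest NVENC analogue to CRF; expect somewhat larger files at
    # similar visual quality compared to x265.
    subprocess.run([
        "ffmpeg",
        "-i", "input_1440p.mkv",
        "-c:v", "hevc_nvenc",      # NVENC HEVC encoder
        "-rc", "constqp",          # constant-QP rate control
        "-qp", "24",               # example quality level
        "-c:a", "copy",
        "output_hevc.mkv",
    ], check=True)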
Not in my experience. Furthermore, there are a lot of errors (comparatively) that are introduced when GPUs are used at high speeds. Other comments have linked articles indicating why this can be the case.
Is that true?
I suppose for the next generation Intel will announce that liquid nitrogen is basically required.
It's just fantastic at the price point.
I lived through the age of self-built K6s, Athlons, dual Celerons, SLI Voodoo 2s and heatsinks the size of a can of Coke. I'd rather just have a shitty old ThinkPad and a box in AWS now. This all happened when I realised a naff old 1.4 GHz Pentium III (Compaq AP230) was actually my go-to machine despite having things 3-5x its speed.
I run a fair few high CPU/RAM simulations with LTspice as well for ref.
I went for 32GB of Corsair Vengeance as 2x16GB at a conservative timing (the newer beta BIOS for the mobo I have supports more memory options as well as clocking), since it's a machine at work.
A) A system's price has many components besides the CPU, which makes ultra-cheap CPUs in otherwise identical configurations not as good in price/performance as they look.
B) What matters is really the performance differential from what you're upgrading from, and even more importantly the differential, years from now, against the systems of that time. Whether you upgrade to a system that lasts 5 years or one that lasts 7 years before the next upgrade is significant for price/performance, because buying high performance early means the price is amortized over a longer period.
C) Significantly higher single-threaded performance compared to Ryzen 7, the main contender. There are still many tasks that are single-threaded, especially if you run legacy code.
D) AVX-512. I doubt the review benchmarks use AVX-512; they're mostly legacy code. AVX-512 significantly increases both the vector width and the fraction of code and algorithms that can be vectorized. Once compilers are well tuned for it, code compiled with AVX-512 optimizations turned on should be significantly faster than before: a 512-bit register holds 8 doubles or 16 floats, so being able to do 8-16x the work per cycle across a large variety of tasks is a significant advantage, even if it only applies a good fraction of the time rather than all the time (rough illustration below).
Personally, the i7-920 has given me far better price/performance than people who bought dual cores at the time got, simply because it has lasted LONGER, so it wins on price/(time between CPU upgrades). Right now, by that same measure, the i7-7820X is the king.
So in conclusion, the i7-7820X is faster on both real legacy code and future code, and it stays good enough for longer, simply because the code you'll be running when you start considering an upgrade will run much faster on it thanks to these extensions.
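To spell out the width argument in D, a trivial sketch (pure arithmetic on register sizes; nothing here is hardware-specific or measured):

    # Elements processed per vector instruction at each SIMD width,
    # vs. 1 element per instruction for scalar code.
    for name, bits in [("SSE", 128), ("AVX2", 256), ("AVX-512", 512)]:
        print(f"{name:8s}: {bits // 32:2d} floats or {bits // 64:2d} doubles per instruction")

That prints 4/2 for SSE, 8/4 for AVX2 and 16/8 for AVX-512, which is where the 8-16x-over-scalar figure comes from.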
https://www.phoronix.com/scan.php?page=article&item=ryzen-18... has the Ryzen 7 1800X beating even the i7 5960X when compiling the Linux kernel.
http://www.hardware.fr/articles/956-13/compilation-visual-st... has the Ryzen 7 1800X drawing around even when compiling Boost (under both MSVC and MinGW gcc, so it isn't just a compiler difference).
I wonder if it's linking the huge binaries at the end of the compile that makes all the difference, especially given the difference in L3 caches? I'd be surprised if the individual compilation units were so much larger as to make a difference.
... I'd suspect the memory channels here.
But the differences between Skylake-EP/X and Xeon Phi in AVX-512 support were known well in advance.
I don't think the differences in specific instructions were known in advance, at least at or shortly after the KNL launch. Or rather, I was instructed not to mention them.
The instruction set info on that page is 2-3 years old, nothing has changed since then.
I've seen those tables at least a year before the KNL launch; they were also available on Intel's website (the blog, IIRC) at least a year before KNL.
I don't know who told you that that information is embargoed.
EDIT: The initial table was here: http://software.intel.com/en-us/blogs/2013/avx-512-instructi... The blog post no longer exists; try archive.is.
"The instruction groups common to both the Intel Xeon Phi processor x200 and the Intel Xeon processor are AVX-512F and AVX-512CD. The AVX-512ER and AVX-512PF groups are implemented in the Intel Xeon Phi processor x200 only. The AVX-512BW, AVX-512DQ and AVX-512VL groups are implemented only in the Intel Xeon processor."
https://software.intel.com/en-us/blogs/additional-avx-512-in... This used to lead to a PDF with the tables; the PDF is still there, but it's a living document that gets updated over time. The latest revision was in 2017, but as early as 2014 there was already a split into COMMON-AVX512, MIC-AVX512 and CORE-AVX512, with a difference between the future Xeon Phi and Xeon processors.
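If you want to check which of these subsets your own machine reports, a quick sketch (Linux-only, since it parses /proc/cpuinfo; the flag names follow the kernel's naming):

    # Report which AVX-512 subsets the local CPU advertises.
    SUBSETS = {
        "avx512f":  "AVX-512F  (foundation, common to Xeon and Xeon Phi)",
        "avx512cd": "AVX-512CD (conflict detection, common)",
        "avx512er": "AVX-512ER (exp/reciprocal, Xeon Phi x200 only)",
        "avx512pf": "AVX-512PF (prefetch, Xeon Phi x200 only)",
        "avx512bw": "AVX-512BW (byte/word, Xeon only)",
        "avx512dq": "AVX-512DQ (dword/qword, Xeon only)",
        "avx512vl": "AVX-512VL (vector length, Xeon only)",
    }

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break

    for flag, desc in SUBSETS.items():
        print("yes" if flag in flags else "no ", desc)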
Incredibly, the 7900X is hitting 240 watts. When was the last time we had a chip that burned that much energy? Granted, it's destined for the server room, but a 2U quad-CPU box will be hitting close to 1000 watts on CPU usage alone at 100% utilization (4 x 240 W = 960 W). That's close to a medium-sized window AC unit.
I was going to hold off for Intel's response, but I knew they wouldn't have anything with 8 cores anywhere near a 65W TDP. Coffee Lake should offer lower TDPs, but with 6 cores max, which seems unappealing compared to 8C.
I'm going to revisit Intel after they've had time to prepare a real response to Ryzen in ~2020 or so.
One of the best computer investments I've made, since getting my first SSD. Instead of a loud whine occasionally, I get a steady hum. I have a Silverstone SG13 case, which is about the size of a sneaker box, fit into that one without any issues.
I use a Raven RVZ01 (see also FT01 for less aggressive styling) and I have a 120mm AIO in it.
Intel has changed the power delivery design; there's now a linear voltage regulator on the CPU, and it seems that some of the motherboards have issues with it.
Bit-Tech's power draw figures put the 7900X, a 10C CPU, only slightly above the 8C 1800X.
I suggest you read more reviews and actually wait for post-launch updates; most of these tests are still using very early beta versions of the BIOS.
Which you are also going to need, to cool your new 1800W computational space heater.
Since Intel is still using thermal gunk on its HEDT processors, the thermal resistance is ridiculous (Rth j-c of 0.3 K/W).
Hopefully competitive pressure will get them to start taking their process more seriously.
Let's imagine it spends 4 hours per day under load, 6 hours idle, and the rest of the time sleeping. We're talking about 0.5 kWh a day, or 15ish kWh per month. By comparison, the average US household uses around 900 kWh in a month.
Upgrading to one of these would, in that scenario, increase your household energy budget by about 1.6%.
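Spelling out that back-of-the-envelope math (the 120 W figure is my assumption, not a measurement):

    # Assumed extra draw under load vs. the old system; idle/sleep treated
    # as negligible for this estimate.
    load_hours_per_day = 4
    extra_load_watts = 120

    kwh_per_day = load_hours_per_day * extra_load_watts / 1000    # ~0.5 kWh
    kwh_per_month = kwh_per_day * 30                              # ~14-15 kWh
    household_kwh_per_month = 900                                 # avg US household

    print(f"{kwh_per_month:.1f} kWh/month = "
          f"{100 * kwh_per_month / household_kwh_per_month:.1f}% of the average bill")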
Edit: looks like it's missing the favored-core mode, which is probably important for gaming. Which brings up an interesting question: if CPUs are now going to last ~5 years, is buying a $500+ CPU reasonable?
In PC Perspective's i9-7900X review you can see it uses significantly more power than the i7-6950X, despite both ostensibly being 140W parts.
Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE. TCASE is the temperature measured on the heat spreader over the CPU.
That's not correct. Both Intel's TDP and AMD's TDP are maximum-typical values, for different scenarios. Every processor can exceed its TDP by a significant margin under certain notorious workloads.
Are you sure about that? I can't dig up the link at the moment, but I seem to recall some hardware hacker thread where people would regularly exceed the TDP of Intel chips by running SSE/AVX-heavy workloads like Prime95, and Intel's response was basically, "That's not a real workload, it doesn't count". Which is not an unreasonable answer, but it's definitely not defining TDP as the maximum possible power draw.
EDIT: why the downvotes? explain yourselves
Speaking of browsers, they aren't really a good benchmark for 6-10 core processors, since anything is going to be fast enough. Also, existing JS can't really be parallelized, so I suspect even a magical browser would still mostly benefit from single-thread performance.
And AnandTech couldn't be bothered to write a proper article? Sentence after sentence is just hard to read, with extra commas and poor sentence structure. Did they not have time during the embargo?