EDIT: To clarify: if you compare both "out of the box", the old one would be unsafe while the new one hopefully is safe, because it comes with hardware/firmware patches built in. If you compare both patched, the old architecture takes a large performance hit compared to when it was launched. With time the patches usually evolve and change the performance characteristics, so you have to be careful which OS or firmware revision you are using. Whichever way you do it, it's not an easy apples-to-apples comparison anymore unless you're talking about a very specific use case at a very specific point in time.
Also, that 6850k CPU / Asus X99 Gaming Strix combo, which I bought just after release, is the worst I've ever experienced; so many problems, including frying the CPU when using the XMP settings that were promoted as easy at the time (took me 2 replacements to figure it out).
A 3900x or 3950x is coming up as soon as possible. I am peeved about upgrading so early (it'll have been only 3.5 years), but I've had it. The old 2600k PC is still ticking along very nicely at 4.4GHz.
The 6850k was such a massive letdown. Yes, when it worked the extra performance was great, but it has been anything but stress-free. So the new Ryzens look simply astonishing, at a much lower power/heat cost to boot.
Tech promoters have spent a lot of time and energy explaining that anyone saying that is just an idiot who doesn't understand that cloud is the future (e.g. 1, 2). But the basic insight was never wrong, and the people saying it knew just fine what they were talking about. 'The cloud' means giving up physical-layer control, essentially by definition. That's a real tradeoff people ought to make consciously, and it's one that lost some ground lately.
Certainly a server in a datacenter can be as isolated as a server in the basement. And unless your threat model involves governments, a reputable hosting company having physical access to the box shouldn't be much scarier than having it in your office.
But lots of people (including those cloud-hyping articles I linked) claim that dedicated servers, even with virtualization, are just "remote hosting". Their standard for 'cloud' is basically "computing as a utility", with on-demand provisioning and pooled resources. I know some huge companies have attempted "private clouds" that provision on-demand from within a dedicated hardware pool, but I think most smaller projects have to choose between on-demand and dedicated.
And I'm assuming my $5/mo DO droplet isn't on its own dedicated die...
To be fair though, those chips are a comparatively small part of the datacenter market. Most of them are sitting in IT closets, or per the example above are running HPC workloads on bare metal. Cloud services are the sexy poster child for the segment, but not that large in total.
Stuff like Rowhammer is very different from something like a SQL injection on a website.
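To make the contrast concrete: an application-level hole like SQL injection can be fixed entirely in the application, while Rowhammer lives below anything the application can see. A minimal Python/sqlite3 sketch of the former (table, column and input are all made up for illustration):

    import sqlite3

    # Toy in-memory database purely for illustration; schema and data are made up.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "' OR '1'='1"  # attacker-controlled string

    # Vulnerable: the input is spliced straight into the SQL text,
    # so the OR clause makes the WHERE match every row.
    rows = db.execute(
        "SELECT name, secret FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("injected query returned:", rows)

    # Fixed: a bound parameter is treated as data, never as SQL,
    # so the same input simply matches no user.
    rows = db.execute(
        "SELECT name, secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterized query returned:", rows)

There is no equivalent of that second query for a physical-layer attack; no amount of input handling in the app helps.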
They also run games that are just not secure. I know one that was storing user credentials in plain text in the registry (where no special permissions are needed to read them).
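For anyone who hasn't poked at it: anything under HKEY_CURRENT_USER is readable by any process running as the logged-in user, no elevation needed. A minimal Python sketch (the key path and value name below are hypothetical, not the actual game's):

    import winreg  # standard library on Windows

    # Hypothetical location; a real game's key path and value name will differ.
    KEY_PATH = r"Software\SomeGameStudio\SomeGame"
    VALUE_NAME = "password"

    # HKEY_CURRENT_USER is readable by any process running as the logged-in
    # user, so no elevation or special privilege is needed here.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
        print(VALUE_NAME, "=", value)  # plain text, exactly as stored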
According to AnandTech, only Spectre V1 requires a pure software mitigation.
Perhaps we've just been spoiled with the leaps that AMD has been making recently.
It's probably not a deliberate trade-off: their 10nm process is simply not good enough yet to get high yields when pushing clock speed. Their 14nm process was excellent in this regard after they optimized it for so long; it's the only major advantage 9th-generation Intel parts still have compared to Ryzen 2. It's no surprise they have to take a step back there until 10nm improves.
That's not a given. The "more performant under the same power envelope" chips in the lineup all have more cache, more vector extensions, SMT unlocked, and more Turbo Boost, but no actual qualitative improvements.
Some have even argued that Ice Lake is less power efficient than the latest Skylake derivatives.
The chip is getting mixed reviews.
Personally I'm more excited about the prospect of less-crippled ultras than about a few % bump in single-thread performance.
Just before the smartphone revolution happened, Microsoft decided to make a GPU-centric graphics architecture (WDDM, which Wayland imitates) and Intel decided to make an integrated GPU that was just barely adequate to run Windows Vista.
Intel's plan has always been to take as much of the BOM of a PC for themselves as possible, so they hoped to vanquish NVIDIA and ATI, and might even have succeeded if it weren't for cryptocurrency and deep learning.
The trouble is that Intel has been pursuing phoneishness for the last ten years instead of performance, systematically erasing the reasons why you would buy a PC instead of a phone. They've tried to sell phone chips in China where there is no free press and they can keep people from talking about the benchmark results, how slow their phone is, how hot it gets, etc.
Intel's idea has been that gaming means playing Candy Crush, and they've let AMD steal their fire by making the CPU/GPU SoCs for the XB1 and PS4. PC gaming has converged with console gaming in many ways, but the common denominator is that Intel integrated graphics is tolerable only for the most casual of casual games, and even the recent performance improvements get you to 2 frames per second in League of Legends instead of 1. Intel is entirely AWOL when it comes to GPGPU on their integrated graphics, but it just isn't worth the effort with their low-performance parts.
The Chinese government blocking specific keywords and network ranges is one thing. Intel censoring reviews is an extraordinary claim... it could use some extraordinary evidence.
Maybe this is due to design improvements unrelated to the process node?
I hope AMD starts using more than one chiplet for laptops. Until they do, an AMD laptop is limited to 4 cores.
The 3700U is currently a mediocre mobile CPU, somewhere between an 8th-gen i7 and i5, with a 2.3GHz base clock. But that's the Zen+ architecture, and its graphics performance is already at the Iris Pro level. If AMD can get the same IPC, clock and TDP improvements for mobile Zen 2 as on the desktop, where the clocks haven't been cut by 30% the way Intel's were, I think Ice Lake won't be able to compete at all, from what we've seen so far. Of course, there is much more nuance in terms of heat/power envelopes and how it all comes together while boosting, but it definitely doesn't look good for Intel based on this...
Other than some extra power draw needed to couple the 2 chips together, which I assume is minimal, splitting computation from rendering seems like a very good idea - where am I wrong? Would the extra monetary cost be significant - if so roughly by how much?
I would have much rather left the GPU out entirely and used the space for something else, but the high-end Ryzens don't have any graphics support. I'd have to go down to the APUs, where I'd trade off power.
There's little point to even having HDMI/DisplayPort outs on the board itself; they're unusable except for a small subset of APUs.
I'm personally still excited about these chips for laptops. Lower power and higher IPC mean same-ish performance as the previous generations, but with better battery life and thermals. Plus you get better turbo boost, better graphics, built-in support for TB3, WiFi 6, etc. Seems perfect for something like the Surface Pro. The Core uarch is getting dated, yeah, but Intel is going for breadth and better integration here and it looks compelling.
IOW - by the time we see 10nm Xeons hit the market, AMD will most likely be on the next iteration of the Zen architecture.
They might also be more energy efficient, which I think is the most relevant advantage Intel has against their competitors. So I don't really get the impression that the new chips don't perform well.
Spectre probably knocked off Intel's performance advantage, but is CPU performance really our current bottleneck?
Still, that's the topic that would interest me most, more so than CPU performance.
Intel keeps doing these paper launches and early announcements meant to keep them a little in the spotlight. They haven't been having a good time for a while now. Some websites take it with a grain of salt, some still don't, even after repeatedly being used to toot the horn for products that don't exist yet.
This is what my computers have to say about that:
Also, most RAM here is not taken up by the build process but by Firefox tabs and a Windows VM.
Because transistors have a capacitance that has to be driven, higher frequency requires more voltage, and more voltage means bigger leakage currents, which means more heat and power consumption.
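The usual first-order model is dynamic power ≈ C·V²·f, and since higher clocks generally need higher voltage to switch reliably, power climbs much faster than the clock does (with leakage on top of that). A toy sketch with purely illustrative numbers:

    def dynamic_power(c_eff_farads, voltage, frequency_hz):
        """First-order CMOS dynamic power: P ~= C * V^2 * f (leakage ignored)."""
        return c_eff_farads * voltage ** 2 * frequency_hz

    # Purely illustrative numbers, not measurements of any real chip.
    C_EFF = 1e-9  # effective switched capacitance in farads

    base = dynamic_power(C_EFF, 1.00, 3.0e9)   # 3.0 GHz at 1.00 V
    boost = dynamic_power(C_EFF, 1.25, 4.0e9)  # 4.0 GHz, needing 1.25 V

    print(f"base:  {base:.2f} W")   # 3.00 W with this toy capacitance
    print(f"boost: {boost:.2f} W")  # 6.25 W: ~33% more clock, ~2x the power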
Hardware is cheap. Programmers are not. It makes much more economic sense to have programmers "use layers upon layers" and buy extra CPU, than to have a couple of x86 assembly gurus hand craft every opcode of your application.
Development time may be more expensive than hardware if you're developing internal software for a small-ish company, or something extremely niche, but less so when the bloat saves some time for your relatively small team in exchange for bringing massive pain to the thousands/millions of your users.
I personally refuse to use "bloated" software even when there's no alternative, but it's a drop in the ocean. Some of the folks I know can't take the same stance simply because they don't know that a text messenger doesn't have to eat half of your machine resources and slurp the battery to zero in half an hour of usage. For them it's just something you have to deal with.
Same with programmers.
It doesn't make sense to make a billion people buy more hardware because of a small bunch of programmers.
It has nothing to do with handcrafting asm. All the virtualization, the virtual machines, their own little patchy frameworks which are unnecessary, the virtual DOM, the actual DOM... just to show some text or a button on the screen that doesn't adhere to the OS interface guidelines and probably doesn't support screen readers, color schemes, font changes, DPI changes, proper behavior, keyboard navigation, or scriptability.
Yes, I like things and people lean.
I hate this line of reasoning. Yes it is true, as long as you keep things reasonable. But many use this principle to go beyond the reasonable.
Slack has probably cost more in electricity alone than it would have taken to build a more efficient client.
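Back-of-envelope, with every input an obviously made-up placeholder (user count, extra wattage attributable to the client, usage hours, electricity price), just to show the shape of the arithmetic:

    # All inputs below are placeholders for illustration, not real measurements.
    users = 10_000_000            # concurrent desktop users
    extra_watts_per_user = 5      # extra draw attributable to a heavyweight client
    hours_per_workday = 8
    workdays_per_year = 250
    price_per_kwh = 0.15          # USD

    extra_kwh_per_year = (
        users * extra_watts_per_user * hours_per_workday * workdays_per_year / 1000
    )
    cost_per_year = extra_kwh_per_year * price_per_kwh

    print(f"extra energy: {extra_kwh_per_year:,.0f} kWh/year")
    print(f"electricity cost: ${cost_per_year:,.0f}/year")
    # With these placeholder inputs that's on the order of $15M/year; the
    # conclusion obviously stands or falls with the numbers you plug in.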
But seriously, if programmers didn't have several software layers to rely upon, a simple software tool would cost millions and would require an army of engineers or a few years to complete.
I programmed a scroll bar for a UI toolkit once. It isn't as trivial as it seems and takes a while. A scrollbar...
So you piqued my curiosity. What are some examples of those non-trivial things?
It was implemented in C, for a small display on a debugging controller for a µC. It took quite a while, even without any dynamic content, for something we take for granted. And in the end it was still pretty clunky.
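For the curious, even the core geometry alone is fiddlier than it looks: thumb size and position have to be derived from content size, viewport size and scroll offset, the thumb clamped to a minimum length so it stays grabbable, and then a drag has to be mapped back to a scroll offset without the rounding falling apart. A tiny sketch of just that mapping (in Python rather than the original C, all names made up):

    def thumb_geometry(content_px, viewport_px, scroll_px, track_px, min_thumb_px=20):
        """Map scroll state to a thumb (offset, length) within the track."""
        if content_px <= viewport_px:
            return 0, track_px  # nothing to scroll: thumb fills the track

        # Thumb length is proportional to the visible fraction, clamped to a minimum.
        thumb_len = max(min_thumb_px, round(track_px * viewport_px / content_px))

        max_scroll = content_px - viewport_px
        max_thumb_offset = track_px - thumb_len
        thumb_offset = round(max_thumb_offset * scroll_px / max_scroll)
        return thumb_offset, thumb_len

    def scroll_from_thumb(thumb_offset, thumb_len, content_px, viewport_px, track_px):
        """Inverse mapping: where a dragged thumb puts the scroll offset."""
        max_thumb_offset = track_px - thumb_len
        if max_thumb_offset <= 0:
            return 0
        return round((content_px - viewport_px) * thumb_offset / max_thumb_offset)

    # e.g. 10,000 px of content in a 500 px viewport with a 480 px track
    print(thumb_geometry(10_000, 500, 1_234, 480))       # -> (59, 24)
    print(scroll_from_thumb(59, 24, 10_000, 500, 480))   # -> 1229, not 1234: rounding drift

And that still ignores the arrow buttons, page-wise clicks on the track, wheel events, hit testing, and repaint.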
I believe that browsers still disallow customizing scrollbars, and there are countless examples of people having built their own scrollbars to replace them. Many of those are quite wonky, so that calms me down at least.
I guess it's one of those things we take for granted now and thus you think that it can't be that hard to create... and then you try to do so and you realize how complex some things really are.
Thanks for the details!
Programming doesn't exist in a vacuum. It has real business constraints, and you simply cannot make a perfect program. You must make the right compromises in order to be successful.