Fixing these bugs at the CPU level will require architectural changes, and sweeping changes like that won't come for years. Intel hasn't made such an overhaul in years, and AMD just made one.
The average HN user is not the typical desktop/laptop user. I bet we use virtual machines, compilers, technologies like NVMe, databases (Postgres/MySQL)... more often than the typical desktop/laptop user.
These perf issues will mostly affect servers and cloud providers, or some edge cases on the desktop. Everyone else can just apply the patches and get on with their work.
Maybe so, maybe not. Personally, I have a hunch that most HN users run the same setup as the typical desktop/laptop user.
Also, I don't host critical production machines, just some web and database servers. I wonder how the "I host some stuff on the internets" guy is affected by all this.
Depends on the game, though. While that's true in most cases, one of my favourites - Kerbal Space Program - is heavily CPU-bound, since it's mostly a real-time physics simulation.
Evaluate your threat model in the context of Meltdown/Spectre and opt in or out of the mitigations accordingly. There are relatively few cases where the workload is both significantly affected by the mitigations and vulnerable to these attacks (Xen paravirtualization would be the prototypical example). Personally, I opt out of page table isolation, KASLR and any retpoline-style mitigations on my desktop systems and compute servers for performance reasons. Make sure you understand the implications of these choices if you go that way, though.
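For reference, on Linux these opt-outs are kernel boot parameters (`pti=off`, `nokaslr`, `nospectre_v2`), and the kernel reports the resulting state under `/sys/devices/system/cpu/vulnerabilities/`. A small sketch that parses those status strings - the sysfs path and the "Mitigation:" prefix are the real kernel interface, but the sample values below are just examples:

```python
from pathlib import Path

# Real sysfs interface (Linux >= 4.15); one file per known vulnerability.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def parse_status(name: str, text: str) -> tuple[str, bool]:
    """Return (vulnerability_name, is_mitigated) for one sysfs status line.
    The kernel prefixes active mitigations with "Mitigation:"."""
    return name, text.strip().startswith("Mitigation:")

def report() -> None:
    """Print the live mitigation state of the current machine."""
    for f in sorted(VULN_DIR.iterdir()):
        name, mitigated = parse_status(f.name, f.read_text())
        print(f"{name}: {'mitigated' if mitigated else 'NOT mitigated'}")

# Example status strings in the format the kernel uses:
print(parse_status("meltdown", "Mitigation: PTI"))  # ('meltdown', True)
print(parse_status("spectre_v2", "Vulnerable"))     # ('spectre_v2', False)
```

Running `report()` on a patched box makes it easy to confirm whether a `pti=off` or `nospectre_v2` boot flag actually took effect after a reboot.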
Read the following thread:
If you follow safe computing practices, it is unlikely you will be hit by either the Meltdown or Spectre attacks.
It's not just limited to browsers, though. Content may be injected via other sources, and it's difficult to get an exhaustive list of such applications. But here is an example involving iMessage: https://threatpost.com/inside-the-latest-apple-imessage-bug/...
Too bad Apple doesn't give users the option to have only "text-only" messages.
Loading complex file formats is structurally similar to interpreting code; the output of the interpretation is some data structure, rather than some side effect, but otherwise it's very similar. There may be some loaders that are controllable and configurable enough to be programmed by data - perhaps something in the video decoding space that isn't offloaded to dedicated hardware or GPU - but I think it would be tough to find. JS or a downloadable game (which already lets people in the front door) of some kind are the best vectors.
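A toy sketch of that equivalence (everything here is hypothetical, just for illustration): a "file format" whose records drive a little stack machine. Loading it produces a data structure, but the control flow is entirely determined by the input bytes, which is exactly what makes a loader "programmable by data":

```python
# Toy "file format": a sequence of (opcode, arg) byte pairs.
# The loader's output is a data structure (the final stack), but the
# sequence of operations is chosen by the file - i.e. it's an interpreter.
PUSH, ADD, DUP = 0, 1, 2

def load(blob: bytes) -> list[int]:
    stack: list[int] = []
    for op, arg in zip(blob[::2], blob[1::2]):
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            stack.append(stack.pop() + stack.pop())
        elif op == DUP:
            stack.append(stack[-1])
    return stack

# A "document" that computes 2 + 3 and duplicates the result:
print(load(bytes([PUSH, 2, PUSH, 3, ADD, 0, DUP, 0])))  # [5, 5]
```

The richer the format (nested structures, back-references, conditional records), the closer its loader gets to a full interpreter, and the more attacker-controlled its behavior becomes.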
In other words, blocking 'malicious' code is reactive, not proactive. Reactive solutions by definition end up solving yesterday's problems while being oblivious to today's - viz. the liquids restriction in hand luggage.
I suppose you could disable it on Linux, but at that point it seems like you’re going against the tide.
This is such dangerous advice... you should really only disable these things if:
a) your computer is not connected to the internet in any way
b) it is in a trusted environment
This is similar to purchasing used cars: you can steadily upgrade with much less capital, so each purchase carries much less risk. It's also much better for the environment.
I suggested delaying a new purchase until the CPUs are fixed (who knows how long that will take), or upgrading by purchasing used to get a bump in performance regardless of the hardware flaw (which will still require software mitigation and thus come with performance degradation).
Most hardware worth using right now has the hardware flaw so you might as well upgrade (for performance) by purchasing used.
If at scale, you probably don’t have an option because you need it now.
If for personal use, we don’t know what the fix would look like, or whether there would be a perf hit. If you can hold off for a few years sure do it, but it’s not a huge part of the equation now.
How long will this device last you? If less than 2 years buy now.
Honestly, there are very few reasons to wait. Everyone has this problem. The only thing I might wait for is clarification of the attack surface of Spectre, since that can't be patched, but it seems difficult to pull off in most cases.
I mean, how frustrated would you feel if it turned out you had spent that amount of money on a computer that should be replaced ASAP?
The other truth is that Intel CPUs are just faster than AMD CPUs (now with a few exceptions, but for general use that still applies).
I'd say if you are talking about a large quantity purchase, you should maybe wait or look for alternatives. Other than that it should not make much difference for us "normal folks."
Another truth is that there was - and still is - so much misinformation out there.
Finally, I wonder if there are going to be any optimizations to these fixes in the future?
I think when it comes to most desktop workloads, the security issues are far more interesting and serious. And while Meltdown only affects Intel CPUs, Spectre affects all (?) out-of-order CPUs, and there may be more, and more effective, attacks in the future. So every fast, modern CPU is pretty much in the same boat, and it will take some time for the dust to settle.
So, I would just stick with the Intel CPU. You can vote with your wallet once one vendor decides to axe ME/PSP and/or properly mitigates Spectre-type attacks ;).
No one likes having stuff taken away. People were also annoyed at how "badly" their VW drove after the diesel fixes. Even if you'll probably never hit the limits where you'd feel the difference, just the thought that from now on your CPU could be slower than it was yesterday is annoying enough.
But I agree with you. It is the wrong crowd that complains the loudest!
I think this is largely due to faulty bottleneck analysis. I'm not hugely slowed down overall if switching to a chat application is 50ms slower, even if I may notice it. If the tests I need to run at various stages of the development process are slowed down by 30%, however, that might actually be noticeable for development pace - I'm more likely to context switch to something else / get out of flow.
It's also interesting how perception of these bottlenecks differs. I wouldn't care much if tests run 30% slower; I run them in the background from my editor anyway and usually tidy up stuff in the meantime, or git stage some lines (which needs to be done anyway). However, if an editor has higher latency than I am used to, it drives me crazy.
Heh. I do that too, but I'm anxious enough that I rerun the tests after finalizing the commit. I'll still use the time while the tests finish to do another read through the commit, but especially when pushing to many supported branches at the same time, tests still take longer (I work on pg, which has 5 years of supported back branches...).
> However, if an editor has a higher latency than I am used to, it drives me crazy.
Oh yeah, but I wouldn't use a web-based editor, ever ;). The point I was trying to make was less about something as central as an editor - where I spend a lot of my time, fighting for first place alongside terminals - and more about secondary, non-critical-path stuff like IM.
In fact, unless I'm restricted to a single program I don't have any alternatives to, I'll generally even select programs written in a language I understand and/or prefer.
I have several colleagues that use slightly different metrics, but still pay very close attention to performance and details of the tools they use. Especially if that's something they use for hours over the course of a day.
Of course I am annoyed by the fact that I will lose some power. But I do not have much of a choice now, do I?
Either way, I am not utilizing my CPU the way servers with many users do. Those servers will feel the impact the most.
I personally don't care if I lose 2 fps or if my compilation takes 20 seconds longer after all these fixes. But if you budget, say, 5000 USD per month for servers and everything was running fine, and now it turns out you need to add another 3000 USD a month, that's where the impact is felt the most - not with normal users like us, where compilation might take a bit longer.
Also, for some people it is just a hobby to always have the latest and best :)
I write and run a ton of multithreaded data processing scripts and really feel this is going to be a world of hurt for me.
My last fully specced iMac has lasted me eight years, so I might just wait it out and perhaps pick up something far cheaper in the meantime.