How to make Linux run fast again on Intel CPUs (linuxreviews.org)
369 points by weinzierl 37 days ago | 194 comments



I am still shocked at how few repercussions Intel has actually faced for this fiasco. This must have cost companies like AWS millions of dollars in lost capacity/early upgrades--why have they not made a serious effort to get these issues under control, and why are they not rapidly losing market share to AMD and Arm?


It takes time. Look closely and you will see that some cloud providers are sending out patches for things such as AMD-specific KVM code. Maybe they will never reach full feature parity in their offerings, but at least we can see that interested customers exist, and it is surely sending signals to Intel.


Perhaps much of their workload is io bound and they didn’t notice a thing?


I would think that would actually benefit companies like AWS. Patches for the bugs cause applications to run slower; because of that, their customers will be forced to use bigger instances, and bigger instances cost more.

I suppose the only loss for AWS would be if someone decided to move back to their own data center because of this, but I don't think it would come to that.


I don't work at AWS (and haven't) but I do work in the world of big infra and my wild-ass guess is that the user stuff in AWS doesn't run as consistently hot as the SaaS stuff that Amazon runs itself.


Agreed. We run at like 11% average CPU usage across 5k+ EC2 instances in AWS.


Why would that be the case? AWS makes no guarantees about the performance of their instances. If everything gets slower, the most likely thing that happens is high-scale customers need a correspondingly higher number of servers to handle their load.


AWS will care about performance per kilowatt-hour.

They have also, at least in the past, advertised the specific processor models different instance types use. The information is sprinkled around so that people who care can know what to expect.


AWS has a very nice lock-in for their customers. They don't have to care much.


> AWS makes no guarantees about the performance of their instances.

They do for services that are higher-level than EC2, where cost/performance assumptions about the underlying instances used are built into the cost structure.

AWS tries very hard to get customers onto those higher-level managed services instead of rolling their own on EC2.


Not only were there few repercussions for Intel as a company; its CEO and some of his team faced few personal repercussions too.

What amazed me is how Intel execs avoided insider trading charges from the SEC when they sold their own Intel shares, knowing of the problems themselves, but before it was public knowledge.


The Intel vulnerabilities are a matter of national security across the world. If you can use a known exploit to take complete root access at a hardware level just by running some network requests on the local network, imagine what a foreign government could do if they wanted.


Because a system is secure only until somebody breaks it. If tomorrow someone comes up with an algorithm that can factor large numbers in polynomial time, RSA will basically be useless, but until then we consider it secure.

Besides that, all these vulnerabilities come from optimizations done by the CPU. The only reliable way to avoid them is not to have the optimizations in the first place, and that is what the mitigations do.

It's the user's choice whether or not to enable the mitigations. I personally have `mitigations=off` on my machines because I think the security threat is not that big for my workflow.
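For anyone wanting to make this persistent, a minimal sketch for a GRUB-based distro (Debian/Ubuntu-style paths assumed; adjust for your bootloader):

```shell
# /etc/default/grub -- add the flag to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Regenerate the GRUB config and reboot:
sudo update-grub

# After reboot, confirm the flag took effect:
cat /proc/cmdline
```

On Fedora/RHEL-style systems the regeneration step is `grub2-mkconfig -o /boot/grub2/grub.cfg` instead of `update-grub`.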


I can't speak for everyone who's ever used an Intel CPU, but Intel's certainly lost my tiny fraction of market share to AMD, at least for desktops (AMD laptops with solid specs seem to be rather unfortunately rare); switched from Intel + Nvidia to AMD + AMD when I built my latest rig, and haven't looked back.

Granted, in my case it was less about CPU security issues (some - though certainly not all - of which also impacted AMD and ARM CPUs, from what I understand) and more about AMD getting me way better bang for my buck (and also, because I wanted to try out a Threadripper and my God am I glad I did), but still.


they are rapidly losing market share to amd right now


Intel has been here before; they will persevere. I type this using an AMD CPU.


Is that true in server chips? I know AMD has gotten design wins from e.g. Microsoft Surface, but I've seen nothing to indicate an appreciable change in server sales.


It is especially true in server chips. AMD offers double the cores per watt of TDP at roughly half the price at the same IPC.


Have they actually gained much market share in the server space though - big corps don't move very quickly, even when the options are way more competitive. "Nobody ever got fired for choosing Intel" and so on.


Dell and HP are having trouble sourcing Intel CPUs and have started rolling out way more AMD machines. Customers needing new machines don't have a lot of options, which has taken AMD from about 1% server market share to about 10% in just months.


The thing about Intel’s sales dept is that they are able to muscle big clients with incentives from their “make comp” fund. Remember the lawsuit from years ago about Intel paying Dell not to buy AMD products? They’re still doing that, and haven’t paid that fine yet.


Surely if they have the funds to bribe companies into picking Intel, they could instead use that money to make their offerings on par with AMD in $/performance?


Well, I'm not sure what to make of these "cores per watt of TDP" and I suspect few will. SPEC has the "power" benchmark, measuring the performance of a Java server and relating it to the electrical power needed for that.

Seems to me that systems based on AMD's EPYC are doing considerably better in this benchmark than those using Intel CPUs: http://spec.org/power_ssj2008/results/res2019q3/


At least from what I've seen in SMB, we have been buying AMD. Plenty of AMD. For our servers.


It is a perfect opportunity for AMD; unfortunately it looks like there isn't enough manufacturing capacity at TSMC on 7nm for them to fully exploit the situation.

The recent delays and high prices for high-end consumer Ryzen appear to point to AMD trying to satisfy an increasingly high demand for Rome Epyc.


Know someone who works on AWS. Says it hasn't affected them at all.


In all my computing life every time I've bought something from AMD (be it a CPU or a GPU) I've regretted it. It's going to take me some time to get over that.


I bought a K6. It was amazing compared to Intel at the time. No regrets.


i bought the latest ryzen. very satisfied with the purchase so far :)


Because the alternatives really are not alternatives, and because moving away from existing investments in hardware is not a quick and easy move. Mark my words: once (or if) people figure it out, the drain will be pulled on Intel.


As of pretty recently, Epyc is better by almost all metrics.


KVM maintainer here. Virtualization still has slightly lower overhead on Intel for several workloads. Even with all mitigations enabled (they hardly affect AMD times), AMD is sitting at about 3000 clock cycles vs 2000 for Intel.


I've never heard about this, genuinely curious what workloads those are.


Anything that causes the guest to go to sleep (HLT) a lot, for example message passing between two processes on two different VMs. You can try TCP_RR from netperf.
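For reference, a minimal run of that benchmark might look like this (the IP address is a placeholder; assumes netperf is installed in both VMs):

```shell
# In the target VM: start the netperf server daemon
netserver

# In the client VM: measure request/response round-trips for 30 seconds.
# Each small transaction puts the guest to sleep (HLT) and wakes it again,
# which is exactly where the exit-cost difference shows up.
netperf -H <target-vm-ip> -t TCP_RR -l 30
```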


In all, unless you use Optane. Which you might on database servers with large infrequently accessed datasets? Can't really think of many use cases where throwing in an NVMe drive isn't better.


I'm currently pricing out a new home workstation build, not looking at intel parts.


I don't have the overall impression that AMD/ARM are significantly less affected by all those issues.

There were a few Intel-specific fuckups. Maybe more, or maybe more people were looking at Intel. But the core problem is deeply embedded in how CPUs have been built for decades.


AMD and most ARM CPUs weren't vulnerable to the entire Meltdown class of issues.

https://openbenchmarking.org/embed.php?i=1907066-HV-MITIGATI... from https://www.phoronix.com/scan.php?page=article&item=3900x-99...


I recently brought back to life a PC that a friend was about to throw away. It has an Intel i7-3820 CPU and is running the latest Ubuntu. Here are my results from Geekbench 5.0.2 Tryout for Linux x86 (64-bit) before and after "mitigations=off":

mitigations on: 865 single-core, 2624 multi-core

mitigations off: 912 single-core, 3995 multi-core


Throw away an i7? That's a bit wasteful.


Many people think about Intel branding this way, and I blame Intel for the confusion. The number right after the "i" is not what matters when comparing chips between generations; it's the number after that. This is a 3rd-generation chip, as you can tell from it being a 3000 series.

Currently for sale is generation 9 (the 9000 series), and even these are on the way out soon. 10th gen is right around the corner.

Within these thousand series are the ACTUAL part numbers.

Ex: 9900K, 9980XE, etc.

These names are even more confusing but are what tell you the real power of a chip within the lineup for that year.


i3-8121U (one year old) vs. i7-2600K (eight years old):

https://www.anandtech.com/bench/product/2367?vs=2413

The newer i3 wins a few, and has much lower power consumption, but the old i7 wins more. Naturally the 2018 i7 thoroughly defeats the eight year old one (while using more power):

https://www.anandtech.com/bench/product/2258?vs=2413

But the generation is only one factor. Generational improvements don't always win over other factors like clock speed, cache size, core count, etc.


The i7-2600K is a desktop CPU, the i3-8121U is a laptop CPU.

A recent desktop i3 is clearly faster: https://www.anandtech.com/bench/product/2277?vs=2413


And not only that. The K series is the unlocked high-performance version of the desktop series. Whereas U is the low-voltage/ultrabook version of the mobile series. So they are sitting pretty much at the opposite end of the spectrum (if we remove server and phone CPUs from the picture), and it's no huge surprise that it still shows up as faster.


> The i7-2600K is a desktop CPU, the i3-8121U is a laptop CPU.

The i3-8121U is a low-power CPU. Low-power processors often end up in desktops like the Mac Mini (which doesn't use that specific model but has used U-series processors) or the Intel NUC (which uses that exact CPU):

https://www.newegg.com/intel-nuc-8-home-boxnuc8i3cysm1-stude...

Meanwhile the i3-8350K is the top end i3 which has the same number of cores and cache and a higher clock speed than the i7-2600K. No surprise it wins when the only thing it really lacks is hyperthreading, and even that causes it to lose some of the threaded tests.

The point was that e.g. an old quad core could often beat a new dual core. Every new dual core i3 is low-power, but if you want to see the 2600K acquitting itself well against a modern dual core "desktop" processor, here you are:

https://www.anandtech.com/bench/product/2268?vs=2413


And the difference is going to rocket up a lot pretty soon, as the next i3 will have 4 cores/8 threads.


That chip's almost 8 years old. It can barely keep up with today's entry-level desktop processors, and draws twice as much power while doing so.


It's arguable whether it keeps up, but it offers perfectly fine performance for pretty much whatever most people want to do, including a lot of gaming (with a proper GPU).

It does use considerably more power to do so: a few additional LED bulbs' worth.

In general, processor performance has outpaced consumer needs for quite some time and most people could get along fine and performant with this 8 year old processor.

Citation: have and still use one, right next to my blown out 8700k system.


Can confirm: I have been using an old processor in my T430 for a couple of years and it works great, even when compiling Rust projects. Battery life is terrible, though. So I have a Pinebook Pro coming in; hopefully there are no ARM-related headaches, but even with some large papercuts I think it will be an improvement in usability.


Indeed, Sandy Bridge i7-2600 is still more than enough for me (no gaming but crunching numbers frequently).


Probably one of the best investments in the past decade. I had a 2500K for 6 years and only sold it since I moved and didn't carry the old desktop with me. Performance-wise it was still going strong and left nothing to be desired. Definitely faster than the 2015 Macbook Pro I had been using since then - which is also not that bad as an everyday PC.


I'm still rocking the i7 920. It's still running perfectly fine. I throw a new gpu in it every couple of years and have never had performance problems.


I recently moved from a 920 to an 8700k.

The 920 will bottleneck GPUs as slow as a GTX970 in some titles (PUBG, Rage 2 for example).

Great chip and still plenty for development/daily use, but IMO past its prime for gaming.

I also had to have expansion cards for USB 3.1 and SATA 3 which was annoying. NVMe is out of the question :(


Gaming, sure, but just throwing an i7 away? Heck, my ham radio PC is a dual core duo.


Oh yeah I'm not advocating tossing them. There's lots of opportunities to repurpose a chip like that.

I just see lots of people saying things like "still daily driving an abacus!" as if there's been no progress on CPU development in the last 10 years, and that's just false. Progress has _slowed_, but a modern Ryzen or i-Series is quantifiably better than older gear.


And if you use the computer extensively and live in a country where electricity is expensive, you could also be paying for that inefficiency in your utility bill.


I have a desktop PC in the living room, with an i5-2400 + GTX1050TI - still runs all modern games in high settings at 1080p@60fps. Literally no idea why someone would throw out a 3xxx series CPU at this point, it's all fast enough for daily use.


Best machine in the house is an i3 540 running Ubuntu. I recently upgraded RAM to 12GB.

It is perfectly fine for my needs: mostly browsing, some video editing, and coding Perl.


Bla Bla bla. We have a ton of Westmere generations running everything from HPC workloads to kubernetes to desktop machines which are fine.


Even with the mitigations, that chip is significantly faster than an Athlon 200GE; without them it's more than twice as fast (in Geekbench at least).


I bet it also uses much more than twice the power.


Athlon? I'm surprised to see that AMD is still using that brand name.


It's just the brand for their super low end stuff, similar to how Intel is still using the Pentium name.


I have an i7-3770K overclocked to 4.4 GHz; it handles every workflow I need with no problem.

Newer processors have more cores for sure, but if we look at single-core performance there isn't that much difference, and my almost 8-year-old overclocked i7 probably runs a single-threaded application faster than a new 16-core Ryzen.

Of course if we talk about multi threaded workflows a newer CPU is better.


Eh. My main desktop is a Core 2 Duo 2.4GHz from 2005. I'm typing this comment on said machine right now.

It's running Kubuntu and I'm running the latest versions of everything on this -- Chrome, RStudio, etc. -- and it runs plenty fast. I have a Windows 10 computer at work with the latest i7 chip which feels slightly faster, but not that much.

Most software these days isn't CPU-bound but IO-bound. I got the biggest performance boost from upgrading to an SSD and upping the RAM to 8 GB.

If you run Linux, you can have a very comfortable experience running on hardware from yesteryears. Throwing out a perfectly good machine is a waste.


The mitigations can halve performance. People need to make Intel pay up.


My old, but still functioning i5 3210 laptop has exactly half the single-core speed of the 2018 MBP.

Still a viable development machine!


There was a huge jump in mobile CPU performance between 7th and 8th gen; 8th gen mobile i7 was on the level of desktop 4790k, almost doubled the performance of 7th gen i7.


Funny that you mention the 4790K, because that's exactly the setup I have, and I was shocked how fast the i7-8565U is in my Razer Blade Stealth. When I first got that laptop, just as a test I started a video conversion of a Blu-ray movie in Handbrake on both my desktop (i7-4790K) and my laptop (i7-8565U) at the exact same time, and both machines finished after 2 hours within a minute of each other. It's absolutely remarkable how fast that mobile chip is, while only using 25W of power vs the 4790K's 90W.


Yup. 8th gen finally made the 15W i7s quad-core, as opposed to their being more or less low-clock/low-power desktop i3s. The turbos were nicely tuned too, so single-thread perf went up as well.

Reportedly for the 10th gen, a lot of said i7s will be equivalent to desktop i3s (4C/8T) again. (Progress?)


heh, they copied my website :p https://make-linux-fast-again.com


Big thanks for making the website. I stumbled upon it a few days ago and I'm pretty happy that somebody pointed my attention towards this.

The parameter list could even be reduced to fewer elements, as mitigations=off covers nopti, nospectre_v1, nospectre_v2, l1tf=off, nospec_store_bypass_disable and mds=off.[1]

[1] https://github.com/torvalds/linux/blob/master/Documentation/...
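In other words, on kernels that support the umbrella switch, these two kernel command lines should be equivalent (the exact set of flags covered varies by kernel version, per the linked documentation):

```shell
# Long form: one flag per vulnerability class
nopti nospectre_v1 nospectre_v2 l1tf=off nospec_store_bypass_disable mds=off

# Short form, on kernels recent enough to support it
mitigations=off
```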


That seems to depend on kernel version. So just add them all. Better safe than sorry... I mean... unsafe than sorry.


Isn't mitigations=off enough on newer kernels?



It is, but I know a fair amount of people on older distros (Ubuntu 16.04, 18.04, etc.) who may have some of the flags available but not mitigations=off, which is a more recent addition.


One of the article's comments says so.


Thank you for that website. As I disable mitigations on all my personal hardware, this saved me a bit of time when this whole thing started and has served as a nice quick reference since.


That looks cool. Always wanted to know though: do you get any benefit from such websites? Just interested, thanks!


As the owner of a couple vanity domains for no useful purpose, no, I don’t “benefit” from it. In fact, each domain costs me a small amount of money. But the amusement I get from them makes it worth it. :)


As someone with a few dozen vanity domains: the benefit is learning how to manage multiple web properties with minimum resources. It costs me about $500/year in domain registration alone. I consider it my hobby as well, but it also makes us shoo-in candidates for a lot of technical positions, because we don't need to tell, we can simply show.


Why don't you put them on a tld that'll give you cheap domains?


Then they’re not vanity domains anymore (;

I’ve been doing this for 20 years, all discounts found and baked in. FWIW all of my demo sites since last summer use .xyz before they become full fledged vanity domains


> Always wanted to know though, do you get any benefit from such websites?

it's a net loss of 10€ a year but makes me happy which certainly improves my cardiovascular health :p


It seems they don't. Just one line is nothing; you still need to read the f---ing kernel info.


yes, that was tongue in cheek - actual work is done by people writing the kernel


Meanwhile at the NSA: "The plan worked perfectly! Now they are disabling the security patches willingly. Everyone, back to work!" (:


For a local workstation user not doing anything sensitive etc I don’t see why you wouldn’t do this. Be smart. Be clean online. And enjoy the full power of your box. A server though, especially one that will be targeted, should take the performance hit and get patched.


Isn't it the other way round? Most of these mitigations are to stop hostile code from stealing your secrets. A physical server doesn't run hostile code. A workstation with a web browser which supports JavaScript runs potentially hostile code all day, every day.


Some random server you have control over most likely doesn't run much hostile stuff. But nowadays a HUGE number of "servers" are really cloud-hosted VMs and on those every single thing is hostile from the cloud company's perspective.


True - I did write physical on purpose!


But the machines hosting those VMs are physical servers too, and I'm certain that gigatexal was including them when he wrote that comment.


Depends on your browsing pattern. I don't visit random websites often. And even if I do, I don't stick there for long (those exploits need a lot of time to extract useful data). And sites that I visit are trusted (like HN) so they are very unlikely to run exploits.


Sorry, I consider this to be a very poor defence.

You might trust the site you go to; but that's not all the javascript that gets loaded, there are includes that a lot of web developers will use for convenience, there are ad networks and there are bugs that could allow some unintended javascript to get loaded (like with bit-squatting[0]).

It's not a given that you're safe running javascript if you only visit sites you trust. Even if you are INCREDIBLY sanitary, which I would argue is unrealistic (just check your history and you'll see that you probably visit many sites off of a google search).

[0]: http://www.youtube.com/watch?v=lZ8s1JwtNas / http://dinaburg.org/bitsquatting.html


You run arbitrary code from a lot more sites than just those you visit, and I doubt you're combing through it.

Third-party JS, tracking, unblocked advertisements, possibly malicious on-site code, etc.

If you're allowing any site to run JavaScript, you're running entirely arbitrary and uncontrolled code on your machine, which is thus in dire need of the mitigations.

It's a different situation for a server designed to not run remote code, which makes it practically impossible to exploit speculative execution vulnerabilities.


I didn't read the article, but I would caution against dismissing the idea that someone can read a good chunk of your memory with Meltdown and Spectre via a JS ad.

I am fuzzy on the details, but I do think it’s possible.

Meltdown abuses out-of-order execution, with the leaked data observable through the caches. They made it difficult to get a good timing source in JS, so there's that.

Spectre is about training the branch predictor unit to speculatively execute code that leaves the data you want in the caches. You would need a good timing source again.

So there is some security in place. I think a creative hacker will get around it though.


> They made it difficult to get a good timing source in JS, so there’s that.

No, they didn't. A while loop with a counter is a good timing source, and you can't really prevent that without severely crippling everything's performance. Want a good timing source in your browser? Just follow those three easy steps:

1. Start a while loop in a webworker

2. In that while loop, increment a counter in a SharedArrayBuffer

3. In the main JavaScript, read that number from the SharedArrayBuffer.

Bam, a decent clock.


Didn't they disable SharedArrayBuffer for that reason?


Yes and no?

> Note that SharedArrayBuffer was disabled by default in all major browsers on 5 January, 2018 in response to Spectre. Chrome [re-enabled it in v67](https://bugs.chromium.org/p/chromium/issues/detail?id=821270) on platforms where its site-isolation feature is enabled to protect against Spectre-style vulnerabilities.

from https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


> They made it difficult to get a good timing source in JS

It's true that Date.now() has a reduced precision, e.g. Firefox with privacy.resistFingerprinting=true rounds to the nearest 100 ms increment. But you still get high-precision timestamps from window.requestAnimationFrame(), so you get a precision of at least 16 ms, assuming 60 Hz refresh rate, or even higher on gaming rigs with 144 Hz monitors.


I find it amusing that having an expensive monitor could be used to leak data more efficiently from CPU caches and RAM.


If you don't run untrusted code you should be fine on bare metal servers too.


These days I find it hard to trust the many dependencies installed by language-level package managers. OS-level system package managers tend to do better, but they too have also had incidents.


If you have a compromised dependency, Spectre is the least of your worries IMO.


Then why do you have them installed? Surely you're not using those dependencies you don't trust as actual dependencies of your server application, are you?


Fraudulent transitive dependencies.

My application depends on libuseful version 1 or later. libuseful_1 depends on libtiny, but libuseful_2 depends on libbigballofmud, which consists of libtiny and 800 other libraries that have been merged together for political reasons. libuseless, which was also merged into libbigballofmud, depends on rce-daemon, or nvidia-brick-the-install, or systemd, and I've never heard of rce-daemon before, so it's not blacklisted and installs with no error message. As the other two options suggest, this is not hypothetical.


> nvidia-brick-the-install

Is this in reference to when the nvidia driver had

  sudo rm -rf /usr /lib/something/something
in its install script?


... I'm not actually surprised.

But no; my desktop machine has a video card that doesn't display anything (black screen) if booted with the non-legacy nvidia drivers. I had to boot off an old 32-bit install to get rid of them and then wrestle apt/dpkg back into a sane state.


Ah right, yes, NVidia frequently drops support for old models in their newer driver releases. I'm so happy I have left that world behind, with both my PC and notebook using AMD GPUs.


I tried this and ran Geekbench 5.0.2. Here are the results:

mitigations on: 980 single-core, 2008 multi-core

mitigations off: 976 single-core, 2741 multi-core

links before: https://browser.geekbench.com/v5/cpu/376453 after: https://browser.geekbench.com/v5/cpu/376504


Your single core score went down? How?


It's just a few numbers... really the test should be run multiple times and the median taken.

Oh, and I ran it more times; the multi-core score ranged from 2500 to 2900.


It went down by 0.5%, that's most likely irrelevant.


There really just needs to be a good way to run your browser in a horrible jail. Someone just needs to sell an ARM chip on a USB stick that runs v8 for Chrome and nothing else.

Destroying the performance of the entire machine to enable the ability to run JS seems a bit over-zealous.


Those exist and they are called Chrome sticks. They are also available in laptop format, called Chromebooks, and in small-form-factor box format, known as Chromeboxes.


So the window manager just takes a raw stream of the video from the browser on the USB-based machine & sends input requests to & fro?

Sounds like the old days with X11.

export DISPLAY=win:0.0


Edge on Windows has a mode to run in a separate VM (Application Guard). Not sure if they kept it after moving to the Chromium engine.


I wish they included some level of benchmark for the performance gain. In practice, how much of a difference would that make? Also a short summary of what those mitigations protect against would be great.

This way the reader could make an informed decision: do I want to sacrifice X to get Y?


The difference depends very much on the actual workload. For example, I have a small server running mostly CPU-bound programs, where the mitigations do not noticeably slow down anything, but a simple "xz -l large_iso_file" takes 20 minutes with the mitigations enabled and only 2 minutes with mitigations=off.



I've read that it's possible to exploit Meltdown or Spectre with just JavaScript. Apparently it's now patched in most browsers.

Other than JS, am I right to think it's still a difficult vulnerability to exploit, since you need to execute code on the target machine? It seems to be bad news for cloud providers, but not for users.


Gibson Research’s InSpectre utility will allow you to do the same thing on Windows.

https://www.grc.com/inspectre.htm


>We don't need no barriers between software, they could be friends

Wholesome.


Lucky for me, I never got to patch Spectre or Meltdown via the BIOS upgrades for my Razer Blade, because they only support Windows 10 as a BIOS upgrade option, so you can have a GUI experience for what could be done with a .bin file on a USB stick.


I'm not an expert, but my understanding was that Linux updates the microcode by itself; it has nothing to do with the BIOS.

EDIT: depends on the distro
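Right: on x86 Linux the distro's microcode package (intel-microcode / amd64-microcode on Debian-family systems) is applied early at boot via the initramfs, independent of the BIOS. A quick sketch for checking what is currently loaded (field names assume x86):

```shell
# Microcode revision the CPU is currently running:
grep -m1 microcode /proc/cpuinfo

# Kernel log lines from the early-boot update, if any (may need root):
dmesg 2>/dev/null | grep -i microcode || true
```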


Also, some vendors support putting the single-file updater .exe on a USB stick and upgrading from the BIOS directly: https://www.dell.com/support/article/us/en/19/sln171755/upda...


So... what is the worst-case scenario if you decide to disable these mitigations? What precautions can one take to be relatively safe while turning these off?


If/when someone finds a way around the timer patch in your web browser (or if your browser doesn't have the patch yet for some weird reason), a website running javascript for long enough can extract kernel secrets and AES keys, a password manager's entire database, any application's authentication token and pretty much anything else in memory.

Executing such an attack is difficult and unreliable, but in my opinion this also means that casting a wide net might be worth it for the attacker instead of only using the exploit kit to target specific computers.

If you run any software in a virtual machine or Docker container, this boundary is easily crossed. If you run any application whose supply chain might be attacked (for example, one of the million JavaScript files in an Electron app), such an app can get root on your system straight through any sandboxing and virtualisation.

Edit: obviously, this is a worst case scenario.


Is it feasible to make privilege escalation of this sort impossible by shifting cryptographic verification to dedicated hardware?


Intel hardware has SGX, the concept of a secure enclave for sensitive data... which is also vulnerable to the same class of speculative-execution exploits on hardware generations prior to Cascade Lake.

Specifically it was a trio of vulnerabilities colloquially known as 'Foreshadow / L1TF': CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646.

https://en.wikipedia.org/wiki/Foreshadow_(security_vulnerabi...


It's not really a privilege escalation, but a method for completely bypassing memory protection. So keeping cryptographic verification outside the CPU is not enough, if you end up having your sensitive data in the RAM anyway.


What about on a database server that no web browser runs on?


The majority of users won't turn those protections off, as they don't even know about them. So one can't really cast a wide net.


I think they meant casting a wide net in search of the small percentage of machines where the mitigations are turned off. A 0.01% success rate is okay if you hit ten million systems.


In general, these kinds of attacks are cross-process information leaks and sandbox escapes. The attack vector is untrusted code running on your system, of which the by far most common and most potentially malicious is javascript when you use a web browser.

So the worst case scenario? You visit a website with some attacker-controlled javascript, and it successfully picks up information leaks from your password manager allowing it to obtain all your passwords.

I'm not sure how realistic that attack is. I know that for some of these vulnerabilities attacks from javascript were shown to be possible, but I'm not sure what the target information was in that case.


Are there ways to prevent data from being in the cache?

I know almost nothing about computer security.

I ask because ages ago, I added copyright protection to some products (under protest). Though encrypted on disk, once loaded, all our code and key phrases were "plaintext" in memory, easy to read. Hackers would post cracked versions of our releases within 24 hours.

The whole effort seemed pointless. My takeaway was if you had physical access, you have everything. But maybe things have improved.


If you have physical access, you have everything (given sufficient time and effort). This will always be true, though dedicated security hardware (TPMs, etc) might drive up that time and effort requirement to get at their contents.

The access provided by these attacks is a far cry from physical access. There are plenty of things that can be done to mitigate them (removing data from the cache, for example).

Of course, this entire post is about how to disable those mitigations, as they almost always come with a performance penalty.


In-memory data like that is always accessible by someone who wants it badly enough (with a physical device and a clean install); one can run Windows in a debugger-like environment and view all of system memory. I think that's why most DRM is web-authentication based.


I have a c.2016 Intel development machine. I never log in at the console or use any GUI, it runs command line only Fedora Linux and apart from that it only ever runs code that I myself have written. Because I use it for compiling and testing I need it to run as fast as possible.

Over the past few years it has really slowed down. I got some (not nearly all) of the performance back by using some of the command line options shown here. I'm going to try the whole command line later today and do some actual before and after testing.

But IMO this is the kind of situation where turning off the mitigations makes sense.


Also check SSD wear level if you use one.


How do you check that?


On Linux one usually does this with `smartmontools`. The wear indicator depends on the device vendor. https://unix.stackexchange.com/questions/106678/how-to-check...
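For example, assuming the package is installed and the disk is /dev/sda (the attribute names shown are just the common ones; yours may differ):

```shell
# overall SMART health verdict
sudo smartctl -H /dev/sda
# dump the vendor attributes and pick out the usual wear counters,
# e.g. Wear_Leveling_Count (Samsung) or Media_Wearout_Indicator (Intel)
sudo smartctl -A /dev/sda | grep -iE 'wear|percent.?used'
```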


So as a somewhat unconcerned user, I've been disabling some kernel mitigation since more or less when these horrors were first revealed, and when I could remember/be bothered after a kernel upgrade.

But I'm on an AMD CPU not intel.

I'm very curious to hear what knowledgeable folks here could tell me about performance gains on AMD (1950x) from these or any other similar switches.


Here is my geekbench results:

before: https://browser.geekbench.com/v5/cpu/381062

another before: https://browser.geekbench.com/v5/cpu/381122

after with mitigations off: https://browser.geekbench.com/v5/cpu/381082

I believe there is almost no difference (unless my BIOS hardened something like this that I am not aware of: https://www.asus.com/us/support/FAQ/1035323/)

If one needs those extra ~200 points, I think they've probably already ensured that no extra apps are running and they're using CPU isolation.


Thanks, knowledgeable folk person! Was beginning to think you wouldn't show up ;)


I believe they only apply to intel


Some mitigations also apply to AMD CPUs.


Spectre v2 affects AMD CPUs.


Still AMD is overall much better protected: https://images.anandtech.com/doci/14525/Mark_Papermaster-Nex...


There are some benchmarks of mitigation impact on Phoronix [1]. These benchmarks are only for mitigation sensitive workloads so real world impact will be lower.

[1] https://www.phoronix.com/scan.php?page=article&item=amd-zen2...


Could someone knowledgeable create a subset of these for AMD Zen? I have some compute-only servers in my garage for which I don't care about mitigations, but do care about performance.


mitigations=off is all you need on a recent kernel
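After rebooting, one quick way to confirm the flag took effect (standard sysfs paths on recent kernels):

```shell
# the command line the kernel actually booted with
cat /proc/cmdline
# per-vulnerability status; entries flip to "Vulnerable" once mitigations are off
grep . /sys/devices/system/cpu/vulnerabilities/*
```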


I do not know why but disabling all the mitigations as described actually resulted in worse performance for me

mitigations on score: 719 Single-Core Score 1438 Multi-Core Score

mitigations off score: 679 Single-Core Score 1418 Multi-Core Score

Running Ubuntu 18.04.3 LTS with i5-4300 U on a T440s.

Any ideas why? Makes no sense to me.


How many times did you repeat the test? 98% odds you had a YouTube video playing in the background, or some full-text indexer was running, or similar. As for the 2%, I find it hard to believe constantly flushing caches and rejigging page tables on every syscall could ever be faster, but there is always room to be surprised :)

The other obvious option since you mention it's a laptop is probably throttling. Worth at least checking dmesg output to see if the kernel logged any events during the run, but I'm not sure that's conclusive.

On my Dell it's possible to force the fans to 100% through a third-party utility; maybe yours supports something similar.
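If throttling is the suspect, a quick look might be (the grep pattern is just a guess at the usual wording of those log lines; dmesg may need root on locked-down systems):

```shell
# any thermal/throttling events the kernel logged during the run
sudo dmesg | grep -iE 'thermal|throttl'
# live per-core temperatures, if the lm-sensors package is installed
sensors
```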


I don't have /etc/sysconfig/grub on Manjaro. Is it /etc/default/grub?

Should I add the line as specified in the article if my CPU is as old as Core 2 Duo? Will it make any difference? Won't it render the system unbootable?


You can check with this command if your CPU is affected by any of those:

  grep . /sys/devices/system/cpu/vulnerabilities/*
And yes, you'll need to edit the /etc/default/grub file, but make sure that you add your extra parameters to GRUB_CMDLINE_LINUX_DEFAULT based on this:

https://wiki.archlinux.org/index.php/Kernel_parameters#GRUB
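On Arch/Manjaro there is no update-grub wrapper, so the full sequence is roughly (the flag shown is just an example; keep whatever was already inside the quotes):

```shell
# edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
sudoedit /etc/default/grub
# regenerate the config grub actually reads, then reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
```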


Thanks!


Yes. You should end up with a line like this:

GRUB_CMDLINE_LINUX_DEFAULT="noresume noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off"



Would be great if this could be done for Windows (mitigations=off). I’d be happy to reboot to a mode where I have 3% better perf. I’d easily do it every time I wanted to play any game where I’m CPU bound


You can turn mitigations off for Spectre and Meltdown by using the GRC InSpectre program from Gibson Research. The tool's main purpose is to display if your system is patched but it also allows you to turn them off. I never tried it to see if it actually works though. Windows might turn them back on for all I know.


You can disable them through setting a registry key or use this handy tool instead: https://www.grc.com/inspectre.htm


Isn't this largely redundant?

My understanding was that "mitigations=off" is a shortcut for "disable all spectre/meltdown changes that cost me performance", at least on a recent kernel.


As explained on the site, yes, but if you are running an older kernel you need to use the individual flags instead, and since the kernel safely ignores any flags it doesn't recognize, the full string works for everyone and doesn't do any harm. (Other than turning mitigations off, ofc...)


Removing all shielding from the nuclear reactor doesn't do any harm... to the reactor.


That's the thing. On old kernels you have to disable each one separately, on new kernels you can do either that or mitigations=off. If you want to make sure you disable everything on old kernels and disable potential new options on new kernels without going into details - you can use the line provided.


It would be nice if we were able to do this for specific cores, and then somehow assign critical applications to secure cores and regular applications to performance cores
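Per-core mitigation settings don't exist in the kernel, but the assignment half of the idea works today with CPU affinity; a sketch (the core split and program names are made up):

```shell
# pretend cores 0-3 are the "secure" pool and 4-7 the fast pool
taskset -c 0-3 ./critical-service &
taskset -c 4-7 ./bulk-compute-job &
```

cgroup cpusets would do the same thing more durably for whole service trees.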


I've been wondering why we haven't seen a linux kernel scheduler yet that'll SMT colocate only threads of the same process. Derisk the SMT-based attacks by making the only targets be "self".

(Or processes of the same user in the same namespaces, or some such criteria for "mutually trusting" processes.)


you could get two machines


You mean like a heterogeneous cluster? Do you have suggestions on how to manage load deployment on such a topology based on how security-critical the application is?


That actually sounds straightforward in most job scheduling software that I've seen; just tag the hosts "nomitigations"/"mitigations_on" or so, and set constraints on where jobs are scheduled to match.


If you want to try to do this for games, then at least measure the impact with some benchmarks in games that are CPU bound. Maybe it's not worth it.


The real impact here is at the userland/kernel-space barrier.

Games, for instance, don't cross that boundary as much as, say, networked databases.


What about qemu with virtio networking and drives?


Then yes, very much impacted.


Yeah, aren’t most games single-threaded? I doubt this affects gaming performance very much.


Games have necessarily become more and more parallelised (whether through multithreading or multiprocessing) over the last decade, as even consoles are multi-core these days.

Both XB1 and PS4 have 8 cores (2x Jaguar modules). Both also reserve one of the cores exclusively for the system, so a single-threaded game would only leverage 14% of the available CPU compute power.

The Switch is somewhat similar, though to a lesser extent: it effectively uses a quad-core ARM CPU, with one core reserved for the system, leaving 3 for game developers.


Most games are heavily multithreaded.

For the last decade most of them have been console ports, and consoles have had many CPU cores since the Xbox 360 (3 cores / 6 threads, 2005) and PS3 (1+6 asymmetric cores, 2006).

Most games render from single thread, but besides submitting these draw calls games do a lot of things under the hood.


When running Overwatch under VFIO according to Task Manager and i7z tool on the host it loads cores pretty evenly for me.

Can't keep 150 FPS at all times during a replay for instance, gonna disable mitigations and see if anything changes.

EDIT: nothing drastic happened, probably need to disable Windows protections as well.


Can't edit anymore, so adding here. Between 'not vulnerable' and 'max speed' there are a few different intermediate states, probably was in one of those. And I have to admit that for All Green in spectre-meltdown-checker.sh and InSpectre.exe I'd have to pay a pretty noticeable price based on my Overwatch VFIO testing.

Pretty sure there's no point returning to that intermediate state again, might as well commit to one or the other.


We are adults. And free. You give us options and we can choose. And talk about it.


What would this translate to in a typical workstation situation?

Browser with some YouTube video playing, 30 tabs, some webpack Dev server doing compiles and a text editor?

50ms faster compiling?


Infection.


I'm not sure if it is related, but going from Linux kernel 5.2 to 5.3 on Arch makes my laptop constantly spin the fans, and it runs considerably more hot. I tried upgrading, and downgrading a few times, but 5.2 results in a much cooler laptop for me, so I've disabled kernel update for now.


Hm, for AMD we need a fix for this: https://sourceware.org/bugzilla/show_bug.cgi?id=24979

The bug was closed, but it doesn't look like it was fixed.


I like the sarcastic tone it's written in. Wonder if it makes enough difference to notice?


Wow, running Ubuntu 19.04 on an XPS 13 and this was a very noticeable boost. I knew it had gotten slower but I'd forgotten by how much. I'm just ignoring what I've potentially opened myself up to...


Can someone please share some better resource on how to activate it, especially for a Linux noob?


In the file "/etc/default/grub" there is a line that says GRUB_CMDLINE_LINUX_DEFAULT="", where inside the quotes is a possible list of parameters. Just add the parameters to this list, separated by spaces. Then back in the terminal run "sudo update-grub" and restart your machine.
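The steps above as commands, for Ubuntu-style systems (mitigations=off is just the example flag; keep whatever was already inside the quotes):

```shell
# 1. edit the kernel command line so it ends up looking something like:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
sudoedit /etc/default/grub
# 2. regenerate grub's config and reboot
sudo update-grub
sudo reboot
```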


Would love to get a pointer for the equivalent for Windows 10. Thanks.



Actually, using Clear Linux is THE way to have the fastest Linux.



Sadly funroll-loops[.]org is no more, but there's an archive mirror: https://web.archive.org/web/20060803063141/http://funroll-lo...




Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact

Search: