I suppose the only loss for AWS would be if someone decided to move back to their own data center because of this, but I don't think it would cause that.
They have also, at least in the past, advertised the specific processor models that different instance types use. The information is sprinkled around so that people who care can know what to expect.
They do for services that are higher-level than EC2, where cost/performance assumptions about the underlying instances used are built into the cost structure.
AWS tries very hard to get customers onto those higher-level managed services instead of rolling their own on EC2.
What amazed me is how Intel execs avoided insider trading charges from the SEC when they sold their own Intel shares, knowing of the problems themselves, but before it was public knowledge.
Besides that, all these vulnerabilities come from optimizations done by the CPU. The only reliable way to avoid them is not to have the optimizations in the first place, and that is essentially what the mitigations do.
It's the user's choice whether or not to enable the mitigations. I personally have `mitigations=off` on my machines because I don't think the security threat is significant for my workflow.
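For anyone who wants to do the same, here's a minimal sketch of how I'd set it on a Debian/Ubuntu-style system (the file location and the update-grub step are assumptions; other distros differ):
# Add mitigations=off to the options you already have in /etc/default/grub, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
# Regenerate the GRUB config and reboot:
sudo update-grub
sudo reboot
# Afterwards, confirm the flag actually made it onto the kernel command line:
cat /proc/cmdline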
Granted, in my case it was less about CPU security issues (some - though certainly not all - of which also impacted AMD and ARM CPUs, from what I understand) and more about AMD getting me way better bang for my buck (and also, because I wanted to try out a Threadripper and my God am I glad I did), but still.
Seems to me that systems based on AMD's EPYC are doing considerably better in this benchmark than those using Intel CPUs: http://spec.org/power_ssj2008/results/res2019q3/
The recent delays and high prices for high-end consumer Ryzen parts appear to point to AMD trying to satisfy an increasingly high demand for Rome EPYC.
There were a few Intel-specific fuckups. Maybe more; maybe more people were looking. But the core problem is deeply embedded in how CPUs have been built for decades.
https://openbenchmarking.org/embed.php?i=1907066-HV-MITIGATI... from https://www.phoronix.com/scan.php?page=article&item=3900x-99...
mitigations on score: 865 Single-Core Score, 2624 Multi-Core Score
mitigations off score: 912 Single-Core Score, 3995 Multi-Core Score
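To put those numbers in perspective, here is the rough relative cost of the mitigations in that particular run (simple arithmetic over the scores above, nothing more):
# Single-core: (912-865)/912, multi-core: (3995-2624)/3995
awk 'BEGIN { printf "single: %.1f%%  multi: %.1f%%\n", (912-865)/912*100, (3995-2624)/3995*100 }'
# -> single: 5.2%  multi: 34.3%
So the single-core hit is barely noticeable here, while the multi-core hit is substantial.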
Currently for sale is generation 9 (the 9000 series), and even these will be out the door soon; 10th gen is right around the corner.
Within each of these thousand-series lineups are the ACTUAL part numbers.
Ex: 9900K, 9980XE, etc.
These names are even more confusing, but they are what tells you the real power of a chip within that year's lineup.
The newer i3 wins a few, and has much lower power consumption, but the old i7 wins more. Naturally the 2018 i7 thoroughly defeats the eight year old one (while using more power):
But the generation is only one factor. Generational improvements don't always win over other factors like clock speed, cache size, core count, etc.
A recent desktop i3 is clearly faster: https://www.anandtech.com/bench/product/2277?vs=2413
The i3-8121U is a low-power CPU. Low-power processors often end up in desktops like the Mac Mini (which doesn't use that specific model but has used U-series processors) or the Intel NUC (which uses that exact CPU):
Meanwhile, the i3-8350K is the top-end i3, with the same number of cores, the same amount of cache, and a higher clock speed than the i7-2600K. No surprise it wins when the only thing it really lacks is hyperthreading, and even that causes it to lose some of the threaded tests.
The point was that e.g. an old quad core could often beat a new dual core. Every new dual core i3 is low-power, but if you want to see the 2600K acquitting itself well against a modern dual core "desktop" processor, here you are:
It does use considerably more power to do so, a few additional LED bulbs' worth.
In general, processor performance has outpaced consumer needs for quite some time, and most people could get along just fine with this 8-year-old processor.
Citation: have and still use one, right next to my blown out 8700k system.
The 920 will bottleneck GPUs as slow as a GTX970 in some titles (PUBG, Rage 2 for example).
Great chip and still plenty for development/daily use, but IMO past its prime for gaming.
I also had to have expansion cards for USB 3.1 and SATA 3 which was annoying. NVMe is out of the question :(
I just see lots of people saying things like "still daily driving an abacus!" as if there's been no progress on CPU development in the last 10 years, and that's just false. Progress has _slowed_, but a modern Ryzen or i-Series is quantifiably better than older gear.
It is perfectly fine for my needs: mostly browsing, some video editing, and coding Perl.
Newer processors have more cores for sure, but if we look at single-core performance there isn't that much difference, and my almost-8-year-old overclocked i7 probably runs a single-threaded application faster than a new 16-core Ryzen.
Of course, if we talk about multi-threaded workloads, a newer CPU is better.
It's running Kubuntu and I'm running the latest versions of everything on this -- Chrome, RStudio, etc. -- and it runs plenty fast. I have a Windows 10 computer at work with the latest i7 chip which feels slightly faster, but not that much.
Most software these days isn't CPU-bound but IO-bound. I got the biggest performance boost from upgrading to an SSD and upping the RAM to 8 GB.
If you run Linux, you can have a very comfortable experience running on hardware from yesteryears. Throwing out a perfectly good machine is a waste.
Still a viable development machine!
Reportedly for the 10th gen, a lot of said i7's will be equivalent to desktop i3's (4C/8T) again. (progress?)
The parameter list could be reduced even further, as mitigations=off covers nopti, nospectre_v1, nospectre_v2, l1tf=off, nospec_store_bypass_disable, and mds=off.
ctrl+f: cpu/speculation: Add 'mitigations=' cmdline option
I've been doing this for 20 years, all discounts found and baked in. FWIW, all of my demo sites since last summer have used .xyz before they become full-fledged vanity domains.
it's a net loss of 10€ a year but makes me happy which certainly improves my cardiovascular health :p
: http://www.youtube.com/watch?v=lZ8s1JwtNas / http://dinaburg.org/bitsquatting.html
Third-party JS, tracking, unblocked advertisements, possibly malicious on-site code, etc.
It's a different situation for a server designed to not run remote code, which makes it practically impossible to exploit speculative execution vulnerabilities.
I am fuzzy on the details, but I do think it’s possible.
Meltdown is out-of-order execution leaking data that is detectable through the caches. They made it difficult to get a good timing source in JS, so there's that.
Spectre is training the branch predictor to speculatively execute code so that the data you want ends up in the cache. Again, you would need a good timing source.
So there is some security in place. I think a creative hacker will get around it though.
No, they didn't. A while loop with a counter is a good timing source, and you can't really prevent that without severely crippling everything's performance. Want a good timing source in your browser? Just follow these three easy steps:
1. Start a while loop in a web worker
2. In that while loop, increment a counter in a SharedArrayBuffer
3. Read the counter from the main thread whenever you need a timestamp
Bam, a decent clock.
> Note that SharedArrayBuffer was disabled by default in all major browsers on 5 January, 2018 in response to Spectre. Chrome [re-enabled it in v67](https://bugs.chromium.org/p/chromium/issues/detail?id=821270) on platforms where its site-isolation feature is enabled to protect against Spectre-style vulnerabilities.
It's true that Date.now() has reduced precision; e.g. Firefox with privacy.resistFingerprinting=true rounds to the nearest 100 ms increment. But you still get higher-resolution timing from window.requestAnimationFrame(), which gives you a granularity of about 16 ms at a 60 Hz refresh rate, or even finer on gaming rigs with 144 Hz monitors.
My application depends on libuseful version 1 or later. libuseful_1 depends on libtiny, but libuseful_2 depends on libbigballofmud, which consists of libtiny and 800 other libraries that have been merged together for political reasons. libuseless, which was also merged into libbigballofmud, depends on rce-daemon, or nvidia-brick-the-install, or systemd, and I've never heard of rce-daemon before, so it's not blacklisted and installs with no error message. As the other two options suggest, this is not hypothetical.
Is this in reference to when the nvidia driver had
sudo rm -rf /usr /lib/something/something
But no; my desktop machine has a video card that doesn't display anything (black screen) if booted with the non-legacy nvidia drivers. I had to boot off an old 32-bit install to get rid of them and then wrestle apt/dpkg back into a sane state.
here are the results:
mitigations on score: 980 Single-Core Score 2008 Multi-Core Score
mitigations off score: 976 Single-Core Score 2741 Multi-Core Score
Oh, and I ran it a few more times; the multi-core score ranged from 2500 to 2900.
Destroying the performance of the entire machine to enable the ability to run JS seems a bit over-zealous.
Sounds like the old days with X11.
This way the reader could make an informed decision: do I want to sacrifice X to get Y?
Older hardware is worse off
Other than JS, am I right to think it's still a difficult vulnerability to exploit, since you need to execute code on the target machine? It seems to be bad news for cloud tech, but not for end users.
EDIT: depends on the distro
Executing such an attack is difficult and unreliable, but in my opinion this also means that casting a wide net might be worth it for the attacker instead of only using the exploit kit to target specific computers.
Edit: obviously, this is a worst case scenario.
Specifically, it was a trio of vulnerabilities colloquially known as 'Foreshadow / L1TF': CVE-2018-3615, CVE-2018-3620, and CVE-2018-3646.
I know almost nothing about computer security.
I ask because ages ago, I added copy protection to some products (under protest). Though encrypted on disk, once loaded, all our code and key phrases were "plaintext" in memory, easy to read. Hackers would post cracked versions of our releases within 24 hours.
The whole effort seemed pointless. My takeaway was if you had physical access, you have everything. But maybe things have improved.
The access provided by these attacks is a far cry from physical access. There are plenty of things that can be done to mitigate them (removing data from the cache, for example).
Of course, this entire post is about how to disable those mitigations, as they almost always come with a performance penalty.
Over the past few years it has really slowed down. I got some (not nearly all) of the performance back by using some of the command line options shown here. I'm going to try the whole command line later today and do some actual before and after testing.
But IMO this is the kind of situation where turning off the mitigations makes sense.
But I'm on an AMD CPU not intel.
I'm very curious to hear what knowledgeable folks here could tell me about performance gains on AMD (1950x) from these or any other similar switches.
another before: https://browser.geekbench.com/v5/cpu/381122
after with mitigations off: https://browser.geekbench.com/v5/cpu/381082
I believe there is almost no difference (unless my BIOS hardened something like this that I am not aware of: https://www.asus.com/us/support/FAQ/1035323/)
If one needs those extra ~200 points, I think they would already have made sure that no extra apps are running and would be using CPU isolation.
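For anyone curious, here is a rough sketch of what CPU isolation can look like (the core numbers and the benchmark binary are placeholders; isolcpus keeps ordinary tasks off those cores, and taskset then pins your workload onto them):
# Keep the scheduler from placing ordinary tasks on cores 2 and 3
# (added to the kernel command line, e.g. in /etc/default/grub):
GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2,3"
# Then pin the workload you care about onto the isolated cores:
taskset -c 2,3 ./my_benchmark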
mitigations on score: 719 Single-Core Score 1438 Multi-Core Score
mitigations off score: 679 Single-Core Score 1418 Multi-Core Score
Running Ubuntu 18.04.3 LTS with an i5-4300U on a T440s.
Any ideas why? Makes no sense to me.
The other obvious option, since you mention it's a laptop, is probably throttling. It's worth at least checking the dmesg output to see if the kernel logged any events during the run, but I'm not sure that's conclusive.
On my Dell it's possible to force the fans to 100% through a third-party utility; maybe yours supports something similar.
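A couple of cheap checks to run before and after a benchmark (nothing conclusive, and the sysfs counters only exist where the thermal driver exposes them):
# Look for thermal or throttling events the kernel may have logged:
dmesg | grep -iE 'thermal|throttl'
# Per-core throttle counters, if available:
grep . /sys/devices/system/cpu/cpu*/thermal_throttle/core_throttle_count 2>/dev/null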
Should I add the line as specified in the article if my CPU is as old as Core 2 Duo? Will it make any difference? Won't it render the system unbootable?
grep . /sys/devices/system/cpu/vulnerabilities/*
GRUB_CMDLINE_LINUX_DEFAULT="noresume noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off"
My understanding was that "mitigations=off" is a shortcut for "disable all spectre/meltdown changes that cost me performance", at least on a recent kernel.
(Or processes of the same user in the same namespaces, or some such criteria for "mutually trusting" processes.)
Games, for instance, don't cross that boundary as much as, say, networked databases.
Both the XB1 and the PS4 have 8 cores (2x Jaguar modules). Both also reserve one of the cores exclusively for the system, so a single-threaded game would only leverage about 14% of the available CPU compute power.
The Switch is somewhat similar, though to a lesser extent: it effectively uses a quad-core ARM CPU, with one of the cores reserved for the system, leaving 3 to game developers.
For the last decade most of them have been console ports, and consoles have had multiple CPU cores since the Xbox 360 (3 cores / 6 threads, 2005) and the PS3 (1+6 asymmetric cores, 2006).
Most games render from a single thread, but besides submitting draw calls, games do a lot of other things under the hood.
Can't keep 150 FPS at all times during a replay for instance, gonna disable mitigations and see if anything changes.
EDIT: nothing drastic happened, probably need to disable Windows protections as well.
Pretty sure there's no point returning to that intermediate state again, might as well commit to one or the other.
Browser with some YouTube video playing, 30 tabs, some webpack Dev server doing compiles and a text editor?
50ms faster compiling?
The bug was closed, but it doesn't look like it was fixed.