This is troubling.
However, unlike buffer overflow exploits, most research on CPUs is conducted within academic institutions, and doing this would certainly breach their codes of conduct. Also, CPUs are the most critical components of all computers and their vulnerabilities are difficult to fix; doing this would put a lot of users at immediate risk, unlike a root exploit, which is less risky and can be fixed within a week. Full disclosure of hardware exploits that users cannot fix is much more ethically problematic than full disclosure of software exploits.
But leaving users in the dark and allowing Intel to delay its fixes by not exerting pressure is obviously irresponsible too, which is the original argument for full disclosure.
So I'm not sure. Perhaps Google's Project Zero is a good model and a good compromise between responsible and full disclosure - embargo for 90 days, full disclosure later; all information becomes public after 90 days and no extension is allowed, period. For CPUs, perhaps we could use 180 days.
Imagine if Intel had squashed the competition only to find their architecture is a security risk. As it is now, it really sucks to know that yet another CPU-power-robbing patch is coming. As a processor monolith they could set back most operations 10 years plugging security flaws. At least I feel like I have a few replacement options.
However the skeptic in me feels like it’s only a matter of time that other platforms are researched with equal scrutiny.
I’d argue that side-channel attacks are less of a problem in the server space because servers generally don’t download untrusted code from the internet and execute that code blindly (assuming you aren’t running other people’s VMs).
That's the problem right there: the vast majority of workloads in the cloud run on shared hosts.
AWS/GC dedicated host pricing is not actually that crazy of a markup (around 50% last time I investigated), but that's still very noticeable at scale, and billing granularity is by the hour.
Otherwise you lose a security boundary.
But that’s exactly what cloud does.
> Intel's mitigation plan being PoC-oriented with a complete lack of security engineering and underlying root cause analysis
Being PoC-oriented means that even if there is full disclosure, Intel is still going to patch PoCs in an ad-hoc manner rather than produce a full fix.
Also, full disclosure of a potential problem without a PoC doesn't solve the problem - Intel can choose to ignore it rather than do a careful root-cause analysis and fix the problem if there is one.
At the risk of sounding pedantic, that's not 'full disclosure', that's 'responsible disclosure' - https://en.wikipedia.org/wiki/Responsible_disclosure
I think "Responsible Disclosure" involves cooperation with the vendor before releasing any information, e.g. Project Zero is responsible disclosure because the information is released after the problem is patched. I label all disclosure of unpatched vulnerability without cooperation with the vendor as "Full Disclosure".
> I label all disclosure of unpatched vulnerability without cooperation with the vendor as "Full Disclosure".
If you aren't disclosing everything you have, it's not full disclosure, it's some other disclosure model.
I think we're just arguing semantics though, as I fully agree with your earlier paragraph, that hardware vulnerabilities are trickier than software vulnerabilities regarding disclosure, and that Project Zero is doing roughly the right thing.
Not an EE so I’m just dumping my thoughts.
> By using this software, you agree not to: [...] display the object code of the Software on any computer screen.
From a security perspective, it doesn't inspire confidence: there's no way to do independent verification, something like a reproducible build with a compiler whose source code is open to audit. There's no equivalent of GCC or LLVM for FPGAs.
Fortunately, there are some people working on it, just saw it on the homepage, although there's a long way to go... https://news.ycombinator.com/item?id=21522522
I'm not an EE, just my 0.02 USD.
I.e. ... I can print it? Good.
The FPGAs used for data-center usage don't even use LUTs primarily anymore, because fully configurable LUTs are too slow and too expensive to manufacture at scale. Instead, data-center-sized FPGAs primarily use "DSP Slices" (yeah, LUTs exist but it's mostly DSP Slices). They're very expensive (https://www.digikey.com/product-detail/en/xilinx-inc/A-U200-...) and require a very specific set of skills to work with.
Of course they are. Every square-micrometer of the die is going to be either a LUT, DSP, or RAM on that chip. Xilinx makes a decision for how many of each is most useful to its customers.
> There's nothing about LUTs that are harder to make at scale than DSP slices.
Scaling "typical" designs on LUTs is worse than scalaing "typical" designs on a DSP Slice.
Synthesize a 32-bit wallace-tree multiplier for instance, and you'll use thousands of LUTs (maybe 2000ish). However, reserve a DSP-slice for a 32-bit multiply routine, and you'll only use ONE slice.
However, those multipliers on the DSP-slice could be "wasted" if you didn't need that many multipliers. Maybe your arithmetic is primarily addition and subtraction. In any case, when most people talk about "Reconfigurable FPGAs", they're talking about the LUTs which can be the building block of any logic. They aren't talking about DSP-slices, which are effectively prebuilt ALUs connected in a mesh.
Think of today's supercomputer problems: Deep Learning, 3d Rendering, Finite Element Analysis, Weather Modeling, nuclear research, protein folding, even high-frequency trading. What do they all have in common?
They're all giant matrix-multiplication problems at their core... fundamentally built up from the multiply-and-accumulate primitive.
EDIT: The only real exception is maybe EDA. I don't know too much about those algorithms involved, but IIRC it involves binary decision diagrams (https://en.wikipedia.org/wiki/Binary_decision_diagram). So some problems aren't matrix-multiplication based... but I dare say the majority of today's supercomputer problems involve matrix-multiplication.
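To make "built up from the multiply-and-accumulate primitive" concrete, here's the textbook inner loop in plain C (nothing FPGA-specific); the innermost statement is exactly one MAC, which is the operation a DSP slice implements in hardware:

    /* Textbook matrix multiply, C = A * B for N x N matrices.
       The innermost statement is one multiply-and-accumulate (MAC),
       which is the primitive a DSP slice implements in hardware. */
    #define N 64

    void matmul(const float A[N][N], const float B[N][N], float C[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                float acc = 0.0f;
                for (int k = 0; k < N; k++)
                    acc += A[i][k] * B[k][j];   /* one MAC per iteration */
                C[i][j] = acc;
            }
    }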
Also, deep learning isn't a great fit for FPGAs anyway. There's dedicated silicon that does a better job if you're looking for matrix multiplies per watt.
Additionally, high frequency trading isn't doing much in the way of matrix multiplies, as they don't have enough time. They've got a few hundred clock cycles from packet in to reply out.
Xilinx absolutely markets these Alveo U200 FPGAs as deep-learning accelerators.
> Additionally, high frequency trading isn't doing much in the way of matrix multiplies, as they don't have enough time. They've got a few hundred clock cycles from packet in to reply out.
I'm not an HFT user, but I've always assumed that simulating Monte Carlo Black-Scholes was roughly what HFT traders were doing. Maybe not Black-Scholes itself, but some other differential equation that requires a lot of Monte Carlo runs.
Either way, Black-Scholes (and other models) are partial differential equations, which are best simulated as a sequence of matrix multiplications. That's my understanding anyway.
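Just to be clear about what I mean by "a lot of Monte Carlo runs", here's a minimal sketch of pricing a European call under the Black-Scholes (geometric Brownian motion) model; all the parameter values are invented for illustration, and a real desk would use something far more sophisticated:

    /* Hypothetical sketch: Monte Carlo pricing of a European call under
       geometric Brownian motion (the Black-Scholes model). Parameters are
       invented for illustration only. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* crude standard-normal sample via Box-Muller */
    static double randn(void) {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.141592653589793 * u2);
    }

    int main(void) {
        const double S0 = 100.0, K = 105.0, r = 0.01, sigma = 0.2, T = 1.0;
        const int paths = 1000000;              /* number of Monte Carlo runs */
        double payoff_sum = 0.0;

        for (int i = 0; i < paths; i++) {
            /* terminal price of one simulated path */
            double ST = S0 * exp((r - 0.5 * sigma * sigma) * T
                                 + sigma * sqrt(T) * randn());
            payoff_sum += (ST > K) ? (ST - K) : 0.0;
        }
        /* discounted average payoff = price estimate */
        printf("call price ~= %f\n", exp(-r * T) * payoff_sum / paths);
        return 0;
    }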
And there's cute tricks to avoid matrix multiplies in the critical path for HFT.
There really isn't much need for LUTs any more, barring some fundamental change in data processing.
I suppose LUTs could be the best way to create the glue logic that basically amounts to signal routing and synchronization, but it seems unlikely.
Dedicated-hardware that serves a purpose. Sure, you can build a CLOS-network out of LUTs, but it'd be more efficient if you made dedicated hardware for it instead.
Not all security researchers are doing it for the money, and 0day markets have always existed. Malicious people who wish to use a vulnerability for profit wouldn't disclose it to begin with. So it's not a new problem.
> this case, it would probably be Intel, to "catch and kill" widespread knowledge of the exploit. Another high bidder would be an investor in position to exploit a short-term massive short
But your comment is insightful - Intel's case is unprecedented. Due to the huge impact of the vulnerabilities, the threat is not coming from the blackhat sellers or black market buyers, but Intel. Intel itself can simply pay researchers with an NDA to keep their mouths shut, while still doing nothing. The impact of the vulnerabilities is also high enough to move the stock market, allowing insider trading.
Here's an example of Gotham City Research trying this approach and failing: https://www.thedrum.com/news/2017/09/21/criteo-counters-frau...
(Not a lawyer)
The goal with this strategy is to find serious issues that will rightly cause investors to immediately decrease their estimate of the value of the company.
Thanks for the information, never thought about that. On the other hand, would purchasing information about a flaw and using it to bet against the stock constitute insider trading?
If they signed an NDA to cash in on the bug bounty and then sold it to you, I think they are exposed to a breach-of-contract claim, but I don't think you are legally exposed. However, this is not legal advice, I am not a lawyer, and do you want to go through the expense of finding out?
Makes perfect sense to me now. I realize I had some serious misunderstandings about insider trading. Thanks for clearing it up for me.
Example: Your neighbor Alice is a hotshot executive at BigCorp. If Alice comes over to your BBQ and starts telling you about how bad their upcoming quarter is going to be, that's probably tainted information. However, if your other neighbor Bob comes to your BBQ, and starts talking about how grumpy Alice has been, and she's always grumpy like this when BigCorp is about to have a bad quarter... That's fair game. Same information, different sources.
That said, in the US I think there must be some indication or reasonable expectation that you knew or were willingly blind to the non-public status of the information.
Additionally, this has big implications for cloud providers. If additional liability for data leaks is foisted on companies, insurance companies and corporate counsel may just say no more using Amazon, Google Cloud, Azure, etc.
More likely some agencies don't want their exploits to stop working.
I can understand a state agency keeping a security flaw a secret to exploit in the near term... but stockpiling for years only to let stuff leak eventually is just irresponsible.
Note: I'm not saying that I like state sponsored hacking, only that I understand it being a reality and pragmatically wish they struck a better balance.
Blocking everything except I/O is just one of its modes.
And the list is even configurable. Docker uses this ability to filter out syscalls that shouldn't be used inside a container.
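For the curious, the "block everything except I/O" mode is seccomp's strict mode; here's a minimal sketch in C (Docker's actual filtering uses a configurable seccomp-BPF allowlist profile, not strict mode):

    /* Minimal sketch of seccomp strict mode: after the prctl() call the
       process may only use read(), write(), exit() and sigreturn();
       any other syscall kills it with SIGKILL. */
    #include <linux/seccomp.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        printf("entering strict seccomp mode\n");
        fflush(stdout);                          /* flush before locking down */

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }

        write(1, "still alive\n", 12);           /* write() is still allowed */

        /* glibc's _exit() uses exit_group(2), which strict mode forbids,
           so make the raw exit(2) syscall instead. */
        syscall(SYS_exit, 0);
        return 0;
    }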
The difference is that the contents of e.g. the store buffer are short-lived, while branch prediction is by design learning behavior of the program for long term use.
It's not a coincidence that Spectre v1 affected all processors and not just Intel x86. Also Spectre v1 still has no widely-deployed hardware mitigation, while for all other vulnerabilities the processor could be patched to add a "flush the leaked state" instruction with a microcode update.
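For reference, the reason v1 is so hard to fix in hardware is that the vulnerable pattern is completely ordinary code. This is a minimal sketch of the gadget shape from the Spectre paper (array names follow the paper's example):

    /* Spectre v1 (bounds check bypass) gadget shape. If x is
       attacker-controlled and the branch is mispredicted, the CPU may
       speculatively read array1[x] out of bounds, and the dependent load
       from array2 leaves a cache footprint that encodes the stolen byte. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];

    void victim_function(size_t x, size_t array1_size) {
        if (x < array1_size) {                            /* bounds check ...        */
            uint8_t value = array1[x];                    /* ... speculatively bypassed */
            volatile uint8_t probe = array2[value * 4096];/* cache side channel      */
            (void)probe;
        }
    }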
Only for people who for whatever reason still continue to buy Intel-based systems.
And they still didn't fix it.
What's going on with Intel?
Like they're going all in with lying in benchmarks against AMD and straight up forgetting what has been reported as security issues.
>Company is dying and has no way to turn itself around.
>Culture is a "go along to get along" / don't rock the boat. Most workers are very passive. Inclusive culture focused on internal "networking" rather than winning. A lot of make work going on, probably 25% extra headcount
>Middle and upper management are in direct revolt against CEO and his plans.
During my orientation (2nd quarter 2013) my orientation meeting was about how the CEO is wrong on his plans.
>Advice to Management
>Not much you can do. These problems are the result of near monopoly on PC CPUs for 20 years. This place is what Hewlett-Packard was probably like in 2000 (with the printer monopoly), the collapse is coming, but without starting over there is no way to fix it.
Interestingly, that printer monopoly seems to be doing fine today, given how I have to go to hp.com to get drivers for my Samsung printer.
e.g. I recently found a datasheet for an IC in some amateur radio gear; those were really the glorious days. Those chips with an HP logo (or a Motorola logo) are really cool.
> The INA series of MMICs is fabricated using HP’s 25 GHz f_max, ISOSAT™-I silicon bipolar process which uses nitride self-alignment, submicrometer lithography, trench isolation, ion implantation, gold metallization and polyimide intermetal dielectric and scratch protection to achieve excellent performance, uniformity and reliability.
Around 2000, the EE & semiconductor department became an independent company, Agilent, and later Keysight.
I think the last major innovation at HP was probably the memristor, which was supposed to be the next evolution of digital logic and storage; HP claimed that it was developing a memristor computer system. Unfortunately, it failed to materialize.
Not trying to besmirch or minimize early HP by comparing to Google -- probably better to use Musk/SpaceX or something -- but IMO this is the key takeaway that doesn't seem widely discussed. Innovative companies have innovative leaders with expert-level knowledge in their technical specialties. At some point, the MBAs take over, attempt to put it on autopilot, and it's all downhill from there.
You don't, not if you're a small investor. You go index and hedge your bets that they're not all going to rot at the same time.
>Although the initiative, and as such much of the credit for the birth of the information revolution, must go to Tom Jr., considerable courage was also displayed by his then aging father who, despite his long commitment to internal funding, backed his son to the hilt; reportedly with the words "It is harder to keep a business great than it is to build it."
However, at some point any company is probably bound to promote wrong people, who will end up promoting/hiring the wrong people under them, eventually completely changing the course of the business.
It seems like the death of a company is inevitable.
Which, from a higher perspective, is good, as it breaks down monopolies and gives way to more innovative new companies.
By putting the burden of scheduling parallelism on the compiler, does the EPIC design avoid speculative execution vulnerabilities, or did Intel implement Itanium with the same flaws?
They’ve split up into companies that are widely successful in each of their individual fields.
The rebranding I’ll never understand is Fujitsu -> Socionext.
I have to look it up... This is stupid. If I see a Fujitsu chip on a board, I know the brand and may find it interesting. But if I see a Socionext chip, I'll think it's some random U.S. startup from California.
I think there's a reason - Wikipedia says it's a joint venture of Fujitsu and Panasonic, so it makes sense for them to give it a new name, and it's common for these Japanese companies to choose an English name for global business (in the same way that NEC became Renesas, for the same two reasons).
But what a terrible name!
But the interesting parts have been spun out, like Agilent and Keysight.
HP Enterprise was also spun out.
I'm sure no one intended to mislead, but organisationally Intel just isn't designed to fix bugs. It doesn't have a process to respond to issues.
Ultra-simplified bookkeeping interpreted through the lens of too much coke nearly destroyed them. The dotcom bubble was a very strange time.
Instead of focusing on a PR war and scrambling to salvage one fire after another, Intel needs to produce a strategy that will keep them in the game by actually providing value -- not by twisting the arms of big corps for lucrative long-term contracts.
The key goal for the past 4 years, if not longer, was to ship 10nm with respectable yield. Which, as of the close of 2019, still hasn't happened.
So I am not even sure what they were focused on.
Meanwhile it became clear that a decent part of Intel's per-core advantage came from massive shortcuts they took, notably security-wise, and fixing those security holes takes away the performance.
In that situation, I doubt management at Intel is eager to take out yet another one of their shortcuts. Intel needs a win, but they are not at all in the state of mind necessary to achieve one...
The assumption being that they could in the time given, but are sitting on their hands?
Or incompetence, as in "they weren't capable of doing it"?
The latter is very probable. The former could be underestimating the difficulty of such fixes...
Incompetent can mean "as to the particular project context" (i.e. they could be great engineers who didn't manage to deliver the fixes for this issue), or "as to their general professional capacity" (i.e. they are lesser engineers).
I'm assuming this is referencing the "intentionally misleading benchmarks" piece on ServeTheHome. It's worth reading the follow-up, in which the author discovers that their biggest complaint (the older version doesn't have AVX2 enabled by default on Zen 2) was not an issue, because Intel had manually enabled AVX2.
AMD has its fair share of issues in its own self-published benchmarks, but at least when third parties publish, they're not forced to cripple other companies' chips.
There was a good bit of time when Opterons were everywhere in servers, including things like the whole Sun x86 line. At one gig we had datacenters full of IBM blade servers stuffed with dual-Opteron boards; they were a big success in the market. I know at one point another gig's non-US folks had a significant installed base of AMD-based Fujitsu servers.
Didn't see as much on the desktop/laptop side, though I did carry an AMD-based ThinkPad for a short while. Apparently HP sold a successful, low-end, AMD-based business desktop.
YMMV...I think it's a matter of 'I haven't seen any tigers in my back yard, therefore I doubt they exist'.
But it does seem that adoption is growing again, with recent performance figures and the attractive price point.
The only bad period was when AMD Opteron gained big market share in servers while Intel was doing Pentium 4 Netburst crap.
What does this usage of the word 'missed' mean in this context? That they lost it / failed to deliver the PoC to the relevant team? Or that they released a "fix" knowing that it didn't defeat the PoC?
Generally speaking, that really illustrates the dumb way Intel is going about it, fixing on a PoC-by-PoC basis rather than going after the underlying problem. It basically screams "there will always be issues, the question is whether you can find them!"
AMD chips don't have the "feature" where a faulting speculative access is only resolved at instruction commit time, when it is already too late, so most of these issues just can't happen.
I found the list I made a few months back. No guarantees, but I think it is mostly accurate.
Meltdown: Intel, IBM, some ARM
Spectre v1: Intel, ARM, IBM, AMD
Spectre v2: Intel, ARM, IBM, AMD
Spectre v3a: Intel, ARM
Spectre v4: Intel, ARM, IBM, AMD
L1TF: Intel, IBM
Spectre-PHT: Intel, ARM, AMD
Meltdown-BND: Intel, AMD
There are no sweeping fixes for either v1 or v2, and there probably won't be for a long time at best.
But the positive news is that v1 & v2 only matter at all if you do in-process sandboxing of untrusted code. Which most things don't do, so most things are not at any risk from it.
I don't think this is accurate. It seems to be a widespread misunderstanding that started because the original proof of concept was within a single process. Spectre, before mitigations, allowed userspace to read kernel memory if appropriate gadgets in the kernel could be identified and exploited.
My understanding is the impact is only intra-process after mitigations.
"On August 15, 2018, security vulnerabilities codenamed Foreshadow/L1TF (CVE-2018-3620, CVE-2018-3646 and CVE-2018-3615) were announced. Two of the vulnerabilities (CVE-2018-3620 and CVE-2018-3646) could potentially impact Power Systems. The Firmware and OS patches released by IBM in February and March 2018 to address the original Meltdown vulnerability (CVE-2017-5754) also address the L1TF/Foreshadow vulnerability, except for Power 9 Systems running with KVM Hypervisor. OS patch for Power 9 KVM Systems will be made available soon. The Firmware and OS patches for all other Power Systems are available in this blog below.
The third L1TF/Foreshadow vulnerability (CVE-2018-3615) relates to SGX implementation and does not impact the Power Systems."
L1TF is this: https://software.intel.com/security-software-guidance/insigh...
I'm surprised IBM did this. I thought better of them.
Processes are really "meant" to be the units of security (user, files, network, memory limit, etc.); it's reasonable to need different processes for security partitions.
1 - https://en.wikipedia.org/wiki/Singularity_(operating_system)
WebAssembly doubles down on it today.
Technically, though, software-isolated lightweight processes within the same address space are still a very real possibility; it's just that isolation is now up to the compilers, which have to emit Spectre-proof code, so no native blobs. Which, let's get real, has to happen sooner or later for all userspace code, because hardware can't be trusted.
"Spectre-proof" code are specific workarounds for hardware bugs, not protection against all hardware security issues.
> AMD chips don't have the "feature" where a faulting speculative access is only resolved at instruction commit time, when it is already too late, so most of these issues just can't happen.
Putting the memory permission check in the retirement stage of the pipeline couldn't have been accidental, I say...
Sounds like a great "efficiency hack". Till it fucks up.
Other manufacturers, AMD included, weren't affected by those variants.
They're the >~10% of CPU performance Intel chips have lost in the last few years to all the mitigations.
Given that AMD's Zen 2 has comparable IPC to Intel at this point without doing the ACL check late, it's not evident that the difference in when the check is done was a key efficiency gain.
Each process has different delay, setup, and hold times. On one process I might be able to fit 15 layers of logic before I have to add a pipeline stage. On another I might be able to fit 20 at the same frequency. Pipeline stages are also not free with each adding additional overhead.
This determines how advanced I can make things like my branch predictor or cache pre-fetcher while still meeting timing requirements. For example I might want a larger lookback buffer but I can’t use a larger memory - not because of how much area it takes but because it simply takes too long to do the look up now.
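A purely illustrative budget (every number below is invented, just to show the shape of the trade-off): with register clock-to-out $t_{cq}$ and setup time $t_{setup}$, the number of logic levels that fit in one pipeline stage is roughly

$$ N_{levels} \approx \frac{t_{clk} - t_{cq} - t_{setup}}{t_{gate} + t_{route}} $$

so at a hypothetical 500 MHz ($t_{clk} = 2\,\mathrm{ns}$) with $t_{cq} + t_{setup} = 0.2\,\mathrm{ns}$, a process where $t_{gate} + t_{route} \approx 0.12\,\mathrm{ns}$ gives about 15 levels per stage, while a faster one at $\approx 0.09\,\mathrm{ns}$ gives about 20.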
Spectre variant 1 is probably unavoidable so security inside a single virtual address space is kinda dead. Use separate processes and mmu.
I left after about a year and a half and moved to a startup. My issue with Intel was that, as a monopoly, they had grown fat and complacent.
I am not exaggerating when I say that in 1998-99, engineers were working maybe 4-5 productive hours in a week. Political savvy and alliance-building were the most important things for promotions and influence.
Those who actually produced good work had credit for their work diluted through many layers of management. You could do something amazing and your manager's manager might present it in a powerpoint to the company without mentioning your name at all, and acting like the idea was his all along.
I'm surprised the company has lasted this long. It's a place where mediocre people gather.
If the history of Microsoft and Windows security is any indication, it'll take Intel many many years to turn that ship around.
There's a question of whether AMD has been mostly unaffected only because their chips haven't received as much scrutiny, but for the time being it does seem that if you care about security, you'd better go with Epyc.
Plus, with better IPC on Ryzen and much greater performance per dollar, why Intel?!
AMD's Bulldozer debacle may be a better example; in some ways it's a closer analogy than Windows.
By that, I mean two things.
1) Silicon typically has a pretty long up-front design phase. Meaning there's probably at least 2 upcoming generations in development at any given time.
2) Intel's marketing is somewhat coy about 'architecture' changes, but the sources I found (admittedly just an SO post and Wikipedia) indicate that the number of pipeline stages in the Core series has not changed much over the last few years. IOW it's probably not a full architecture revamp in their 3-step cycle.
P6 as an arch lasted around 15 years, when you think about it (including the original 'Core', i.e. Core 2 Duo/Solo, here, as while heavily optimized and revamped, it was still P6). K7/K8 lasted about as long (15-ish). Netburst was a bit of an outlier, only lasting around 7 years. Same for Bulldozer.
Big assumption here, but based on the pattern I'd assume that Intel originally wasn't even considering a full revamp (or, depending on how they do their iterations, it wouldn't be fully revamped) to be ready until around 2023.
Because of the time involved (back to the first point,) I doubt they would be able to have the problem mitigated in silicon until 2021 at the earliest. (As a bonus, they'd probably want/have to qualify it extra well, lest they accidentally introduce a whole new class of vulnerabilities.)
If a car was advertised as emitting 20% less pollution than it actually did, people would be pissed!
Definitely jumping to AMD next time around though. My next upgrade was originally going to be dual Xeons but those Ryzen Pro 3000s are looking nice.
And an excuse to postpone.
- name: Disable CPU-sapping security mitigations
  # module/path/regexp inferred; the original snippet only showed name/line
  lineinfile:
    path: /etc/default/grub
    regexp: '^GRUB_CMDLINE_LINUX_DEFAULT='
    line: GRUB_CMDLINE_LINUX_DEFAULT="noresume noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off mitigations=off"
- name: Update grub
  command: /usr/sbin/grub-mkconfig -o /boot/grub/grub.cfg
It's not a crazy threat model to have on a personal PC, the risk is so very minimal. If your threat model is that strict you shouldn't be running JS anyway.
Session keys, private keys, passwords and all other kinds of access tokens that your system is using can leak; it's the next worst thing after remote code execution.
Your browser runs so much untrusted code that it's really unreasonable, and yes, we should definitely be pushing back hard on that. But disabling these mitigations is probably the most stupid thing you can do, because they're not just theoretical: they're real, they're here, and everyone knows about them.
This is like anti-vaxx philosophy. "The risk is low"; well, maybe the risk is low because of herd immunity. It's not feasible to run these attacks at scale because they'll be obvious to those who have the mitigations in place (100% CPU), but if there's a 0.00000001% return then it becomes profitable to exploit, just like mail spam.
If you have a database which is only accessible internally, you can disable the mitigations, because those things are hit hardest by the mitigations and do not run untrusted code.
But really, your desktop is running untrusted code _a lot_. Please do not do this, not only for your own sake but for everyones sake. Don't make it profitable for malicious agents to run these attacks.
> It's missing the entire actual implementation of the Spectre attack, which requires analysis of read times to see if you're hitting the processor cache or not.
"analysis of read times" is what the browsers "fixed" to mitigate the attacks (and site isolation later). Again, there has been no working attacks on updated browsers.
Please feel free to link an example of one though, I will gladly admit I'm wrong. You just seem to frankly have no idea how the exploits actually work though (did you actually read the code in the reddit post?), so I suspect that this conversation will be a waste of time.
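For context, the "analysis of read times" in question is a cache-timing probe along these lines; this is a minimal flush+reload sketch in C (nothing browser-specific, x86-only), just to show what the degraded timers are meant to break:

    /* Minimal flush+reload timing probe: distinguish a cached load (fast)
       from an uncached load (slow). This is the measurement primitive that
       reduced-precision browser timers are meant to degrade; it is a
       sketch, not an exploit. Requires x86 and GCC/Clang (x86intrin.h). */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    static uint8_t probe_line[4096];

    static uint64_t time_read(volatile uint8_t *p) {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);   /* timestamp before the load */
        (void)*p;                          /* the load being timed */
        uint64_t end = __rdtscp(&aux);     /* timestamp after the load */
        return end - start;
    }

    int main(void) {
        probe_line[0] = 1;                 /* touch the line: now cached */
        printf("cached read:   %llu cycles\n",
               (unsigned long long)time_read(probe_line));

        _mm_clflush(probe_line);           /* evict the line from the cache */
        printf("uncached read: %llu cycles\n",
               (unsigned long long)time_read(probe_line));
        return 0;
    }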
That's just not true. The timer precision doesn't have anything to do with the kernel mitigations.
The browser vendors themselves said this; and it's not a permanent solution as tech such as Stadia and WebVR rely on high precision timers.
But, whatever man, I'm telling you that it's stupid and you want to bury your head in the sand.
You just make these attacks more likely; I'm not going to be impacted except for a few trillion CPU cycles of idiots trying to exploit me.
You're the one who puts their entire digital life on the line by eking out 5% performance.
Oh, so it just takes more time, so you have knowledge of an exploit? Fine, show me any PoC or similar bypassing the lower accuracy and site isolation.
You are such a big part of the problem with how this whole class of exploits has been handled. No technical knowledge, just spewing stuff like "You're the one who puts their entire digital life on the line", when there is no indication that anything like that can transpire.
Please stop spreading misinformation.
This is not misinformation, _you_ are spreading "certainty" of safety surrounding a dangerous idea.
Even if I was wrong, and very wrong, why the hell would you choose to be less safe? this whole thread chain is absolutely baffling. Buy an AMD CPU or leave the mitigations on. Everything else is needlessly opening yourself up.
Yeah, which was why SharedArrayBuffer was disabled when Spectre v1 was released. It is still disabled in Chrome if site isolation is disabled, and it's still disabled in Firefox.
You should really know all this if you are so very well versed in the subject.
>Even if I was wrong, and very wrong, why the hell would you choose to be less safe? this whole thread chain is absolutely baffling. Buy an AMD CPU or leave the mitigations on. Everything else is needlessly opening yourself up.
That’s a very wide attack scope.
You might be thinking of when you have native code execution.
They leak in-flight data, using a detached SpiderMonkey engine, patched to make performance.now() return rdtscp, at a rate of 1B/s, while the victim application is spamming a load-string instruction as fast as possible.
This does not allow:
>any ad network can access any and all memory on your desktop
This allows any ad network to access random bits on the cache line. If the timing mitigation didn't already fix this, it seems impossible to me to get anything useful from it; the precision and bitrate are just too low (which is why the exploit just spams load instructions in a while(1) loop).
>and the RIDL exploits don't require special instructions.
Weird, in the new addendum it says it uses TSX, and in the PoC it uses XBEGIN. Must be a mistake.
The downsides are of course, infinitely lower (stolen password vs disability or death), but yeah the similarity is there.
We've yet to see a real world attack using any of these, however.
And like the other commenter said, there are also JS implementations.
sure dude, sure
Your turn. Link the speculative execution 0days.
I think it's just silly that you would rely on browser mitigations like disabling high precision timers when that's obviously just a hack.
That's not a speculative execution browser 0day, as it doesn't work in updated browsers and it never has (or I guess it "works" if you want to leak data you already have access to, in which case you could just console.log it). Could you post an actual 0day instead?
$ cat /etc/default/grub.d/mitigations.cfg
Well, let's hope you're not pwned because of this and get dragged into giving some fucks, for a 5-10% performance hit you wouldn't notice anyway...
Do you understand my risk and load requirements better than I do ?
Have you entertained the possibility that the decision to de-mitigate was the result of considered risk and resource management modeling ?
But thanks for your concern and all.
You'd be surprised what people would do. Could be just out of some spite ("fuck Intel"). Could not even include any measurable workloads ("I want my machine to go ultra fast").
>Do you understand my risk and load requirements better than I do ?
For you particularly probably not, but you'd be surprised how many times a third party can "understand the risk and load requirements" of someone else (even a business) better than they do.
E.g. the "No, you don't need a Hadoop cluster for doing "ML" on a 1GB file", phenomenon.
>Have you entertained the possibility that the decision to de-mitigate was the result of considered risk and resource management modeling ?
Only briefly, because the tone of the comment made it sound more like a knee-jerk reaction ("No I don't give a fuck about the 'risk' this introduces", "until everything is swapped to AMD") than the fruit of rational analysis...
What do you mean by `resource management modeling` in this context? Do you mean `capacity planning` or `system scaling planning` or something else altogether?
The sales numbers don't lie: AMD doesn't even account for 10% of server CPU shipments, and that may not even happen in 2020 given Intel's new price cuts.
Is the answer (for home users) to just sandbox some processes under an emulator layer? I'd be happy to just sandbox some sensitive processes like my browser even if it took a huge performance hit, so long as some other apps like games did not take the same hit.
Accessing the emulator's memory means accessing the emulated program's memory, it's just slightly obfuscated.
As far as I understand, you still need to have some kind of timing information or CPU state available in the sandboxed program, which is possible if the emulator/sandbox runs close enough to the metal (such as a JS program in a modern browser, because they need to be fast). Remove ALL timing info and it should be possible to make it impossible to exploit speculative execution. It might run 1000x or 10000x slower than a modern JS engine, however.
Even if you think you have removed all the timing information sources you are aware of, many remain: those you aren't aware of at all, those you failed to recognize as exploitable, those you didn't actually remove by mistake, those that are degraded but still present... The attacker should be assumed to be clever and knowledgeable; as the saying goes, creating a system that you yourself don't know how to crack is easy.
for context, imagine that the attacker has access to all memory on the system. It's not -exactly- like that for a bunch of reasons but realistically it's very similar.
The less good news: as far as I can tell, Intel did not commit to how architectural this will be going forward. Considering the role TSX has played in speculation-based attacks, it appears to me to be a generic mitigation that would be great to accompany TSX wherever it is available in the future. Now that MSR_IA32_TSX_CTRL is defined, it should be easier to implement going forward.
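For anyone who wants to see what their machine reports, here's a hedged sketch of reading IA32_TSX_CTRL through the Linux msr driver; the MSR address (0x122) and bit layout are what I recall from the TAA guidance, so double-check them against the SDM before relying on this:

    /* Hedged sketch: read IA32_TSX_CTRL (MSR 0x122 per Intel's TAA guidance;
       verify the address/bits against the SDM) on CPU 0 via the Linux msr
       driver. Needs `modprobe msr` and root. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MSR_IA32_TSX_CTRL    0x122
    #define TSX_CTRL_RTM_DISABLE (1ULL << 0)   /* force all RTM transactions to abort */
    #define TSX_CTRL_CPUID_CLEAR (1ULL << 1)   /* hide RTM/HLE from CPUID */

    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t val;
        /* the msr driver reads the MSR whose address equals the file offset */
        if (pread(fd, &val, sizeof(val), MSR_IA32_TSX_CTRL) != sizeof(val)) {
            perror("pread");                   /* fails if the CPU has no such MSR */
            close(fd);
            return 1;
        }
        printf("IA32_TSX_CTRL = 0x%llx (RTM disabled: %d, CPUID clear: %d)\n",
               (unsigned long long)val,
               !!(val & TSX_CTRL_RTM_DISABLE),
               !!(val & TSX_CTRL_CPUID_CLEAR));
        close(fd);
        return 0;
    }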
Disclaimer: I work on Linux at Intel.
If that is going to be your personal way of mitigating the issue, you've got a choice of 4, 6, and 8 core parts at a significant discount compared to their HyperThreaded variants.
If you steal passwords, then you can use said password to hijack whatever the passwords are protecting.
If you steal private keys, you may be able to use said keys to impersonate the victim (like via ssh into their remote machines).
But if you're asking if speculative vulns could directly lead to remote code execution, then no (since you already have given the attacker a measure of control, as they are able to execute code already).
It doesn't immediately give code exec, but generally it wouldn't be very hard to turn arbitrary memory read capabilities into privilege escalation. As long as you know what the system is running.
These vulnerabilities "only" steal information; however that information could of course be leveraged into privilege escalation or anything else.
Being able to cause manipulate the control flow of code that already exists on the computer can be sufficient. See netspectre for an example that worked on real google cloud vms and local wired networks.
Yes in theory you could do that, but to actually exploit in practice I would have guessed couldn't be done.
Still not very useful for an attacker.
But still fascinating and impressive they could do it at all.
Point being, running arbitary (unprivileged, sandboxed) code is a prerequisite; an attacker can already max your CPU, mine crypto, etc.