The connection between the linked article in this comment and the linked page for this post is that there is a potentially huge bug that will be made public soon and it just affects Intel processors, not AMD - hence the large sale of stock by the Intel CEO.
Not sure it makes sense, or is even logical, to compare the market before and after Intel's floating point bug was uncovered over two decades ago. My bet is this current bug won't shake Intel's stock price much.
If the workaround being deployed now causes a 30% performance hit in real world usage, even just for some cases, it could hit Intel way harder than fdiv.
A lot of people on Intel will suddenly lose a noticeable amount of performance. For example, if your Intel based VMs lose 25% of their performance, you are now booting up and paying for roughly 33% more VMs (1/0.75 ≈ 1.33x) to handle the same load.
I've heard you can schedule big sales all the time and then regularly cancel them unless something goes wrong. Apparently there is no rule against an insider canceling.
That's not true. Changing a stock sale plan in any way is considered insider trading. The window has nothing to do with whether it's legal or not. It's only used as a risk mitigation and is up to company policy.
I think the point cma was making is that this trick doesn't involve making changes to (formal) stock-sale plans. The formal plan is to sell regularly, and that remains unchanged; you just cancel it by hand habitually, except when you don't.
I am no lawyer, so I don't know if this is really allowed. My gut instinct is (a) no, it is not allowed and (b) there will always be some more subtle version of the tactic that is allowed.
That's correct as I understand it, though there is a risk that some high-profile event will cause the media to whip up enough (understandable) public outrage to cause the SEC to actually, reluctantly fine someone for that.
Was the connection with speculative execution already being discussed openly? I know about https://cyber.wtf/2017/07/28/negative-result-reading-kernel-..., but not about anything between that and 28 Dec suggesting someone made it work and that's the reason for KPTI.
If it wasn't in the open, it seems...not ideal embargo-wise for AMD to leak it there. Though no one in that thread is complaining about the disclosure, so maybe they either think that part is already known to anyone looking closely, or just don't think it's a very big piece of the exploit puzzle (like, finding the way to get info out of a side channel was the hard part).
It wasn't publicly acknowledged but people figured it out already. Take a look at https://news.ycombinator.com/item?id=16046636 (both the article and the comments) for example. This wasn't going to stay secret much longer.
That post is a couple days after the 28 Dec AMD commit, though. Curious whether it was _already_ being discussed, since that would mean there's no way what AMD said is how people figured it out.
my123 does point out that the author of the speculative execution blog post is first in the KAISER paper's acknowledgments, and it looks like the paper was presented at a July conference, so that's an earlier clue out in public, for what it's worth.
https://twitter.com/dougallj has released source code (https://t.co/vaaMyajriH) which partially reproduces the problem. You need a little bit of tweaking to read kernel memory and to read the actual values. From his Twitter and from what I've observed, sometimes the speculative code will see 0 and sometimes it will see the correct value. He speculates that it might work if the value is already in the cache.
That's "a major overhaul of the KAISER patches" as the commit message says. It doesn't mention the connection to speculative execution, though; that was the bit I was interested in.
This is going to have a dramatic effect on the cloud computing market. It might make sense to make sure any VMs you run are on AMD processors; otherwise this can really hurt your performance and basically cost you more to do the same workload.
It also seems, from early benchmarks, this can slaughter performance with databases.
Why? They pass it on to customers. More interesting is whether Google or Facebook will react. They could each need up to an extra data centre to compensate for that (I assume both have very syscall-heavy applications). Maybe not by suing Intel, but by pouring more money into the development of competing chips.
The bug affects transitions into kernel mode. Virtual machines have one extra transition. A read() call in the guest calls the guest OS which calls the host OS.
Don't worry. I don't think that there will be two separate kernels for Intel and AMD. I think the performance drop will hit both CPUs, no matter whether they have the bug or not.
Probably because the method of checking is flawed and should be done by make/model (I'm no kernel expert, but this is the general opinion I've seen so far).
I've seen nothing except the 'err on the side of caution, do it for all' approach, and no indication of other problems on the other hand.
This feels like a big FU to Intel. I've heard this patch can slow down programs like du by 50%. Does that mean AMD is going to find itself running twice as fast as competitors?
I think the du case was an outlier. Normal workloads shouldn't be so heavily affected. I am expecting a few percent loss on most programs though. It's basically a larger penalty for making a syscall, which was already a fairly slow operation, so performance-minded people avoid them in tight loops. It will be bad for people who need to do lots of fast I/O, I suspect.
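As a concrete illustration of what "avoid syscalls in tight loops" means in practice, here's a minimal C sketch (POSIX assumed, names are mine): reading a file one byte per read() pays the syscall cost per byte, while a user-space buffer amortizes one syscall over 64 KiB, which is roughly what stdio's fread() does for you behind the scenes.

    /* Count newlines with one read() per 64 KiB instead of one per byte. */
    #include <fcntl.h>
    #include <unistd.h>

    long count_newlines(const char *path) {
        char buf[65536];
        long lines = 0;
        ssize_t n;
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        while ((n = read(fd, buf, sizeof buf)) > 0)    /* one syscall per 64 KiB */
            for (ssize_t i = 0; i < n; i++)
                if (buf[i] == '\n')
                    lines++;
        close(fd);
        return lines;
    }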
Some portable / embedded databases come to mind. Also "normal" databases doing replication initiation, and replication re-sync. And lastly backup, restore, tar, etc with small files. For files under a handful of pages long mmap() isn't a big gain.
Another syscall that I think might cause issues is gettimeofday(); that particular call has been optimised to the nth degree, and lots of user programs spam the crap out of it (mostly necessarily), especially networking and streaming programs. It would be interesting to see how much of an overhead, percentage-wise, page table isolation will cost, and its effects on low-end media devices, et al.
Linux gettimeofday these days is implemented in the 'vdso', which is code provided by the kernel that runs solely in userspace. So it's not a syscall in the 'privilege level switch by executing insn that takes an exception' sense and shouldn't be affected by the syscall-entry/exit path becoming more expensive.
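A quick way to see the difference, as a rough sketch (assuming Linux x86-64 with a working vDSO): the first call below is a plain function call into vDSO code, the second forces a real syscall, and running the program under strace should typically show only the second one as a gettimeofday syscall.

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void) {
        struct timeval tv;

        gettimeofday(&tv, NULL);                /* vDSO fast path: no kernel entry */
        printf("vdso:    %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);

        syscall(SYS_gettimeofday, &tv, NULL);   /* explicit syscall: pays the full entry/exit cost */
        printf("syscall: %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }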
The Linux kernel has a compatibility guarantee for the user-space-visible API, so static binaries will continue to run. If a static binary is so old that it does not know about the vDSO and uses a regular syscall to query the time, it will be slowed down.
Isn't every (edit: contended) mutex/etc. wait operation a syscall? That's gotta hurt for any program that waits for frequent events that don't take too long to process.
I was assuming contention but I guess I wasn't clear, sorry. I updated the post. But saying this "only" occurs when there is contention is very misleading since it makes it seem like the scenario of lock contention is a negligible concern. It's not.
Thread-suspending contended mutexes are already extremely slow. If you have a heavily-contended mutex you already have a major performance bug. If this is the kick in the pants you need to go fix it that's arguably a good thing ;)
Note that mutex contention does not itself mean immediately falling back to futex - commonly you'll spinloop first and hope that resolves your contention (fast), then fall back to futex (slow)
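For anyone unfamiliar with the pattern, here's a heavily simplified spin-then-futex sketch in C (Linux assumed; a real implementation such as glibc's also avoids the wake syscall when there are no waiters, which this version doesn't bother with):

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static atomic_int lock_word;   /* 0 = unlocked, 1 = locked */

    static long sys_futex(atomic_int *addr, int op, int val) {
        return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    static void lock_acquire(void) {
        for (int i = 0; i < 100; i++) {                  /* spin phase: stays in user space */
            int expected = 0;
            if (atomic_compare_exchange_strong(&lock_word, &expected, 1))
                return;
        }
        for (;;) {                                       /* slow path: sleep in the kernel */
            int expected = 0;
            if (atomic_compare_exchange_strong(&lock_word, &expected, 1))
                return;
            sys_futex(&lock_word, FUTEX_WAIT, 1);        /* syscall: wait while it looks locked */
        }
    }

    static void lock_release(void) {
        atomic_store(&lock_word, 0);
        sys_futex(&lock_word, FUTEX_WAKE, 1);            /* syscall: wake one waiter */
    }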
> If you have a heavily-contended mutex you already have a major performance bug.
I can't really devote time to countering the unfounded assertion that every contended mutex must be a bug. It certainly isn't consistent with my experience, but if every problem you've solved could have been parallelized infinitely without increasing lock contention, more power to you.
> I can't really devote time to countering the unfounded assertion that every contended mutex must be a bug.
Good, because that's not what I said. If you're heavily hitting futex contention you do have a performance bug, though. You might be confusing it with general contention that's being resolved with a spinlock rather than a futex wait.
>> I can't really devote time to countering the unfounded assertion that every contended mutex must be a bug.
> Good, because that's not what I said.
It is literally what you said:
>>> If you have a heavily-contended mutex you already have a major performance bug. If this is the kick in the pants you need to go fix it that's arguably a good thing ;)
> You might be confused with general contention that's being resolved with a spinlock rather than futex wait, though.
I'm not confusing them at all; I'm literally reading exactly what you wrote. You literally said contended mutexes are necessarily bugs (right here^) and that you considered mutexes to include the initial spinlocks ("note that mutex contention does not itself mean immediately falling back to futex - commonly you'll spinloop first"). But maybe you meant to say something else?
He said "heavily contended", and then you dropped the "heavily" prefix and claimed that was literally what he said. That adverb is material to the discussion and your dropping it completely changes the meaning.
I concur with his opinion. Infrequent contention is not a bug; otherwise no mutex is needed. Frequent contention (or heavy contention in his words) is a performance bug.
> He said "heavily contended", and then you dropped the "heavily" prefix and claimed that was literally what he said. That adverb is material to the discussion and your dropping it completely changes the meaning.
"Heavily" was not dropped intentionally at all. Add it back to my comments. It changes nothing whatsoever. The incredible opinion that every problem can be necessarily parallelized without eventually resulting in contention (and I license you to freely modify this term with 'light', 'heavy', 'medium-rare', 'salted', 'peppered', or 'grilled at 450F' to your taste) is so fantastically absurd that I cannot believe you are debating it. I definitely don't know how you can justify such an unfounded claim with no evidence and I certainly have no interest in wasting time debating it. As I said earlier: if you never encounter problems that exhibit eventual scalability limits, more power to you.
> Let's put it this way. If every contended mutex were a bug, why not remove the mutex and let the code run as-is? No, you wouldn't, so no, not a bug.
I mean, the parent's argument is wrong, but it isn't that naive. Presumably the argument is that a bad (yet still correct) solution would result in lock contention while a better solution would e.g. use a different algorithm that is more parallelizable.
Thanks, yeah, someone already mentioned this and I already edited in "contended" to clarify. I was actually already aware of futexes (thanks for the link though, I've never actually read the paper), but I was assuming contention -- the "every" referred to every type of operation, not every instance. See my reply to the sibling comment regarding lock contention.
Applications like this where the syscall overhead (and latency) starts to be a significant factor in processing time and latency have moved to userland drivers anyway:
The queuing and balancing stuff the kernel does makes sense for spinning-rust hard disks and residential networking, but when the underlying hardware is so fast that nothing is ever queued, really, what are you doing? At 100 Gbps line speed, a 1518 byte packet takes all of ~120 ns to transmit, or about 360 clock cycles for a 3 GHz processor.
User-space drivers have been doable for a while, and dpdk[1] is definitely worth a check. There are also some manufacturers[2] that only do user-space drivers for their high-performance cards (e.g. 4x10Gb/s, 2x40Gb/s, 2x100Gb/s cards). Being designed with this in mind helps performance a lot.
> Applications like this where the syscall overhead (and latency) starts to be a significant factor in processing time and latency have moved to userland drivers anyway:
I would personally think that is worse, though please correct me if I'm wrong. The userland driver will run with an isolated PT like any other userland process, won't it? If so, it will suffer the same slowdown that every other process now has every time it has to communicate with the kernel, which I would think would be a lot for a driver.
It's counter intuitive at first, but the key to understand how this works is that while you can use an MMU to assign chunks of physical memory to a process, you can of course also just use the MMU to assign the memory mapped IO registers of say a PCI express peripheral to a process.
That is in a nutshell what a "userland driver" is. It's not too far removed from poking the parallel port at 0x378 on your DOS computer :)
A user-space driver doesn't communicate with the kernel. It is assigned DMA buffers, and communicates with the NIC solely through reading and writing to shared memory buffers.
Even before this fix, the benefits were massive, as sending a buffer was just writing to some memory, rather than syscalls and copies galore.
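For a feel of what that looks like, here's a minimal sketch (Linux assumed; the device path is made up) that maps a PCI BAR through sysfs and touches device registers with plain loads and stores. Real frameworks like DPDK or VFIO layer interrupt handling, IOMMU setup and DMA buffer management on top of the same basic idea.

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical device; real user-space drivers usually go through VFIO/UIO. */
        int fd = open("/sys/bus/pci/devices/0000:03:00.0/resource0", O_RDWR | O_SYNC);
        if (fd < 0)
            return 1;

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED)
            return 1;

        uint32_t status = regs[0];   /* read a device register: just a load  */
        regs[1] = status | 1;        /* write a device register: just a store */

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }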
Unlikely. It's relatively easy to get to the point where syscalls aren't the bottleneck by a long margin, so the only apps where syscalls are the bottleneck will be those that haven't put in any optimisation effort.
I think syscalls are not as slow as many people imagine they are, especially with modern CPUs and kernels (there are special instructions for syscalls that are faster than the old "interrupt" approach). See here: http://pzemtsov.github.io/2017/07/23/the-slow-currenttimemil... ("Off-topic: A system call"). But they will be slow with this mitigation.
"The overhead was measured to be 0.28% according to KAISER's original authors,[2] but roughly 5% for most workloads by a Linux developer.[1]" [1] = https://lwn.net/Articles/738975/
Though the patches evolved since then. So I guess we'll see.
I believe the 0.28% is only for CPUs that support PCID. Earlier CPUs (which is still a lot of them) will take a much harder hit since you'll have to flush the entire TLB.
In fact, the tight syscall loop isn't even necessarily the worst case: the primary cost of this change isn't a direct cost in the syscall itself, but the CR3 switch, which invalidates the TLB and incurs an ongoing cost for some time following the syscall.
The worst case would be something like a frequent syscall followed by code that touches a number of distinct pages, which all now require a TLB reload and page walk (even here the cost is tricky to evaluate, since there are various levels at which the paging structures can be cached beyond the TLB, so the cost of a page walk varies a lot depending on the locality of the paging structures used).
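A rough way to probe that worst case, as a sketch only (assumes Linux x86-64 and 4 KiB pages): interleave a cheap syscall with touches of many distinct pages, and compare the timing with and without the patch (e.g. booting with nopti where that option is available).

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    enum { PAGES = 4096, PAGE = 4096 };

    int main(void) {
        unsigned char *buf = calloc(PAGES, PAGE);
        volatile unsigned long sink = 0;
        struct timespec a, b;

        if (!buf)
            return 1;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int iter = 0; iter < 1000; iter++) {
            syscall(SYS_getpid);                    /* cheap syscall: forces kernel entry/exit */
            for (int p = 0; p < PAGES; p++)
                sink += buf[(size_t)p * PAGE];      /* touch one byte on each distinct page */
        }
        clock_gettime(CLOCK_MONOTONIC, &b);

        printf("%.1f ms\n",
               (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6);
        free(buf);
        return 0;
    }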
In that case (PCID hardware on a PCID-enabled kernel), the performance effect should be more limited to the syscall itself. That said, why is the hit still so big with PCID? Surely just the CR3-swap by itself shouldn't be so slow?
MOV-to-CR3 is pretty slow, yes. In the ballpark of a hundred clock cycles, and you have to do 2 of them. The cost of a system call was about 1000 cycles, maybe less on newer processors---both OSes and processors optimize the hell out of SYSCALL/SYSRET.
Yes. AMD didn't take shortcuts, and implemented the spec correctly. Intel took shortcuts and introduced bugs, and now, to compensate for that, the OS has to work around it in software, so it's going to be slow. For years Intel has reaped the benefits of shortcuts for performance while AMD has been implementing things correctly; now there is a correction.
AMD doesn't exactly do an amazing job of avoiding gotchas in their CPUs. They have a bizarre idea of what writing zero to a segment register should do (resulting in info leaks that were only recently fixed on Linux), their demented leaky IRET is even more demented than Intel's, and their SYSRET's handling of SS is downright nutty.
OTOH, Intel's SYSRET is actively dangerous and has resulted in severe security holes, and Intel doesn't appear to acknowledge that their design is a mistake or that it should be fixed.
SYSRET on Intel will fault with #GP if the kernel tries to go to a noncanonical user RIP. The #GP comes from kernel mode but with the user RSP. Before SMAP, this was an easy root if it happened. With SMAP, it's still pretty bad. AMD CPUs instead allow SYSRET to succeed and send #PF afterwards, which is very safe.
AMD CPUs are differently dumb. If SYSRET is issued while SS=0, then the SS register ends up in a bogus state in which it appears to contain the correct value but 32-bit stack access fails. Search the Linux kernel for "SYSRET_SS_ATTRS" for the workaround.
The text as written only seeks to defend AMD's product. Whether the subtext goes further is open to non-objective speculation. Having said that, I'm sure AMD are feeling pretty happy with their statement. Schadenfreude may be too long a bow...
What are you talking about? Every other CPU vendor with an MMU (ARM, s390, SPARC, ...) either has a separate page translation table register for kernel and user space, or, like AMD, memory page capabilities that prevent this; it's only Intel that doesn't.
It's very clear who is at fault here.
I think you're still misunderstanding. The CPU picks TTBR0 or TTBR1 based on the top significant bit of the VA, irrespective of whether the access was initiated by user or kernel code. This is in contrast to s390, which has separate page tables for user mode and kernel mode. I personally much prefer s390's model.
And yes, I've read quite a few papers, and I wrote a good fraction of the patches.
My response was about whether AMD was partaking in schadenfreude, in reference to the OP's statement that this was "a big FU to Intel". I wasn't making a statement on who else was affected and/or who was at fault.
All Intel CPUs are affected, the mitigation increases syscall overhead by 50%, and no AMD CPUs are affected? I would say this could be an indicator to short INTC and long AMD...
I think it will because it shows the downside of a monoculture. Hence big purchasers of CPUs will want to diversify. Also good for ARM vendors I suppose. Disclosure : bought AMD this morning before headlines saying "Buy AMD, short INTC" appeared.
Has there ever been a precedent for this? When there were major bugs in Intel CPUs (or drives, or RAM, or motherboards) did the likes of Amazon and Google invest in diversification? And has it affected stock prices meaningfully? My guess is that they'll see this as just another one-off issue that can be fixed with software, then move on. For a large enterprise, monoculture that works is actually better than diversification.
When you think about your own workstation, it's not a big deal to build an Intel or AMD system. But when you buy 100k motherboards and spend the time adjusting your tooling to those, from packaging to power, to cooling, to support, to OS code, etc. and then you on a whim decide to get another 100k motherboards of a different architecture, you spend a non-trivial amount of time and money to support those as well. Again, if AMD provides better hardware, it's absolutely worth it. But I personally wouldn't do it based on this bug.
I checked the stock of Intel during the FDIV bug (1994/1995), where they had to go as far as recalling the affected processors at a cost of $500M in January 1995, and there was basically zero effect. By the end of 1995 the stock had actually pretty much doubled in value.
I personally think FDIV made Intel money, it told the world how important Intel was. It wasn't just the calculator sitting on some trader's desk. It ran the stock market and the stock market responded.
> For a large enterprise, monoculture that works is actually better than diversification.
But that's the problem. Nothing works 100% of the time, which is why monoculture is bad. When there is a bug that affects 20% of your systems, you can continue operating at 80% capacity, which at a reasonable level of reserve/redundancy means you're still entirely up. With a monoculture the bug affects everything and you're entirely down.
> But when you buy 100k motherboards and spend the time adjusting your tooling to those, from packaging to power, to cooling, to support, to OS code, etc. and then you on a whim decide to get another 100k motherboards of a different architecture, you spend a non-trivial amount of time and money to support those as well.
This is why hardware abstraction is a thing.
It's almost always less expensive to support diverse hardware from the beginning than to wait until after the market shifts.
Eventually the day comes to switch from 68K to PowerPC, or PowerPC to Intel, or Intel to ARM, or ARM to whatever else. Because eventually you save/gain a zillion dollars by switching and it "only" costs three quarters of a zillion to switch.
But it would have cost a tenth that much to have supported diverse hardware from the start, and then the transition is only a matter of using more of the now-superior hardware rather than being stuck on the now-inferior hardware for potentially years while everything is rearchitected from scratch.
This is the mistake in your argument. Motherboards, CPUs, RAM chips, GPUs are analog and physical objects. For AWS to switch a DC from one mobo to another just to find out that this one draws 5% more power and their standard backup generator can't handle it, which now starts a chain reaction of upgrades is going to incur real world costs. Costs that can't be amortized by writing some code to make the motherboards look the same.
This is basically the A/B testing/one-armed bandit problem. How much do you spend time exploring alternatives vs how much time to you reap the benefits of the fact that all your hardware is exactly the same and best of breed, as based on your testing?
> When there is a bug that affects 20% of your systems, you can continue operating at 80% capacity, which at a reasonable level of reserve/redundancy means you're still entirely up. With a monoculture the bug affects everything and you're entirely down.
These situations simply don't lose enough money to make up for the gains of a monoculture.
Think about it this way. You probably run or at least know someone who runs SaaS products. Do you/they use five different cloud providers in equal measure to make sure you have diversity if one has an issue? Do you/they use five different software stacks in case there is a remote exploit for RoR and PHP holds things up? Do you/they buy groceries at five different grocery stores in case one of them has an e. coli outbreak that the other four don't? The answer to all of that is no, because no matter how you try to abstract these things, there is a meaningful enough difference between PHP vs Django vs RoR vs Express vs .NET, and between AWS and GC and Azure, that it would cost you a lot more, not just in billing but in engineering effort, to support them all.
Another example: chances are you've at some point built a RAID array. Did you put different size and performance drives from different manufacturers into it or did you buy N of the same drive type to ensure even performance? If so, why?
Put another way, how much more are you willing to pay on your AWS bill to ensure they are running a mix of ARM, AMD, PowerPC, and Intel chips? Because my guess is that it won't be in the range of 1-2%.
> Motherboards, CPUs, RAM chips, GPUs are analog and physical objects. For AWS to switch a DC from one mobo to another just to find out that this one draws 5% more power and their standard backup generator can't handle it, which now starts a chain reaction of upgrades is going to incur real world costs. Costs that can't be amortized by writing some code to make the motherboards look the same.
Being physical isn't different. The data center is designed to allow systems that consume up to, for example, 500W. When one consumes 400W and another consumes 420W, they're still fungible. A system that consumes 525W can't be used, but you know that so you don't use those.
> This is basically the A/B testing/one-armed bandit problem. How much do you spend time exploring alternatives vs how much time to you reap the benefits of the fact that all your hardware is exactly the same and best of breed, as based on your testing?
That isn't the relevant problem. Even if you choose monoculture, you still have to pay the cost of weighing your alternatives to decide which single model to use.
The cost of diversity is that the second best model on some metric is 20% worse than the best. But that is also the advantage, because on some other metric it's 20% better. You can use each model for its strength. And since you can't perfectly predict the future, when something unexpected happens you're better able to handle it, because for any given thing that only some systems can do, you will have some systems that can do it.
> These situations simply don't lose enough money to make up for the gains of a monoculture.
Beware survivorship bias. It's easier to find an active monoculture company that has never had a major problem than one that has, because having a major problem in a monoculture often results in bankruptcy.
> Do you/they use five different cloud providers in equal measure to make sure you have diversity if one has an issue?
For services with high availability requirements, people absolutely do that.
> Do you/they use five different software stacks in case there is a remote exploit for RoR and PHP holds things up?
That wouldn't reduce attack surface. The relevant thing people do is to use two factor authentication.
> Do you/they buy groceries at five different grocery stores in case one of them has an e. coli outbreak that the other four don't?
Having multiple local grocery stores is a thing people want. And people do actually use them, because different stores have the best price or quality for different products.
> chances are you've at some point built a RAID array. Did you put different size and performance drives from different manufacturers into it or did you buy N of the same drive type to ensure even performance? If so, why?
These are spec differences, not supplier differences. There is no issue with using drives of the same size and speed from different manufacturers.
Also compare ZFS, which allows you to efficiently use unmatched drives for the same filesystem.
When you're building out your DC, it's a function of cost relative to performance/power use. I could see new setups looking at AMD over Intel, especially if they're running workloads impacted by the software fix, at least for the current generation of CPUs, maybe even the next 1-2 that are in the pipeline.
When you're scaling up/maintaining your DC, you're much more likely to be looking for a single SKU and like-for-like products that allow similar tooling, knowledge base, experience etc... Like you said, monoculture has its benefits in some situations.
Personally, even with this bug I'd be very hesitant to switch. Our previous tests between them for very specific workloads showed our best cost/performance was with Intel over successive generations, and the scaling/tip-over points were different. We have a combination of experience and knowledge around the existing arch and how our applications and workloads interact with it that involved a number of pain points that I'm not sure it's worth it to re-experience with another arch.
On the other hand, those running on non-bare-metal, cloud based, auto-scaling/automated solutions that have a wider tolerance for individual app performance are probably in a situation where they care less about this, but at the same time have little to no say in the arch they run on; that decision is left to the cloud providers they use.
POWER has been better performing than Intel since basically 1990, though Intel's tick/tock cadence and trading blows in fab tech have kept things interesting. That shouldn't be surprising, since POWER is ultra focused on the high end while Intel is fending off attacks from the low end and never had good long-term thinking on the high end.
The reason every server isn't POWER is: ecosystem. For any random company, switching archs for anything less than a multiple factor gain is a daunting multi-generation proposition. For a hyperscaler like Google the bar is a lot lower but you need a compliant vendor that will do a lot of the long haul platform work. IBM's been trying to establish that for many years and is just about to pull it off. Supply chain is also important, hyperscalers have come to expect buying and building systems a certain way and IBM will now just sell chips or even the IP for you to fab yourself. And of course the total cost calculus: capex, and opex in the form of TDP, support burden.
Google _will_ be using P9 for GPU servers internally. The inflection point for them was I/O and memory bandwidth. So, paradigm shift was what was needed to turn a juggernaut.. and that is what adding a bunch of accelerators to your platform is. Intel has no good solution there.
Right. I think HPC will be the first to take on POWER9 since it has some huge advantages with CAPI and PCIe v4. Outside of that, it will take some run time to convince the larger cloud providers it's useful.
I believe POWER9 has the ability to be either big or little endian as well, so that helps for compatibility issues, and it's just a matter of whether your application can compile.
Why would this cause you to diversify? Long-term negative effects of a monoculture are not evenly distributed to purchasers. In fact, if you ran both AMD and Intel CPUs, you'd see application performance differences solely based on processor architecture. This makes application deployment planning way harder. At any given time, there's one CPU that should be purchased, and artificially introducing two "so they don't fail the same way" is bad, specifically because they won't fail the same way.
It depends. There was a reason back in the day that if you were a telco you had phone switches from 2 providers, i.e. a DMSxxx and an ESSxxx. Another example would be how the big providers got screwed by the inability of Cisco to get their GSR working right without a few forklift upgrades (really, they were moved with a forklift). This opened the path for Juniper. For a long time the telcos moved to have one router from each, so a nasty bug in one would not take them down. In a properly tooled setup you should be able to account for the load characteristics between AMD and Intel. Having 2 is safer than one.
Google is pushing both POWER CPU development and ARM. They seem to be able to sort this out just fine. You can write tools to sort out the differences. You cannot write tools to fix a major HW issue.
Anyway my 2 cents based on experience and history for whatever the comments of a random person on the intertubes is worth.
Imagine you are AWS. You have a range of instance types with various performance characteristics. Having customers move from c5s to c4s is much better than customers moving from AWS to GCE.
AMD is doing a lot of things right recently and they have a bright future. And after seeing this, apparently they have been doing things right longer than I thought.
Large cloud providers don't make decisions emotionally. They'll take a "let's mitigate the ME stuff and buy best support + performance per dollar hardware possible" approach. They don't care much about the opinion of the outraged hackers.
Usually mitigated by special incentives like "15% extra discount for the next 2 years if you stay with us". Intel has enough cash and market presence to be able to do those deals.
At the same time, AMD also has a golden opportunity for some PR and marketing.
Is it? That seems like a big broad claim. Again, after things like RowHammer, did anyone actually do anything differently in a way that affected stock prices?
Like everyone else, I sure would want to know what kinds of conversations have been going on around RowHammer and customers most affected by it, and system/DRAM vendors.
I'm not a believer in stock price as good indicator of anything, sorry to skip that part.
ME is actually a *gasp* useful feature. The problem is with Intel's implementation of it: it's not open source, it can't be disabled, and it's buggy. Fix all three of those, and Intel's stock will go up.
Do you really think enough people care about the ME / control of hardware in general / hardware that spies on you or is out of your control to influence the stock price of a company the size of Intel?
No. That's exactly what I'm saying. Most people don't care. Enterprise users do care because ME is useful for them. It's a feature, not a nefarious backdoor that the NSA made Intel include under the cover of darkness. They'll see this as a small problem that should be fixed and will ask Intel to do so. Intel will fix it, most everyone will move on. I don't think ME will take down Intel stock, and neither will this page isolation bug.
Intel's value is 99% engineering + manufacturing ability + customer relations. It would be a poor CEO indeed who'd direct their IT to start buying AMD because of this alone.
That's the narrative, but I've consulted for a lot of enterprises, and I've never once seen ME in use. Servers have hardware like HPE iLO, and desktops will use OS-based agents. And failing that, they'll use PXE boot and get rebuilt. The only discussion I've ever seen an enterprise have about ME was the debate about how you deal with HPE's latest laptop security update.
If Intel weren't under pressure to keep a negative-ring network enabled snoopstack open by an external entity, they would by now definitely have released an update that allowed people to disable the networking aspect of IME.
Major system vendors are now offering to apply bootleg removal procedures at the factory on customer request[1]. That request is not free. People are willing to /pay extra/ for no-IME laptops.
Either Intel's marketing and public relations departments are asleep at the wheel, or they've gone to the top to request a friendly switch to disable this and been told by the legal department that they can't have one.
OK, but that's (a) 100% speculation and (b) fails Hanlon's razor.
I don't like the fact that you can't disable ME, that it's not open source, and that it's vulnerable any more than anyone else. But this does seem like hyperbole much more than fact.
The existence of that program was pure speculation, until it turned out to be totally real.
>(b) fails Hanlon's razor.
This is completely irrelevant to any argument made between two informed participants. It's worse than speculation; it's a plea to glib colloquialisms. Any chance you've got evidence, or even reasoned speculation, supporting the theory that the world's most successful CPU manufacturer has an incompetent marketing department?
I think they have an incompetent management department that decided that not open sourcing ME is a good idea. Marketing may also be incompetent at picking up the pieces after the bugs were discovered.
> 95% speculation. The last 5% comes from exercising basic pattern recognition.
No, it's all speculation because pattern recognition is not evidence, as applied here. Like, is it possible that I am an NSA agent trying to persuade you that you are safe and shouldn't worry about ME? Of course it's possible. But do you have any evidence of that? No.
"Well, in the past the NSA has asked big companies for backdoors into their products" is a true statement with evidence. "That implies that in this case there is a 5% chance that is exactly what's happening" is 100% speculation because again there is no evidence. If you can find any, I am all ears because honestly I am not a fan of Intel, Intel ME, the NSA, government spying, big corporations taking advantage of consumers, or a number of other things I imagine you and I agree on. But I think I am being rational when I say that chances are this is a stupid bug or number of bugs, plus bad old school thinking on the part of the management team, and not a deliberate NSA feature.
Here is my bit of speculation: if the NSA asked Intel to include a backdoor, wouldn't they both have done a better job of creating it? Why introduce a bug when you can include whatever code you want in a closed source firmware? You can literally add any kind of C&C mechanism you want because nobody can see what you are doing and nobody would ever know. Is the NSA stupid enough to ask for a bug that can be found and exploited? Is Intel not able to offer a better technical solution? Wouldn't it be to both of their benefits to do this right from the start? Also, why only approach Intel and not AMD? AMD is not as popular but surely has enough market share to warrant spying on.
You say "do you have proof?". But nobody can have proof beforehand. That's how these things always go -- something is done under cover and later (usually much later) somebody uncovers it and shows it to the world. Why do you ask of a proof that can't possibly be in the spotlight right now? Many historical facts have been denied and met with skepticism and mockery until they have been proven to be indeed facts. Why is this case different in your eyes?
Why aren't you viewing the possibility of intelligence agencies ordering the Intel ME as one of these future historical facts? If the proof for that became known today, both the agency and Intel would scramble to introduce a better backdoor in the next generation CPUs / MBs and devise a marketing campaign to make it sound good -- and to bash their former selves for "making a mistake" while simply thinking "OK, we're gonna cover it up much better this time and we're gonna twist it in such a way that people would flock to buy it". It's what marketing and spies do; they twist facts. Why is that so non-legit for you?
Furthermore, you're asking why didn't they do a better job if it was a conspiracy. People in closed circles aren't exposed to public criticism and their thinking is affected in the process. They usually think "meh, good enough, nobody will ever find it anyway". They are humans like you and I and are susceptible to bad days or negligence due to being tired. Furthermore, it's very likely they were under pressure to make it work quickly so they took shortcuts. What makes you think the programmers of the intelligence agencies have godlike powers over their (very likely) military superiors? Answer is, they don't. Programmers have no executive powers and their counsel is usually met with skepticism if it doesn't fit the management's agenda.
When talking about intelligence, our best bet is to do educated guesses. If we had hard facts we would be targets. As mentioned in another reply of mine directed at you -- it's their job to hide the facts. So you requesting proof of these matters is basically refuting all possibility of intelligence agency commission of the Intel ME on the grounds of "hey, you are not the next Edward Snowden so your arguments are invalid".
Meh. You come across as a guy who basically says "my speculation is better than yours". Not constructive.
Ok you lost me at “future historical fact”. Again that is a fancy way of saying pure speculation. No I don’t know for a fact that the NSA didn’t order Intel to build a buggy ME into all its processors. I can’t prove that it didn’t happen. And maybe your speculation will turn out to be right. I am arguing that my speculation that this was incompetence is significantly more likely to be correct than your speculation of conspiracy.
Your theory in the above comment is that the NSA or equivalent ordered Intel to build a C&C mechanism into their processors. Intel then did a perfect job covering up this request, but did a piss poor job of implementing it due to incompetence and has not managed to correct it for 10 years. There is no indication that this might be the case but because of other unsavory activities by the NSA or equivalent it can be assumed that at some point evidence will be uncovered that you are right and therefore we should accept it as fact. Do I have that right?
Not exactly but almost. I am saying this is the most likely outcome.
Judging by other activities of the intelligence agencies and working with pure speculation -- not hiding from these words, you are correct by calling it that -- I still think it's much more likely they commissioned the Intel ME.
You mention critical thinking in another comment. Critical thinking, the way I apply it, also requires a historical context to be applied to the situation one is analyzing. Agencies have been doing pretty shady stuff and some of it has been uncovered for the entire world to see.
Critical thinking, the way I apply it, says that the odds are there is a foul play. I merely wish you to recognize that this is the more likely scenario than a bunch of coincidences and/or people supposedly making the ME to serve data center sysadmins -- btw many of those sysadmins, including on several threads here in HN, said they never used the ME and named a plethora of other tools.
Obviously I am not trying to change the way you think in general. I believe we can both agree that none of us knows for sure. The human brain's strength is to work with many variables and be able to impose some order in the chaos by pattern recognition and using historical info. I am not gonna deny this can lead to people drawing awfully misguided conclusions sometimes -- and I've been guilty of that as well! -- but it's the best we have, especially having in mind what tiny imperfect brains we have to work with.
Everything I can name are circumstantial evidence. I accept that. It's the nature of the area. Intelligence data isn't easy to come by.
OK. And with that you are saying that you are basing this on 95% speculation and 5% pattern recognition with no direct evidence, and yet it's the most likely outcome.
And I am saying that the confidence interval on that calculation is just orders of magnitude not tight enough. I am not denying that you could be right. It's just that I am giving that possibility something like a 1% chance of being true, while something like 85% chance of this being pure incompetence by Intel management and engineers (the rest being some other explanation that's neither malice nor direct incompetence). I don't think you and I can find a common ground on this estimation.
Again though, ME is a bad thing because it's not open source, it can't be turned off, and it's buggy. Regardless of who ordered its creation, it sucks.
How can you request a citation about things relating to possible intelligence agencies efforts with a straight face? It's literally their job to make sure such material doesn't exist or sees the light of day if it does. It's not exactly publicly-funded science now, is it?
You request a proof that's impossible to procure. Are you now gonna claim the lack of this proof supports your thesis?
Critical thinking would demand recognition of the fact that intelligence agencies compromising security isn't a hypothetical anymore, it's a fact, and it would further demand intense skepticism of unauditable and hostile (resists attempts to disable it) code running below ring 0.
I never said they don't. Simply that in this case there is no evidence, direct or circumstantial, pointing to Intel ME being born out of an order by an intelligence agency. Could it be? Sure. But critical thinking demands facts, not speculation. Facts are:
1. Intelligence agencies have been known to force companies to give them access to their products.
2. Companies have been known to comply, if reluctantly, at least until a whistleblower exposes the program.
3. Intel ME was developed as an on-chip version of an external card that is actually useful.
4. Intel has made poorly engineered products before.
5. Intel isn't in a habit of open sourcing firmware.
6. From a technical standpoint, Intel is fully capable of creating a system that doesn't allow C&C through a bug and an exploit.
7. AMD, the second largest computer chip maker does not have a matching system that can't be disabled and that has similar bugs.
Based on this, I'd say it's possible that the NSA (or equivalent) asked Intel to develop ME and add a bug to allow C&C, but very unlikely.
It's also possible that the NSA (or equivalent) asked Intel to develop ME and add C&C and Intel did it through a deliberate bug, but very unlikely.
It's also possible that Intel tried to develop a feature the market might want, and screwed up the implementation. This seems to me to be very likely. It's the simplest explanation (Occam's razor) and it requires only incompetence, not malice (Hanlon's razor), so it's sort of by default most likely.
If someone can produce an iota of evidence to the contrary I will change my allocation of probabilities appropriately, but so far the evidence is "it could have been done" and "they've been known to spy on people in the past". In my book that's not a strong enough argument.
It creates a huge attack vector on most computers that the user has almost no control over. Even if Intel are completely uninvolved, some intelligence agency will try to exploit it.
No, the claim being made is that ME is being added as a feature, with a hyperbolic version of the other argument tacked on. Whether they were forced to include it doesn't matter, the way they included it benefits the intelligence agencies.
If my recollection serves, when Intel had what was the largest/most expensive recall in the world in the 90's (at that time anyways), their stock still nearly doubled that year.
I think there is a critical opportunity for AMD here to take this to the public and the media. Basically kick Intel while it is down. Intel will probably recover fine, but AMD shouldn't miss its chance either. Investors and such might pay attention to that and start selling INTC and buying AMD.
If the hit is as bad as they say (30% performance), cloud providers will be almost forced to upgrade when the new hardware comes out that fixes it. Are they really ready to adopt AMD? Go long on INTC?
They could get AMD hardware that works today. We don't know when Intel will have working hardware. It will be at least months and possibly years. Processor design is a long process.
Options are much better for playing these short-term, news related swings. This has the potential of being a good one, as INTC is at the peak of a bull run and this news doesn't seem to have hit mainstream sources yet.
Yep, and until the embargo about this is over, we won't know anything with any certainty. This has been one hell of a fun thing to watch from the outside. I run a small test your code service (for all versions of Perl) that could be affected by this so I'm really curious what the whole thing is.
Essentially looks like Intel compromised (whether intentional or not is a different point) the design to get the speed boost that gave them the lead over AMD for the past decade. Will be interesting to see how all this plays out.
Other than leaking timing information though, is there any reason why this kind of speculative execution can't be secure? Apparently we're going to find out more in the coming weeks, but it feels strongly like Intel has made a number of mistakes leading up to this.
"To compromise" means "to weaken" or "to endanger", not "to make _a_ compromise". To make a compromise is an intentional act, but you can compromise (e.g. the security of) something by sloppiness. So yes, it is a different point.
(Yeah, I know, don't blame me. English _is_ weird.)
AMD's triple core processors were quads with disabled cores. Often times processors within a line are processors with manually set lower clock multipliers or disabled cache.
Sounds like Intel has just made it unlockable instead of permanent. It just brings to the fore what was already being done, and makes us question again the ethics of pricing models.
This is generally the result of segregating defective parts of the chip in order to create a stable (albeit less powerful) chip.
Some of the chips will be fully capable of running with all parts enabled, but in a higher power envelope (this is a guess - but I believe that the fully capable chips most likely to be sacrificed are those that have trouble fitting in the ideal power envelope).
I would also imagine that (under some circumstances) chips that are fully functional within the expected power envelope will be artificially limited in order to control levels of stock.
The vast majority of chips that are limited in this way will be out of spec, unstable or inoperable when unlocked.
> AMD's triple core processors were quads with disabled cores.
That's binning & price discrimination, Intel did the same (with quads v dual IIRC): if you have a defective core, you gate it and sell a 2/3 core instead of a quad. Of course the issue is when the low bin becomes too popular and you have to start low-binning "perfect" parts to keep supplies acceptable (used to be very common for Intel starting ~mid-cycles, they'd literally run out of defects, which is why their low-end CPUs had such good performances & were ridiculously overclockable)
Nvidia's GeForce cards could be converted to Quadro cards by opening a chip and adding some lines with a pencil. Don't think that this still works, but a colleague of my father did it for his home PC.
However, that (and the later software modification) could both hamper performance in games and could exhibit correctness problems in accuracy-focused use cases, so it was rarely a great idea.
No, that was just spoofing the PCI VID:PID to the kernel. It did not enable hardware features, just fooled the driver into thinking it was another device. You could do the same with a patched kernel if you don't want to solder.
At the meta level this is just a special case of "complexity is evil" in security. CPUs have been getting more and more complex, and the relationship between complexity and bugs (of all types) is exponential. Each new CPU feature exponentially increases the likelihood of errata.
A major underlying cause is that we're doing things in hardware that ought to be done in software. We really need to stop shipping software as native blobs and start shipping it as pseudocode, allowing the OS to manage native execution. This would allow the kernel and OS to do tons and tons of stuff the CPU currently does: process isolation, virtualization, much or perhaps even all address remapping, handling virtual memory, etc. CPUs could just present a flat 64-bit address space and run code in it.
These chips would be faster, simpler, cheaper, and more power efficient. It would also make CPU architectures easier to change. Going from x64 to ARM or RISC-V would be a matter of porting the kernel and core OS only.
Unfortunately nobody's ever really gone there. The major problem with Java and .NET is that they try to do way too much at once and solve too many problems in one layer. They're also too far abstracted from the hardware, imposing an "impedance mismatch" performance penalty. (Though this penalty is minimal for most apps.)
What we need is a binary format with a thin (not overly abstracted) pseudocode that closely models the processor. OSes could lazily compile these binaries and cache them, eliminating JIT program launch overhead except on first launch or code change. If the pseudocode contained rich vectorization instructions, etc., then there would not be much if any performance cost. In fact performance might be better since the lazy AOT compiler could apply CPU model specific optimizations and always use the latest CPU features for all programs.
Instead we've bloated the processor to keep supporting 1970s operating systems and program delivery paradigms.
It's such an obvious thing I'm really surprised nobody's done it. Maybe there's a perverse hardware platform lock-in incentive at work.
A lot of these ideas were in the back of our heads in designing WebAssembly, but to keep expectations low, we don't make too much noise about them. However I personally believe that we are on the right track with WASM and am very excited about the future!
It also made me think of PICK (and PICK cpu hardware implementations); though I never learned enough about the internals of PICK when I last used it 20+ years ago (so I could be wildly off-base).
Tao/Intent/Elate (which I think is defunct nowadays) would also qualify, and I'd argue .NET on Windows with the GAC would, too (although there'll be a legitimate argument about whether that's "simple and closely models the processor").
Tao is long defunct, yes (went under a decade ago). It turns out that people don't really want a runtime-portable OS/apps (IIRC the biggest takeup it got was as a Java runtime for mobile, because the competition at that time was all interpreted). There was no security model in VP, though -- single flat address space and bytecode could turn any integer into a pointer and dereference it (loads just got translated into host cpu load instructions), so there was no isolation between processes or between processes and the os.
AS/400 and descendants have a security model, but they rely at least partially on a trusted runtime code generator (and, transitively, trusted boot). The systems have HW assist to tag real pointers, but that's mainly for performance reasons. Pointer validity checks are performed in software (or they were until ten years ago), automatically inserted by the bytecode translator. If you subverted the code generator, your malicious code could get a bit further by forging pointers.
> We really need to stop shipping software as native blobs and start shipping it as pseudocode, allowing the OS to manage native execution.
What we really need to do is to start shipping all software as source code. This is exactly what JavaScript does, and why it is the most successful method of software distribution ever. WebAssembly is a huge step backward.
> What we need is a binary format with a thin (not overly abstracted) pseudocode that closely models the processor. OSes could lazily compile these binaries and cache them, eliminating JIT program launch overhead except on first launch or code change. If the pseudocode contained rich vectorization instructions, etc., then there would not be much if any performance cost. In fact performance might be better since the lazy AOT compiler could apply CPU model specific optimizations and always use the latest CPU features for all programs.
> A major underlying cause is that we're doing things in hardware that ought to be done in software. We really need to stop shipping software as native blobs and start shipping it as pseudocode, allowing the OS to manage native execution. This would allow the kernel and OS to do tons and tons of stuff the CPU currently does: process isolation, virtualization, much or perhaps even all address remapping, handling virtual memory, etc. CPUs could just present a flat 64-bit address space and run code in it.
The overall idea has a lot of merit (and, for example, Apple is moving towards this model with the iOS AppStore) - but I don't see how it solves the current problem.
Across a variety of architectures, the market has come down firmly in favor of hardware address translation and protection. There are various implementations, many not subject to the current side-channel, but all of them do most of the heavy lifting in hardware: TLBs and related things "just work".
Let's say you had some intermediate format and executed everything in a single 64-bit address space after a final JIT compilation step (your suggestion, as I understand it). How would you implement process and kernel memory protection? It amounts to a bounds check on every memory access. Certainly you can use techniques common in bounds-checking JITs today to eliminate many of the checks via proof methods, hoisting and combining bounds checks, etc - but the cost would still be large in many cases.
Maybe you want a hardware assist for this bounds checking then? Well follow that to its logical conclusion and you end up with hardware protection support: maybe in a slightly different form than we have today, but hardware support nonetheless.
There are a lot things we could do differently with a clean-slate design, and I think intermediate representations have a lot of merit (e.g., the radical performance improvements partly as a result of radical architecture changes enabled by use of intermediate formats in the GPU space are evidence this works) - but hardware address translation doesn't seem like the problem here.
Bounds checking in hardware is an awesome idea. It's still simpler than full protection modes and is more versatile. Not only does it allow efficient software JIT implementation of protection but it also allows pervasive bounds checking to eliminate buffer overflows and other common errors. It eliminates the performance incentive for a lot of unsafe code.
What I'm suggesting is not a total clean slate. It could be done easily on current processors or current instruction sets and would be more an omission than a change to core architecture.
I wonder if doing it on current chips and just ignoring all the protection and remapping logic would have a performance benefit? Look at the boost you get on some databases with transparent hugepages, which kind of do that.
Okay, so take this bug for example. It seems to have to do with the CPU speculatively performing a load before checking that it won't generate a page fault due to user code trying to access kernel memory. Say you get rid of process isolation, etc. How do you protect kernel code from user code? You can't do any sort of static analysis I'm aware of that'll still allow you to run C code (which lets you manufacture pointers from arbitrary integers). And if you insert dynamic checks instead, you're talking about turning each memory access into many (memory accesses that in a modern CPU are hidden by the TLB).
> And if you insert dynamic checks instead, you're talking about turning each memory access into many (memory accesses that in a modern CPU are hidden by the TLB).
You only have to check that the memory address is not negative (kernel pointers are negative on x86-64). No extra memory access needed.
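A tiny sketch of that check, assuming the x86-64 Linux split where kernel addresses occupy the upper, sign-extended half of the canonical address space (the helper name is made up):

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool is_kernel_address(uintptr_t addr)
    {
        return (intptr_t)addr < 0;   /* top bit set => upper half => kernel */
    }

That is a single compare against the pointer value itself, with no extra memory access.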
> You can't do any sort of static analysis I'm aware of that'll still allow you to run C code (which lets you manufacture pointers from arbitrary integers).
Good point about the ease of checking kernel pointers. That doesn't address process isolation generally, however, unless you're willing to segment virtual memory in the same way.
As to NaCl, it relies on various CPU protection mechanisms, and also makes some major trade-offs: https://static.googleusercontent.com/media/research.google.c.... On x86, NaCl uses the segmentation mechanism. On x86-64, where segment limits are no longer enforced, it masks addresses and requires all memory references to be in a 4GB space. To handle various edge cases, and to speed up stack references, it relies on huge guard areas on either side of the module heap and stack, thus relying on the virtual memory system. Finally, likely to mitigate the overhead of masking, it does not sandbox reads at all, and relies on the virtual memory system to protect secret browser information from the sandboxed process. Even with these limitations, on about half the SPEC benchmarks the overhead is 15-45%.
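Roughly, the x86-64 masking described there works like the following sketch. This is a simplification, not NaCl's actual code; sandbox_base is a made-up name for the start of the reserved 4GB region, and the guard pages around it are assumed to already exist.

    #include <stdint.h>

    static uint8_t *sandbox_base;   /* assumed: start of a reserved 4GB-sized region */

    static inline uint8_t *sandbox_ptr(uint64_t untrusted)
    {
        /* truncate to a 32-bit offset, then rebase into the sandbox region */
        return sandbox_base + (uint32_t)untrusted;
    }

Compared to a full bounds check, the mask-and-rebase is cheap, which is exactly why the scheme leans so heavily on guard pages and the virtual memory system for everything the mask can't express.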
Good point about static analysis and C code. That means you would not be able to toss memory protection unless you introduce fast hardware support for bounds checking and bounds check everything in JITed code. I guess you could also have the JIT do more elaborate guarding of memory but that would probably have a performance penalty.
You could still toss a lot: virtualization, complex multi-layered protection modes, address remapping, and essentially every hardware feature that exists to support legacy binary code. All deprecated instructions and execution modes could go, etc.
Finally you would maintain the benefit of architecture flexibility. Switching from x86 to ARM, etc., would be easy.
Out of curiosity, which parts of .NET bytecode do you believe to be "too far abstracted from the hardware"? The object model, certainly, but you don't need to use that. On the other hand, the basic instruction set for arithmetic and pointers seems to be on the same abstraction level as WebAssembly to me.
You can build C++ as .NET, absolutely. So far as I know, it can handle everything in the Standard except for setjmp/longjmp. All it takes is compiling with /clr:pure.
What you're referring to is probably C++/CLI, which wasn't removed, but it hasn't really been updated for a while. C++/CLI is a set of language extensions that make it possible to interface with the .NET object model.
If we go feature by feature, the .NET type system and bytecode have:
- unsigned types
- raw (non-GC) data pointers with pointer arithmetic
- raw function pointers (distinct from delegates)
- structs and unions
- dynamic memory allocation on the stack (like alloca)
- vararg functions
While what you're saying sounds nice, your theory has nothing to do with practice.
In reality, the ultimate source of this problem is the mismatch in speed between silicon logic and silicon memory. This is why your CPU ends up doing all sorts of tricks like caching, branch prediction, speculative execution to compensate for slow memory.
Can Intel release a drop-in CPU that will avoid or mitigate this issue?
The infrastructure investment in Intel cores is huge. If a drop-in replacement lets me minimize downtime, regain performance, and is "cost effective" compared to a cost-prohibitive full replacement, does this result in Intel having a sales INCREASE as it replaces the bad silicon?
I don't know enough about this issue to speak to it either way, but I would love to hear if this fix is possible/viable.
Anything is possible, but I would think that if it was fixable at that level we wouldn't have OS developers going through all of this trouble. There is basically no upside to this patch-set at all for the end-user except for increased security, and there's pretty big down-sides in the form of fairly significant speed losses.
Keep in mind, this is more-or-less just the 4GB/4GB patch set that floated around a while back for 32-bit systems, and that patch was never merged precisely because of the big performance impact it imposed - and that change actually had some merit; this one has none besides security. I don't think Linus would be letting this go through (and especially be on by default) without a fuss unless there is really no other way to mitigate a fairly big security hole. That's just my opinion, but it seems pretty clear to me. It's possible he has not spoken to anybody at Intel about this, but I would personally think he has some connections to get some info on it.
For years, Intel and AMD processors have supported patching using microcode updates. Until we know the embargo is lifted and we know the full extent of this vulnerability, we won't know if that would be possible.
Based on the fact that kernel patches are going in, it's reasonable to assume this means it can't be fixed with a microcode update. Otherwise, Intel would issue a microcode update and the Linux kernel wouldn't be accepting this patch set as a mitigation for this issue (which is all this patch set is; it has no other benefit to the end user than fixing this bug).
I wouldn't be so sure about that. Linus and Gross might think that it could be fixable in microcode, because they wanted it configurable, opt-out. However, in the current patchset it's mandatory.
With a future OS-specific microcode patch, a config option would make more sense.
They'd just need to find a second CR3 register to separate user from kernel space, or do the permission check before prefetches, like AMD does. On Linux this check would be cheap; on Windows NT, not.
It depends on how long it takes for Intel to go through all regression tests for all affected platforms. If it takes several months to complete, a countermeasure in the kernel update may still be the better stopgap.
Or maybe Intel has already privately disclosed that no backport will be done to the firmware of older CPUs, in which case the kernel update is the stopgap for newer generations and the solution for older ones.
They have known of the issue since June. Now that the patches are out it'll be hard to regain the performance. I doubt they are able to issue a microcode update within the coming months; otherwise large clients such as AWS would have implemented that instead.
Until more information is available, who knows. It might be fixable in microcode, it might be fixable in a new processor stepping, it might require a deeper rework that won't come out until the next generation of processors (or even the generation after that).
Though if it were going to be fixed in microcode, it seems like this would have played out by Intel just having released new microcode already and encouraged people to update to it, rather than every OS vendor scrambling to rewrite large chunks of their memory management, you know?
Yes, but they might have focused more on Windows, which is not so easy to fix, especially in microcode. The NT kernel fixes were already out for months; there the page-table layout is not as simple as Linux's, while Linux is still discussing it.
For Linux, microcode would just need to look at the top bit of the address for a permission check. Not so with Windows.
Wouldn't this kind of issue validate the ideas of microkernel-based OSs, where kernel and user spaces are already completely separated?
BTW, removing the kernel from the non-privileged address space seems like such a great idea (which is not a new one at all) that the whole thing should probably have some hardware support to be made fast.
> Wouldn't this kind of issue validate the ideas of microkernel-based OSs, where kernel and user spaces are already completely separated?
I don't think so, but it depends what you mean.
Kernel space and user space being separated isn't specific to a microkernel. The only reason the kernel is mapped into each process is to avoid the TLB flush during syscalls. The pages themselves aren't actually accessible unless you're running in kernel mode (well, unless you're using hardware affected by this bug). So, in a non-broken system, kernel and user spaces are separated, even with a monolithic kernel (Linux).
> BTW, removing the kernel from the non-privileged address space seems like such a great idea (which is not a new one at all) that the whole thing should probably have some hardware support to be made fast.
For the most part, I agree. However, it really shouldn't be necessary if the virtual memory protection did what it was supposed to do. Mapping the kernel into the process address space and using the page protection flags is an optimization that is perfectly legal from an architectural standpoint.
If you can't rely on the page protection flags to work, then you really can't rely on any other hardware feature to work either.
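For reference, the page protection flags in question boil down to bits in each x86 page-table entry, in particular the User/Supervisor bit. A rough sketch of how user access is gated (bit positions per the x86-64 paging format; the helper function is made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_PRESENT (1ULL << 0)   /* page is mapped */
    #define PTE_USER    (1ULL << 2)   /* 0 = supervisor-only, 1 = user-accessible */

    /* would the MMU allow a user-mode access through this entry? */
    static inline bool user_can_access(uint64_t pte)
    {
        return (pte & PTE_PRESENT) && (pte & PTE_USER);
    }

Kernel pages stay mapped with PTE_USER clear, which is exactly the protection this bug lets speculation peek around.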
Given Intel's dominance of the server market does this mean that datacenter computational capacity will see an overnight ~5% drop?
Is there enough spare capacity to cope with this? Will spot-instance prices go up? Will I need more instances of a given type to run the same workload?
Given what's been disclosed so far it seems an exploit using rowhammer techniques would be unlikely to work with ECC RAM. Consumer systems will be screwed unless a tolerable microcode update is released.
I don't think this issue is related to rowhammer. I think people have been speculating about rowhammer because it's a famous hardware bug, but none of the details of page table isolation seem to align with a rowhammer-based attack.
Oh, are you thinking the KASLR bypass is actually the main problem, because it allows targeted rowhammer? I'm not sure if that's really true, since a KASLR bypass would give you a virtual address, and rowhammer would care more about physical addresses.
But in any case, the KASLR bypass is not the main vulnerability here. KASLR is widely seen as too leaky to be really useful. Linux would not rush out a >5% performance hit just to fix one of the many leaks.
I was under the impression that rowhammer could work because ECC RAM can't correct a high number of errors. Specifically, "However, even such modules cannot correct multi-bit disturbance errors" from http://users.ece.cmu.edu/~yoonguk/papers/kim-isca14.pdf.
With such a crude mechanism the odds are pretty slim that you can create all the right multi-bit flips in one go without hitting an intermediate state that triggers an ECC fault. Research theorizing and actual practice aren't always the same.
This patch is part of a patchset that isn’t merged yet. The patchset adds 29% (with PCID) or > 50% (without PCID) overhead to syscalls on Intel processors.
Overall, this has between 0.28% (best case application with barely any syscalls) and > 50% (du, which does lots of syscalls) impact on performance on Intel processors.
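For context, numbers like these typically come from syscall-bound micro-benchmarks, since the cost being added sits on the kernel entry/exit path. A rough sketch of such a benchmark (illustrative only, not the benchmark behind those exact figures):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const long iters = 1000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            (void)getppid();                  /* a near-trivial syscall */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per syscall round trip\n", ns / iters);
        return 0;
    }

Workloads like du make many such round trips per unit of useful work, which is why they sit at the bad end of the range, while compute-bound applications barely notice.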
> Nothing will happen since this patch will speed up AMD hardware, but leave Intel the same as before.
It doesn't speed up AMD hardware. Intel incurs a performance penalty (so it doesn't leave Intel the same as before), but that penalty doesn't make your AMD CPU magically go faster.
All that I've read about this so far seems to indicate that it's only a way to bypass KASLR... which is itself not really a problem, but there must be something more to it. Given that it doesn't affect AMD, perhaps it's related to Intel ME?
Reading kernel memory from user mode = reading cached disk blocks, cached credentials and anything else, by simply running javascript on a web browser.
Interesting. I can see it being a concern in shared environments (hence all the cloud providers are quite scared), but unless there's another part about being able to modify kernel memory, IMHO it's not such a big deal for the typical single-user personal computer.
I wonder if there are other (non-x86) CPUs that do similar speculative execution and are affected... the general ideas behind it don't seem to be specific to x86.
How so? Letting any old web page read your kernel’s memory seems like kind of a big deal to me. On the other hand I guess remote debugging will be a lot easier in 2018 :)
...but the blog post above shows that you need to execute instructions that (try to) access kernel addresses, and have a handler in place to catch the inevitable exception. That doesn't seem like code a JS JIT could generate.
You might be thinking of that JS RowHammer demonstration, but that was using regular memory accesses and not with the specific kernel addresses that you need for this.
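For concreteness, the handler-plus-faulting-access structure described above looks roughly like this sketch. It is not the linked PoC; the kernel address is a made-up placeholder, and the speculative/cache-timing part that actually leaks data is omitted.

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf recover;

    static void on_segv(int sig)
    {
        (void)sig;
        siglongjmp(recover, 1);          /* jump back past the faulting access */
    }

    int main(void)
    {
        volatile const char *kernel_addr =
            (const char *)0xffffffff81000000ULL;   /* made-up kernel-half address */

        signal(SIGSEGV, on_segv);

        if (sigsetjmp(recover, 1) == 0) {
            char v = *kernel_addr;        /* faults; never succeeds architecturally */
            printf("read %d\n", v);       /* not reached */
        } else {
            puts("caught SIGSEGV, continuing");
            /* a real exploit would now time accesses to a probe array */
        }
        return 0;
    }

A JS engine gives you neither raw pointers to arbitrary addresses nor a way to install that kind of recovery handler, which is the point being made above.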
Sorry that train has left the station. JS is now a part of the web. The advice to keep JS off by default is a lot like saying "turn off your Wi-Fi by default" and "don't use a computer." People that do it occasionally experience an exaggerated sense of smugness when a particularly nasty bug is discovered, but then they go back to leading a much more difficult online life than the rest of the world.
No, it's only because of "JS is now a part of the web" advocates that we've gotten into this horrible situation.
> but then they go back to leading a much more difficult online life than the rest of the world.
I completely disagree, because I don't have to routinely subject myself to the barrage of useless distracting noise (adverts and whatever else) caused by JS. https://news.ycombinator.com/item?id=10871967 (The rest of the comments on that item are worth reading too.)
Also, "JS off" is very much in agreement with "don't run untrusted code", something which everyone who cares about security in any way would have no problem with.
Yes, I advocate for JS to be a part of the web. There are good reasons for it. But regardless, it has nothing to do with advocates. We are in this situation because browser vendors included JS and developers and users found it useful. Again, you are free to deny the idea that this is irreversible, but I am with the 99.99% of users of the web who have JS enabled.
Edit: QoL is subjective of course, but let me ask you this: when was the last time you really had JS enabled by default and how did you measure the trade off? My suspicion is that most people who put on their tin foil hat^W^W^W^W^W^Wturn off JS by default don't actually turn it back on frequently, and spend a whole lot of their lives fiddling with drop down menus to enable/disable JS on specific sites.
> and spend a whole lot of their lives fiddling with drop down menus to enable/disable JS
Even without special extensions and keyboard shortcuts you spend very little time fiddling with menus. It seems like a lot only at the beginning and quickly gets to near no fiddling at all. But it also saves time on various things, like when your ad blocker doesn't catch something and you have to close those click-under popups or see a page full of ads where it's hard to even find the content; things also load faster, and so on.
My only problem without JavaScript is Cloudflare. They truly are sabotaging it, giving impossible-to-solve CAPTCHAs, for example.
I honestly don't know what sites you are visiting where ad blocking is such a problem. I guess I've seen the odd clone of KAT and other torrenting sites that do this shit. It's annoying, I agree. But in my daily life I rarely encounter an ad that slips by my ad blocker and makes me drop everything I am doing to go digging into how to kill it. On the contrary, I find my ad blocker more annoying sometimes in the other direction, where my bank's site doesn't work and I have to disable it to get the site to e.g. show me my balance or make a credit card payment. I don't see how the savings in time will add up over a lifetime.
there's a subset of the web that still remains a hypertext document database (the 'web 1.0' if you will) instead of becoming an application delivery platform (web 2.0, i hear it's almost out of beta). going JS-less on wikipedia is possible and not at all a bad experience.
Sure if you limit your life to Wikipedia that's fine. Hell, you don't even need an internet connection for it. Just download it all once in a while. But the rest of us like using places like Amazon, Slack, Google Maps, etc.
I fully support not making content delivery rely on JS. But disabling JS because it can be used for intrusive ads is a lot like taking the wheels off your car because it can take you to the mall where you might see big for-sale signs and annoying sales people. Effective, but stupid.
> But disabling JS because it can be used for intrusive ads is a lot like taking the wheels off your car...
You should try it sometime. Selectively enabling JS will be annoying at first, but as long as you save your preferences, the web will soon become a much less terrible place, and you'll rarely have to tweak your config. This approach won't work for non-techies, of course, but it's not much of a hardship for someone vaguely familiar with how the web works. Amazon, for example, works fine with a bit of JS not including amazon-adsystem.com.
Bad comparison, unless you were to change it to 'reprogramming your car so it does not take you to - or warn you about - annoying sales people'. JavaScript can be disabled for specific sites or purposes, or only enabled for specific sites and purposes.
JS is part of the web - going JS-less under the guise of security seems fruitless to me. Even if I went JS-less, it would be very difficult for me to convince anyone with access to privileges on my life to go JS-less as well. A JS exploit could affect my parents which would in turn affect me. A JS exploit could affect my doctor/lawyer/bank teller which could affect me.
I'm not sure what the advantages of this argument are anymore. JS is now so ubiquitous I can only imagine how a drive-by JS exploit can truly mess you up in obscure ways despite the fact you browse the web with IE4.
That is the true counter-point here. Even if one person protects themselves better, they are so connected to several (or many) others that the overall protection gains are minuscule.
And this is not gonna change before a huge paradigm shift in network protocols and network apps.
What I meant by this is that JS relies on a browser with an interpreter. If we use one without an interpreter then JS is nothing but text. I guess one could claim that this text is "part of the web". The point is that it is the user's choice whether to run it through an interpreter. Sometimes they might want to do that (maybe offline), other times they might not. Most times I do not need to run JS to get what I am after (e.g., text, documents, videos, etc.). There is just no need to run all these third-party scripts to read some text or download a file for offline viewing. I may read the JS though. In that sense, yes, it is "part of the web". It just isn't the content part that users care about.
Data structures stored in kernel space, such as llds [1], will not incur the overhead of the TLB flush/load.
I suspect that storing data in the kernel space in order to avoid maintaining a large application PD will become the norm, whereas in the past it has been reserved for use cases like search engines with massive in-memory trees.
Would it be possible to slow down segfault notifications to mitigate the attack? For example, if the segfault was not on kernel space, halt the application for the time offset of a kernel read. In this way all segfaults would be reported at more or less the same time and the attack could be avoided.
Are there any sane apps that depend on timely segfault handling and thus might be affected by such a workaround?
It's not timing the segfault delivery itself, the idea is to time another read of your own address space after the fault to see if it's been prefetched or not.
Maybe you could CLFLUSH on segfault delivery though.
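For reference, the cache-timing probe these comments revolve around looks roughly like this sketch (x86-only; the threshold separating a hit from a miss is machine-dependent and deliberately not hard-coded):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    static uint8_t probe[4096];

    static uint64_t time_load(const volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;                         /* the load being timed */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void)
    {
        _mm_clflush(probe);               /* evict the line */
        _mm_mfence();
        uint64_t miss = time_load(probe); /* cold: cache miss */
        uint64_t hit  = time_load(probe); /* warm: cache hit */
        printf("miss ~%llu cycles, hit ~%llu cycles\n",
               (unsigned long long)miss, (unsigned long long)hit);
        return 0;
    }

Which is why flushing the probe lines again on fault delivery, as suggested above, would at least blunt this particular measurement.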
I sometimes wonder if verifying properties of the code we run wouldn't be smarter than relying on hardware isolation. Or at least in addition to hardware isolation, so that there are two layers.
By verify I'm thinking NativeClient-like or JVM isolation.
Obviously, it would entail complete OS rewrite, or maybe partial...
Pretty extraordinary that a user's most important files (their documents and whatever else is in their home folder) are accessible to any app at any time. Why are we still using this outdated security model? On Windows, I could download an .exe and it could upload the entire contents of my Dropbox without even prompting for elevation or anything. Kinda scary when you think about it.
> Why are we still using this outdated security model?
Because it's convenient. The alternative would be something like flatpak's portals, which funnel everything through a few standardized dialogs; but how would you for instance use them to implement a media player application which scans for mp3 files, reads their tags, and presents them on a list? A "select a directory" portal dialog either would not allow for a recursive scan, or risk a non-technical user selecting their home directory, and either way would be a strange interruption in the workflow. (I understand, however, that Android has done precisely that for removable SD cards...)
Would it make sense to switch cores at the same time the context is switched between user and kernel? The hit to the cache is already there, and if one could go back and forth to already-primed caches on different cores, at least some of the performance issues would be mitigated.
I thought it was clear that this patch only applies to AMD. However, reading the comments here confuses me. How does the performance on Intel drop with this?
No, it's the other way around. The patch which decreases Intel performance has already gone in. This patch is AMD saying "we don't need this, so we're disabling it for AMD CPUs."
There's code going in to perform some extra work to workaround some CPU bugs. This disables that work on AMD, because the bugs are not present there. Intel will have to do the extra work still as the bugs are present in Intel CPUs.
The full details are as yet undisclosed, implying there are security issues arising from these bugs (see also the name of the flag).