> Fully removing the vulnerability requires replacing vulnerable CPU hardware.
This smells really bad to me, as if Intel pressured CERT into removing language that could have caused their market value to instantly vaporize as every consumer for the last 20 years joins a class action suit...
Here's the vulnerability note stating to replace hardware on 01/04/2018 at about 1800 GMT:
Then here is the same page with the replace verbiage removed on 01/04/2018 at about 1900 GMT:
In any case, I doubt Intel would pressure anyone to remove the generic imperative "buy a new CPU".
Imho Intel would much rather they keep this language, which is why they removed it. There is no drop-in non-Intel replacement for an Intel CPU. Telling everyone they need to replace their CPUs is basically a mandate to buy whatever replacement Intel can cobble together. Having to replace all those chips would see Intel's stock price skyrocket. The reality is that chips don't need to be replaced ASAP, and customers have time to perhaps choose non-Intel chips.
A product manufacturer with serious defects almost always ends up eating the costs of hardware repair or replacement.
Intel is certain to face lawsuits over this. Consider if a major car-brake manufacturer discovered a design flaw that prevented the brakes from functioning in certain situations. It'd be facing multiple lawsuits by now, whereas Intel is going with the "our chips are the most secure ever" line.
Your example with the brakes is about safety, i.e. making sure the car won't kill you during normal operation. Normally, unless their CPUs start bursting into flames, this is not a problem for Intel.
The problem here is about security. A car analogy would be that to start your car, you need a code and that code can be found by measuring how long it takes to process the input, making life easier for thieves.
As for liability, I don't think you can be liable in court if you didn't plan for something that wasn't known at the time and isn't trivial.
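The car-code analogy above is a textbook timing side channel. A minimal sketch of how such a leak works (all names hypothetical; a comparison counter stands in for wall-clock time so the effect is deterministic):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical keypad check, analogous to the car-code example above.
 * The early exit is the leak: an attacker who can measure how much work
 * the check does learns the secret one digit at a time.  `work` stands
 * in for elapsed time. */
static int check_code(const char *secret, const char *guess, int *work) {
    *work = 0;
    for (size_t i = 0; i < strlen(secret); i++) {
        (*work)++;
        if (guess[i] != secret[i])
            return 0;   /* early exit: reveals position of first mismatch */
    }
    return 1;
}
```

A guess sharing a longer correct prefix measurably does more work, which is exactly the signal a thief would harvest; the standard fix is a constant-time comparison.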
Engineering brakes that work reliably over months and years of use isn't trivial.
Process isolation and kernel security issues have been known for decades and have been fundamental design requirements for decades.
In both of those cases the products can cause serious harm without any third party being involved: the brakes would simply fail, or the battery would explode, during normal use.
However, in the Intel case there has to be an attacker that actively exploits an issue in design.
To me this would be like making a class action suit against all lock vendors because they can be bypassed with the right set of tools. The fact that this affects everyone (Intel more than others) and that it took 10 years to find grants them some excuse. Also the architecture is not secret as far as I know so anybody could have audited this. They most probably did do so and found nothing until now.
Now, I do not like Intel communication around this and if it comes out that they knew this for years and decided to sit on it then it would be a different story.
Class action lawsuits are useful when there is negligence, or bad intent but in this case what could it possibly solve?
Sorry, but in the 21st century world of the internet, cracking needs to be taken as a given. In many cases, "normal use" for a computing product means exposing it to use and therefore potential attack from anywhere on the internet. CPUs certainly fall into this category.
If you bought a car that was advertised as having 300 horsepower, and then the manufacturer realized it was unsafe unless they made a software change limiting the horsepower to 200, wouldn't you expect some compensation?
Example: Intel rushed a product out to compete against AMD's Threadripper. That was after the issue was already known. So Intel kept investing in bad practices and selling known-bad products instead of investing in fixing the hardware flaw.
A major flaw in a car prevents selling the car until the flaw is fixed. Why shouldn't this also apply to the computers that run cars and other products?
You don't think people should be able to sue for negligence?!?
If they don't do anything after finding out about the vulnerability, then probably; it depends, I don't know.
But they are clearly putting in effort with vendors to mitigate and solve the issue. Doesn't look like negligence to me.
No secret channel to communicate with Linux Kernel developers? No coordinated effort? Last minute findings?
On this thread https://lkml.org/lkml/2018/1/4/174 it looks like the author is disclosing the info at the last minute.
Did the vendors ignore the disclosure initially and begin to change tactics later in the game? Based on how certain vendors have been characterizing this in their PR, I wouldn't be surprised if they didn't take the problem seriously originally.
First prove it works and then prove it can be made better and faster ...
Everybody stalls for time when the stakes are this high. How long can I reasonably spend trying to turn this into a small problem before I have to go public with it?
Saying it’s a bigger problem than it turns out to be is a PR nightmare of its own. If there was a cheap fix then you cried wolf and killed your reputation just as dead.
The chatter is all about how CPU manufacturers screwed up, but there is a much more alarming issue here, I think: the apparent irresponsibility of the people who published the flaws before the security teams and the users could mitigate them. Perhaps there was a reason for accelerated public disclosure, but so far this makes no sense to me.
> Note: IBRS is not required in order to isolate branch predictions for SMM or SGX enclaves
Perhaps this microcode update exposes a feature which was originally to protect these two modes? But that would mean that Intel did think about leaks through the branch predictor, only didn't make the logical leap that this could be an issue also for normal ring0/ring3...
1: https://arxiv.org/abs/1611.06952 (Nov '16)
AMD isn't vulnerable to Meltdown not because they foresaw this issue, but probably because they simply weren't as aggressive as Intel in allowing speculative execution. For years people have preferred Intel over AMD CPUs due to their performance advantage, thanks in part to the higher sophistication of Intel's pipeline.
Or to recast it, nobody is hating on AMD right now, but AMD CPUs do allow a user process to learn some things about the kernel via timing attacks. If next month a researcher develops Meltdown2 for AMD, are AMDs designers now suddenly idiots for missing an obvious security hole?
You don't see why being "aggressive" with speculatively loading data over a _protection boundary_ could be considered irresponsible? I for one, think AMD has the right to gloat if they want. It's not just AMD, besides the latest version of ARM it seems all the other CPU vendors decided to not be "aggressive" with their users' protected data (sparc, mips, amd, power, s390x).
Does it mean all those vendors and architects had PoC for years for this and were sitting on it? No but they could have had a hunch not to go that route. Just like a sane developer might have a hunch over opening a wide API surface to a server that contains sensitive data. It doesn't mean they know there is security vulnerability in one of the API endpoints, it's just sane practice.
> If next month a researcher develops Meltdown2 for AMD, are AMDs designers now suddenly idiots for missing an obvious security hole?
But who called any developers idiots here? I think you were the only one.
I think you are being unfair as the GP didn't call anybody an idiot for not caring about security.
It just calls them out for insisting that this is not a flaw or a bug.
This wasn't an oversight; this was more like... whatever you call the fact that we're still, today, choosing to employ (and even design new!) hash functions that quantum computers could probably break easily. We're making an intentional design choice, based on the perceived difficulty and current infeasibility of a particular known class of attack against that design. That current hashes are vulnerable if-and-when a quantum computer comes along to crack them isn't really a "bug" or a "flaw" in our hashing algorithms; it's a known property of our hashing algorithms.
Or, for another analogy: there was a point in history when the peak of warfare was ships shooting guided missiles at other ships, and the targeted ships shooting smaller "countermissiles" that attempted to get in the way of the incoming missiles before they could hit anything important. Every missile had a faint heat signature, making it visible to infrared optics—this was an unavoidable consequence of the fact that missiles need engines to make them move. But for a long time, the idea of a heat-seeking countermissile was just infeasible or un-economical to implement, so little work was done to hide the emissions signatures of missiles. The emissions signature certainly wasn't a "bug"—it wasn't the result of an oversight; and it's a bit strange to call it a "flaw", insofar as there was no such thing as a missile that didn't have said "flaw" while still being a missile. It was a known property of the missile technology of the time. Or, if you want to think of it on a higher level, "missiles" themselves—anything that you might call a missile—had a categorical flaw.
In the same way, anything we might call a modern-day CPU is now known to have the categorical flaw of leaking at least some amount of information through speculative execution. You can minimize it (like you can minimize a missile's heat signature), but you can't get rid of it without making something we wouldn't even call a CPU any more (most things without speculative execution are, these days, considered microcontrollers.)
In that sense, I can understand Intel's insistence that they didn't make a flawed product: they made a perfectly good instance of a "computer processor"—it's just that "computer processors", as a category of product, have a problem.
You wouldn't blame the missile manufacturer for making missiles with visible emissions signatures, before heat-seeking countermissiles were invented. They didn't introduce a flaw. They made their product to order, and the order—the requirements, the demands of the customer—themselves contained the flaw, contained the supposition that it was okay to make a particular trade-off because it wasn't currently exploitable.
In the missile manufacturer's case, it was the government that said "sure, heat doesn't matter, just make it go fast"; and when heat-seeking countermissiles were invented, it was the government whose (lack of) intelligence foresight was to blame for not changing their requirements to anticipate that exploit.
In Intel's case, some customer could have foreseen the exploit and shifted the market toward demanding non-speculative-execution CPUs. Intel was just making what the customers asked for, and right up until the end, they were asking for the categorically-flawed product.
You seem to think that this issue is inherent to speculative execution - it is not. It is due to Intel performing speculative execution in a flawed way. In particular, an incorrect branch prediction should have no detectable effect on the system, whereas here it does.
Branch prediction is not scoped for that. Branch prediction will always change microarchitecture state, which is always detectable at some level or another. The key takeaway for designers should be that even though microarchitecture state is not exposed in the datapath it is not secured from side channel exposure.
It’s basically like saying “we are building stuff customers wanted, we just also beat to death any other potential alternative they could want as well”
I don't think Intel's engineers are incompetent, but be careful about the reasoning you're using.
We did. We, the people you called "paranoid" while we quietly try to fix things. We're the ones trying to make sure that people don't die when cyber vulnerabilities are exploited by shitty actors.
I have a theory that this heavily relates to the feedback loops and signals in play. New features are positively observable and their impact is observable from release onwards.
When defending against unknown unknowns, security is unobservable. It's observable only in its absence. All that's left are heuristics and synthetic signals like pentesting.
I wrote a multi-thousand word essay on the topic, but for an internal audience. I don't know if I could properly share it.
My own speculation is that we got here as an industry through a complete absence of liability. Bugs are not a big deal; they are _expected_ now.
The only counterexample I know of is Knuth's bug bounty.
Happy to be proven wrong, however.
A) they are vulnerable to two out of the three attacks, which indicates they did not in fact consider speculative execution a danger
B) it is hard to believe that if they researched the topic at AMD, they wouldn't find this vulnerability in Intel processors a long time ago
It’s this protection that means that AMD processors are not subject to meltdown.
All CPU manufacturers appear to have been caught on the hop by Spectre (branch prediction history side channels).
I think it's worth noting that it's entirely possible that, as a CPU execution-pipeline designer, you think about memory loads/stores, the L1 cache, branch prediction and speculative execution, and it occurs to you that the cache gets polluted by speculative execution and that branches can be security checks.
But the solution is simple. If the branch is important, wait for it. (Load it into L1 cache before the branch - use a memory barrier.)
The fact that this is not in any ISA docs is a likely pointer toward the possibility that it in fact hadn't occurred to them.
These attacks are the same as any new invention. Easy to see once you've grasped the concept.
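For what it's worth, the "wait for it" idea above roughly corresponds to inserting a serializing instruction after the security check, so no load can execute before the branch outcome is known. A hedged sketch (function and parameter names are made up; the `lfence` path is x86-only and the fallback is a plain compiler barrier for illustration):

```c
#include <stddef.h>

/* Speculation barrier: on x86, lfence stalls execution until prior
 * instructions (including the bounds check) have completed; elsewhere
 * this degrades to a compiler barrier and is illustrative only. */
#if defined(__x86_64__) || defined(__i386__)
#define SPEC_BARRIER() __asm__ volatile("lfence" ::: "memory")
#else
#define SPEC_BARRIER() __asm__ volatile("" ::: "memory")
#endif

/* "If the branch is important, wait for it": the load below the barrier
 * cannot run speculatively before the check above is resolved. */
static int safe_read(const int *array, size_t size, size_t idx) {
    if (idx >= size)
        return -1;           /* out of bounds: never touch the array */
    SPEC_BARRIER();
    return array[idx];
}
```

The architectural result is identical with or without the barrier; the barrier only constrains what may happen speculatively, which is why its cost shows up as a performance penalty rather than a behavior change.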
I'm neither supporting or refuting the commenter. Just explaining to you the meaning of the comment.
If I observe fact X which is "bleeding obvious", then it's not my responsibility to tell the world about X.
Of course this stuff isn't "bleeding obvious", but I'm going to assume that the AMD engineers thought that it was "obvious enough" to not explicitly tell the world about it.
Besides... do you have any idea what kind of NDAs those engineers are going to have to sign?
Intel designers should of course be blamed for such issues - full-price-paying customers are now suffering performance penalties of up to 30% on some workloads, when each new generation of Intel processors has given only a 5-10% performance boost over the last 5-10 years. Sure, the issue is a surprise for everyone, but if they are designing processors to power billions of devices, they are expected and required to be exceptional.
Let's make it perfectly clear - Intel designers don't have to be smarter; they can give up their market share to AMD. It is a privilege to design chips for the entire world, not a right. When things go wrong, they need to admit the mistakes and fix their crap; sadly, Intel is putting too much effort into its PR rubbish ATM.
Spectre is clever. Meltdown, as known today, is (mainly) an Intel major fuck-up.
The reason for the strong reaction over these flaws is not the severity of this issue, nor does anyone believe Intel engineers are idiots. They are getting blowback because they implemented a closed solution for a powerful feature - the Management Engine.
They lost a lot of trust, which makes it far harder to recover when new issues occur.
On a grander note, there are probably hundreds of even more esoteric side channel attacks all across the system since every process changes the system state. This is more like the beginning of a new style of attacks now that one is shown to be practical, rather than any particular entity's fault. Hardware designers will need to consider informational and physical isolation in a more rigorous way, and there may be theoretical limits that bound the performance-security tradeoff when you share resources.
I don't know if playing the blame game here is going to be productive. All CPUs are vulnerable to side channel attacks of various kinds and focusing on Meltdown specifically seems like missing the forest for the trees - especially as ARM has the same issue in some of their designs.
In this case, the cache is not being rolled back. Neither (presumably) is the branch predictor. And then what of the performance counters? (do they count if an instruction is not retired?). I see many potential attack vectors opening up and it's much harder to prove that any of these state changes can't be exploited.
Why would it be any different?
It seems that at least some ARM might be affected by both Spectre and Meltdown.
So far, I have only seen negative meltdown tests for older AMD cores. Is there anything known for Ryzen except for the PR by AMD (and the kernel patch, which might be based on the google project zero information about older AMD cores)?
While meltdown is "easy" to fix by not reading memory if unprivileged to do so, spectre is a lot harder. Even if the caches are made safe, for example by having "speculative" cache lines which will be renamed into the "true" cache when the speculative thread is actually accepted and retired: It's not the only place where there is hidden state. For example, the branch prediction might be affected, and might give a timing signal.
> We reported this issue to Intel, AMD and ARM on 2017-06-01.
I don't think they have been twiddling their thumbs with such a huge discovery without informing the manufacturers.
Perhaps it took so long to find because it's only relatively recently that companies have been paying people to break the hardware?
What do you mean by 'aggressive' here? Because frankly, I think aggression isn't a good trait to exploit in business - it's not the same as competitiveness.
Is it 'they would have done this if they could but they didn't try hard enough', or is it 'they could have done this but didn't have the nerve to take the risk'?
In the latter case, I'd argue that a lack of willingness to trade off security against performance is a Good Thing, in line with engineering ethics. In the former case, it seems like you're assuming business competition is always a zero-sum game - is this correct?
The Spectre attack is a side effect of performing speculative execution without wiping caches, something that was, until yesterday, an intentional and clearly-chosen industry design direction, standard across almost every commercial CPU produced in the last 20 years, despite the known risk of "theoretical" timing attacks.
The only reason for Intel to make the decisions that led them to be vulnerable to Meltdown was to sacrifice correctness/safety for performance, and failing to consider the potential side effects of that sacrifice (cache heat). They obviously made a bad risk tradeoff there (though I don't necessarily fault them).
AMD could definitely make the argument that Meltdown was an irresponsible "benchmark cheat" from Intel.
EDIT: And let me further clarify, ARM was "cheating" and doing the permission check asynchronously on some models too (i.e., some of their chips are also vulnerable to Spectre in Kernel-space aka Meltdown). It's not solely an Intel issue.
Spectre does not need that -- but OTOH is more difficult to exploit; it is in a whole different category. There is a reason they have different names.
"This is not a bug or a flaw in Intel products. These new exploits leverage data about the proper operation of processing techniques common to modern computing platforms, potentially compromising security even though a system is operating exactly as it is designed to."
(Edit: It seems this analogy may be overly kind to Intel. Read all the replies to this comment for more information.)
This is not really like that. The dangers of speculative execution were known before, and, separately, the processor itself provided memory isolation capabilities. That the processor would ignore its own memory isolation principles when speculating about the next instruction to execute is a critical base of these vulnerabilities.
Now Intel has its own Grenfell Tower.
Probably best to just defenestrate this analogy.
Goodness you just described my current work situation that's causing me to rip my hair out. This is a deeply human problem, and I've spent a year trying to mitigate it, but each time the discussion comes up I leave the conference room feeling like certain actors have a personal financial stake in wool fibers.
(No I don't work for Intel)
Edit: Elsewhere I found a link to ARM's page where they break down exactly which of their processors are vulnerable to Meltdown and Spectre, and it looks like quite a few of them are vulnerable to one or both.
In Intel's case, even though the operation is correct, it's the actual design that's flawed.
"Well, that's an interesting hack, I guess you can! But everything works as designed, so there's no flaw!"
Could be not worth making a public stink, could be weaponizing the exploit, could be coordinated disclosure.
[Urban legend, afaik; cats can't right themselves when falling from less than about seventh-floor height..]
The problem is they land with legs extended down from floors 1 to about 7 (bad: impact is transmitted up through shoulders and hips), while higher than about floor 7 they spread their legs out and attempt to parachute (better: impact is uniform over entire ventral surface, terminal velocity is lower, mortality rate drops).
Source: am DVM and interned at AMC in Manhattan.
Got it ,:-$ So marketing 7th-floor homes with a kitty life policy in the service-charge bill is technically not the blatant fraud I assumed, and if the building manager had run into trouble, the lives of many injured kitties, luckily lookalikes of the residents', could have made for the hardest sentencing hearing for any animal-lover judge. English cities badly need the reintroduction of dog licenses. I was asked to help a former neighbor, now a tramp, beg the court to return his dog. The second I met the man, who certainly was denied due process and on paper was atrociously mistreated, he introduced the dog, an unknown, potentially dominant bulldog mongrel, getting it to lock its jaw on his agitated arm. I passed him again later, alone in the rain, the dog's coat sodden; he was high and drunk, careless. Dogs need sorting out in London before Brexit chaos.
I mean, in cities I keep imagining a much greater extent of elevation, the moment we figure out how. Above the smog line... I'm sure I was, in '94, but in '16, same place, in the middle, ugh.
Could it become unlawful to rent out (or live in, generally) homes without harmful-particle filtration?
I'm thinking of renting out my filters just before tenant viewings, because this eliminates the obvious fresh-air chill, takes out fat or any food smell beyond the kitchen, and even lets a visitor smoke.. (a friend closed a rental deal by offering a cigarette, excused by the filters: it's your residence now) but from flu season to the decoration next door, the cost needs an artificial boost. Wish there was a swapHN channel..
Sounds like a bug to me. The PR team must have had fun trying to find a way to downplay this one...
One way to interpret the statement is that the chip design which they utilized is widely available and understood by everyone in the chip community to be good chip design, thus we aren't at fault, because everyone is doing it.
To reiterate: those types of arguments are a serious logical fallacy and an unsound argument, known as appealing to common practice.
They should fire their PR team.
Far from being at risk of being fired, their whole PR department is probably swelled by the ranks of excruciatingly expensive corporate emergency consultants and experts paid precisely to output this kind of menial drivel.
How were they supposed to avoid an exploit that no one would discover for years? And exactly how is "appeal to common practice" a fallacious or unsound argument?
Exactly how much performance -- meaning, how much market share -- is Intel supposed to sacrifice to avoid the possibility of introducing unforeseen bugs?
Also, appealing to common practice doesn't make logical sense.
It is hazardous to apply logical arguments blindly in illogical contexts, such as competitive markets.
Again: mitigating all possible bugs isn't free. Exactly how much effort and expense is worthwhile, and how do you know beforehand?
A malicious process allocates a 256 member array. Then it creates a conditional where the speculatively executed part writes a byte at offset array + kernel_memory_value. The speculative branch is executed but then backed out, but the byte in the array was touched so it is in cache now. Then the malicious process reads all of the members of the array and looks for one that returns much faster than expected (is in cache) and they know the value of that byte. Rinse and repeat to read the rest of the kernel memory. It's not going to give you MB/s of throughput, but it's plenty fast to read some key material or process tables or anything like that.
It's a very impressive attack. My hat is off to whoever thought it up.
Or maybe the idea is speculative execution itself was a dream of the NSA that was Inception-planted into the brains of CPU designers in the 90s; who knows what the theory-of-the-hour is regarding 3-letter-agencies and their capabilities.
Ultimately I think what we're really learning is that guarding against things like microarchitectural attacks on contemporary superscalar, OoO CPUs is going to be an uphill battle that we didn't ever think of due to incidental complexity (among other reasons), and will serve as a new class of attacks. Who knows how long this bug class will exist; we've killed some. What's also likely is that, like most security failures in the industry, this is a result of various things like basic lack of forethought/ill considered design, as opposed to plants (3 letter agencies aren't responsible for the vast majority of security failures you see, it's simple mistakes). But peddling conspiracy theories involving them gets you upvotes, so, you know...
The crux is:
array[value_of_kernel_memory_byte] = 1;
So it speculatively indexes into the array by the value stored at that memory address and writes the byte. Then to figure out what the value is you just have to see which element in the array is cached.
This assignment gets rolled back like it's supposed to. It's when reading the array after the rollback that the exploit measures that a read to array[value_of_kernel_memory_byte] is faster than the rest because that index is already in the cache.
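The recovery step described above can be sketched with a toy model (all names hypothetical; a real exploit times loads from a 256-slot probe array spaced a cache line or page apart, whereas here a flag per slot stands in for "this line is hot"):

```c
/* Toy model of the leak-and-recover sequence described above. */
static int cached[256];

/* The speculated, rolled-back write: the architectural effect vanishes,
 * but the cache footprint (which slot was touched) survives. */
static void speculative_leak(unsigned char kernel_byte) {
    cached[kernel_byte] = 1;   /* models: probe[kernel_byte * 4096] loaded */
}

/* Attacker pass: scan the probe array; the one "fast" slot is the byte. */
static int recover_byte(void) {
    for (int i = 0; i < 256; i++)
        if (cached[i])
            return i;
    return -1;                 /* nothing leaked */
}
```

In the real attack, `cached[i]` is replaced by "does reading probe slot i take noticeably fewer cycles than the rest", measured with a high-resolution timer.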
Is it a valid legal defence to say, that was just marketing lies so don't take it seriously?
There actually is a recognized defense to fraud claims along those lines. The concept is called "puffery": https://definitions.uslegal.com/p/puffery/
More precisely, there is only a PoC for Intel at this time. AMD processors are believed to not be vulnerable. Some ARM processors _are_ believed to be vulnerable.
> for most CPUs you can attack userspace processes with these techniques
Spectre can attack the kernel as well, at least according to http://www.tomshardware.com/news/meltdown-spectre-exploits-i... . It's just harder to use than meltdown.
AIUI, Spectre can be used to attack the kernel, only if you can get code running in kernel-space, via, e.g. eBPF.
Spectre variant 2 attacks vulnerable indirect jump code patterns that exist in the kernel (or some other process), but doesn't require running the attacker's code.
Spectre variant 1 allows you to infer the contents of memory in the same address space, so that's the one where you'd use eBPF to attack the kernel.
Meltdown (variant 3) if I understand correctly can infer memory contents of other address spaces without relying on any assumptions about the code running in the other address space.
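For reference, the vulnerable code shape that variant 1 targets is usually presented as a bounds-checked double indirection. A sketch with illustrative names (the function is architecturally correct; only the speculative cache footprint leaks, which is why such gadgets are hard to spot in kernel or eBPF code):

```c
#include <stddef.h>

/* Classic Spectre variant-1 gadget shape (all names illustrative).
 * Trained with in-bounds x and then called with out-of-bounds x, the
 * predictor runs the body speculatively: array1[x] is read out of
 * bounds and its value selects which line of array2 gets cached. */
static unsigned char array1[16];
static unsigned char array2[256 * 512];
static size_t array1_size = 16;

static int victim(size_t x) {
    if (x < array1_size)
        return array2[array1[x] * 512];
    return -1;   /* architecturally, out-of-bounds x never reads */
}
```

Architecturally the out-of-bounds call always returns -1; the mitigation discussed in this thread is to clamp or serialize the index so that even the speculative path cannot use an attacker-controlled out-of-bounds value.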
Oh, right, ycombinator's URL parser is broken. I fixed the link to work around the buggy parser....
It's a bunch of bullshit trying to dance around the fact that Intel shipped faulty products for years. The fact that other similar products may also be faulty isn't a valid excuse.
Actually exploiting the information leakage isn't easy, and it's compounded by the secrecy surrounding CPU internals. So I think they definitely deserve some blame here. Yes, the PoC is new. But the attack surface was widely known more than a decade ago, and they chose to punt the issue onto software; a solution that was unlikely to really hold water.
From the abstract:
> Information leakage through covert channels and side channels is becoming a serious problem, especially when these are enhanced by modern processor architecture features. We show how processor architecture features such as simultaneous multithreading, control speculation and shared caches can inadvertently accelerate such covert channels or enable new covert channels and side channels.
They were cutting corners; that deserves some opprobrium - especially given their market-dominating position over this timespan.
I am not angry at Intel and in general think they do a good job, but trying to dodge blame here comes off sounding pathetic.
It would only be correct to say "This is not a bug or a flaw exclusive to Intel products"
A bug would occur in the case where the specification specified that there were no visible side effects from these mispredicted speculative executions, and the processor implementor failed to implement that part of the specification. This is a big deal because if it's a bug, Intel is liable.
Likely, for all processors with these kinds of features, the specs will get updated to be more specific about these kinds of side effects.
The KRACK attack from a couple of months ago is due to the fact that the WPA2 specification was ambiguous about what values to accept. Most implementations allowed decrypting traffic, and a few even allowed hosts to impersonate other hosts, yet they were perfectly conformant. I would say there is a flaw in the WPA2 specification.
There are always going to be unintended consequences but this one about effects of branch prediction seems, ironically, quite predictable.
That's a philosophical stance known as "positivism."
Who do you think wrote the spec here?
It's a huge lapse in customer trust, for sure. But if you're just going to play that semantics game...
I think the good point here is whether or not Intel engineers knew of patterns like this (they should have), and whether it is negligent or unethical to release products with these vulnerabilities built in. Insecure (or "undefined") by intention or by coercion.
This is after all what the real execution does: when you try to load from a memory address that you are not allowed to, it does not cause a memory load, it does not affect the cache, and it does not continue executing instructions but instead generates a page fault. We skip the page fault thing in speculative execution, obviously, but we shouldn't continue normal control flow.
I'm sure that is a lot more difficult to implement in silicon and may end up negating the performance benefits of speculative execution, but right now it feels like not having the memory protection in place in speculative execution is a performance hack that exploded in all our faces.
(The exploit involves reading an off-limits location in the speculative branch, then reading a legal location based on the off-limits value; so unless the entire cache was flushed the attack would still be possible.)
Even if speculative loads do not modify the cache, you could still have a side channel by analyzing the memory bus contention.
IMO there is no way to get rid of these timing attacks entirely without getting rid of SMT and speculation.
This would allow keeping the performance of speculative execution in the vast majority of cases, while also fixing the kind of potentially leaking double-indirections that Spectre variant 1 can exploit.
Spectre variant 2 needs to be fixed by isolation of the branch prediction structures, i.e. tagging the BTB and others with the PCID and privilege level.
That's sadly not enough.
There are also potential speculation-based side-channels that are unrelated to cache: timing the idiv operation whose latency "depends on number of significant bits in absolute value of dividend." Furthermore there are many ways to measure contention on the execution ports which can leak information about the speculative execution.
You mention idiv, which is an interesting example. Is it actually observable? Without hyper-threading, almost certainly not, because the execution unit reservations should be dropped as soon as the speculated branch is resolved. With hyper-threading? Perhaps, but it shouldn't be too hard to fix if a non-speculated thread always takes precedence over a speculated one, which makes sense anyway.
In both cases there may be leaks in practice, but those should be fixable relatively easily in hardware, and we've already established that hardware fixes are required anyway.
Another case to think about are dependent conditional branches, since those could affect the instruction cache.
It does seem like a good idea to have a strict barrier instruction against speculative execution just to have a fallback mitigation.
Because eviction is a side-channel too.
With each day passing this looks more and more like the Diesel scandal. Is it OK if all big players do it?
So people can leverage the poopy design of your chip to steal compromising information? And it's okay because the chip is functioning as it should?
It shows how deeply embedded the instinct to preserve the existing design and processes is, compared to going back to the drawing board and designing a secure chip.
Literally prioritizing the established design over security, and maybe even over performance in some cases. Which is interesting, because you'd figure at some point the customer would be considered in the design.
Like are Intel chips designed by robots or something?
That's how Master Lock stays in business.
Our service works just as expected, without any flaws. Our ushers check for proper tickets after the presentation has finished; that way we know who should and should not have had access to the theatre.
But shouldn't you check beforehand?
Again our service works ...
So people made workarounds for those two workarounds: software patches.
Maybe other people will then find (or make?) workarounds for those software patches?
Will we witness a semi-endless cycle of workarounds until the current design specifications slowly become worthless?
Or will we suddenly witness new updated (patched?) design specifications (with some extra free features we never knew we wanted) and all buy new hardware?
So Intel has learned nothing and will be prone to similar mistakes in the future.
And btw, did you notice that a huge number of people already disagree with you, putting your stock down 8% in two days?
"The BIOS Update CD can boot the computer disregarding the operating systems and update the UEFI BIOS (including system program and Embedded Controller program) stored in the ThinkPad computer to fix problems, add new functions, or expand functions as noted below."
Checking ASRock, there aren't any BIOS updates.
> Customers who only install the Windows January 2018 security updates will not receive the benefit of all known protections against the vulnerabilities. In addition to installing the January security updates, a processor microcode, or firmware, update is required. This should be available through your device manufacturer. Surface customers will receive a microcode update via Windows update.
(There is also a PowerShell script to check whether or not you are fully protected.)
I'm not holding my breath.
>In addition to installing the January security updates, a processor microcode, or firmware, update is required. This should be available through your device manufacturer. Surface customers will receive a microcode update via Windows update.
I've installed the hotfix for Windows, but when I run the PowerShell script to determine whether mitigation is active, the script tells me that it's not active, due to lack of hardware support. The script then goes on to give the recommendation to "Install BIOS/firmware update provided by your device OEM that enables hardware support for the branch target injection mitigation."
It's a 1-year-old ASUS laptop and I would be surprised if they even give a sane response to my question to their technical support (I doubt they will even know what I'm talking about).
There has been recent news about critical security issues in Intel CPUs, requiring a firmware update for all laptops and motherboards with Intel chips.
The vulnerabilities include the potential for malicious websites to read sensitive system memory, including passwords and encryption keys.
I have model XXYY-ZZZZ, do you have any information on when an update will be available, and where I can access it?
If not, can you attempt to escalate this ticket? The security issues are starting to make the rounds in the news, and more information can be found at https://meltdownattack.com
Thank you, and happy new year :)"
Seems like it might be worth a shot.
thx for ur interst in our product. our team will reach u. we have many new products. hope u have great new year!
- OEM volume sales"
I have another Gigabyte MB that I suspect is too old to receive BIOS updates anymore, so I am really hoping that at some point these microcode updates do come through Windows and not just via BIOS updates.
...so I suspect this is just a standard disclaimer.
Generally, microcode updates are distributed with the OS or OS distribution.