Problem: Under complex micro-architectural conditions, short loops of less than 64 instructions that use AH, BH, CH or DH registers as well as their corresponding wider register (e.g. RAX, EAX or AX for AH) may cause unpredictable system behavior. This can only happen when both logical processors on the same physical processor are active.
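For concreteness, here is a minimal sketch of the code shape the erratum describes: a short loop that touches AH while also using the full RAX. This is emphatically not a guaranteed reproducer (the erratum requires "complex micro-architectural conditions" plus an active sibling hyperthread), and the function name is mine; GCC inline assembly, x86-64:

    #include <stdint.h>

    uint64_t mix_high_byte(uint64_t x, uint32_t iters)  /* iters must be > 0 */
    {
        asm volatile(
            "1:\n\t"
            "movb %%ah, %%al\n\t"   /* partial 8-bit write that reads AH */
            "addq $1, %%rax\n\t"    /* full-width use of RAX */
            "subl $1, %[i]\n\t"
            "jnz 1b\n\t"
            : "+a"(x), [i] "+r"(iters)
            :
            : "cc");
        return x;
    }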
I wonder how many users have experienced intermittent crashes etc. and just nonchalantly attributed it to something else like "buggy software" or even "cosmic ray", when it was actually a defect in the hardware. Or more importantly, how many engineers at Intel, working on these processors, saw this happen a few times and did the same.
More interestingly, I would love to read an actual detailed analysis of the problem. Was it a software-like bug in microcode e.g. neglecting some edge-case, or a hardware-level race condition related to marginal timing (that could be worked around by e.g. delaying one operation by a cycle or two)? It reminds me of bugs like https://news.ycombinator.com/item?id=11845770
This and the other rather scary post at http://danluu.com/cpu-bugs/ suggest to me that CPU manufacturers should do far more regression testing. I would recommend demoscene productions, cracktros, and even certain malware, since they tend to exercise the hardware in ways that more "mainstream" software wouldn't come close to. ;-)
(To those wondering about ARM and other "simpler" SoCs in embedded systems etc.: They have just as many if not more hardware bugs than PCs. We don't hear about them often, since they are usually worked around in software that is customised exactly for the application and doesn't change much.)
After several full days of team-wide debugging, we had no better explanation based on the available evidence than cosmic rays, or a hardware bug. IBM's POWER processor designers worked across the street from us, so we tried to get them to help: first by asking nicely, then by escalating through management channels.
Their reply was more or less: we've run our gamut of hardware tests for years, and your assertion that it's hardware related is vanishingly unlikely... we don't look into hardware bugs unless you can prove to us beyond a doubt it's hardware related. Cache-aligned memory corruption without any other circumstantial evidence isn't enough.
The crashed test system had been sitting in the kernel debugger for several weeks by then, and there was no more circumstantial evidence to be had beyond the traces. By all accounts, a corruption like this was never seen again.
If we were right and it was evidence of a hardware failure, this is one way such a problem can go undetected. I hope it was something else, or even a cosmic ray, but we'll never know for sure, I guess.
It's also possible for a _byte_ offset to be wrong, and these types of errors need not occur at a page boundary. Useful things to do with a raw memory dump with this kind of corruption (at least in AIX kernel):
- Identify the page containing the corruption, and find all activity concerning the physical frame in the traces.
- Try to reverse engineer the bad data. Often there are pointers you can follow. You would have to manually translate the virtual address to physical frames, but that's pretty simple to do, both for user space and kernel space (in our case, it was always in kernel space, which was 64-bit and just used a simple prefix; see the sketch after this list).
From there, you just have to be crafty and thorough in following the breadcrumbs and either identifying the bug in code, or at least who should investigate next.
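For illustration, a minimal sketch of the "simple prefix" style of kernel virtual-to-physical translation mentioned above. The prefix value is a hypothetical placeholder, not AIX's actual 64-bit kernel layout:

    #include <stdint.h>

    #define KERNEL_PREFIX 0xF000000000000000ULL  /* hypothetical placeholder */

    /* Kernel-space virtual addresses map to physical frames by dropping
     * the fixed prefix; user-space addresses need a real page-table walk. */
    static uint64_t kvirt_to_phys(uint64_t vaddr)
    {
        return vaddr & ~KERNEL_PREFIX;
    }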
In my original post, note that the corruption was on a _CPU cache_ boundary (128 bytes on POWER). IIRC, the containing page was allocated and pinned for a longer time than the trace buffer tracked (it's been a few years, though).
To make things fun, AIX and POWER support multiple page sizes: 4K, 64K, 16MB, 16GB. The hardware can also promote/demote between 4K and 64K for in-use memory... lots of fun :-)
AIX has a pageable kernel. If kernel code can't handle a page fault, it needs to "pin" the memory.
... note that any of this can be really outdated, it's been almost a decade since I was an expert in this stuff :-)
(Edit: formatting... how do you make a pretty bulleted list on HN??)
Had to have been a cosmic ray at least once, right?
This is yet another of the many places where the complexity of the x86 ISA shows up and makes its hardware implementations more complicated: the x86 ISA has instructions which can modify the second-lowest byte of a register while keeping the rest of the register unmodified (but AFAIK no instructions which do the same for the third-lowest byte, showing its lack of orthogonality).
For in-order implementations, like the ones which originated the x86 ISA, it's not much of a problem. But for out-of-order implementations, which do register renaming, partial register updates are harder to implement, since the output value depends on both the output of the instruction and the previous value of the register. The simplest implementation would be to make an instruction depending on the new value wait until it's committed to the physical register file or equivalent, and that's probably how it was done for these partial registers before Skylake.
For Skylake, they probably optimized partial register writes to these four partial "high" registers (AH, BH, CH, DH), but the optimization was buggy in some hard-to-hit corner case. That corner case probably can only be reached when some part of the out-of-order pipeline is completely full, which is why it needs a short loop (so the decoder is not the bottleneck, AFAIK there's a small u-op cache after the decoder) and two threads in the same core (one thread is probably not enough to use up all the resources of a single core). The microcode fix is probably "just" flipping a few bits to disable that optimization.
And this shows how an ISA is more than just the decoding stage; design decisions can affect every part of the core. In this case, if your ISA does not have partial register updates (usually by always zero-extending or sign-extending when writing to only part of a register, instead of preserving the non-overwritten parts of the previous value), you won't have the extra complexity which led to this bug. AMD partially avoided this when doing the 64-bit extension (a partial write to the lower 32 bits of a register clears the upper 32 bits), but they kept the legacy behavior for writes to the lower 16 bits, or to either of the 8-bit halves of the lower 16 bits.
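To make that asymmetry concrete, here's a small sketch (x86-64, GCC inline assembly; the behavior shown is the ISA-defined semantics just described):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t r = 0x1122334455667788ULL;

        /* 32-bit write: zero-extends, clearing bits 63:32 */
        asm("movl $0, %k0" : "+r"(r));
        printf("%016llx\n", (unsigned long long)r);  /* 0000000000000000 */

        r = 0x1122334455667788ULL;
        /* 16-bit write: merges, preserving bits 63:16 */
        asm("movw $0, %w0" : "+r"(r));
        printf("%016llx\n", (unsigned long long)r);  /* 1122334455660000 */
        return 0;
    }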
My guess is that's where the bug is; the behavior for partial register access stalls (insert one extraneous uop to combine, e.g., AH with RAX) is unchanged since Sandy Bridge.
The LSD resides in the BPU (branch prediction unit), and it basically tells the BPU to stop predicting branches and just stream from the LSD. This saves energy. However, predicting is different from resolving. Branch resolution still happens, and when resolution shows the speculation failed, the LSD bails out.
In any case, 64 μops is a lot. That's a good sized inner loop.
Simultaneous multithreading, which Intel markets under the name Hyper-Threading when using two threads.
> For Skylake, they probably optimized partial register writes to these four partial "high" registers (AH, BH, CH, DH), but the optimization was buggy in some hard-to-hit corner case.
The high registers (AH/BH/CH/DH) are nearly written out of existence by the REX prefix in 64-bit mode. The manual(s) effectively call out not to use them, as they're now emulated and not supported directly in hardware.
The 16-bit registers (AX/BX/DX/CX) are in a worse situation; it ends up costing additional cycles to even decode these instructions, as the main decoder can't handle them and you have to swap to the legacy decoder, and you'll end up losing alignment. This costs ~4-6 cycles. Also, the perf registers to track this were only added in Haswell (and require Ring 0 to use).
The high registers and 16-bit registers are a huge wart that Intel seems to be trying desperately hard to get us to stop using.
> That corner case probably can only be reached when some part of the out-of-order pipeline is completely full, which is why it needs a short loop (so the decoder is not the bottleneck, AFAIK there's a small u-op cache after the decoder)
But in _some_ scenarios when a loop can fit completely within this cache it'll be given extreme priority. This is a way to max out the 5 uOPs per cycle Intel gives you. It'll flush its register file to L1 cache piecemeal as it continues to predict further and further ahead, speculatively executing EVERY PART OF IT in parallel.
In short, this scenario is extremely rare. uOPs have stupidly weird alignment rules, which you can boil down to:
Intel x64 processors are effectively 16-byte VLIW RISC processors that can pretend to be 1-15-byte AMD64 CISC processors at a minor performance cost.
The real issue here is whether, when Loop Stream mode ends, it is properly reloading the register file and OoO state.
This is likely just a small micro-code fix. The 8-low/8-high/16-bit/32-bit/64-bit weirdness is likely because somebody wasn't doing alignment checks when flushing the register file.
On Skylake/Kaby Lake. Ivy Bridge, Sandy Bridge, Haswell, and Broadwell limited this to 4.
Volume 3 performance counting registers; I think we're up to 12 now on Broadwell.
 Volume 3 Chapter 22.214.171.124 (Page 107)
I think you meant REX prefix but even that doesn't make any sense.
High registers are a first class element of the Intel 64 and IA-32 architectures. They aren't going anywhere. Microarchitectural implementations are an entirely different thing.
That aside, where in the manuals does Intel say not to use the high registers? They're pretty clear about such warnings and usually state them in Assembly/Compiler Coding Rules.
From the parent:
> For Skylake, they probably optimized partial register writes to these four partial "high" registers (AH, BH, CH, DH), but the optimization was buggy in some hard-to-hit corner case.
That is about right. I don't agree with the preceding slap at x86 but this is a good summary.
BTW, writing to the low registers is in principle also a partial register hazard, but Intel sees fit to optimize that as the more common case.
In particular, mov AH,BH is not emulated from the MS-ROM, which is just hella slow. It uses two μops on Sandy Bridge and above. This is covered in 126.96.36.199 Partial Register Stalls.
Lastly, there is no section 188.8.131.52 in the Intel® 64 and IA-32 Architectures Software Developer’s Manual which is 3 volumes. You must be talking about the Intel® 64 and IA-32 Architectures Optimization Reference Manual which is a single volume. And it isn't clear how that section furthers your argument.
Someone really ought to tell clang and gcc this; they both happily use 16-bit registers for 16-bit arithmetic.
Anyway, Intel obviously already has special optimizations for many partial register accesses, dating back to Sandy Bridge. While it's quite possible that they left out the high registers initially (no clue, don't care), if they did they could have decided to include them in Skylake. Who knows though...
What are you even talking about with the LSD? The LSD is entirely before any register renaming and the entire out-of-order bits of the CPU. It's likely the LSD is involved only because that (plus hyperthreading) might be the only way to get enough in-flight µops to trigger whatever is going wrong, whether or not it's due to optimizations for partial register accesses.
Anyway, since you asked: In AArch64 that would be written "bfi w0,w0,#8,#8". "bfi" is an alias of "bfm", an instruction far more flexible and useful than any of the baroque x86 register names. BFM can move an arbitrary number of bits from any position in any register to any other register, and it has optional zero-extension and sign-extension modes.
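For readers who don't speak AArch64: that bfi copies the low byte of w0 into its second-lowest byte, leaving everything else untouched. The same operation in C (a sketch; the function name is mine):

    #include <stdint.h>

    /* Equivalent of "bfi w0,w0,#8,#8": insert bits [7:0] of x at bit
     * position 8, width 8, preserving all other bits. */
    uint32_t insert_low_byte(uint32_t x)
    {
        return (x & ~0xFF00u) | ((x & 0xFFu) << 8);
    }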
I've never seen Intel do a very good job at failure analysis or following on with failures unless prodded very hard.
* For Intel that would be companies like Dell, Apple, HP, and maybe a couple of others.
So in that super rare case of actually running into a CPU defect, it's a mindfuck, it'll drive you crazy. You'll be looking for the flaw in your algorithm which makes it fail once a week under production load. But you just can't find it, it makes no sense ...
(When it comes to drivers for network/storage/graphics etc devices, it's a whole different story. Those things are piles of bugs that need work-arounds in drivers.)
The symptom was that a board with a specific microcontroller on it would be working fine, then after a power cycle it might not keep working. A flash dump would show that the reset vector, the first byte of flash on that system, would be all zeroed out. Of course the system would not run anymore, but why did it happen? After months of intermittent debugging and trying to reproduce it, the cause was determined. At least under certain conditions, the brownout detection level was lower than the voltage level that caused the CPU to make errors. If the board lost power slowly, then the CPU would start executing corrupted / arbitrary instructions, which generally included lots of zeros. It would occasionally write zeros to the zero address, bricking the board.
Since then we have external power monitoring and reset circuits on all the new boards, but existing ones needed a fix. Luckily the board had power failure interrupt connected, so when that triggers we reconfigure the CPU to execute on the slowest possible clock rate, which greatly reduced the occurrence.
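A sketch of that mitigation for a generic MCU; the register name and values (CLK_CTRL, CLK_DIV_MAX, the address) are hypothetical placeholders for whatever the actual part provides:

    #include <stdint.h>

    #define CLK_CTRL    (*(volatile uint32_t *)0x40001000u)  /* hypothetical */
    #define CLK_DIV_MAX 0xFu                                 /* hypothetical */

    /* Power-fail interrupt handler: drop to the slowest possible clock so
     * the core keeps executing correctly while the supply rail sags. */
    void power_fail_isr(void)
    {
        CLK_CTRL = CLK_DIV_MAX;
    }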
Or on the network level, a VPN that failed only when traversing one possible route between company offices.
"Don't use the bookshelf over there, physics is broken on that shiny spot." :D
// Workaround: Make the accessor method return fuzzy values for either of those values.
Modern day VM software has various levels of exception checking, and code to catch/mitigate/etc when bugs crop up.
So, a universe-capable simulator might have any kind of behaviour if/when a bug occurs. It doesn't need to be an unbounded, runaway scenario. :)
It probably happens more often than we imagine ;-)
> "Don't use the bookshelf over there, physics is broken on that shiny spot." :D
You know "The Animatrix - Beyond"?
Seems like the same kind of concept. :)
If Intel had to completely disable hyperthreading in Skylake and Kaby Lake, that would make the premium anyone paid for i5 vs i7 worthless.
despite what cpuinfo tells you, no HT in i5.
and my previous comment was ironic :)
Or has that changed? At one point, i7 was full-featured, i5 was an i7 with HT disabled, and i3 was i7 with HT intact but smaller caches. Is that different with Skylake/Kabylake?
> and my previous comment was ironic :)
Ack - sorry. I must be irony-impaired. That's why I don't post very often. :-)
Yes it happens, but software bugs happen every day, whereas if your system blue screened every day you know dang well you'd be on the phone with the hardware vendor for a refund / new system.
Well, since hundreds of millions of people have been using Skylake/Kaby Lake CPUs for 2 years now, and we're only now learning about this, obviously this is not of the "system blue screens every day" variety but a very rare bug.
Not to be anal but we can't know this.
I think we're getting to levels of complexity where the process Intel uses, with lots of different QA and testing teams doing their best to look for bugs, just isn't going to cut it. We need formally verified models transformed step-by-verified-step all the way down to the silicon. It's already feasible, with free tools, to formally verify your high-level model (using e.g. LiquidHaskell) and then transform this to RTL (using e.g. Clash). With Intel's QA/testing budget, it's well within reach to A) verify the transformation steps and B) figure out how to close the performance gap between machine-generated (but maybe slower) and hand-rolled (faster, but evidently wrong) silicon.
For example, you cannot possibly formally verify the fetcher unit on its own, because the state space that you would need to cover, over several cycles, for all the module inputs and outputs is beyond the capability of any formal verification tool.
Typically, you run formal verification on sub-blocks of sub-blocks of FUs.
For this particular bug, it looks like multiple functional units are involved, so it might have been missed by formal verification.
Then use a proof methodology that doesn't require exhaustive enumeration. This objection is actually fairly alarming to me; perhaps there is a larger disconnect between industrial formal techniques for hardware and software than I thought. Strings also occupy a "large state space", but this obviously doesn't prevent us from doing formal verification on functions over strings.
> Then use a proof methodology that doesn't require exhaustive enumeration.
> Strings also occupy a "large state space", but this obviously doesn't prevent us from doing formal verification on functions over strings.
Ok, so what this tells me is that you're not aware of what modern verification techniques look like.
There's not really any one resource I can point you to, but take a look at these links. I've used these or similar technologies personally, but there are others I haven't used.
> What is your reason to think this is 'obviously' the case?
Because I've done this, and anyone who claims to be familiar with formal methods should be at least passingly familiar with all the things I mentioned.
Even if you aren't, if you've ever heard of SAT (a fairly universal CS concept) you should at least be familiar with the idea that non-exhaustive proofs are a thing.
> Ok, so what this tells me is that you're not aware of what modern verification techniques look like.
Notwithstanding your example that it's possible to non-exhaustively verify some functions on strings, there is quite some distance between that and verifying just any function.
> Because I've done this,
I think this may be an example of the disconnect between research and the industry. Researchers say things are solved when they have shown something is possible and published about it. They feel it's then up to the industry to extrapolate, while research moves on to exciting new and greener pastures. Meanwhile the industry thinks the results are too meager, thinks the extrapolation involves a lot of technical difficulties and generally is not willing to spend enough money on what they cannot see as anything but a longshot.
If you do any model checking at all, with tools like SPIN or TLA+, you are already in the state-of-the-art minority in industry.
If done naively, it doesn't work because you are unlikely to hit the problematic bug. So you have to be clever: do not generate inputs uniformly! Generate those inputs which are really really nasty all the time. If you find a problem, then gradually simplify the case until you have reduced the complexity to something which can be understood.
With some experience of where past bugs have tended to live, I've been able to remove lots of edge-case bugs from my software via this method. (See "QuickCheck", and the sketch below.)
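A QuickCheck-style sketch of the idea in C: bias random inputs toward nasty edge values rather than sampling uniformly, then check a property. The function under test (saturating_add) and its property are hypothetical examples, not from this thread:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t saturating_add(uint32_t a, uint32_t b)
    {
        uint32_t s = a + b;
        return s < a ? UINT32_MAX : s;  /* clamp on wraparound */
    }

    /* Don't generate inputs uniformly: half the time, pick a
     * known-dangerous value. */
    static uint32_t nasty_u32(void)
    {
        static const uint32_t edge[] = { 0, 1, 0x7FFFFFFFu, 0x80000000u, UINT32_MAX };
        if (rand() & 1)
            return edge[rand() % 5];
        return ((uint32_t)rand() << 16) ^ (uint32_t)rand();
    }

    int main(void)
    {
        for (long i = 0; i < 1000000; i++) {
            uint32_t a = nasty_u32(), b = nasty_u32();
            uint32_t s = saturating_add(a, b);
            if (s < a || s < b) {  /* property: result never below either input */
                printf("counterexample: a=%u b=%u\n", a, b);
                return 1;
            }
        }
        puts("property held for 1000000 cases");
        return 0;
    }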
but Intel isn't allowed to have issues in their processors, even if they can fix them with a software (microcode) update?
They could've caught and fixed this one with some more testing, before actually releasing. Formal verification isn't necessarily going to help, if it's a statistical type of defect --- thus the concept of wafer yield, and why dies manufactured from the exact same masks can behave completely differently with respect to the clock speeds attainable and voltages required.
Therefore, CPU designers verify the important units (e.g., the ALU) independently of each other, and then try to verify their interaction. But a system level design verification check is simply not possible.
Obviously, faults that result from fabrication can still take place, so they are tested for using BIST and JTAG. Test coverage can be pretty high, but obviously not 100%.
As you can see, there are still a ton of places where hardware errors can seep through.
I believe the main issue with formally verifying everything is that reasoning about parallel code is extremely hard and might well make the entire endeavour unattainable. It would be great to have formally verified CPUs, though.
> They could've caught and fixed this one with some more testing, before actually releasing.
Isn't this true of any bug?
I know they did AMT but that was a special case and different.
The TSX situation was indeed unfortunate, I have one of the affected CPUs. But it was a new feature that was broken, not something that used to work, so the bug's impact was less serious, and disabling the feature didn't have too much of an impact.
The Dell XPS 9350 (Skylake) has numerous issues with GPU noise, USB-C compatibility, wifi reliability - Dell don't care unless your consumer laws are strong enough to make them care.
I don't think this is sustainable, and I think massive arrays of relatively simple processors are the way forward. First, we will need a culture shift that teaches programmers to write concurrent programs from the start, and this will take a lot of time (because technology moves a lot faster than culture).
EDIT: My wording was poor, but what I meant was that hardware vendors do make bugs happen due to deficiencies in their design, and those bugs are not only tolerated but often reflected in software. That runs counter to OP saying that apparently hardware vendors are not allowed to have bugs, but the software ones are:
> So it's okay for software to have bugs that get fixed (I think everybody here acknowledges that software will always have bugs), but Intel isn't allowed to have issues in their processors
Ever since I moved from C to Java (I like low level stuff but the company I joined is such) I have been having one of three major problems - logic bugs, too many frameworks, causing bugs due to lack of in depth understanding of each one of them, and GC bottlenecks. Of all of them, I hate the GC ones the most.
Formally verifying something like a multiplier block is difficult but doable if you care. Formally verifying an FPU is probably at about the limit.
If you want formal verification, you would have to simplify a modern microprocessor a lot.
Perhaps we are imagining different formal verification methodologies. Can you tell me what kind of formal verification you're referring to?
I wonder if it's exploitable ;) Maybe that's why they never release the details of these CPU bugs.
> Was it a software-like bug in microcode e.g. neglecting some edge-case, or a hardware-level race condition related to marginal timing
Not sure about microcode, these x86 cores execute many simple operations natively, by means of dedicated circuits. Microcode is only involved in emulation of complex x86 instructions.
And a hardware problem doesn't have to be marginal timing. It could simply be a logic bug, i.e. the circuit operates as designed, but it was designed to do something other than what it should be doing in some unforeseen circumstances.
Modern PC demoscene productions don't really do very funky things CPU-wise anymore. Most run mostly in shaders, actually. Amiga and C64 is a different story, but Intel isn't making that many Amiga CPUs :-)
The issue had been under investigation by the OCaml community since 2017-01-06, with reports of malfunctions going at least as far back as Q2 2016. It was narrowed down to Skylake with hyper-threading, which is a strong indication of a processor defect. Intel was contacted about it, but did not provide further feedback as far as we know.
Fast-forward a few months, and Mark Shinwell noticed the mention of a possible fix for a microcode defect with unknown hit-ratio in the intel-microcode package changelog. He matched it to the issues the OCaml community were observing, verified that the microcode fix indeed solved the OCaml issue, and contacted the Debian maintainer about it. Apparently, Intel had indeed found the issue, *documented it* (see below) and *fixed it*. There was no direct feedback to the OCaml people, so they only found out about it later.
I'm not a particularly big fan of Intel's practices, but the reactions in this thread seem a bit too strong to me.
That's just not true. Intel published the erratum in April: https://www3.intel.com/content/dam/www/public/us/en/document... - search for SKL150. It was also clearly noted in Debian's intel-microcode changelog on May 15: http://metadata.ftp-master.debian.org/changelogs/non-free/i/...
They absolutely should have followed up to the OCaml people's support ticket. But sloppy followup is an issue that every large project encounters.
And not only that, but they did much more than the average Joe and essentially pin-pointed the issue for them. So yeah, inexcusable it is.
« Instruction Fetch May Cause Machine Check if Page Size and Memory Type Was Changed Without Invalidation »
« Execution of VAESIMC or VAESKEYGENASSIST With An Illegal Value for VEX.vvvv May Produce a #NM Exception »
but something like this should be announced clearly.
(I keep my microcode packages up to date, but I don't normally bother rebooting when an update comes in.)
The fact that Intel does not do that with a bug of this magnitude shows how much respect they have for their users.
What, the "found the issue, documented it, and fixed it" part?
This recently came to my attention while debugging some increasingly frequent lockups, which took me a solid week of eliminating all seemingly more likely causes (VirtualBox, nVidia driver, faulty RAM, etc). In the end I found the culprit while digging into the Intel specification updates: my Core i7-5820K (and most other Haswell-E and Broadwell processors) has a bug when leaving package C-states, and the only workaround is to disable C-states above level 1. Timely updated microcode, which applies this workaround, would have saved me that week.
By ancient, perhaps you mean the version that was current at the time 16.04 shipped?
> I cannot understand why they left out 16.04 there. So much for LTS, it seems.
See https://wiki.ubuntu.com/StableReleaseUpdates. The point of an LTS (or any stable release, for that matter) is that it doesn't change by default. For those who want to keep everything up-to-date, Ubuntu ships a new release on a six month cadence. If you choose not to use that, then you shouldn't be surprised when things aren't updated, since that's exactly what you opted in to.
The microcode package may warrant an exception, however, and we have a bug to track that. It's tricky because without the source we cannot pick apart what changed, or determine whether any changes meet our update policy. We have to be careful. Sooner or later some user will inevitably come along to tell us that a microcode update broke things, and ask why we didn't fulfill our LTS promise by not changing it.
You say Ubuntu has a bug to track the microcode package as an exception, but that doesn't seem to be having a positive effect, does it? Precisely because Ubuntu cannot pick it apart, what is it that they're trying to decide in the bug? Why is Ubuntu second guessing Intel in deciding which microcode update to apply and which to skip? How would Ubuntu know that better than the manufacturer? The Intel specification updates list tons of processor bugs including some very critical ones, so we know the microcode updates do help with some of those. When was the last time that an Intel microcode update brought a new bug or made something worse? I'm not aware of any such instance, and although that may indeed happen sometime it doesn't seem as likely as facing existing known bugs, right?
I think it could be argued that it is up to the user to decide (say, a warning during installation), or that Ubuntu could choose to apply all microcode updates by default and let the user opt out. Ubuntu might impose a certain delay, say a month or two at most, in order to see if a microcode update gets withdrawn or ends up too buggy. But I don't think Ubuntu could reasonably choose to skip all microcode updates for 1.7 years like it did in my case, or to choose which ones to apply and which ones to skip, like it seems to be trying to. Microcode should be treated like other proprietary software, but with special diligence due to its criticality. If nVidia says a particular driver release is very buggy and should be updated, Ubuntu promptly updates it. Why would Ubuntu sit on known critical microcode updates then? If, and it's really a big if, some microcode update eventually brings a new bug and Ubuntu deployed it, it would be Intel's fault and not Ubuntu's.
> I think it's a fair and tame adjective considering this is processor microcode we're talking about...
Processor microcode updates haven't, to my knowledge, ever automatically been applied by distributions in the past. In light of that, I don't see how it's reasonable to have an expectation otherwise.
> You say Ubuntu has a bug to track the microcode package as an exception, but that doesn't seem to be having a positive effect, does it?
By being careful before pushing out an update to millions of users? I'd say that's a positive effect.
> ...what is it that they're trying to decide in the bug?
Whether to continue to let users have a choice, or to take that choice away by doing things automatically for them. There are also packaging-based regressions to consider, not just the microcode ones. For example: if the wrong microcode is applied to the wrong processor because of a packaging error, who would you be blaming? Intel or Ubuntu?
> But I don't think Ubuntu could reasonably choose to skip all microcode updates for 1.7 years like it did in my case...
Ubuntu didn't "choose to skip" all microcode updates. Ubuntu didn't choose at all; pushing an update requires a specific effort.
In light of this issue, Ubuntu is now considering what to do about it, responsibly, for all users. Both for this particular issue, and for microcode updates going forward.
I wasn't aware of this, thanks. Though I searched, and I found that they've been causing their users problems by doing so: https://rhn.redhat.com/errata/RHBA-2017-0028.html
I think this backs up my point: care must be taken.
The whole point of an LTS release is that they will keep a stable baseline for all the packages they're distributing and apply a certain amount of integration testing. If you believe upstream knows best (and I'm not saying you're wrong to do so) then why use LTS in the first place?
I think you're confusing an objective (LTS) with a tactic (keeping a stable baseline). Certainly the whole point of LTS is not to keep a stable baseline, but to provide long-term support. And that is clearly violated when Ubuntu chooses not to provide support when it is known to be needed (e.g., listed in an Intel spec update) and a solution is made available by the vendor (e.g., the Linux-specific microcode update from Intel). The whole point of LTS is to avoid the bleeding edge while fixing known bugs. Microcode updates are not bleeding edge; they're just patches for known bugs.
No, you've got it backwards. If you just wanted long-term support you'd use a rolling release distribution, of which there are any number (yes some rolling release options are "bleeding edge", but there are stable options too). The whole point of LTS releases is that they are stable baselines that are supported in the long term.
The customer or user of LTS wants, well, Support (fixes) for some extended time (Long Term) while avoiding problems (fewer bugs), perhaps at the cost of getting fewer new features. The whole point of LTS is self-explanatory.
Keeping a stable baseline is a tactic used to provide long-term support by reducing the number of new bugs at the cost of fewer new features. But keeping a stable baseline does not by itself attain the key word, the noun in LTS: Support. Support is provided only by introducing fixes, and that is exactly what has been omitted in the case of the Intel microcode bugs.
$ geteltorito n1cur14w.iso > eltorito-bios.iso # provided by the genisoimage package on Ubuntu
$ sudo dd if=eltorito-bios.iso of=/dev/sdXXX # replace with your usb drive with care to not write over your disk
Details and if you want the script I link to it from here: https://forum.xda-developers.com/hardware-hacking/chromebook... - don't judge my shitty bash skills.
Many people have neither the interest nor the hardware access to overclock, and these processors have less overclocking headroom than earlier designs. Nevertheless, the hyper-threading hardware itself generates heat, restricting the overclocking range for given cpu cooling hardware. In this case, turning off hyper-threading pays for itself, because one can then overclock further, overtaking any advantage to hyper-threading.
Just checked numbers. That was my expectation as well until I came across code that experienced a bit over 80% speedup when HT was used.
If Intel used marketing names that were more closely related to technical reality, then when something like this happens they wouldn't have so many customers finding themselves in the "maybe I'm affected by this horrid bug" box.
If so, there's a way to disable hyper-threading, but you need Xcode (Instruments).
Open Instruments. Go to Preferences. Choose 'CPU'. Uncheck "Hardware Multi-Threading". Rebooting will reset it.
sysctl -n machdep.cpu.brand_string
Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz
I'm much more annoyed by the completely unpredictable desktop assignment on monitors when hotplugging DisplayPort connections on multiple displays. This one bothers me every day.
If there was data corruption you might not notice.
What I see:
Same monitor/monitors plugged into same ports produce consistent configs.
I get a unique config per monitor/port.
Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
> machdep.cpu.model: 78
> machdep.cpu.stepping: 3
> machdep.cpu.microcode_version: 174
Can't find if 174 is the fixed version or not.
So this is one of the models for which there exists a fix, as per the email.
On laptops, some i5s are not real quad cores but dual cores with Hyperthreading.
Hmm... either this statement is wrong or this desktop's /proc/cpuinfo is wrong:
$ grep -E 'model|stepping|cpu cores' /proc/cpuinfo | sort -u
cpu cores : 4
model : 94
model name : Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
stepping : 3
$ grep -q '^flags.*[[:space:]]ht[[:space:]]' /proc/cpuinfo && echo "Hyper-threading is supported"
Hyper-threading is supported
$ grep '^flags.*[[:space:]]ht[[:space:]]' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts
rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2
ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm tpr_shadow vnmi flexpriority ept vpid dtherm ida
dmidecode seems to give more accurate info for this:
$ sudo dmidecode -t processor | grep 'Count:'
Core Count: 4
Thread Count: 4
$ sudo dmidecode -t processor | grep -E 'Flags:|HTT|Status|Count'
Status: Populated, Enabled
Core Count: 4
Thread Count: 4
>A value of 0 for HTT indicates there is only a single logical processor in the package and software should assume only a single APIC ID is reserved. A value of 1 for HTT indicates the value in CPUID.1.EBX[23:16] (the Maximum number of addressable IDs for logical processors in this package) is valid for the package.
UPDATE: It appears these flags refer to each initial APIC ID, so it seems the HTT flag value should be 0 in all cases where the overall processor:thread ratio is 1, suggesting there might either be incorrect information in the CPUID instruction for some Intel CPUs or the kernel is not correctly evaluating CPUID.1.EBX[23:16].
Hopefully, someone more versed in CPUs can correct me here.
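For anyone who wants to inspect the raw bits themselves, here's a small sketch using GCC's <cpuid.h> (the interpretation of the fields follows the documentation quoted above):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;
        /* CPUID.1:EDX[28] is the HTT flag */
        printf("HTT flag:        %u\n", (edx >> 28) & 1);
        /* CPUID.1:EBX[23:16] is the maximum number of addressable logical
         * processor IDs in the package (an upper bound, not the number of
         * enabled threads) */
        printf("max logical IDs: %u\n", (ebx >> 16) & 0xFF);
        return 0;
    }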
Oh well... so far the machine (running Windows 10) has been stable minus one or two random lockups in 2 months of heavy usage which could be attributed to this. Guess I wait...
Well done Debian folks!
Also the advisory seems to imply that the OCaml compiler uses gcc for code generation, which it does not -- it generates assembly directly, only using gcc as a front end to the linker.
Yes, but that assembly code contains calls into the OCaml runtime, for garbage collection etc. If I understand correctly, the particular loop affected by this bug was somewhere in this memory management code. That code is written in C and compiled with a C compiler.
This isn't a hypothetical; what did Intel do when the only fix for broken functionality was to disable TSX entirely?
What happens when your laptop's display has frequently broken pixels?
Well, they wait for 6 or more before they say it's out of spec.
Windows stores its microcode in C:\Windows\System32\mcupdate_GenuineIntel.dll which is a proprietary binary file and you can't simply replace it with Intel's microcode.dat file (which is ASCII text), so you have to use a third-party tool such as VMware's one.
1. Download and extract the zip file in the first paragraph
2. Modify the install.bat file so that the line which reads `for %%i IN (microcode.dat microcode_amd.bin microcode_amd_fam15h.bin) DO (` only contains the microcode.dat parameter (since you obviously don't have an AMD CPU, and the tool is made for both)
3. Download and extract microcode.dat from Intel's website (https://downloadcenter.intel.com/download/26798/Linux-Proces...) and place it into the same directory as the VMware tool
4. Run install.bat with admin privileges
5. Hit cancel when it tells you that the AMD microcode files are missing, and you're done
The CPU microcode will be updated immediately (yes, while Windows is running). The service will also run on each boot and update your CPU microcode, since microcode updates are only temporary and are lost each time you restart. You can check Event Viewer for entries from `cpumcupdate` to see what it has done. It's advised to run a tool that shows the microcode version before installing (such as HWiNFO64) so you can re-run it after installing and confirm that the version has changed.
I have done this and it works as described. I went from 0x74 to 0xba as shown by the μCU field in HWiNFO64, and I have an i7-6700k.
Usually (always?) it's not a ROM update; the encrypted microcode blob is loaded into the CPU by the OS on every boot via CONFIG_MICROCODE.
some linkrot: http://imgur.com/a/z1uLv
I've been using the Thunderbolt 3 dock with two external monitors and occasionally get a little glitch; prolly a loose cable, I think.
I've downloaded the bitcoin blockchain, done quite a bit of work in pycharm + chrome, multiple projects, flow and webpack in the background and haven't had any sort of crashes tho.
Ryzen is great, I might buy one next, but it is not to "dodge bullets".
And is the microcode fix available for non-Linux systems yet?
However, looking at the microcode update driver on an updated Windows 10 as of right now, I don't see a recent enough microcode version to fix it. The latest updates appear to be from 2015.
Loading the microcode is straightforward: all you need to do is put a few values in the appropriate MSRs. This is described in the Intel manuals, Volume 3. The microcode itself is embedded in the above DLLs, as a big binary table. The latest entry I see is 20150812 for the Intel 0x40651, aka Haswell.
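For the curious, a kernel-context-only sketch of that MSR sequence, per the microcode update facilities section of the SDM (MSR 0x79 is IA32_BIOS_UPDT_TRIG, MSR 0x8B is IA32_BIOS_SIGN_ID). This sketches the mechanism and is not runnable from user space:

    #include <stdint.h>

    #define IA32_BIOS_UPDT_TRIG 0x79
    #define IA32_BIOS_SIGN_ID   0x8B

    static inline void wrmsr(uint32_t msr, uint64_t v)
    {
        asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)v), "d"((uint32_t)(v >> 32)));
    }

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    /* payload points at the update data that follows the 48-byte header. */
    static uint32_t load_microcode(const void *payload)
    {
        unsigned a, b, c, d;

        wrmsr(IA32_BIOS_UPDT_TRIG, (uint64_t)(uintptr_t)payload);
        /* Per the SDM: clear the signature MSR and execute CPUID before
         * reading back the newly loaded revision. */
        wrmsr(IA32_BIOS_SIGN_ID, 0);
        asm volatile("cpuid" : "=a"(a), "=b"(b), "=c"(c), "=d"(d) : "a"(1));
        return (uint32_t)(rdmsr(IA32_BIOS_SIGN_ID) >> 32);
    }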
I wonder if Intel will do something like that again, or if the industry as a whole is more tolerant of unreliable / buggy behavior and will just live with it. Examples: Apple just telling people that the poor reception strength was their own fault, changing software to hide problems, etc.
To this day I disable it by reflex on everything!
I wonder if a microcode update would solve some of the various issues I have in Windows.
I installed debian 9, installed virtualbox, vagrant, setup a clean development machine for myself, everything took 4 hours to finish.
I reboot the virtual machine, and boom, there was a kernel panic which I sadly don't remember exactly / didn't take a picture of. After I rebooted the machine, and opened terminal, the system froze. The cursor wouldn't move. Reboot again, motherboard has a CPU fail/undetected light on. Couldn't get it to boot after that.
I am both sad and relieved that bad stuff exists, but it's being patched to prevent proliferating.
I sincerely hope I'll get a replacement from Intel.
model name : Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
then $ cat /proc/cpuinfo | grep ht
Definitely there under flags
Should I be concerned?
But a new computer may serve you well...
AMD on the other hand doesn't even acknowledge an issue when multiple customers report problems. See this Ryzen bug:
Not in this case.
"Apparently, Intel had indeed found the issue, documented it (see below) and fixed it. There was no direct feedback to the OCaml people, so they only found about it later."
It also affects code generated by GCC, but apparently GCC is less likely to generate code sequences which trigger the CPU bug.
Possibly, but that was not the issue here. It's clear from your own link that the crashes were due to C code in the OCaml runtime (used by the compiler itself), which is written in C and was compiled with GCC at -O2. See https://caml.inria.fr/mantis/view.php?id=7452#c17129
At some point in the near future Apple will package the microcode fix in an update, and that will be it.
Also, up until the end of 2016, Apple didn't use Skylake processors, so it wouldn't be 2+ years.
Or this is inception level hellban, where some bots and devil curators lurk. What is even real.
Btw good to see your Lego contraption article in IEEE Spectrum.
Oh cool, thank you, I didn't even realize it had been published. It's already outdated; the latest reincarnation has double the drop-off bins for twice the sorting speed.
And since a cheaper cpu can have the same or worse bugs, the point is moot.
You don't pay i7 vs i5 etc for better quality control.
For that you have to go to enterprise/server grade stuff that costs much more (Xeon). And even there, nothing is guaranteed to be perfect.
In fact, with the complexity of modern CPUs/GPUs it's a minor miracle that anything works at all. We have it better than ever...