
Intel to shut down renegade Skylake overclocking with microcode update - pavornyoh
http://arstechnica.com/gadgets/2016/02/intel-to-shut-down-renegade-skylake-overclocking-with-microcode-update/
======
xlayn
There are two sides to this situation:

      - On one hand, Intel is disabling a "feature" of their CPUs
        as a way of preventing users from getting "more expensive
        performance" without paying for it. You can think of it as
        Intel covering their back: we can assume the difference
        between their CPUs comes down to binning [0], and they are
        protecting their customers by not allowing less performant
        CPUs to run beyond spec, thereby preserving reliability.
      - On the other hand, you can see how hardware is no longer
        something you buy and expect to behave in a certain way.
        For the paranoid... would this mean they can also alter
        how instructions behave? **cough**security**cough**

[0]
[https://en.wikipedia.org/wiki/Product_binning](https://en.wikipedia.org/wiki/Product_binning)

~~~
andromeduck
Yes microcode can alter the way instructions behave and it's been like this
for a decade.

What worries me more is something like Nvidia's Denver architecture because
that's actually a full abstraction above machine code.

~~~
gioele
> What worries me more is something like Nvidia's Denver architecture because
> that's actually a full abstraction above machine code.

What relieves me is that RISC-V is demonstrating that open-source hardware can
be a reality. This will provide people with a concrete alternative to Intel
and ARM chips, one that does away with (closed) microcode and shadowy
marketing/security procedures.

Yes, we are far away from FPGA-ing our CPUs, but I see the 2010s as the 1980s
of FLOSS code. In the '80s a bunch of people sent around tapes with copies of
EMACS, cp and mkdir; 20 years later, multibillion-dollar infrastructures rely
on open source.

Today we use Intel microprocessors fearing that their ME components will spy
on us and that secure boot will lock us in. In a few years we will just
install our Debian-for-HW and be done with it. It is a liberating thought.

~~~
andromeduck
Yeah FPGAs sound nice but I seriously doubt that will ever happen where
performance and power are major concerns.

I'm not sure that just having the hardware design makes much of a difference
either; it'd be the equivalent of shipping a binary blob and then making the
source available without the compiler used, or any way to decompile, because
the required tools are so expensive.

In CPU land we have ARM, which is probably as close as we can practically get
to open source, but IMO the real problem is still verification of the shipped
product, and without third parties auditing the design I'm not sure we could
ever be sure, source or not.

~~~
PhantomGremlin
_the real problem is still verification of the shipped product_

In that vein, there is a lot about a modern FPGA that's a "binary blob". Do
you fully trust Xilinx? Do you fully trust Altera?

Do you continue to trust Altera now that they are a subsidiary of Intel? If
so, why don't you just trust Intel directly? Perhaps it's turtles all the way
down? :)

------
makomk
Are Intel still disabling random features in the K versions of their
processors for market segmentation reasons? Figuring out what processors had
what features locked off was getting truly ridiculous.

~~~
onli
Yes: [http://www.cpu-world.com/Compare_CPUs/Intel_CM8066201919901,Intel_CM8066201920103/](http://www.cpu-world.com/Compare_CPUs/Intel_CM8066201919901,Intel_CM8066201920103/)
– Trusted Execution in this case.

~~~
userbinator
In the case of TXT, I don't think everyone would consider it a desirable
feature, since it's part of the trusted computing stuff, but in the past
they've also disabled friendlier features like virtualisation.

~~~
delroth
If you're not using TXT, your full disk encryption is completely useless if an
attacker can get access to your laptop for 5 minutes¹. They can just backdoor
the bootloader (or, on Linux, your kernel/initramfs, which are not encrypted)
to get a foothold in your system at the highest level of privilege.

If you're using TXT, maybe the NSA can still do that by getting the ACM
signing keys from Intel. But that's still an improvement over the standard FDE
setup most people use, which could be trivially hacked by someone with the
skill of an average university student.

¹ Make that 15min if you have a BIOS password and a static boot order that
disallows booting on any kind of external device. It just requires a bit more
preparation, but on most laptops it's not hard to put a clip on the NVRAM that
stores BIOS settings and reset what needs to be reset.

~~~
heywire
Can you elaborate on this a bit? I'm definitely not an encryption expert, but
I was under the impression that full disk encryption on Linux with a
sufficiently long passphrase was secure.

I personally use cryptsetup (dm-crypt/LUKS) on my laptop running Arch Linux
just in case it were stolen. Are you saying that bypassing the bootloader with
a live USB, etc, could give an attacker access to the data stored on the
encrypted drive (outside of the boot partition, of course)? That seems like it
would defeat the purpose of full disk encryption. Note: I understand that this
is assuming that the attacker does not gain access to the system while it is
up and running.

~~~
qb45
> Are you saying that bypassing the bootloader with a live USB, etc, could
> give an attacker access to the data stored on the encrypted drive (outside
> of the boot partition, of course)?

No, of course not. (Well, disregarding cold boot attacks).

But it gives access to data stored _inside_ the boot partition, allowing for
fun things like patching your kernel to send dm-crypt keys to
[http://fbi.gov/submit_key.cgi](http://fbi.gov/submit_key.cgi) -- makes sense
now?
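
Detecting that kind of tampering boils down to comparing /boot against
known-good hashes -- a minimal sketch in Python (paths and manifest format are
hypothetical, and of course the manifest and checker only help if they live
somewhere the attacker can't also rewrite):

```python
import hashlib

def hash_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_boot(manifest):
    """Compare files against {path: expected_digest}; return the
    paths whose current digest no longer matches the manifest."""
    return [p for p, want in manifest.items() if hash_file(p) != want]
```

A patched kernel or initramfs shows up as a digest mismatch; the hard part,
which the TPM-based schemes try to solve, is keeping the expected digests out
of the attacker's reach.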

~~~
dogma1138
There are ways to also encrypt the boot partition.

There are various protection mechanisms that rely on software alone
(bootloader), software + hardware (TPM), software + firmware, and software +
hardware + firmware.

The question is always what you want beyond encrypting the main partition,
mainly in terms of integrity checks.

An older BIOS with a TPM, or a modern UEFI with or without a TPM, can provide
additional integrity checks for both the host configuration (BIOS/device
settings) and the storage devices themselves.

TXT basically allows you to measure various elements using the UEFI, and, more
importantly for OEMs at least, TXT has extensive DRM capabilities that can
restrict the user from installing "untrusted" operating systems or making
modifications to the host itself (e.g. changing BIOS settings).

Beyond that, TXT gives only a slight improvement as far as actual security
against cold boot attacks goes: it allows you to take measurements when
switching between the S4 and S5 power states (hibernate and soft off), but it
still doesn't allow any measurement for the S1-S3 states, which are the legacy
sleep modes.

A modern UEFI, with or without a TPM, can ensure that the OS will not boot, or
will boot into recovery mode, if any changes were made to the hardware or
firmware configuration, or if any tampering was done to the bootloader (secure
boot keys). With a TPM you can be slightly more assured that no one tampered
with anything, since the TPM is better cryptographic storage than the UEFI's
RAM/NVRAM.
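
The "measurement" these schemes rely on is just hash chaining: each boot
component extends a Platform Configuration Register before handing off
control. A toy illustration in Python (real TPMs do this in hardware, with
fixed PCR banks; stage names here are made up):

```python
import hashlib

def pcr_extend(pcr, data):
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(data))."""
    return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at platform reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, stage)
```

Because each value folds in the previous one, changing any stage -- or merely
reordering them -- yields a different final PCR, so a secret sealed against
that PCR only unseals when the exact same chain booted.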

------
roddux
What possible reason could they have for this, apart from forcing users to buy
their 'unlocked' chips? Is it really worth the nightmarish PR debacle they
will undoubtedly face?

Side note (from comments) -- apparently Intel paired this microcode update
with a patch to fix CPUs freezing during Prime95. Not cool.

~~~
revanx_
What nightmarish PR are you talking about? Nobody cares, just like nobody
cared when they started shipping CPUs with the embedded Intel ME.

~~~
roddux
Nightmarish PR was probably an overstatement, but I would expect a
considerable backlash from customers who are having a free feature removed --
for no reason other than to make Intel money.

Seeing as AMD are getting back into the game with their new Zen architecture,
I think this is a very unwise move by Intel.

~~~
Klathmon
But it was never advertised as a feature; in fact I believe the opposite is
true: they advertised it as non-overclockable.

~~~
roddux
Very true, but having the loophole there is beneficial for users who want to
overclock it-- bug/feature debate aside.

They advertised it as non-overclockable, but some users found out it could be
overclocked anyway -- so why bother patching? To force those users to buy the
more expensive 'unlocked' chips for something that used to be free.

------
snvzz
I'm glad they're doing this. It'll help plant an image: Proprietary microcode
is evil.

~~~
astrodust
The alternative here is what, exactly? Can you site an example of a better
situation?

~~~
na85
It's "cite".

And the better solution is to finally ditch backwards compatibility with the
8086 and implement a sane instruction set that isn't just a virtualized layer
à la x86.

~~~
craigjb
Additionally, the instruction set is not virtualized because of compatibility.
Compatibility is a nice side effect, though, so software from even five years
ago works just as expected (not all software can be recompiled, and could you
imagine commercially supporting that many variants? This way Intel shoulders
that burden). And backward compatibility is a tiny fraction of the control
unit's silicon area. It has zero performance cost. Zero. Intel is scraping for
more (not fewer) things to add to silicon to improve performance (video
encode/decode, GPU, memory controller, PCI).

The virtualized instruction set enables the performance gains by executing
portions of instructions in parallel, re-ordering to avoid pipeline stalls,
better branch prediction, execution shortcuts depending on operands, etc., all
without compiler support. RISC architectures like ARM do this as well. On a
modern Cortex part with multiple execution units, the processor is not a
textbook pipelined RISC processor like this community seems to yearn for. So
if you look at machine code for some reason, yes, the instruction set is
simple, but it's still a facade over complexity.

I don't understand the constant cry that processors are complex. To gain
performance against the frequency limit, complexity increased. This happens
with software all the time.

Also, for those that want direct control over all the elements of the
processor, go buy an Itanium... oh wait... no one did.

------
alkonaut
Can't they just disable it for as-yet-unsold CPUs (e.g. look at serial numbers
or something)? Consumers (should) only care if their product is made worse
_after_ they purchased it. I can see why someone would be angry if a feature
that was advertised is removed after the purchase.

~~~
Klathmon
IMO this would only be an issue if they advertised that it was overclockable.

If it was advertised with no ability to overclock, you should assume it can't
overclock, and if you are able to, don't assume it will keep working or stick
around forever.

~~~
brillenfux
There was a time when, if I bought something, I owned it and could do anything
I wanted with it. That time is clearly gone.

~~~
Tepix
No one is forcing you to install the BIOS update.

~~~
Retric
I have been 'required' to install BIOS updates before. The PS3 lost
capabilities at one point in an update, and new games forced that update
without mentioning it on the packaging.

"Note that Intel paired this with a bug fix for the freezing during Prime95 -
if you want the bug fix, you have to let them lock down your clock. "

~~~
Tepix
You are using an undocumented feature of the CPU; from Intel's point of view
it's an unintended loophole.

Also you can try to separate the Prime95 bugfix from the clock lockdown. Good
luck :-)

------
userbinator
Those supporting AMD should remember that they did a similar thing after it
was discovered that some processors could have entire cores re-enabled.

On the other hand, if I remember correctly one of the mobo makers soon figured
out how to use both the new microcode and the old one to get the best of both
worlds; it might've been AsRock too...

~~~
Already__Taken
Those supposedly triple- and dual-core CPUs that got unlocked to quad cores
were apparently a way to sell damaged silicon: the remaining cores passed QA,
so they just disabled the broken core. To meet demand for these product lines,
perfectly good quad-core CPUs got disabled and sold off as dual/triple
editions. It was the luck of the draw whether you got one of those or a
genuinely flawed chip.

Now, how much of that is true I don't know, but that's what I remember reading
at the time. A friend of mine did successfully unlock his triple-core, and it
ran for a long time.

~~~
masklinn
> Those supposedly triple- and dual-core CPUs that got unlocked to quad cores
> were apparently a way to sell damaged silicon: the remaining cores passed
> QA, so they just disabled the broken core.

It's called "binning", and it's not just for defects. CPUs from the same
production line can have different tolerances: one will reach 3.5GHz easily
and the next one won't be stable beyond 2.8. Those are also put in different
bins and sold as different models.

> Now, how much of that is true I don't know, but that's what I remember
> reading at the time.

It's completely correct. Selling a model from the exact corresponding bin is
ideal, but if you have more demand than the bin provides, you get parts from
higher bins and gate them. A few years ago that got very common with Intel
parts as they reached tremendously low defect rates, and more or less any CPU
you bought would come from the highest bins, gated and undermultiplied down to
whichever model you'd buy.

~~~
exDM69
This is "floorsweeping", not "binning".

Floorsweeps are done when a part of a chip isn't passing tests. The defective
part gets fused off and the chip gets sold as a dual core instead of a quad
core, for example.

Binning is orthogonal. It's a qualitative measurement of (the part of) a chip
that passes all tests. The electrical and thermal properties of a chip are
tested, e.g. the leakage current and temperature are measured at different
clock speeds and voltages (this is infinitely more complex for a battery
powered device where voltages and currents fluctuate). The ones that pass with
best results are sold as a premium product and the rest are clocked down and
sold for less.
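
In code terms, binning is just sorting tested parts into the fastest SKU whose
requirements they meet -- a toy sketch in Python (model names and clock
thresholds are made up for illustration, not Intel's real bin cutoffs):

```python
def assign_bin(max_stable_ghz, skus):
    """Put a tested part in the fastest SKU it qualifies for.
    `skus` maps model name -> minimum stable clock (GHz) required."""
    eligible = [(ghz, name) for name, ghz in skus.items()
                if max_stable_ghz >= ghz]
    return max(eligible)[1] if eligible else "reject"

# Illustrative thresholds only.
SKUS = {"i7-6700K": 4.0, "i7-6700": 3.4, "i5-6500": 3.2}
```

A part that tests stable at 3.5GHz lands in the hypothetical "i7-6700" bin
here even though it also clears the "i5-6500" requirement; the demand-driven
down-binning discussed in this thread is exactly taking such a part and
labeling it with a slower SKU anyway.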

So a "K" model Intel chip comes from the best bin, and a dual-core i3 chip is
a floorswept quad-core i5. (This is what I assume; I don't work for Intel and
don't know the details.)

But these chips may not sell in the proportion they get manufactured in, which
means that some perfectly functional quad cores get fused to dual cores and
sold for cheaper. If you're lucky, you're getting one of these (and un-fusing,
if possible, will work).

Floorsweeping and binning are required because the tolerances of modern
semiconductor manufacturing are so tight. To get the best bin to perform well,
the manufacturing process is really pushed to the extreme, which means that
there will be chips that have manufacturing defects as well as lower
performing chips.

~~~
brianwawok
On the other hand, floorsweeping and binning cannot exactly match market
demand. Towards the end of a run of a given chip, most may qualify for the
highest bin, but due to demand many have to get moved down a bin or two. So
you end up with CPUs that are totally capable of being overclocked being sold
at lower clocks.

~~~
andromeduck
And that's addressed by introducing new bins plus dynamic overclocking, or by
reducing dynamic voltage.

------
sireat
This "software eating the hardware" is worrisome.

It used to be that at least you could depend on the hardware staying the same
unless you chose to apply patches yourself.

Is it theoretically possible to change the microcode to actually add more
features instead of disabling them?

Let's say Zen actually comes out better than expected and then Intel
miraculously releases another update which re-enables OC ability?

------
dbalan
How is this microcode update enforced? Is it something that ships with your
OS, or something the user has to manually install, like a BIOS update?

~~~
s9ix
It would probably be a BIOS update distributed by the motherboard
manufacturer. As the article states, people could just not update, but finding
hardware with the older BIOS will become more and more difficult over time.

~~~
bradfa
Why couldn't the OS just put the microcode update into the EFI bootloader?

I know some recent Intel parts can't even run the EFI firmware without first
getting a microcode update, but is Skylake like this too or can the EFI
bootloader run without first getting microcode downloaded?

~~~
amluto
Linux effectively does this, if your distro opts in.

------
amluto
What happens when this microcode patch is applied at runtime on an overclocked
CPU?

Remember the Haswell mess when a microcode patch disabled TSX? If you applied
it after user code (glibc) had detected TSX support, _boom_.

~~~
rasz_pl
You get a hang; it happened with Win 10 on an overclocked G3258. Win 10 ships
with mcupdate_GenuineIntel.dll, containing microcode that ... disables
overclocking.

The workaround is renaming/deleting mcupdate_GenuineIntel.dll and never
updating the BIOS again, to avoid new microcode -- all to keep the cheapest
4.4GHz single-thread CPU money can get.

------
venomsnake
Let's hope Mr. Keller did a stellar job at AMD. Once again.

~~~
rasz_pl
Wouldn't count on that; AMD's best marketing predictions promise 40% more IPC,
which would still be 1-2 Intel generations behind. AMD marketing always
overpromises, so we will end up with something between first- and second-gen
i5.

------
AlexDanger
So was it just one particular Asrock motherboard that allows this type of
overclocking? Whilst my hardcore overclocking days are behind me, I have an
i5-6500 and some kind of Asus mobo...I'm curious to take it for a test drive.

~~~
jsheard
Many Z170 motherboards got this functionality, but keep in mind it's not
without caveats.

It breaks integrated graphics, turbo boost, all power saving states,
temperature sensing, and cripples the performance of AVX instructions for some
reason.

[http://overclocking.guide/intel-skylake-non-k-overclocking-bios-list/](http://overclocking.guide/intel-skylake-non-k-overclocking-bios-list/)

------
Osiris
One reason I've always liked AMD is that they have a limited number of
products, so they don't play games like this. AFAIK, every feature that an
architecture supports is available on the chips released. The only exception
I'm aware of is the "Black" edition, which allows an unlocked multiplier,
meaning you can overclock the CPU without overclocking the bus.

------
grawlinson
They're basically disabling functionality in order to make a quick buck.

~~~
glogla
AMD Zen can't come soon enough.

~~~
kale
My Athlon 740k is getting old. I really hope Zen comes to market soon. I can't
bring myself to buy a chip as a stop-gap with Zen around the corner.

I'd upgrade if I could get my hands on an AM4 motherboard. Hopefully we'll see
something like the FX-6300 on AM4 early this year, so I can at least get ready
to upgrade next year.

------
quickben
Microcode giveth, microcode taketh away :)

It is interesting, though, how long Intel waited (not even a statement?),
probably just to sell more CPUs overall.

------
reflexing
Good thing is we can load old microcode on Linux

~~~
throwaway7767
> Good thing is we can load old microcode on Linux

You cannot load old microcode anywhere. The CPU won't let you.

The OS feeds the CPU a blob, the CPU checks that it's signed by Intel (to
prevent modifications), and it additionally checks that the version number is
newer than the currently-running code. If it's not, it won't be loaded.

If you have a CPU with the old microcode versions, you can keep it around, but
if you update your BIOS you'll find it will bring the new microcode in and you
can't downgrade after boot. If you're lucky the BIOS manufacturer wasn't too
careful with signing their BIOS and you can replace the microcode blob, but
that's a huge hassle.
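
The acceptance logic described above amounts to two checks -- a rough sketch
in Python, with signature verification abstracted into a callback (field names
are hypothetical; real microcode update headers differ):

```python
def accept_microcode(blob, running_revision, signature_ok):
    """Sketch of the CPU's microcode load gate as described above:
    the update must carry a valid vendor signature AND a revision
    strictly newer than the one already running -- no downgrades."""
    if not signature_ok(blob):
        return False  # tampered or unsigned blob: refused
    if blob["revision"] <= running_revision:
        return False  # same or older revision: refused
    return True
```

Both branches matter here: the signature check stops modified blobs, and the
revision check is what makes runtime downgrades impossible even with a
perfectly valid older update in hand.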

~~~
dnlrn
Microcode is not written to the CPU; it gets loaded on every boot. This can
happen during the BIOS POST, in the OS bootloader, or even while the OS is
booting. Therefore, yes, it's possible to run older microcode (at least on
Linux), since you just have to not load the newer version on boot. If the BIOS
contains the new microcode, you can flash the previous version of the BIOS.

~~~
throwaway7767
> Microcode is not written to the CPU; it gets loaded on every boot. This can
> happen during the BIOS POST, in the OS bootloader, or even while the OS is
> booting. Therefore, yes, it's possible to run older microcode (at least on
> Linux), since you just have to not load the newer version on boot. If the
> BIOS contains the new microcode, you can flash the previous version of the
> BIOS.

Did you read the last paragraph of my message? Because you're not really
disputing anything I said. (To clarify, when I say "you cannot load old
microcode anywhere", I define "old" to mean "older than the currently running
microcode", i.e. you cannot downgrade it at runtime after a newer one has been
loaded to RAM.)

If you're willing to run outdated system firmware (with the associated bugs,
security vulnerabilities, etc.), you can do it -- just like I said in the
message you're replying to. But that's not what I'd call a good solution.

------
alexsoft
AMD adopted the same strategy: no overclocking for the poor!

------
dnlrn
CPUs are complicated pieces of technology. During the manufacturing process,
some parts come out with a better quality grade than others. The better
quality parts allow some overclocking without producing errors, and therefore
get put into the overclockable K-processors. The worse parts get put into
non-overclockable processors and run fine at the default voltage.

Some of the non-overclockable CPUs might work fine after overclocking, some
might not. Intel definitely doesn't want the negative press when some kid
decides to overclock their non-K CPU and breaks it in the process. So I
understand the decision.

~~~
userbinator
_Some of the non-overclockable CPUs might work fine after overclocking, some
might not. Intel definitely doesn't want the negative press when some kid
decides to overclock their non-K CPU and breaks it in the process._

Have you participated much in the overclocking community? The whole point is
that every CPU chip is different and can be overclocked by different amounts,
some almost not at all. There is no "negative press", since anything past
stock speed is a _bonus_ which is what overclockers are trying to get. If CPUs
were not working at stock speeds, that would be a reason for "negative press".

~~~
tzs
> Have you participated much in the overclocking community? The whole point is
> that every CPU chip is different and can be overclocked by different
> amounts, some almost not at all.

On the other hand, there is no way that you can actually determine how far a
CPU can be overclocked and still maintain full functionality, so it might be
best to limit overclocking to systems that will not be used for something of
high financial or safety value.

The problem is that fundamentally the hardware is still analog. Digital is an
abstraction on top of the underlying analog system. In the digital
abstraction, a signal changes instantaneously from 1 to 0 or from 0 to 1. In
the underlying analog system, the components carrying the signal have
capacitance and resistance. Changing the high voltage that represents 1 to the
low voltage that represents 0, or vice versa, involves discharging or charging
that capacitance through that resistance, and that takes time.

This sets an upper limit on how quickly that signal at that particular point
in the circuit can change digital state.
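
The charging argument can be made concrete. A node with effective resistance R
and capacitance C approaches its final voltage as 1 - exp(-t/RC), so reaching
90% takes about 2.3·RC. A rough sketch in Python (the component values are
illustrative, not taken from any real process):

```python
import math

def settle_time(r_ohms, c_farads, fraction=0.9):
    """Time for an RC node to charge to `fraction` of its final
    voltage: V(t)/Vf = 1 - exp(-t/RC)  =>  t = -RC * ln(1 - fraction)."""
    return -r_ohms * c_farads * math.log(1.0 - fraction)

# Illustrative values only: 1 kOhm effective resistance and 10 fF of
# node capacitance give RC = 10 ps, so ~23 ps to reach 90% of the rail.
t90 = settle_time(1e3, 10e-15)
```

A real critical path chains many such stages within one clock cycle; the cycle
time must cover the sum of their settle times plus margin, and overclocking
eats into that margin until some path no longer settles in time.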

There are also other ways the analog nature of the underlying circuit leaks
into the digital realm. Neighboring components that are in the digital
abstraction completely isolated from each other (except through intentional
connections) might be coupled by stray capacitances and inductances. This can
let signals on one cause noise on the other, or the state of one could change
how fast the other can change state.

When a chip is designed the designers can figure out what areas are the most
vulnerable to potential analog problems. They can incorporate into their tests
checks to make sure that these areas are OK when the chip is operated in spec.

The ideal scenario is that if you clock a chip fast enough to break something,
the chip blatantly fails and so you find out right away, and can slow it down
a bit.

The frightening scenario is a data dependent glitch, where you end up with
something like if the ALU has just completed a division with a negative
numerator and an odd denominator and there has just been a branch prediction
miss, then the zero flag will be set incorrectly.

