
Intel SGX Explained - zmanian
https://eprint.iacr.org/2016/086
======
MrBuddyCasino
The juicy bit:

"That being said, perhaps the most troubling finding in our security analysis
is that Intel included a licensing mechanism in SGX that prevents software
developers who cannot or will not enter a (yet unspecified) business
agreement with Intel from authoring software that takes advantage of SGX’s
protections. All the official documentation carefully sidesteps this issue,
and has a minimal amount of hints that lead to the Intel’s patents on SGX.
Only these patents disclose the existence of licensing plans."

~~~
gleenn
What are the ramifications of this exactly?

~~~
ctz
The SDK documentation does, almost, tell you:

    
    
      > The signing tool supports a single-step signing process, which requires
      > the access to the signing key pair on the local build system. However,
      > there is a requirement that any white-listed enclave signing key must
      > be managed in a hardware security module. Thus, the ISV’s test private
      > key stored in the build platform will not be white-listed and enclaves
      > signed with this key can only be launched in debug or prerelease mode.
    

And, indeed, launching an enclave without debug mode set fails with an
SGX_ERROR_SERVICE_INVALID_PRIVILEGE error.

A debuggable SGX enclave exposes read-a-word and write-a-word primitives, so
it loses its confidentiality and integrity.
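The launch policy the SDK documentation describes can be sketched as a toy model (the key names and the whitelist here are illustrative; this is not the SDK's actual API):

```python
# Toy model of SGX launch control as quoted above: only whitelisted,
# HSM-managed signing keys may launch production enclaves; anything else
# works only with the debug flag set. Key names are made up.
WHITELISTED_KEYS = {"hsm_managed_production_key"}

def try_launch(signing_key, debug):
    # An ISV's test key stored on the build platform is not whitelisted,
    # so enclaves signed with it launch only in debug/prerelease mode.
    if signing_key in WHITELISTED_KEYS or debug:
        return "SGX_SUCCESS"
    return "SGX_ERROR_SERVICE_INVALID_PRIVILEGE"

assert try_launch("isv_test_key", debug=True) == "SGX_SUCCESS"
assert try_launch("isv_test_key", debug=False) == "SGX_ERROR_SERVICE_INVALID_PRIVILEGE"
assert try_launch("hsm_managed_production_key", debug=False) == "SGX_SUCCESS"
```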

~~~
userbinator
I think there would be quite a bit of controversy, as with Edward Snowden, if
someone somehow leaked that key. That someone would be considered a hero by
many, and a traitor by others.

Alternatively, someone leaks an SGX exploit that bypasses it all, and we
wonder whether it was a mistake like so many other vulnerabilities, or if
someone deliberately put it there because they didn't believe in Intel having
that amount of control... "I wish for the insecurity that brings us freedom."

~~~
mike_hearn
It'd make no difference, as the keys in question are replaceable via microcode
updates and the microcode version is included in the remote attestations.

SGX doesn't really give Intel "control" in the sense of taking away existing
freedoms. It's a new feature. You can always elect not to use it, or not to
use software that uses it.
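The key-replacement point can be sketched as a toy verifier (the field names and revoked-version set are illustrative, not real SGX quote structure):

```python
# Toy model of why a leaked launch key wouldn't matter for long: remote
# attestations embed the microcode (security) version, so verifiers can
# simply reject quotes from versions whose key leaked. Values are made up.
REVOKED_SVNS = {1}  # microcode security versions known to be compromised

def verify_attestation(quote):
    # A real verifier checks signatures too; here we model only the
    # version check that makes key replacement via microcode effective.
    return quote["cpusvn"] not in REVOKED_SVNS

assert not verify_attestation({"cpusvn": 1})  # pre-update, leaked key: rejected
assert verify_attestation({"cpusvn": 2})      # post-microcode-update: accepted
```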

~~~
wmf
It is a bit of a bait-and-switch since no other CPU feature works this way and
Intel never mentioned this "feature" in all their years of disclosures about
SGX.

~~~
andromeduck
Isn't that how most extensions work? And microcode has been around for at
least a decade now.

~~~
wmf
I know of no other CPU feature that requires authorization from Intel.

~~~
costan
TXT requires an ACM, which is essentially a small signed BIOS subset. At least
ACMs are freely downloadable from Intel, and they don't look into what you'd
like to run under TXT.

[https://software.intel.com/en-us/articles/intel-trusted-execution-technology](https://software.intel.com/en-us/articles/intel-trusted-execution-technology)

------
throwaway84019
For the scary applications (regarding user freedom) of Intel SGX, see Joanna
Rutkowska's two blog posts about Software Guard Extensions.

Part 1: [http://blog.invisiblethings.org/2013/08/30/thoughts-on-intels-upcoming-software.html](http://blog.invisiblethings.org/2013/08/30/thoughts-on-intels-upcoming-software.html)

Part 2: [http://theinvisiblethings.blogspot.com/2013/09/thoughts-on-intels-upcoming-software.html](http://theinvisiblethings.blogspot.com/2013/09/thoughts-on-intels-upcoming-software.html)

Intel SGX also has some useful applications alongside those that are harmful
to users, like search engines that provably don't log queries, mail servers
that provably don't keep your mail, provably safe Bitcoin mixers and so on.
But if using Intel SGX requires a business agreement with Intel, I worry we
will only see the bad things and not the useful ones.

It is possible I am wrong and cloud providers will give people who aren't
Hollywood access to Intel SGX. But all these applications require trusting
Intel and the NSA. Hollywood surely does not mind trusting them, but do we?

 _Intel x86 considered harmful_ [1] talks about all the scary stuff with
Intel's processors.

[1]:
[http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf](http://blog.invisiblethings.org/papers/2015/x86_harmful.pdf)

------
transpute
Alex Ionescu wrote a paper [1] on Win10 and SGX.

    
    
      > It’s important to realize that for now, only Intel has 
      > the required key to allow an enclave to be launched 
      > without knowing the required CPU-specific enclave key, 
      > and no other (even signed) enclaves can be launched 
      > without it. Once Intel releases a permissive loader, or 
      > if Intel ME vulnerabilities are found to extract the key, 
      > then the real abuse will begin.
      > 
      > Indeed, one area of further research is the Intel SGX 
      > Driver that was released for recent Intel SGX-enabled 
      > Dell Laptops, which contains a le.signed.dll file that is 
      > the Intel Launch Enclave. Additionally, it contains 
      > Intel’s EINITTOKEN that can be used to launch such 
      > enclaves, as well as a service and set of APIs which 
      > appear to make it possible to launch additional enclaves. 
      > Windows 10, on its own, does not seem to ship or support 
      > its own Intel-signed Launch Enclave.
    

[1] [http://www.alex-ionescu.com/Enclave%20Support%20In%20Windows%2010%20Fall%20Update.pdf](http://www.alex-ionescu.com/Enclave%20Support%20In%20Windows%2010%20Fall%20Update.pdf)

~~~
costan
Do you happen to know if the Launch Enclave has the debug flag set? If so, you
can't use it to launch production enclaves.

------
rolandr
The initial batch of Skylake CPUs does not implement SGX:
[http://www.anandtech.com/show/9687/software-guard-extensions-on-specific-skylake-cpus-only](http://www.anandtech.com/show/9687/software-guard-extensions-on-specific-skylake-cpus-only)

Any word on whether there will be a BIOS update (for example, microcode and ME
updates) that will enable SGX for the first batch of Skylakes, or are they
just forevermore broken?

------
rdl
The SGX launch has been one of the worst launches of a major CPU feature I've
ever seen coming from Intel; it would be really interesting to learn what went
wrong and why.

It's _years_ later than people anticipated (and won't be in the E5 xeons for a
couple more cycles, so 2017/2018/2019).

Not quite NetBurst level, but pretty horrible from a generally execution-
excellent company.

~~~
userbinator
I suspect there was significant opposition to SGX from various groups, even
from within Intel.

~~~
costan
SGX serves a good purpose, at least in theory. Many people, myself included,
wanted it to turn out to be good. So, I don't think many Intel folks objected
to it.

Instead, I think that a bunch of MBAs showed up and decided SGX is security,
security is an enterprise thing, so SGX must be pay-to-play. For whatever it's
worth, I think the SGX designers did a pretty good job of separating the
objectionable parts from the rest of the design.

For example, the EPID homebrew crypto is all in software, so Intel can change
the algorithm without hardware mods or microcode updates.

Also, the way they set up the Launch Enclave gives Intel time until the very
last minute to not be a douche. They still have the option to release a
permissive Launch Enclave that only includes the checks needed to keep
attestation secure.

The SGX design that doesn't come from MBAs is quite clean, given that it
addresses the multi-layered crap pile that is X86. There are some cool tricks
in there.

~~~
nullc
Intel presented on SGX at Real World Crypto.

If the functionality works as is being described here, I feel they deceived
the audience, both in their presentations and in 1:1 discussions.

This is especially unfortunate, since I know for a fact their actions have
influenced purchasing decisions.

------
spangry
I have to admit, the technical aspects of this are way beyond me. Would this
technology allow (in theory) secure distributed computation via RDMA?

I've long suspected that Intel understands the implications of mass adoption
of cheap, RDMA capable network adapters (iWarp, ROCE and Infiniband): it will
cannibalise future CPU sales revenue. Imagine a standard corporate working
environment with 1000 workstations on a LAN. At any given time, average CPU
utilisation is probably 5-10 per cent tops. It's a similar story with storage
and I/O capacity. If you add RDMA (and ultra-low latency networking) to the
equation, there is now no need to buy additional computational power for the
next 5 years, as there are a tonne of idle resources that can now be
efficiently utilised (even for non-parallelisable computation).

From what I can surmise from recent Intel actions, they've opted to not take
the 'Microsoft' approach (i.e. hold back the tide), and have instead decided
that if CPU markets are going to be cannibalised, they may as well be the ones
doing the cannibalising.

Well, that's my theory anyway. Am I crazy?

~~~
wmf
SGX has nothing to do with RDMA, and I suspect they don't play well together;
all data entering/leaving an enclave probably has to be copied. Also, RDMA is
only for servers and may not be as powerful as you think.

~~~
spangry
Thanks for helping me understand this. Please bear with me here, as I'm not as
technically skilled as the average HN user.

On your first sentence, is the issue that some fundamental aspect of the SGX
security model requires copying data in/out of enclaves, which would make
direct computation on data stored in remote memory impossible?

And breaking your second sentence into two parts:

(1) That does seem to be the present state of affairs. From what I can gather,
RDMA and low-latency networking is expensive due to the high cost of
interconnect/cabling and switching infrastructure. So atm RDMA is exclusively
used in HPC clusters and as backplane interconnect between server
blades/racks. But I wonder if this will always be the case. Take cabling for
example. Retail SFP+ optical interconnect is crazy expensive for even very
short runs. If this is because production costs are high by nature, then I'd
agree that LL networking and RDMA will remain confined to the server room. But
if there are significant unrealised production economies of scale, or there
are achievable advances in production techniques that will reduce costs, then
deployment at the network edge might be economically feasible once we pass
some level of demand/adoption.

(2) On low-latency RDMA not being as powerful as I think: This might be because
of my limited understanding. From what I understand, LL RDMA would allow a
whole bunch of computers to be abstracted as a single 'super computer': the
inter-memory-processor latency is so low that it makes this abstraction
possible. Have I misunderstood the technology? (genuine question)

~~~
wmf
The whole point of SGX is that the memory of an enclave is ultra-protected so
that nothing can get in or out without the enclave's permission. DMA goes
completely against that concept.

Much of the market segmentation between desktop and server is artificial, but
there's still nothing customers can do about it. RDMA doesn't exist for 1G
Ethernet because CPUs can easily keep up with copies. 10G Ethernet has been
around for over 10 years and there's no evidence that it will ever trickle
down to the desktop.

Almost no networks support RDMA, since it requires special network
configuration, special NICs, and special libraries. No clouds support it, so
any software that requires RDMA can be used by hardly anyone. (Example:
[http://blog.acolyer.org/2016/01/14/no-compromises/](http://blog.acolyer.org/2016/01/14/no-compromises/)
Of course, the economics of hyperscale cloud providers are different.)
Software can be written with an RDMA fast path and a TCP/IP slow path, but
then 99% of users will use the slow path, so it's better to optimize around
the characteristics of normal networking.

------
amluto
This paper contains a remarkable amount of irrelevant background.

If you actually want to read this thing, read the very beginning and then skip
to at least page 57.

There are some interesting bits that are relevant to OS authors. For example:

At first glance, it may seem elegant to have EENTER store the contents of
the XCR0, FS, and GS registers in the current SSA, and have EEXIT restore them
from the current SSA. However, this approach would break the Intel
architecture’s guarantees that only system software can modify XCR0, and
application software can only load segment registers using selectors that
index into the GDT or LDT set up by system software (2.7). Specifically, a
malicious application could modify these privileged registers by creating an
enclave that writes the desired values to the current SSA locations backing up
the registers, and then executes EEXIT.

If that's correct, then it's a problem, but I haven't double-checked
thoroughly, and it seems like it's wrong. I think the paper is just mistaken
here.

I'd be more worried about RFLAGS in the SSA. Its exact usage is poorly
documented, but some bits of RFLAGS are privileged (IF and IOPL).
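The hazard the quoted paragraph describes can be sketched as a toy model (a hypothetical CPU, not real SGX semantics):

```python
# Toy model of the flawed design the paper warns against: if EEXIT restored
# XCR0 from the attacker-writable SSA, unprivileged enclave code could set a
# register that only system software is supposed to control.
class ToyCPU:
    def __init__(self):
        self.xcr0 = 0x3  # value chosen by system software

    def eexit_restoring_from_ssa(self, ssa):
        # Trusting application-writable memory for privileged register state.
        self.xcr0 = ssa["xcr0"]

cpu = ToyCPU()
malicious_ssa = {"xcr0": 0xFF}  # the enclave wrote this value itself
cpu.eexit_restoring_from_ssa(malicious_ssa)
assert cpu.xcr0 == 0xFF  # privileged state is now attacker-controlled
```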

~~~
costan
I'm terrible at writing. I am trying to say that SGX cannot restore things
from the SSA, and it has to use some protected area. To the best of my
knowledge, they're using the non-architectural area of the TCS, which is
protected from any sort of write.

------
nowaynohow
Some (mostly not that relevant) details missing from the article:

1\. CPU microcode update packages from Intel ("MCU") are unified "processor
package" update containers. They update more areas of the chip than just the
MSROM. This is more obvious in the SoC parts, but it is also true on the
discrete parts.

2\. MCU can be downgraded, although this is clearly going into "not validated
at all" area, so it might not result in a very stable system ;-) It is likely
that Intel can set a flag inside the MCU data that forbids this (the MCU
loader _inside_ the processor is more than complex enough to support this kind
of thing!), but at least up to Westmere downgrades were still working.

2b. and you can always downgrade either just the microcode inside the firmware
by modify-and-reflash, or the firmware itself, even if the CPU started to
ignore downgrade attempts at runtime.

3\. When the MCU update process is done in a trusted environment (microcode
update data in the FIT), the reported microcode version _CHANGES_ (the
processor reports it as one less than the real version of the microcode). This
is relevant for attestation, and it is really something that needs to be added
to the IA32 manuals. We only know about it outside of the NDA'ed world because
coreboot required a fix for the next issue:

4\. As long as you find a way to always feed them the latest microcode (or at
least the same revision that you have in the firmware), Linux, VMware and the
BSDs [currently] will always override FIT-provided microcode, thus changing
the reported microcode revision (it will not be reported as secured anymore).
Since the revision changed, it will break any attestation that depended on it.
This looks like a good thing at first glance, given how utterly broken at
launch the recent Intel processors have been: anything that would get in the
way of a user being able to fix these by updating the MCU is a damn bad idea
and NEEDS TO DIE.

5\. The microcode update process nicely wastes several _million_ cycles (and
it can easily get to a billion cycles in larger systems, as the update cost
increases linearly per core) at every operating system boot and resume from
ACPI S3/S4/S5 ;-) Try to ensure that your firmware has the latest one if you
want to have a smaller carbon footprint, because if the OS decides to update
it, the box will be doing this expensive procedure twice at every
boot/resume...
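Points 3 and 4 can be sketched as a toy model (the revision numbers are made up):

```python
# Toy model of the FIT quirk described above: microcode loaded in the trusted
# firmware path ("secured") reports its revision as one less than the real
# revision, so an OS reload of the very same blob still changes the reported
# value and breaks any attestation pinned to it.
def reported_revision(real_rev, loaded_via_fit):
    return real_rev - 1 if loaded_via_fit else real_rev

REAL_REV = 0x2F
fit_reported = reported_revision(REAL_REV, loaded_via_fit=True)   # firmware path
os_reported = reported_revision(REAL_REV, loaded_via_fit=False)   # OS reload path

assert fit_reported == 0x2E
assert os_reported == 0x2F
assert fit_reported != os_reported  # attestation pinned to either value breaks
```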

~~~
costan
Thank you very much for this feedback!

Re: 2 - I re-read the relevant SDM sections, and saw that there is no
requirement that the new update's version exceeds the current microcode
version. Thank you very much for pointing that out! The next published
revision will have the fix.

Do you have any public references for 3 and 4? That looks like it'd help make
the case that SGX rests on very complex and unstable foundations.

