Thoughts on Intel's upcoming Software Guard Extensions (Part 1) (theinvisiblethings.blogspot.co.uk)
74 points by andyjohnson0 on Sept 8, 2013 | 34 comments



The summary paragraph sums up the only thing I was thinking of while reading this:

"Finally, we should discuss the important issue of whether this whole SGX, while providing many great benefits for system architects, should really be blindly trusted? What are the chances of Intel building in backdoors there and exposing those to the NSA? Is there any difference in trusting Intel processors today vs. trusting the SGX as a basis of security model of all software in the future?"

There is no longer any full trust of modern hardware, commercial software, or even complex OSS software. None. I poked fun at some of my paranoid technical buddies in the '90s, but I've since apologized. Their customized systems don't seem so silly anymore.


Put yourself 20 years into the future, looking back on 2013, and ask yourself whether the x86_64 architecture we have today, with the x86 MMU as understood by Linux 3.11 and the Intel chipsets and DMA controllers we use today, is likely to seem like a reasonable platform on which to build trustworthy applications.

Today, in 2013, a trivial software bug is all it takes to allow the author of a web page to upload and run code in your browser process. That is a consequence of the architecture we run on.

Are we at any point in the near future going to have fully transparent hardware? No we are not.

Do we badly need architectural improvements for hosting trustworthy code on general purpose hardware? Yes we certainly do.

We're going to have to get over the NSA stuff, at least for the most part. Perhaps there are applications that will need some kind of assurance that they aren't generating secrets from RDRAND (I'm not a believer in that problem, but I'm not committed to the argument). But for the most part, we're going to have to trust hardware vendors to design better security architectures and deploy them in new chipsets and processors, because they are badly needed.

If it helps you, think of your resistance to things like SGX as the product of an NSA psy-ops campaign to get you to distrust technologies that will cut off NSA's supply of new software bugs.


"Today, in 2013, a trivial software bug is all it takes to allow the author of a web page to upload and run code in your browser process. That is a consequence of the architecture we run on."

Technically, I hate the glued-together shit-stack that is the organically evolved web. Part of me thinks we have devolved since the Big Iron/VMS days.

"Are we at any point in the near future going to have fully transparent hardware? No we are not."

Open hardware isn't cost-effective, and it is probably suppressed both economically and politically. There are darn few viable projects that I can find on the internet, plus a few underground systems that are borderline crackpot. There might be a better market for these systems today than there was a year ago. There seems to be growing mistrust of commercial software, but there is no substitute for commercial hardware.

"We're going to have to get over the NSA stuff, at least for the most part."

I have to go to the office tomorrow and pretend that I work on secure systems for sensitive data. :)


> Technically, I hate the glued-together shit-stack that is the organically evolved web. Part of me thinks we have devolved since the Big Iron/VMS days.

I've seen many people express their dislike for the status quo; however, I haven't seen any real arguments for how things could be better.

In terms of UI, at least: I've worked with Delphi, with Visual Basic, with Java's Swing, with wxWidgets, and with Android. All of the available UI toolkits except HTML make it easy to build things with standard widgets, but if you diverge from that path, pain ensues, and I haven't encountered anything more dynamic or more malleable to experimentation than HTML/CSS. There's also nothing stopping you from implementing frameworks of standard widgets on top of the browser's DOM and JavaScript. And, by the way, even Android and iOS developers who are quite happy with their native platforms sometimes embed WebViews in their apps for cases where they've got something complicated to do.

In terms of JavaScript, there are already compilers and frameworks that use JavaScript as a compilation target, with things like Google's GWT giving you everything a Java developer dreams of (code reuse, some static type safety, etc.), yet plain JavaScript is still more popular by an order of magnitude than all the alternatives combined. And if you haven't seen the Unreal Engine demo of asm.js running in Firefox, you should.

In terms of security, the biggest attack vectors on the web right now are old browser versions and the various browser plugins that are still around, like Java (most PCs have a version of the Java plugin installed with known vulnerabilities), Adobe Reader, and other crappy plugins with a broken update cycle. With Chrome and Firefox auto-updating regularly, JavaScript itself is actually quite secure, and there's nothing in it that prevents you from sandboxing its execution.

Many people complain about the state of the web, but given the constraints the web faces (open standards; multi-platform in every sense; multiple independent implementations from Microsoft, Apple, Google, Mozilla, Opera, etc.), I have a hard time seeing how things could be better. In fact, given the history of the web, especially with Microsoft having held it back for years simply because they could, the web turned out to be awesome.


Delphi, WinForms, Swing, and wxWidgets are the old guard. If you want to see what web apps could be in terms of UI development, look at XAML (WPF), MXML (Flex), and QML (Qt): sane layouts, components, built-in data binding, etc., and still XML/JSON-based.

Adding something like MXML rendering to web browsers, without changing anything else (I'm not going to go into the JavaScript discussion), would already be a huge improvement in developing proper web _application_ UIs.


It's worse than that.

Imagine if, in the '90s, MS had written Outlook, Excel, etc. in VBA and you ran them inside Word as macros. They would have been laughed out of the market, but that is exactly what modern app developers do, shoehorning everything into a web page inside a browser.


Office ran on Windows, which itself ran on top of MS-DOS. Doesn't seem particularly different to me.

The comparison to Word is completely invalid, because Word is itself one of the applications, while the browser isn't a webapp.


Not quite. The comparison is between Outlook and GMail. And my point stands - the web was (and hence is) designed for hypertext, not applications.


Every modern browser uses sandboxing to avoid having a "trivial software bug allow the author of a web page to upload and run code in your browser process." You simply run the potentially insecure code in a separate, unprivileged process. Apple has a special set of sandbox APIs that make it easy for applications to irrevocably give up privileges such as the ability to write to the file system, etc. It's not as easy on Linux and Windows, but it is possible.
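
To make that concrete, here's a minimal POSIX sketch in Python. It's a hypothetical helper, assuming a root-launched broker process, an empty /var/empty directory, and the conventional "nobody" uid/gid of 65534; real browsers use seccomp, namespaces, or the Apple sandbox APIs instead. The risky work runs in a forked child that irrevocably gives up its filesystem view and privileges before touching untrusted input:

    import os

    def run_untrusted(work, *args):
        # Hypothetical broker: run `work` in a child that can no longer
        # see the filesystem or regain privileges. Assumes we start as root.
        pid = os.fork()
        if pid == 0:                   # child: confine, drop, then work
            os.chroot("/var/empty")    # empty filesystem view
            os.chdir("/")
            os.setgid(65534)           # drop gid first, while we still can
            os.setuid(65534)           # irreversible for this process
            work(*args)                # a bug here owns an empty jail,
            os._exit(0)                # not your files
        _, status = os.waitpid(pid, 0)
        return status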

SGX really exists for one reason, and one reason only: it's another attempt to implement some kind of un-bypassable DRM. Joanna points this out in her writeup. All of the other so-called advantages you can already get simply by running a VM.

Get ready to jailbreak your PC I guess. I can just feel the security oozing from my pores.


Has a year gone by since browser sandboxes were introduced in which some kind of sandbox jailbreak hasn't been published? And it cost Google many hundreds of thousands of dollars to engineer the sandbox system it has, just as it cost Adobe huge amounts of money to retrofit sandboxing onto PDF.

How well sandboxed is nginx? Answer: not at all.

Also, you understand that any VM system that grants the security capability you're talking about also grants content software the ability to protect content, right? They're two sides of the exact same coin. You don't see that they are, because the "security" side of the coin implicitly but subtly presupposes that hostile code is already running on the machine, whereas on the "DRM" side of the coin the adversarial code is obviously present, because that adversarial code is the "protagonist" of its story.


A lot of the Chrome jailbreaks were done by attacking the embedded Flash player, which wasn't sandboxed at the time.

"How well sandboxed is nginx? Answer: not at all." Actually, that's not the answer. nginx can be sandboxed using SELinux, a chroot jail, AppArmor, or any one of a number of other mechanisms.

How many years have gone by without a Java or web-based exploit being announced? Zero. Maybe you should not throw stones when your own house is made of glass.

Security and DRM are not two sides of the same coin. There are lots of insecure systems that aren't DRMed. And there are lots of ultra-secure OSes, like OpenBSD, that don't include DRM.

I'm not a tinfoil-hat type of person, but I have a healthy fear of code written by firmware engineers. That's why things like EFI, and now this, are terrible. They're basically shovelware that you can't get rid of on your PC. Like uninstallable Android apps, but 100x worse.


> All of the other so-called advantages you can already get simply by running a VM.

Only if you trust the hardware and the hypervisor. There's a huge potential set of advantages here for cloud platforms, where people running VMs do not trust the other tenants and might not want to have to fully trust the hosting provider either.

Having everything encrypted in memory and on disk, with other code running on the same CPU unable to peek at what you're computing, might not be foolproof (e.g. you'd have to trust that the provider does not run a CPU that's somehow backdoored), but it would still be a substantial improvement over what we have now. It means a large class of security flaws in hypervisors would no longer expose client VMs, for example.

I do agree with you that un-bypassable DRM is probably part of the motivation, but there are plenty of other uses.


The NSA isn't the only, or even the main, security threat to computers. The NSA, for example, is unlikely to hack your bank account and steal your money.

Even if it was backdoored by the NSA, it would still be very useful against everyday threats. There are certain classes of attack that are difficult or impossible to prevent in software (think of an evil maid attack on full-disk encryption).

The reality for hardware is that you have to trust whoever fabricated it. Since you can't fab your own 32nm chips, we are unfortunately stuck trusting that neither the Chinese MSS nor the NSA has tampered with them.


Actually you don't need to trust the fab: you can get a batch of identical chips, select X% at random (use dice!), grind off the packaging, examine them under a microscope, and compare what you see to the plans you sent to the fab.
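
Back-of-the-envelope, the economics of that random teardown look like this (a hypothetical sketch, where `bad` is the fraction of the batch the adversary doctored):

    def detection_probability(bad, k):
        # Chance that at least one of k randomly chosen chips from the
        # batch is a doctored one you'd spot under the microscope.
        return 1 - (1 - bad) ** k

    for k in (5, 20, 50):
        print(k, round(detection_probability(0.10, k), 3))
    # 5 -> 0.41, 20 -> 0.878, 50 -> 0.995

So sampling only deters an adversary who has to doctor a sizable fraction of the batch; a fab that backdoors one chip in a million, destined for a targeted shipment, sails right past it.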

Proving that the plans themselves aren't backdoored (your layout software was written by whom, exactly?) and acquiring a microscope that can resolve 32nm features without using any integrated circuits (which, by paranoia, we know will identify images of backdoored ICs and replace them with images of non-backdoored ones) is left as an exercise for the reader.

I'm probably forgetting some other attack vectors here, actually.


Could you extend JTAG to provide verifiable hardware?

I'm thinking of something like Vernor Vinge's "trusted computing" stuff in his SF book _Rainbows End_, where a flip-flop took 20K gates to implement. You spend a lot of circuitry on attestation.

It's probably a circular problem (who will attest to the attestors? etc.), probably devolving into digital signatures of chip layers and such, taken before installing CPUs and peripherals... :-/


You can't trust the electrical interface to be honest if the hardware isn't honest. If a naughty processor can modify a bunch of instructions, then it can modify its verification results, or whatever else is provided over the interface.


FPGAs would be a better idea, and they actually might be viable for a small trusted computing base.


You would think there would be more activity around OpenCores and other processor-on-an-FPGA systems, but there's not as much as I expected when I researched it. Maybe the renewed revelations about the NSA will increase participation and lower the costs.


"if it was backdoored by the NSA" - are you going to assume it will stay backdoored by only the NSA? A vulnerability is a vulnerability.


Depends on how they do it. Dual_EC_DRBG is a secure random number generator if no one can solve the discrete log problem on a large curve. That's true with or without the NSA backdoor, since the backdoor is just cheating: the constants were generated with a known discrete log.

This, of course, doesn't make the backdoor a good idea; and, backdoor or not, if someone does get the discrete log for the Dual_EC_DRBG constants, you are screwed.
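
The structure of the cheat is simple enough to sketch in Python. This is a toy over the P-256 curve, not the real generator: the actual DRBG truncates the top bits of each output and uses the specific P and Q from SP 800-90, whereas here Q is the curve's standard generator, d is a made-up "backdoor" scalar, and outputs are full x-coordinates for clarity:

    # NIST P-256: y^2 = x^3 - 3x + b (mod p)
    p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
    a = p - 3
    b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
    Q = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
         0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

    def add(P1, P2):                  # textbook affine point addition
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None               # point at infinity
        if P1 == P2:
            lam = (3*x1*x1 + a) * pow(2*y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam*lam - x1 - x2) % p
        return (x3, (lam*(x1 - x3) - y1) % p)

    def mul(k, P):                    # double-and-add scalar multiplication
        R = None
        while k:
            if k & 1: R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    d = 0xdeadbeef                    # the secret: P = d*Q
    P = mul(d, Q)

    def rng_step(s):                  # one Dual_EC round (untruncated)
        s = mul(s, P)[0]              # new internal state
        return s, mul(s, Q)[0]        # published "random" output

    s = 123456789                     # victim's secret seed
    s, r1 = rng_step(s)
    s, r2 = rng_step(s)

    # The attacker sees only r1 but knows d. Lift r1 back to the point
    # R = state*Q (p % 4 == 3, so the square root is a single pow); then
    # x(d*R) = x(state*P) is the NEXT internal state.
    y = pow((r1*r1*r1 + a*r1 + b) % p, (p + 1) // 4, p)
    s_next = mul(d, (r1, y))[0]
    assert mul(s_next, Q)[0] == r2    # future output predicted from one sample

Knowing d turns a single output into the generator's entire future; without d, you're stuck on exactly the discrete log problem the parent describes.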


Given the known NSA backdoor in Dual_EC_DRBG, many people within the NSA must have access to the backdoor constants (even if they don't directly realise it and only have access to them through an application).

If Edward Snowden was willing to give up his salary and liberty for what he thought was the moral thing to do, then I think it's reasonable to assume there must be many more people within the NSA who are willing to leak secrets for money, or out of loyalty to nation states, corporations, or terrorist groups.

Some of those people will have given the Dual_EC_DRBG constants or applications that will crack cryptosystems using that PRNG to 'bad' actors.


Also, things leak by other means, e.g. Son-of-Stuxnet escapes into the wild with the constants in its penetration code.


I will postulate that recent consumer hardware is almost guaranteed to have a backdoor: peripherals granted privileged introspection into the system.

For example, Stuxnet relied on creating a botnet to spread and then waiting for a call from home before sabotaging the target computers in an Iranian nuclear facility. It accomplished this with multiple 0-day vulnerabilities. It was incredibly well thought out, comprehensive, and opportunistic. Those factors made it expensive. It also targeted specialized industrial hardware, hardware that lacked consumer features and thus common attack vectors.

My assertion is that an attack takes less time, fewer resources, and less opportunity if a common attack vector is baked in.

It could be abstracted enough that, to watchful eyes, it isn't an obvious "hey guys, check out this wide-open security hole we built into our hardware," but rather an (un)intended consequence of the system design, if the backdoor isn't component-based.

People keep blowing me away with their willingness to be evil for a profit, so I would bet some palms were greased and some reciprocal 'tips and favors' circulate between hardware manufacturers and federal intelligence agencies, at our expense.

It could be the hardware RNG that comes with your board. CPUs also ship with standard encryption instruction sets.

And then we can look at the firmware that runs on our components. Then the kernel..

It's a security nightmare all the way down.


Here is the thing.

I would be willing to wager that there are backdoors (a.k.a. vulnerabilities) that the NSA doesn't know about. The whole stack is sufficiently complex that there isn't much you can prove about it, with or without malicious intent included in the calculation.


> But our SGX-isolated VMs have one significant advantage over the other VM technologies we got used to in the last decade or so – namely those VMs can now be impenetrable to any other entity outside of the VM. No kernel or hypervisor can peek into its memory. Neither can the SMM, AMT, or even a determined physical attacker with DRAM emulator, because SGX automatically encrypts any data that leave the processor, so everything that is in the DRAM is encrypted and useless to the physical attacker.

So basically, when used for real security, it is a more modular substitute for a subset of secure boot's functionality, with the dubious benefit of protecting against an attacker with a DRAM emulator - except it currently doesn't even do that, because there is no way to secure user input.

When used for DRM, of course, it works just fine. It's basically the rebirth of the whole Trusted Computing brouhaha, but worse, because it doesn't depend on securing the entire boot path and is thus actually practical to implement in some browser plugin. While it will not be possible to depend on this feature being present in users' CPUs for the next few years, it could become very dangerous after that.

Ugh.


I must be missing something, but how does one bootstrap an enclave securely? A malicious VM could emulate the new SGX mode but lift the restriction against accessing the enclave's memory.


TXT handles this with a TPM that stores a key and has hardware to check that the proper instructions were executed; it then signs an attestation to that fact with its key. SGX doesn't need a TPM. Does it have similar hardware on-die?


SGX adds the ability to attest and seal individual enclaves, with a root attestation key in the CPU die: https://docs.google.com/file/d/0B_wHUJwViKDaSUV6aUcxR0dPejg/...

This key has some similarities to a TPM EK/AIK, but rather than PCRs for the measurements, there is a cryptographic log of the enclave.
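
For intuition, that log works much like a PCR extend operation. A hypothetical Python sketch (the real thing hashes structured records of each EADD/EEXTEND into the MRENCLAVE measurement, and the signing key never leaves the die):

    import hashlib

    def extend(measurement, chunk):
        # Fold one loaded page/record into the running measurement.
        return hashlib.sha256(measurement + hashlib.sha256(chunk).digest()).digest()

    m = b"\x00" * 32                  # initial measurement
    for page in (b"enclave code page", b"enclave data page"):
        m = extend(m, page)
    # The CPU then signs (m, report_data) with the die-fused attestation
    # key, so a remote verifier learns exactly what was loaded, in order.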


When the NYT article said NSA has succeeded in putting backdoors into "commercial encryption hardware", what were they talking about? Weren't they referring to chip companies like Intel?

If Intel wants our trust again, they should open-source this stuff and make it very transparent, so we can verify whether it's clean or not. At this point, that should worry them a lot more than "giving their competitive advantage to AMD" or whatever. Same goes for the other chip makers too, of course. So maybe they can all do it at once, to level the playing field.


This reminds me of the SHE (Secure Hardware Environment) from Vernor Vinge's Rainbows End. Obviously we are a ways away from that, but this is an interesting step in that direction.


The new PR name for Palladium (http://www.geek.com/chips/palladium-microsofts-big-plan-for-...). After the effects of the leak, I'm feeling somewhat confident a standard like this won't have any legs internationally.


Thanks for the link. FYI, it doesn't go where you wanted it to; your "(2002)" annotation is being interpreted as part of it.

unmangled version: http://www.geek.com/chips/palladium-microsofts-big-plan-for-...


SGX is a very interesting development. I agree with Joanna Rutkowska that "SGX might profoundly change the architecture of the future operating systems".

Enhanced Privacy IDs are particularly interesting, since they will allow you to anonymously or pseudonymously authenticate a particular CPU: http://csrc.nist.gov/groups/ST/PEC2011/presentations2011/bri...

It's also very powerful to be able to attest secure enclaves with a root of trust inside the CPU, rather than in a TPM. It removes the TPM and the LPC bus as attestation attack vectors.


So no JITting, then?



