
Ironclad Apps: End-to-End Security via Automated Full-System Verification - pjmlp
http://research.microsoft.com/apps/pubs/default.aspx?id=230123
======
Animats
That's impressive. Proof of correctness has been slowly advancing for years.
The version of L4 that manages the cellular air link in most smartphones has
been formally verified. Drivers for Windows have been formally verified to not
crash the rest of the OS since Windows 7. A few CPU designs, notably AMD's
floating point processor after the Intel divide debacle, were formally
verified. This is another step forward.

There are a few self-contained products that could be verified now. A secure
BGP gateway and a secure DNS server are within reach. Since those are often in
standalone machines, they're good candidates for this approach.

(I used to do this stuff. It's embarrassing how long ago.
[http://www.animats.com/papers/verifier/verifiermanual.pdf](http://www.animats.com/papers/verifier/verifiermanual.pdf))

~~~
arjunnarayan
> There are a few self-contained products that could be verified now. A secure
> BGP gateway and a secure DNS server are within reach. Since those are often
> in standalone machines, they're good candidates for this approach.

I completely agree! There's lots of crappy but crucial components of networked
systems ("the cloud") that we rely upon not to be bugged, that could do with a
healthy dose of verification. While the examples you picked are excellent
candidates, what's even more fascinating is that even if we only had a few
_extremely dumb_ systems verified correct - such as a trusted box that did
nothing but provide a verified incrementing counter - that itself is
surprisingly useful. The Ironclad paper does discuss this in Section 3.3, but
it's worth looking at the description of the TrInc paper as well:
[http://research.microsoft.com/apps/pubs/default.aspx?id=78369](http://research.microsoft.com/apps/pubs/default.aspx?id=78369)
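To make the "verified incrementing counter" idea concrete, here is a toy sketch of a TrInc-style trusted counter. The key property is that each counter value can be bound to at most one message, so a compromised host cannot equivocate (tell two parties two different stories). HMAC stands in for the hardware's private attestation key; the class name and method signature are illustrative, not TrInc's actual interface.

```python
import hashlib
import hmac

class TrustedCounter:
    """Toy model of a TrInc-style trusted monotonic counter.

    On each advance the device emits an attestation binding a message to a
    unique, never-repeating counter interval. HMAC stands in for the
    hardware's private-key signature.
    """

    def __init__(self, secret: bytes):
        self._secret = secret  # held inside the trusted hardware, never disclosed
        self._value = 0

    def advance(self, new_value: int, message: bytes) -> tuple:
        # The hardware enforces strict monotonicity, so two different
        # messages can never be attested at the same counter value.
        if new_value <= self._value:
            raise ValueError("counter must strictly increase")
        old, self._value = self._value, new_value
        payload = b"%d|%d|%s" % (old, new_value, hashlib.sha256(message).digest())
        sig = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return (old, new_value, sig)

ctr = TrustedCounter(b"device-secret")
attestation = ctr.advance(1, b"log entry A")
# Reusing counter value 1 for a conflicting message is refused:
try:
    ctr.advance(1, b"log entry B")
except ValueError:
    pass  # equivocation prevented
```

Even this tiny primitive, once trusted, lets you build tamper-evident logs and equivocation-free protocols on otherwise untrusted machines, which is why verifying such a "dumb" box is so valuable.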

~~~
Animats
Right. The Internet needs a few "tough nuts" like that. BGP and DNS servers
are good places to start because they're 1) in wide use, 2) important, 3)
usually standalone, 4) not too big, and 5) attack vectors.

Next up should be a Kerberos server, which is a central security controller
for high security networks. A major vulnerability in Microsoft's Kerberos
server was announced three days ago:

([https://www.us-cert.gov/ncas/current-activity/2014/11/18/Microsoft-Releases-Patch-MS14-068-Vulnerability](https://www.us-cert.gov/ncas/current-activity/2014/11/18/Microsoft-Releases-Patch-MS14-068-Vulnerability))

_"The Microsoft Windows Kerberos KDC fails to properly check for valid
signatures in the Privilege Attribute Certificate (PAC) included with the
Kerberos ticket request. A domain user may forge the information contained in
the PAC to request higher user privileges than should be allowed. Since the
KDC does not verify the signature correctly, it will award the user the
requested privileges, effectively making the user a domain administrator and
allowing complete compromise of the entire domain."_

Yes, there's an exploit. Applying the patch isn't enough, either.

_"The only way a domain compromise can be remediated with a high level of
certainty is a complete rebuild of the domain. An attacker with administrative
privilege on a domain controller can make a nearly unbounded number of changes
to the system that can allow the attacker to persist their access long after
the update has been installed."_
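The US-CERT description above boils down to a validation that is present in form but not in substance: a signature field exists in the PAC, but the KDC accepts it without properly checking it. A minimal sketch of that bug class (this is an illustration, not Microsoft's actual PAC code; the key name and functions are hypothetical):

```python
import hashlib
import hmac

KDC_KEY = b"kdc-secret"  # illustrative stand-in for the KDC's signing key

def sign_pac(privileges: str) -> tuple:
    """The KDC signs the privilege data it issues."""
    sig = hmac.new(KDC_KEY, privileges.encode(), hashlib.sha256).hexdigest()
    return (privileges, sig)

def check_pac_broken(pac) -> str:
    # The MS14-068 class of bug: the signature field is carried along but
    # never actually validated, so a forged PAC is accepted as-is.
    privileges, _sig = pac
    return privileges

def check_pac_fixed(pac) -> str:
    # Correct behavior: recompute and compare the signature in constant time.
    privileges, sig = pac
    expected = hmac.new(KDC_KEY, privileges.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid PAC signature")
    return privileges

forged = ("domain-admin", "not-a-real-signature")
assert check_pac_broken(forged) == "domain-admin"  # privilege escalation succeeds
```

A formally verified KDC would carry a specification saying "every privilege claim honored by the server was signed by the KDC", making the broken variant unbuildable rather than merely unlikely.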

------
pslam
Summary:

* TPM at the root of trust.

* CPU state cleared to known-good using SKINIT to reset a core.

* "Formally verified" kernel/stack.

* Dafny, a safe language to write it in.

* Verifiable machine code, like Google's NaCl (verifiable x86 subset in fact)

That's great, but there's still a bunch of issues this doesn't solve, and it's
a stretch to call this "end-to-end" or "full-system". No, these issues aren't
goals of the paper, but that's my concern: this asserts software-stack
guarantees which the human side and the hardware side can't provide:

* Compulsion. This relies on a remote chain of trust, which can easily be subverted without your knowledge if the platform owner and another actor collude (or are compelled) to generate a chained cert which allows access to your system.

* Every CPU I've seen which declares an instruction or register is a big "reset switch" for a core _doesn't actually do that_. Usually a ton of state is left the same as before reset. This leaks information across reset, and can be used to subvert the post-reset core if it isn't careful to manually erase anything it relies on. For example, cache data, TLB entries and even most general purpose registers tend not to be reset.

* Subverting the memory subsystem results in the CPU executing arbitrary code which wasn't what you provided, undetectably, out of reset.

* It is not possible to formally verify an entire platform. This is somewhat unfair of me, but it feels like this is securing the software stack far beyond the capability of the hardware to provide those guarantees.

* Every CPU has bugs which result in information leakage or privilege escalation. It helps that this is only running a verifiable x86 subset, but I've seen plenty of bugs resulting from rare instruction pairing/ordering. Usually they just deadlock a core, but sometimes result in escalation.

So it's great work, but it's an advance way beyond what the rest of the system
(human and hardware) can guarantee.

------
kevin_b_er
TPM and Remote Attestation. This is an article on Microsoft working hard on
DRM schemes.

> An Ironclad App enables a user to securely transmit her data to a remote
> machine with the guarantee that every instruction executed on that machine
> adheres to a formal high-level specification.

... ...

> Today, when Alice submits her personal data to a remote service, she has
> little assurance that her data will remain secure. At best, she has vague
> legal guarantees provided by the service’s privacy policy and the hope that
> the owner will follow industry best practices to secure her data. Even then,
> a vulnerable OS, library, or application may undermine the service
> provider’s best intentions [46].

If we presume the user is a DRM-demanding media company and the remote machine
belongs to a member of the public, the translation is easy:

> An Ironclad App enables a media company to securely transmit data to a
> user's machine with the guarantee that every instruction executed on the
> user's machine adheres to a formal high-level specification.

... ...

Today, when Alice submits her copyrighted data to a user's machine, she has
little assurance that her copyrighted data will remain secure. At best, she
has vague legal guarantees provided by the service’s privacy policy and the
hope that the user will follow industry best practices to secure her
copyrighted data. Even then, a vulnerable OS, library, or application may
undermine the digital restrictions management.

~~~
jude-
FWIW, their threat model assumes that Alice can't subvert the software with
hardware-based attacks. Relying on this system for DRM would be pretty
shortsighted, since (1) Alice obviously has physical access to her hardware,
and (2) Alice could emulate the hardware in software, and mount "physical"
attacks against it without any dedicated equipment or electrical engineering
know-how. A simple screen-grabber would circumvent this.

~~~
kevin_b_er
That's the point of remote attestation. If the hardware module signs all
software and the processor is coupled with an unblockable secure mode that
only runs what the TPM authorizes, then you have created a secure zone in the
processor in which only TPM-authorized software can run. This defeats (1) and
(2), with the obvious exception that human-observable components remain
observable anyway. This does serve DRM nicely; they don't care, for the
purposes of DRM, if you put a camera in front of a monitor.

They coupled this with an OS and driver that are formally verified to
eliminate security bugs.

------
orf
> In other words, if she encrypts her personal data with that key, it can only
> be decrypted by a software stack that guarantees, down to the assembly
> level, that it preserves differential privacy when releasing aggregate
> statistics about the data.

Mindblowing, Microsoft Research is pushing out some cracking stuff recently.

------
higherpurpose
From the paper:

> Trusted Platform Module (TPM). Deployed on over 500 million computers, the
> TPM provides a hardware-based root of trust; that is, it records information
> about all software executed on the platform during a boot cycle in a way
> that can be securely reported, via an attestation protocol, to a remote
> party.

Why does the TPM need to send anything to a remote party? It seems to me TPM
was designed much more like a DRM module than a "secure enclave" module.

~~~
arjunnarayan
You need the TPM to bootstrap trust from the root. Just as you need a
certificate authority to guarantee that an SSL session is end-to-end secure
from MITM attacks (or some other out-of-band key-signing scheme), you need a
way to ensure that the Ironclad system you're talking to over the network is
running on the raw hardware, and not in some emulated form. The TPM boot is
what lets you do that.

Ironclad Apps make promises like "send us your private data, we promise not to
reveal anything about you except through this information-theoretically
secure, differentially private channel". To do so it needs to be running on bare
metal, and prove over the network to the user that it is running on bare
metal. For instance if it was really running in a VM controlled by an
adversary, then the guarantees would be useless as the adversary could leak
your information after inspecting the VM's memory. The only way to ensure that
you're running on bare metal is with a TPM attestation.
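The attestation mechanism itself is simple to model: the platform extends a hash chain (a PCR) with everything it boots, and the TPM signs that chain together with a verifier-supplied nonce. Below is a heavily simplified sketch; HMAC with a shared key stands in for the TPM's attestation identity key (the real protocol uses public-key quotes and AIK certificates), and all names are illustrative.

```python
import hashlib
import hmac

TPM_KEY = b"aik-secret"  # stand-in for the TPM's attestation identity key

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCRs can only be extended, never set: PCR' = H(PCR || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def quote(pcr: bytes, nonce: bytes) -> bytes:
    # The TPM signs the PCR plus a fresh nonce, preventing quote replay.
    return hmac.new(TPM_KEY, pcr + nonce, hashlib.sha256).digest()

# Boot: each stage is measured into the PCR before it runs.
pcr = b"\x00" * 32
for stage in (b"bootloader", b"ironclad-kernel", b"ironclad-app"):
    pcr = extend(pcr, stage)

# Remote verifier recomputes the expected PCR for the known-good stack
# and checks the signed quote against it.
nonce = b"fresh-challenge"
expected = b"\x00" * 32
for stage in (b"bootloader", b"ironclad-kernel", b"ironclad-app"):
    expected = extend(expected, stage)
assert hmac.compare_digest(quote(pcr, nonce), quote(expected, nonce))
```

Because the PCR can only be extended, a VM or a modified kernel cannot produce the known-good chain: any extra or different measurement changes every subsequent PCR value, and the quote no longer matches.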

~~~
kefka
>The TPM boot is what lets you do that.

Fine. Where can I go download my TPM key? Perhaps it is etched on the chip or
on a sticker?

Nope? What do you mean it's kept from me? Hmm.

~~~
drdaeman
You can upload one, but not download it, since the TPM's very purpose is to
take hold of the key and never disclose it. Well, I'm not sure that's true for
every module, but this is how it worked on my old ThinkPad X300 (I have no
other machines with a TPM; they're restricted technology here in Russia, and I
have no idea how my X300 slipped through).

No idea how one is initialized at the factory. However, I'm totally fine with
a TPM if I can control it, that is, upload my own keys and use it for my own
security. Still, there's an issue lurking: if everyone gets a factory-
initialized module with the industry's trusted keys and the majority keeps
those defaults, users who don't trust them may be a minority that is simply
ignored, and thus either left without any entertainment or forced to submit.

------
TheLoneWolfling
...And then you wait for a RAM error and your security goes out the window.

This has already happened (there was a proof-of-concept attack against the JVM
at one point that relied on this).

------
payload
substantiating a scientific argument with a dilbert comic [check]

