Keystone is a (still pretty early!) research project and enclave/TEE
framework for general use on RISC-V processors. We'll be using it
ourselves heavily as a platform for secure hardware and systems.
There are a couple of logical parts to Keystone:
- The security monitor (riscv-pk/sm) is the core of this, and is the
TCB for the entire system. It provides privacy by isolating enclave
memory from the rest of the system.
- The SDK/runtime provides one possible (minimal) example of how to
target and host applications in the enclave; we'll be building new
tools and runtimes as targets going forward.
- The demo, which uses our minimal sdk and runtime.
The demo isn't particularly flashy, but demonstrates how to accomplish
privacy and integrity against a malicious OS on a remote RISC-V
machine. Right now we don't have protection mechanisms against
side-channels and the like, but those are some of our next goals.
We're excited to build a community around open-source secure enclave
development on RISC-V and are actively looking for collaborators and
other researchers. Expect to see significantly more documentation and
information on keystone-enclave.org and docs.keystone-enclave.org in
the next week. Also check out the slides from our recent talk here:
Can you maybe compare this work with Hex Five MultiZone Security?
Are there plans to integrate this into Rocket (BOOM), PULP, lowRISC, and other SoCs?
Can you do all that a modern iPhone Secure Enclave provides?
There's also a branch of BOOM (https://github.com/riscv-boom/riscv-boom/tree/secureboom) that supports Keystone, we're just waiting for it to be merged into master.
... there's not much there.
(For the same application, Apple uses L4).
Workshop on Building Open Source Secure Enclave, with little information about what was presented. The Google slides are superficially interesting: https://keystone-enclave.org/workshop-website-2018/slides/Sc...
Mailing list: https://groups.google.com/forum/#!forum/keystone-enclave
Compare to the object/capability schemes in modern L4s.
Well, I don't know what they're doing applications-wise, but there's a seL4 port for RISC-V. Nothing seems to really be stopping anyone from putting enclave applications on seL4 in a RISC-V processor.
Do I understand correctly that this is basically a hardware execution environment (processor + RAM?) where code and data must be supplied encrypted with a public key, and the private key is unobtainable to the user? Is that the key feature? For users to be able to execute code without being able to know what that code is, and therefore be unable to copy and execute it elsewhere (aka engage in piracy)?
As I recall from my skimming of the issue back then, and from what's written about it on Wikipedia, the criticism centered mostly on the possibility of a manufacturer like Intel requiring booted software to be trusted by them (IOW, trusting Windows but not Linux and others because money, further entrenching Microsoft as a monopoly of consumer OSes):
> The concerns include the abuse of remote validation of software (where the manufacturer—and not the user who owns the computer system—decides what software is allowed to run
That's actually part of the "trusted boot" feature you mention, and not what allows for DRM.
There were also concerns of:
> possible ways to follow actions taken by the user being recorded in a database, in a manner that is completely undetectable to the user.
But that seems like an issue with using closed source stuff in general, not specifically of TEEs.
Anyway, I can see how DRM can always be maligned from a GNUish we-should-never-trust-closed-source perspective. While I very much appreciate the ideals that GNU/FSF promotes, I worry a bit about the current trend of software vendors preferring to provide software as a web service even when it doesn't really provide any technical benefit from the user's perspective. I know there are other reasons for doing so, but it seems to me that a big reason for the trend is that it's the most effective and available way to avoid piracy.
I wonder if widespread availability of TEEs (ones that wouldn't have the trusted boot issue mentioned) and a standard procedure like what I've mentioned in this SE question could reverse that trend of further making the web an operating system, moving ownership/control of our data and processes from our own machines to various online entities.
Given that I'm not going to have control over the software either way, I'd rather it were running on their hardware, safely isolated on the other side of an Internet link, than on hardware which I paid for and nominally own but which has been partitioned off for someone else's use.
TEEs and TPMs have legitimate uses, but only so long as they are fully controlled by the owner of the device. That implies that there are no pre-installed keys which the owner doesn't control: to a remote exploiter, an emulated TEE/TPM should be indistinguishable from an official hardware device. Unfortunately this is not something that can be designed into the hardware, short of omitting the feature altogether, since non-owner-controlled keys could be installed at any point prior to final delivery. Erasing them after delivery is no good; the mere expectation that the manufacturer's key is present is enough to make treacherous remote attestation practical. Devices controlled by their owners should be the norm, not second-class citizens.
The problem is ultimately a social one, not a technical one, but the technical capabilities of TEEs and TPMs are empowering the wrong side. From one point of view they may just be tools, but they're tools which are more readily used against the interests of device owners than for them.
All of the sys calls in the GitHub repo just set flags in something called magic_mem and then call hitf_syscall, for which I can't find a definition.
There's (I think) an analogous design in the Apple SEP; search "mailbox" in:
What exactly does it mean to proxy a syscall to the debugging host?
Keystone is in its early stages, and the first version contains a somewhat minimal set of functioning components.
In 1964, Multics was developed initially for the GE-645 mainframe [..] Bell Labs pulled out of the project in 1969; some of the people who had worked on it there went on to create the Unix system.