

An Exploration of ARM TrustZone Technology
http://genode.org/documentation/articles/trustzone

======
pjc50
"Hiding peripherals and memory from the non-secure world is a key feature of
TrustZone"

Whenever anyone says "secure", you must ask what is being secured, and against
whom. The purpose of the secure world is to run a small operating system with
privileged access to hardware that the outside world has no access to. It is
the ultimate in nonfree software: it cannot be inspected at all, even on the
binary level, and its loading can be governed by hardware keys. I believe it's
also secure against JTAG inspection although this article doesn't mention
that.

To use a device with TrustZone is to place total trust in the author of the
software inside the trust zone. If they want to put Superfish-style malware in
there, there's nothing you can do about it.

~~~
perone
As far as I understand, you can use open-source code for the monitor mode and
also for the secure world of the TrustZone, so you're not required to use non-
free software.

~~~
ptx
The question is who "you" is here – if the program is being secured against
the user, and you are the user, then you don't have freedom 1 of the free
software definition: "the freedom to study how the program works, and change
it so it does your computing as you wish".

To paraphrase Agent Smith: what good is the source code if you're unable to
run it?

------
Quequau
The USB Armory from Inverse Path looks like a great platform to try this out
on. It's shipping in March.

[http://inversepath.com/usbarmory#usbarmory_top](http://inversepath.com/usbarmory#usbarmory_top)

One of the devs presented a talk about it at CCC earlier this year.
[https://events.ccc.de/congress/2014/Fahrplan/events/6541.htm...](https://events.ccc.de/congress/2014/Fahrplan/events/6541.html)

------
Nursie
This kinda sounds like Intel's TXE, which allows you to run code in a state
that's non-debuggable and inaccessible to anything else on the system. Not
sure TXE covers peripherals, though.

It does have the possibility of being used for evil, but much like the TPM
chips in some systems, it can also be used for good.

------
userbinator
This somehow reminds me of x86 SMM:

[http://en.wikipedia.org/wiki/System_Management_Mode](http://en.wikipedia.org/wiki/System_Management_Mode)

 _In almost all cases, bootstrap code stored in the ROM switches to non-secure
mode prior to starting the boot loader, possibly to prevent access to certain
parts of the SoC that are not intended for public use._

One has to wonder why most SoCs do that, as it would be the perfect place to
hide a very deep backdoor of the government kind... one that is unremovable
and nearly impossible to inspect. At least with SMM, the code can still be
extracted from a BIOS dump; not so with a ROM internal to a SoC.

------
anonymousDan
Interesting article. Was at a talk at CCS last year by a guy from Samsung who
described how they are using TrustZone in recent Samsung devices. I think
Intel's SGX stuff looks even more useful though.

~~~
justincormack
Here are two good articles about SGX[1][2].

[1] [http://blog.invisiblethings.org/2013/08/30/thoughts-on-intel...](http://blog.invisiblethings.org/2013/08/30/thoughts-on-intels-upcoming-software.html)

[2] [http://blog.invisiblethings.org/2013/09/23/thoughts-on-intel...](http://blog.invisiblethings.org/2013/09/23/thoughts-on-intels-upcoming-software.html)

------
runeks
I'm really excited about this type of technology.

I don't think I would boot an entire separate OS in the secure world, but
rather a small, simple OS with secure key storage ability, secure display
ability, and the ability to boot Android as the insecure OS.

This gives the ability for applications, running on Android in the insecure
world, to store keys in the secure world, making them inaccessible to
applications in the insecure world. For example, one could create a new secret
key, residing in the secure world, and specify that it may only be read by
code that hashes to a certain value (and runs in the secure world).

So, a Bitcoin wallet, for example, could create a new key in the key store,
and -- upon creation of the key -- specify that it can only be read by a piece
of ECDSA signature code which hashes to a certain value, and runs in the
secure world. So the only thing the wallet can do is send an unsigned Bitcoin
transaction to a program running in the secure world, which then parses the
transaction, displays the amount and destination address(es) securely on the
display (not modifiable by Android running in the insecure world), and then the
user can accept or deny on the display. Upon accepting, the ECDSA signature
code running in the secure world will sign the transaction (inputs), and
deliver a signed transaction to the Bitcoin wallet software running in the
insecure world.
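The flow described above can be sketched in miniature. Everything here is hypothetical: `SecureWorld`, the key IDs, and the transaction format are invented for illustration, an HMAC stands in for the ECDSA signature, and a Python method call stands in for the actual secure monitor call a real TrustZone system would use. The point is just the policy: the key never leaves the "secure" object, and only code matching the registered hash, plus an explicit user approval, can produce a signature.

```python
import hashlib
import hmac

class SecureWorld:
    """Toy model of a secure-world key store. Each key is bound to the
    hash of the only code blob permitted to use it (illustrative only;
    not a real TrustZone API)."""

    def __init__(self):
        self._keys = {}  # key_id -> (secret, allowed_code_hash)

    def create_key(self, key_id, secret, allowed_code_hash):
        self._keys[key_id] = (secret, allowed_code_hash)

    def sign(self, key_id, code, tx, user_approves):
        secret, allowed = self._keys[key_id]
        # Gate 1: only the registered signing code may touch the key.
        if hashlib.sha256(code).hexdigest() != allowed:
            raise PermissionError("code hash mismatch")
        # Gate 2: the secure display shows tx and the user must confirm.
        if not user_approves(tx):
            raise PermissionError("user rejected transaction")
        # HMAC stands in for the real ECDSA signature.
        return hmac.new(secret, tx, hashlib.sha256).hexdigest()

SIGNER_CODE = b"ecdsa-signer-v1"  # stands in for the audited signing binary

sw = SecureWorld()
sw.create_key("wallet", b"supersecret",
              hashlib.sha256(SIGNER_CODE).hexdigest())

# The insecure world only ever submits a transaction and gets a signature
# back; it never sees the key material itself.
sig = sw.sign("wallet", SIGNER_CODE, b"pay 1 BTC to addr", lambda tx: True)
```

Note that tampered signing code (a different hash) or a user rejection both fail before the key is ever used, which is the property the comment relies on.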

Assuming this small "secure key storage" OS is implemented securely, and
protected by a password (with exponentially increasing waiting period after
each unsuccessful attempt), it wouldn't be possible for software running in
Android -- even kernel-level code -- to steal Bitcoin wallet private keys, or
any other keys protected by this mechanism. If this key storage OS is kept
small enough, it should be possible to create something that can be audited
for a reasonable sum of money.
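The exponentially increasing waiting period mentioned above is simple enough to sketch. The base delay and cap are invented numbers, not anything from the article:

```python
def lockout_delay(failed_attempts, base=1, cap=86400):
    """Seconds to wait before the next password attempt: starts at
    `base`, doubles with each failure, capped at one day. Illustrative
    policy only."""
    return min(base * 2 ** failed_attempts, cap)
```

After a handful of failures the delay is still tolerable for a legitimate user who mistyped, but brute-forcing even a short password becomes impractical once the cap dominates.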

Perhaps it could even be written in a language with superior type safety like
Haskell, to make it easier to reason about its security.

This would be a really big leap in information security -- provided we do it
right. It would mean other hardware key storage devices like SIM cards and
credit card chips would become less necessary, as secure phone apps would be
able to replace these. A VISA card could become an app on a phone, that can
securely sign transactions, thus reducing credit card fraud significantly.

The secret keys stored on a SIM card could be stored on the phone instead --
using a Diffie-Hellman key exchange between a program running in the secure
world and a server run by the cell tower operator. No more receiving SIM cards
in the mail.

------
eleitl
This is why I don't trust ARM. The only way to be really sure is to
synthesize your own CPU core and put it on an FPGA (though some FPGAs
ship with backdoors as well).

