This is the approach mandated by Nevada Gaming Commission slot machine regulations.
Thus, we may not be able to simply go back to a ROM with today's architectures. However, we can give today's systems something that behaves like a writeable flash chip, but is readily (and automatically) reset to a clean/factory state.
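A toy model of the idea above (mine, not from any comment here): the device accepts writes like normal flash, but keeps them in a volatile overlay that is discarded on reset, so every boot starts from the pristine factory image.

```python
# Sketch of a "writeable but auto-resetting" flash device: writes land in
# a volatile overlay; reset() discards the overlay, restoring factory state.
class ResettableFlash:
    def __init__(self, factory_image: bytes):
        self._factory = bytes(factory_image)  # immutable clean image
        self._overlay = {}  # addr -> byte, volatile scratch layer

    def write(self, addr: int, value: int) -> None:
        self._overlay[addr] = value  # never touches the factory image

    def read(self, addr: int) -> int:
        return self._overlay.get(addr, self._factory[addr])

    def reset(self) -> None:
        self._overlay.clear()  # back to the clean/factory state

f = ResettableFlash(b"\x00" * 16)
f.write(3, 0xFF)
assert f.read(3) == 0xFF   # write visible while running
f.reset()
assert f.read(3) == 0x00   # gone after reset
```

Real hardware would do this with an SRAM shadow of the flash array, but the behavior is the same.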
As far as Joanna's idea goes, this has been done twice before. I posted a business and legal analysis of it here:
What do you think of the analysis? Aside from I should patent my ideas more. ;)
That makes me sad. If it were the other way around (pull to VCC to enable writes), you'd at least need to add a pretty obvious bodge wire to the WP pin to enable writes.
What do you consider reasonable? Specs + price + level of security?
Or do you only run SSH/VNC/RDP and connect to a desktop someplace else?
I wouldn't suggest using a Chrome OS device for anything where opsec matters, but I find the similarities, at least in thought, striking. Chrome OS's early security benefits were that the device could be trusted if the dev switch wasn't flipped -- and it could be fully wiped and restored on demand if need be. The trusted stick described in the PDF would likely share similar characteristics as far as disposability goes.
In fact, for chromebooks for education, Google even allows the schools to MitM the traffic of the pupils with no way for the pupil to know it.
Which gets risky when the pupils take the device home, and the school still controls it.
The reason I call it twisted is because I'm wondering why that lock-down is necessary in the first place: you don't want them to "abuse" the device, causing potential damage? This might be a valid concern with company cars due to the life-threatening aspect of driving, but extending this concept to laptops seems abusive. Either way, the risk posed to the school or company's assets can be effectively taken into account by some kind of insurance policy, not by attempting to lock down the device.
Or GalliumOS, which is even better.
Then replace/restore the write-protection.
I’m just arguing that Secure Boot or similar solutions don’t mean that you can make sure to always know what’s on the system, but that the person with the keys can. In this case, that’s an employee at Google.
It also prevents some more exotic attacks like replacing the BIOS (but not any hardware) with a malicious one as the laptop is delivered, used without network access (but not with network access) and then stealing the laptop and trying to read unencrypted user data leaked by the malicious BIOS.
It is not effective against undetected arbitrary physical attacks (insert a keylogger between keyboard and motherboard) or against persistent software attacks against a single vulnerable OS (persist via the OS autostart mechanism and exploit the OS on each boot).
Having an external stick also mitigates detectable physical attacks (e.g. theft of laptop, or manipulation detected by a broken tamper-proof seal) where the attacker has already stolen the encryption password, since they still won't get the stick and thus won't be able to get the data anyway.
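That property falls out naturally if the disk key is derived from both the passphrase and a secret that lives only on the stick. A minimal sketch (the function name and iteration count are my own illustration, not from the paper):

```python
import hashlib

def derive_disk_key(passphrase: bytes, stick_secret: bytes) -> bytes:
    """Derive the disk-encryption key from BOTH the user's passphrase and a
    random secret kept only on the removable stick. An attacker who has
    stolen the passphrase alone (or the stick alone) recovers nothing."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, stick_secret, 200_000)

# In practice the stick secret would be random bytes set at provisioning.
stick_secret = bytes(32)
key = derive_disk_key(b"correct horse battery staple", stick_secret)
wrong = derive_disk_key(b"stolen-guess", stick_secret)
assert key != wrong and len(key) == 32
```

With this split, a stolen password plus a stolen (encrypted) disk is still a dead end without the stick.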
The stick being external doesn't seem to provide much advantage otherwise, since if the laptop hardware is malicious it doesn't help, and if it is not malicious then an internal trusted stick equivalent works just as well.
You can take out the external USB and keep it in your pocket when you go someplace you wouldn't want to carry a laptop (eg. public bathroom). Whether this is necessary depends on how paranoid you want to be.
> The stick being external doesn't seem to provide much advantage otherwise, since if the laptop hardware is malicious it doesn't help, and if it is not malicious then an internal trusted stick equivalent works just as well.
I think it provides a security-conscious user an added level of comfort/faith over a built-in solution. If you move the flash memory out to this external unit, and there is simply a three-wire interface that gives the system no way to permanently write the flash contents, that is a fairly solid and tangible promise. To some degree, you get to assert a new level of control over the "root of trust," at least the point at which it begins in firmware.
That doesn't mean that there is not room for motherboard vendors to improve things, but we will have to have faith in them having done things correctly. I am not even talking about a hostile motherboard vendor - there are plenty of good faith or half baked efforts that end up being circumventable.
I don't believe that it is possible to build a secure system which isn't based on trusting the device that you hold in your hands. At some level, you need to have a device which is capable of both UI and computation functions to a sufficient extent to validate whatever transaction you are attempting to sign. You could push that onto a smaller device than your laptop (we already know that phone-sized devices are viable), but you still have to end up at the thing you interact with for signing purposes being a device that you trust.
(Or rootkit the "trusted" usb stick)
If you're willing to accept stateful storage for the TPM then I agree this is straightforward, but then I don't think the "stateless device" has been achieved. If you're willing to trust the TPM's storage then you could have just used that to establish trust for everything (which is the status quo on chromebooks).
However, if you have that, you do get a significant benefit. You are no longer vulnerable to someone sticking a rootkit in your BIOS. That is where a lot of the up-and-coming SMM-level rootkits like to be installed. You can move a fair chunk of your root of trust (the firmware, etc) into a device separate from the notebook, and feel pretty comfortable that device is going to provide certain guarantees about flash memory contents that we do not have with today's systems.
I am not convinced that forcing attacks to be more likely to be detected is really much of a deterrent in a world where Lenovo can pre-install a rootkit in the bios and not suffer all that much from it. I still think there are overall benefits to aiming for a stateless system.
There is ROM (as far as we know) in the GPU which instructs it to read the CPU firmware from the SD card, and then boots up the machine. Unfortunately neither firmware is open source or well documented, so you still can't really trust it.
I don't see any assurance for anyone that doesn't control the foundry itself.
If anyone happens to know a suitable candidate, let me know. Wouldn't mind a bit of hardware replacing if the lone closed component could be swapped out.
I think AMD has open sourced most of their BIOS, and a lot more of their hardware supports IOMMU anyway. Maybe that is a more fruitful direction to consider.
Also, I have to disagree that FPGAs are ideal for the architecture proposed by this paper. Performance and state issues of an FPGA aside, they're field programmable, which seems more vulnerable than 'microcode updates'. Of course, you could just disable field programming, but why even use an FPGA in the first place?
Disclaimer: I believe Joanna is much smarter than I am, so I wouldn't be surprised if my comments are based on a fundamental misunderstanding.
That's pretty much what they note in "The SPI flash chip" section and what's replaced / muxed from the trusted stick in "Putting it all together" (page 15). SPI is that firmware provisioning component here.
Finally, regarding FPGAs, they're as programmable as you want them to be. There are a number of applications where they do indeed become write-once chips that just handle what needs handling. Additionally, depending on what you're doing, FPGAs can be more than fast enough -- there are a number of them that support more interesting busses and interconnects, like built in 10-gigabit ethernet. So basically, you end up using the FPGA as a chip fine-tuned and protected based on your needs, not generic needs.
What happens when you add a new peripheral device to your laptop that didn't exist when your read-only SPI-connected firmware repository was created? How do you solve this with less risk than what we have now? Eliminate hardware upgrades and peripheral devices in favor of disposable computers and e-waste?
I'm afraid the FPGA argument still doesn't make sense. Sure, the community could create a "trusted" processor or SoC, but why use an FPGA over a custom designed processor?
If the FPGA is reprogrammed at every reboot, we now have to ensure this process can't be exploited. If it's never reprogrammed, why use an FPGA in place of a CPU in the first place?
I appreciate the input and perspectives, but I still don't see how the "laptop" described in the paper is advantageous. There are many promising paths that move us much closer to secure computing, but simply moving firmware around doesn't seem to move us forward.
I'm probably missing something, but I don't see how this is feasible. Moving all firmware to a device that lives on an external bus means that you must either create a 'trustworthy' distribution channel for all supported firmware (including all system components and peripheral devices), or support only a select few devices and forbid adding any new peripherals.
In general, most of the firmware needed for system components is streamed from the main SPI chip to the various components as they are configured by the main system BIOS. Thus, there is mainly a single chip we are concerned about. However, the author identified a second embedded controller (EC) flash that is also usually present, so we end up being concerned about 2 flash modules. The author addresses other firmwares - the main one being discrete GPUs - and suggests having a system that does not include them.
> Also, I have to disagree that FPGAs are ideal for the architecture proposed by this paper. Performance and state issues of an FPGA aside, they're field programmable, which seems more vulnerable than 'microcode updates'. Of course, you could just disable field programming, but why even use an FPGA in the first place?
If your only interface between the computer and the FPGA is a three wire interface that emulates an SPI chip, that does not provide any vector for reconfiguring the FPGA. The bitstream for configuring the FPGA is provided via a completely separate set of hardware pins.
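To make that concrete, here is a toy model of such a host-facing interface (the opcodes are standard SPI-flash commands; the class and behavior are my own sketch). The host can issue read traffic all day; program/erase opcodes are accepted on the wire but never change the backing image, and nothing on this bus can reach the FPGA's separate configuration pins.

```python
# Toy model of a read-only SPI-flash emulator presented to the host.
# Standard SPI NOR opcodes: READ 0x03, PAGE_PROGRAM 0x02, WRITE_ENABLE 0x06.
READ, PAGE_PROGRAM, WRITE_ENABLE = 0x03, 0x02, 0x06

class ReadOnlySpiFlash:
    def __init__(self, image: bytes):
        self._image = bytes(image)  # immutable firmware image

    def transact(self, opcode: int, addr: int = 0, data: bytes = b"") -> bytes:
        if opcode == READ:
            return self._image[addr:addr + 256]
        # Write-path opcodes are silently dropped: no state ever changes,
        # so there is no vector from this bus to the stored firmware.
        return b""

flash = ReadOnlySpiFlash(b"\x55\xaa" * 128)
flash.transact(WRITE_ENABLE)
flash.transact(PAGE_PROGRAM, 0, b"\x00\x00")
assert flash.transact(READ, 0)[:2] == b"\x55\xaa"  # image unchanged
```

The key point is architectural: the write path simply isn't wired to anything, rather than being gated by a software check that could be bypassed.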
FPGAs don't retain their configuration permanently; they read it from an EEPROM at power-up. If state is evil, the EEPROM would have to be on the stick itself.
Decent intro to ASIC design with FPGA comparison and a pricing chart to illustrate the high costs:
Note that the tooling that makes this possible, esp design synthesis, can run to $1+ million per user per year. Some are merely upper 5-digits to lower 6-digits. Really inexpensive. Mask costs have come down for older nodes in recent years because fab equipment is finally paid off. Yet, you're still talking millions for a full SOC with modern features. And it will be slow as hell because it's on old stuff if it's a CPU.
That the market demands more speed, more functions, less power, etc is why they keep dropping to smaller node sizes. Each one adds new effects that try to break the chip. The electrons even tend to leak out of the transistors. Can't even assume they'll stay in them haha. Actually, from what I've read, it appears chips are broken all over on latest nodes with lots of logic there just to correct that. Here's an example of the crap they have to do at 28nm, which isn't cutting-edge anymore:
So, you need specialists that make big $$$, $1+ million in EDA tools, mask costs at millions a set per trial, and other stuff like boards (regularly 6 digits on kickstarter). That's for an ASIC. An FPGA's design flow ends at the RTL simulation part, has no mask costs, free to cheap EDA, and often has pre-made boards you can use. Price you pay is lower-than-ASIC performance, higher watts, and very-high per unit price. Still a better deal on lots of systems plus can be converted to a hybrid later (see eASIC Nextreme).
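The economics above boil down to a break-even calculation. All dollar figures below are illustrative assumptions of mine, not quotes:

```python
# Back-of-the-envelope break-even volume between an FPGA and an ASIC run.
fpga_unit_cost = 300        # high per-unit price, near-zero NRE
asic_nre = 3_000_000        # specialists + EDA tools + masks + boards (rough)
asic_unit_cost = 30

def total_cost(units: int, nre: float, unit_cost: float) -> float:
    return nre + units * unit_cost

# The ASIC only wins once volume amortizes the NRE:
break_even = asic_nre / (fpga_unit_cost - asic_unit_cost)
print(round(break_even))  # ~11,000 units under these assumptions
```

Below that volume, paying the FPGA's per-unit premium is the cheaper path, which is exactly the situation a niche secure-laptop project would be in.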
Hope that clears up why one would choose an FPGA over an ASIC. All that said, the difficulties they're facing in this case are largely due to the choice to stay on Intel, Xen, and other difficult-to-secure crap. If one forgoes that software, then one can use Cobham-Gaisler's SPARC SOCs, since they're designed for easy modification & already at quad-core. Academics made many secure CPUs out of Gaisler's stuff. Just gotta license it, modify it, and run it through the later parts of the ASIC flow. FPGA still cheaper, but you can FPGA it too. :)
Also, you would take your I/O with you: it makes no sense, if you hardly trust even a device you never let leave your side, to interface it with static hardware that any third party could have modified. Current monitors have more processing power than early mainframe computers and more than enough room to hide RF equipment for remote snooping.
I've bought a 512GB SDXC card for the purpose of backing up my laptop, and often wonder whether to use it as a boot device. It's much less vulnerable to theft when it's safe in my pocket than in a bag.
I'd make one small change. Rather than aim for a laptop first, mod a WiFi SD card or other pocket-sized device. The KeyAsic platform (PQI Air Card/Transcend) has been extensively hacked, and Ubuntu can run on it. Client devices (laptop, phone, etc) could connect over WiFi and run VNC through a web browser. It's still vulnerable to keystroke logging on the client, but it would be possible to switch clients halfway through typing important messages. In my opinion the most secure client device would be an iPod running Rockbox, and connecting to the PQI Air Card over serial. My "WiPod" seems like the closest thing we have to a practical pocket-sized open source device, and it lets me share photos from an SD card to my phone :).
With these firmware attacks, compromising a device at one point in time may allow the compromise to persist even if the user reinstalls the OS or replaces it with a different one.
One way to see this paper is as a response to
proposing more details of a safer future platform.
Right now, someone who can briefly get kernel-level control on a machine intended to run your OS might be able to reprogram the hard drive firmware. At that point you have a serious authenticity challenge when booting your OS, because the hard drive can alter the contents of particular binaries at the moment they're read from disk. There are some powerful software-only defenses against this, but if an attacker knows which ones you use, they can probably design an attack that evades those.
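The simplest of those software-only defenses is to check each boot-critical binary against a digest recorded somewhere the drive cannot rewrite. A minimal sketch (filenames and contents are placeholders of mine), including the reason it's evadable:

```python
import hashlib

# Compare boot-critical binaries against digests stored off the drive
# (e.g. on read-only media). A drive that knows this check runs could
# still serve clean bytes to the checker and altered bytes to the
# loader, which is exactly why the defense is evadable in principle.
KNOWN_GOOD = {"vmlinuz": hashlib.sha256(b"pristine kernel image").hexdigest()}

def verify(name: str, contents: bytes) -> bool:
    return hashlib.sha256(contents).hexdigest() == KNOWN_GOOD.get(name)

assert verify("vmlinuz", b"pristine kernel image")
assert not verify("vmlinuz", b"pristine kernel image" + b"\x90")
```

The TOCTOU gap between "bytes shown to the checker" and "bytes actually executed" is what makes drive-firmware attacks so hard to rule out from software alone.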
"The fundamental design flaw of all of these compromised password managers, keychains, etc. is that they keep state in a file. That causes all sorts of problems (syncing among devices, file corruption, unauthorized access, tampering, backups, etc.)."
(disclaimer: I wrote #2)
I'm also for separation of church and state.
As far as I know, I came up with it first, with a proposal on Schneier's blog to put both the CPU and trusted state on a stick or card you inserted into a machine containing only peripherals, maybe with RAM. Research CPUs at the time had RAM encryption/integrity so the RAM could be untrusted. I was thinking PC Card rather than stick due to EMSEC, storage, and cost issues. I'll try to find the link later today.
It was actually inspired by foreign airport security compromising people's stuff. People asked me to develop a convenient solution. So, the real problem was physical access to the trusted components. That access couldn't be allowed, but we can't keep all our gear with us or away from inspection. A simple chip or PC Card they carried on would be better. The chassis, from laptop to whatever, they could acquire in country or ship separately with inspection. I further imagined a whole market popping up supplying both secure sticks/cards and the stuff you plug them into. Inspiration for that was the iPod & its accessories like docks. One more part was that each user could determine how much protection, from tamper-evidence to EMSEC, to apply to their trusted device.
As it sometimes happens, another company showed up with government backing IIRC and R&D on security devices. Their proposed portfolio was very similar. They undoubtedly started patenting all of it. This created a second risk for anyone attempting what I or now Joanna is attempting: a greedy, defence-connected, third party legally controlling pieces of your core business. They usually just rob people but I predicted on Schneier's blog & later here in a heated debate that they could attempt to change or get rid of the product using their patents. Especially true if a proxy for an intelligence agency. We might have just seen that happen with Apple over iMessage but I can't be sure. Anyway, do know there's both prior art and probably patents on these concepts in defense industry.
So, it was a cool concept. It was one of those I was proudest of given it collapsed problems with all kinds of devices to design and protection of one component. That's basic Orange Book-era thinking I try to remember. Unfortunately, after much debate with marketing types, we determined there was a chicken and the egg problem with these [at the time]. The NRE cost would be high to the point you'd want to be sure there was a demand for thousands of them plus people willing to pay high unit prices. Custom laptops were often closer to $10,000 than $3,000 if low volume. My greater market idea was chicken-and-the-egg times a million. That plus risk of 3rd party patents made me back off the idea as nice but not practical.
Since then, what's changed is dramatically lower cost for homebrew hardware or industrial prototyping. Projects like Novena show it can probably be done for lower NRE than before. However, this is security-critical design that needs strong expertise in both hardware (esp analog/RF) and Intel x86. That will up the NRE and odds of them screwing up. ARM or MIPS ("cheaper ARM") might be easier to do but still need HW expert and significant NRE.
So, there's my take. It's a good idea that two of us in security industry already fleshed-out with removable firmware being proven in ancient mainframes. Serious marketing obstacles to getting this done and done securely. A high-level design for the technology, as I did, is pretty straight-forward and will teach one many lessons. It was a good learning experience if nothing else.
Also changed is industry awareness of x86 platform (in)security, an increased role for the Intel ME in 2016 laptops, and the existence of a software-hardware partnership (Qubes-Purism) that could advance the proposed architecture.
Any open-hardware implementation of these ideas has the potential to influence mainstream x86 OEMs, as OLPC inspired the netbook category. The more people who design and build open hardware prototypes, the faster the industry can converge on a disaggregated firmware/hardware TCB.
Regardless, it was very important to move the CPU out, given the high chance of it being targeted or subverted. It is literally the root of trust for computation. I protect it because I assume attackers will be smarter than me and use it against me somehow. As far as cooling goes, I admit I didn't think much about it for the high end: I just decided on efficient CPUs where that wasn't so much of a problem. Think along the lines of the card computers that need no cooling but have good performance.
"Joanna's approach is much more attractive in terms of being something that involves very little modification of existing platforms."
Convenience vs security. Always a tradeoff. I promise you that in physical security you'll find the more convenient versions will usually get you screwed. Especially if EMSEC or subversion matters to you. I'm holding off reviewing specifics of her work until she finishes it. No promises that I will, but I'd rather wait for the finished thing given the nature of this topic. I'm writing on the general concept, which predates it on paper and partly in real products.
In the past a CPU was attached to a relatively low speed bus, and the peripheral interconnects all came off some external chip. These days you've got PCIe coming off the CPU package and memory clocks in the GHz range, so the mechanical aspects of this become massively more inconvenient. Even ignoring that, once you've got storage and CPU on the card, you've basically got a card that's a significant proportion of the size and weight of a laptop. At which point you could just carry the laptop instead.
> Think along the lines of the card computers that need no cooling but have good performance.
The attempts on that side (such as the Motorola phones that had laptop-style docks available) have been complete failures.
> I promise you that in physical you'll find the more convenient versions will usually get you screwed
And a solution that's excessively inconvenient will just be ignored.
It's strange because my friend's desktop CPU fit into my hand and plugged into place. That was a year or two ago. If that's no longer possible, though, then the CPU can't be extracted into its own device and my scheme can't apply.
In a literal sense, yes - laptop parts are designed for SMT only.
> Everything else, including cooling, could be built into laptop part
The point of this design is to allow users to take their state with them when they leave a hotel room without having to worry about the rest of the system being tampered with. You need the removable device to be packaged such that it's trivially removable, fits in a pocket, and is sufficiently hard-wearing that it won't be damaged. Your approach would require it to have a several hundred-pin connector and some means to bind into the cooling design, and that's an incredibly non-trivial engineering problem.
Just gotta have something that does computation & storage that will not lie to its user.
"Considered harmful" is a great format because the title makes it clear that the author is intentionally taking one side, leaving them free to concentrate on that side of the story.
Especially since this isn't really a typical one, apart from the title, but includes quite a bit of analysis and proposals.
Democracy gives you a say in politics. Rule of Law gives you a safe life (at least from governmental actions).
I concede that the two may not be unrelated. But this is not necessarily the case.
First of all, one important rule of democracy is that you cannot vote to strip someone's rights in a democracy (or alternatively, democratic decisions have to be reversible), otherwise you get Russell-style paradoxes. The existence (or not) of democracy itself cannot be decided democratically; this is an often neglected but nevertheless important rule.
Second, your example is unrealistic. There are far fewer wolves than sheep in the real world (literally!). In the real world, this is not really a big concern, because you wouldn't want to live in a society where there are more wolves than sheep anyway. Almost every social technology (including evolved human cooperation) is predicated on this being false.
Of course, there are other things that are important for society and orthogonal to democracy - as you mention, for example, rule of law. But I disagree that rule of law keeps you safe from government actions - the state is typically the actor who enforces the law. So I very much disagree with the sentence "Rule of Law gives you a safe life (at least from governmental actions)." I would like to see an example of a society where this is true despite it not being democratic, since democracy effectively gives you control over government actions.
The United States is not a Democracy, it is a Representative Republic. It happens to use _some_ bits from the democracy toolbox.
As to any back-and-forth regarding people's rights being voted away and courts of law re-establishing them, there is the issue of slavery, and most recently, gay marriage.
This is very obviously false. It may not have been established with that intent, but today (especially after the progressive movement) it's one of the most advanced democracies in the world. Of course there is still a long way to go.
For example, there is a paper from the Cato Institute showing (across different U.S. states) that more democracy leads to better management of the state budget.
> people's rights being voted away
This is a misrepresentation - people's rights are not being voted away; what happens in these cases is that social progress is not as fast as some liberals would want. (In other words, there is a huge difference between regress and slow progress.) Democracies are conservative, as most normal people are (and this is arguably good engineering practice). And slavery was actually supported by the representatives (you're contradicting yourself here a bit: if you want to blame democracy for slavery, then you shouldn't say that the U.S. wasn't a democracy).
This is the so-called "panarchy" arrangement and emerges organically from most forms of anarchism.
Every community is responsible for its own defense, which it can hire out or assemble itself.
I think there needs to be some balance of these two opposing forces. At minimum, I would love to see these anarchist ideas backed up by some computer model.
The most prominent problem with the view on resources is that the prevailing mindset treats nature as a free resource. Which is fatally wrong! Again, in theory anarcho-capitalism should fix this too, but personally I don't trust individuals to overcome their greed for a greater good.
I don't buy the idea that there are no free resources in the world. It seems very obviously false, it seems isomorphic to labor theory of value, and leads to logical contradictions (such that value is not monotonic in effort). At minimum, there are other actors in the world that produce value; take for instance a hen producing an egg. The egg is valuable and is produced by the effort of the hen (scavenging for food). Yet we cannot assign this value to any person, it just comes for free from nature.
There is no such thing as free lunch:
I'm not nearly as privacy conscious or paranoid as the author, so I'm satisfied with the convenience of a stateful laptop. I don't even have a screen lock when it wakes from sleep. If you want to use a stateless machine like the author describes, you're going to need a personal server or a cloud provider you really trust to keep your stuff.
Edit: ge0rg had already posted link to non-PDF version.
I think that is also part of the author's concern with Intel ME being present on all systems. It is a separate microcontroller in the chipset that has power on the level of "ring -3" (I believe it is used to implement much of the new SGX instruction set, for example).