
State Considered Harmful – A Proposal for a Stateless Laptop - kushti
http://blog.invisiblethings.org/2015/12/23/state_harmful.html
======
Animats
A read-only memory device for firmware would be helpful. Not flash, not
something erasable, but hard ROM, in a socketed device. All the firmware would
go in that ROM, and you could take it out if necessary and compare it with
other ROMs. There are simple hard-wired ROM comparison devices.

This is the approach mandated by Nevada Gaming Commission slot machine
regulations.[1]

[1] [http://gaming.nv.gov/modules/showdocument.aspx?documentid=3309](http://gaming.nv.gov/modules/showdocument.aspx?documentid=3309)
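The hard-wired comparators Animats mentions do a byte-for-byte check in silicon; in software, the same verification is just comparing digests of two dumps. A minimal sketch (the file paths and function names are hypothetical):

```python
import hashlib

def rom_digest(path: str, chunk_size: int = 65536) -> str:
    """SHA-256 digest of a firmware image, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def roms_match(path_a: str, path_b: str) -> bool:
    """True if two ROM dumps are byte-for-byte identical."""
    return rom_digest(path_a) == rom_digest(path_b)
```

A dedicated hardware comparator is stronger than this, of course: it checks the physical part in the socket, not a dump that potentially lying hardware handed you.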

~~~
rolandr
Once upon a time, most/many PCs had physical BIOS protection in the form of a
jumper on the motherboard that would let you put the BIOS into a read-only
state. However, we have now had many years where such control cannot be
manually asserted by the end user, and the flash just sits there writable
(and although there are chipset-level firmware write protections, various
hacks, like Dark Jedi, have found ways around them). Plus, apparently even
when you pull down the WP pin on some flash chips, the hardware setting can
still be overridden by software commands. The paper suggests, particularly
with the more recent versions of Intel ME, that the PC architecture has now
evolved to expect, and perhaps require, access to a writable BIOS flash (in
part because the chip stores not only firmware, but also things like
configuration settings and data for the new ME-implemented TPM).

Thus, we may not be able to simply go back to a true ROM with today's
architectures. However, we can give today's systems something that behaves
like a writable flash chip but is readily (and automatically) reset to a
clean factory state.
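The WP-pin caveat above can be made concrete. On common SPI NOR parts (the bit layout below follows Winbond W25Q-series datasheets; treat it as illustrative), the /WP pin only locks the status register when the SRP0 bit is set. With SRP0 clear, software can freely rewrite the block-protect bits regardless of the pin:

```python
def decode_status_register(sr1: int) -> dict:
    """Decode a W25Q-style SPI flash Status Register-1 byte.

    With srp0 == False, the /WP pin is ignored and software
    Write Status Register commands can clear the block-protect
    (BP) bits at will.
    """
    return {
        "busy": bool(sr1 & 0x01),   # write in progress
        "wel":  bool(sr1 & 0x02),   # write enable latch
        "bp":   (sr1 >> 2) & 0x07,  # block-protect bits BP0..BP2
        "tb":   bool(sr1 & 0x20),   # top/bottom protect select
        "sec":  bool(sr1 & 0x40),   # sector/block protect granularity
        "srp0": bool(sr1 & 0x80),   # status-register protect
    }

# Example: all BP bits set but SRP0 clear -- "protected", yet
# software can still undo the protection with a WRSR command.
print(decode_status_register(0x1C))
```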

~~~
mehrdada
Some Chromebooks have a "flash write-protect screw"[1] that basically makes
the firmware read-only, acting as a physical barrier against reflashing from
software (though I am not sure whether there are ways to work around it).

[1]: [https://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices/acer-c720-chromebook](https://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices/acer-c720-chromebook)

------
Shank
This is interesting to me, because this is the same sort of thinking that led
to the current iteration of Chrome OS devices. While not entirely stateless
as the paper suggests, they clearly follow the same train of thought. Without
the developer switch flipped, verified boot should ensure that Google (and
anyone with its private key) is the sole originator of the code on the device.

I wouldn't suggest using a Chrome OS device for anything where opsec matters,
but I find the similarities, at least in thought, striking. Chrome OS's early
security benefits were that the device could be trusted if the dev switch
wasn't flipped -- and it could be fully wiped and restored on demand if need
be. The trusted stick described in the PDF would likely share similar
characteristics as far as disposability goes.

~~~
kuschku
The issue with Chrome OS is that you know Google controls everything
that's on it, which is not very useful for someone who wants actual security.
A state actor could still force Google to install spyware on the devices.

In fact, for Chromebooks for education, Google even allows the schools to MitM
the pupils' traffic with no way for the pupil to know it.

Which gets risky when the pupils take the device home and the school still
controls it.

~~~
emidln
How is that any different from an employer forcing the laptops it provides to
employees through a VPN (where the user is subject to
inspection/blacklists/whitelists/etc.) all the time? In both situations you
have a machine provided to someone for a specific purpose (business/education)
that likely comes with very particular acceptable-use terms. MitM-ing the
traffic on whatever network seems like a valid way to enforce that. Do your
banking on your own machine.

~~~
pierrec
These forced MitMs are all very similar, though I don't see how that gets us
anywhere. It's not a valid way of enforcing any policy no matter how twisted,
because people will root their devices and circumvent the lock-down.

The reason I call it twisted is that I wonder why the lock-down is necessary
in the first place: you don't want them to "abuse" the device, causing
potential damage? That might be a valid concern with company cars, due to the
life-threatening aspect of driving, but extending the concept to laptops seems
abusive. Either way, the risk posed to the school's or company's assets can be
effectively covered by some kind of insurance policy, not by attempting to
lock down the device.

~~~
jackgavigan
"So how _did_ the hackers get into the corporate network?" "One of our execs
downloaded a game from the Internet that contained malware and then he
connected his laptop to the corporate network..."

~~~
kuschku
If a single person with a laptop can damage your whole network, you have
completely different issues.

------
devit
As far as I can tell, this proposal makes it possible to "factory reset" a
system in a way that removes all malware, and also allows "dual booting" OSes
without any of the OSes being able to compromise the others (by flashing
malicious firmware which then exploits the other OS on a subsequent boot).

It also prevents some more exotic attacks, like replacing the BIOS (but not
any hardware) with a malicious one as the laptop is delivered, having the
laptop used without network access (but not with network access), and then
stealing the laptop and trying to read unencrypted user data leaked by the
malicious BIOS.

It is not effective against undetected arbitrary physical attacks (insert a
keylogger between keyboard and motherboard) or against persistent software
attacks against a single vulnerable OS (persist via the OS autostart mechanism
and exploit the OS on each boot).

Having an external stick also mitigates detectable physical attacks (e.g.
theft of laptop, or manipulation detected by a broken tamper-proof seal) where
the attacker has already stolen the encryption password, since they still
won't get the stick and thus won't be able to get the data anyway.

The stick being external doesn't seem to provide much advantage otherwise,
since if the laptop hardware is malicious it doesn't help, and if it is not
malicious then an internal trusted stick equivalent works just as well.

~~~
loudmax
> The stick being external doesn't seem to provide much advantage otherwise,
> since if the laptop hardware is malicious it doesn't help, and if it is not
> malicious then an internal trusted stick equivalent works just as well.

You can take the external USB stick out and keep it in your pocket when you
go someplace you wouldn't want to carry a laptop (e.g. a public bathroom).
Whether this is necessary depends on how paranoid you want to be.

------
asuffield
It's an interesting idea, but probably hopeless: all this does is move the
goalposts. You're still fundamentally trusting the hardware to read and obey
the instructions on your weird "external SPI" device, and if you're willing to
trust the hardware then you might as well just put it on the motherboard like
today's devices do. If you don't trust your hardware (and I challenge anybody
to prove that a motherboard full of chips hasn't had a chunk of flash memory
added to it without your knowledge) then you have no reason to think that the
code it loaded was the code stored on your "trusted stick".

I don't believe that it is possible to build a secure system which isn't based
on trusting the device that you hold in your hands. At some level, you need to
have a device which is capable of both UI and computation functions to a
sufficient extent to validate whatever transaction you are attempting to sign.
You could push that onto a smaller device than your laptop (we already know
that phone-sized devices are viable), but you still have to end up at the
thing you interact with for signing purposes being a device that you trust.

~~~
mjg59
Moving the bar from "This machine can be subverted by anybody who can access
the SPI bus" to "This machine can be subverted by anybody who can solder on
additional hardware that intercepts and modifies bus activity" is still
significant.

~~~
asuffield
I don't think soldering's really required. If I were attacking the proposed
system, I'd just jam a variation on those nifty miniature USB keyloggers into
the path of the USB port itself. If somebody was building devices like this,
then somebody else would soon create such a device that could be slipped into
place in seconds (possibly even one you could insert into the USB port without
opening the case, like the fake card slots that ATM skimmers use).

(Or rootkit the "trusted" usb stick)

~~~
mjg59
What would that get you? The input devices are almost certainly going to be
internal and PS/2 or I2C, and if you're the sort of person doing this you'd be
using encrypted storage.

~~~
asuffield
I mean tapping the external SPI connection at that point. You can't encrypt
the first stage that gets loaded into the CPU, so you would simply replace
that with your rootkit and then continue as normal until the user types the
decryption key into the now-compromised device.

~~~
mjg59
The CPU measures the first block of the firmware into the TPM. This is already
a solved problem.
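The measurement mjg59 refers to is a hash chain: each boot stage extends a PCR, a register that can only be updated through the extend operation, never set directly. A toy model of the arithmetic (the real TPM does this internally; the stage names here are placeholders):

```python
import hashlib

def pcr_extend(pcr: bytes, data: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || SHA-256(data))."""
    return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

# PCRs start at all zeroes; each boot stage is folded in, in order.
pcr = bytes(32)
pcr = pcr_extend(pcr, b"first block of firmware")
pcr = pcr_extend(pcr, b"remaining firmware")
# Any change to any stage, or to the order, yields a different final
# value, so secrets sealed to this PCR become unrecoverable.
```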

~~~
asuffield
I'm fascinated by the idea of having a TPM without any on-board storage for it
to use. How do you propose that would work?

If you're willing to accept stateful storage for the TPM then I agree this is
straightforward, but then I don't think the "stateless device" has been
achieved. If you're willing to trust the TPM's storage then you could have
just used that to establish trust for everything (which is the status quo on
chromebooks).

~~~
mjg59
As described in the article, PTT includes a TPM running on the ME. The CPU
loads the ME firmware (which is validated against a key on the ME), then
starts executing the rest of the firmware (including copying measurements to
the TPM).

~~~
asuffield
So it just boils down to TPM-protected encrypted storage? That obviously works
(because it's how a bunch of devices work today), but it's a lot less
exciting... if you can set up a full TPM stack for sealed storage (which we
don't have on consumer linux today :( ) then I don't see what attacks this
"stateless laptop" defends you against that the TPM doesn't already handle.

------
kriro
I really wish there were a good laptop that runs Qubes OS and has no
blobs/closed code/firmware at all. I do some research on this every now and
then and always come up short. The Librem 15 was the last one I looked into,
but it had closed components. Something like the Novena can't run Qubes but
would otherwise be ideal (I'd gladly give up some battery life, looks, and
whatnot for pure freedom).

If anyone happens to know a suitable candidate, let me know. I wouldn't mind
a bit of hardware replacing if the lone closed component could be swapped out.

~~~
rolandr
I do not believe there are any Intel chips that give you VT-d (IOMMU) and
do not require a firmware blob. Blame Intel for that situation.

I think AMD has open-sourced most of their BIOS, and a lot more of their
hardware supports IOMMU anyway. Maybe that is a more fruitful direction to
consider.

~~~
kriro
You are absolutely right. In fact I didn't know that Intel ME is that evil.
Interesting (a bit hard to follow imo) talk from the recent CCC:
[https://media.ccc.de/v/32c3-7352-towards_reasonably_trustwor...](https://media.ccc.de/v/32c3-7352-towards_reasonably_trustworthy_x86_laptops#video)

------
vrtx0
I'm probably missing something, but I don't see how this is feasible. Moving
all firmware to a device that lives on an external bus means that you must
either create a 'trustworthy' distribution channel for all supported firmware
(including all system components and peripheral devices), or support only a
select few devices and forbid adding any new peripherals. It also means
peripheral devices must either be capable of bootstrapping themselves from
this chip, or the system must provide a mechanism that 'sends' the firmware
to all peripherals before booting. The complex interdependency of this boot
process seems impractical to me.

Also, I have to disagree that FPGAs are ideal for the architecture proposed by
this paper. Performance and state issues of an FPGA aside, they're field
programmable, which seems more vulnerable than 'microcode updates'. Of course,
you could just disable field programming, but why even use an FPGA in the
first place?

Disclaimer: I believe Joanna is much smarter than I am, so I wouldn't be
surprised if my comments are based on a fundamental misunderstanding.

~~~
Sanddancer
Computers already have the type of bus she's talking about. A lot of the low-
level/boot level components use SPI/I2C/LPC/etc busses because they're so dead
simple to implement. For peripheral cards, PCI-e already supports I2C (well,
SMBus technically, but they're close enough), so extension there would also
not be terribly difficult.

Finally, regarding FPGAs: they're as programmable as you want them to be.
There are a number of applications where they do indeed become write-once
chips that just handle what needs handling. Additionally, depending on what
you're doing, FPGAs can be more than fast enough; there are a number of them
that support more interesting busses and interconnects, like built-in
10-gigabit Ethernet. So basically, you end up using the FPGA as a chip fine-
tuned and protected based on your needs, not generic needs.

~~~
vrtx0
My point is not that it's difficult to design a peripheral device that loads
its firmware over SPI (or whatever bus). My point is that firmware for all
supported devices must be maintained on the proposed external device.

What happens when you add a new peripheral device to your laptop that didn't
exist when your read-only SPI-connected firmware repository was created? How
do you solve this with less risk than what we have now? Eliminate hardware
upgrades and peripheral devices in favor of disposable computers and e-waste?

FPGA: I'm afraid the FPGA argument still doesn't make sense. Sure, the
community could create a "trusted" processor or SoC, but why use an FPGA over
a custom designed processor?

If the FPGA is reprogrammed at every reboot, we now have to ensure this
process can't be exploited. If it's never reprogrammed, why use an FPGA in
place of a CPU in the first place?

I appreciate the input and perspectives, but I still don't see how the
"laptop" described in the paper is advantageous. There are many promising
paths that move us much closer to secure computing, but simply moving firmware
around doesn't seem to move us forward.

------
OJFord
Interesting (and surprising to me) that the word 'laptop' is used instead of
'computer'.

~~~
gbtw
It is the formfactor of a portable computer that you could own completely,
unlike anything more mobile where at least some of the chips are configured
and run by someone else.

Also you would take your I/O with you because it makes no sense if you hardly
trust a device you don't let leave your side to interface with some static
hardware that any third party could have modified. Current monitors have more
processing power than early mainframe computers and more than enough room to
hide rf equipment for remote snooping.

------
joveian
One random detail that came to mind: I guess the clock would now need to live
on the trusted stick?

------
joveian
One thing I would love to have, and that could solve the worst-case early-boot
encryption issues, is an external crypto processor that generates and stores
long-term keys, implements authentication, and only releases temporary keys to
the main system (to which it connects via a simple serial link). It would have
enough of a screen and input to receive a password and query the user before
performing various actions: something like a Bitcoin Trezor, but with slightly
richer input and for more general crypto use. Ideally, such a device could
even physically store the trusted stick (or several), although the stick
shouldn't interact with the rest of the system differently from any other
device, for maximum reliability. This way the most sensitive crypto is not
performed on a general-purpose system, and the user could authenticate to the
device once; the device could then authenticate the user and provide keys to
multiple independent systems without hassle. It is an additional expense, so
hopefully it wouldn't be necessary, but it would be one way to solve the
early-boot encryption problem (if needed, and if less expensive solutions do
not work) in a not completely special-purpose way.
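The key-release pattern described above can be sketched in a few lines. The class and method names are made up for illustration; the point is that the long-term key never crosses the serial link, only HMAC-derived temporary keys do:

```python
import hashlib
import hmac
import os

class CryptoToken:
    """Toy model of the external crypto processor described above."""

    def __init__(self) -> None:
        # Generated on-device; never readable from outside.
        self._longterm = os.urandom(32)

    def session_key(self, context: bytes, nonce: bytes) -> bytes:
        """Derive a temporary key bound to a context label and nonce.

        Compromise of a session key does not reveal the long-term
        key, and the device could require user confirmation (via its
        own screen and input) before answering.
        """
        return hmac.new(self._longterm, context + nonce,
                        hashlib.sha256).digest()
```

Each attached system would call `session_key` with its own context (e.g. a disk-encryption label) after the user has authenticated to the token once.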

------
peterburkimsher
SD cards have SPI pins, so I'd recommend SD for the Trusted Stick rather than
a repurposed USB device.

I've bought a 512GB SDXC card for the purpose of backing up my laptop, and
often wonder whether to use it as a boot device. It's much less vulnerable to
theft when it's safe in my pocket than in a bag.

I'd make one small change. Rather than aim for a laptop first, mod a WiFi SD
card or other pocket-sized device. The KeyAsic platform (PQI Air
Card/Transcend) has been extensively hacked, and Ubuntu can run on it. Client
devices (laptop, phone, etc) could connect over WiFi and run VNC through a web
browser. It's still vulnerable to keystroke logging on the client, but it
would be possible to switch clients halfway through typing important messages.
In my opinion the most secure client device would be an iPod running Rockbox,
and connecting to the PQI Air Card over serial. My "WiPod" seems like the
closest thing we have to a practical pocket-sized open source device, and it
lets me share photos from an SD card to my phone :).

------
ge0rg
Non PDF version rendered by github:
[https://github.com/rootkovska/state_harmful/blob/master/stat...](https://github.com/rootkovska/state_harmful/blob/master/state_harmful.md)

------
jakeogh
C3TV – Towards (reasonably) trustworthy x86 laptops:
[https://news.ycombinator.com/item?id=10833637](https://news.ycombinator.com/item?id=10833637)

------
hendry
My OS [https://webconverger.com/](https://webconverger.com/) gets you mostly
there. The slate is wiped clean between sessions like
[https://en.wikipedia.org/wiki/Privacy_mode](https://en.wikipedia.org/wiki/Privacy_mode)
of course.

~~~
schoen
While an OS that doesn't preserve state is an important component of
Rutkowska's proposal, and your OS might be one basis for that component, I
don't think this is "mostly there" in terms of everything that the paper
discusses. Much of what's new in the paper is about _hardware_ issues,
especially because it's concerned with firmware attacks that are already being
used by attackers like NSA, and that other people clearly understand how to
develop in principle.

With these firmware attacks, compromising a device at one point in time may
allow the compromise to persist even if the user reinstalls the OS or replaces
it with a different one.

One way to see this paper is as a response to

[http://www.slideshare.net/hashdays/why-johnny-cant-tell-if-he-is-compromised](http://www.slideshare.net/hashdays/why-johnny-cant-tell-if-he-is-compromised)

proposing more details of a safer future platform.

Right now, someone who can briefly get kernel-level control on a machine
intended to run your OS might be able to reprogram the hard drive firmware. At
that point you have a serious authenticity challenge when booting your OS,
because the hard drive can alter the contents of particular binaries at the
moment they're read from disk. There are some powerful software-only defenses
against this, but if an attacker knows which ones you use, they can probably
design an attack that evades those.
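One such software-only defense can be sketched as: read the binary exactly once, verify it against a digest stored off the suspect disk, and act only on the bytes that were verified (the function name and detached-digest setup are illustrative; schoen's caveat that a knowing attacker can probably evade this still applies):

```python
import hashlib
import hmac
from typing import Optional

def load_verified(path: str, expected_sha256_hex: str) -> Optional[bytes]:
    """Read a binary once and verify it before use.

    Reading once matters: a malicious drive could serve pristine
    bytes to a separate verification pass and altered bytes to the
    loader, so we verify and use the very same buffer.
    """
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if hmac.compare_digest(digest, expected_sha256_hex):
        return data
    return None
```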

------
purpled_haze
Done:

[http://cdn.rsvlts.com/wp-content/uploads/2013/02/1671718-slide-p139-4jpg1.jpeg](http://cdn.rsvlts.com/wp-content/uploads/2013/02/1671718-slide-p139-4jpg1.jpeg)

+

[https://www.youtube.com/watch?v=m98agJUoCck](https://www.youtube.com/watch?v=m98agJUoCck)

------
w8rbt
I've been saying this about password managers for years:
[https://news.ycombinator.com/item?id=9731361](https://news.ycombinator.com/item?id=9731361)

 __ _" The fundamental design flaw of all of these compromised password
managers, keychains, etc. is that they keep state in a file. That causes all
sorts of problems (syncing among devices, file corruption, unauthorized
access, tampering, backups, etc.)."_ __

~~~
curryhoward
There are a few "stateless" password managers worth considering:

1) [http://www.supergenpass.com/](http://www.supergenpass.com/)

2) [https://www.stephanboyer.com/post/101/hashpass-a-simple-stateless-password-manager-for-chrome](https://www.stephanboyer.com/post/101/hashpass-a-simple-stateless-password-manager-for-chrome)

(disclaimer: I wrote #2)
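The common idea behind these tools is deriving each site password from a master secret plus the domain, so nothing ever needs to be stored. A sketch of the general scheme (not either tool's exact algorithm; the iteration count and output format are illustrative):

```python
import base64
import hashlib

def site_password(master: str, domain: str, length: int = 16) -> str:
    """Derive a per-site password deterministically.

    Re-running with the same inputs regenerates the same password,
    so there is no state file to sync, corrupt, or steal.
    """
    raw = hashlib.pbkdf2_hmac(
        "sha256", master.encode(), domain.encode(), 100_000
    )
    return base64.b64encode(raw).decode("ascii")[:length]
```

The trade-off: rotating one site's password means changing the master (or adding a per-site counter), and the master secret becomes a single point of failure.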

------
DonHopkins
A laptop without any state sponsored back doors sounds like a good idea.

I'm also for separation of church and state.

------
transfire
[http://tinycorelinux.net](http://tinycorelinux.net)

------
nickpsecurity
Short on time, but I'll say she's looking in the right direction in general.
This idea has actually been done before: removable firmware parts used to
exist in older machines, too.

_Far as I know_, I came up with it first, in a proposal on Schneier's blog,
to put both the CPU and trusted state on a stick or card you inserted into
a machine containing only peripherals, maybe with RAM. Research CPUs at the
time had RAM encryption/integrity so the RAM could be left untrusted. I was
thinking PC Card rather than a stick due to EMSEC, storage, and cost issues.
I'll try to find the link later today.

It was actually inspired by foreign and airport security compromising
people's equipment. People asked me to develop a convenient solution. So the
real problem was physical access to the trusted components: that access
couldn't be allowed to happen, but we can't keep all our gear with us or away
from inspection. A simple chip or PC Card that people carried on would be
better. The chassis, from laptop to whatever, they could acquire in-country
or ship separately with inspection. I further imagined a whole market popping
up supplying both secure sticks/cards and the stuff you plug them into; the
inspiration for that was the iPod and its accessories, like docks. One more
part was that each user could determine how much protection, from
tamper-evidence to EMSEC, to apply to their trusted device.

As it sometimes happens, another company showed up with government backing
IIRC and R&D on security devices. Their proposed portfolio was very similar.
They undoubtedly started patenting all of it. This created a second risk for
anyone attempting what I or now Joanna is attempting: a greedy, defence-
connected, third party legally controlling pieces of your core business. They
usually just rob people but I predicted on Schneier's blog & later here in a
heated debate that they could attempt to change or get rid of the product
using their patents. Especially true if a proxy for an intelligence agency. We
might have just seen that happen with Apple over iMessage but I can't be sure.
Anyway, do know there's both prior art and probably patents on these concepts
in defense industry.

So, it was a cool concept. It was one of those I was proudest of, given it
collapsed the problem of protecting all kinds of devices down to the design
and protection of one component. That's basic Orange Book-era thinking I try
to remember. Unfortunately, after much debate with marketing types, we
determined there was a chicken-and-egg problem with these [at the time]. The
NRE cost would be high, to the point you'd want to be sure there was demand
for thousands of them plus people willing to pay high unit prices. Custom
laptops were often closer to $10,000 than $3,000 at low volume. My greater
market idea was chicken-and-egg times a million. That, plus the risk of
third-party patents, made me back off the idea as nice but not practical.

Since then, what's changed is dramatically lower cost for homebrew hardware or
industrial prototyping. Projects like Novena show it can probably be done for
lower NRE than before. However, this is security-critical design that needs
strong expertise in both hardware (esp analog/RF) and Intel x86. That will up
the NRE and odds of them screwing up. ARM or MIPS ("cheaper ARM") might be
easier to do but still need HW expert and significant NRE.

So, there's my take. It's a good idea that two of us in the security industry
already fleshed out, with removable firmware being proven in ancient
mainframes. There are serious marketing obstacles to getting this done, and
done securely. A high-level design for the technology, as I did, is pretty
straightforward and will teach one many lessons. It was a good learning
experience if nothing else.

~~~
mjg59
Having the CPU on an external card makes things significantly more difficult -
your connector now needs to break out the entire bus and also be capable of
delivering ~100W, and you need a cooling solution that can handle that without
the benefit of the greater surface area of the laptop chassis. Joanna's
approach is much more attractive in terms of being something that involves
very little modification of existing platforms.

~~~
nickpsecurity
You might be looking at it a bit differently than me, here. What I was
looking at is that the central components (CPU and storage) are on the card.
Power, memory, and peripherals are in the chassis. The card's connectors plug
directly into that. So, if anything, I'm re-creating the old situation in
towers with pluggable CPUs, except it's externally pluggable and the tower is
now a laptop with integrated electronics.

Regardless, it was very important to move the CPU out, given its high chance
of being targeted or subverted. It is literally the root of trust for
computation. I protect it because I assume attackers will be smarter than me
and use it against me somehow. As far as cooling, I admit I didn't think much
about it for the high end: I just decided on efficient CPUs where that wasn't
so much of a problem. Think along the lines of the card computers that need
no cooling but have good performance.

"Joanna's approach is much more attractive in terms of being something that
involves very little modification of existing platforms."

Convenience vs. security: always a tradeoff. I promise you that in physical
security you'll find the more convenient versions will usually get you
screwed, especially if EMSEC or subversion matters to you. I'm holding off on
reviewing the specifics of her work until she finishes it. No promises that I
will, but I'd rather wait for the finished thing, given the nature of this
topic. I'm writing about the general concept, which predates hers on paper
and partly in real products.

~~~
mjg59
> The card's connectors plug directly into that.

In the past a CPU was attached to a relatively low speed bus, and the
peripheral interconnects all came off some external chip. These days you've
got PCIe coming off the CPU package and memory clocks in the GHz range, so the
mechanical aspects of this become massively more inconvenient. Even ignoring
that, once you've got storage and CPU on the card, you've basically got a card
that's a significant proportion of the size and weight of a laptop. At which
point you could just carry the laptop instead.

> Think along the lines of the card computers that need no cooling but have
> good performance.

The attempts on that side (such as the Motorola phones that had laptop-style
docks available) have been complete failures.

> I promise you that in physical you'll find the more convenient versions will
> usually get you screwed

And a solution that's excessively inconvenient will just be ignored.

~~~
nickpsecurity
So, to be clear, you're saying modern processors can no longer be physically
plugged into a motherboard? That the processor and BIOS chip are physically
too big to be isolated into a card-sized container that plugs into such a slot
on a laptop? Everything else, including cooling, could be built into laptop
part. But this critical part is impossible with today's technology and they
all have to be hardwired at manufacturing?

It's strange because my friend's desktop CPU fit into my hand and plugged into
place. That was a year or two ago. If that's no longer possible, though, then
the CPU can't be extracted into its own device and my scheme can't apply.

~~~
mjg59
> you're saying modern processors can no longer be physically plugged into a
> motherboard

In a literal sense, yes - laptop parts are designed for SMT only.

> Everything else, including cooling, could be built into laptop part

The point of this design is to allow users to take their state with them when
they leave a hotel room without having to worry about the rest of the system
being tampered with. You need the removable device to be packaged such that
it's trivially removable, fits in a pocket, and is sufficiently hard-wearing
that it won't be damaged. Your approach would require it to have a several
hundred-pin connector and some means to bind into the cooling design, and
that's an incredibly non-trivial engineering problem.

~~~
nickpsecurity
Appreciate your elaboration. Seems mine is a no-go for mobile, then, if it's
Intel chips and such. Embedded-style computers for the trusted part are still
a possibility. I've seen a card computer put into a laptop as a coprocessor.
Hardwire in a KVM-style switch so the coprocessor can be the main processor
when necessary. It's naturally removable. This lets key stuff be done on the
trusted component, with safe storage, and even checking of untrusted stuff
with whatever techniques are available.

Just gotta have something that does computation & storage that will not lie to
its user.

------
cbd1984
Can this be marked as a PDF?

------
loudmax
I had a system nearly like what the author describes after the internal SSD
on my first-generation Acer Aspire One died. There didn't seem to be an easy
way to replace the drive with a generic SATA SSD (at least none that I was
aware of), and a replacement drive was something like $80. This is for a $200
netbook, and it wasn't a very good SSD to begin with. So I put Knoppix on a
USB stick and basically used that. Since all my stuff is either on my
personal server or in the cloud anyway, it was a workable solution. It worked
reasonably well, and if I went through the trouble of identifying or putting
together a bootable Linux distro with a desktop I really liked, I could
probably live with something like that as a permanent solution.

I'm not nearly as privacy conscious or paranoid as the author, so I'm
satisfied with the convenience of a stateful laptop. I don't even have a
screen lock when it wakes from sleep. If you want to use a stateless machine
like the author describes, you're going to need a personal server or a cloud
provider you really trust to keep your stuff.

Edit: ge0rg had already posted link to non-PDF version.

~~~
spydum
I'm not sure you read the paper? The concern centered on all the
flash/firmware components, such as the SPI flash and WiFi/BT firmware, not
just the OS or hard drive. The idea proposed was to keep the SPI and all
device firmware on a secure USB stick that you always keep with you.

~~~
loudmax
You're right. I read the blog post, then skimmed the actual paper. You can
already do what she proposes at the OS level, but her concerns go much deeper.

~~~
rolandr
It seems like a pretty valid concern. Part of the next generation of rootkits
seems to be SMM-level rootkits (termed "ring -2" by some) that are installed
in the BIOS. They are practically undetectable once installed, and can punch
through hypervisor protections too.

I think that is also part of the author's concern with Intel ME being present
on all systems. It is a separate microcontroller in the chipset that has power
on the level of "ring -3" (I believe it is used to implement much of the new
SGX instruction set, for example).

