You may also like to consider that nearly all modern server systems have an iLO/iDRAC or whatever that can do all sorts of things, and at least one internal USB interface. PCs can have the Intel ME and other horrors. The best you can hope for is that it is only your local intelligence agency that potentially has routine access to your system.
You could check a laptop for malware later by reading out literally every bit of nonvolatile state, including the BIOS and stuff, and confirming that all changes had expected form (to files you meant to work on, etc.). Of course, then you have to trust the equipment you use for that...
A little weird that he ran the experiment. Did he really suspect that malware was routinely getting installed by attackers with physical access to laptops during business travel? If yes, then why didn't someone notice it calling home or whatever?
How does this procedure work for multiday evil maid situations? The first day while you're out the maid replaces your collection of plastic disposable tamper-evident bags with faulty ones that open with a particular chemical but otherwise look identical. The second day the maid tampers with your laptop and you don't notice. Do you just have to take the whole box of additional bags with you everyday? That seems prohibitively inconvenient.
Or the "bag" can be the laptop's existing case. You can put seals (stickers, or the sparkly nail polish trick mentioned below) over all the fasteners and seams of the laptop, fill all the non-power ports with epoxy, etc. None of these make tampering impossible, but they can make it uneconomic.
I don't think anyone at serious risk of these kinds of attacks lets computers out of their physical control. I've seen agencies that do the seals/epoxy even for computers inside their secure facilities, presumably to give their guards more time to catch an inside tamperer.
The most sensible precautions seem to be a) full-disk encryption with a strong passphrase, b) hardware 2fa using a token that is stored separately from the computer, c) physically securing the machine whenever possible and d) tamper-evident seals covering screwholes or seams.
If your adversary is capable of beating these precautions, you're probably screwed anyway.
I suppose a large amount of the problem could be solved just by taking checksums of all non-volatile memory on the device. However, that doesn't catch, for example, hardware keyloggers inserted without your consent, so a thorough evaluation of the hardware would also be necessary. And even that still doesn't tell you whether somebody has simply copied data off your device - so maybe in that case you need something which physically marks the device if the hard drive is removed and presumably accessed outside your computer, like those dye traps they use in banks and when transporting money.
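The checksum idea is simple enough to sketch. A minimal version in Python (file names are illustrative; the raw dumps themselves would have to come from tools like dd or flashrom, run from an environment you trust):

```python
import hashlib

def hash_image(path, chunk=1 << 20):
    """SHA-256 of a raw dump (disk image, BIOS flash dump, etc.).
    Reads in chunks so multi-GB images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Usage sketch: record a pre-travel baseline, compare after travel.
# baseline = hash_image("bios_before.bin")
# assert hash_image("bios_after.bin") == baseline, "firmware changed!"
```

Of course, this only proves the dumped bits are unchanged; it says nothing about hardware implants, and it assumes the machine doing the dumping isn't itself lying to you.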
I doubt it, but his job is to suspect all sorts of things. If you are going to attempt to quantify risk then some experimentation is in order rather than simple speculation. As to "notice it calling home", it is surprising how much is missed. For example, Meltdown and Spectre were predicted many, many years ago ...
Tens of millions of laptops have been exposed to at least as much evil maid opportunity as the author's.
However, do you have a decent citation for that assertion?
Air passengers make about 3B trips per year. A laptop lasts about three years. I said "tens of millions", so if I'm right then we have at least 16 * 20M exposures on existing laptops. That would mean at least one passenger in 28 travels with a laptop and is as careless as the author was.
That seems high to me--like, it's not too common to check your laptop (unless you were flying from an Arab country last year...). On the other hand, that ignores opportunities before the laptop's first retail sale. Those seem more attractive to me--more time to work, less diverse hardware, etc.--and almost every laptop sold is exposed that way.
So my comment above was probably too flip. His experiment still seems pointless to me, though.
Do you have URLs to products that are big enough for say 17'' laptops?
It builds up a concept of "Colour" as describing information about a thing (distinct from metadata / tagging) which is not necessarily derivable from the thing itself. Most frequently it uses the term to describe provenance, but is careful not to limit the concept. To quote ansuz's essay above in relation to the linked article:
When we use Colour like that to protect ourselves against viruses or malicious input, we're using the Colour to conservatively approximate a difficult or impossible to compute function of the bits. Either our operating system is infected, or it is not. A given sequence of bits either is an infected file or isn't, and the same sequence of bits will always be either infected or not. Disinfecting a file changes the bits. Infected or not is a function, not a Colour. The trouble is that because any of our files might be infected including the tools we would use to test for infection, we can't reliably compute the "is infected" function, so we use Colour to approximate "is infected" with something that we can compute and manage - namely "might be infected". Note that "might be infected" is not a function; the same file can be "might be infected" or "not (might be infected)" depending on where it came from. That is a Colour.
Once you've left your computer alone with a potential adversary, it has the might-be-compromised Colour. Proving whether it definitely has or has not been compromised is easy for devices which do not have this Colour, but as described in the linked-to article, very difficult or impossible once it has this Colour.
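The Colour idea amounts to a tiny taint-tracking scheme. A toy sketch in Python (the names here are mine, not from the essay) showing how identical bits can carry different Colours:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Coloured:
    """A value tagged with a provenance Colour. The Colour is NOT a
    function of the bits: identical payloads can carry different Colours."""
    payload: bytes
    might_be_compromised: bool

def combine(a: Coloured, b: Coloured) -> Coloured:
    # Colour propagates conservatively: anything that touches a
    # might-be-compromised input is itself might-be-compromised.
    return Coloured(a.payload + b.payload,
                    a.might_be_compromised or b.might_be_compromised)
```

Two files with byte-identical payloads can differ only in Colour, which is exactly why no function of the bits alone can compute it.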
Let's be honest here. None of the more cutting edge attacks are going to be risked attacking as hard a target as this guy. The level of sophistication of attack the author is starting to reach is going to be reserved for state-level persons-of-interest.
Espionage is a game of judging capabilities, and cracking some security researcher's laptop telegraphs to the rest of the world that you can. As a national actor you don't actually WANT to flex your spy muscles in obvious ways unless the payoff is JUST THAT CRITICAL. It removes the veil of the unknown, and gives potential adversaries/persons-of-interest that much better a chance of successfully applying tradecraft to hide what you actually want to monitor, because they have more accurate knowledge of what your capabilities are. Contrary to popular belief, most organizations capable of pulling off an evil maid attack simply won't, because of the revelation of capability already mentioned, and the PRISM problem. Too much information/access in general lends itself to becoming useless due to the difficulty of separating the tasty bits from the mundane.
Kudos to the guy for actually trying the experiment, but it doesn't really tell anyone anything we didn't already know 20 years ago.
Computers are inherently insecure. Every form of "security" is insecure at some point. Computers haven't changed anything except for making a person's computer a juicy target to get some juicy financial information/passwords for non-state actors, or making surveillance potentialities so much more horrifying on account of the ubiquity of networked cameras, sensors, and microphones on the ground waiting to be exploited.
Forget about laptop evil maid attacks. Start thinking about the ticking time bomb of 'poisoned' hardware rife with 'tailored access' whereby state actors can push a button and have every device with a camera/microphone within a certain set of GPS coordinates start silently acting as an input sensor. Combine that data stream with the right neural networks, and you'll see a world that no one in their right mind wants, but is well within our manufacturing capabilities to create.
Or stop worrying, go outside, and make a friend. It's way better for your mental health.
Even if you built all the binaries from scratch from the official repos, you'd still be at risk of security bugs like heartbleed, or a compromised compiler.
In the end, I think security is always a numbers game. Someone can always get to your protected resources, it's just a matter of how much the attacker wants it.
It's easier to attack a resource than defend it.
They admit that implementing their fix could take time and money. But without it, their proof-of-concept is intended to show how deeply and undetectably a computer’s security could be corrupted before it’s ever sold. “I want this paper to start a dialogue between designers and fabricators about how we establish trust in our manufactured hardware,” says Austin. “We need to establish trust in our manufacturing, or something very bad will happen.”
Experiment: after having gone through a number of - some meaningless - attempts to be able to prove that this happened, there was no evidence it happened.
Doubt: did it happen nonetheless without leaving any trace,
or did it not actually happen at all?
Bonus: the experimenter learned that NVRAM exists in the stupid UEFI firmware
Conclusion: None worth mentioning, but be very aware of what the terrible evil maids can do, and do use the recommended Android app to defend against them.
Hashing a whole hard disk is only a "positive" proof: if the hashes match, nothing changed. But it is very possible for the hashes to change because of any filesystem or disk issue if the system is used, so the method is pointless in the real world, where people bring a laptop with them in order to use it.
This is known to be true; this experiment was about seeing if anyone would actually access this laptop. Which also addresses what you view as meaningless: in real-world scenarios people are trying to avoid having their laptop compromised, while the author was hoping that it would be.
The "experiment" has too few data points to be meaningful, and the proposed way to verify remains meaningless, two simple cases:
1) the evil maid simply makes a forensic image of the disk
2) a sector in the hard disk goes bad
Case 1: there was an intrusion, all the data was stolen, but the hashes do not show that (false negative)
Case 2: there was NOT any intrusion, but the hashes show that there was a change (false positive)
This isn't science, we know this is possible and the "experiment" was to try and find examples of it happening.
A false negative is always assumed, it is impossible to know you haven't been compromised. A false positive is meaningless as finding a change is only the first step. You then need to analyze what the change is, and if you can't pin down what has been compromised you're just back to the default state of unaware.
This is a honeypot. If you leave your honeypot and return to an empty one, you're pretty sure a bear is around but can't do anything. If you find a bear with their paws in the pot, you don't need to run the experiment again to prove there's a bear.
That boots using an unencrypted /boot partition, but everything else runs on LUKS (one big partition, LVM'd down). I have a VeraCrypt partition for files that I want to work on from both operating systems. Works really well, encrypted disks don't materially impact performance, and it gives peace of mind.
The most likely scenario for theft is someone after the hardware, and they'll not spend much effort trying to break into the file system.
I'd be wary if the machine was stolen and then returned, but restoring mbr & /boot partition should be sufficient in that instance.
I've travelled to regions that I considered dubious, if not especially technically sophisticated. I haven't done this myself, but research suggested the best way of confirming your laptop hasn't been opened is to use a sparkly nail varnish. Dab a small amount on some or all of the case screws, take a close-up photo, and store that photo somewhere safe. After the event, take photos of the screws again and compare. The random patterns are effectively impossible to replicate.
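If you wanted to automate the comparison rather than eyeball it, here's a rough sketch, assuming you've already decoded both close-up photos to aligned grayscale intensity grids (alignment is the hard part; in practice carefully comparing the photos by eye is probably more reliable):

```python
import math

def glitter_rms_diff(before, after):
    """Compare two grayscale intensity grids (lists of rows of 0-255 ints)
    of the same sealed screw, taken pre- and post-travel.
    Returns the RMS pixel difference; near 0 suggests the pattern is
    unchanged, large values suggest the seal was disturbed."""
    assert len(before) == len(after) and len(before[0]) == len(after[0])
    n = len(before) * len(before[0])
    mse = sum((p - q) ** 2
              for row_b, row_a in zip(before, after)
              for p, q in zip(row_b, row_a)) / n
    return math.sqrt(mse)
```

Any threshold you pick would have to account for lighting and camera-angle differences between the two photos, which is why the random glitter pattern, not the software, is doing the real security work here.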
Combined with disabling USB booting, and BIOS admin password, and keeping the OS in sleep -- it should be possible to prove your laptop hasn't been hacked via physical intrusion.
This: https://www.linuxjournal.com/content/take-control-your-pc-ue... will also help in keeping the evil maid (sexist) out.
As to your last assertion: you can't really prove it 100% but you could at least satisfy your risk assessment.
So: Your user data is on the HDD and encrypted AND you use a removable disc to boot your machine AND you have a "something you know" (password)
That looks quite secure to me, provided you look after your removable disc and password. I'm not familiar with IBM gear - is ExpressCard a removable disc? I tried to read the WP page on it but got confused.
I have one of these for my laptop - Dell Inspiron 17. It runs Arch Linux. I don't trust it at all (I'm CREST accredited) but I still use it.
I agree my setup is probably pretty secure, but not any more so than a single-OS install with FDE, especially since I often leave an OS ExpressCard in the machine and the other ones scattered around my desk...
In reality, as the article explains, the windows partition is basically invulnerable to this class of attacks if you take the 5 minutes to enable bitlocker. OTOH Linux systems have no effective defense.
Code licensing or copyright status isn't a form of security.
The issue isn't just that the remote server's code is impervious to scrutiny. A locally installed program that you can reverse engineer isn't automatically trustworthy because it is open-source, or even copylefted. Someone actually has to reverse engineer the binary and prove that it matches the source code. Many users of free software trust upstream binaries. (Even if they compile their own programs, they trust compiler binaries at some point.)
My primary concern - I should perhaps have spelled it out more clearly - is that the Windows partition is likely exploitable via Microsoft Windows.
Detecting data at-rest exploits such as described in TFA, and per my mitigation suggestions -- because they don't scale well -- implies that you're already of interest to your adversary.
> The adjective trusted, in trusted boot, means that the goal of the mechanism is to somehow attest to a user that only desired (trusted) components have been loaded and executed during the system boot. It's a common mistake to confuse it with what is sometimes called secure boot, whose purpose is to prevent any unauthorized component from executing.
(Search terms used: "secure boot linux" and "secure boot macbook")
Rather less likely to be lugging that through customs :)
Perhaps the author meant there was no 'universal' Linux implementation, however it's been available for a while in certain distros.
Right now every national "security" agency (USA, China, UK) is racing to create a truly comprehensive suite of tools to monitor its citizens en masse. Exploits for every router, iPhone built-in backdoors, etc. Pretty much anything that would give the government access to the most intimate details of your life. With the current political climate it's just going to get worse.
If you care about your privacy AND security, become informed and vote for privacy advocates. Visit fightforthefuture.org and eff.org to learn more.
DISCLAIMER: I am in no way affiliated with either of these foundations or their members.
So that's not a particularly viable option.
(Difficult to prove, natch, but possible.)
We know that the NSA intercepts machines for modification. And it's possible that hardware is generally backdoored. Maybe even by Chinese manufacturers.
But what can one do, if everything is pwned? It's not practical to build machines from transistors etc. There are dreams of open-source hardware. But how could that even be done securely? The NSA can plant agents anywhere, in theory.
I'm increasingly of the view that it's not, at least not through individual action.
My interest is, for first strokes, painting an accurate picture of the landscape. Which means discarding inaccurate models and frames.
Among those: that laying low is possible, or a positive (that's precisely the objective of the Panopticon, and self-censorship and -regulation are the most efficient), or that individual rather than collective action is appropriate.
It also seems that surveillance itself faces various realities and economies, which can be directly attacked.
A much smarter approach in my opinion is to live on two separate layers: a secure private underground and an uninteresting public surface.
This not only keeps the enemy content, it also keeps tracking low; any open confrontation will necessarily lead to harsher measures, which ultimately means violence.
If samizdat worked in the USSR it can work nowadays too.
Yes. And never connect layers. Mirimir has no meatspace contacts. Also, one can have more than two layers.
This is what I call "defense by presumed motive" and is flawed.
For practical security it is also important to have some physical things on the laptop body that allow you to identify your hardware. Otherwise somebody will just replace it with their own hardware to collect your password. Obviously pretty much anything can be replicated, but absolute security is anyways impossible to achieve so you can only try to make things harder for them.
With regards to my checked luggage - no electronics there - when traveling to/from/in the US, I always save those 'Inspected by TSA' placards, and place one prominently atop my clothes prior to closing and locking my bag.
Based on various physical telltales I utilize, the success rate of placing a used 'Inspected by TSA' placard in one's bag to deter searches is 100%, at least in my experience.
Since I started doing this, I haven't received any new 'Inspected by TSA' placards, either. So, that's another indicator of the technique's probable success rate.
That parenthetical is an important and almost always unstated axiom.
The general inability to prove a false statement does not mean you cannot prove that the answer to some equation is a number below zero. I am not really aware of the phrase being used in the context of math, but rather more often with examinations or experiments that are susceptible to evidence.
To be sure, "you cannot prove a negative" is itself unproven. It's more a rule of thumb to remind you not to assume that, because some statement is false now, it always was false and always will be false.
It's not perfect, but it's also not a law of logic or anything. It's just a guideline.
On a side note, Fermat's Last Theorem has in fact been proven (by Wiles, in 1995), so it arguably works as a counterexample: a negative statement that eventually got a proof.
Both in math and in the real world we don't have 'perfect observation'. There are plenty of conjectures in math and the real world that lack a proof of being either true or false.
I think "can't prove a negative" is one of the least informative ways of trying to say something, I assume he meant to say "absence of evidence is not evidence of absence" or perhaps "absence and evidence don't commute"
You know things are bad when people are annoyed by an operating system they don't even use!
Edit: the case could have additional logic and wireless charging for power.
What the author actually forgot to do was to add some honey into the honeypot. I.e. become an attractive target.
Seems like an experimental bias to me
Nevertheless, as his experiment has shown, that was not enough.
Precisely. All the author had to do was use his best broken English and pretend to be a member of The Shadow Brokers.
The article would have been far more exciting had it involved speculating which three letter agency compromised the laptop the most, or the finer points of writing an article while being physically hunted by a snatch and grab team.
If you would find this more exciting, there's no shortage of fiction already available on similar subjects. This article was about detailing the current risks, and the author's attempt to catch the attack in action. Not as exciting, but important information for people that may be at risk of a similar attack.
>If you would find this more exciting, there's no shortage of fiction already available on similar subjects.
I mean, trying to impersonate TSB as a way to attract attention from intelligence agencies is insane on the face of it.
The article was great; no qualms with it whatsoever.
On most Linux setups, that system is the initramfs--if you've ever installed Arch or similar, this is what the `mkinitcpio` step generates--and if you peek in your boot partition, it'll probably be named something like `initramfs-linux.img`.
The initramfs is an (often gzip-compressed) ramdisk image for a full-blown tiny Linux system, complete with its own set of coreutils (if you want to see what it contains, run `lsinitcpio -x` on it). It's what handles your early boot process: setting up your keymaps, mounting disks, and of course decrypting encrypted partitions.
By unpacking, modifying, and repacking the initramfs, it's possible--even trivial--to run whatever code you want as root, or intercept the user's encryption password when they type it in to do the type of conventional unencrypted backdooring you have in mind.
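A defensive counterpart, sketched in Python (paths and usage are illustrative, and it only helps if you run it from an environment you trust, such as a live USB, since a compromised kernel can lie about file contents):

```python
import hashlib
import os

def snapshot(boot_dir):
    """Record SHA-256 of every file under the (unencrypted) boot
    directory - kernel image, initramfs, bootloader config - while
    the machine is still trusted."""
    manifest = {}
    for root, _dirs, files in os.walk(boot_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                manifest[path] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def changed(boot_dir, manifest):
    """Return paths added, removed, or modified since snapshot()."""
    now = snapshot(boot_dir)
    added_or_removed = set(manifest) ^ set(now)
    modified = {p for p in set(manifest) & set(now) if manifest[p] != now[p]}
    return sorted(added_or_removed | modified)
```

A repacked initramfs shows up as a modified file; a freshly created /boot shows up as added files. The manifest itself obviously has to live somewhere the evil maid can't reach, like the SD card scheme described below.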
But after doing a standard LUKS install, you can move /boot to an SD card. You can also backup the LUKS header to the SD card, and wipe it from the system.
Now the machine cannot be booted without the SD card. After restoring the LUKS header. And even if an adversary creates a new /boot on the machine, you can check for that, and nuke it before booting from the SD card.
If you're detained, you can just chew up the SD card and swallow it. Maybe a little hard on the teeth, but hey.
But of course, that SD card must never leave your body. Except that you probably want to hide copies somewhere. In case you lose it, or whatever.
I think I remember reading a story recently about Thunderbolt or maybe USB being connected to an Option ROM over PCIe (must have been Thunderbolt, I guess) that allowed an attacker to simply plug in a USB stick and permanently and irrevocably pwn the system - right down to closing the very flaw that allowed flashing of the ROM over the PCIe connection. I think the malware cleared some bit that permitted any further writing, so even attaching a physical chip-flashing device to the ROM wouldn't clear the malware. The machine was effectively permanently compromised and could only be thrown away.
*Probably not, going by history.
There are some nice tips here on spray glitter on a seal and nail-polishing it, then taking a photo. That way, anyone that breaks the seal has to reproduce the same glitter pattern.
* [the sum S of two even numbers A1 and A2] is an odd number
Here is its negation:
* [the sum S of two even numbers A1 and A2] is not an odd number
* [the sum S of two even numbers A1 and A2] is an even number
Let's prove the negative:
* A1 is even => there exists an integer a1 such that A1 = 2a1
* A2 is even => there exists an integer a2 such that A2 = 2a2
* by substitution, the sum (A1+A2) = (2a1 + 2a2)
* by distributivity: (A1+A2) = 2(a1+a2)
* the sum of the integers a1, a2 is an integer: a1 + a2 = s, with s an integer
* by substitution: (A1+A2) = 2s with s an integer
* S = (A1+A2) = 2s, hence S is even
* an even number is not odd
* hence [the sum S of two even numbers A1 and A2] is not odd
We proved a theorem that was also a negation of a statement!!
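The same derivation can be checked mechanically. Here's a sketch in Lean 4 (assuming a recent toolchain where the `omega` tactic for linear integer arithmetic is available), with "even" spelled out as ∃ k, n = 2 * k to mirror the steps above:

```lean
theorem even_add_even (A1 A2 : Int)
    (h1 : ∃ a1, A1 = 2 * a1) (h2 : ∃ a2, A2 = 2 * a2) :
    ∃ s, A1 + A2 = 2 * s := by
  -- unpack the witnesses a1 and a2 from the evenness hypotheses
  obtain ⟨a1, ha1⟩ := h1
  obtain ⟨a2, ha2⟩ := h2
  -- s := a1 + a2; omega closes A1 + A2 = 2 * (a1 + a2) from ha1, ha2
  exact ⟨a1 + a2, by omega⟩
```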
If the ring gets stolen by someone breaking in via the window then you know it has gone, but you do not know whether the thief, say, changed the locks in some way. Now they can come and go with impunity.
If you find some media that rises to this level, please let me know about it, because I thought it was excellent.