It does have some benefits: its process-based model, with various login_* helper utilities (in contrast to PAM's shared libraries), allows some cool things like "sandboxing" with pledge(2), which it appears many of them use.. including login_duress.
I think that’s standard procedure already.
Also, note that destroying or concealing evidence relevant to a court case or legal investigation is a criminal offense in many jurisdictions. You will likely face charges just for trying, even if you are not successful (e.g. because they’ve already imaged the system).
It can still be useful when they don't have physical access to make an image, for instance when the system in question is being remotely accessed via ssh.
> Also, note that destructing or concealing evidence that is relevant to a court case or legal investigation is a criminal offense in many jurisdictions.
It can still be useful when there's no court case or legal investigation. For instance, when you're being illegally threatened to reveal your password.
Bear in mind that law enforcement generally has the ability to go get those too.
> It can still be useful when there's no court case or legal investigation. For instance, when you're being illegally threatened to reveal your password.
You're right. One caveat worth considering: this is probably not a threat most people will ever face.
I'm rehashing your parent's comment, but: unless the command wiped the entire disk, or the entity gained access to unencrypted versions of the disk before/after the duress command ran, an outsider wouldn't be able to tell whether a log had been updated or an entire partition had been wiped.
On the other hand, it's possible to delete an encrypted partition by only overwriting the encryption key, which might be a small-enough change to go undetected.
Disk encryption, unlike other forms, deliberately has a low avalanche factor for small changes: each sector is encrypted independently, because rewriting large swaths of the disk on every small change would be prohibitively expensive. A small plaintext change therefore only alters the ciphertext of the sectors it touches.
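To illustrate that sector independence, here's a toy sketch (not real crypto -- the hash-based keystream stands in for a real tweakable mode like AES-XTS, and all names are made up): flipping one plaintext byte changes only that sector's ciphertext.

```python
import hashlib

SECTOR = 512  # bytes per sector

def keystream(key: bytes, sector_no: int, length: int) -> bytes:
    """Toy per-sector keystream: hash(key || sector number || counter)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + sector_no.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_disk(key: bytes, plaintext: bytes) -> bytes:
    """XOR each 512-byte sector with a keystream keyed by its position."""
    out = bytearray(plaintext)
    for i in range(0, len(plaintext), SECTOR):
        ks = keystream(key, i // SECTOR, min(SECTOR, len(plaintext) - i))
        for j, b in enumerate(ks):
            out[i + j] ^= b
    return bytes(out)

key = b"demo-key"
disk = bytearray(b"\x00" * 4 * SECTOR)   # 4-sector toy disk
before = encrypt_disk(key, bytes(disk))
disk[SECTOR + 5] = 0xFF                  # flip one byte in sector 1
after = encrypt_disk(key, bytes(disk))
changed = [i for i in range(4)
           if before[i*SECTOR:(i+1)*SECTOR] != after[i*SECTOR:(i+1)*SECTOR]]
print(changed)  # → [1]: only sector 1's ciphertext differs
```

(A real mode like XTS is built so this per-sector locality doesn't leak keystream the way a naive XOR scheme would; the point here is only the locality itself.)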
However, it is possible to make a small change (as small as, say, writing the audit log file on a real successful login) that renders data completely inaccessible. Consider an encrypted disk on which you can tell the magnitude of changes on the filesystem, but not which data has changed. Let's say you have a lot (many gigabytes) of sensitive data on that disk. If a successful login triggers the encrypted filesystem to decrypt the contents of the disk using an encryption key (of, say, 4 KB) that is stored only on that disk, then a duress code could simply destroy (or corrupt by randomizing a few bytes) that key, rendering the contents of the disk inaccessible, without writing more than a very small amount of data.
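A minimal sketch of that duress action, assuming the volume key lives in a small file on the disk (the path and size here are made up, and a real implementation would also have to contend with copy-on-write filesystems, journaling, and SSD wear leveling, any of which can leave the old key bytes recoverable):

```python
import os

def destroy_keyfile(path: str) -> None:
    """Overwrite an on-disk key file in place with random bytes.

    Corrupting even a few bytes of the key renders everything
    encrypted under it inaccessible, while writing almost nothing.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # in-place overwrite, same length
        f.flush()
        os.fsync(f.fileno())        # push the overwrite to the device

# demo with a throwaway "key" file (hypothetical path)
with open("/tmp/demo.key", "wb") as f:
    f.write(os.urandom(4096))
destroy_keyfile("/tmp/demo.key")
```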
This fundamentally trades off deniability for data security: the disk would still contain all of the encrypted data and could be brute-forced, but that would be the case anyway if an image had been taken previously.
Of course, situations in which that deniability would be legally well-received are, as others here have pointed out, vanishingly rare.
If you have an encrypted volume, you can use the command 'diskutil apfs eraseVolume' to make the data inaccessible instantly by deleting the encryption key. (Note that the disk passphrase is not the same as the encryption key; the passphrase only unlocks the key, so once the key is gone, even a weak disk password can't be brute-forced to recover the data.)
How would they even be able to determine which files are modified? If we're talking full disk encryption here, you can't tell which files are being accessed/modified, just locations on disk. Without metadata to map blocks to objects they're flying blind.
File encryption does that. Disk encryption means that you don't even know how many files there are, much less which ones were changed. The whole disk is just a blob of random data until the right password is entered.
Don't they also need to know what files are changed by a normal login, so that they can see that the changed set in this login was different from that set?
Comparing an image after a login to an image from before the login gives you a set of changed files, but it doesn't tell you if that is the normal login change set or the duress login change set.
Anyway, if I were setting up a duress login I'd make it so normal and duress login change the same set of files.
When different passwords (for the same user) simply decrypt and give access to different parts of the filesystem, that's not the case.
Simply transferring everything off a 2TB HDD at 100MB/s will take over 5 hours, and that's ignoring any hashing.
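The arithmetic behind that figure:

```python
disk = 2 * 10**12      # 2 TB in bytes
rate = 100 * 10**6     # 100 MB/s in bytes per second
hours = disk / rate / 3600
print(round(hours, 1))  # → 5.6
```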
I could be behind on certain policies, but that would seem to fly in the face of the Fifth Amendment (or is concealment not covered?).
Citizens and permanent residents cannot generally be refused entry for flexing their rights. There are a few ways permanent residents can be denied entry, but the main one is having been out of the country for over 180 days.
Temporary residents and visitors are the ones who can be denied entry for looking at an immigration agent wrong, or trying to flex their right against absurd digital searches.
However much we might wish for things to be different, Europe and Oceania may have a less intrusive policy about digital searches, but everywhere else is either worse than the USA, or isn't developed enough to have paranoid security services that want to search everything.
That's not exactly the same thing as denying a previously-valid password, but it is along similar lines.
Might the following script make sense:
- Delete/change what you don’t want to share
- Change the regular password to the duress password
- Remove login_duress
Any flaws with this approach or improvements?
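Those three steps could be sketched as a dry-run plan like the following (every path, flag, and the `$DURESS_HASH` variable are illustrative assumptions -- check usermod(8) and where login_duress is actually installed before trusting any of it, and note a real duress script must run non-interactively and tolerate failures):

```python
import shlex

def duress_plan(user: str, secret_paths: list[str]) -> list[str]:
    """Return the steps above as a list of shell commands (dry run only)."""
    # 1. delete/change what you don't want to share
    cmds = [f"rm -rf {shlex.quote(p)}" for p in secret_paths]
    # 2. set the duress password as the account's real password
    #    ($DURESS_HASH would hold a pre-computed encrypted password)
    cmds.append(f'usermod -p "$DURESS_HASH" {shlex.quote(user)}')
    # 3. remove the duress mechanism itself (hypothetical install path)
    cmds.append("rm -f /usr/libexec/auth/login_duress")
    return cmds

for c in duress_plan("alice", ["/home/alice/secrets"]):
    print(c)
```

One obvious weakness of any such script: everything in step 1 is an ordinary delete, so the old blocks remain recoverable from the raw device unless the underlying storage is encrypted or overwritten.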
The whole point of this is that the adversary doesn't have access.
This might not be precisely a winning scenario.
1Password already has the ability to wipe for travel, would be good to see more of that.
Specifically, plymouth provides the boot splash screen on many common Linux distros. If I enter the decoy password, it could take some action, like displaying a fake error screen.
I toyed around with making a plymouth theme to this end, but didn't get far. It would be nice to have a theme that offered plausible deniability wrt FDE.
Torture works (in the situations in which it actually works, which are vanishingly few--in no way should this be taken as an endorsement of torture) by creating expectations of future suffering. Torture victims are made compliant over time, so a victim is more likely to have the wherewithal to enter a duress code early in the process than to hold out without cracking after prolonged torture.
If you're thinking that way, and the forces you're up against are too, then the only thing this really accomplishes is potentially protecting the information that reveals your actual comrades. And if you're actually in a situation where protecting information about your comrades and operations is more important than potentially avoiding torture, then you're braver than most of the people on the planet, and good luck - you'll need it.
In some rather narrow cases it might help avoid random searches from a non-malicious threat. (e.g., border guard just proving a point for whatever reason)
If the goal is to recover something from your machine, an invested adversary that is willing to rubber-hose you is going to keep going. And the numbers seem to show that most people will not have the will and calm to use their duress password once they've been tortured into logging in.
You trick them into entering a password that either wipes the data, alerts someone to the attempt, or dumps them into a sandbox that appears normal but doesn't provide the access they want.
What do you think happens next?