The UK law requires that you make the encrypted data intelligible. Since you have encrypted data there's a pretty good chance you have the software to decrypt it. "They" don't want the password, they want the data.
Failing to make the data intelligible (whether by failing to provide the passphrase or otherwise) carries a two-year prison sentence, rising to five years in national-security cases.
tl;dr: this does not stop law enforcement, since they can demand the decrypted data rather than the password.
Also, using this to guard against rubber-hosing is stupid. People prepared to use torture will do so, regardless of any laws against it or whether it will yield useful evidence.
Extremetech articles are really lousy. The author self-posting a poorly written article is one problem; the heavy ad load is another; and I find it hard to believe there isn't a vote ring upvoting these lousy articles.
A quick glance shows that about 90% of Mrs Ebastian's submissions are to articles they've written for their employer.
I think two or three ads per page is pretty good. I have seen some tech sites with much more than that. (As you probably know, running a free site that makes money from ad revenue is pretty tough at the moment, and isn't getting any easier.)
Apologies if you find the stories lousy. I try my best to dig up interesting stuff. Obviously the quality of the reporting isn't as good as if a professional cryptographer/material scientist/engineer etc wrote it -- but... I do the best I can :)
Um, not mentioning the "details" thankfully supplied by this comment  is quite a big omission.
The topic is super interesting, but it's hard to tell whether the reporting is accurate.
One can torture you until you start attempting to input the password, then recover it from your neurological pathways. A password is a password; it doesn't matter how you're storing it, because it can be retrieved.
You can't produce the password. You can only subconsciously recognize it.
From the original paper:

> Further complicating the attacker's life is the fact that subjecting a person to many random SISL games may obliterate the learned sequence or cause the person to learn an incorrect sequence, thereby making extraction…
Having said all that, I agree it's a valid risk.
But it's extreme. Other risks are interesting to look at.
The game produces one output stream mixing one login sequence (unique to each player) with random data. Recording many streams would therefore seem to make the real sequence recoverable; after that, you just need to simulate the player's response to that input. And acoustic analysis is established technology now, having had considerable investment and research because of its military applications: recording the sound of keystrokes (on typewriters, some printers, computer keyboards) can produce accurate transcripts of what has been written.
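To make the recording attack concrete, here's a toy Python simulation (the secret, filler lengths, and session counts are all invented): counting trigram frequencies across a handful of recorded streams makes the planted sequence's trigrams stand out from the random filler.

```python
import random
from collections import Counter

rng = random.Random(0)
ALPHABET = "sdfjkl"
SECRET = "sdfjklsfkdljsdkfjlkdsjflksjdfl"  # hypothetical 30-symbol trained sequence

def record_stream(rounds=5, noise_len=60):
    """One recorded session: the player's fixed secret repeated several
    times, padded with random filler symbols."""
    out = []
    for _ in range(rounds):
        out += rng.choices(ALPHABET, k=noise_len)
        out += SECRET
    return "".join(out)

# Across many recordings, trigrams that belong to the secret recur far more
# often than trigrams that occur only by chance in the filler.
counts = Counter()
for _ in range(20):
    s = record_stream()
    counts.update(s[i:i + 3] for i in range(len(s) - 2))

top = [t for t, _ in counts.most_common(28)]
planted = {SECRET[i:i + 3] for i in range(len(SECRET) - 2)}
recovered = sum(t in planted for t in top)
print(f"{recovered}/28 of the most frequent trigrams come from the secret")
```

With these made-up parameters the secret's trigrams dominate the frequency table almost completely; a real attack would additionally need to reorder the recovered trigrams into the full sequence.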
Having said all that, I am glad that there are people researching this stuff. It's a bizarrely under-researched gap in security.
And the underlying idea seems reasonable enough. I have a few passwords that I can enter if I'm in front of my keyboard, but give me a different keyboard and I'd struggle.
> The proposed system is designed to be used as a local password mechanism requiring physical presence. That is, we consider authentication at the entrance to a secure location where a guard can ensure that a real person is taking the test without the aid of any electronics.
>
> We note that physical presence is necessary in authentication systems designed to resist coercion attacks. If the system supported remote authentication then an attacker could coerce a trained user to authenticate to a remote server and then hijack the session.
If the attacker does not, you'll simply ask for help as soon as you're there.
If the attacker wants to impersonate you, a photo check works as well and is much faster.
The authors and the news coverage claim this offers some sort of rubber-hose defense, but the only scenarios described are either contrived or duplicate more proven techniques (e.g. duress codes, biometrics).
Yeah, it's not described too well in the article, but I think there are some uses for this.
Or maybe it's only accessible from secure locations; you don't go to this much trouble to secure day-to-day stuff.
It's not unbreakable, but the idea that you're letting someone in based on statistical analysis of their conditioned response to a game, mistakes included, is clever; there's no reason you couldn't lock them out or trigger other security measures on failure to meet that response.
However, if you're in a situation where someone is trying to forcibly extract access from you, there's a good chance that a stressed state of mind would be reflected in variations in how you play, which the system could notice.
Even if it's limited to a specific secured location, though, you still have to worry about keyloggers, which could be used to mount a replay attack without your ever knowing.
I'm not sure how one would apply this method to encryption.
more likely Mr Sebastian ... or was that intentional?
(It is Mr Seb though. Today, anyway.)
"Threat model. The proposed system is designed to be used as a local password mechanism requiring physical presence. That is, we consider authentication at the entrance to a secure location where a guard can ensure that a real person is taking the test without the aid of any electronics"
Many of the comments here assume this is directly applicable to protecting a remote system, such as logging in to a website. Perhaps with adaptation it could be a useful technique for website authentication, but as far as I know no authentication scheme can protect against an intruder with a gun to your head forcing you to log in. Instead, the use case here is to prevent someone who has stolen your ID badge and forced you to give up your PIN from getting into the top-secret bunker.
1) Tell me who you are, so I can load up your secret 30-character "password" from some database (the fact that this needs to be stored in a retrievable form makes this entire system insecure)
2) Here's one random sequence of 30 characters. Look at it for a little bit, ok now try to reproduce it from memory.
3) Repeat several times (not stated how many).
4) One of those attempts was your specific password, let me check to see if you did significantly better at it than the other (random) ones.
EDIT: Upon re-read, it sounds like 2-4 are a bit different:
2) Play a long sequence of characters "Guitar Hero"-style. The computer will slip in the true password and watch to see if you do better on that section.
Still storing the password in the clear and still susceptible to being watched several times and finding the "common" sequence.
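If that reading is right, the accept/reject decision in steps 2-4 boils down to a statistical comparison of performance across blocks. A toy sketch, with entirely made-up error probabilities:

```python
import random

def misses(seq_len, miss_p, rng):
    """Number of missed cues while playing one block of the game."""
    return sum(rng.random() < miss_p for _ in range(seq_len))

def authenticate(rng, trained_p=0.10, untrained_p=0.35, seq_len=30):
    # One block uses the user's trained sequence, two are random decoys;
    # accept only if the trained block was played strictly best.
    errs = [misses(seq_len, trained_p, rng),
            misses(seq_len, untrained_p, rng),
            misses(seq_len, untrained_p, rng)]
    return errs[0] < min(errs[1:])

rng = random.Random(0)
rate = sum(authenticate(rng) for _ in range(1000)) / 1000
print(f"legitimate-user acceptance rate: {rate:.2f}")
```

The miss probabilities (10% on the trained sequence, 35% otherwise) are invented; the real system would need per-user calibration, and an impostor who plays all three blocks equally well is rejected by construction.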
Maybe there's a more obscure use case, but this seems more like a cool experiment on human memory than a practical cryptography tool.
Several times I've been drinking and found myself unable to remember how to log into my machine, because I can't replicate the pattern and don't remember the password. After 15 minutes of concentration it comes back.
I guess it's only worth giving your mental quirks a name if, say, they're 2σ above norm. I dunno, ask your doctor (or statistician :P).
(Neat trick, but reversible password encryption still seems like a massive flaw here...)
A few weeks ago it was late, I'd just come from the gym and not eaten anything and I couldn't figure out why my PIN wasn't working. Turned out I was trying to use a code that I stopped using a couple years ago.
Also, I get into a stateful kind of memory: if I'm at work and have to produce a password I normally use at home, I produce the wrong password.
Surely for this system to help in allowing you to plausibly say that, you'd have to reference this system (or equivalent) and demonstrate that it is indeed used for the authentication the police want access to. And in that case, surely the police could just say "in that case, please authenticate for us"?
The real problem is that the device stores the password, so the real defence is the tamper-resistance of the device, not whether you can be tricked or coerced into outputting the sequence.
"Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress."
No. Of course not. What this system provides is a unique extra method of authentication. I really doubt this is meant to go on your laptop in place of a password scheme, but you might use something like it as part of multi-factor authentication, e.g. into a secure facility. Remember all those movies where somebody's eyeballs are removed/replaced/copied to fool a retina scanner? I can't comment on how plausible that is, but I can say that if it had been this system, they could not have broken it, period. I think that's pretty useful, don't you?
As a trivial example: this system assumes a single attempt in a guarded facility. What benefit does this offer over a duress password which our poor hostage provides knowing that it will trigger a full security response and locking out of their access? For that matter, why not have the same guard who looks for tricks check your face against the employee database?
Even if not perfectly resistant to all kinds of coercion, or ideally strong in an information-theoretic sense, its weaknesses lie along different dimensions from those of more traditional systems. It is thus suggestive of other potential directions in the design space, leveraging other aspects of human memory/behavior.
It bears some similarity to systems which add the timing of a person's typing as an added authenticating factor.
If they'd published it as a minor curiosity suggesting an area for future research there'd be far less backlash.
> Before running, the game creates a random sequence of 30 letters chosen from S, D, F, J, K, and L, with no repeating characters. This equates to around 38 bits of entropy
So that's 6 choices for the first character and 5 choices for each of the next 29, which gives log2(6*5^29) ≈ 70 bits of entropy. Does anyone know where this 38-bit figure came from?
> We only use 30-character sequences that correspond to an Euler cycle in the graph shown in Figure 2 (i.e. a cycle where every edge appears exactly once). These sequences have the property that every non-repeating bigram over S (such as ‘sd’, ‘dj’, ‘fk’) appears exactly once. In order to anticipate the next item (e.g., to show a performance advantage), it is necessary to learn associations among groups of three or more items. This eliminates learning of letter frequencies or common pairs of letters, which reduces conscious recognition of the embedded repeating sequence.
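Following that Euler-cycle restriction, both entropy figures can be checked numerically. The circuit count via the BEST theorem is my own reconstruction of where the 38-bit figure comes from, not something stated in the paper:

```python
from math import log2, factorial

# Unconstrained: 6 choices for the first key, 5 for each of the
# remaining 29 (no immediate repeats).
unconstrained_bits = log2(6 * 5 ** 29)
print(round(unconstrained_bits, 1))  # → 69.9

# Restricted: Euler circuits of the complete digraph on the 6 keys
# (every ordered pair of distinct keys used exactly once, 30 edges).
# By the BEST theorem: circuits = arborescences * prod((outdeg - 1)!),
# with 6^(6-2) = 1296 spanning arborescences and outdegree 5 everywhere.
euler_circuits = 6 ** 4 * factorial(4) ** 6
print(round(log2(euler_circuits), 1))  # ≈ 37.8, i.e. the article's "38 bits"
```

So the Euler-cycle restriction alone cuts the keyspace from ~70 bits down to roughly 38, matching the quoted figure.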
Also, the paper assumes the physical presence of a live human at some terminal for authentication. Once you can make assumptions about who is operating your authentication system, biometrics seem far faster and more reliable. Both those limitations, however, could change with further research.
Does not compute. If there is a mechanism by which you can authenticate, you can be coerced into authenticating through that method.
The paper covers this of course:
> Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress.
I take issue with the article suggesting it's completely resistant to coercion. A system that detects duress is interesting, I guess, but seems like a stretch.
>This equates to around 38 bits of entropy, which is thousands/millions of times more secure than your average, memorable password.
Really? Playing around with KeePass briefly, this seems comparable to a 6-character password that includes upper, lower, numeric, and special characters. I wouldn't consider that very strong. And that's besides the fact that you're apparently not entering the password exactly, but only (if I'm understanding correctly) "good enough".
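A quick numeric sanity check on that comparison (the printable-ASCII alphabet size of 94 is an assumption about what the generator draws from):

```python
from math import log2

# 6 characters drawn from ~94 printable ASCII characters:
print(round(log2(94 ** 6), 1))  # → 39.3 bits, same ballpark as 38

# For reference, 8 random lowercase letters land almost exactly there too:
print(round(log2(26 ** 8), 1))  # → 37.6 bits
```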
> creates a random sequence of 30 letters chosen from S, D, F, J, K, and L, with no repeating characters. This equates to around 38 bits of entropy
Which is not so bad for certain applications, but it certainly isn't the 180+ bits you'd have in a truly random 30-character password.
I wonder what applications they have in mind where this password system could be used.
Only this time you'll have to log-in/decrypt on the spot rather than cough up your password.
It may be interesting research, but it certainly won't help with that issue (maybe with the habit of writing passwords down).
And one odd thing may happen: you might have no clue what your password is and be unable to write it down, yet still need to look at the sequence to remember it! (Piano players may identify with this.)
Conclusion: Interesting psychological experiment, not actually backed by any appreciable crypto knowledge.
Edit: disregard my NIST comment, someone linked the paper used to get the 38 bit figure, http://bojinov.org/professional/usenixsec2012-rubberhose.pdf.
A better argument against this system would be one that addresses human usability and unnecessary cost/complexity.
Further arguments include the high overhead of learning (not to mention changing) a password, the storage of passwords, and the fact that your password isn't summonable on demand.
> Authentication requires that you play a round of the game — but this time, your 30-letter sequence is interspersed with other random 30-letter sequences.
> If the attacker is allowed multiple authentication attempts — iterating the extraction and test phases, alternating between the two — then the protocol may become insecure. The reason is that during an authentication attempt the attacker sees the three sequences k0, k1, k2 and could memorize one of them (30 symbols). He would then train offline on that sequence so that at the next authentication attempt he would have a 1/3 chance of succeeding. If the attacker could memorize all three sequences (90 symbols), he could offline subject a trained user to all three sequences and reliably determine which is the correct one and then train himself on that sequence. He is then guaranteed success at the next authentication trial.
>
> We note that this attack is non-trivial to pull off since it can be difficult for a human attacker to memorize an entire sequence at the speed the game is played.
But it sounds like the system is designed to only give an attacker one trial (notionally opening a trap door under his feet if he fails even once), and it does seem much more secure in that context.
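Assuming a fresh 1/3 chance per attempt (the attacker memorizes and trains on one of the three sequences each time, and attempts are independent, which is my simplification), success compounds quickly over repeated trials, which is exactly why limiting the attacker to one attempt matters:

```python
# P(success within n attempts) = 1 - (2/3)^n under independent attempts.
for n in (1, 2, 5, 10):
    print(n, round(1 - (2 / 3) ** n, 2))
# → 1 0.33
#   2 0.56
#   5 0.87
#   10 0.98
```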
http://www.techrepublic.com/blog/tech-manager/personal-data-... "Last week in San Francisco, a federal court for the first time ruled that the Fifth Amendment of the U.S. Constitution — the right to not self-incriminate — protects against “forced decryption.” The judge, from the 11th Circuit in San Francisco, ruled that a Florida court violated a defendant’s rights when its Grand Jury gave him the choice to either reveal his TrueCrypt password or go to jail."
And it's not unbreakable. For starters, this system absolutely requires that the passwords be stored in the clear.
edit: yes, I know, encrypting the key with another string makes it just that tiny little bit more secure; technically it's still plain text...
So it's not unbreakable, nor is it crypto. I'm not sure if it's anything, really.
For instance, this could prevent employees of a large corporation from writing down a password, sharing it with a coworker, or even spelling it out over the phone to a bogus "support engineer" -- although fingerprint/eye/face recognition systems are probably more practical and easier to implement than a "guitar hero" learning session. But then the OP's method has an advantage over those: you can change an implicitly learned password more easily than your face or fingerprints...
The good thing about memorizing passwords this way is that it doesn't matter how random the password is - totally random letters, numbers and symbols or a sentence are the same when it's a keyboard dance.
As long as you have a keyboard anyway...
I have to find a keyboard to figure out half of my passwords when setting up my phone.
It's not "30-character unbreakable cryptography"; you can crack it in minutes on your phone or desktop.
The article actually says that each 'character' you learn is one of only 6 possibilities (about 2.6 bits per character at most), and the total entropy is only 38 bits.
To see how woefully little entropy that is: if you code, try writing a program that counts to 2^38, or on a 32-bit system go through the 4.2bn possible values of an integer 64 times. That's how many possible keys there are in a 38-bit password. It really takes just minutes, certainly far less than the 45 minutes the article says it takes to learn this password!
38-bit keys/passwords are not secure by any stretch of the imagination, no matter how they are chosen. (i.e. even the best random number generator on Earth doesn't help if you can just try every possibility in minutes.)
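To put rough numbers on "minutes", here's a sketch that times a trivial loop and extrapolates. The per-guess cost here is just a counter increment, so a real verifier would only be slower, and the 1e9 guesses/sec rate for native code or a GPU is an assumption:

```python
import time

KEYSPACE = 2 ** 38  # ~2.75e11 candidate keys

# Time a small sample of no-op "guesses" and extrapolate to the full keyspace.
sample = 5_000_000
start = time.perf_counter()
i = 0
while i < sample:
    i += 1
per_guess = (time.perf_counter() - start) / sample

print(f"pure Python enumeration: ~{KEYSPACE * per_guess / 3600:.1f} hours")
print(f"at 1e9 guesses/s (native code or GPU): ~{KEYSPACE / 1e9 / 60:.1f} minutes")
```

Even the pessimistic pure-Python estimate lands in the single-digit-hours range on ordinary hardware, so the "minutes with optimized code" claim holds up.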