Hacker News
Unbreakable crypto: Store a 30-character password in your subconscious memory (extremetech.com)
208 points by mrsebastian on July 19, 2012 | hide | past | favorite | 84 comments

> It also gives you deniability: If a judge or policeman orders you to hand over your password, you can plausibly say that you don’t actually know it

UK law requires that you make the encrypted data intelligible. Since you have encrypted data, there's a pretty good chance you have the software to decrypt it. "They" don't want the password; they want the data.

Failing to make the data intelligible (whether that's failing to provide the passphrase or whatever) carries a 2 year prison sentence for some people, with possibilities for a 5 year sentence for others.

tl;dr - this does not prevent law enforcement from getting the data.

Also, using this to guard against rubber-hosing is misguided. People prepared to use torture will do so, whether or not laws forbid it and whether or not it will yield any useful evidence.

---- EDIT: META:

Extremetech articles are really lousy. The self-posting by the author of a poorly written article is one problem; the heavy ad load is another; and I find it hard to believe that there isn't a voting ring up-voting these lousy articles.

A quick glance shows that about 90% of Mrs Ebastian's submissions are to articles that they've written for their employer.

Hey! No, no voting ring. But I do submit a lot of ET stories, that's for sure. I only try to submit stuff that I think is new/interesting/pertinent.

I think two or three ads per page is pretty good. I have seen some tech sites with much more than that. (As you probably know, running a free site that makes money from ad revenue is pretty tough at the moment, and isn't getting any easier.)

Apologies if you find the stories lousy. I try my best to dig up interesting stuff. Obviously the quality of the reporting isn't as good as if a professional cryptographer/material scientist/engineer etc wrote it -- but... I do the best I can :)

> Apologies if you find the stories lousy. I try my best to dig up interesting stuff. Obviously the quality of the reporting isn't as good as if a professional cryptographer/material scientist/engineer etc wrote it -- but... I do the best I can :)

Um, not mentioning the "details" thankfully supplied by this comment [1] is quite a big omission.

The topic is super interesting, but it's hard to tell whether the reporting is accurate.

[1] http://news.ycombinator.com/item?id=4271999

To add to that, the article doesn't make it clear why torture wouldn't work...

One can torture you until you start attempting to input the password and recover it from your neurological pathways. A password is a password, it doesn't matter how you're storing it because it can be retrieved.

The problem with that is, they don't have the passphrase, so they can't put it into the game they show you. And you don't know it, so you can't enter it, even if you want to.

You can't produce the password. You can only subconsciously recognize it.

From the original paper:

   Further complicating the attacker's life is the fact
   that subjecting a person to many random SISL games may
   obliterate the learned sequence or cause the person to
   learn an incorrect sequence thereby making extraction ...
They can, at best, try to log in as you, record the sequence the terminal gives, have you (under duress!) play against that sequence in a remote location, determine the code, and train themselves. But that requires a login failure; this particular system is supposed to panic after even one login failure.

I don't understand. Why can't they just hand you the terminal and say, "log in or we'll shoot you"? Why the roundabout process with recording the sequence and having a failed login and all that?

Doing something with a gun at your head is, I imagine, pretty hard. Imagine something you do with muscle memory every day - maybe fast typing or editor keyboard shortcuts or somesuch. Now deprive yourself of sleep and food, make yourself anxious, and then have someone hold a gun to your head. Are you still going to be typing as fast? With the same error rate? Are you going to hit the same keyboard shortcuts?

Having said all that, I agree it's a valid risk.

But it's extreme. Other risks are interesting to look at.

The game produces one stream of output, mixed from one unique (per-player) login sequence and some random data. So recording many streams would seem to make the real sequence recoverable, and then you just need to simulate the player's response to that input. Acoustic analysis is an established technology now, having had considerable investment and research because of its military applications. Recording the sound of keystrokes (of typewriters, some printers, computer keyboards) can produce accurate transcripts of what has been typed.

Having said all that, I am glad that there are people researching this stuff. It's a bizarrely under-researched gap in security.

And the underlying idea seems reasonable enough. I have a few passwords that I can enter if I'm in front of my keyboard, but give me a different keyboard and I'd struggle.

Sorry for the late reply. This system isn't designed to be used on a terminal over the net. From the original paper:

    The proposed system is designed to be
    used as a local password mechanism requiring physical
    presence. That is, we consider authentication at the 
    entrance to a secure location where a guard can ensure 
    that a real person is taking the test without the aid of 
    any electronics.
And . . .

    We note that physical presence is necessary in
    authentication systems designed to resist coercion
    attacks. If the system supported remote authentication
    then an attacker could coerce a trained user to
    authenticate to a remote server and then hijack the ...
If you're allowed remote attempts and multiple failures, the system is insecure in several ways. It's designed to work in a scenario where you get ONE attempt, and there's an armed guard who doesn't take kindly to it if you fail.

If the attacker has long-term control (e.g. hostage, blackmail, etc.) this is useless.

If the attacker does not, you'll simply ask for help as soon as you're there.

If the attacker wants to impersonate you, a photo check will work as well and is much faster

The authors and the news coverage claim this offers some sort of rubber-hose defense but the only scenarios described are either contrived or duplicate more proven techniques (e.g. duress codes, biometrics)

They would just threaten with torture unless you play the "passgame", just like they make you enter your password.

I think one advantage would be that they can force you to enter it… but only you. They can't get the password and kill you afterwards:)

Yeah, it's not described too well in the article, but I think there are some uses for this.

They have you play it and record your performance. Now they can spoof the game with a computer player that intentionally mimics your performance.

Simple answer there: just have the system automatically lock the account if you statistically miss the password, just as it statistically lets you in if you play properly.

Or maybe it's only accessible from secure locations - you don't go to this much trouble to secure day-to-day stuff.

It's not unbreakable, but the idea that you're letting someone in by statistical analysis of their conditioned response to a game, including their mistakes, is clever - no reason you couldn't lock them out or apply other security measures on failure to meet that response.

My first thought was "how would locking the account if you miss it help, if you don't know how to miss it?"

However, if you're in a situation where someone is trying to forcibly extract access from you, there's a good chance that a stressed state of mind would be reflected in variations in how you play, which could be noticed by the system.

Even if it's limited to a specific secured location, though, you still have to worry about the possibility of keyloggers, which could be used to mount a replay attack without you ever knowing.

If you don't like the article, flag it and move on. Meta discussions (particularly about voting conspiracy theories) are not interesting.

The deniability aspect is irrelevant. This is a method of authentication, not encryption. For the authentication to work, the system must know the password and present it to you, along with two other non-password foils. Then to access the system you must demonstrably show that you "score higher" on the trained sequence.

I'm not sure how one would apply this method to encryption.
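The comparison described above could look something like this minimal sketch. Everything here is my assumption rather than the paper's actual procedure: the function names, the two-foil setup, and the 10% margin are invented for illustration. The key point it demonstrates is that the verifier must hold the trained sequence in a usable (effectively cleartext) form.

```python
# Hypothetical sketch of the scoring step: the system knows the trained
# sequence and accepts a login only when play on the trained sequence
# beats average play on the random foils by some margin.

def hit_rate(responses, sequence):
    # Fraction of items the player hit correctly, position by position.
    return sum(r == s for r, s in zip(responses, sequence)) / len(sequence)

def authenticate(trained_responses, trained_seq, foil_results, margin=0.10):
    # foil_results: list of (responses, foil_sequence) pairs.
    trained_score = hit_rate(trained_responses, trained_seq)
    foil_score = sum(hit_rate(r, s) for r, s in foil_results) / len(foil_results)
    return trained_score - foil_score >= margin
```

Note that a legitimate user never "enters" the password; they merely play measurably better on it, which is what makes the verdict statistical rather than exact.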

> Mrs Ebastian's

more likely Mr Sebastian ... or was that intentional?

I've always liked that about my name -- it has a certain amount of ambiguity.

(It is Mr Seb though. Today, anyway.)

Some clarification/speculation: This is a method of authentication, not encryption. The trained sequence is not used to unlock/decrypt your data. In the multi-factor authentication scheme, this is probably best thought of as "something you are", and might be used along with something you have (RSA token, physical key, RFID badge) and something you know (encryption password, secret handshake). The threat model in the paper talks about protecting physical access and ensuring the person is watched by a guard.

"Threat model. The proposed system is designed to be used as a local password mechanism requiring physical presence. That is, we consider authentication at the entrance to a secure location where a guard can ensure that a real person is taking the test without the aid of any electronics"

Many of the comments I see here tend to assume that this is directly applicable to protecting a remote system such as logging in to a website. Perhaps with adaptation this could be a useful technique for authenticating to a website, but as far as I know no authentication scheme can protect against an intruder with a gun to your head forcing you to log in. Instead, the use case here is to prevent someone who has stolen your ID badge and forced you to give up your PIN from getting access to the top-secret bunker.

Quite the login method:

1) Tell me who you are, so I can load up your secret 30 character "password" from some database (the fact that this needs to be stored in a retrievable way makes this entire system insecure)

2) Here's one random sequence of 30 characters. Look at it for a little bit, ok now try to reproduce it from memory.

3) Repeat several times (not stated how many).

4) One of those attempts was your specific password, let me check to see if you did significantly better at it than the other (random) ones.


EDIT: Upon re-read, it sounds like 2-4 are a bit different:

2) Play a long sequence of characters "Guitar-Hero" style. The computer will "slip-in" the true password and watch to see if you do better on that section.

Still storing the password in the clear and still susceptible to being watched several times and finding the "common" sequence.

I'm not sure you've got 3-4 right, but it doesn't matter. Step 1 sinks the whole thing.

There's also the fact that your password will ALWAYS be shown as one of the sequences. Would-be hacker just tries 5 times and notes that THIS sequence keeps showing up, that must be the right one.
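The "try 5 times and see what keeps showing up" attack is easy to sketch. This simulation assumes (my assumption, consistent with the thread's reading of the scheme) that every round shows the real trained sequence alongside freshly generated random foils; intersecting what was shown across rounds then isolates the password:

```python
# Sketch of the repeated-sequence attack: foils are random per round,
# so only the trained sequence survives an intersection across rounds.

import random

ALPHABET = "sdfjkl"

def rand_seq(n=30):
    return "".join(random.choice(ALPHABET) for _ in range(n))

def observe_round(password):
    # What an onlooker sees in one round: the password plus two foils.
    shown = [password, rand_seq(), rand_seq()]
    random.shuffle(shown)
    return shown

def repeated_sequences(rounds):
    common = set(rounds[0])
    for shown in rounds[1:]:
        common &= set(shown)
    return common

password = rand_seq()
rounds = [observe_round(password) for _ in range(5)]
# With overwhelming probability only `password` survives the intersection.
```

This is exactly why the paper restricts attackers to a single, guarded authentication attempt.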

Maybe there's a more obtuse use-case but this seems like more of a cool experiment on human memory than a practical cryptography tool.

The paper cheats a bit and assumes a human attacker: "Threat model: The proposed system is designed to be used as a local password mechanism requiring physical presence. That is, we consider authentication at the entrance to a secure location where a guard can ensure that a real person is taking the test without the aid of any electronics."

I think this system is designed more for something like authenticating people for entry into a secure area, rather than for logging in to a computer. If it is more difficult to obtain access to the password storage than to your user's mind, then this is a useful system.

This is basically the same method I use for laptop hard disk encryption. I don't remember the password, but I typed it so many times my fingers remember exactly the pattern to type. Kind of like playing a piano.

Several times I've been drinking and found myself unable to remember how to log into my machine, because I can't replicate the pattern and don't remember the password. After 15 minutes of concentration it comes back.

I found I couldn't enter my password in unusual situations, e.g. holding the keyboard in one hand, typing with the other. I finally cottoned on that my fingers' muscle memory was typing the password consistently but it wasn't what I thought it was; it had been corrupted by similar N-grams in Unix commands from the moment I'd first entered it twice to passwd(1).

I don't even need to be drinking, but sometimes I'll fat finger it a few times and get frustrated. The only way I can get logged in is to type really really fast.

Yeah, I find speed is important too. The quicker you can type it, the easier it is to recall. Another thing: if I slow down, I notice I'm sort of humming parts of the pattern in my head, as if each character held a sort of audible weight that indicates where my fingers should go next... again, kind of like playing an instrument. Yet I can't play anything. Weird.

Sounds like that could be a sort of synaesthesia:


I've been wondering for a while where the boundary for being diagnosed with synaesthesia is - nearly everyone I know has some sort of synaesthesia, even if it's only associating numbers with colors. Yet it's nowhere near the viewing-numbers-as-landscapes of Daniel Tammet's Born on a Blue Day or Nabokov's gift with words.

I suspect almost everyone has a little synaesthesia, just like almost everyone has a little depression, a little anxiety, a little Borderline Personality, a little schizophrenia...

I guess it's only worth giving your mental quirks a name if, say, they're 2σ above norm. I dunno, ask your doctor (or statistician :P).

No, it's not. Your laptop doesn't know your password, and hence you can actually use it to generate an encryption key. This system needs the laptop to know your password.

When entering a password on my phone, I have to type it on my PC and then read it out. Even if I can say the password, things like case-sensitivity become an issue.

It is of course a much more refined approach; critically, there never is a stage at which you retain explicit knowledge of the password. With pseudo-implicit passwords (knowing how to type but not quite remembering what), recall is still possible -- either via explicit recall after sufficient deliberation, or via presentation of the input device.

(Neat trick, but reversible password encryption still seems like a massive flaw here...)

Same here with my ATM PIN - I could probably make a guess and tell you what the numbers are but not the order without actually using an ATM pad.

A few weeks ago it was late, I'd just come from the gym and not eaten anything and I couldn't figure out why my PIN wasn't working. Turned out I was trying to use a code that I stopped using a couple years ago.

I find that in this case, conscious concentration can actually make performance of the password sequence more difficult for me.

Also, I get into a stateful kind of memory where if I have to produce a password I normally produce at home while I'm at work, I produce the wrong password.

> If a judge or policeman orders you to hand over your password, you can plausibly say that you don’t actually know it

Surely for this system to help in allowing you to plausibly say that, you'd have to reference this system (or equivalent) and demonstrate that it is indeed used for the authentication the police want access to. And in that case, surely the police could just say "in that case, please authenticate for us"?

Hopefully stress means that you won't be able to do it properly anyway, which means coercion is useless.

The real problem is the device stores the password, so the real defence is the tamperproof-ness of the device, not whether you can be tricked or coerced into outputting the sequence.

Yeah, the research paper notes that they need to implement 'coercion detection'. From page 12:

"Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress."

That's more of a bug than a feature when you're the one under duress.

What if you're running late to do something, or you are anxious to get access to the data behind the authentication for some other non-duress reason? Duress-detection will be tricky (but I look forward to them doing it!).

I would personally prefer to have my password at any time, rather than have to get in the "zone" to authenticate into my computer.

The problem with using coercion is, the people using it never believe it's useless regardless of what's coming out of your mouth.

So all the clever people have concluded that this system is useless, because you can pull a gun on someone and force them to play the game. Not to mention: it's not even that much entropy! So let's all just forget about it and move on with our lives, right?

No. Of course not. What this system provides is a unique -extra- method of authentication. I really doubt this is meant to go on your laptop in place of a password scheme. But you might use something like it as part of multi-factor authentication, e.g. into a secure facility. Remember all those movies where somebody's eyeballs are removed/replaced/copied in order to fool a retina scanner? I can't comment on how plausible that is, but I can certainly tell you that if it were this system, they could not have broken it, period. I think that's pretty useful, don't you?

The authors are claiming that it helps against duress but the system as described only does so in the most limited theoretical scenario where the attacker and defender both have significant, contrived restrictions. There's a reason why you remember those retina scanner tricks from movies: in the real world, security is about protocols and those tricks would fail in any realistic scenario short of, say, aliens with body-sculpting nanobots.

As a trivial example: this system assumes a single attempt in a guarded facility. What benefit does this offer over a duress password, which our poor hostage provides knowing that it will trigger a full security response and lock out their access? For that matter, why not have the same guard who looks for tricks check your face against the employee database?

Some critics are getting hung up on the hard-to-understand details, or zeroing in on a few stretch claims about potential usefulness in certain situations. There is still novelty and innovation here. It is a different way to train, prompt, and evaluate authentication attempts.

Even if not perfectly resistant to all kinds of coercion, or ideally strong in an information-theoretic sense, its weaknesses in various dimensions are different than more traditional systems. It is thus suggestive of other potential directions in the design space, leveraging other aspects of human memory/behavior.

It bears some similarity to systems which add the timing of a person's typing as an added authenticating factor.

People are reacting to the authors overselling their work, as amplified by Extremetech: they're claiming this as hardening against rubber-hose cryptanalysis, which is simply untrue. The paper actually describes a system which has nothing to do with cryptography - it's authentication - and has failure modes identical to password authentication, except where it imposes significant new barriers to practical application.

If they'd published it as a minor curiosity suggesting an area for future research there'd be far less backlash.

Beside the title being misleading (it's a 30-symbol password, not 30-character, as "character" implies printable ASCII to most people), the math doesn't quite make sense:

Before running, the game creates a random sequence of 30 letters chosen from S, D, F, J, K, and L, with no repeating characters. This equates to around 38 bits of entropy

So that's 6 choices for the first character, and 5 choices for each of the next 29 gives us log2(6*5^29) =~ 70 bits of entropy. Does anyone know where this 38 bit figure came from?
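The arithmetic in the question checks out; a two-line sanity check:

```python
import math

# The commenter's count: 6 choices for the first symbol, then 5 for each
# of the remaining 29 (no immediate repeats).
naive_keyspace = 6 * 5 ** 29
naive_bits = math.log2(naive_keyspace)   # roughly 70 bits
```

So the 38-bit figure has to come from some further restriction on valid sequences.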

Ahh, wow. Thanks. This was totally glossed over in the article copy:

We only use 30-character sequences that correspond to an Euler cycle in the graph shown in Figure 2 (i.e. a cycle where every edge appears exactly once). These sequences have the property that every non-repeating bigram over S (such as 'sd', 'dj', 'fk') appears exactly once. In order to anticipate the next item (e.g., to show a performance advantage), it is necessary to learn associations among groups of three or more items. This eliminates learning of letter frequencies or common pairs of letters, which reduces conscious recognition of the embedded repeating sequence [5].
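One speculative way to recover a number close to 38 bits from the Euler-cycle restriction: if (and this is my assumption, not stated in the quote) the Figure 2 graph is the complete directed graph on the 6 keys (one edge per ordered pair, hence 30 edges), then every valid password is an Eulerian circuit of that graph, and the BEST theorem counts those circuits:

```python
import math
from math import factorial

# BEST theorem: ec(G) = tw(G) * product over v of (outdeg(v) - 1)!
# For the complete digraph on n vertices, the arborescence count tw(G)
# is n^(n-2) and every out-degree is n-1, so (outdeg - 1)! = (n-2)!.
n = 6
arborescences = n ** (n - 2)                            # 1296
euler_circuits = arborescences * factorial(n - 2) ** n  # 1296 * 24^6
bits = math.log2(euler_circuits)                        # about 37.9
```

That lands within rounding distance of the researchers' 38-bit figure, which at least makes the number plausible.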

This is interesting with regard to the brain, but not so much when it comes to waterboarding cryptanalysis... I mean, instead of asking for the password, they'd ask you to play the game: same difference, right? Or am I missing something?

It analyzes your performance on the sequence compared against benchmark random sequences. It's no longer the black-and-white comparison of classical cryptography.

So? That doesn't have anything to do with what I just said/asked.

This provides authentication, not key storage that enables encryption/decryption. Per the paper, for authentication "a participant is presented with multiple SISL tasks where one of the tasks contains elements from the trained sequence." Hence the system must already know the secret password. If that system is your laptop, then the feds already have the key when they seize it and don't need to resort to the rubber hose or its Russian variant, thermorectal cryptanalysis.

Also, the paper assumes the physical presence of a live human at some terminal for authentication. At the point that you can make assumptions about who is operating your authentication system, biometrics seem far faster and more reliable. Both those limitations, however, could change with further research.

>The most important aspect of this work is that it (seemingly) establishes a new cryptographic primitive that completely removes the danger of rubber-hose cryptanalysis — i.e. obtaining passkeys via torture or coercion.

Does not compute. If there is a mechanism by which you can authenticate, you can be coerced into authenticating through that method.

The paper covers this of course:

>Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress.

I take issue with the article suggesting it's completely resistant to coercion. A system that detects duress... interesting, I guess, but it seems like a stretch.

>This equates to around 38 bits of entropy, which is thousands/millions of times more secure than your average, memorable password.

Really? Playing around with KeePass briefly, it seems this is comparable to a 6 character password that includes upper, lower, numeric, and special characters. I wouldn't consider that very strong. Besides the fact that it appears you're not entering the password exactly, but only (if I'm understanding correctly) "good enough".
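The KeePass comparison holds up numerically: a truly random 6-character password over the 94 printable ASCII characters carries about the same entropy as this scheme.

```python
import math

# Entropy of a random 6-character password over 94 printable ASCII
# characters, for comparison with the scheme's ~38 bits.
bits_6_char = 6 * math.log2(94)   # about 39.3 bits
```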

This is pretty awesome, but the following is noteworthy:

> creates a random sequence of 30 letters chosen from S, D, F, J, K, and L, with no repeating characters. This equates to around 38 bits of entropy

Which is not so bad for certain applications, but certainly isn't the 180+ bits you'd have in a true random 30 character password.

I wonder what applications they have in mind where this password system could be used.

Obligatory xkcd: http://xkcd.com/538/

Only this time you'll have to log-in/decrypt on the spot rather than cough up your password.


It may be interesting research, but it certainly won't help with that issue (with writing passwords down, maybe).

And one thing may happen: you can have no clue what your password is, and no way to write it down, yet need to see the sequence to remember it! (Piano players may identify with that.)

While this does sound interesting from a psychological/neurological perspective, I feel bad for anyone who actually tries to implement a password system based on this. 38 bits of entropy is nothing; a standard password with 38 bits of entropy would take about 5 minutes to crack (assuming a GPU that can compute 1 billion hashes/second). Never mind that by the NIST specification for human-generated passwords, a 30-character string of alphas would be 45 bits of entropy. Also, as some others have pointed out, storing people's unique strings in the clear invalidates any strength this scheme could hope to achieve.
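A quick check of the five-minute claim, using the same assumed rate of 10^9 hashes per second:

```python
# Back-of-the-envelope time to exhaust a 38-bit keyspace at an assumed
# GPU rate of 10^9 guesses per second.
keyspace = 2 ** 38
rate = 1_000_000_000
minutes = keyspace / rate / 60   # about 4.6 minutes
```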

Source: http://en.wikipedia.org/wiki/Password_strength#Human-generat...

Conclusion: Interesting psychological experiment, not actually backed by any appreciable crypto knowledge.

Edit: disregard my NIST comment, someone linked the paper used to get the 38 bit figure, http://bojinov.org/professional/usenixsec2012-rubberhose.pdf.

38 bits of entropy for authentication may be plentiful if other security controls are put in place. Bank card security would not be noticeably increased by having 6 or 8 digit PINs instead of 4 digit PINs. The risk is mitigated by account lockout (swallowing cards), surveillance, damage limitation (daily withdrawal limits) and similar measures. The system proposed in this paper could be a valid mitigation against authentication risks in very specific circumstances.
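The bank-card point is easy to quantify: with the card swallowed after three wrong guesses, lengthening the PIN barely changes an attacker's odds, which are tiny either way.

```python
# Attacker's success probability under a 3-strikes lockout, for a
# 4-digit vs a 6-digit PIN.
guesses = 3
p_4_digit = guesses / 10 ** 4   # 0.0003   (0.03%)
p_6_digit = guesses / 10 ** 6   # 0.000003 (0.0003%)
```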

A better argument against this system would be one that addresses human usability and unnecessary cost/complexity.

Fair enough. My numbers are of course based on an unsalted hash which has been stolen from a db or otherwise obtained by an attacker.

Further arguments include high overhead for learning (not to mention changing passwords) a given password, storage of passwords, and the idea that your password isn't summonable on demand.

They had me until this part . . .

    Authentication requires that you play a round of the game —
    but this time, your 30-letter sequence is interspersed with 
    other random 30-letter sequences.
Which makes it sound to me like your password could be deduced from a single (failed) login attempt, and then reproduced after a session in the trainer.

Their discussion of that attack, from the paper itself:

    If the attacker is allowed multiple authentication 
    attempts — iterating the extraction and test phases, 
    alternating between the two — then the protocol may 
    become insecure.  The reason is that during an 
    authentication attempt the attacker sees the three 
    sequences k0; k1; k2 and could memorize one of them (30 
    symbols). He would then train offline on that sequence so 
    that at the next authentication attempt he would have a 
    1/3 chance in succeeding. If the attacker could memorize 
    all three sequences (90 symbols), he could offline 
    subject a trained user to all three sequences and 
    reliably determine which is the correct one and then 
    train himself on that sequence. He is then guaranteed 
    success at the next authentication trial.

    We note that this attack is non-trivial to pull off
    since it can be difficult for a human attacker to
    memorize an entire sequence at the speed the game is ...
. . . which isn't all that reassuring, given that if I were trying to break in using this technique, I wouldn't be memorizing, I'd be recording.

But it sounds like the system is designed to only give an attacker one trial (notionally opening a trap door under his feet if he fails even once), and it does seem much more secure in that context.

On the topic of courts: there is a US court case in the 11th Circuit where a federal judge, in fact, ruled that people are not required to give up their encryption passwords under the Fifth Amendment. It isn't a Supreme Court case, however.

http://www.techrepublic.com/blog/tech-manager/personal-data-... "Last week in San Francisco, a federal court for the first time ruled that the Fifth Amendment of the U.S. Constitution — the right to not self-incriminate — protects against “forced decryption.” The judge, from the 11th Circuit in San Francisco, ruled that a Florida court violated a defendant’s rights when its Grand Jury gave him the choice to either reveal his TrueCrypt password or go to jail."

Nitpick: This is not unbreakable crypto. It's more of a secure key-storage mechanism. Perhaps also a good defense against phishing attacks.

And it's not unbreakable. For starters, this system absolutely requires that the passwords be stored in the clear.

What about encrypting your 'secret' password with a normal password? You get assigned this 30-character password, which you learn. Then you use a normal password (like 'password123' :) ) to encrypt that string. When you need to log on, you first type in your normal password to decrypt your 'secret' password, which is then used to authenticate you further. I know, it sounds ridiculous; just thinking out loud.

edit: yes, I know, encrypting the key with another string makes it only a tiny bit more secure; technically it's still plain text...

It sounds ridiculous for a reason. The weakest link in that chain is still the low-entropy password.

It's not even that. You can't store a key with this device, because for the authentication game to work, the system has to have the password.


So it's not unbreakable, nor is it crypto. I'm not sure if it's anything, really.


It may not be anywhere close to unbreakable or torture-proof, as the author implies, but this system (or similar approaches) could tighten some classic security flaws with passwords.

For instance, this could prevent employees of large corporations from writing down or sharing a password with a coworker, or even spelling out their password over the phone to a bogus "support engineer" -- although fingerprint/eye/face recognition systems are probably more practical and easier to implement than a "guitar hero" learning session. But then the OP's method has an advantage over those: you can change an implicitly learned password more easily than your face or fingerprint...

I don't see what's new here. I already use muscle memory to remember my passwords. I am awful at rote memorization, but when I train my fingers to perform a 12 character password dance, everything is fine and I can remember the password for a long time.

The good thing about memorizing passwords this way is that it doesn't matter how random the password is - totally random letters, numbers and symbols or a sentence are the same when it's a keyboard dance.

As long as you have a keyboard anyway...

I have to find a keyboard to figure out half of my passwords when setting up my phone.

Isn't there a slight problem whereby if someone denies knowing the password, you just put them in front of the keyboard and ask them to type something? Since it's a subconscious memory, it 'just happens'.

It wouldn't necessarily 'just happen'. From what I can glean, the idea is that if you are trying to play the game as well as possible, then the portion that you originally learned would be played better. You could certainly intentionally play the entire game poorly, thereby masking which portion is the password.

This explains why I could never remember my locker combo, but could unlock it if you handed me the blasted thing. Same goes for PINs. The second I think about what the real number is, I lose it...

Does this mean the system has to store your password as plain text? I trust myself to choose a secure password more than I trust any service to keep a plain-text password secure.

Crypto? This is a solved problem: http://www.halfbakery.com/idea/ATM_20handshake_202

Wouldn't it be better to lock a system up after X number of failed attempts and then require another unknown person to also login, perhaps even remotely?

> can’t be obtained via coercion or torture

"Hi, yes, that is a gun to your back. Please log in to your system for me." ... "Atta boy."

It's not going to fly because it's not compatible with the corporate policy of changing password every 60 days.

I store a passphrase which is 31 characters in memory.

This is a sensationalist headline, and this is not a strong password length. Based on the information in the article, this is really equivalent to a "strong" 5-character password - not very secure.

It's not "30-character unbreakable cryptography", you can crack it in minutes on your phone or desktop.

Technical details:

The article actually says that each 'character' you learn is one of only 6 possibilities - for only ~2.5 bits per character and a total entropy of 38 bits.

To see how woefully little entropy this is, if you code, try writing a program that counts to 2^38 - or, on a 32-bit system, go through the 4.2bn possible values of an integer 64 times. That's how many possible keys there are in a 38-bit keyspace. It really takes just minutes - certainly far less than the 45 minutes the article says it takes to learn this password!

Just want to point out that the "entropy of 38 bits" comes from the researchers - the first character has entropy of ~2.5 bits, but not all length-30 'passwords' are valid, only a very small number of them, according to the researchers.

38-bit keys/passwords are not secure by any stretch of the imagination, no matter how they are chosen. (i.e. even the best random number generator on Earth doesn't help if you can just try every possibility in minutes.)

Unbreakable? Bah, torture would work, and that's much faster than cracking a password.
