149 points by arthur2e5 on March 14, 2018 | hide | past | favorite | 33 comments

 This reminds me of the Pontifex cipher that featured heavily in Neal Stephenson's Cryptonomicon, and which was in fact designed by Bruce Schneier (who calls it Solitaire). [0]
 I came to post this same comment, then found that the paper itself cites Solitaire!

 The only downside I see for this implementation is that a stack of "weird" tiles would be much more discoverable for any TLA than a deck of cards.
 I don't see any obvious reason why you couldn't use a deck of cards with ElsieFour:

 1) shuffle the deck
 2) split it into two parts, the first part having 36 cards
 3) assign the alphabet to the 36-card part (e.g. by sorting the cards and assigning characters sequentially)
 4) shuffle the parts separately
 5) combine the parts in an easily reversible way (e.g. by simply appending)

 When doing the crypto, just use the 36-card part. I suppose the remaining cards could relatively easily be repurposed as an RNG for nonce generation.
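 Steps 2-3 can be sketched in a few lines of Python. This is only an illustration of the mapping, not anything from the paper; the 36-symbol alphabet string is an assumption, and cards are represented as the integers 0-51:

```python
# Assumed 36-symbol alphabet; any fixed choice of 36 symbols works.
ALPHABET = "#_23456789abcdefghijklmnopqrstuvwxyz"

def key_from_shuffled_deck(deck):
    """Given an already-shuffled 52-card deck, split off the first 36 cards
    and read the key from their shuffled order (steps 2-3 above)."""
    part = deck[:36]  # step 2: first 36 cards
    # step 3: sort the 36 cards and assign alphabet characters sequentially
    order = {card: k for k, card in enumerate(sorted(part))}
    return "".join(ALPHABET[order[card]] for card in part)
```

 An unshuffled deck maps to the alphabet itself, so `key_from_shuffled_deck(list(range(52)))` just returns `ALPHABET` unchanged.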
 > I suppose the remaining cards could relatively easily be repurposed as RNG for nonce generation

 I spent the better part of an hour trying to google this to no avail. How do you convert a random permutation of n items into a random number?
 You'll want to basically perform the reverse of a Fisher-Yates shuffle. Instead of generating a random number, you look at which card needs to be swapped with the Kth card in the deck to put the top K to N positions of the deck in sorted order.

 If deck is a list holding a permutation of the numbers 0 to N - 1, it looks something like:

```python
random_value = 0
for i in range(len(deck) - 1, 0, -1):
    index = deck.index(i)  # where card i currently sits
    random_value += index
    random_value *= i
    deck[index], deck[i] = deck[i], deck[index]
```

 The above is a one-minute prototype and may or may not work. When I get some time, I'll actually test it, but it's close to doing the right thing.

 This naive implementation is O(N^2). The end result is a sorted deck, and random_value is a number that, when repeatedly mod'd and divided by the appropriate index to generate each "random" number during a Fisher-Yates shuffle, would reproduce your original permutation.

 In practice, you'd implement the algorithm in O(N) by first doing a single linear scan to build an index (just held in an array) from cards to their positions. Also, you wouldn't actually perform the swap, since you never look at deck[i] after performing it; you'd just do deck[index] = deck[i]. So, at the end, deck won't wind up sorted.
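 A sketch of that O(N) variant, under the same assumption that deck holds a permutation of 0 to N - 1 (spot-checked on small permutations, not proven correct):

```python
def permutation_to_number(deck):
    """Encode a permutation of 0..N-1 as a factorial-base number by
    reversing a Fisher-Yates shuffle, in O(N): a position index replaces
    the repeated linear searches, and only half of each swap is performed."""
    n = len(deck)
    pos = [0] * n
    for idx, card in enumerate(deck):  # single linear scan: card -> position
        pos[card] = idx
    deck = list(deck)                  # work on a copy
    random_value = 0
    for i in range(n - 1, 0, -1):
        index = pos[i]                 # where card i currently sits
        random_value += index
        random_value *= i
        deck[index] = deck[i]          # half-swap; deck[i] is never read again
        pos[deck[index]] = index
    return random_value
```

 The sorted (identity) permutation encodes to n! - 1 rather than 0, since each Fisher-Yates digit takes its maximum value i; either convention works as long as encoding and decoding agree.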
 e12e on March 14, 2018

 [ed: no, that's not right. With two cards, you can have red-black or black-red. I guess that'd translate to n bits for 2n cards, but it's not quite trivial... For four, we have: rrbb, rbrb, rbbr, bbrr, brbr, brrb, etc.]

 I think I'm misunderstanding you - but say you have 2n cards, n red and n black. If you shuffle them, you have 2n random bits, read from first to last? (This is assuming it's feasible to actually randomly shuffle cards.)
 No, in this application the cards are all distinct, not merely half red and half black. There are n! permutations of a deck of n cards, and a uniformly chosen random permutation contains entropy equal to the base-2 logarithm of n!.

 Both choosing a random permutation given a supply of random bits and, as noted in the parent post, extracting random bits from a given random permutation are nontrivial computational problems leading to interesting algorithms; shuffling cards or tiles is a good practical shortcut to obtain a random permutation more directly.
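 To put rough numbers on log2(n!), it's easy to evaluate without huge intermediate integers; the figures in the closing comment are just rounded evaluations of this function:

```python
import math

def permutation_entropy_bits(n):
    """Entropy, in bits, of a uniformly random permutation of n distinct
    items: log2(n!), computed via lgamma (ln of the Gamma function, where
    Gamma(n+1) = n!) to avoid forming n! itself."""
    return math.lgamma(n + 1) / math.log(2)

# LC4's 36 tiles give about 138 bits; a full 52-card deck about 226 bits.
```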
 Indeed. I was attempting to induct from basic principles - as in two distinct cards. But the GP's right - it's not trivial. Unsure if it's simple :)
 It'd be simpler to just ignore the tens and face cards.
 It'd be easy enough to store a key in a deck of cards. (Just map the alphabet to cards 1:1 and ignore some cards.)
 I'm not really digging deeply into this at all - because, really, does it matter? - but for what it's worth, the rationale this paper gives for being more secure than RC4 (on which it is based, and which is infamously one of the most egregiously insecure mainstream ciphers) is a little flimsy.

 For many years after the adoption of RC4, it was known that there were pronounced biases early in the RC4 keystream. IIRC, these were the weaknesses that the 802.11 attacks took advantage of. But the more recent TLS attacks are based on a better understanding of RC4 keystream biases, and some of them (the Fluhrer-McGrew biases, at least) persist throughout the entire keystream. I don't think you can simply encrypt a long nonce to avoid them.

 (I'm not an expert on this and could be wrong, of course.)
 I'm a complete novice in this matter, but from the sounds of it, even if LC4 does not completely remove the biases but only reduces them, wouldn't it still be an improvement? I'm kinda assuming that the remaining biases would be smaller, and thus harder to exploit in an attack?
 Particularly if we're talking hundreds of characters here, not megabytes to work with or gigabytes that you can bounce off of an oracle. Humans will notice if you ask them to decrypt 20 near-identical messages and report which ones give a MAC error.
 I mean, again, this is not a branch of cryptography I take especially seriously, but I'd assume that if you were defining any kind of new cipher, you'd want to avoid constructions that were known to have fatal flaws embedded in them.

 Either way, I brought it up because the author brings it up in the paper, but doesn't seem to fully address the literature on the attack he's trying to defend against (I may have missed something, though).
 It seems to me that the LC4 key is essentially equivalent to the RC4 state, and thus the RC4 early-keystream bias does not apply, as there is no key-expansion phase.

 Edit to clarify: the LC4 key has to be a permutation of 36 elements, while RC4's state is a permutation of 256 elements that is constructed from a byte-string key, and the issue lies in how this string->state transformation works (i.e. you have to pump the function >500 times to get unbiased output).
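 For reference, the key-expansion phase in question is RC4's key-scheduling algorithm (KSA), which builds the 256-element state permutation from a short key; LC4, by contrast, takes the 36-element permutation directly as its key. A minimal sketch of the standard KSA:

```python
def rc4_ksa(key):
    """RC4 key scheduling: expand a byte-string key into the 256-element
    state permutation. The early-keystream biases stem from this expansion
    leaving statistical traces of the key in the initial state."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S
```

 The state remains a permutation of 0-255 throughout; the problem is that for short keys it is far from a uniformly random one.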
 In other words, brute-force attacks depend on the ability to keep trying essentially indefinitely; it's a similar situation with payment card CVVs --- although they're only 3-4 digits, which makes for a tiny keyspace, any attempt to brute-force one will be quickly detected and blocked.
 Of course, but that wasn't my point. Not only are brute-force attacks foiled by humans; attacks like padding oracles also become impossible, because the user will notice.
 I know this is not the appropriate venue, but can you get somebody to renew the cert for starfighter? Thank you.
 For ciphers like this it would be interesting to compare the speed at which a human can run them (LC4 vs. Solitaire, AKA Pontifex [0]). Also, it seems like Schneier's cipher only requires you to have playing cards, while LC4 requires you to already have the tiles (described as an appliance).
 The tiles are hardly a requirement here; as mentioned in other comments, you can use a deck of cards with LC4 as well.
 In case you find the alphabet too small, there's a variant called ls47 (https://github.com/exaexa/ls47) that runs on a 7x7 board of tiles with ~128 bits of security. It also suggests a key-expansion algorithm.
 It's cute, but even at 30 seconds per character, a 160-character SMS takes over an hour to decode.
 I don't think this is intended for electronic communications.
 160 characters isn't an exceedingly long message. That's two lines on a default terminal.
 It's kinda weird to measure a non electronic message in units of electronic terminal widths.
 It certainly is.

 I'd add that it's two lines on a typewriter, but then I'd need to explain what a typewriter is.
 jandrese on March 15, 2018

 Trying to translate it into a handwritten message length instantly runs into the problem that handwriting is all over the place. I guess another way to think about it is that 160 characters is roughly 32 English words.
 A lot can be encoded in 3 chars.
 I don't see the benefit of these. Just because I can manually encrypt and decrypt doesn't mean I have more confidence in the cipher. And if I already trust the cipher, I might as well trust a program that does the encryption and decryption.
 RC4, on which this paper is based, was the original choice for the CipherSaber project: https://en.wikipedia.org/wiki/CipherSaber

 The core goal of that project was the idea that people should have basic access to some sort of reasonably strong cryptography as a matter of personal liberty. The theory goes that if an algorithm is easy enough to do manually, it's also easy enough to reimplement in a situation where you have to forge your own tools (all you have is a deck of cards, pencil, and paper; or all you have is a TI-83 graphing calculator and its BASIC programming manual - can you still securely send a message?).
 There are two benefits, both related to keys:

 1. Key generation using tiles as proposed is truly random, not susceptible to attacks on a PRNG.

 2. The key is never connected to any network in any way - it's effectively airgapped. This removes whole classes of key-stealing and side-channel attacks.

 That said, while real randomness is still uncommon, there are electronic solutions, and the benefits of networking generally outweigh the security risk for most use cases.
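 As an electronic middle ground for point 1, an OS CSPRNG can stand in for the bag of tiles; in Python, for instance, the `secrets` module draws from the OS entropy pool rather than a seedable PRNG. (The 36-symbol alphabet below is an assumption for illustration.)

```python
import secrets

def draw_key(tiles="#_23456789abcdefghijklmnopqrstuvwxyz"):
    """Simulate drawing all 36 tiles from a bag one at a time, each draw
    uniform over the tiles remaining -- yielding a uniformly random
    permutation, i.e. a fresh key."""
    bag = list(tiles)
    key = []
    while bag:
        key.append(bag.pop(secrets.randbelow(len(bag))))
    return "".join(key)
```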
 The idea is interesting, but unfortunately it's still too hard to remember the rules and encrypt or decrypt by hand. A one-time pad is much easier.
