"This paper presents the first publicly available cryptanalytic attacks on the GEA-1 and GEA-2 algorithms."
"This unusual pattern indicates that the weakness is intentionally hidden to limit the security level to 40 bit by design."
So in other words: GPRS was intentionally backdoored.
Note that this high-level insight isn't really a contribution of the paper, given that the authors of the algorithm basically admitted it themselves. Excerpt from the paper:
It was explicitly mentioned as a design requirement that “the algorithm should be generally exportable taking into account current export restrictions” and that “the strength should be optimized taking into account the above requirement” [15, p. 10]. The report further contains a section on the evaluation of the design. In particular, it is mentioned that the evaluation team came to the conclusion that, “in general the algorithm will be exportable under the current national export restrictions on cryptography applied in European countries” and that “within this operational context, the algorithm provides an adequate level of security against eavesdropping of GSM GPRS services”
Further, we experimentally show that for randomly chosen LFSRs, it is very unlikely that the above weakness occurs. Concretely, in a million tries we never even got close to such a weak instance. Figure 2 shows the distribution of the entropy loss when changing the feedback polynomials of registers A and C to random primitive polynomials. This implies that the weakness in GEA-1 is unlikely to occur by chance, indicating that the security level of 40 bits is due to […]
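The "entropy loss" being measured there can be illustrated with a toy GF(2) linear-algebra sketch. This is not the real GEA-1 structure: the seed size, register sizes, and maps below are all made up for illustration. The idea is that the number of reachable joint register states is 2^rank of the stacked initialization maps, so any linear dependence between the two registers' maps costs entropy.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of integer row bitmasks."""
    pivots = {}  # highest set bit -> reduced row with that pivot
    for row in rows:
        cur = row
        while cur:
            hb = cur.bit_length() - 1
            if hb in pivots:
                cur ^= pivots[hb]   # eliminate the leading bit
            else:
                pivots[hb] = cur    # new independent row found
                break
    return len(pivots)

SEED_BITS = 8  # toy seed size (GEA-1's session key is 64 bits)
random.seed(1)
# Each register bit is some GF(2) linear combination of seed bits.
M_A = [random.getrandbits(SEED_BITS) for _ in range(4)]
M_C = [random.getrandbits(SEED_BITS) for _ in range(4)]
# A deliberately "weak" register C: every row lies in the span of M_A's rows.
weak_C = [M_A[0] ^ M_A[1], M_A[1] ^ M_A[2], M_A[0], M_A[3]]

random_rank = gf2_rank(M_A + M_C)   # typically near 8: little or no loss
weak_rank = gf2_rank(M_A + weak_C)  # at most 4: at least 4 bits lost
print("random C -> joint entropy:", random_rank, "bits")
print("weak   C -> joint entropy:", weak_rank, "bits")
```

The paper's experiment is the analogue of drawing `M_C` at random many times and observing that the joint rank essentially never collapses the way GEA-1's does.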
The problem with this statement is that nobody outside of the design staff understood how the algorithm was weak, or (AFAIK) precisely what the criteria for "weak" actually were. Moreover, after the export standards were relaxed and GEA-2 had shipped, nobody came forward and said "remove this now totally obsolete algorithm from your standards, because we weakened it in this way and it only has 40-bit security." That is why it is still present in phones as recent as the iPhone 8 (2017), which may be vulnerable to downgrade attacks.
There are some stupid ways to weaken a cipher that would make it obvious that something was weak in the design, e.g., just truncating the key to 40 bits (as IBM did with DES, going from 64 to 56 bits by repurposing eight key bits as parity bits). The designers didn't do this. They instead chose an approach that could only be detected using a fairly sophisticated constraint solver, which may not have been so easily available at the time. So I don't entirely agree with this assessment.
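The two "obvious" weakenings mentioned above can be sketched in a few lines (a toy illustration; in real DES the low bit of each key byte is a parity bit, so only 56 of the 64 stated bits ever reach the cipher):

```python
def strip_des_parity(key64):
    """DES-style: the low bit of each key byte is parity,
    so a 64-bit key carries only 56 secret bits."""
    out = 0
    for i in range(8):
        byte = (key64 >> (8 * i)) & 0xFF
        out |= (byte >> 1) << (7 * i)  # keep the 7 non-parity bits
    return out

def truncate_to_40(key64):
    """Export-style truncation, cruder still: zero everything above bit 40."""
    return key64 & ((1 << 40) - 1)

full = 2**64 - 1
print(f"{strip_des_parity(full):x}")  # 56 effective bits remain
print(f"{truncate_to_40(full):x}")    # 40 effective bits remain
```

Either transformation is trivially visible in a specification, which is exactly the commenter's point: GEA-1's weakness is structural and invisible at this level.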
And, as I mention, pointing that out was a contribution of the paper.
> or (AFAIK) precisely what the criteria for "weak" actually were
I think this 40-bit limit is well documented for other encryption algorithms. I couldn't find it in any (old) US regulation text, though, after a cursory search.
The "U.S. edition" supported full size (typically 1024-bit or larger) RSA public keys in combination with full size symmetric keys (secret keys) (128-bit RC4 or 3DES in SSL 3.0 and TLS 1.0). The "International Edition" had its effective key lengths reduced to 512 bits and 40 bits respectively (RSA_EXPORT with 40-bit RC2 or RC4 in SSL 3.0 and TLS 1.0)
Maybe I didn't make it clear. The open question prior to this paper was not "precisely how did the algorithm implement a specific level of security", the question was: what is that specific level of security? This was totally unknown and not specified by the designers.
Notice that the specification doesn't define the desired security, in the same way that it defines, say, the key size. It just handwaves towards 'should be exportable'. I can't find a copy of the requirements document anymore, but the quote given in the spec doesn't specify anything more than that statement.
>I think this 40 bit limit is well documented for other encryption algorithms. I couldn't find it in any (old) US regulation text though after a cursory search.
In the United States (note: GEA-1 was not a US standard) some expedited export licenses were granted to systems that used effective 40-bit keys. In practice (for symmetric ciphers) this usually meant RC2 and RC4 with explicitly truncated keys. GEA-1 does not have a 40-bit key size -- a point I made in the previous post. It has a 64-bit key size. Nowhere does anyone say "the requirement for this design is effective 40-bit security": they don't say anything at all. It could have had 24-bit security, 40-bit security, 56-bit security or even 64-bit security.
ETA: Moreover, there is more to this result than 40-bit effective keysize. A critical aspect of this result is that the attackers are able to recover keys using 65 bits of known keystream. The uncompromised GPRS algorithms require several hundred (or over 1000) bits. Note that these known plaintext requirements are somewhat orthogonal to keysize: capturing 65 bits of known keystream is possible in protocols like GPRS/IP due to the existence of structured, predictable packet headers -- as the authors point out. Capturing >1000 bits may not be feasible at all. That's really significant and interesting, and not the result one would expect if the design goal was simply "effective 40-bit key size". One has to wonder if "ability to perform passive decryption using small amounts of known plaintext" is also included in the (missing) security design requirements document. I bet it isn't.
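The known-keystream point generalizes to any stream cipher: because ciphertext is plaintext XOR keystream, every predictable plaintext byte hands the eavesdropper a keystream byte for free. A toy model (the header bytes below are made up, and `os.urandom` stands in for the cipher's output):

```python
import os

# Toy stream-cipher model: ciphertext = plaintext XOR keystream.
keystream = os.urandom(16)       # stands in for GEA-style cipher output
plaintext = b"IP-header: v4 .."  # structured, predictable header bytes
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# The eavesdropper knows the header format, so XORing it back out
# recovers the keystream bits needed to mount the attack.
recovered = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
print(recovered == keystream)
```

This is why "65 bits of known keystream" is such a low bar for GPRS/IP traffic, while ">1000 bits" may never be available in practice.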
Really? How would you get 64-bit security from a 40-bit key?
I think the question then becomes, is the regulation still satisfied if the specifics of the intentional limitation/weakness/exploit are undocumented? It's likely moot these days, but curious nonetheless.
I kind of suspect that the weakness of GEA-1 is one of those industry secrets that everybody knows but nobody talks about.
in 1st world maybe
In fact in Germany, the country of one of the paper's authors, precisely this is happening: 3G is being turned off while 2G is used as fallback for devices that don't support LTE. Apparently there are some industrial use cases of 2G that still rely on it. In Switzerland, they are instead turning off 2G and keeping 3G as the fallback.
IDK how the situation is in third world countries, but note that India at least is in the top ten when it comes to LTE coverage. https://www.speedtest.net/insights/blog/india-4g-availabilit...
Especially later revisions of 4G and all 5G due to MIMO and signal power estimates for roaming. Also may allow the use of local WiFi to save power.
It would be nice to use newer protocols on lower frequencies for range but regulation is what it is.
Trust the phone to do the right thing.
> According to a study by Tomcsányi et al., that analyzes the use of the ciphering algorithm in GPRS of 100 operators worldwide, most operators prioritize the use of GEA-3 (58) followed by the non-encrypted mode GEA-0 (38). Only a few operators rely on GEA-2 (4), while no operator uses GEA-1 (0).
And they will move to exploiting backdoors in newer standards.
Even in their best-case form, 64-bit keys were in the realm of supercomputer-level brute forcing by the late nineties, if just a few more cipher-quality degradations and key leaks were known.
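For scale, a rough back-of-envelope comparison. The guess rate is an assumption, chosen to be in the ballpark of the 1998 EFF "Deep Crack" DES machine (roughly 9e10 keys per second):

```python
RATE = 9.2e10  # keys/second; roughly the 1998 EFF "Deep Crack" DES cracker

for bits in (40, 56, 64):
    days = 2**bits / RATE / 86400
    print(f"2^{bits} keyspace: {days:12.4f} days to sweep")
```

At that rate a 40-bit sweep takes seconds, a 56-bit sweep about a week, and a 64-bit sweep several years on the same hardware, which is the gap the export rules were designed around.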
Encrypting traffic for privacy purposes was less important. Prior analog cellular telephony systems were unencrypted, as were analog and digital wireline services. Thus, the privacy cryptography was intended to be strong enough to make it a little more inconvenient/expensive to eavesdrop on digital cellular than it was on analog cellular or wireline services without significantly impeding law enforcement.
There's a reason this was done so discreetly.
Those governments that did probably violated their own laws; we are dealing with criminals here.
IBM had the Commercial Data Masking Facility, which did key shortening to turn a 56-bit cipher into a 40-bit one.
Now we've got this weird interaction which similarly reduces key length.
Seems pretty obviously intentional?
Maybe 40 bits was seen as sufficient at the time, but are there any engineering reasons to actually shorten the key intentionally? Does it improve the transfer rate in any way?
I can't think of any, but I'm no expert, so maybe somebody else can chime in?
And as far as the other half of your question, no, there's no possible benefit (other than to the backdoor owners) from a smaller keyspace, as it goes through the motion of encrypting with the larger one.
In some situations, rather than designing new ciphers, they would just weaken the key-generation side. The IBM effort there was public, to allow for easier export, but the same approach could be used to hide a weakness, which in other settings may have been beneficial. It's possible, however, that the folks involved understood what was going on to a degree, but that it was seen as necessary to avoid export/import restrictions.
More recently I think places like China ask that the holders of key material be located in country and make the full keys available or accessible to their security services. Not sure how AWS squares that circle unless they outsource their data centers in China to a third party that can then expose the keys for China to use.
This is in fact precisely what they do. The Beijing region is operated by Sinnet and the Ningxia region is operated by NWCD. It's documented here: https://www.amazonaws.cn/en/about-aws/china/
With KMS actually the only thing that prevents encrypt/decrypt from anywhere on the Internet is the access control logic of the service. AWS of course hold the private keys for ACM certs too and could provide them upon request. (Side note: the KMS "BYOK" functionality is almost completely useless).
Realistically you should expect any cloud-hosted infrastructure to be fully transparent to relevant authorities. This is usually not a business problem.
If you check something like the Public Suffix List , you will notice that Amazon maintains separate com.cn domains for several of its services in China. Amazon doesn't appear to do that for any other country. It follows that AWS in China might well be isolated from AWS elsewhere.
If I make a product for global use and Russian use, am I malicious for telling Russian customers that I’ll cut out information on homosexuality by Russian law? Or is the malice in Russia requiring that? Can we expect companies, especially international conglomerates, to give up on potential markets in order to protest a law?
(This is not a caricature, this is more or less what a couple of Russian acquaintances working on YouTube at Google tell me. One could also say the same, mutatis mutandis, for China or India or Saudi Arabia, but I know much less about what’s happening there. Also intentionally not the most outrageous example, only the one that hopefully hits closest to home for a Western audience.)
For example, I don't think James Comey was acting with malicious intent calling for universal crypto back-doors; I do however think that he was dangerously naïve and deeply wrong-headed.
No back-door will ever remain unbreached and by baking vulnerabilities into the specification you're paving the way for malicious manufacturers to exfiltrate your network's communications as they see fit.
There's a reason that 5G rollouts have national security implications and they could've been largely avoided (metadata aside).
I live quite far out though, so I guess it won't apply to that many.
There's a clearer write-up of this discovery and its implications here:
If law enforcement needs to wiretap a specific phone/SIM they only need to request it to the operator. Over-the-air encryption is irrelevant.
Nowadays operators can duplicate all packets and send copies to law enforcement/security services in real time so that they can monitor 100% of a given phone's traffic from their offices.
It's nothing new either - for as long as there has been a postal service, your mail could be opened by order of this or that government official. Not even the Pony Express was immune to that ;-)
If you want greater secrecy do like the paranoid intel community has been doing since at least the cold war and exchange messages on seemingly disconnected channels - say 2 different newspaper columns, wordpress blogs or even websites made to look like robot-generated SEO spam, using code that was previously agreed in person, with everyone keeping hardcopies if necessary.
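The pre-agreed-code scheme described here can be as crude as a shared lookup table agreed in person. The entries below are purely hypothetical, for illustration:

```python
# Hypothetical codebook agreed in person beforehand; the public channel
# (a blog, a newspaper column, SEO-spam-looking pages) carries only the
# innocuous-looking text.
CODEBOOK = {
    "the roses bloomed early this year": "meeting confirmed",
    "rain expected on thursday": "abort",
}

def decode(public_post):
    """Return the agreed meaning, or None for posts that are just noise."""
    return CODEBOOK.get(public_post.strip().lower())

print(decode("Rain expected on Thursday"))  # -> abort
```

The security rests entirely on the codebook never touching a network, which is exactly why the comment stresses in-person agreement and hardcopies.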
Does anyone here know? Or am I correct in assuming the implementation is kept under pretty heavy lock due to its proprietary nature?
Historically the realisation that you need outside Cryptographers (not consumers) if you actually want to do anything novel with cryptography† was slow to arrive.
Even on the Internet, for PGP and SSL there was no real outside cryptographic design input. In SSL's case a few academics looked at the design of SSLv1, broke it, and that's why SSLv2 is the first version that shipped. Only TLS 1.2 finally had a step where they asked actual cryptographers "Is this secure?", and that step came after the design work was finished. TLS 1.3 (just a few years ago) is the first iteration where they had cryptographers looking at the problem from the outset, and the working group rejected things that cryptographers said couldn't be made to fly.
And TLS 1.3 also reflects something that was effectively impossible last century, rather than just a bad mindset. Today we have reasonably good automated proof systems. An expert mathematician can tell a computer "Given A, B and C are true, is D true?" and have it tell them either that D is necessarily true or that in fact it can't be true because -something- and that helps avoid some goofs. So TLS 1.3 has been proven (in a specific and limited model). You just could not do that with the state of the art in say 1995 even if you'd known you wanted to.
Now, we need to get that same understanding into unwieldy SDOs like ISO, and also into pseudo SDOs like EMVco (the organisation that makes "Chip and pin" and "Contactless payment" work) none of which are really getting the cryptographers in first so far.
† "But what I want to do isn't novel". Cool, use one of the existing secure systems. If you can't, no matter why, then you're wrong: you did want to do something novel, so start at the top.
I claim the correct approach isn't "We should hire a cryptographer", although that wouldn't hurt a lot of designs, but "We need a lot of cryptographers beating on this", because of that problem about the easiest person to fool being yourself. That means the outside world needs a good look, and that's one reason the IETF was able to get there first: it's all on a mailing list in public view (well, these days it's on GitHub, but if you're allergic, that's summarised to the list periodically).
One of the hidden advantages TLS 1.3 has over SSLv2 is that today TLS is famous. If you're an academic in the area, TLS 1.3 work was potentially a series of high-impact journal papers, and thus would do your career good, whereas I don't think even Hellman (who had worked with both Elgamal and Kocher at Stanford) would have had a lot of time for SSL in the 1990s.
But really that's fair. And it's even possible that the key difference was only ever that we learned along the way how to do this, so that any bunch of fools might have developed TLS 1.3 knowing what we did by then, while not even a prolonged public effort could have made SSLv3 good. Perhaps, if that's right, in ten years every Tom, Dick and Harry will have a high-quality cryptographically secure protocol that isn't just TLS...
But I think what I was getting at is that at last TLS 1.2 had a bunch of outside cryptographers critiquing it. It's just that they're too late because it was finished. Some of the things that today are broken in TLS 1.2 weren't discovered years later, they were known (even if not always with a PoC exploit at the time) at roughly the time it was published. Having such critiques arrive during TLS 1.3 development meant the final document only had the problems known and accepted by the group [such as 0RTT is inherently less safe] plus, so far, the Selfie attack. Not bad.
Edit: also see "3GPP TS 33.501" if you want to read about 5G NR encryption; it's open to us all, though it takes time to read through it all.
If you're concerned about your government intercepting your communications with a warrant, there's not really anything you can do except move to an E2E encrypted app like Signal. But if you're OK with only being monitored if a judge signs a warrant, then the GP's suggestion helps.
These protocol backdoors are more dangerous than application-level wiretaps because anyone can find and use them; they might be private at first, but once they are discovered there's usually no way to fix them without moving to a new protocol (version).
Protocol breaks seem to me to be more in the category of "added by the NSA through subterfuge or coercion in order to enable illegal warrantless surveillance", which I find much more concerning than publicly-known processes with (at least in theory) established due process like CALEA wiretaps.
> You're better off treating it as untrusted and using your own encryption on top (eg. Signal).
But yes, this is a sensible approach to the world-as-it-currently-is.
I always consent to that.
And signal isn't really very helpful in this scenario, because it doesn't properly protect against MitM attacks.
Being warned about a changed key is only sensible at all if the one before it was verified. Otherwise, how do you know everything wasn't MitMed in the first place? Also, most users ignore the warning if the next message is "sorry, new phone, Signal doesn't do key backups", which everyone will understand and go along with, either because they don't know about the danger there, or because they know Signal really doesn't do shit to provide authentication continuity through proper backups.
Signal is only suitable for casual communication. Against adversaries that do more than just passive dragnet surveillance, Signal is either useless or even dangerous to recommend. It is intentionally designed just for this one attack of passive dragnet surveillance, nothing else. Please don't endanger people by recommending unsuitable software.
Note that the only alternative is to trust a third party to identify people to you. I guess you might have forgotten to mention that. Or, as seems more likely, you don't realise you're trusting a third party... But of course if you do trust a third party to identify people to you, you wouldn't need this Signal feature, so...
> Signal doesn't ever tell you about this necessity and doesn't have any option to e.g. pin the key after manual verification or even just set a "verified contact" reminder.
Signal does, in fact, explain how this works, provide a "Verified" flag you can set on contacts, and automatically prompt you if the Safety Number changes for contacts you've marked as verified, as well as removing the flag if that happens.
> Signal really doesn't do shit to provide authentication continuity through proper backups.
Leaving copies of your data around to enable "authentication continuity" aka enable seamless Man-in-the-Middle attacks is exactly opposite to Signal's actual goal here.
No, the proper alternative is blocking or discouraging sensitive communication until an in-person verification has taken place.
Also, you are always trusting a third party. You have to trust the Signal people (maybe), you have to trust Intel and their SGX (lol, look for some papers on those), and you have to trust your phone vendor. Proper security educates people about whom they are currently having to trust. Spinning it as if no third party needs to be trusted for Signal to operate is dishonest.
> You have to trust the Signal people (maybe), you have to trust Intel and their SGX
You don't have to trust either. SGX only gets involved if you are willing to trust it in exchange for having quality-of-life features which are optional. The sort of person who never verifies Safety Numbers probably should take that deal, the sort of person who needs Safety Numbers to protect them from the Secret Police should consider carefully.
The most important thing SGX is doing for you is making guesses expensive. If your Signal PIN is a 4-digit number then SGX's expensive guesses make it impractical for an adversary to just try all the combinations, but if your Signal PIN is 12 random alphanumerics then that's too many guesses to be practical anyway even without SGX.
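The arithmetic behind that comparison, counting guesses only and ignoring SGX rate limiting:

```python
import math

pin_guesses = 10**4          # 4-digit PIN: 10,000 possibilities
passphrase_guesses = 36**12  # 12 random alphanumerics, 36-symbol alphabet

print(f"4-digit PIN:      {math.log2(pin_guesses):5.1f} bits")
print(f"12 alphanumerics: {math.log2(passphrase_guesses):5.1f} bits")
```

Roughly 13 bits versus roughly 62 bits: the former is only safe if something like SGX makes each guess expensive, while the latter is beyond exhaustive guessing regardless.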
I suppose it depends on where exactly the Middle here is, but for basic MitM of the physical network, if nothing else shouldn't the TLS connection to Signal's servers be sufficient?
If you want no one to be able to eavesdrop, then yes, you have to have your own encryption on top. These days a lot of data already goes through TLS, but, for instance, standard voice calls are obviously transparent to operators.
A lot of these standards are generally created by industry consortia, and participation in standards setting is limited to companies who are members.
This isn't the IETF where any rando can join a mailing list and chime in on a proposed standard/draft.
IEEE (Ethernet) is somewhere in the middle: you have to be an individual member (though may have a corporate affiliation), and make certain time/attendance commitments, but otherwise you can vote on drafts (and many mailing lists seem to have open archives):
In addition, the systems are vast and complex, too big for one person to capture and understand. This means you get into the area of design teams, business teams, politics and geo stuff by default. Even re-implementing the specification (or most of it) in FOSS is extremely hard, and that's with all the information being publicly available. Designing it is an order of magnitude harder.
Besides the systems in isolation, we also have to deal with various governments, businesses, and legacy and migration paths in both cases.
Ironically, because of all of this and the huge number of people involved, consumers _are_ involved in this. It's not like the GSM, 3GPP etc. people don't use their own stuff.
I think you just answered your own question.
It's always been fascinating to me how we have this parallel infrastructure between the open internet and the locked down telecoms. The free for all that is the internet has evolved much more robust protocols, but the telecoms continue to operate in their own parallel problem space, solving a lot of the same problems.
They also fight tooth and nail to prevent being dumb pipes.
Similarly, you can develop and document a crypto system but unless you're publicly funding a lot of adversarial research that doesn't prevent something like Dual_EC_DRBG being submitted in bad faith. I haven't seen any indication that the NIST team thought they were developing open standards — it's not like they sent a pull request from nsa/add-secret-backdoorz — and the level of effort needed to uncover these things can be substantial, requiring highly-specialized reviewers. That also hits all of the usual concerns about whether the cryptographic algorithm is secure but used in an unsafe manner, which can be even harder to detect.
The biggest win has simply been that these issues get a lot more public discussion and review now than they used to, and the industry has collectively agreed not to trust the infrastructure. Switching to things like TLS has done more to protect privacy than all of the lower level standards work and that's nice because e.g. a Signal or FaceTime user is still protected even if their traffic never touches a cellular network.
That's a poor example, because everyone (who both knew anything about cryptography and actually bothered to read it) knew or suspected it was garbage. As  put it:
> > and Dual_EC_DRBG was an accepted and publicly scrutinized standard.
> And every bit of public scrutiny said the same thing: this thing is broken!
see also: https://blog.cryptographyengineering.com/2013/09/18/the-many...
Ah, fair enough. My point was that it's at least not demonstrated, if not outright untrue, that you need a lot of adversarial research to prevent something like Dual_EC_DRBG (specifically) being submitted in bad faith - you just need to actually bother to read the crypto specification you're considering adopting, and have the bare minimum competence to notice that there's no benefit to a number-theoretic design besides the ability to prove security relative to some presumed-hard task, and that there is no such proof.
This field is too complex even for quite a few if not most IT-centric professions.
From my point of view, we have to put the blame on us. We should treat anyone supporting invasive technologies (in the sense of subverting privacy and basic human rights) as an outcast.
The impression I get (maybe not primarily from HN) is the opposite. We'd not take that job (the pay...), but still we appear to honor their efforts. Or we _do_ take that job because of pay at the opposite end of the spectrum.
It's a sad state of affairs. I honestly believe people actually want this, or have at least been conned into wanting it.
Giving up liberty (in this case, privacy) for the guise of safety is all the rage these days
Edit: fixed the incorrect "RSA-56" encryption. Thanks graderjs.
39551945224675453 (56-bit semiprime)
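For scale: a 56-bit semiprime is trivially factored today, e.g. with Pollard's rho. A minimal sketch (this assumes the quoted number really is composite, as the comment states):

```python
import math
import random

N = 39551945224675453  # the 56-bit semiprime quoted above

def pollard_rho(n):
    """Find a nontrivial factor of composite n (fine for ~64-bit inputs)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n  # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n  # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:               # on failure, retry with new parameters
            return d

p = pollard_rho(N)
q = N // p
print(p, "*", q, "==", N)
```

Rho's cost scales with the fourth root of the smaller factor, so a semiprime with ~28-bit factors falls in a fraction of a second on any laptop.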