The reason CAs are required to use 64-bit serial numbers is to make the content of a certificate hard to guess, which provides better protection against hash-collision attacks. IIRC this policy was introduced when certs were still signed using MD5 hashes (that, or shortly after MD5 was retired). Since all publicly-trusted certs use SHA-256 today, the actual security impact of this incident is practically nil.
The main practical reason seems to have been that a popular application used by Certificate Authorities, EJBCA, offered an out-of-box configuration that used 63 bits (it called this 8 bytes, because a positive 63-bit integer occupies exactly 8 bytes in the ASN.1 encoding used; a set 64th bit would force a ninth, leading zero byte). That looks superficially fine: if you issue two certs this way and both have 8-byte serial numbers, that just suggests the software randomly happened to pick a zero first bit. It's only with a pattern of dozens, hundreds, millions of certificates that it becomes obvious it's only ever really 63 random bits.
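The arithmetic behind that default can be sketched like this (illustrative Python, not EJBCA's actual code): to keep the serial positive and still fit an 8-byte DER INTEGER, the top bit must be zero, leaving only 63 random bits.

```python
import secrets

def eight_byte_positive_serial() -> int:
    """Draw 8 random bytes, then clear the top bit so the value is
    positive and still encodes as exactly 8 bytes (a set top bit
    would read as negative, forcing a 9th leading 0x00 byte)."""
    value = int.from_bytes(secrets.token_bytes(8), "big")
    return value & ((1 << 63) - 1)  # top bit forced to 0: 63 random bits

# Any single serial looks fine; only the aggregate pattern gives it away.
serials = [eight_byte_positive_serial() for _ in range(10_000)]
assert all(0 <= s < 2**63 for s in serials)
```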
But yes, I agree the sensible thing here (and several CAs had done it) was to use plenty of bits, and then not worry about it any further. EJBCA's makers say you could always have configured it to do that, but the CAs say their impression was that this was not recommended by EJBCA...
If you could go back in a time machine, probably the right fix is to have this ballot say 63 bits instead of 64. Nobody argues that it wouldn't be enough. But now 64 bits is the rule, so it's not good enough to have 63 bits, it's a Brown M&M problem. If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?
If memory serves, it isn't a purely theoretical attack either; I read about it being used against a CA (StartCom, maybe?) not so many years ago.
It's "Rage Culture", or maybe just front-page seeking by the author. The problem with that is that it desensitizes people: if everyone is screaming all the time, you just shut your ears. We have real issues to discuss, and this isn't one of them by a long shot.
Reducing the search space from 64 bits to 63 bits is of no consequence, because if an attack on 63 bits were feasible, the same attack would work 50% of the time on 64 bits (or take twice as long to succeed every time). That wouldn't be acceptable at all.
Sure, 64>63, but at the very least it's not "A world of hurt"
Even though the actual security impact is nil, the current policies in place don't allow any flexibility in how non-compliant certs are treated. Therefore, millions of customers now need to replace their certificates due to a mere technicality.
The problem, however, as pointed out down-page:
> If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?
> The reason for the urgent fixes is to promote uniformly applied rules. There are certain predefined rules that CAs need to follow, regardless of whether the individual rules help security or not. The rules say the certs that are badly formed need to be reissued in 5 days.
> If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."
This specific error isn't a serious issue, as indicated by how little impact it's had on real-world security.
It's not favoritism to Apple and Google if they emit certs with 63 bits and get minor criticism and someone else, say, stops using random numbers to seed cert generation and gets raked over the coals. The latter case would require more urgent and serious attention.
It's probably worth noting that the problem lasted three years and wasn't discovered by an exploit in the wild, but by follow-up spot-checking of Google certs as a result of spot-checking Dark Matter certs. I don't think the seriousness of the issue is in dispute.
The moment we see a small sign that you don't do it right in some detail, then that trust is gone.
Consider all the details in the spec to be Van Halen's brown M&Ms (although those had no functional effect, while losing a bit of security does). They knew that if people got that detail right, they could trust that those people had also read the rest of the details. If Google gets this wrong, we can't rely on that.
That's not a slippery slope argument. A slippery slope argument would say: if we allow this, you'll go on to do worse things because we let it slide. But that's not the argument.
And people are sticking to the letter of the rules entirely independent of the article's author. The author is not advocating for anything to be done, just reporting that this process is already in motion.
Of course we have real issues to discuss. But the fact that all these certs are going to get revoked and require replacing is a real issue that impacts people, even if there's no technical reason for it.
Okay, but that's because 2^63 itself is more than 9 quintillion. Where the search space was previously 18 quintillion, it's now 9 quintillion. Both of those are "big". The attack is 50% easier than "theoretically impossible before certificate expiration," which should still mean it's impossible.
If you discovered your AES key generator only created 127-bit keys, would you correct the mistake moving forward, or go back and immediately burn everything with the old keys? The difference between 2^127 and 2^128 is much, much more than 9 quintillion.
If the 64-bit random serial number already provided an adequate security margin, then no action should be needed for the existing 63-bit certificates. But it seems the choice of 64 bits here is arbitrary, without good justification...
I'm curious why that's the case. A plain reading of reducing the security level from 128 to 126 bits would seem to imply the answer is yes?
I get that it’s meaningless - 4x effectively 0 is still effectively 0 - but denying the math doesn’t really help anything.
The problem here is my choice of an ambiguous word, "security". Formally speaking, the "security level" or "security claim" of a cipher is defined by the computational complexity (time/memory) of breaking it, often expressed as a number of bits. So the Biclique attack indeed reduced the "security" of AES to 25% of its original claim. "Security" in a broader sense can be roughly understood as "how well a system is practically protected, under a specific threat model"; in that sense, a minor reduction to a cipher's security claim hardly matters.
I should have edited my comment to use a better word, but it's permanent now.
Either you crack it or you don't.
A better way to put this: instead of saying "it reduces the search space by 9 quintillion," say "it removes 50% of the search space." Sure, that's a lot, but not nearly as much as trimming 8 bits, which would remove 99.6% of the search space.
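To put those percentages side by side (simple arithmetic, nothing certificate-specific):

```python
def search_space_reduction_pct(bits_removed: int) -> float:
    """Fraction of a search space eliminated by removing N bits
    of entropy, as a percentage: (1 - 2**-N) * 100."""
    return (1 - 2.0 ** -bits_removed) * 100

print(search_space_reduction_pct(1))  # 50.0       (64 -> 63 bits)
print(search_space_reduction_pct(8))  # 99.609375  (64 -> 56 bits)
```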
i.e., reduce it by close to practically infinity?
A $32 million (1985 dollars) Cray-2 supercomputer could do 1.9 GFLOPS.
You can now get over 50x that performance for less than a grand in a device that fits in your pocket. I bet those engineers didn't expect that in half a lifetime.
Certainly any cryptosystems designed in 1985 that wanted to encrypt data until today should have taken the most aggressive form of Moore's Law into account.
The crux of this entire issue is a company known as DarkMatter, essentially a UAE state-sponsored company, potentially getting a root CA trusted by Mozilla.
It's highly suspected that DarkMatter is working on behalf of the UAE to get a trusted root certificate in order to spy on encrypted traffic at will. Everyone involved in this decision is at least suspicious of this, if not actively seeking a way to thwart DarkMatter.
Mozilla threw the book at them by giving them this technical hurdle about their 63-bit generated serial numbers - which turned out to be an issue that a lot of other (far more reputable) vendors also happened to have.
Should it get fixed? Ya, absolutely.
Is it nearly as big of a deal as giving a company like DarkMatter, who works on behalf of the UAE, the ability to decrypt HTTPS communication? Not even close - that is far scarier, and much more of a security threat to you and me. It's pretty disappointing that this is the story Ars Technica runs with instead of the far more critical one.
The measures of what make a trustworthy CA are things like organizational competency and technical procedures - things that state-level actors easily satisfy. There is no real measure in place for the motives and morals of state-level actors. That should be the terrifying part of this story - anyone arguing about the entropy of 63 versus 64 bits is simply missing the forest for the trees.
This is false. DarkMatter already operates an intermediate CA, so _if_ this were something they were actually planning to do they wouldn't need a trusted root CA to do it. So far, there's been no evidence presented that DarkMatter has abused their intermediate cert in the past, or that they plan to abuse any root cert they might be granted in the future.
Serials were originally intended for... well, for multiple purposes. But if they only function today as a random nonce, and if they're already 65 bits, then they may as well be 128 bits or larger.
A randomly generated 64-bit nonce has roughly a 50% chance of repeating after 2^32 iterations. That can be acceptable, especially if you can rely on other certificate data (e.g. issuance and expiry timestamps) changing. But such expectations have a poor track record, and you don't want to rely on them unless your back is against the wall (as in AES-GCM). Because certificates are already so large, absent some dubious backwards-compatibility arguments I'm surprised they didn't just require 128-bit serials.
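That birthday bound can be checked with the standard approximation p ≈ 1 − e^(−n²/2N) (a sketch; strictly, the 50% point lands at about 1.18·2^32 draws, so 2^32 gives ~39%):

```python
import math

def collision_probability(n_draws: int, space_bits: int) -> float:
    """Approximate chance of at least one repeated value among
    n_draws uniform picks from a space of 2**space_bits values."""
    exponent = -(float(n_draws) ** 2) / (2.0 * 2.0 ** space_bits)
    return 1.0 - math.exp(exponent)

# 64-bit serials: ~39% chance of a repeat after 2^32 issuances
print(collision_probability(2**32, 64))   # ≈ 0.3935
# 128-bit serials: vanishingly small even after 2^32 issuances
print(collision_probability(2**32, 128))  # ≈ 2.7e-20
```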
The attack that we're talking about here isn't breaking a signature, but relies instead on being able to manipulate certificate data to generate a certificate with a known hash. That hash must collide with another certificate hash, which would then let you generate a rogue certificate.
A team demonstrated that this attack was possible: they issued a rogue cert by predicting the not_before and not_after of the certificate that would be issued, predicting its serial, and finding an input for the rest of the cert fields that caused a collision.
So, yes 128 bit serials would be better, but we should be safe even at 63 bits of entropy.
> it’s easy to think that a difference of 1 single bit would be largely inconsequential when considering numbers this big. In fact, he said, the difference between 2^63 and 2^64 is more than 9 quintillion.
Curious why everyone doesn’t agree to use 64 bits in future and just let the mis-issued certs live out their natural life?
Seems to create a lot of busywork for lots of people for no discernible benefit?
If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."
> 4) This only came up because of DarkMatter, a very shady operator who most people are very happy to have an excuse to screw with technicalities.
Edit: maybe these are sources?
Still not getting the whole picture.
https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups... covers some background on DarkMatter.
One of the Baseline Requirements is that you may not issue certs with fewer than 64 bits of entropy. Turns out DarkMatter was doing that, by issuing certs with 63 bits of entropy. Also turns out this was a thing lots of CAs did. Now that it's been pointed out publicly...
The reason people are concerned about DarkMatter is that they have (allegedly, they seem to be denying this) previously developed and sold software that can be used to MITM connections (though not by abusing any CA certificates), and that this software has been used for less-than-noble purposes.
So yes "You're a bunch of sketchy creeps, we don't trust you." is an accurate assessment of some people's opinions towards DarkMatter, but "widely expected to start running a governmental MITM once trusted" is inaccurate.
When you point a virgin browser at a new SSL endpoint, the user should be presented with the certificate and a list of certificate chains that imply trust in the certificate. At that point you should decide which certificate to trust or not. This can be
- only the end certificate (because you verified the hash),
- some intermediate certificate or
- some/all root certificates (that come with the browser).
Obviously the last option is stating “I’m incompetent and/or blindly trust the browser”. Unfortunately it is the default and the software doesn’t help you to manage certificates you trust in a reasonable way.
For me it would be okay to turn off dumb mode during installation. As a start, the green address bar could be used for these user-trusted certificates (instead of for EV).
It’s not less obvious than just trusting your browser vendor.
EDIT: Also note that in the presented approach you can still trust some root CAs. It’s just that the user has to do it explicitly.
However for the average person what you propose is meaningless.
You also need the recipient of the MITM cert to notice it and report it. It's generally hard to MITM an entire nation's traffic, for reasons of computational overhead. So instead you let people browse the web normally, and you deploy MITMs against specific targets for specific sites for limited times. It's probably easy for the MITM to do this in a way that avoids the victim noticing that the cert is illegitimate, and also probably easy for the MITM to prevent tools that report suspicious certificates from sending that report to the internet at large.
(Also, if your threat model is a malicious lying CA, things get much harder under the current practices: a CA has actually said "Oh, that was an internal test certificate for google.com, it didn't actually go anywhere, but also we've fired the employees who thought issuing a test cert for google.com from the prod CA was a good idea" and not been revoked. So if you get caught, just say something like that and don't fire anyone, and there's a nonzero chance you won't get kicked out.)
Doesn't Chrome now require CT?
Not great, but doesn’t rely on crls or other broken systems.
> It's generally hard to MITM an entire nation's traffic, for reasons of computational overhead
Isn't that what Iran did with DigiNotar?
My understanding of current cert transparency efforts was that they wouldn't catch "we fingerprinted your connection, identified you, and are just injecting a malicious cert for you" scenarios.
And were more targeted at the "rogue / misconfigured CA signing half the internet to any client" mishap.
But most people don't have e.g. Expect-CT set up, so it's not clear it would help on a majority of sites.
(One reasonable option would be to require certs from DarkMatter, and really every CA going forward, to have SCTs in their certs, and enforce that with a flag in the root store. But if there's a concern about DarkMatter specifically, it's probably better to phrase a change to the root store policies that say "We won't accept CAs we just don't trust" instead of waiting for them to misbehave and then rescinding their membership.)
Unless you can define the policies up front that's a very risky road to go down. Why refuse to trust DarkMatter, but not refuse to trust China Bank?
> As demonstrated in https://events.ccc.de/congress/2008/Fahrplan/attachments/125..., hash collisions can allow an attacker to forge a signature on the certificate of their choosing. The birthday paradox means that, in the absence of random bits, the security level of a hash function is half what it should be. Adding random bits to issued certificates mitigates collision attacks and means that an attacker must be capable of a much harder preimage attack. For a long time the Baseline Requirements have encouraged adding random bits to the serial number of a certificate, and it is now common practice. This ballot makes that best practice required, which will make the Web PKI much more robust against all future weaknesses in hash functions. Additionally, it replaces “entropy” with “CSPRNG” to make the requirement clearer and easier to audit, and clarifies that the serial number must be positive.
64 bits, 63 bits, what's the difference? The difference is that we now have to go through everything you might have forgotten that will make a difference. In other words, we apparently can't trust you to follow instructions, and certificates are all about trust.
The disruption caused by reissuing everything surely exceeded the disruption of this theoretical issue. I guess, on the plus side, we get to find out whether the PKI infrastructure is ready for a mass revocation/replacement event...
Recently they stopped releasing new updates for the community edition (stuck at 6.10, while 7.0.1 is out) because they are a really greedy company.
Building it yourself is half a nightmare, and the installation process as well: it relies on Ant tasks that fail 5 times out of 10.
As for the UI, most of the settings can easily be misused, and even their evangelists can get fooled by it (especially with their Enterprise Hardware Instance, whose synchronization across nodes is also faulty).
Now if only the same policy would be applied to CAs (possibly a few to mitigate abuse of power concerns, but far less than are in my trust store today).
On a tangent: one practice I'd genuinely like to see for security reasons (and which I'm surprised the CAs haven't proposed themselves, since it would make them twice as much money) is that major sites should always hold valid certs from two CAs, so that if a CA gets revoked it's just updating a file or even flipping a feature flag and certainly not signing up with a new CA. It would make sense to have two certs generated by different software, then. (It might also make sense, re abuse of power concerns, to present both certs and have browsers verify that a site has two valid certs from two organizationally-unrelated CAs. That way you can be significantly more confident that the certs aren't fraudulent.)
Two complete certs is twice as much data to transmit, making the TLS setup a bit heavier.
A typical cert is 0.1% of that
I don't think Digicert/Symantec are using it
Doesn't the 'pull the certificates from the browsers' process demand that people from these companies maybe recuse themselves from the conversations?
(this is public trust process stuff, not technology per se)
Many of the affected CAs have already come out and "confessed" that they've issued non-compliant certs and stated that they're revoking them.
No certificates are being "pulled from browsers" as a result of this incident as far as I know.
How true is this?
2^63 and 2^64 are effectively the same cost to break. Instead of costing $2X to break, it now costs $X.
You're totally right here, I'm just nerding out on threat models and security economics.
The notion that someone with access to X amount of funds for a given task automatically has 2X, and can also afford to spend 2X on that task, is not necessarily true, so such claims are generally baseless.
What is most interesting is that these claims are generally about non-exact amounts, so the logic would have to follow that if you can afford X, then you can afford 2X - which also means you can afford 4X, and 8X, ad infinitum.
In practice, a 2X difference, in the majority of real-life cases concerning a substantial amount of resources, is by definition substantial and far from trivial.
But I'm talking specifically about cryptographic threat models. No reasonable threat model says, conducting this attack takes $100,000, and since most people don't have $100,000 in savings it's safe, because defending against "most people" isn't meaningful. A reasonable threat model says either, conducting this attack takes $100,000 so we're going to add an additional layer of security because it's a realistic attack, or conducting this attack takes $100,000,000,000,000. In such a threat model, if the numbers change by a factor of two in either direction (either through a one-bit error like this, or through macroeconomic trends, or whatever), it doesn't change the analysis.
And in particular the claims here are in fact about exact amounts: a factor of two, or one bit. Cryptographers tend to measure things very precisely in bits. There's usually no good reason for a particular choice (64 is not a magic number here, it's just a convenient number for computers), but the analysis is still done with that particular choice. You can measure the difficulty of attacking a problem with N bits of entropy, and then add a heavy margin on top, and be very clear about what that margin is. Once you've done that, N-1 becomes probably reasonable, and you can argue precisely about why it's reasonable; you can argue equally precisely that N-5 is questionable and N-10 is not reasonable, and that the arguments are not recursive.
Sure, that is never the claim.
> And in particular the claims here are in fact about exact amounts: a factor of two
Sure, but that is still a factor of X, an unknown amount.
The bottom line is that for many actors, even nation state, the cost difference of 20M and 40M might mean that they have to seek alternative options. Not every actor has access to infinite amount of USD or compute.
And the neat thing about crypto is that's easy to do: just increase the amount of entropy involved. A mere ten more bits make a brute-force attack cost 1000x as much. If we're genuinely worried that 63 bits is too small, ditch the 64-bit requirement and make it 128-bit. (Probably phrase it as 120-bit, so people can use UUIDs and whatnot - the point is still that 120 is still clearly more than enough, not near the borderline.)
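The scaling in question is simple arithmetic ("1000x" is exactly 2^10 = 1024):

```python
def cost_multiplier(old_bits: int, new_bits: int) -> int:
    """Factor by which brute-force work grows when entropy goes
    from old_bits to new_bits."""
    return 2 ** (new_bits - old_bits)

assert cost_multiplier(63, 64) == 2       # the whole incident: a factor of 2
assert cost_multiplier(64, 74) == 1024    # ten more bits: ~1000x
assert cost_multiplier(64, 128) == 2**64  # the "just require 128 bits" option
```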
But is it? I think the underlying claim is that 2X difference doesn't matter, which is patently false.
I'd chalk this up to the author of the relevant module not really grokking the two's complement behavior in java.math.BigInteger.
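The two's-complement wrinkle is easy to see by computing minimal encoding lengths (a Python sketch of roughly the rule java.math.BigInteger.toByteArray follows):

```python
def der_integer_length(value: int) -> int:
    """Bytes in the minimal two's-complement encoding of a
    non-negative INTEGER: if the top bit of the leading byte would
    be set, a 0x00 pad byte is required to keep the value positive."""
    n = max(1, (value.bit_length() + 7) // 8)
    if value > 0 and value.bit_length() % 8 == 0:
        n += 1  # leading byte has its top bit set: pad with 0x00
    return n

assert der_integer_length(2**63 - 1) == 8  # 63-bit value: exactly 8 bytes
assert der_integer_length(2**63) == 9      # 64-bit value: needs a pad byte
```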
Imagine a collision attack that takes about 1 year with 64-bit serial numbers; with 63-bit serial numbers it should take about half that, 6 months.
The average certificate is issued for about 1 year, so cutting a collision attack from 1 year down to 6 months can make the difference between generally-not-useful and very practical and dangerous.
Any such attack would also become feasible with twice the budget.
As far as we know.
> Any such attack would also become feasible with twice the budget.
Assuming that the attack yields to parallel computing and scales linearly with more CPUs/cores - linear scaling is bounded by current compute capabilities, and ultimately by theoretical limits like Bremermann's limit and the Margolus–Levitin theorem.