
A world of hurt after GoDaddy, Apple, and Google misissue 1M certificates - jonburs
https://arstechnica.com/information-technology/2019/03/godaddy-apple-and-google-goof-results-in-1-million-misissued-certificates/
======
profmonocle
Just to be clear - "misissued" in this case doesn't mean they were issued to
someone who doesn't control the domain. The issue is they were issued using a
63-bit serial number instead of the minimum 64 bits. (The software these CAs
were all using was generating 64 random bits, but setting the first bit to
zero to produce a positive integer.)
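
A minimal sketch of that flawed pattern in Java (a hypothetical reconstruction, not EJBCA's actual source; the class and variable names are made up):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class FlawedSerial {
        public static void main(String[] args) {
            SecureRandom rng = new SecureRandom();
            byte[] bytes = new byte[8];   // ask for 64 random bits...
            rng.nextBytes(bytes);
            bytes[0] &= 0x7F;             // ...then clear the top (sign) bit so the
                                          // value stays positive in 8 bytes: only
                                          // 63 bits are actually random
            BigInteger serial = new BigInteger(1, bytes);
            System.out.println(serial.testBit(63)); // always false
        }
    }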

The reason CAs are required to use 64-bit serial numbers is to make the
content of a certificate hard to guess, which provides better protection
against hash collisions. IIRC this policy was introduced when certs were still
signed using MD5 hashes. (That, or shortly after MD5 was retired.) Since all
publicly-trusted certs use SHA256 today, the actual security impact of this
incident is practically nil.

~~~
ehsankia
Is there a reason why certificates are being issued using the bare minimum
required number of bits, instead of something higher like 128 or 256? Why even
risk being at the very edge?

~~~
tialaramex
There are legends that some software doesn't like serial numbers that don't
fit in 64 bits. As with most legends, you'll tend to hear a third-hand story
that someone heard once from somebody who remembers someone else telling them.
Since this came up on m.d.s.policy we have a very specific client to always
keep in mind, Mozilla's Firefox, and that doesn't care, but perhaps something
else does, or did, at some unspecified point in the past. Probably.

The main practical reason seems to have been that a popular application used
by Certificate Authorities, EJBCA, offered an out-of-box configuration that
used 63 bits (it called this 8 bytes because the ASN.1 encoding stores whole
bytes, and keeping the integer positive within 8 of them forces the top bit to
zero). That looks superficially fine: if you issue two certs this way and they
both have 8-byte serial numbers, that just suggests the software randomly
happened to pick a zero first bit. It's only across a pattern of dozens,
hundreds, millions of certificates that it becomes obvious that there are only
ever really 63 random bits.
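
To make that population-level tell concrete, here's a rough sketch (the serials list is a hypothetical stand-in for values scraped from CT logs): each individual zero top bit is a coin flip, but N zero top bits in a row has probability 2^-N.

    import java.math.BigInteger;
    import java.util.List;

    public class TopBitTell {
        // 'serials' is hypothetical: imagine values pulled from CT logs.
        static boolean smellsLike63Bits(List<BigInteger> serials) {
            // With genuine 64-bit serials, about half should have bit 63 set;
            // none out of many is overwhelming evidence of a 63-bit generator.
            long withTopBit = serials.stream().filter(s -> s.testBit(63)).count();
            return withTopBit == 0 && serials.size() >= 40; // P < 2^-40 by chance
        }
    }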

But yes, I agree the sensible thing here (and several CAs had done it) was to
use plenty of bits, and then not worry about it any further. EJBCA's makers
say you could always have configured it to do that, but the CAs say their
impression was that this was not recommended by EJBCA...

If you could go back in a time machine, probably the right fix is to have this
ballot say 63 bits instead of 64. Nobody argues that it wouldn't be enough.
But now 64 bits is the rule, so it's not good enough to have 63 bits, it's a
Brown M&M problem. If you can't obey this silly requirement to use extra bits
how can we trust you to do all the other things that we need done correctly?
Or internally, if you can't make sure you obey this silly rule, how are you
making sure you obey these important rules?

~~~
adrianratnapala
Thank you all for teaching me about brown M&Ms.

[https://www.snopes.com/fact-check/brown-out/](https://www.snopes.com/fact-check/brown-out/)

------
air7
This article really annoys me.

It's "Rage Culture" or maybe just front-page seeking by the author. The
problem with that is that it makes people desensitized because if everyone is
screaming all the time, one should just shut their ears. We have real issues
to discuss and this isn't one of them by a long shot.

Reducing the search space from 64 bits to 63 bits is of no consequence,
because if an attack on 63 bits were feasible, the _same attack_ would work
50% of the time on 64 bits (or take twice as long for 100%). That wouldn't be
acceptable at all.

Sure, 64 > 63, but at the very least it's not "A world of hurt".

~~~
isostatic
It isn't a problem in itself. It doesn't make the certificate any less secure
in practice, even if we still used MD5 as a hash.

The problem, however, as pointed out down-page [0] [1]:

> If you can't obey this silly requirement to use extra bits how can we trust
> you to do all the other things that we need done correctly? Or internally,
> if you can't make sure you obey this silly rule, how are you making sure you
> obey these important rules?

> The reason for the urgent fixes is to promote uniformly applied rules. There
> are certain predefined rules that CAs need to follow, regardless of whether
> the individual rules help security or not. The rules say the certs that are
> badly formed need to be reissued in 5 days. If these rules are not followed
> and no penalties are applied, then later on when other CAs make more serious
> mistakes they'll point to this and say "Apple and Google got to disobey the
> rules, so we should as well, otherwise it's favoritism to Apple and Google."

[0]
[https://news.ycombinator.com/item?id=19377292](https://news.ycombinator.com/item?id=19377292)
[1]
[https://news.ycombinator.com/item?id=19375758](https://news.ycombinator.com/item?id=19375758)

~~~
fixermark
That's a slippery slope argument, and the answer to slippery slope arguments
is "We'll address serious issues with appropriate seriousness."

This specific error isn't a serious issue, as indicated by how little impact
it's had on real-world security.

It's not favoritism to Apple and Google if they emit certs with 63 bits and
get minor criticism and someone else, say, stops using random numbers to seed
cert generation and gets raked over the coals. The latter case would require
more urgent and serious attention.

~~~
Scarblac
It's not a slippery slope argument, it's an argument about applying the
rules. The rules don't allow for a difference between more and less serious
infractions; they just need to be followed to the letter.

~~~
fixermark
"If you can't obey this silly requirement to use extra bits how can we trust
you to do all the other things that we need done correctly?" is a slippery
slope argument. The response is "We allocate testing resources proportionally
to the seriousness of the consequences of failure to adhere to a requirement,
as any good engineering project does."

It's probably worth noting that the problem lasted three years and wasn't
discovered by an exploit in the wild, but by follow-up spot-checking of Google
certs as a result of spot-checking Dark Matter certs. I don't think the
seriousness of the issue is really in dispute.

~~~
Scarblac
It's saying: you have an extremely important job for the functioning of the
Internet, one that everybody has to blindly trust you to do right.

The moment we see a small sign that you don't get some detail right, that
trust is gone.

Consider all the details in the spec to be Van Halen's brown M&Ms (although
those had no functional effect, and losing a bit of security does). They knew
that if the venue got that right, they could trust that it had also read the
rest of the details. If Google gets this wrong, we can't rely on that.

That's not a slippery slope argument. A slippery slope argument would say
that if we allow this, you will then do worse things _because_ we let it go.
But that's not the argument here.

------
geofft
> _Adam Caudill, the security researcher who blogged about the mass
> misissuance last weekend, pointed out that it’s easy to think that a
> difference of 1 single bit would be largely inconsequential when considering
> numbers this big. In fact, he said, the difference between 2^63 and 2^64 is
> more than 9 quintillion._

Okay, but that's because 2^63 itself is more than 9 quintillion. Where the
search space was previously 18 quintillion, it's now 9 quintillion. Both of
those are "big". The attack is 50% easier than "theoretically impossible
before certificate expiration," which should still mean that it's impossible.
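
Spelled out in LaTeX, since the article's framing invites confusion:

    2^{64} - 2^{63} = 2^{63} \approx 9.2 \times 10^{18}

The "more than 9 quintillion" difference is exactly the size of the search space that remains.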

~~~
beardbandit
It's 50/50.

Either you crack it or you don't.

~~~
saagarjha
Except that's not how probability works.

------
Golfkid2Gadfly
What an incredible non-story burying an actual real and terrifying story.

The crux of this entire issue is a company known as Dark Matter, which is
essentially a UAE state-sponsored company, potentially getting a root CA
trusted by Mozilla.

It's highly suspected that Dark Matter is working on behalf of the UAE to get
a trusted root certificate in order to spy on encrypted traffic at will.
Everyone involved in this decision is at least suspicious of this, if not
actively seeking a way to thwart Dark Matter.

Mozilla threw the book at them by giving them this technical hurdle about
their 63-bit generated serial numbers - which turned out to be an issue that a
lot of other (far more reputable) vendors also happened to have.

Should it get fixed? Ya, absolutely.

Is it nearly as big of a deal as giving a company like Dark Matter, which
works on behalf of the UAE, the ability to decrypt HTTPS communication? Not
even close - that is far scarier, and much more of a security threat to you
and me. It's pretty disappointing that this is the story that arstechnica runs
with instead of the far more critical one.

The measures of what makes a trustworthy CA are things like organizational
competency and technical procedures. These are things that state-level actors
easily satisfy. There is no real measure in place for the motives and morals
of state-level actors. That should be the terrifying part of this story -
anyone arguing about 63 versus 64 bits of entropy is simply missing the forest
for the trees.

~~~
Ajedi32
> It's highly suspected that Dark Matter is working on behalf of the UAE to
> get a root trusted certificate in order to spy on encrypted traffic at their
> will.

This is false. DarkMatter already operates an intermediate CA, so _if_ this
were something they were actually planning to do they wouldn't need a trusted
root CA to do it. So far, there's been no evidence presented that DarkMatter
has abused their intermediate cert in the past, or that they plan to abuse any
root cert they might be granted in the future.

------
wahern
Presumably 64 bits were originally chosen because it still permitted simple or
naive ASN.1 decoders to return the parsed value as a native 64-bit type. But
ASN.1 INTEGERs are always signed, so these serials would now have to be 65
bits. But any ASN.1 decoder interface that permitted directly storing a 65-bit
value into a 64-bit type--even an unsigned type--is dangerous if not broken.
I'm guessing that most X.509 management software (much like my own) simply
maintains the parsed serial as a bignum object.
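
A quick illustration of that 65-bit point, using java.math.BigInteger's minimal two's-complement encoding as a stand-in for the DER content bytes (a sketch, not any CA's code):

    import java.math.BigInteger;

    public class NineBytes {
        public static void main(String[] args) {
            BigInteger v = BigInteger.ONE.shiftLeft(63); // 2^63: the 64th bit set
            // Encoding this as a *positive* integer needs a leading 0x00 sign
            // byte, so a full 64-bit serial occupies 9 bytes, i.e. 65 bits.
            System.out.println(v.toByteArray().length);  // prints 9
        }
    }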

Serials were originally intended for... well, for multiple purposes. But if
they only function today as a random nonce, and if they're _already_ 65 bits,
then they may as well be 128 bits or larger.

A randomly generated 64-bit nonce has roughly a 50% chance of repeating after
2^32 iterations. That _can_ be acceptable, especially if you can rely on other
certificate data (e.g. issued and expire timestamps) changing. But such
expectations have a poor track record, and you don't want to rely on them
unless your back is against the wall (e.g. as in AES-GCM). Because
certificates are already so large, absent some dubious backwards-compatibility
arguments I'm surprised they didn't just require 128-bit serials.
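
For reference, the standard birthday approximation behind that 50% figure (my numbers, not the comment's): with N = 2^64 possible serials and n certificates issued,

    p_{\text{repeat}} \approx 1 - e^{-n^2/(2N)}

which at n = 2^32 gives p ≈ 1 - e^{-1/2} ≈ 0.39, with the 50% point at n ≈ 1.18 · 2^32.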

~~~
paulddraper
Yes, now certificates are about half as hard to hack as they were supposed to
be.

~~~
baking
Does that mean that twice as many will be cracked?/s

------
SethTro
Seems like a lot of hand-wringing over nothing; security is done with huge
factors of safety (moving to 256-bit keys when no one had ever broken a
128-bit or even 96-bit key). It's hard to imagine that 1, 2, or even a quarter
of the bits couldn't be zeroed.

> it’s easy to think that a difference of 1 single bit would be largely
> inconsequential when considering numbers this big. In fact, he said, the
> difference between 2^63 and 2^64 is more than 9 quintillion.

~~~
schoen
In fact, without a practical attack against SHA256, _all_ of the serial number
bits could be zeroed. This is undesirable for other reasons, but the serial
number isn't part of the cryptographic security of the certificate except as
far as it can be used to prevent the person requesting the certificate from
anticipating or controlling what the entire signed data will be.
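
One back-of-envelope way to put that (my framing, not schoen's): a chosen-prefix collision has to be computed before the CA signs, so if the serial contributes k unpredictable bits, each precomputed colliding pair matches the certificate actually issued with probability

    \Pr[\text{precomputed pair matches issued cert}] = 2^{-k}

forcing the attacker away from the cheap collision search and toward something closer to a preimage attack.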

~~~
tialaramex
Well, not _all_ the bits. We do want the serial numbers to be non-identical
because you need a way to talk about specific certificates for validity
checking. Once upon a time bug reports would have focused on certificate
serial numbers; these days they're more likely to be crt.sh links, but
arguably we should discourage that because crt.sh could go away some day.

~~~
schoen
Yep, that's what I mean by "for other reasons". (Without distinctive serial
numbers or crt.sh, we would probably have to attach PEM copies of the
certificate in every discussion about it.)

------
xyzzy123
Given that there seems to be no security impact (and none expected in the next
year or two)...

Curious why everyone doesn’t agree to use 64 bits in future and just let the
mis-issued certs live out their natural life?

Seems to create a lot of busywork for lots of people for no discernible
benefit?

~~~
mmastrac
No idea and completely unsourced, but one of the site comments states this:

> 4) This only came up because of DarkMatter, a very shady operator who most
> people are very happy to have an excuse to screw with technicalities.

Edit: maybe these are sources?

[https://bugzilla.mozilla.org/show_bug.cgi?id=1531800](https://bugzilla.mozilla.org/show_bug.cgi?id=1531800)

[https://groups.google.com/forum/#!msg/mozilla.dev.security.p...](https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/nlN_QrDwgaw/cg_v-VY0AQAJ)

Still not getting the whole picture.

~~~
geofft
The basic story as I understand it is that DarkMatter under contract to the
United Arab Emirates wants to become a trusted CA, and they are widely
expected to start running a governmental MITM once trusted, but the CA root
programs don't have any provision for "You're a bunch of sketchy creeps, we
don't trust you." (Oddly enough for a "trusted" root program, there is
generally no actual evaluation of trust as conventionally defined. The "trust"
part is "can you pass audits and generally be technically and organizationally
competent to not let your private key be stolen / your infrastructure be
abused by an attacker." Individual employees are part of the threat model, so
there's usually a two-person rule for access to the private key; entire
malicious organizations willing to lie in public and cover their tracks are
not envisioned by the model.) So people are trying to block their application
by nitpicking technical mistakes that, by the letter of the Baseline
Requirements, disqualify you from being a CA.

[https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups...](https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups-shouldnt-be-trusted-your-browser-or-anywhere-else)
covers some background on DarkMatter.

One of the Baseline Requirements is that you may not issue certs whose serial
numbers have fewer than 64 bits of entropy. Turns out DarkMatter was doing
exactly that, issuing certs with 63 bits of entropy. Also turns out this was a
thing lots of CAs did. Now that it's been pointed out publicly...

~~~
lixtra
The whole CA system is fundamentally broken.

When you point a virgin browser at a new SSL endpoint, the user should be
presented with the certificate and a list of certificate chains that imply
trust in the certificate. At that point you should decide which certificate to
trust or not. This can be

- only the end certificate (because you verified the hash),

- some intermediate certificate, or

- some/all root certificates (that come with the browser).

Obviously the last option is stating “I’m incompetent and/or blindly trust the
browser”. Unfortunately it is the default, and the software doesn’t help you
manage the certificates you trust in a reasonable way.

For me it would be okay to turn off dumb mode during installation. As a start,
the green address bar could be used for these user-trusted certificates
(instead of for EV).

~~~
geofft
Not obvious to me at all. I would say that believing you can manually verify
hashes in a trustworthy way is incompetent. Where do you get the hashes you
compare against?

~~~
lixtra
You get the hashes you trust from the counterparty that you trust. E.g. your
bank could print them everywhere.

It’s not less obvious than just trusting your browser vendor.

EDIT: Also note that in the presented approach you can still trust some root
CAs. It’s just that the user has to do it explicitly.

------
helper
This is the CAB Forum rationale for serial number entropy[1]:

> As demonstrated in
> [https://events.ccc.de/congress/2008/Fahrplan/attachments/125...](https://events.ccc.de/congress/2008/Fahrplan/attachments/1251_md5-collisions-1.0.pdf),
> hash collisions can allow an attacker to forge a signature on the
> certificate of their choosing. The birthday paradox means that, in the
> absence of random bits, the security level of a hash function is half what
> it should be. Adding random bits to issued certificates mitigates collision
> attacks and means that an attacker must be capable of a much harder preimage
> attack. For a long time the Baseline Requirements have encouraged adding
> random bits to the serial number of a certificate, and it is now common
> practice. This ballot makes that best practice required, which will make the
> Web PKI much more robust against all future weaknesses in hash functions.
> Additionally, it replaces “entropy” with “CSPRNG” to make the requirement
> clearer and easier to audit, and clarifies that the serial number must be
> positive.

[1]:
[https://cabforum.org/2016/03/31/ballot-164/](https://cabforum.org/2016/03/31/ballot-164/)
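
The asymmetry the ballot leans on, in textbook terms (generic bounds, not from the ballot text): for an n-bit hash,

    \underbrace{\sim 2^{n/2}}_{\text{collision (birthday)}} \quad \text{vs.} \quad \underbrace{\sim 2^{n}}_{\text{preimage}}

For MD5 (n = 128) the generic collision bound is 2^64, and real chosen-prefix attacks became far cheaper still; unpredictable serial bits deny the attacker the collision route.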

------
mikestew
Theoretical possibilities and minimal security impacts aside, I'm not seeing
comments along the lines of the brown M&M clause [0]. Yeah, brown M&Ms
weren't going to ruin David Lee Roth's day, but that wasn't the point: when
dealing with the heavy and high-amperage equipment of a stage show, what
_else_ did you forget or ignore?

64 bits, 63 bits, what's the difference? The difference is that we now have to
go through everything you might have forgotten that _will_ make a difference.
In other words, we apparently can't trust you to follow instructions, and
certificates are all about trust.

[0] [https://www.snopes.com/fact-check/brown-out/](https://www.snopes.com/fact-check/brown-out/)

------
nneonneo
Ok, I’m all for strong security and better SSL infrastructure, but the
response to this issue was just totally overboard. The issue - one fixed bit
in a 64-bit randomized serial field - does not compromise the security of
these certs in any meaningful way, especially not before their natural expiry
dates.

The disruption caused by reissuing everything surely exceeded the disruption
of this theoretical issue. I guess, on the plus side, we get to find out
whether the PKI infrastructure is ready for a mass revocation/replacement
event...

~~~
Cthulhu_
It's not about whether it compromised security; it's that they didn't adhere
to standards. If you're a certificate authority, you need to conform to
standards. If you're not, you SHOULD get evicted as an authority, like
DigiNotar [1] was for example.

[1]
[https://en.wikipedia.org/wiki/DigiNotar](https://en.wikipedia.org/wiki/DigiNotar)

~~~
matmg
I don't think you can compare actually misissuing certificates - including one
for *.google.com - to leaving one bit out of 64 fixed at zero.

------
IloveHN84
Personally, I hate EJBCA.

Recently they stopped releasing updates for the community edition (stuck at
6.10 while 7.0.1 is out) because they are a really greedy company.

Building it yourself is half a nightmare, and so is the installation process,
which relies on Ant tasks that fail 5 times out of 10.

As for the UI, most of the settings are easy to misuse, and even their
evangelists can get fooled by it (especially with their Enterprise Hardware
Instance, whose synchronization across the nodes is also faulty).

------
spydum
Sooooo all the big players depend on one CA PKI package, EJBCA - is that not
a major concern?

~~~
gpm
That seems like the correct state of things. More packages means more
possibility of bugs. We want to trust as little code as possible.

Now if only the same policy would be applied to CAs (possibly a few, to
mitigate abuse-of-power concerns, but far fewer than are in my trust store
today).

~~~
geofft
Counterpoint (which I'm not fully convinced of myself, to be fair): CAs are
supposed to be interchangeable and easy to revoke. While the CA ecosystem as a
whole must be robust, no individual CA can be too big to fail. If a serious
bug is found in software used by one or a few CAs (imagine something like the
Debian OpenSSL bug from 11 years ago), revoking them and requiring customers
to move to other CAs is feasible. If a serious bug is found in software used
by all CAs, you can't revoke all the certs on the web and leave HTTPS useless
globally while CAs set up new software.

On a tangent: one practice I'd genuinely like to see for security reasons (and
which I'm surprised the CAs haven't proposed themselves, since it would make
them twice as much money) is that major sites should always hold valid certs
from two CAs, so that if a CA gets revoked it's just updating a file or even
flipping a feature flag and certainly not signing up with a new CA. It would
make sense to have two certs generated by different software, then. (It might
also make sense, re abuse of power concerns, to _present_ both certs and have
browsers verify that a site has two valid certs from two organizationally-
unrelated CAs. That way you can be significantly more confident that the certs
aren't fraudulent.)

~~~
kardos
Would two signatures on the same cert fit the bill?

Two complete certs is twice as much data to transmit, making the TLS setup a
bit heavier.

~~~
isostatic
A typical webpage is something like 2MB

A typical cert is 0.1% of that

~~~
kardos
Lots of things that use TLS are much smaller than a bloated webpage - REST
APIs, for example.

------
a-wu
For background, earlier this month DarkMatter applied for Mozilla root CA
inclusion. There was an email thread [1] with concerns about DarkMatter, and
one of the emails [2] noted that DarkMatter was generating serial numbers in
exactly this fashion using EJBCA. There was a pretty long-winded discussion in
the thread about whether flipping the MSB constituted a loss of one bit of
entropy, and an EJBCA dev chimed in [3] saying basically that they were
pushing a fix to solve this. This seems to have kicked off this issue.
(There's a lot more to it, with DarkMatter's CTO saying that the method did
not constitute the loss of a bit, etc., but this thread seems to be where the
issue was first discovered.)

[1]
[https://groups.google.com/forum/#!topic/mozilla.dev.security...](https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/nnLVNfqgz7g)

[2]
[https://groups.google.com/d/msg/mozilla.dev.security.policy/...](https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/VAdQotoiBQAJ)

[3]
[https://groups.google.com/d/msg/mozilla.dev.security.policy/...](https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/OVKywVZIBgAJ)

------
ggm
CT principles would surely demand they make some public-facing declaration?

The 'pull the certificates from the browsers' option surely demands that
people from these companies maybe recuse themselves from the conversations?

(This is public-trust process stuff, not technology per se.)

~~~
Ajedi32
I don't understand what you're asking.

Many of the affected CAs have already come out and "confessed" that they've
issued non-compliant certs and stated that they're revoking them.

No certificates are being "pulled from browsers" as a result of this incident
as far as I know.

------
modeless
Is this a consequence of Java's failure to expose unsigned integer types?

------
jrochkind1
From that write-up, I'd call this a bug in EJBCA more than a
"misconfiguration". If it was working as designed, then its design was buggy.
:)

------
bitxbitxbitcoin
“Almost no chance of exploitation.”

How true is this?

~~~
ikeboy
True.

2^63 and 2^64 are effectively the same cost to break. Instead of costing $2X
to break, it now costs $X.

~~~
j16sdiz
This is an anti-collision measure against birthday attacks. The effect is
exponential.

~~~
Ajedi32
2^x _is_ exponential. GP is correct: it's still only one bit, so the cost is
halved.

------
tbodt
The true cost of Java not supporting unsigned integers

~~~
bdhess
In an X.509 certificate, the serial number is encoded as the ASN.1 integer
type, which is arbitrary length. So that can't map to a native integer type on
any platform.

I'd chalk this up to the author of the relevant module not really grokking the
two's complement behavior in java.math.BigInteger.
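
For anyone who hasn't been bitten by this, a small sketch of the BigInteger trap in question:

    import java.math.BigInteger;

    public class SignTrap {
        public static void main(String[] args) {
            byte[] bytes = new byte[8];
            bytes[0] = (byte) 0x80; // top bit set
            // Two's-complement constructor: the top bit is read as a sign bit.
            System.out.println(new BigInteger(bytes));    // -9223372036854775808
            // Sign-magnitude constructor: the bytes are an unsigned magnitude.
            System.out.println(new BigInteger(1, bytes)); // 9223372036854775808
        }
    }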

------
bandrami
Again and again, the problem with PKI is not the tech, but the agents. We need
an authorityless solution.

~~~
j16sdiz
It works fine until some bug arise and nobody have the authority to fix it....

------
omeid2
The interesting aspect that a lot of people are overlooking is that, for a
theoretical attack within certain timeframes, this difference can be
make-or-break!

Imagine a collision attack that takes about 1 year with 64-bit serial
numbers; with 63-bit serial numbers it should take about half that, around 6
months.

The average certificate is issued for about 1 year, so being able to mount in
6 months a collision attack that used to take 1 year can make the difference
between generally-not-useful and very practical and dangerous.
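
Spelling out the arithmetic of that hypothetical: with certificate lifetime L = 1 year,

    T_{64} = 1\ \text{yr} \ge L \;\Rightarrow\; \text{cert expires first}, \qquad
    T_{63} = \tfrac{1}{2}\,T_{64} = 6\ \text{mo} < L \;\Rightarrow\; \text{attack completes in time}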

~~~
p1mrx
Why do you assume that an attack would take 1 year, and not (e.g.) a billion
years? A factor of two is only interesting if the number you're dividing was
interesting in the first place.

~~~
omeid2
_Imagine_ is hardly assuming. But it doesn't have to be exactly 1 year: any
attack that, with 64-bit serial numbers, takes longer than the average
certificate lifetime but less than twice it (useless) becomes practical with
63-bit serial numbers (useful, for a strange meaning of useful).

~~~
fwip
Such an attack doesn't exist.

Any such attack would also become feasible with twice the budget.

~~~
omeid2
> Such an attack doesn't exist.

As far as we know.

> Any such attack would also become feasible with twice the budget.

Assuming that the attack yields to parallel computing and scales linearly
with more CPUs/cores - and linear scaling is bounded by current compute
capabilities, and ultimately by theoretical limits like Bremermann's limit
and the Margolus–Levitin theorem.

~~~
fwip
Yeah, assuming these true things.

~~~
omeid2
Assuming the parallelism of an algorithm that you know nothing about is beyond
foolish.

~~~
geofft
Right, which is why we know things about the algorithm.

