Mozilla fears DarkMatter ‘misuse’ of browser for hacking (reuters.com)
141 points by Santosh83 21 days ago | 43 comments



Can the admins change this to point to the actual mailing list thread[0]?

I'm still reading through the thread, but it seems that DarkMatter have some small technical issues around issuing certificates. Specifically, the claims seem to be:

1. using 63 bits of entropy for the serial number instead of the required 64.

2. name constraints were incorrectly filled in for their intermediate certificates.

3. DarkMatter's other business units may compromise the CA internally for malicious purposes.

[0]: https://groups.google.com/forum/#!topic/mozilla.dev.security...

EDIT: more background - DarkMatter have an open request to be included in the Mozilla CA Store[1], and are currently trusted through intermediate certificates rooted with QuoVadis, who were recently acquired by DigiCert.

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1427262


I think DarkMatter is right in the 64-bit entropy discussion; unfortunately, the requirements are clearly wrong.

- it says the serial must contain at least 64 bits of output from a CSPRNG

- it says the serial number is a signed number (and must be positive)

If you use a 64-bit serial, you clearly lose 1 bit to the sign. That means any certificate with a 64-bit serial does not conform to the standard.


The requirement in question is "Effective September 30, 2016, CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 64 bits of output from a CSPRNG".

This has been generally accepted (via discussions on MDSP and in the CABForum itself) to mean that the serial must contain at least 64 bits of entropy, not that the generation process consumes 64 bits. If it were the latter, then `head -c 8 /dev/urandom | head -c 1` (and clamping the top bit to 0) would be a permissible method of generating a serial, since it consumes 64 bits of CSPRNG output to generate the serial number.

CAs are given enormous power in the ecosystem and we judge them based on their incident response and process improvement. The barrier to entry should be very high for new entrants as we strive to improve the state of the industry. This incident alone is not a disqualifier for DM's inclusion but should be considered in the context of the other issues (both technical and otherwise) raised during review of the application process.

Regarding your claim that all certificates with 64-bit serial numbers are guaranteed invalid: a certificate with a 64-bit serial MAY be valid under the standard, because a CA might choose to implement the spec such that they generate a serial and prepend a null byte if and only if the high bit is 1. To determine experimentally whether a CA is doing that, you could use services like crt.sh or Censys to find all CAs with valid 64-bit serial number certificates and see if they also issue 72-bit serial numbers. If you don't see any >64-bit serials, they're likely violating the BRs, and you should submit a report to them (Mozilla maintains a contact list for CAs on their wiki) and also send an email to MDSP. Independent researchers are the lifeblood of the current wave of enforcement across the industry!
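A sketch of that compliant approach, in Python for illustration (this is my own example using the standard `secrets` module as the CSPRNG, not code from any actual CA):

```python
import secrets

def generate_serial() -> int:
    # Draw a full 64 bits from the OS CSPRNG.
    raw = secrets.token_bytes(8)
    # DER encodes serial numbers as signed INTEGERs. If the high bit is
    # set, prepend a zero byte so the value stays positive while keeping
    # all 64 bits of entropy (producing a 72-bit encoding).
    if raw[0] & 0x80:
        raw = b"\x00" + raw
    return int.from_bytes(raw, "big")
```

Roughly half the serials produced this way need the extra byte, which is why a CA implementing the spec like this should show plenty of >64-bit serials in CT logs.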


The number 64 is arbitrary, and was chosen because of potential birthday attacks on digest algorithms. Mozilla enforces the BRs, sometimes in extreme ways, and we derive security and trust from that enforcement. In many cases, the BRs are confusing, unclear, or ambiguous. The confusion here arose because the BRs require the serial to contain 64 bits of CSPRNG output and to always be positive, while ASN.1 integer fields are signed, so 1 bit is lost to the sign bit.

In this specific case, they have the option to use up to 160-bit fields for the serial number. They chose to use the minimum required 64, apparently out of performance concerns for embedded devices with limited memory/compute.

The CA platform (EJBCA) they use defaulted to this behavior, and from what I was able to understand[0], they are in the process of pushing out an update to all their customers to resolve the issue. It is very likely that someone will start going through the CT logs, and we will see many reports of insufficient entropy in cert serial numbers.

[0]: https://groups.google.com/d/msg/mozilla.dev.security.policy/...


I don't have much knowledge about certificates, except from what I read on this mailing list. But are there other CAs active (in Mozilla's root trust store) using 64-bit serials?


Probably.

Mozilla doesn't have a big team of investigators wandering about looking for possible problems. The CAs could dob themselves in (filing a "problem report" that says they broke this rule and what they're doing about it) or an independent researcher could write to m.d.s.policy about a problem they found. Otherwise it may go undetected forever.

Most interest is focused on problems that either directly indicate a grave problem (e.g. some researcher got a CA to issue for example.com when they shouldn't because that researcher doesn't control example.com) or suggest a management problem that could hide further grave problems from us (e.g. Symantec had paperwork that directly contradicted the operational behaviour of CrossCert, so why did we notice the problem before they did?)

The 64-bit entropy rule isn't such a problem. CAs should use random values here because it protects us against a problem with the secure hash function (usually SHA-256 these days). If the hash function is slightly broken, the random numbers buy us time to get a new one. But if you have 63 bits instead of 64 bits, it's not like that's the difference that makes an attack viable. It's just a technical violation; it's a brown M&M in that sense. The rules say to do it, and they didn't, so we are more worried than otherwise that they are not obeying the rules.
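To put the 63-versus-64 difference in perspective (my own back-of-the-envelope arithmetic, not from the thread): an attacker who must predict the serial in advance succeeds per guess with probability 2^-k for k bits of entropy, so losing one bit merely doubles an already astronomically small chance.

```python
# Chance of predicting an unpredictable k-bit serial in a single guess.
p63 = 2.0 ** -63
p64 = 2.0 ** -64
print(p63 / p64)  # 2.0: one missing bit only doubles the attacker's odds
print(p64)        # ~5.4e-20, i.e. negligible either way
```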


How does that mean the requirements are wrong? Serials larger than 8 bytes are allowed in certificates, as is mentioned in that thread. I don't see how the requirements of the CA forum and the certificate representation can be taken as inherently contradictory instead of simply requiring a larger-than-8-byte serial.


It is not contradictory, but it can be confusing when many parties are involved.

You have some software that gets the serial from an RNG; when you get 64 bits and put them into the serial field, you feel it is OK.

This is probably what happened in this case. But if the requirements had made, let's say, the serial 64 random bits plus some extra (an identifier etc.), forcing the serial field to be one more octet, this could never happen.


> If you use 64bit serial, you clearly lose 1 bit.

That's not clear to me at all. A 64-bit signed integer can still contain 64 bits of entropy, it just has to be negative 50% of the time.


Some things don't like negative serial numbers, and so the Baseline Requirements (which is where this 64-bit entropy requirement comes from) say to only use positive serial numbers. Thus, "you clearly lose 1 bit".
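To make the "lose 1 bit" point concrete, here is a hypothetical sketch (my own illustration, not any CA's actual code) of the non-compliant approach: forcing the value into a fixed 8-byte field by clearing the sign bit.

```python
import secrets

def clamped_serial() -> int:
    # Take 64 bits from the CSPRNG, then clear the top (sign) bit so the
    # 8-byte DER encoding is always positive. The result is uniform over
    # [0, 2**63), i.e. only 63 bits of entropy remain.
    raw = bytearray(secrets.token_bytes(8))
    raw[0] &= 0x7F
    return int.from_bytes(raw, "big")
```

Every output fits in 63 bits; the top half of the 64-bit range is unreachable, which is exactly the shortfall being discussed.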


I've got to say, DarkMatter seems suspicious to me just because they're trying to enter the CA business when Let's Encrypt already exists.

I can understand legacy CAs staying around as long as their existing customers keep paying. And I'm sure there will be an ongoing market for one or two other CAs specialising in the needs of big corporations.

But starting a new CA today, trying to sell a product that's available for free? You've gotta wonder where they think they're going to add value.


Yeah. I really, really can't think of a single valid reason why DarkMatter would want this that is not malicious.

From the EFF

> The concern in this case is that DarkMatter has made its business spying on internet communications, hacking dissidents’ iPhones, and other cyber-mercenary work. DarkMatter’s business objectives directly depend on intercepting end-user traffic on behalf of snooping governments. Giving DarkMatter a trusted root certificate would be like letting the proverbial fox guard the henhouse.

It's also worth mentioning that the FF trust store is regularly used as the trust store in various flavors of Linux.

https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups...


Well, we shouldn't want a single CA, no matter the organization. We should always have at least several that are independent in their leadership, ownership, and auditing.

And yes, there are things LE doesn't yet offer. They still don't do EV, SANs, or wildcards, do they?


They do wildcard certificates now; I have several of them. I _think_ they do SANs as well.


But Let's Encrypt doesn't offer Organization Validation or Extended Validation (which other CAs offer for a fee). So there is still a market for a new CA.


If you thought the title was nonsense, it gets worse when the author tries to explain what a certificate authority does:

> Websites that want to be designated as secure have to be certified by an outside organization, which will confirm their identity and vouch for their security. The certifying organization also helps secure the connection between an approved website and its users, promising the traffic will not be intercepted.


It's actually not all that wrong, given a very loose and forgiving interpretation of what they mean.

Certificate authorities do "help" secure the connection by providing trust, "promising the traffic will not be intercepted" through an exploit in their certificate, and oftentimes insuring the connection to back that trust (that's a lot of the difference in price between cheap and expensive certs: they insure for more). You could say this helps secure the connection by providing enough assurance that people are willing to use the connection at all.

Welcome to the world of marketing and PR, where loose terminology, ambiguous phrasing, and a little imagination mean never having to say you were wrong...


Yeah, no, it's pretty badly wrong.

Sometimes you will see a journalist say something that's just _technically_ wrong, in an "Um... actually" kind of way. Like saying there are 4 billion "Internet addresses". Yeah it's actually 2^32 and those are IPv4 addresses, and anyway some are in reserved classes we can never use and... We shouldn't care about these cases, it's fun to make a big deal as a _nerd_ but it's not a real problem.

But here they are misleading people as to the general purpose of a CA.

The CA doesn't in fact promise "traffic will not be intercepted" and how could they. The CA promises that their subscriber proved to them that they know a private key to which the corresponding public key is in the certificate, and that they control the things (usually FQDNs) named in the certificate.

Insurance, as another poster points out, is basically worthless. In fact it's worthless in multiple independent ways, it's deliberately engineered to _be_ worthless. For consumer insurance that would be illegal in many countries (taking people's money to "insure" against a risk that won't happen is prohibited in those countries), but they don't sell it to consumers, they sell it to the Certificate Authority and non-individuals are more free to make bad decisions since nobody important will get hurt.


> The CA doesn't in fact promise "traffic will not be intercepted" and how could they. The CA promises that their subscriber proved to them that they know a private key to which the corresponding public key is in the certificate, and that they control the things (usually FQDNs) named in the certificate.

Right, but surely you see how 90% of that statement is utterly incomprehensible to a general audience?


As written, sure. But making that statement into a claim that's comprehensible to a general audience is distinct from just claiming something completely unrelated. Notice that in my statement the CA has nothing to do with this traffic and whether or not it's intercepted. Because they don't.

Part of a journalist's job is to find a way to summarise an inevitably complicated story _without_ in the process making it into a completely different story. "Bilbo the wizard stole a magical ring" is not a good summary of The Hobbit, because it is wrong. The wizard isn't named Bilbo and that isn't what that story is about even though maybe it's important for other reasons.


How would you phrase it so that it is both technically correct and easy for the general population to understand?


Well, not really my job, but here goes:

> Websites that want to be designated as secure have to be certified by an outside organization, which will confirm their identity and vouch for their security. The certifying organization also helps secure the connection between an approved website and its users, promising the traffic will not be intercepted.

becomes

Certificate Authorities are organisations that validate a web site's "domain name" and issue certificates for validated names. The site uses a certificate to prove its name to your web browser when you visit an encrypted site.

This seems relatively easy to understand; most importantly, it's clear that the certificate is about naming and not some nebulous "security" or identity more broadly. Also valuable: it's clear that the CA plays no direct role in actually securing an HTTPS connection (very common for people not to understand that, even fairly technical people), and yet it's vague enough that I don't have to explain how public key technology works, which is a whole can of worms.


> Certificate Authorities are organisations that validate a web site's "domain name"

Sounds more like you're describing a domain registrar or NIC. And why is "domain name" in quotes?


That kind of explanation is how a company could "promise the traffic will not be intercepted" while using ROT13 to secure it.

Also, the insurances/warranties are a joke; the way they're written, there is pretty much no way anyone will ever be able to land a claim against them.


That is a perfectly reasonable description for a non-technical reader.


The biggest issue with it to me is the "vouch for their security" part, perpetuating the myth that HTTPS means the site is "safe" or that a certificate is a security audit of some kind.


I'm not a fan of "promise" - it makes it sound like a choice, and not something enforced by the design (assuming it's set up properly, which the CA _can't_ promise).


Sometimes I find Reuters stories boring compared to all the superficially exciting news produced by everyone else. But these are the kind of stories that make me thankful they exist. That contrast, in distracting times, reminds me of what choices I make when I consume news.

Kudos to Mozilla for keeping things open. Holding on to your values in cut-throat competitive times is fucking hard. Mad respect.


I'm not sure what you're praising Reuters for here.

This story has been reported by multiple news sites before, and I don't see that Reuters adds anything that hasn't already been reported.

(Except if you mean the original Reuters / Project Raven story, which was indeed exceptional.)


I had to read it twice to get that they were writing about a CA. Definitely a writer without a clue.


Compounding the problem was that the author kept using the word "safe".

FTA: "While Mozilla had been considering whether to grant DarkMatter the authority to certify websites as safe, two Mozilla executives said in an interview last week that Reuters’ report raised concerns about whether DarkMatter would abuse that authority."

Conflating "safe" or "secure" with "this website is who they say they are" may make it easier for a lay person to understand, but is wrong and creates a dangerous impression.


The main thing this thread (going with steventhedev's link) demonstrates to me is that the CA program is completely broken. Here we have a potential CA that is

- Demonstrably involved in internet surveillance [0]

- Doesn't understand the word "entropy" [1] or intentionally interprets the guidelines in an obtuse manner in which the sequence 0, 1, 2, ... can be considered randomly generated with 64 bits of entropy [2].

- Likely in the business of lying [3] (either they are lying, or Reuters is thoroughly mistaken or lying)

How could whatever public benefit is gained by including these people as a CA possibly outweigh the risk? I'd think that any one of the above should disqualify them.

It also seems completely backwards that the root CAs publish certificates that give subCAs equivalent power while not having the subCAs go through the same level of scrutiny as root CAs.

[0] https://www.reuters.com/investigates/special-report/usa-spyi... - also whatever you think "defensive cyber security" means.

[1] This is after repeated prompting and consulting with engineers (hint for those not familiar with entropy: it's not possible to have more than x bits of entropy in an x-bit random number, and they are using a 63-bit number) https://groups.google.com/forum/#!msg/mozilla.dev.security.p...

[2] Kindly pointed out by Matt Palmer: https://groups.google.com/d/msg/mozilla.dev.security.policy/...

[3] Compare the mentions of DarkMatter in the first link to their statement in the second link (one assumes that any reasonable definition of defensive does not involve spying on people in other nations)

- https://www.reuters.com/investigates/special-report/usa-spyi...

- https://groups.google.com/d/msg/mozilla.dev.security.policy/...


>It also seems completely backwards that the root CAs publish certificates that give subCAs equivalent power while not having the subCAs go through the same level of scrutiny as root CAs.

AFAIK the logic is that since the root CA is fronting the risk (in case of any misissuance), there doesn't need to be any extra scrutiny on the part of browser vendors. Besides, if this were somehow outlawed, then the root CA would simply provide an API that rubber-stamps whatever certificate is requested.


> AFAIK the logic is that since the root CA is fronting the risk (in case of any misissuance), there doesn't need to be any extra scrutiny on the part of browser vendors.

This logic doesn't make sense (though I think it is the logic)

- We audit CAs precisely because the cost of being caught isn't high enough to provide a reasonable deterrent.

- No one is suggesting removing DigiCert's root certs over this. The cost to the public is too high.

> the root CA will simply provide an API that rubber-stamps whatever certificate is requested.

- We audit CAs precisely to prevent them from doing things like this... Any CA caught doing this after it was forbidden isn't worth anyone's trust and could be immediately removed.

- Even that would be an improvement because they would presumably keep logs...


This is a pattern. Mozilla and others are furthering the interests of gatekeepers under the pretext of 'end user security'. The browser community has been scaremongering against self-signed certs for years now, yet is happily handing 'authority' to shady projects in the Middle East. What does one make of that?

Because of this obsession with 'central authority', even local AJAX calls to 127.0.0.1 and localhost are blocked by Mozilla. What is even worse is that they are doing this intentionally, in spite of the standards explicitly not requiring this for localhost or 127.0.0.1, and are misleading users on bug reports and wasting their time. [1][2][3]

Some have wasted days mucking about with CORS headers and SSL only to realize this is a Firefox-only issue. Even Chrome, which is not shy about furthering gatekeepers and their own interests, doesn't do this. Needless to say, this is not a problem on any other browser.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=903966

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1440370

[3] https://stackoverflow.com/questions/51269439/why-are-my-requ...


Reading the bug reports, it seems that Firefox allows access to 127.0.0.1 ports if the originating website is not HTTPS. This is weird - I would never want some random site to control my local services (including LAN) without granting explicit permission. This leads to vulnerabilities: https://lwn.net/Articles/703485/


A standard website cannot access your local services. 127.0.0.1 and localhost are local addresses to the server, not the end user system. Private addresses cannot be accessed over the public internet.

This is about a server accessing backend local services on the server or local private network via an SSL-terminating reverse proxy, etc.


I don't understand a couple of things from this answer.

> 127.0.0.1 and localhost are local addresses to the server, not the end user system.

If localhost is relative to the server, then a browser running on an end-user device cannot access it due to the unroutable IP in the first place, obviating the problem, which is not the case, seeing that the bug is open in Firefox.

> Private addresses cannot be accessed over the public internet.

That makes sense, the 127.0.0.0/8 space is routed only to localhost.

> a SSL terminating reverse proxy

If the proxy endpoint is on the localhost, it looks the same to the browser as any other local service before a connection attempt. If the proxy cannot certify that it matches the website (using a SSL cert?), then it's indistinguishable from any actual local service, and therefore should not be accessible by default.

EDIT: I genuinely do not understand what those sentences mean in the context of that discussion - they seem to be using terms not in a way I've always seen them used.


The standards explicitly name and allow 127.0.0.1 and localhost. Is there any value in rehashing this?

This works perfectly on Chrome and all other browsers. It's only Firefox that is a problem. So it's for Mozilla to explain why they are going against the standards and other browsers.


If Firefox is doing something that's closer to the right thing to do in terms of security, then it's not Firefox that is the problem, but the standard.

It would not be the only Web standard that acts against users (WebRTC enumeration of IP addresses, canvas fingerprinting, 3rd-party cookies; many others come to mind).


Then what's the point of having standards?

One could make the case Firefox is trying to 'protect users' if they resisted or protested any of the user hostile standards you mention, but Firefox has implemented all of them.

It's just this case, and this is not 'user hostile'. If they are going their own way there must be transparency, disclosure, and reasoning. But in the bug reports they are denying it.


Okay, I'm not saying Firefox is trying to protect the users in this case. Based on [0], they care about following the standard, and they implemented a patch once they saw they were not compliant. About 6 months ago, the bug came back. With a charitable outlook, it looks like a bug they don't give much priority to.

When it comes to the bigger topic of standards, they are meant to increase interoperability. However, there are many ways to interoperability, and they can serve different masters. Seeing how standards are just codified behaviours, they are similar to law, and it's quite clear that law is not always good. Civil disobedience is widely recognized as a valid way to influence laws.

In this light, I would say breaking standards can be a form of civil disobedience, if the standard doesn't serve the regular, disenfranchised person.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=903966


> A standard website can not access your local services.

I’ve used https://ziahamza.github.io/webui-aria2/ in the past to manage aria2c with aria2c running on localhost (see https://github.com/ziahamza/webui-aria2/blob/master/README.m... for instructions), and the webui loaded directly from said address. That was a while ago, and I don’t know if any browsers still allow access to localhost by default, but at least some of them did the last time I used that webui.



