Windows and macOS both have a signing infrastructure for apps. The rules of that infrastructure dictate only that apps must have been signed by a valid certificate at the time they were signed. That way old app downloads don't need to be periodically re-signed just to account for expiring certificates. I can download a 5-year-old version of 7zip or whatever and it runs just fine because it was signed with something valid to the timestamp in the signature. The process of distributing desktop apps would be utterly insane if this were not the case.
Not following this model for browser plugins seems unnecessarily cumbersome. Is it really worth requiring all browser plugins to be signed by a currently valid certificate? Is there a document or blog post where this is argued to be more appropriate?
I get that it arguably leads to more stringent security, but I'm not convinced the delta improvement over the desktop model is worth the additional downsides. And the "let everything expire after a few years and re-sign it" process should not be used as a substitute for revocation. After all, if it were determined retroactively that a malware extension had been signed, does it really help anyone that it can't be loaded in a year or two, given the damage it could cause right now?
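Roughly, the difference between the two models boils down to this (a Python sketch with made-up cert fields, not any real API):

  from datetime import datetime, timezone

  def desktop_model_ok(cert, countersigned_at):
      # Desktop code signing: accept forever, as long as the cert was
      # valid at the (trusted, countersigned) moment of signing.
      return cert.not_before <= countersigned_at <= cert.not_after

  def firefox_model_ok(cert):
      # What bit Firefox: the cert must be valid *right now*, every
      # time the chain is re-verified -- even for installed add-ons.
      now = datetime.now(timezone.utc)
      return cert.not_before <= now <= cert.not_after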
Code signing is a well understood problem with a well known solution, but the blog post discusses everything except the well known solution.
Right now you have a problem caused directly by the lack of timestamping, and the article doesn’t even acknowledge that.
That’s not inspiring confidence. I’m genuinely still not sure if they have understood what the actual problem is and how to solve it properly.
> We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make
The lessons noted down here are just some thoughts by the author of this blog post:
> but in the meantime here are my initial thoughts about what we need to do.
I thought the mention of "ticking time bombs" showed someone is thinking about this properly, because end users get the same experience whether a timer gets treated as negative in 2038, the browser depends on the century field being 20, or an X.509 certificate expires. If you are sure you handled all the certs but you blow up because your GPS epoch wrapped, you still screwed up.
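For the record, the 2038 wrap is trivially easy to demonstrate (plain Python, no special libraries):

  import struct
  from datetime import datetime, timedelta, timezone

  EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

  print(EPOCH + timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07+00:00, last good tick
  wrapped, = struct.unpack("<i", struct.pack("<I", 2**31))
  print(EPOCH + timedelta(seconds=wrapped))     # 1901-12-13 20:45:52+00:00, one tick later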
The post did say that they'll "be looking more generally at our add-on security architecture to make sure that it’s enforcing the right security properties at the least risk of breakage". I hope they'll be looking at that point here; if they want to support explicit revocation, there may be other ways to do it which aren't quite so prone to invocation by accident (e.g., publishing a revocation list signed by certs valid at the time of revocation).
And in this specific use case, the expiration of that "intermediate" obviously should not have made already-accepted add-ons stop working. It's not about establishing a new trusted communication channel for new content, and it's not about disabling some specific add-on.
Thinking logically, that was not the role of that intermediate certificate.
So the behavior was clearly designed wrong, because a completely wrong analogy was used -- that of creating a connection, where an expired intermediate certificate should prevent the new connection, which could transfer a new attack if not verified. Here the verification had already happened, and a "ban" of a specific set of add-ons was also clearly not the case.
Additionally, it seems the handling of the update of the expired intermediate was not part of the design at all.
I’m not saying they shouldn’t, but it is a significant piece of complexity.
Mozilla already requires uploading even your private extensions to a Mozilla server to be signed for internal or external deployment.
Nope. Actually, all CAs that offer code signing certificates are required to provide a timestamping service compatible with RFC 3161. These timestamping servers are usually free to use; see for example https://knowledge.digicert.com/generalinformation/INFO4231.h...
Though as things stand, CAs have no problem timestamping signatures made with certs from other CAs, so perhaps no explicit agreement is even required.
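For the curious, using one of these free TSAs is about this involved (a sketch driving `openssl ts` from Python; the flags and the DigiCert URL are from memory, so double-check against your openssl version and the CA's docs):

  import subprocess, urllib.request

  TSA_URL = "http://timestamp.digicert.com"  # DigiCert's public RFC 3161 TSA

  # 1. Build a timestamp query over the artifact's hash.
  subprocess.run(["openssl", "ts", "-query", "-data", "addon.xpi",
                  "-sha256", "-cert", "-out", "request.tsq"], check=True)

  # 2. POST it to the TSA; the reply contains a signed timestamp token.
  req = urllib.request.Request(TSA_URL, data=open("request.tsq", "rb").read(),
                               headers={"Content-Type": "application/timestamp-query"})
  open("response.tsr", "wb").write(urllib.request.urlopen(req).read())

  # 3. Verify the token against the TSA's CA chain.
  subprocess.run(["openssl", "ts", "-verify", "-queryfile", "request.tsq",
                  "-in", "response.tsr", "-CAfile", "tsa-chain.pem"], check=True)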
I really don't understand why Mozilla designed their system the way they did. Code signing is well known and probably even done for the Windows installer of Firefox itself, so why did they not just duplicate that model? Checking expiration against the current time makes absolutely no sense for code signing, especially at runtime. (It could sort of make a little bit of sense at install time, but I'm not really convinced, and it may not even be possible to distinguish between the two with an effective boundary.)
Are we worried that someone will steal an old/expired cert and have control over a user's clock?
Without some other verification mechanism, you can't tell the difference between this and an actual signature made when the key WAS valid.
The whole point of a trusted timestamp is that such a signature cannot be made for a fraudulent date; otherwise it would be utterly pointless.
This scenario and threat model does not exist if timestamping is correctly implemented.
RFC 3161 timestamps - which is what we're discussing here - can be fraudulently constructed for any timestamp value by someone who has the private key for the TSA (Time stamping authority). So what your parent described is easily possible: A system that relies on RFC 3161 timestamps has to trust that
* any cryptographic hash algorithms used remain safe
* any public key signature methods used remain safe
* the TSAs retain control over their private keys for as long as you continue to accept timestamps from that TSA
This is a big ask, and in practice the code signing systems you're probably thinking of just don't care very much. A state actor (e.g. the NSA) could almost certainly fake materials for these systems; we know that this has been done (presumably by the NSA or Mossad) in order to interfere with the Iranian nuclear weapons programme in the past.
You _can_ build a system that has tamper-evident timestamping, but it's much more sophisticated and has much higher technical requirements. That's what drives the Certificate Transparency system. CT logs can prove they logged a specific certificate within a 24-hour period, monitors verify that their proofs remain consistent, and the to-be-built Gossip layers allow monitors to compare what they see in order to gain confidence that logs don't tell different stories to different monitors. But to achieve this, a CT log must be immediately distrusted if it falls offline for just 24 hours or if an error causes it to fail to log even a single certificate it issued a timestamp for. Massive earthquake hit your secure data centre and destroyed the site? You have 24 hours to get everything back online or be distrusted permanently. Bug in a Redis configuration lost one cert out of 4 million issued? You are distrusted permanently. Most attempts to build a CT log fail the first time; some outfits give up after a couple of tries and just accept they're not up to the task.
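For a flavour of what that tamper evidence actually is: an inclusion proof is just a Merkle audit path that anyone can re-verify against the root hash the log has signed. A hand-rolled sketch of the RFC 6962-style check (illustrative only, not production code):

  import hashlib

  def leaf_hash(entry: bytes) -> bytes:
      return hashlib.sha256(b"\x00" + entry).digest()         # RFC 6962 leaf prefix

  def node_hash(left: bytes, right: bytes) -> bytes:
      return hashlib.sha256(b"\x01" + left + right).digest()  # interior-node prefix

  def verify_inclusion(entry, index, tree_size, path, root_hash):
      # Walk the audit path from the leaf up to the root.
      if index >= tree_size:
          return False
      fn, sn = index, tree_size - 1
      r = leaf_hash(entry)
      for p in path:
          if sn == 0:
              return False
          if fn % 2 == 1 or fn == sn:
              r = node_hash(p, r)
              if fn % 2 == 0:
                  while fn % 2 == 0 and fn != 0:
                      fn >>= 1
                      sn >>= 1
          else:
              r = node_hash(r, p)
          fn >>= 1
          sn >>= 1
      return sn == 0 and r == root_hash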
Sure. Which is why these are heavily secured and guarded. Just like the keys for any cert, and highly trusted root certs in particular.
Any private/public crypto system can be compromised if the private keys are leaked. Everyone knows that.
That however is in no way a good argument for not using timestamps.
Alice the OS Vendor wants to let Bob the Developer make certificates saying these are his Programs, she is worried Bob will screw up so his cert needs to have a short lifetime, but her OS needs to be able to accept the certs after that lifetime expires so users can still run their Programs. So, Bob makes certificates and uses Trent's public TSA that Alice authorised to prove they were made when they say they were. Alice only has to trust Trent (who is good at his job) for a long period, and Bob who can be expected to screw up gets only short-lived certificates.
But Mozilla's setup doesn't have these extra parties. There is intentionally no Bob in Mozilla's version of the story, they sign add-ons themselves, so timestamping plays no role. If a 25 year TSA would be appropriate (hint: it would not) then a 25 year intermediate cert would be just as appropriate and simpler to implement for Mozilla.
So yes, Mozilla signs the extensions, but that doesn't change the importance of keeping the private key private... that is HOW we know it is Mozilla doing the signing
Let's say you compromise by letting old signatures stay valid, but only for a year. And you rotate the intermediate cert every 90 days.
This system is more secure than the old one, because you only have to worry about key leaks for 15 months, instead of years.
But at the same time, it's impossible to have this giant wave of everything failing at once. Instead, the best case is nobody can sign extensions for a couple of days, and the worst case is extensions that updated exactly a year ago start to fail in real time.
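The acceptance rule would be something like this (Python sketch; the 90/365-day numbers are the ones from this hypothetical, not anything Mozilla actually ships):

  from datetime import datetime, timedelta, timezone

  INTERMEDIATE_LIFETIME = timedelta(days=90)   # how long a cert can sign
  SIGNATURE_LIFETIME = timedelta(days=365)     # how long a signature stays accepted

  def signature_acceptable(signed_at, cert_issued_at, now=None):
      now = now or datetime.now(timezone.utc)
      valid_when_signed = (cert_issued_at <= signed_at
                           <= cert_issued_at + INTERMEDIATE_LIFETIME)
      still_fresh = now - signed_at <= SIGNATURE_LIFETIME
      # A leaked key is only dangerous until its 90-day window closes, and
      # anything it signed ages out within a further year (~15 months total).
      return valid_when_signed and still_fresh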
Or at least more useful than the 'when all you have is a hammer' style applications of blockchains.
As always Blockchains are a good solution only for a tiny group of problems.
Flowchart here: http://philippe.ameline.net/images/ShouldYouUseBlckchn.jpg
Firefox has to deal with malware injecting extensions outside the normal browser process, so I'm not surprised they would default to having periodic re-checks.
Considering Mozilla's certificate failure, Apple's cert failure, and the number of websites I encounter that have forgotten to renew their certificates, it seems like a broken system. Or a really effective form of DRM, I'm not sure which.
You could just as well have certificates that never expire and just start signing with a new one if you feel like it.
Note the Mozilla ‘solution’ has the same problem: their root has now authorized a new certificate with the old public key. If the original expectation was that the private key would be safe for a certain number of years (which is why you have expiry dates in the first place), the unsafe private key is now valid again. This defeats the purpose of expiry.
It could just as well have been signed today with a fake date.
If you don't like having the trusted third party, you can also publish the hash in some write-once medium, such as on the blockchain, in certificate transparency logs, or in the small ads of a reputable newspaper.
This trusted timestamping relies on the same process of signing things with public and private keys, so it just adds a step; it doesn’t solve the problem. The timestamping key needs to expire as well, and then it can’t be trusted anymore.
Admittedly, CAs are held to higher standards than most certificate users in terms of keeping keys in hardware security modules and suchlike. So perhaps it's not 100% unjustified that they get longer validity periods.
With this, the time when a code-signed application is executed doesn't matter at all, fake date or not.
In other words, you basically check every time you use it. That is how the stolen Microsoft cert was stopped.
All old apps are checked for validity. It is not as you implied.
The bug was fixed so quickly that I wouldn't even have realized it had happened if it hadn't been for the thread here on HN. My extensions hadn't even been disabled yet by the time the patch came out. And pushing out the hotfix through studies followed by a new version probably ensured that a large fraction of the "average joe" userbase didn't even realize there was a problem.
So obviously there are some improvements to make for the future but I think some of the criticism over the last few days has been a bit harsh. Firefox is still my preferred browser by far.
Maybe this is a timezone thing, but I was in East Asia, and I had to deal with the internet for close to 36 hrs (android) with no ublock. It was almost enough to look for a new browser (but browsers with adblock on Android are few and far between - so instead I just didn't use the internet as much for a day or two.). Part of that delay was play store being slow to push it, as I recall seeing the binaries somewhere a while sooner.
Mozilla's official Linux builds disallow xpinstall.signatures.required = false (last I checked), but the unbranded builds (as well as builds provided by at least some repos) do indeed allow signature bypassing.
Edit: on second thought, maybe I saw this solution (with or without the mention of nightly) and skipped over it as the Mozilla blog post on May 4th said "There are a number of work-arounds being discussed in the community. These are not recommended as they may conflict with fixes we are deploying."
Would I have to sign in to sync again to access my bookmarks, logins, etc? Are there ever issues with syncing between phone and computer (nightly to stable) or would I have to change my desktop browser as well? Does anything ever break at all?
Even if the answer to these questions is "no", the fact that I'm asking them is the barrier to entry. And if any of the answers are yes, there's no reason to change from stable branch - avoiding one issue to get different one(s) isn't a solution.
> certainly no more than there would be for, you know, normal Firefox
Yes, indeed, switching to Firefox from Chrome a couple of years ago did have a significant barrier to entry.
On the desktop front there is no problem connecting stable and Nightly to the same sync account, and indeed the same profile directly.
The sole annoying thing about Nightly is that it naturally updates frequently. It's quite stable, gets legitimately useful features faster, and lets you disable signing and run locally built add-ons. I use it as my primary browser on Android and Linux.
Fair enough. The answers, for the record, are indeed "yes" (but that takes, what, 30 seconds?), "no", and (at least not severely) "no".
But apparently even non-Nightly Firefox for Android supports xpinstall.signatures.required = false, which is even less of a barrier to entry, so that's good news, I guess. While I understand Mozilla's reasoning for not wanting a bunch of people to set this and forget about it, it's a bit ridiculous that not once did they mention it aside from a "don't do this thing that we're not going to specify because it's a hack" (of course it's a hack, and it's one that got me up and running again long before there was even a fix via Studies).
Yeah, the last two would have been the broader deal breakers. The first one is just an issue for me personally - I don't know my sync password. I have it written down at home, but I'm not there right now.
I have no issues whatsoever with the speed of Firefox on mobile. I also prefer to support a non-chromium browser.
Don't you, from a privacy perspective, find it more than a little disturbing that the study mechanism has so much access to internal APIs in Firefox that it can install certificates without your involvement?
That seems like a crazy security risk, let alone privacy risk. It's built in and enabled _by default_ in Firefox.
Seriously, look at how studies operates. If you're using firefox, go to about:studies and see for yourself.
> look at how studies operates
It works in the background and without asking you anything. Is that correct?
> If you're using firefox, go to about:studies and see for yourself.
It says "You have not participated in any studies." - probably because I have them disabled.
In my own browser, if I go to about:studies, I have two Studies running. One is the hotfix for the add-on signing issue. The other is:
prefflip-push-performance-1491171 • Active
This study sets dom.push.alwaysConnect to false.
(Disclosure: I work for Mozilla)
The Studies mechanism occurs silently, running private code, without specific user action, and with "extensive access to Firefox internal APIs".
At the end of the day the question is whether you can trust Mozilla. I trust them more than most entities, including many that push changes through apt-get.
I would love to see a link to the code related to various Studies. So far I've not been able to find any. All you seem to get told is the name of any studies you're in, and only if you go over to about:studies and go looking.
This is not to say I don't trust Mozilla. I do. I trust them far more than I do Google / Chrome. It's hard not to see Studies as a privacy nightmare, though, and the level of power it has is disturbing.
When the study involves an addon, I think the bug will link to its code.
The other thing is that it's not like there are many other options for me. The only mainstream browsers I could realistically use are Chrome, Firefox and Edge. Out of those I think Firefox probably cares the most about my privacy.
If you don't trust them at least a little bit, you shouldn't be using their software full stop.
You were lucky your extensions were not disabled before the fix, but for many people this was a major problem.
To get the fix though, I had to opt in to the Firefox studies. Apparently I had opted out at some point in the past.
Currently testing out "Edgium" though, and I can already say it's the one browser that could pull me away.
But it did take a few days for the Android fix to be released on Google Play, so it was more annoying than on the desktop.
Add to that that the recommended fix was to activate the backdoor in Firefox; that's just a horror story from beginning to end.
And no admission of guilt anywhere.
> We strive to make Firefox a great experience. Last weekend we failed, and we’re sorry. (...) We let you down
I think they handled it terribly… or rather, they handled the same event three years ago terribly. There's a reason this is called armagadd-on 2.0… that cert had expired once already, the previous fix didn't work. I think last time they actually had a week of lead time, too.
That is 100% not reassuring. Remember Looking Glass?
Well, the marketing team has the power to tweak how ads are handled, silently, with no update action from the user.
Out of the frying pan, into the fire.
I'm still running FF 56.0.2 because I can't live without Tab Mix Plus.
The official line was to wait for the update, but about:studies never came up with anything even after 24 hours of waiting, and everyone was saying it was fixed when it wasn't for me. I presume because nobody cares about the refugees stuck on pre-add-on-breakage versions.
So it was completely broken until I finally found a reddit thread that described how to use the developer console to manually import the certificate extracted from the fix.
That worked, but it enabled all my add-ons, even ones that were previously disabled.
Bloody irritating. I didn't even know that it was possible to break things remotely like this.
Also they could have mentioned in their post that their fix did not do anything for older versions, instead of specifically telling everyone to just keep waiting.
xpinstall.signatures.required = false didn't fix it.
I'll be very happy to update when there's a version that has a good tab manager. I'm on the latest version at home and it regularly loses whole windows full of tabs, even though it is set to restore my session on startup. And there's no way to manually save sessions. It's hopeless.
I wanted to point out this wasn't remotely broken, however. Even if you had no internet connection, your addons would have stopped working when the certificate expired.
I just wanted to chime in way down deep in this comment chain because my thought only makes sense in the context of your comment right here.
I think there may be a special mode of operation of Firefox that may need to be considered here.
You said, "Even if you had no internet connection, your addons would have stopped working when the certificate expired."
This seems like an unfortunate design flaw to me. Consider a Firefox kitted out with a specific set of add-ons set up to the user's liking. Then the network that Firefox is located on becomes permanently cut off from the internet and can no longer make contact with the Mozilla mother ship. Maybe it's running in a VM, or maybe it's running in a country with an oppressive regime. I can think of many scenarios where a Firefox would be cut off.
I think it is a reasonable expectation that the marooned Firefox should continue to run indefinitely without failure. Perhaps the user could be occasionally (monthly, yearly) flagged with warnings that the mother ship could not be contacted, but other than that, nothing should fail.
Please consider this and share it with your teams when the post mortem is discussed.
Thanks! I'm a loyal user since before Firefox.
I personally agree this is a worthwhile goal. The blog post talks about "tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly." I expect once that is done we'll be positioned to evaluate if and how we could support this.
I've fixed it now anyway, in the way I specifically read that I wasn't meant to do.
Maybe you guys could inform us unsupported old version users what we should do instead of waiting for an update that can't come?
It's ok, thanks for your post. I know stuff is complicated and shit happens.
I'm much more disappointed that the latest Firefox doesn't have a working session manager than I am about this mixup. IMO that should be a core browser function.
Would you mind please telling whoever you need to that we need automatic session saving/restoring to work properly, and then lots of people like me will happily update.
I think part of the reason this fiasco stirs up so much emotion is that normal people, including tech professionals, still expect the browser to behave like a product, not a service. And products aren't supposed to randomly break like that; they aren't supposed to ship with a time bomb attached.
Not this? I didn't think things would break with updates disabled.
Like I said, now I've been educated on a new mechanism for things to break.
Give me back my Tab Mix Plus!
>In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z.
I love Mozilla.
I wonder if the engineers who implemented the original system had a clear idea of what would be done when the cert expiry approached and it just never got documented (or Firefox accidentally changed in a way that made it no longer applicable?), or if they just figured they'd work that out years later when the date approached.
> Many developers have asked why we can’t make this a runtime option or preference. There is nowhere we could store that choice on the user’s machine that these greyware apps couldn’t change and plausibly claim they were acting on behalf of the user’s “choice” not to opt-out of the light grey checkbox on page 43 of their EULA. This is not a concern about hypotheticals, we have many documented cases of add-ons disabling the mechanisms through which we inform users and give them control over their add-ons. By baking the signing requirement into the executable these programs will either have to submit to our review process or take the blatant malware step of replacing or altering Firefox. We are sure some will take that step, but it won’t be an attractive option for a Fortune 500 plugin vendor, popular download sites, or the laptop vendor involved in distributing Superfish. For the ones who do, we hope that modifying another program’s executable code is blatant enough that security software vendors will take action and stop letting these programs hide behind terms buried in their user-hostile EULAs.
There are of course always workarounds on an open platform like Windows/Mac/Linux, but the threshold isn’t “impossible”, it’s just “as difficult as injecting into the browser’s code.”
Edit: For example, what if the config file contained a checksum of the file's contents + the user's hardware? If the setting is changed by the user within Firefox, the checksum is updated and everything works—if the checksum is invalid, settings are reset to the default.
Video games will occasionally do this type of thing with their config files. Modders often figure out the formula—but again, the idea here isn't to make editing the file impossible, it's to make it as difficult as injecting into the Firefox executable.
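Something in the spirit of this, say (a hand-wavy Python sketch; machine_id() is a stand-in for whatever hardware fingerprint you'd actually use):

  import hmac, hashlib, uuid

  def machine_id() -> bytes:
      return uuid.getnode().to_bytes(6, "big")  # MAC address; illustrative only

  def seal(config: bytes) -> bytes:
      # MAC over the settings, keyed on machine-specific data.
      return hmac.new(machine_id(), config, hashlib.sha256).digest()

  def load(config: bytes, stored_mac: bytes) -> bytes:
      if not hmac.compare_digest(seal(config), stored_mac):
          return b""  # edited out-of-band or copied from another machine: reset to defaults
      return config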
Of course there are ways, but none of them are convenient. Generally, the well-established security boundaries are between the operating system and the user, and then between different users. Almost all the ways I can think of involve Firefox temporarily elevating privileges, which is undesirable.
I don't even remember what it is supposed to protect against precisely -- and MS does not communicate a lot about it. Maybe they don't officially consider it a security boundary (same as UAC). Mozilla seems to be concerned about attackers having admin privileges (but who are still shy of just plainly hacking the Firefox binary or something crazy like that): that's quite hard to defend against... and if you can, it means the user has actually lost some control over their own computer, which is a situation with its own issues.
But Windows? A sadder story, I think.
More than "going forward", this has been the case for a while now. It's been long enough ago that they disabled that setting in stock Firefox that I don't remember exactly when it happened.
Our Add-on Policy is discussed further at https://blog.mozilla.org/addons/2019/05/02/add-on-policy-and...
I am extremely wary, having been bitten more than once, about software that automatically updates itself or is otherwise subject to remote interference. I don't run Windows 10. I don't use Chrome for anything important. I avoid subscription-based or activation-required software as much as possible. And in Firefox, I specifically chose to be prompted to install updates (which I usually do immediately, but it's my choice and I can do a quick search first in case there ever is a problem being widely reported).
I find the argument that it's impossible to make this configurable because malware could then circumvent it very weak. If we're talking about that level of interference, anyone with access to the Firefox executable could in theory replace it, and given the open source nature of the Firefox codebase this wouldn't be particularly difficult technically for anyone willing to go to such lengths in the first place.
In any case, even if the argument about hard-coding protection into the executable did stand up to scrutiny, there are alternatives possible instead of retrospectively disabling addons with no possible workaround. Perhaps most obviously, you could show a warning message and require explicit user approval at startup before activating the addon, for example, as is already done with various other useful features that are also potentially open to abuse.
As things stand, far from protecting our privacy and security from malicious addons, the current system in fact deactivated all of our privacy- and security-protecting addons, without warning, right in the middle of browsing sessions. One of these seems to be very much worse than the other in terms of the risks created, and I urge you to consider that when deciding how addons are treated in the future.
As mentioned in GGP's quote, Firefox was specifically modified by popular software by Fortune 500 companies and laptop companies, which feel safe enough to modify user preferences, but not to replace the Firefox executable. This does specifically fix that real-world attack vector.
> In any case, even if the argument about hard-coding protection into the executable did stand up to scrutiny, there are alternatives possible instead of retrospectively disabling addons with no possible workaround. Perhaps most obviously, you could show a warning message and require explicit user approval at startup before activating the addon, for example, as is already done with various other useful features that are also potentially open to abuse.
I think that would be categorised as an option "that these greyware apps [could] change and plausibly claim they were acting on behalf of the user’s “choice” not to opt-out of the light grey checkbox on page 43 of their EULA".
> As things stand, far from protecting our privacy and security from malicious addons, the current system in fact deactivated all of our privacy- and security-protecting addons, without warning, right in the middle of browsing sessions.
That this happened was a risk, and they're taking active measures against it for the future. The other was a certainty, and this risk was the active measure they were taking against it.
Sorry, but I don't really see how. We've been using click-to-play safeguards on embedded content for years, and they have proved highly effective at stopping abusive or outright malicious content in Flash, Java applets, etc. Why couldn't a similar safeguard be used to isolate untrusted addons but still give users the option to override and run them in the current session if they really do want to? I don't see why such a mechanism would be more vulnerable than any other hard-coded browser behaviour, including refusing to run those addons at all. If you made the behaviour configurable via a persistent setting then obviously that could be subject to external modification, but you don't have to do that here.
I respectfully disagree with this stance. That malicious sites are actively compromising user privacy is also not a risk, it is a certainty. That addons to block unwanted content have stopped malware from exploiting browser vulnerabilities and infecting user systems is also not a risk, it is a certainty. It would take substantial evidence to convince me that the risk from greyware apps was really greater than the risk of privacy and security invasions across the entire Web.
But other software on the user's computer wasn't trying to work around those safeguards. That's the main attack vector, as I understand it.
> That malicious sites are actively compromising user privacy is also not a risk, it is a certainty.
Absolutely, and as far as I'm aware that's also something that Mozilla's actively taking measures against.
> That addons to block unwanted content have stopped malware from exploiting browser vulnerabilities and infecting user systems is also not a risk, it is a certainty.
Sure, in hindsight it is, but I wouldn't have predicted a week ago that it was about to happen, and as far as Mozilla is able to predict future occurrences they are taking measures against it as well.
There is a known risk of Firefox being compromised by malicious addons, including those preinstalled by certain organisations. This risk is what is moderated by requiring addons to be signed and hard-coding a block. However, moderation is all this gains, because anyone who is preinstalling Firefox on a computer could still install a modified executable instead.
There is also a known risk of the user's security or privacy being compromised by visiting malicious websites that exploit weaknesses or vulnerabilities in Firefox. This risk is what is moderated by addons that block or otherwise interfere with undesirable content. It doesn't take any sort of hindsight to anticipate this; it is one of the major reasons people advocate blocker extensions, and this has been true for many years.
It is understandable that Mozilla would want to disrupt the former threat, but as I and others have explained, there are tried and tested ways they could do so that are no more vulnerable than the current approach yet would not suddenly remove all protection offered by addons against the latter threat without warning in the middle of a browsing session. The current heavy-handed approach is like building a secure home by making a concrete bunker with no doors and windows: the efforts to secure the addon system ultimately rendered the entire system useless.
Worse than that, though, the current strategy violates the basic principles that attract some users to Firefox in the first place, specifically its extensibility through addons and its relative respect for users' privacy and control of their own systems. The fact that Mozilla have so far shown little understanding of why some users would have a problem with this is regrettable, but perhaps they will come around with further thought after the event. However, the fact that there are people here still trying to defend the policy despite the highly visible train wreck that just happened seems very odd to me.
Well, apparently that is a line that vendors are not prepared to cross.
> There is also a known risk of the user's security or privacy being compromised by visiting malicious websites that exploit weaknesses or vulnerabilities in Firefox.
When it comes to actual weaknesses or vulnerabilities, it seems clear to me that Mozilla should not rely on add-ons for patching those. But yes, blocker extensions still provide value; luckily, they are also still allowed.
> as I and others have explained, there are tried and tested ways they could do so that are no more vulnerable than the current approach yet would not suddenly remove all protection offered by addons against the latter threat without warning in the middle of a browsing session. The current heavy-handed approach is like building a secure home by making a concrete bunker with no doors and windows: the efforts to secure the addon system ultimately rendered the entire system useless.
You've said this before, so to prevent getting into a loop, I won't repeat my response :)
> Worse than that, though, the current strategy violates the basic principles that attract some users to Firefox in the first place, specifically its extensibility through addons and its relative respect for users' privacy and control of their own systems.
This I understand, and I wish it wasn't necessary too. I don't think Mozilla has shown little understanding - they've repeatedly explained how they are caught between a rock and a hard place, and reached a different conclusion than you did after weighing the pros and cons. That does not mean a lack of understanding of the cons, merely that, in their view, those cons did not outweigh the cons of the alternatives.
This might simply be the result of different valuations of the pros and cons between you and Mozilla; given the amount of data and insight Mozilla has on the use of Firefox, I would also suggest to be open to the idea that there might be a lack of understanding on our side about the scale of the problem of malicious extensions.
Are Mozilla currently helping any organisations to sue these companies?
Is there more detailed evidence provided on this somewhere? Like which companies and exactly what they did?
Firefox is still considered open source, correct? From what I know, open source software is meant to be altered.
My initial reaction when Let's Encrypt had you re-issue every 90 days was negative, but I was wrong. Very wrong. A 90 day re-issue forces you to have working re-issue infrastructure and procedures, and therefore you're less likely to get stung by an accidental expiration.
Long expirations are a trap, a very easy trap to fall into.
It should be noted that most LetsEncrypt tools will renew a certificate when it is 30 days from expiry, so if you run the renew script every week (or day) you're also never close to expiry.
Ironically, I had the opposite problem. I used to be on top of things like cert expirations, but now I just let certbot do everything. The problem (at least in my case) was that even though certbot updated the cert on time, it doesn't restart / reload nginx so that it picks up the new cert. My site was up for the full ~30 days between the renewal and the expiration of the old cert. So my site went down because of letsencrypt's cert renewal policy.
(I now have a script set up that reloads nginx's configuration whenever the certificate is updated.)
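For anyone else hitting this: certbot has deploy hooks for exactly this case; a script dropped into /etc/letsencrypt/renewal-hooks/deploy/ (or passed with --deploy-hook) runs only after a successful renewal. Mine is essentially:

  #!/usr/bin/env python3
  # Certbot deploy hook sketch: reload nginx so it picks up the renewed cert.
  # Make it executable and place it in /etc/letsencrypt/renewal-hooks/deploy/.
  import subprocess

  subprocess.run(["nginx", "-t"], check=True)                  # sanity-check config first
  subprocess.run(["systemctl", "reload", "nginx"], check=True)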
It's also infrequent enough to often remain a manual process when it really should be automated.
That said, I still haven't automated renewal of my personal ones using LetsEncrypt, but it's also so simple I don't really need to.
At any of my previous jobs where I've had to deal with certificates, a 90 day cycle would have guaranteed I'd have it down to at least just "run this script"
This clearly says they screwed up and that internal procedures need to be changed so it cannot recur. If this was in fact an oversight/accident, what other kind of explanation can they give?
> This was due to an error on our end: we let one of the certificates used to sign add-ons expire
> We clearly need to adjust our processes both to make this and similar incidents less likely to happen and to make them easier to fix.
Why not link to the xpi that can be installed now?
This is the crux of my remaining frustration with how Mozilla handled this issue. That XPI should've been front-and-center in all the articles that detailed the fix. And yet, instead of something like...
"If you have Studies enabled, a fix should apply automatically. If it hasn't yet, or if you have Studies turned off (or are using a version which does not support Studies), you can install the hotfix add-on [here](URL to XPI)."
...pretty much all the official messaging ended up like so:
"If you have Studies enabled, a fix should apply automatically. It may take up to 6 hours; please be patient and wait for it. If you don't want to (or can't) turn on Studies, you're SOL until we push out a point release (and further SOL if you're at the mercy of a Linux package maintainer or you want to use a version of Firefox that still supports XUL-based addons)."
The notion that this was a deliberate ploy to get more people to turn on Studies is surely conspiracy-theorist mumbo-jumbo, but nonsense like this makes me wonder.
This is not the case. Please see my response downthread: https://news.ycombinator.com/item?id=19872490
Like, just link to the XPI. Not that hard. The unexplained reluctance to do so is suspicious.
Now that we have a stable fix, we will publish an XPI with the option of direct installation for users of older, unsupported versions of Firefox (all the way back to 52) who have opted out of automatic updates.
> Manually installing the hotfix XPI makes cleanup a bit harder now that we have a proper fix. E.g., without coming from Studies, there's no study to ever end.
The language around enabling Studies for the hotfix also claimed that once the hotfix installed, one can feel free to turn off Studies. Could similar language not have been included for the XPI approach (e.g. "once the fix is applied, you can uninstall this add-on")? Or is this a case where the extension does have to be installed (at least until the user upgrades to a point release with a fixed certificate)?
Alternately, do extensions have the ability to uninstall themselves? If so, then perhaps the extension could install the new certificate and immediately uninstall itself (or, in the "extension has to be installed for the fix to exist" scenario above, uninstall itself if it detects itself running on an updated Firefox and/or flag itself as incompatible with Firefoxen newer than the latest affected version)?
Alternately, is there no way for Firefox itself (e.g. in a point release) to explicitly blacklist an extension?
Alternately, is it possible to revoke the certificate/signature for that extension such that Firefox deems it invalid and disables it (using, presumably, the same mechanism and rationale as what caused this particular bug)?
Seems like this is a problem with multiple potential solutions besides "just do it as a Study". Even if it really is/was unsolvable, I feel like power users would be perfectly happy with getting the quick fix in exchange for subsequent cleanup being on them; ain't ideal, but it's better than waiting for multiple hours for Studies to work its magic.
> Direct installation also makes it harder to quickly respond to any bugs we might discover in the initial revision of the hotfix.
I'm sure there are some people out there who would be happy to test the XPI while having Telemetry enabled so y'all can get all that juicy fresh debugging data :)
> For users who cannot update to the latest version of Firefox or Firefox ESR, we plan to distribute an update that automatically applies the fix to versions 52 through 60. This fix will also be available as a user-installable extension. - https://blog.mozilla.org/addons/2019/05/02/add-on-policy-and...
(Disclaimer: I work for Mozilla)
Finally some good news. This is what I suggested in one of the previous threads: there should be a delivery channel for important updates, and a channel for experiments/telemetry/whatnot. Some other HNer said it was an unrealistic expectation "because manpower". Guess what, it isn't. This is how things should always be.
The article doesn't explain why elevated privileges are required to apply the update.
(I work on the Firefox updater and am a Mozilla employee)
Nobody deserves that.
I'd expect add-on usage to follow a Pareto distribution, so re-signing the most important ones would have helped a lot of users. Why didn't they go down this route anyway? Not enough manpower to do it without diverting resources from the other, more important fixes?
Republishing was one of the options we were investigating early on. However, the problem is that it only fixes things once you check for and install the updated version signed with the new certificate. Firefox would still have disabled your installed version that had an expired certificate.
Firefox checks for addon updates every 24 hours, but it checks in with Normandy every 6 hours. Thus once we had stopgap fixes shipping to users via Normandy, and a proper fix of a new Firefox release in progress, republishing addons wasn't necessary.
An add-on being deactivated doesn't trigger an update check for said add-on? Well, the complexity of a graceful update attempt for an unloadable add-on probably outweighs the benefit in such rare cases.
But the main reason I had a hard time with you guys discarding this path is probably addon stats like . The spike in downloads made me think it could have helped at least some users. But on second thought, it might also have been caused by extensions getting re-enabled post fix. 2m downloads at 4m DAU makes me guess this was caused by something automatic.
Shouldn't it be impossible to generate a new cert (with a different expiry date) that ends up having the same public key as an existing cert?
If you have the secret key of the original certificate, you can use the same key material, and just use different meta data (like expiry date).
The signature is over the contents of the certificate, so the certificate cannot change without the signature changing.
The public key, though, is just part of the arbitrary information that the certificate is intended to secure. Much like multiple certificates can be issued with the same subject name (but varying other details), multiple certificates can be issued with the same public key.
For example, in the TLS context, you can produce an unlimited number of CSRs (certificate signing requests) off of the same private/public key pair used for TLS. It's a common practice to generate a new private/public key pair every time you generate a new CSR in order to mitigate potential compromise of the private key, but even that practice is becoming less common because changing the public key prevents pinning it using e.g. HPKP - today, some clients establish trust by verifying the public key against both the certificate and some other separate method (often TOFU). This is a separate practice used alongside certificate verification, intended to mitigate some of the security concerns around the certificate infrastructure.
In this case, as in Mozilla's case here, it is a practical requirement to issue a new certificate with the same public key, because clients expect the public key to remain constant for various reasons.
But you can always re-sign the same keypair from the root with a new, non-expired certificate. And since this keypair (the intermediary) signed all the individual add-ons to begin with, it will just magically work.
Remember that certificate is nothing but a message that is signed by some "higher" keypair, which says that some "lower" keypair is trusted.
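With the pyca/cryptography API (recent versions), the point is visible in code: only the metadata (serial, dates) changes, while the public key is reused. Names and lifetimes below are illustrative, not Mozilla's actual ones:

  import datetime
  from cryptography import x509
  from cryptography.x509.oid import NameOID
  from cryptography.hazmat.primitives import hashes

  def reissue_intermediate(root_key, root_name, old_public_key):
      now = datetime.datetime.now(datetime.timezone.utc)
      return (
          x509.CertificateBuilder()
          .subject_name(x509.Name([x509.NameAttribute(
              NameOID.COMMON_NAME, "Add-on Signing Intermediate")]))
          .issuer_name(root_name)
          .public_key(old_public_key)          # same keypair as the expired cert
          .serial_number(x509.random_serial_number())
          .not_valid_before(now)               # only the dates really change
          .not_valid_after(now + datetime.timedelta(days=730))
          .sign(root_key, hashes.SHA256())     # the root vouches for it again
      )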
If an intermediate certificate becomes compromised, you can revoke it and issue a new intermediate certificate with your still secure root without the need to push out new binaries.
TL;DR is "In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z."
What’s the point of re-validating installed add-ons in the first place, then?
Don't know whether to be happy or scared that the TLS validation code is "relatively well understood" :D ! I assume it's just a sub-optimal choice of phrasing.
These are two entirely different domains which follow entirely different rules.
Current Firefox behaviour is still broken, even after the “fix”.
That’s not inspiring a lot of confidence, to be honest.
I assume I missed something in reading this.
My Kubuntu boxes had the patch on Wednesday, not before.
> Second, we immediately pushed a hotfix which suppressed re-validating the signatures on add-ons.
Wait, that's not right is it!?
The only hotfix I'm aware of (and I was following this pretty closely, as you might guess by my dozens of comments on the topic) was the addon installed via the studies system.
That addon didn't suppress the re-validation of signatures, it installed a new certificate and then triggered the re-validation of signatures immediately. It left the validation that happens on a 24 hour cycle alone.
> This study sets app.update.lastUpdateTime.xpi-signature-verification to 1556945257.
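(That magic number is just epoch seconds, presumably "now" at the time the hotfix was built:)

  from datetime import datetime, timezone

  print(datetime.fromtimestamp(1556945257, tz=timezone.utc))
  # 2019-05-04 04:47:37+00:00 -- shortly after the intermediate expired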
It seems super simple to do, but in practice IMO is harder than it seems. Most major cloud providers have been hit by at least one cert expiry causing an outage in the past year... Hell, likely in the past month.
This doesn't surprise me at all, certs are hard.
Either today's engineers are sub-standard, or the foxes rule the henhouse.
Current bug for issue:
This was discovered after the expiry caused a similar process to kill data.
I mean, that's the most crucial add-on to keep working: ad blockers.
When I visit a site with ad-blocker off I feel compelled to clear all caches and cookies and go have a shower.
> We’ll be running a formal post-mortem next week and will publish the list of changes we intend to make, but in the meantime here are my initial thoughts about what we need to do. First, we should have a much better way of tracking the status of everything in Firefox that is a potential time bomb and making sure that we don’t find ourselves in a situation where one goes off unexpectedly. We’re still working out the details here, but at minimum we need to inventory everything of this nature.
I'm sorry, but when your business is to enable 30,000 extensions to work, your business is also to make sure the certificates that enable those extensions don't expire tomorrow. That's the core of the job, not even a side project or something.
They should simply streamline the "normal" update channel for that purpose, not invent new "channels."
Specifically, not all binaries in the directory should have to change just to push an update where only a few lines of code are different.
I knew it. They are going to use THEIR fuck-up to justify why they need even more remote control over, and access to, the Firefox installed on users' machines.
Except it wasn’t working for those few days. Anyway, older versions of Firefox didn't enforce add-on signing, so they are not affected by this fix.
Doesn't Firefox use the normal PKI authentication mechanism? Their reaction reads as if this were a surprise, with re-signing the intermediate cert as the first step, instead of talking about the bypass or hack of the whole PKI trust chain.
Based on some of the comments here, I think one has to understand that it is not just about timestamps and validity. PKI checking is per transaction and on a continuous basis. It is NOT just based on signing but also on the CRL (certificate revocation list), which is also key.
I read the blog a few times. I feel frightened, not enlightened. It seems they are not on the ball. A minor mistake (forgetting to renew a cert, like O2 (not sure, but I heard it was the same issue)) shed a lot of light on these issues.
Do they even have a CPS... :-) or :-(((
> A SHIELD study must be designed to answer a specific question.
Why have they abused it again here to deploy a hot fix, breaking their promise and policy that they put in place last time they messed up?
Or am I ignorant of some part of the story or the technical details?
I feel a complaint like this verges unhelpfully into the pedantic.
> [...] we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down. It was great that we are able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies and that’s a reasonable preference [...] but at the same time we need to be able to push updates to our users
It's a difficult situation to be in: some users do not want any changes being applied automatically, but when something breaks, changes need to be applied. It sounds like the Firefox team is doing everything they can with respect to both ends of the spectrum.
This seems like a very good compromise and is honestly more than they needed to do imo
It’s bad optics at the very least. Users who opted in for the update were in fact entered into studies they explicitly wouldn’t have wanted to be in without the lure of an earlier update.
We are also completely deleting all Telemetry and Studies data received in the week following the incident to ensure we respect people who had concerns like yours, but enabled Studies in order to receive the hotfix.
Specific details and timestamps are in the post at https://blog.mozilla.org/blog/2019/05/09/what-we-do-when-thi...
That said, nuking this data is the first good thing Mozilla has done in this whole fiasco. It's a small but real act of contrition, so kudos for that.
GP used the word "felt" and was expressing that he felt a certain way about enabling Studies. You're nit-picking a conversation and it has gone like this:
A: I felt that $x.
B: I'm sorry that you felt that $x.
C: "I'm sorry that you felt ..." is an insincere apology.
Yes, some people use this trick to get out of admitting guilt or responsibility but this is not an example of that.
And that's the minimum. Anything less than that is shifting the blame.
> "In order to respect our users’ potential intentions as much as possible, based on our current set up, we will be deleting all of our source Telemetry and Studies data for our entire user population collected between 2019-05-04T11:00:00Z and 2019-05-11T11:00:00Z."
> Second, we need a mechanism to be able to quickly push updates to our users even when — especially when — everything else is down. It was great that we are able to use the Studies system, but it was also an imperfect tool that we pressed into service, and that had some undesirable side effects. In particular, we know that many users have auto-updates enabled but would prefer not to participate in Studies and that’s a reasonable preference (true story: I had it off as well!) but at the same time we need to be able to push updates to our users; whatever the internal technical mechanisms, users should be able to opt-in to updates (including hot-fixes) but opt out of everything else.
Mozilla previously told us
> we have created a set of principles that we will always follow when shipping a SHIELD study to our users, and two principles are most relevant to this situation.
What question did this hotfix answer? None. So what’s the point of the policy and the promise if they’re going to disregard it? It was supposed to be there to stop what went wrong last time. It’s like they disabled a safety mechanism put in place after the last bad accident.
Given, of course, that the truly right action of not letting the certificate expire in the first place, was no longer possible.
I’m sure they thought the same about the Mr Robot addon, but their judgement was wrong so this policy was put in place.
This was a user-friendly move, and while it's unfortunate that it was necessary in the first place, your criticism reads like a "gotcha."
I don’t get it. Distributing something that was not a study as a study was what upset people last time. So they promised not to do it again. And they have just done it again.
People got upset because an add-on for a commercial TV series auto-magically installed itself in their browser without their manual intervention.
You can't just abstract that out into an ethics framework of, "If Firefox sends non-X over a channel reserved for X, then users get upset." I mean, you can, but you're going to be rightly confused and misunderstood.
In this case, they were pushing a hotfix for a critical issue affecting all Firefox users. It was only pushed through the Studies system because it was the best option to get the fix to as many users as possible while they worked on a Firefox update.
Implying that this is in any way equivalent to looking glass is completely disingenuous.
I’ve seen some folks trying to explain it away and compare the Normandy preference system to standard auto-updates, acting like this is no problem if you already trust them for auto-updates.
Flat wrong. This is a dark pattern by Mozilla pure & simple. It’s confusing, hard to disable fully, and clearly can be abused for non-experiment modifications to the user’s settings.