Private by Design: How We Built Firefox Sync (hacks.mozilla.org)
529 points by feross on Nov 14, 2018 | 174 comments

Sadly, I believe that Firefox Accounts — upon which Firefox Sync is built — are insecure in execution. The security of the whole system relies on keeping your password secret from Mozilla, but as you can see when accessing https://accounts.firefox.com/oauth/signin?scope=profile&clie... your browser downloads the JavaScript it will use to derive your authentication token from Mozilla.

There’s nothing stopping Mozilla from serving all users malicious JavaScript which sends the password in plaintext; there’s nothing stopping Mozilla from serving it to just 1% of users, or even to a single targeted user. They could be forced to do so by a government with control over them; presumably some number of their employees could likewise be suborned by a state or criminal enterprise.

This should all be done by code compiled into the browser and triggered from the UI, rather than JavaScript run within a web page. Yes, Mozilla could also serve malicious Firefox binaries, or even publish malicious Firefox source, but the odds of someone in the world noticing that are significantly higher than a single targeted individual or small number of targeted individuals noticing.

Or maybe I’m missing something? I don’t think I am, though: that Firefox Account page is just HTML served from accounts.firefox.com; it’d be easy enough to just make it a simple form submission and capture the user’s password. And with the password, the security of the system fails completely: it’s simplicity itself to decrypt all stored data with it.
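For concreteness, here's a rough Python sketch of the kind of client-side derivation the article describes: one password stretched into an authentication token (which the server sees) and a separate key-wrapping key (which it never does). The salt strings, iteration count, and single-block HKDF here are illustrative simplifications, not wire-compatible with the real protocol:

```python
import hashlib, hmac

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # First block of HKDF-Expand (RFC 5869); enough for 32-byte outputs.
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def derive_keys(email: str, password: str):
    # Client-side stretching: the server only ever sees the auth token,
    # never the password or the key-wrapping key.
    salt = b"identity.mozilla.com/picl/v1/quickStretch:" + email.encode()
    stretched = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000)
    auth_pw = hkdf_expand(stretched, b"identity.mozilla.com/picl/v1/authPW")
    unwrap_key = hkdf_expand(stretched, b"identity.mozilla.com/picl/v1/unwrapBkey")
    return auth_pw, unwrap_key
```

The point of the split is that a server storing only auth_pw (or a hash of it) cannot recover unwrap_key and so cannot decrypt the synced data. The whole scheme collapses only if the password itself leaks, which is exactly why serving the derivation code from the web is the weak point.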

Hi, Firefox Accounts developer here. You're correct in your understanding that the login flow is ultimately driven by a webpage, and this is a deliberate trade-off that we made in the interests of reach and usability of the system.

It's certainly a trade-off that not everyone is comfortable with, but we're confident it's the right one for the majority of our users. You can read some previous discussions on the topic in these bugs (and additional suggestions/feedback therein is definitely welcome):

  * https://bugzilla.mozilla.org/show_bug.cgi?id=1034526
  * https://bugzilla.mozilla.org/show_bug.cgi?id=1447656

>The reason the login form is delivered as web content is to increase development speed and agility

You saved some sprints but invalidated the purpose of the project. Very agile.

>Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.

The safety of the default installation is crowdsourced across all users and can't be targeted. The safety of the JS I load from Mozilla is not, and I would have to verify it every time. Unless I'm misunderstanding something, it can never be as trustworthy.

A more complete comment from rfk on the bug tracker:

> The reason the login form is delivered as web content is to increase development speed and agility. You're right that web content has a larger potential attack surface than code that's built into the browser, but using web content also brings other kinds of security benefit that may not be obvious. That agility meant that during the incident in [1] we were able to respond quickly and effectively to protect users' data, and to roll out an updated login flow containing an email confirmation loop. It means that when we ship two-factor authentication over the coming weeks, it will be immediately available to all users on all platforms. It means we can address Bug 1320222 in a single place and be confident we won't lock out older devices. And it means we can easily bring new Firefox apps like Lockbox into the Firefox Accounts ecosystem.

> Our approach has been to embrace the benefits of web content while trying to reduce the potential attack surface as much as possible. That includes some simple things like hosting the web content on its own server to reduce exposure to application server bugs, and shipping default HSTS and HPKP settings for the accounts.firefox.com domain. It also includes some in-browser measures to prevent interference with FxA web content, such as (the currently private) Bug 1415644. As a future step I'd like to see us implement content-signing for accounts.firefox.com and have it enforced by the browser, following the example of things like Bug 1437671.

> Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.

"Every time" for this use case is once per browser install, at the moment you perform the authentication with Firefox Sync, which is the same as the number of times you'd want to verify the binary right before authenticating.

The tradeoff they made here has essentially zero impact on the number of times you need to verify their code; it's just a matter of whether you'd have to verify browser-native authentication code or authentication code delivered through a website written in JS, at the moment you authenticate.

A concern like the one raised in this thread is certainly valid for websites with expiring sessions, which you can log in and out of and switch accounts on. And we certainly do need better tools around signature verification and version pinning for websites, like we have for binaries (content-addressed networks like IPFS may have good answers there).

But for this use case, it's not a practical concern by any measure, and all this alarmism seems really misdirected.

You're still not addressing the ease with which a targeted attack can be directed at a single user.

In order to compromise firefox native code, they would have to compile malicious code and ship it to everyone. My distro maintainers would need to include the malicious binary in their repos, including a signed hash of the compromised binary, and I'd need to install it, where my package manager would verify the hash.

In order to compromise a single user's browser session, they'd simply need to fingerprint the user's browser and then serve them different content than everyone else gets. No hashes or signatures on javascript, no safety in numbers, etc.

If someone is using a package manager that uses code signing then indeed, the binary is harder to attack than the JS (if only because the package manager would also need to collude).

However, a lot of people get their software from downloaded .exes or auto-upgrading installations. For them, JS or binary are equally vulnerable. (All it takes is a Mozilla signature.)

Besides, it is undeniably better to only be vulnerable to an active attack from Mozilla than to be vulnerable to a passive attack from them.

Most distributions disable auto-upgrade in Firefox, for many reasons (security and auditability being two of the main ones), so you won't get auto-upgrade from a distribution.

And even the "download .exes from the internet" use case is precisely as secure as downloading JS from the internet that is verified once per install. To attack someone who has an auto-updating Firefox and downloaded it from the internet, you need to intercept and attack TLS -- but only when the upgrade happens, which is a fairly limited opportunity. The JS attack has the exact same properties if the above comment (that it only gets downloaded once per install) is true.

So therefore it is strictly less secure in the optimal case, and it is no more secure in the sub-optimal case. So security really isn't a strong argument (the real argument is that it allows for more "agile development" -- which is an understandable argument if you cop to that being the only reason for such a design).

If you can attack TLS, the game is over: you can't trust anything. A huge majority of Firefox users use the built-in update mechanism. Making life harder for the majority of users to improve the security of a select few is a questionable decision. And if you're really insisting, you can always install some addon which will compute the hashes of JavaScript resources.

> Making life harder for the majority of users to improve the security of a select few is a questionable decision

I agree in theory, though as an aside this isn't true for distribution packages because usually they are GPG signed with keys that are in TPMs on the release machines. Of course any other internet communication relies on TLS not being broken.

But another attack would be modifying one of Firefox's mirrors to host a malicious Firefox build (not a TLS attack but an attack on a specific mirror). GPG detached signatures for distribution packages protect against this and many other such problems (obviously attacks against a distribution's build servers would be problematic, but the same applies to any software project).

Though to be fair, I don't know if Firefox's auto-updater uses an update system like TUF or a distribution-style update system (which is mostly equivalent in terms of security) which would protect against these sorts of problems.

> Making life harder for the majority of users to improve the security of a select few is a questionable decision.

I don't understand how logins being built-in to the browser is making life harder for the majority of users. It wouldn't make a difference to them. It would make a difference to the development team, but one could easily argue that the development team should be willing to make life slightly harder for themselves in order to make Firefox users more secure.

> So therefore it is strictly less secure in the optimal case, and it is no more secure in the sub-optimal case. So security really isn't a strong argument

I agree. I was arguing for having some form of e2e encryption (like Firefox currently has) as opposed to not having e2e encryption. I wanted to argue against the idea that, because the e2e was implemented in JS, one might as well not have it.

Then, regarding the gap between e2e in JS vs e2e in binary, my point was that JS is just as good in most cases.

> Most distributions disable auto-upgrade in Firefox, for many reasons (security and auditability being two of the main ones), so you won't get auto-upgrade from a distribution.

Does that mean that the code is only signed by the package distributor, and not Mozilla? Because in that case, the package manager becomes a single point of failure. Then again, I guess that is always the case. Still, it would be weird that, as far as Mozilla trust goes, a signed exe from the internet is better than a signed package from your preferred package manager.

In openSUSE our build system can be configured to auto-check the signatures of the source archives used for building, so you can check the builds to make sure that we are building from official source releases (assuming upstream GPG detach-signs their source tarballs -- something I recommend any software release manager do).

But most distributions do their own builds, and without reproducible builds being universally available -- not to mention that distributions usually have minimal compiler hardening flag requirements as well as patches in some cases -- you couldn't reuse signatures for the final binary. Also the entire package is getting signed, so the binary's signature wouldn't be sufficient (and checking it on-install would be quite complicated as well).

> Still, it would be weird that, as far as Mozilla trust goes, a signed exe from the internet is better than a signed package from your preferred package manager.

I think that has always been the general case, since distributions are an additional layer of people maintaining a downstream copy of a project. But don't get me wrong, most distributions have processes that allow you to verify that large projects like Firefox are built from the genuine upstream sources.

There's also usually several layers of human and automated review before a package upgrade can actually land in a distribution.

The vast majority of Firefox users receive updates from Mozilla via the auto-update mechanism, which would also be vulnerable to compromise in a similar way.

(A Linux distribution could also be compromised and used in a targeted way of course)

>> then serve them different content than everyone else gets

To help my understanding: to achieve an attack like this, would the attacker need to circumvent SSL on the client, or take over the script-serving web server? Or is there another attack vector that I'm not seeing?

The attacker in this case would be Mozilla itself. No need for an MITM. In this hypothetical, a government agency contacts Mozilla and says "Here is a canvas/HSTS/other fingerprint. Please serve this malicious code when this fingerprint accesses the login."

The point is that Mozilla can single out individual users for targeted attacks, whereas they could not do that if they had to put the malicious code into Firefox itself.
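The targeting concern can be sketched in a few lines of hypothetical server logic (all names here are invented for illustration). Only the targeted fingerprint ever receives the malicious payload, so audits by anyone else come back clean:

```python
# Hypothetical sketch of why server-side targeting is hard to detect.
GENUINE_JS = "token = derive_auth_token(password);"
MALICIOUS_JS = GENUINE_JS + " exfiltrate(password);"
TARGET_FINGERPRINTS = {"a1b2c3"}  # supplied by the hypothetical coercer

def serve_login_js(fingerprint: str) -> str:
    if fingerprint in TARGET_FINGERPRINTS:
        return MALICIOUS_JS  # one user gets this, once
    return GENUINE_JS        # every auditor sees only this
```

With a shipped binary, by contrast, every user receives the same bytes, so there is no per-request branch point like this to exploit quietly.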

Right, I see. So the barrier with Firefox itself is that the malicious code would have to be built into the product and served as an update. Even in that scenario, Mozilla could serve a malicious update to a single user, only it's harder to fingerprint a target that way.

> You saved some sprints but invalidated the purpose of the project. Very agile.

This sums it up indeed quite keenly and with an amount of snark I personally appreciate. Thanks

How did you install Firefox in the first place?

You can verify hash with others or compile it yourself.
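A hedged sketch of what "verify the hash with others" amounts to in practice (the file name and the published hash are placeholders; the crucial part is that the published hash must arrive via a channel you trust independently of the download):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so large binaries don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# The published hash must come from a separately trusted channel
# (e.g. a signed SHA256SUMS file), not from the same server as the download:
# if sha256_of("firefox.tar.bz2") != published_hash: raise SystemExit("mismatch")
```

This is exactly the step that has no equivalent for JS served fresh on each page load: there is no stable artifact to hash and compare with other users.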

Did you personally, and at least one other trusted party, sign off on every single commit, or are you trusting Mozilla?

Where did you get the hash you're comparing against?

Firstly, no matter what, you're trusting the developers of the software you're running on your computer.

Secondly, the software (and/or its hash), just like this JavaScript, is delivered to you in a verifiably secure fashion i.e. SSL.

What's the difference?

Sure, this JS can change. Do you have automatic updates running for Firefox, or any piece of software on your computer?

With attack vectors it's also about ease of exploitation. In this case, the ease is high. If the person you are responding to compiles their own browser, the bar to put an exploit in there is already much higher. Yes, there are still attack vectors. And there always will be. The point is they're harder to access.

Your initial comment was pretty adamant that Mozilla had really messed up by delivering the code as JS. However, what is the attack vector that they've introduced by taking this approach?

It sounds to me like you're referring to a man-in-the-middle style attack. However, to the best of everyone's current knowledge, that's simply not possible with SSL.

It's only possible if the attack vector includes having already compromised the user's computer and installed a root certificate. At which point this is all pretty moot.

I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.

In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show anywhere else).

> I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.

My apologies, that's what I get for reading on my mobile.

> In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show anywhere else).

You raise a reasonable point. It is indeed something everyone should be aware of. It's mostly a matter of trust, not security.

However, the same is equally true of someone you trust changing the binaries, source and/or hashes that are delivered to you; whether you got those from Mozilla, or somewhere else.

For example, the relatively recent Handbrake release compromise - https://news.ycombinator.com/item?id=14281808

“That’s simply not possible with SSL”

I agree that we don’t currently know of easy attacks on SSL if you’re pinning certs (which it sounds like Mozilla does here). But all you need is a rogue CA to MITM SSL if you’re not pinning certs, so I don’t think “simply not possible” is an accurate description of SSL as generally used by the broad web-dev community.

The question is how hard it is to detect tampering. My linux distribution builds firefox from source and signs the build. The builds are also checked to be reproducible.

Raising the bar is a good thing.

I wasn't aware that any distribution (besides Tor Browser) was building Firefox (or anything really) reproducibly.

There's debian's https://reproducible-builds.org/ effort, but I thought that wasn't making much progress lately, nor was it deployed.

Could you provide more info on what distro you're using, or how they're doing this?

S/he may be referring to Gentoo Linux.

> Do you have automatic updates running for Firefox …

No.

> … or any piece of software on your computer?

Also, no. But even if I did, there’s a world of difference between automatic updates from e.g. Debian and automatic updates from Mozilla.

> there’s a world of difference between automatic updates from e.g. Debian and automatic updates from Mozilla.

In what way?

This is obviously somewhat anecdotal, but...

I'm the developer of Heimdall, software that flashes firmware onto Samsung phones. The software quite literally has the ability to replace almost every piece of software running on your phone. If it were compromised, it could not only own a user's phone, but also potentially everything the user accesses on said phone.

Sure, my software is open source, and I encourage anyone interested to inspect the code; I'm sure there are bugs. However, the `heimdall-flash` package in the official Debian repositories... I didn't make it, and I have no connection with whoever did. Now, don't be alarmed: despite being several years out of date, to the best of my knowledge it's a perfectly good package, and I'm thankful that the maintainer went to the effort. However, it would be so easy for someone to have published a malicious package. This is pretty powerful software; it has significantly more power than root on your mobile phone.

I love Debian, both philosophically and in practice. But does it really deserve your trust more than Mozilla?

It's perfectly normal for Debian packages to be maintained by people other than the original developers of that piece of software, isn't it? Debian has more than 60,000 packages but doesn't have 60,000 package maintainers – the roles are quite separate.

For example, Linus Torvalds doesn't maintain the Debian kernel packages. If whoever does were to put malicious code in the kernel packages, that would be very bad, just as if Heimdall were compromised, which is why Debian has a relatively small set of trusted package maintainers and doesn't let just anyone put code in the official distribution.

> Debian has a relatively small set of trusted package maintainers and doesn't let just anyone put code in the official distribution

There are presently 2619 official Debian maintainer GPG keys[1].

Considering the scope, that's not ridiculous, but I wouldn't call it small.

[1] http://ftp.debian.org/debian/pool/main/d/debian-keyring/

Dist repo?

This is very interesting and thanks for clarifying, but if you concede that there is a security trade-off here for the sake of usability, then isn't this, by definition, not "Private by Design"?

As in: you chose principles other than privacy to guide your design?

Nobody purely chooses privacy or security to guide their design. An implementation of Firefox sync that was purely, 100% private by design would be airgapped, it wouldn't sync over a network.

Arguably, a private by design implementation of Firefox sync wouldn't even exist. You significantly increase your number of attack vectors by making your session available on multiple devices. What happens if your Android phone is compromised? Better to only have your session on one device.

Obviously I'm being hyperbolic here, but the point I'm getting at is that security isn't black and white, and you will always be making tradeoffs for usability, no matter what the context is.

What that means for "private by design", I dunno. Maybe it's just a buzzword. Maybe it's just a matter of degree. Other people can debate that if they really want to. But I do know that the moment you put doors on your house, it's less secure than it used to be.

The actual valuable question is, "is Mozilla's tradeoff good enough for usability that it justifies the decrease in security?" I'm not sure whether the answer to that is yes or no.

The privacy is at least verifiable, in the sense that users can at least look at the implementation themselves and (granted, with some difficulty) potentially detect changes.

This is much better than simply sending your password off to a third-party and having to trust that the company is doing what they say they're doing.

Sorry, but you (Mozilla) deliberately crippled your system to the point where Mozilla, certain key personnel, whoever operates the data centers, or anybody running some MITM middlebox could, if they wanted to or were compelled to, stealthily target specific users or large groups of users: your servers (and any successful MITM) have the capability to slip in compromised code on a request-by-request basis that sneaks out the actual password, thus compromising all the user's data you store.

Then you go ahead and publish articles how your system is "Safe" and "Private by Design" underpinned by "here is SOME MATH to prove it!".

And just now, you state that you "traded off" the "Safe" and "Private by Design" properties for "<unspecific marketing speech, something about reach and usability>", but are somehow "confident" that falsely advertising your broken implementation as having properties it does not possess is the right thing to do (for most users). wat?!

Why is the code that derives stuff from the password, and the UI for entering the password, not in the browser itself? Surely not for the benefit of the users, as there is no real usage/UX difference for them between "https://accounts.firefox.com/" and "chrome://firefox-account/".

Why not ship the password handling logic in the Firefox binary?

Could this be solved by creating a WebExtension that completely replaces the frontend with a bundled copy and then including it in Firefox? If you trust the browser and therefore the bundled extension, you don't need to trust the server at all.

This response reminds me of this discussion https://news.ycombinator.com/item?id=17804916

If you have that much distrust you shouldn't run Firefox at all, since every page you load would be at risk. I doubt any browser would meet such a high bar.

There was a substantive suggestion for improvement: include the JavaScript that performs authentication in the browser's source code, rather than fetching it remotely.

Maybe an intermediate solution would be to use integrity hashes on the remote calls.

> Maybe an intermediate solution would be to use integrity hashes on the remote calls.

That was my immediate thought upon reading your comment and realizing that the bar to hit is really to be just as secure as if it were baked into the binary. But on reading the bugzilla links that _rfk posted in a comment on your original comment, I'm not sure it really works for their goals (whether or not you think those goals are correct or match yours).

If I understand what they are working towards, it's having the flexibility to provide fixes and features quickly while still providing security. I'm not sure the features portion is as important as the credit they give it, given we're talking about security here, but quick fixes are good. I imagine they are looking at a way to provide a specialized context or something similar to lock down what can and cannot be done if the served content is malicious, but I'm not entirely sure what that would look like.
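For what it's worth, the "integrity hashes" idea mentioned above is essentially Subresource Integrity. A small Python sketch of how the integrity value is computed (the HTML snippet in the comment is illustrative, not Mozilla's actual markup):

```python
import base64, hashlib

def sri_value(content: bytes) -> str:
    # Subresource Integrity format: "<alg>-<base64 digest>" (W3C SRI spec).
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode()

script = b"console.log('hi');"
print(sri_value(script))
# A page could then refuse any served script whose digest doesn't match:
#   <script src="/app.js" integrity="sha384-..." crossorigin="anonymous">
```

The catch, and probably why it doesn't close the hole on its own: the HTML carrying the integrity attribute is served by the same server as the script, so SRI only helps here if the pinned reference ships inside the browser itself.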

I don't think the JavaScript for authentication can be client-side. Authentication of passwords must be done on the server.

Did you read the article? The system is designed so that an authentication token is derived from the password, and this token, rather than the password itself, is what the server authenticates against. Since the encryption key cannot be derived from the authentication token, this shields the content of the stored encrypted data from the server itself.

GP rightly noted that deriving the authentication token with code served by the server at log-in time renders most of the security architecture of the system rather moot, unless the user has the time to verify the JavaScript they are served every time they log in.

> unless the user has the time to verify the JavaScript they are served every time they log in.

Which is once per install, the same number of times the user would have to verify the native code.

There's actually a great difference, because for binary distribution it simply suffices that the user verify the cryptographic hash of the binary they downloaded. Then they can be confident that they were served the correct publicly distributed binary, and not a subverted version.

JavaScript by its very nature can not be easily, if at all, verified. Even if you verify that you are being served the same code as other users and the code is not subverted at one point in time, it's very much expected that the code may change at any time at the web developer's discretion. This is also probably the very reason why Mozilla implemented it like this (so that they can update the Sync experience branding without updating the browser.)

Even if you hard-coded the signature of the JavaScript library file responsible for the cryptographic operations in the binary, the browser DOM itself would require modifications so that no other UI-logic script in the page could sniff your password as you enter it.

> JavaScript by its very nature can not be easily, if at all, verified.

Yes it can, in the same way you mentioned in your previous paragraph. Mozilla posts the hash of the JS source file, and any user who wants to verify the hash of the source file they were served can do so in exactly the same way they verify a binary. It's literally calling sha1sum on the js file instead of the binary.

> This is also probably the very reason why Mozilla implemented it like this (so that they can update the Sync experience branding without updating the browser.)

I'd partially agree here - it's implemented like this so that they can update sync without updating the browser. If there's an issue found, mozilla can fix it, change the JS that is being served, and update the hash of the file. Sync branding (to me) has nothing to do with it.

> Even if you hard-coded the signature of the JavaScript library file responsible for the cryptographic operations in the binary, browser DOM by itself would require modifications so that any other UI logic script in the page would not be able to sniff your password as you enter it..

Yes, but this is moving the goalposts from the user verifying the js. This is true whether the code is in the binary or not. If the code goes in the binary, and is shown as part of the DOM, you need to modify the DOM logic to ensure nothing else can see it.

Not really, since one can verify the Firefox code oneself, or hire others to do it, or rely on the fact that it’s a matter of public record and do forensics analysis after the fact should one be compromised.

But a targeted JavaScript file sent only to oneself a single time … that’s practically impossible to validate or prove.

Most people download Firefox binaries directly from Mozilla. They can't verify the code (even if they knew how).

On larger Linux distributions, the trust is less with Mozilla and more with the distro-package maintainers - use of dynamically remote-loaded JavaScript here shifts the trust back to Mozilla; if you trust your maintainers but not Mozilla, then this is a problem.

You really need to trust both. A browser vendor has so many layers to attack you, if they're malicious, that there's no way around it. Same for package maintainers.

So are you suggesting to use a private API to login to accounts.firefox.com? And what about using accounts.firefox.com in other browsers?

How could this even possibly be mitigated, though? At some point you have to trust something, right? This argument could extend all the way down to your OS, chipset, whatever. Unless you've built every part of your machine from scratch it's always going to rely on trusting a component you know nothing about.

If you include this code in firefox itself, you can review the code and verify that the official firefox binaries use that code.

But what about the code you run to verify the binaries? You have to trust that. Or are you reviewing every bit by hand?

Trusting the binary is a one-time effort. Here you have to trust the JS every time you authenticate.

Even one time, comparing the hash of a git commit or an installer is annoying but practically doable. I'd have no idea how to safely do something like this for a website. And I for sure ain't able to audit those ~200 kB of JS by myself.

But my point is you also have to trust that your underlying OS, or chipset, or even some other software on your computer doesn't have some way to thwart the entire effort. If the thing you're using to do your verification is itself compromised, then you're just as screwed.

So you never update Firefox? For most users it automatically updates itself.

For this use case, authenticating is a one-time effort as well, per browser. If you're authenticating again, you're almost certainly doing it on a different device/browser, at which point you'd have to verify the new browser binary anyways.

Sign-ins to Firefox Accounts (used to access Mozilla services as web pages) don’t expire after some time? I didn’t know that.

If Mozilla wants your passwords there's nothing to stop them adding code to Firefox itself that detects when you are typing your password in and sends it to them. It could be limited to just you in that case too.

This is really to protect against two things:

1. Legal requests for your data. It's not actually been tested whether the government can force an entity to insert a backdoor for a specific user. I think everyone hopes they can't do that, but does anyone really know?

2. Data breaches.

> If Mozilla wants your passwords there's nothing to stop them adding code to Firefox itself that detects when you are typing your password in and sends it to them. It could be limited to just you in that case too.

Users who consume Firefox from distributions, rather than directly from Mozilla, are much better protected from this. "Nothing to stop" is not true in this case.

Sure but then you're just trusting a different party (the packagers). Now you have to trust two groups of people.

This is a good example of why Firefox is so important. Mozilla's incentives, unlike those of companies making significant revenue from tracking-based advertising, align with the user. Google, for example, could have implemented Chrome's sync feature in a privacy preserving manner, but instead chose to use it as a method to collect their users' complete browsing histories.

If that's the case, why is Firefox Accounts not really designed with the end-user in mind? The design totally looks like a nice walled garden, completely custom and non-interoperable.

Just a few quick examples:

1. There's HAWK and OAuth2 and BrowserID there, all in the same system. That's a lot of undesirable extra complexity.

2. The Sync 1.5 protocol itself is full of non-standard weirdness, with odd stuff like X-Last-Modified (which is just like Last-Modified but with a UNIX timestamp - seriously?). While I haven't experimented with writing an adapter yet, I strongly suspect plain ol' WebDAV (with a tiny little bit of sub-standard collection stuff) would've worked just fine, and even better.

3. Poor documentation. The documentation was draft quality when Accounts and Sync were first rolled out (so it required some reverse engineering), but that's understandable. Things have improved since then, but I believe a lot of stuff isn't fully documented even today. For example, some undocumented magic is required to show the Accounts sign-in page on iOS.

My point is, the whole thing is absolutely not developer-friendly (unless you're a Mozilla developer), as it makes self-hosting and alternate implementations quite difficult.

Maybe my problem is that Accounts and Sync are not a standard (nor a proposal to become one), but just a documented vendor-unique API.

Keep in mind Weave was designed as the alternative to using LDAP for storing such information. The first browser supporting Firefox Sync (Weave) was Fennec on Nokia N8x0/N900. We're talking about 2008 or so here. LDAP is no longer used for this purpose, and the other alternative is proprietary and stores your data at a third party for data mining (Google Chrome).

End users or developers? I believe it's plenty friendly to end users, since it's simple to use and it works. However, it's certainly not so for developers; I've struggled myself with self-hosting Firefox Accounts/Sync.

Both. Developers are end users as well, and the ability to self-host (plus protocol standardization and the availability of alternate implementations) matters to non-developer end users too, even though they don't ask for it.

Openness is in the same boat as privacy. The average user would settle for a "we pinky swear it respects your privacy" sticker on the product, but we know they want real privacy. Same with openness. And Firefox Accounts & friends is not an open system; it just happens to be partially documented and to have a few FLOSS implementations of varying quality.

Kinto is a step in the right direction, though.

Doesn't Chrome let you use your own encryption key?

Yes, but it's not the default. From the article:

> One could, however, add a second passphrase that is never sent to the server, and encrypt the data using that. Chrome provides this as a non-default option.

The average user doesn't have the expertise to know that they have to configure an additional "master password" to keep Google from mining their data for ads.

>keep Google from mining their data for ads

I just got this morbid idea of Google mining stored passwords to recommend LastPass/1Password in ads based on your password strength.

Hmm I wonder if LastPass/1Password advertises on websites listed on haveibeenpwned

1Password is promoted by HIBP itself, a.k.a. Troy Hunt. I would not use it.

That is true; defaults are important. Firefox users know that because they have to disable the advertisements that appear on the Firefox new tab page by default.


I strongly dislike that they've done that. In their partial defense, the selection of recommended articles based on your browsing history is done on device.

In fact this was one of the primary motivations behind Sponsored Tiles in Firefox, to prove the viability of a privacy-preserving monetization model for the web.

It's interesting that while some could certainly characterize all of that screenshot's "Recommended by Pocket" stories as advertisements, recently they've started showing actual Sponsored Stories advertisements in that spot as well.

I'm downvoting this as the same off-topic whataboutism that comes up any time this topic is discussed on HN (and the fact that you felt the need to create a throwaway account to post it makes me think you knew what you were doing here).

I'm not sure account age is so relevant. I create new accounts all the time (several times per month) even though I stand behind what I write. It takes 15 seconds so hardly a big effort. I've noticed that it's possible to extract a lot of info from people's posts, in many cases deanonymizing them if you go through enough history and correlate with other sites. Maybe he is a bit privacy concerned.

It already encrypts sync data with your G username/password, but you can choose to change it in settings. Chrome 69 or 70 made it even easier to change it right from the settings.

Yes, but that leaves you with 2 passwords and isn't default.

The advantage here is twofold:

- Your encryption key is derived from your single Firefox password, rather than having 2 passwords.

- The ease of use of this system makes it possible to have e2e encryption by default rather than by opt-in.

I was involved in implementing the first browser-to-browser and browser-to-mobile sync. Opera Link (codename Pangea) was my first really large project in the company, and while the desktop team had multiple people working on the feature, the server side was designed, implemented and maintained by two guys plus a QA person.

We decided not to encrypt the user information for multiple reasons: user friendliness, data recovery and naivety. The internet was a different place back in 2008 (Opera 9.50). Both the other guy and I used the service ourselves, and were paranoid about the user data. Only we had access, and only we would see user data while fixing bugs.

I especially remember one bug where the database suddenly started growing faster than we expected. The frontend servers started eating more and more memory. In the end, we discovered that a larger porn site network had started serving their full-size images as favicons, and that Opera Link had started syncing multi-megabyte, Base64-encoded favicons in our already bloated XML protocol. That was the first time I was introduced to 'Rule 34'.

We maintained control over the server side and user data until the end of the Opera Link project. We were never asked to turn over user data to the company, except for aggregated stats for speed dials (only top-X lists, not including unique URLs).

After we had announced the end of Opera Link, the user data was scheduled for deletion, but we had to shut down the service a few weeks early, after a sysadmin started DBAN on the user data mount instead of the local disk ;-)

Thanks for sharing your experience.

I’ve been using Firefox since I heard about their containers. I’m happy they are pushing for all these privacy tools. For me these are the features that will make me choose it over Chrome.

Ever since they switched to the Quantum engine it also feels really fast, almost faster than Chrome, with the exception of YouTube, which sadly feels sluggish.

That's because YouTube uses non-standard HTML features that only work in Chrome and then polyfills them for every other browser, so it works like shit unless you use Chrome.

If YouTube page loads are slow for you in Firefox, you can install the "YouTube Classic" extension to opt-out of YouTube's (2017) Polymer design. This extension doesn't affect video playback, just the page layout.


Even though it's still not there, the YT team is working towards standards compliance [0]

[0] https://news.ycombinator.com/item?id=18053935

It's amazing how low the bar for quality is on major Google websites. How did such a broken feature end up running on YouTube for a year?

Works on Chrome, Google doesn't care about anything else.

An illusion of safety is worse than the lack of it. Firefox and Mozilla are just as untrustworthy as Google.

This might not align with the goals of Mozilla, but what I would love to see is for Firefox Sync to be extracted so that:

- It can be integrated into Google Chrome on desktop operating systems.

- It can be provided as a stand-alone app on iOS so that I can:

a) “Share” links to this hypothetical stand-alone Sync app from Safari in order to send them to Firefox Sync bookmarks storage.

b) Copy username and password from the stand-alone Sync app when using Safari on iOS.

It might sound like a peculiar setup but bear with me.

The situation is that I run Firefox as the main browser on my laptop, Chromium as the main browser on my desktop, and Safari as the main browser on iOS.

For a long time I’ve used Firefox Sync as the “main” storage of passwords, and I would go into Firefox settings and copy username and password from there on desktop and mobile. Even though I store the passwords and usernames in the other browsers on first login it’s still a bit cumbersome whenever I add a new account, log in for the first time, or change a password.

Recently I started using KeePassXC on my desktop, along with the browser plugins for it, in order to make things a bit smoother. Today I installed MiniKeePass on iOS and am going to find a good way of keeping it in sync with my desktop. In the end though, KeePassXC is not quite what I want, whereas the core of Firefox Sync is exactly what I want.

As for my bookmarks, they are all over the place. Some in Firefox, some in Chromium, some in Safari, some in exported files or copies of old homedirs, and a lot of bookmarks probably lost at various points in time. Again, the core of Firefox Sync is what I want.

There are a few child comments noting this, but to tie things together from someone at Mozilla:

- You can self-host your own sync server. It's not something we spend a lot of time making easy, but it is possible and supported in some capacity: https://mozilla-services.readthedocs.io/en/latest/howtos/run...

- You can definitely build extensions for other browsers (or a standalone app) that implement the Sync protocol.

> - You can definitely build extensions for other browsers (or a standalone app) that implement the Sync protocol.

In fact, GNOME's web browser already did so: https://blogs.gnome.org/mcatanzaro/2017/08/09/on-firefox-syn...

Last I checked self-hosting a sync server was pretty much not feasible. Happy to see that's changing.

The predecessor to FF Sync was the open-source Weave, which you could self-host. It was WebDAV-based, I believe. Sadly they deprecated it in favor of FF Sync.

EDIT: Forgot to mention Weave was exciting because you could integrate with other browsers. And there was a Dolphin add-on which did just that.

Check out https://lockbox.firefox.com/, which is an iOS app that syncs with Firefox Sync, and integrates with the iOS 12 password management interface. It doesn't do bookmarks, but it should fulfill part "b)" of your requirements nicely. It's still in Beta according to https://testpilot.firefox.com/experiments/firefox-lockbox but it works very well for me.

Aren't both Chrome and Firefox's sync protocols open? At least, the Firefox Sync server is still self-hostable and open source, and at least the client side of Chrome's sync is open source since Chromium can sync as well...

I've never understood why someone didn't implement a Firefox Sync extension for Chrome or vice-versa. Is it technical or are people opinionated enough about browsers that no one has the personal need to develop such extensions?

For Firefox Sync on Chrome: it's completely possible, but the crypto is enough of a pain in the ass to deter most people.

(It’s also going to be very difficult to implement a robust sync system this way; integration with the underlying storage is all but required for that.)

In my opinion, crypto is the easy part. I think the only problem I had when I worked on my own Accounts & Sync re-implementation (a crude hack, I admit) was some MAC-related issue with BrowserID, where different (old) Firefox versions had generated different assertions. I don't really remember what it was.

The real problem is that the data model is significantly different. It's possible, but it's a lot of boring work designing the appropriate transformations.

Chrome sync can also be self-hosted, and while I'm not sure how things are today, I'd say it used to be significantly easier to self-host.


Don’t Lockbox and Pocket sync to a Firefox account?

Cool, but there is a lot more that can be done to improve browser privacy by fighting fingerprinting, e.g. providing standardized sets of data about the host system via JS APIs: only list a standard set of fonts (plus those a user consciously chooses to expose), report the most common display resolution that is less than or equal to the machine's real one, obfuscate canvas/WebGL/cache response times, etc.

The extensions API should also be optimized so that privacy-enhancing extensions (like Ghostery, Adblock Plus and NoScript) work faster.

And there is a favourite extension of mine called "self destroying cookies" that is currently blocked for sending user data to remote servers unnecessarily, and for potential remote code execution. I believe its actual functionality (not the sending of user data to remote servers, but what it is meant for: one-click whitelisting of particular domains, keeping cookies only for those and deleting other sites' cookies as you close them) should be built into the browser.

I happen to also work on Firefox/Tor Browser's anti-fingerprinting work, so yea - we're trying to make improvements there too =)

Containers are a big Firefox feature (exposed through an add-on) in this category too.

As far as Web Extension APIs go, I don't know much about that, but if there's an API that would enable a use case Mozilla doesn't have a bug on and hasn't considered, you are welcome to file a bug explaining what you would like and what you would use it for, and the Web Extension team will consider it.

Thanks. Great to read you work on anti-fingerprinting, I'd name this among the most important subjects today.

I don't code privacy-enhancing extensions myself (that feels like "inventing my own crypto" to me: not enough competence to be sure I won't actually make things worse). I've just noticed Firefox becomes significantly slower when I enable them, so I guess there probably are some bottlenecks in the Web Extension APIs (or maybe not).

This is a really great article. The level of detail is perfect. I feel that I could implement a similar encryption system as a proof of concept after reading it. It seems so obvious but it really does go against the grain of many of the more common approaches (as explained in the article) because respecting the privacy of a user is so unusual.

If only Firefox Multi-Account Containers were a feature, not an addon.

The fact that containers don't sync well is probably my biggest frustration with Firefox at the moment.

It is many folks', and we appreciate the feedback. Hopefully things will get better soon: https://github.com/mozilla/multi-account-containers/issues/3...

Containers are actually built into Firefox. The add-on only manages the user interface.

Even if that is the case, they are effectively not.

The UI has serious holes, and containers do not work with Firefox sync.

I also haven't been able to find a way to clear the cache or history for a specific container.

At this point, containers are barely even useful. Putting the UI in an addon was an awful decision.

People who still use Chrome nowadays probably haven't heard of Firefox's Tree Style Tab plugin.

That's nice for those who use a lot of tabs, yes. But what I really appreciate is reader view (F9). What I long for is Chrome's page translation.

What plugin? What does it do?

Instead of tabs at the top of the window you get them along the side, which lets you have about 10x more tabs open while still being able to read the names of all of them.

I have been using it for years and would never use a browser without it.

I'd love to know, I've never stopped using Firefox and idk what plugin they mean.

Probably this one [0], though I dislike the branding at the top of the sidebar and the fact that, since FF lacks native support, you'll still have tabs at the top of the window.

[0]: https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...

You can disable those tabs on the window; you just have to mess with the browser chrome's CSS yourself. That's relatively easy, but it's basically the reason I haven't used it so far.

But now that you mention it - I'm going to give it a shot now.

Here's a list of tweaks that you can try: https://github.com/piroor/treestyletab/wiki/Code-snippets-fo...

Thanks! I was looking for the best snippet to hide it, and this one is a bit more specific than the one I was using. (Plus, it allowed me to hide the bar in full-screen.)

It didn't use to be like that, but the last few versions of Firefox have messed it up pretty hard.

> Other approaches
> [ list of 3 options ]

I would like to see an option 4:

Option 4: The server side of sync is open sourced, and I can run it on my own machine, and point my browsers at my personal instance. Then no data is ever on Mozilla's servers.

Firefox Sync actually allows this! https://github.com/mozilla-services/syncserver

This still uses FxA for authentication. You can self-host that as well, but it's not nearly as straightforward, and I'm not aware of good documentation on how to do so.

I was just looking into this. I thought it was?

I have not set it up yet, but is this not what you are looking for?


Mozilla seems to keep this very quiet, but from what I can see in this documentation it's extremely easy to run your own sync server, with only one flag in about:config needing to be changed and minimal build dependencies.

The harder aspect comes with the Firefox accounts server, as that requires a bit more configuration and deployment.

I would be very interested in seeing someone build a docker-compose setup for this, such that it can be automatically deployed by those who don't have the time/skills to set it up.
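For reference, a self-hosting sketch based on the syncserver README (the exact commands and default port may have changed since; treat this as illustrative rather than authoritative):

```shell
# Run your own sync storage server (Python build, per the project's README)
git clone https://github.com/mozilla-services/syncserver
cd syncserver
make build
make serve    # listens on http://localhost:5000 by default

# Then, in each Firefox profile's about:config, change the one flag mentioned above:
#   identity.sync.tokenserver.uri = http://localhost:5000/token/1.0/sync/1.5
```

This only replaces the storage/token side; authentication still goes through accounts.firefox.com unless you also self-host FxA.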

How does Firefox Sync handle a user changing their password? What if the password is changed from another device? Does it force users to first sign in with the old password and only then allow changing it, essentially re-encrypting everything and syncing again? If so, does that mean forgetting the password equals losing all synced data?

Yes, sort of (more about the process: https://github.com/mozilla/fxa-auth-server/wiki/onepw-protoc...). Key difference: you don't need to sync all the data up again, since there's a separate key for that, which doesn't change.

That said, if you forget your password you do lose remote synced data. The hope is that you have at least one device connected that still has the data, which will upload it again in that case. This doesn't work in all cases (there's a pretty common and unfortunate pattern where someone reinstalls their OS, doesn't remember their password, and loses data as a result).
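The "separate key, which doesn't change" idea can be sketched in a few lines (assuming the XOR-based key wrapping described in the onepw protocol wiki linked above; all names and the random stand-in keys are illustrative):

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

kB = os.urandom(32)          # bulk key that encrypts sync data; never changes
unwrap_old = os.urandom(32)  # stand-in for the key derived from the old password
unwrap_new = os.urandom(32)  # stand-in for the key derived from the new password

wrapKb = xor_bytes(kB, unwrap_old)  # what the server stores: kB XOR unwrapBKey

# Password change: unwrap with the old derived key, re-wrap with the new one.
# kB itself is untouched, so the encrypted sync data needs no re-upload.
wrapKb = xor_bytes(xor_bytes(wrapKb, unwrap_old), unwrap_new)

assert xor_bytes(wrapKb, unwrap_new) == kB  # same kB recovered after the change
```

Losing the password means losing the ability to compute the unwrap key, which is why a forgotten password makes the wrapped kB (and hence the remote data) unrecoverable.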

You can also generate and store a separate recovery key to help avoid data loss, https://support.mozilla.org/en-US/kb/reset-your-firefox-acco....

Thanks. I've been using Firefox Sync ever since it came out, but I don't remember it ever mentioning these risks. I think Firefox should mention this when new users enable sync for the first time. It should also offer to generate recovery keys when users start using sync, and for some time after if they haven't.

There's no risk to your data. It's not like this will delete your local data; the only risk is when doing a password reset in some (fairly rare) cases, and that flow does attempt to make the risk clear.

I don't disagree about recovery keys, they're rather new though, eventually it wouldn't surprise me if we did something like that.

There is a risk if you were relying on sync to be a backup solution (which in the absence of discussing the risk of remote data loss, someone might be tempted to do).

Sure, and we do warn during password reset.

I’m unsure of what alternative there is that avoids that risk while still providing the current privacy benefits.

Ah, so local data is not encrypted. It's only encrypted before packets are sent to the servers?

Yes, for the most part. For passwords the story is a bit more complex: they're encrypted locally on all platforms, either with your master password (Desktop, probably Android but I don't remember) or with a random key stored in the OS storage on iOS.

But we never use the sync key to encrypt local data; many users never set up sync, and have no sync key.

You seem to know a bit about encryption, which is why it baffles me: how does Telegram do this? Does it need a connected device in this way too, so one can upload the encryption key if it's lost? If no device is connected, can they do it, and how? If yes, can Firefox copy that approach?

Telegram chats by default are not end-to-end encrypted. It does have e2e-encrypted chats as an option, but they're only accessible on one device. So:

>how does telegram do this?

...the short answer is they don't.

I was referring to them saving encrypted data on their servers. Isn't that e2e encrypted? If not, does that mean an adversary with access to their database knows my chats?

That is not end-to-end encrypted, no. The company has all the information necessary to retrieve your plaintext conversation data. They can (and likely do) encrypt this data at rest within their infrastructure, and they can make it as hard as they want for an individual employee to access this information, but fundamentally you're trusting that their internal controls are sufficient.

I've never heard of HKDF before, but it is really an elegant solution to this. My first guess at how to do this would have been something stupid like splitting the authentication token in half and zero-padding it. But this would have significantly reduced the entropy available in both keys, reducing the search space for the authentication token and the encryption key and making them much more brute-forceable. HKDF instead expands the key, and essentially requires the server to be able to reverse HMAC-Hash to find the encryption key from the authentication token.

What I'm confused about is that they seem to be using HKDF as a hash [1] and not as a key generation function. I think this is just as secure as what I was expecting, but it seems more complicated and doesn't jibe with the purpose of the RFC [2] as I read it.

[1] https://github.com/mozilla/fxa-js-client/blob/1d92f0ec458ace... (separate HKDF calls with the same IKM)

[2] https://tools.ietf.org/html/rfc5869

From the RFC: "Its goal is to take some source of initial keying material and derive from it one or more cryptographically strong secret keys."

In our case, the initial keying material is the output of PBKDF2, and the two outputs we use serve as an encryption key and a bearer token (essentially a password, but I call it an authentication token to avoid confusion with your actual password). There are less complicated ways to do this, but this one is cryptographically conservative.

"essentially requires the server to be able to reverse HMAC-Hash to find the encryption key from the the authentication token" - the server can't do that; which is why the server can't figure out your encryption key from your authentication token. (The best the server could do would be to try a password guessing attack.)

Right, what I'm confused about is that first bit. My understanding from the RFC is that the implementation should have looked something like

    return pbkdf2.derive(password, email, PBKDF2_ROUNDS, STRETCHED_PASS_LENGTH_BYTES)
      .then((quickStretchedPW) => {
        result.quickStretchedPW = quickStretchedPW;
        // stretch to twice the length necessary
        return hkdf(quickStretchedPW, kw('generated'), HKDF_SALT, HKDF_LENGTH * 2)
          .then((generated) => {
            // split output into two cryptographically strong keys
            result.unwrapBkey = generated.slice(0, HKDF_LENGTH);
            result.authPW = generated.slice(HKDF_LENGTH);
            return result;
          });
      });
but my read, in pseudocode, of what they end up doing is closer to this:

    hashed_password = hash(password, 'salt1')
    hashed_auth_tok = hash(hashed_password, 'salt2')
    hashed_unwrap_key = hash(hashed_password, 'salt3')
which seems secure, because the server can't reverse hashed_unwrap_key to find hashed_password, and thus shouldn't be able to calculate hashed_auth_tok. However, the point of HKDF is to make multiple cryptographic keys, while it looks like in practice we are just using it as a one-way function.

Ah okay, I understand better.

The (second) pseudocode you have is right (the second two 'hash()' calls should be 'hkdf()', and the first should be 'pbkdf2()').

The first is an alternate way to do it. But for cryptographic reasons that tend to be buried in formal proofs, you generally don't want to derive twice the key length you need and then split it into two keys. (Besides the formal proofs (as I understand it), it's just easier to make an indexing mistake and reuse key material. One also becomes more vulnerable to a collision attack, although that might not make sense in this context; it's related to the formal proofs.) I will note that sometimes, especially in embedded spaces, you'll see people taking this shortcut in the name of speed or code size.

Instead you want to fully derive two keys using separate HKDF calls with separate 'labels'. This provides strong domain separation for the keys.
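A minimal Python sketch of that pattern: HKDF per RFC 5869, with a distinct 'info' label per derived key. The label strings mimic the 'identity.mozilla.com/picl/v1/...' convention visible in fxa-js-client, but treat the exact strings, salt, and inputs here as illustrative, not as FxA's actual values:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869 extract step: PRK = HMAC-Hash(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869 expand step: T(n) = HMAC-Hash(PRK, T(n-1) | info | n)
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def derive_key(ikm: bytes, label: bytes, length: int = 32) -> bytes:
    # A distinct 'info' label per key gives the domain separation described above.
    return hkdf_expand(hkdf_extract(b"", ikm), label, length)

# Stretch the password first (1000 PBKDF2 rounds, as in the article; the salt
# here is illustrative - FxA builds its salt from the user's email), then
# derive two fully independent keys with separate labels.
stretched = hashlib.pbkdf2_hmac("sha256", b"hunter2", b"user@example.com", 1000)
auth_pw = derive_key(stretched, b"identity.mozilla.com/picl/v1/authPW")
unwrap_key = derive_key(stretched, b"identity.mozilla.com/picl/v1/unwrapBkey")
```

Because the two HKDF calls differ only in their label, the outputs are independent: knowing auth_pw (which the server sees) tells you nothing about unwrap_key short of inverting HMAC or guessing the password.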

But I'm mostly trying to provide you with a pointer to what to read about to convince yourself. I'd start at https://crypto.stackexchange.com/search?q=domain+separation

If you find out we're doing something that still seems weird though, please send me an email!

Got it, thanks for the response!

The new Firefox Sync (v1.5) is surely great and has good usability, but I preferred the old Firefox Sync (v1.1) because it didn't require a Mozilla account (for password retrieval). The new service requires two servers for a fully self-hosted setup (one for managing accounts and one for sync), although it is possible to use Mozilla's account server.

Also, from the description of the protocol ( https://github.com/mozilla/fxa-auth-server/wiki/onepw-protoc... ) it was not clear to me what exactly is encrypted. Only the passwords? I had the impression that the bookmarks were stored in a way readable by Mozilla.

Update: it is not clear what kind of data is in "Class-A" as described here: https://wiki.mozilla.org/Identity/AttachedServices/Architect... ... It also says:

> we can share e.g. bookmarks with a third party (by telling them the decryption key)

Everything is Class-B. Class-A and Class-C are relics of ideas we had back in 2013 that we did not pursue.

Specifically, one of the consequences of Class-B is that your encryption key is derived from your password. If you forget and reset your password, you necessarily lose access to all of your synced data. (Though we did recently add support for recovery keys: https://support.mozilla.org/en-US/kb/reset-your-firefox-acco...)

The idea behind Class-A was to let users choose to place some less sensitive data--like bookmarks--into a bucket which survived password resets at the cost of Mozilla holding a copy of the encryption key.

The idea for Class-C was to allow users to generate an encryption key entirely separate from their password, as with Sync 1.1, but at the cost of a more complex setup when adding a new device: you have to either maintain a backup copy of the key, or always have a previously configured device on hand for PAKE. Our experience with Sync 1.1 taught us that this does not work for real people at scale; people often lost data as a consequence of this design.

> Our experience with Sync 1.1 taught us that this does not work with real people at scale; people often lost data as a consequence of this design.

Far better to lose passwords, bookmarks & history than to have them exposed, which is what the current design does (because the user's password can be stolen if the user logs in to his Firefox Account via the HTML page).

There are reasonable countermeasures I can take against losing my passwords: I can record them elsewhere; I can reset them if I lose them. But the only reasonable countermeasure I can take against Mozilla stealing my password is never to log in to a Firefox Account (the alternative, hand-verifying the HTML and JavaScript bundle myself on every login attempt, is patently unreasonable).

So that's what I do: I don't use the Firefox Sync functionality, because the security of the system is broken.

1000 is a very low number of PBKDF2 iterations. OWASP recommends 10,000, and also recommends using Argon2 instead. Apple does 10 million.
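The difference in work factor is easy to measure directly (Python sketch; the password and salt here are made up, and FxA's actual salt is derived from the user's email):

```python
import hashlib
import time

password = b"correct horse battery staple"
salt = b"example-salt"  # illustrative only

# Each 10x increase in rounds costs an attacker (and the client) ~10x the work.
for rounds in (1_000, 10_000, 100_000):
    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, rounds)
    print(f"{rounds:>7} rounds: {time.perf_counter() - t0:.4f}s")
```

At 1000 rounds the stretch takes well under a millisecond on modern hardware, which is why it adds so little to the cost of an offline guessing attack.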

They have an open bug to change the number of iterations. Also, there are more rounds server-side, so at least the values in the database are pretty safe in case of a leak.

Firefox has really been appealing to me lately. Unfortunately, on my MacBook Pro, Safari is more energy efficient and it's not even close. If Firefox could close the gap there a bit I would make the switch.

I'm a long time Firefox user and I haven't noticed a difference in battery usage. I have a MacBook Pro and I've got other things that drain my battery of course, like the Scala or Haskell compilers :-)

That said, Mozilla is apparently working on it. To improve energy efficiency, try setting gfx.compositor.glcontext.opaque to true in about:config.

Courtesy of @pcwalton: https://news.ycombinator.com/item?id=18048844

> Another interesting wrinkle is that Brave does not keep track of how many or what types of devices you have. This is a nuanced security trade-off: having less information about the user is always desirable… The downside is that Brave can’t allow you to detect when a new device begins receiving your sync data or allow you to deauthorize it. We respect Brave’s decision. In Firefox, however, we have chosen to provide this additional security feature for users (at the cost of knowing more about their devices).

Why not store a hash of the user's devices instead?

I love Firefox, but sync is by far its worst aspect. You can't use password syncing if you have a master password. Why? And more importantly, why doesn't the user get told this, rather than passwords just silently not syncing? Who decided that was what the user would want or expect? Even without the master password I find password syncing rather hit and miss.

Hmm, personally I would love to have a more human-manageable profile, so I can forge a personal FF profile template with not only user CSS, options, etc., but also extensions that get picked up and updated by FF from first run and after.

I'm not interested in syncing my stuff to others' computers, private or not, encrypted or not. I know how to back up my data and I prefer my own system. This need not conflict with casual PC users who are unable to back up their data or to manage anything by themselves. It's only a matter of choice: mainstream only, or mainstream + tech-savvy users through simplicity (MIT school).

While I can appreciate the technical merits of this (and rauhl's valid criticism), Firefox Sync is simply something I don't want, no matter how securely it's implemented.

I don't want anything to carry over between browsing sessions. Nothing. Not cookies, not super-cookies, not browser cache, not fonts. (About all I'm OK with carrying over is my uBlock plugin. There may be some other things, like the DNS cache and HSTS, though they're deeper than my realm of knowledge. I reckon keeping those is an acceptable trade of security vs. tracking.)

As much as possible, I want each browsing session to be fresh, as if I were starting the browser up in a VM, and then reverting state after closing. I reckon that makes it that much harder to track me across the internet.

Firefox Sync seems to be the antithesis of this. (And I'm not very trusting of its password-saving feature, either.)

However, given that YouTube's front page seems to be picking up on my interests, maybe my measures are ineffective. (No 3rd party cookies, clear everything after browser close.) Perhaps I should give up hope and just permanently stay logged in.

(As an aside, one thing I was very sad to find out... I VEHEMENTLY DESPISE Firefox restoring all my tabs after a crash. If something evil caused the crash, I don't want to give it a second shot! ((Or worse, opening a browser in front of a coworker or client, and it restoring tabs I wouldn't want them to see. There's no warning or consent obtained, just... "HEY! Here's all your old tabs!")) However, disabling this feature ALSO disables "open previously closed tab", which I'm OK with. It seems a bit strange that these features are linked; perhaps someone can explain it to me.)

>I don't want anything to carry over between browsing sessions. Nothing. Not cookies, not super-cookies, not browser cache, not fonts.

You can customize what you want synced...I for instance only sync bookmarks, logins, add-ons, and firefox settings. History, open tabs, etc, are not synced.

>However, disabling this feature ALSO disables "open previously closed tab", which I'm OK with. It seems a bit strange that these features are linked, perhaps someone can explain it to me.

I would think both features use the same underlying method for making snapshots of open tabs.

Option 4: Fejoa auth, which uses the same password for authentication and for encryption without revealing the password to the remote party:


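The general shape of such a scheme — deriving independent authentication and encryption keys from one password, so the server only ever sees the authentication half — can be sketched with Python's stdlib. The iteration count and `info` labels below are illustrative, not Fejoa's (or Mozilla's) actual parameters, and a full HKDF would include a separate extract step:

```python
import hashlib
import hmac


def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand as in RFC 5869: T(i) = HMAC(PRK, T(i-1) | info | i)
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


def derive_keys(password: bytes, salt: bytes) -> tuple[bytes, bytes]:
    # Stretch the password first (count is illustrative, not a recommendation).
    stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    # Distinct info labels make the two outputs computationally independent.
    auth_token = hkdf_expand(stretched, b"demo/auth")        # sent to the server
    enc_key = hkdf_expand(stretched, b"demo/encryption")     # never leaves the client
    return auth_token, enc_key
```

The server stores (a hash of) `auth_token` and never sees `enc_key`, so even a fully compromised server learns nothing about the encryption key short of brute-forcing the password itself.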
How portable is Firefox Sync to other browsers? I find myself in need to keep both Firefox and Chrome open at all times (different websites work better with either of them). I would like to have at least my bookmarks and passwords synced—don't really care for history or tabs.

LastPass and others provide a cross-browser password sync service. But after Xmarks was discontinued, I could not find any cross-browser bookmark sync service. I was hoping Firefox Sync would be open enough for someone (preferably the Mozilla Foundation) to have built a Chrome extension to support it, but this hasn't happened, and doesn't even seem to be in the works.

I'm actually willing to pay a subscription for this functionality (i.e. bookmark/password sync across the major browsers) to anyone who supplies it without trying to lock me into their service or browser.

There are more details above, but the short version is that it is possible to build extensions for other browsers that work with Firefox Sync. The only one linked seemed to be for GNOME's browser, though, so perhaps no one has actually done it for Chrome.

What self-hosted alternatives are there to Firefox Sync these days? (i know that Sync itself can be self-hosted).

There used to be at least one bookmark add-on that allowed syncing with your own WebDAV server.

I use sync. Unfortunately, sending tabs works very unreliably. Sometimes I have to send tabs from one device just to receive the tabs I had already sent (like "refreshing" the tab sender)...

My experience with that feature is better. It may take a couple of minutes until it's actually synced and opened on the other device, but recently it's been quite reliable for me.

I wish there was a sane way to run your own Sync server (without email registration and authentication).

Is there a good open-source self-hostable cross-browser alternative?

All privacy aside, I think Mozilla is doing well lately. Firefox is fast and it has great dev tools since I last checked. Today I'm switching from Chrome!

I am curious why CBC was chosen over GCM?

For no reason other than "legacy reasons" - much of the client-side crypto code in the current Firefox Sync is inherited from an earlier system that predates widespread acceptance of GCM as a best practice. If we designed it from scratch today it would almost certainly be using GCM instead.
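For anyone curious about the distinction: the older style pairs a cipher (Sync uses AES-256-CBC) with a separate HMAC-SHA256 tag, i.e. encrypt-then-MAC with two keys, whereas GCM is an AEAD mode that handles both jobs with a single key and primitive. A rough stdlib-only sketch of the encrypt-then-MAC half is below; note the keystream function is a toy stand-in, since Python's stdlib has no block cipher, and the key/tag sizes are illustrative:

```python
import hashlib
import hmac
import os


def toy_keystream(key: bytes, iv: bytes, n: int) -> bytes:
    # Toy stand-in for the cipher; real Sync uses AES-256-CBC here.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    iv = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, toy_keystream(enc_key, iv, len(plaintext))))
    # MAC covers IV + ciphertext, computed with a key separate from enc_key.
    tag = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return iv, ct, tag


def unseal(enc_key: bytes, mac_key: bytes, iv: bytes, ct: bytes, tag: bytes) -> bytes:
    # Verify the tag (in constant time) *before* touching the ciphertext.
    expected = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, toy_keystream(enc_key, iv, len(ct))))
```

With GCM the ciphertext and authentication tag come out of one call with one key, which removes the main footgun of the composed scheme: getting the MAC-before-decrypt ordering or key separation wrong.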

Makes sense. Thank you for your answer!

Apple publishes detailed whitepapers on how they implement security. I'd suggest Mozilla get the iCloud security paper and copy the implementation. Way more reliable.

I like, actually love, Firefox. So as a major user I resent them for posting this article; it's somewhat unethical when no recent work has been done on sync and they are, as they claim themselves, not willing to touch this code for fear of breaking it. If you don't understand the code well enough to make changes, maybe don't write an article about it.

This is not true. I work on Firefox Sync full time, as do multiple other engineers.

Admittedly, the current version in Desktop/iOS/Android is in a sort of 'maintenance mode' (we still fix bugs, but don't work on new features or actively fix it up).

The reason for this is basically that those three versions are entirely separate implementations that share no code (they're also in languages that have integration difficulties on the other platforms, unfortunately, so we can't just settle on one).

We're currently rewriting it as a cross-platform module, and planning on replacing them.

Thank you for your efforts.

>We're currently rewriting it

...in Rust?

(I kid, although, it's a serious question)

Lmfao! That's amazing! Did not expect this response, and it really is >60% Rust code!
