> For Linux users, we removed the connection from content processes to the X11 Server, which stops attackers from exploiting the unsecured X11 protocol.
It's hard to overstate how much of a benefit this is in terms of security for those on Linux. Any application with access to the X11 socket effectively has the keys to the kingdom, because it can hijack other applications, including those running at higher privileges, at will. (There have been halfhearted attempts to address this over the years, but nothing that's been effective or widely adopted.) The only real solution for desktop app security on Linux is to forbid direct access to X11 entirely, and so it's a huge deal that Firefox is now able to do this.
In general, doing this sort of thing is a monumental undertaking for large applications like Firefox. Kudos to my former colleagues for pulling it off.
This seems like it could make Firefox one of the most secure Linux desktop applications?
For instance, if you are typing $SensitiveMessage, you would be far better off doing that in a Firefox 100 text-edit box in a browser tab than in an xterm, or Emacs, or your desktop environment's preferred text editor, or anything else.
I currently use "secure keyboard" in xterm but I know that has problems.
I'm pretty sure that there's nothing preventing other apps on your desktop from listening to keystrokes that are supposed to go into Firefox, unfortunately. This is a fundamental problem with the X11 protocol and as far as I know the only real solution is to switch to Wayland.
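To make that concrete, here's a minimal sketch of the snooping problem, assuming the XInput2 extension is available (this is roughly what `xinput test-xi2` does, and any unprivileged client with access to the X11 socket can do it):

```c
/* Sketch: any X11 client can subscribe to raw key events for the whole
 * display -- no special privileges needed.  Build with: cc snoop.c -lX11 -lXi */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int xi_opcode, ev, err;
    if (!XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev, &err))
        return 1;

    /* Ask for raw key presses on the root window, i.e. for every window. */
    unsigned char m[XIMaskLen(XI_LASTEVENT)] = {0};
    XIEventMask mask = { .deviceid = XIAllMasterDevices,
                         .mask_len = sizeof(m), .mask = m };
    XISetMask(m, XI_RawKeyPress);
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
    XSync(dpy, False);

    for (;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        XGenericEventCookie *c = &e.xcookie;
        if (XGetEventData(dpy, c) && c->type == GenericEvent &&
            c->extension == xi_opcode && c->evtype == XI_RawKeyPress) {
            XIRawEvent *raw = c->data;
            printf("keycode %d pressed in *some* window\n", raw->detail);
        }
        XFreeEventData(dpy, c);
    }
}
```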
Wayland has never been a pleasant experience for me, and neither has PipeWire. Everything the nix community fawns over seems to be regressive bullshit that breaks my system. systemd is only just now becoming acceptable, in that it hasn't fucked me too hard in the past year.
None of those other apps are constantly running code written by millions of different people, many of whom are actively attacking you, possibly right now, in one tab while you are typing in the other.
It has to make a Herculean effort because it is by far the worst possible app to be typing $SensitiveMessage in.
At least Kate or gedit probably doesn't have a second tab open running a JavaScript subprocess which is presently trying to attack you so it can cryptolocker you or empty your wallet.
Chromium and derivatives have had this for years. Firefox's isolation is making progress, but it has a ways to go before it matches Chromium.
FF devs are finalizing a utility process overhaul and laying the groundwork for CFI; for years, Chromium has had both with a much more thorough implementation than FF will in the near future. However, the browser space desperately needs competition so I'll take what I can get. Sigh.
The isolated content processes shouldn't have access to the other processes' memory. There might be shared memory regions, but it seems unlikely they'd store the keys to the kingdom in shared memory.
They are owned by the same user so they probably have access to everything the other processes have. And they can probably just read the access token from the environment which is accessible through the /proc filesystem.
Namespaces let you create isolated views of the file system, isolated views of processes (e.g. if you are in a new PID namespace and run 'ps' you only see yourself), users, network interfaces, etc.
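A minimal sketch of the PID-namespace example, assuming CAP_SYS_ADMIN (or add CLONE_NEWUSER to do it as an unprivileged user); error handling trimmed:

```c
/* Sketch: enter a new PID namespace; the first forked child becomes PID 1
 * there, so a `ps` run inside it (with its own /proc mount) only sees that
 * process tree.  Build with: cc pidns.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    if (unshare(CLONE_NEWPID) != 0) { perror("unshare"); return 1; }

    pid_t child = fork();          /* children created after unshare land in the new ns */
    if (child == 0) {
        printf("inside the namespace: pid = %d\n", (int)getpid());  /* prints 1 */
        return 0;
    }
    waitpid(child, NULL, 0);
    return 0;
}
```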
I'm not sure what Firefox does, I believe they use the Chromium sandbox, and I'm way out of date on that. It used to do some filesystem setup like hardened chroots, but I would assume that's been supplanted by fs namespacing.
They didn’t isolate all the Firefox processes from X11. Only the content processes are affected, i.e. the processes whose attack surface is rather massive.
But the content processes are still going to deliver their finished work items to the GPU process for rendering. The GPU process retains all the rights it needs, including talking to X11.
I don't see why you wouldn't still be able to do that. Firefox still uses X11; it's just that the content processes can't directly speak the protocol now, and must go over IPC to a more trusted process to do so.
Any reason you're using SSH X forwarding over something like Xpra? It also works over SSH, but lets apps persist even if the SSH connection is broken for whatever reason.
You can even run Firefox via Wayland and forward a persistent XWayland session if that's your cup of tea.
Scenario: evil.com wants to take advantage of a use-after-free in some DOM API to install a keylogger.
Before X11 isolation, it can: (1) get RCE in content process; (2) send events to your GNOME Shell to open a terminal; (3) send keyboard events to the terminal to download the keylogger, start it, and add it to some config file that makes it start automatically on future logins.
After X11 isolation, it will: (1) get RCE in content process; (2) get stopped because it can't send keyboard/mouse events to other applications anymore.
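For a sense of why steps (2) and (3) in the "before" case are so easy: the stock XTEST extension lets any X11 client synthesize keystrokes into whatever window currently has focus. A minimal sketch, assuming libXtst is installed:

```c
/* Sketch: with nothing more than access to the X11 socket, a client can
 * synthesize keystrokes that land in whatever window has focus -- a
 * terminal, an editor, anything.  Build with: cc inject.c -lX11 -lXtst */
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>
#include <X11/keysym.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    /* Type the letter 'a' into whichever window is focused right now. */
    KeyCode kc = XKeysymToKeycode(dpy, XK_a);
    XTestFakeKeyEvent(dpy, kc, True, CurrentTime);   /* press   */
    XTestFakeKeyEvent(dpy, kc, False, CurrentTime);  /* release */
    XFlush(dpy);

    XCloseDisplay(dpy);
    return 0;
}
```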
By the probability of this risk scenario (visit evil.com, be the victim of a successful RCE, with a particular method of privilege escalation that somehow gets all the right details to work) it sure looks to me like you overstated the expected benefit (negative loss).
I do concede that our utility functions may differ, if you actually believe that and aren't just inflating the importance of this issue.
Also I may have a bias in that I disable JavaScript by default, so the probability of such a risk is much lower. I tried to not make that assumption though, in judging the expected loss.
You are vastly overstating the difficulty of pivoting X11 access to complete system compromise with the "somehow gets all the right details to work" phrase. It's trivial.
People have literally gotten killed because of this type of attack. Yes, hostile nation-states probably don't care enough about you personally, but there are people for whom this is a life-and-death matter.
The one thing that reduces the impact of this is that it's Firefox on Linux, which is a niche browser on a niche OS, but desktop Linux Firefox is a product that Mozilla officially ships, which means that it's Mozilla's responsibility to protect its users.
> It's hard to overstate how much of a benefit this is in terms of security for those on Linux.
I assumed you talked about "those on Linux", not "those on Linux who have non-negligible probability of being targeted by hostile nation-states".
It's often difficult to step back and re-evaluate initial claims, and we sometimes resort to tactics like zooming in on improbable contexts to justify them. I think it's healthy to face criticism that may snap us out of that, so I'm writing this comment. It's late now, though, so I won't be able to follow up any more.
Nation states aren't the only threat. There are billion dollar organized crime and fraud networks that have the means to both collect, and successfully weaponize, such RCEs.
It's been a bit since I've looked, but browser RCEs are particularly expensive and coveted. They're a potential vector for 1-click soft/hard roots in homebrew and hacking communities, and of course, 1-click RCEs are invaluable to anyone who has the resources to obtain them and victims to exploit.
They are definitely rare, but once they exist they can spread like wildfire because people (usually correctly) trust browsers to sandbox websites. Put another way, browsers run untrusted programs day in and day out on virtually all sorts of otherwise trusted devices (including enterprise devices) so the impact is gigantic.
The best idea would be to use Wayland in addition to these new Firefox process isolation features. That provides multiple layers of defense against possible sandbox escapes and exploitable bugs in the Wayland server.
I don't think you actually understand what's going on here.
The untrusted programs in this case would be JavaScript and other exploitable things that Firefox interprets every time you visit a webpage. Unless you run Firefox itself under a different UID or inside of a VM, you are affected by this, regardless of whether you are running other "untrusted" software or not.
Maybe you are thinking that using X11 as an attack vector requires a "bad" program running in addition to firefox? If so, that is not the case, any program that is running on X11 that can execute commands or read and write files could be used for this purpose. A terminal emulator is a pretty good example of such a program.
>If such a user wants to run untrusted programs, he'd use a virtual machine anyway.
In your imaginary world. Real users don't know what a VM is and expect their system to be able to run untrusted programs without exposing all of their data. Which they are able to do on most modern systems.
We are talking about users of the Linux operating system. Many of them do know what a VM is. If they don't, or if they expect to be able to run untrusted programs willy nilly, then the benefit of Firefox doing this is, again, basically zero.
As I cannot reply to your reply, I will write the response here. By using Wayland, surely the direct benefit of this change to you is zero...
I use Linux and I don't audit the source and dependencies of every single program I use or run things in VMs. I do expect wherever possible that programs run in a sandbox with the most limited set of permissions. I have been running almost everything in Flatpak on Wayland and it seems to clearly be the future.
I firmly believe that isolation is the future of endpoint security and I like experimenting with Mandatory Access Control (MAC) on Linux. Tomoyo is my favorite major MAC/LSM in the Linux kernel.
If you have a newer kernel (5.13 or greater), you may like to experiment with Landlock. It's pretty cool, and unlike Firejail, no suid is required. Here's a Landlock wrapper for Firefox:
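To give a flavor of what such a wrapper does under the hood, here's a minimal sketch of the Landlock ruleset API (illustrative paths and access bits, not the actual wrapper):

```c
/* Sketch: a Landlock ruleset that denies filesystem reads everywhere except
 * beneath /usr.  Kernel 5.13+ and matching headers; no root, no suid helper.
 * Build with: cc ll.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    struct landlock_ruleset_attr ruleset_attr = {
        /* Access types this ruleset handles (i.e. denies by default). */
        .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                             LANDLOCK_ACCESS_FS_READ_DIR,
    };
    int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                             &ruleset_attr, sizeof(ruleset_attr), 0);
    if (ruleset_fd < 0) { perror("landlock_create_ruleset"); return 1; }

    /* Carve out an exception: reads are allowed beneath /usr. */
    struct landlock_path_beneath_attr path_beneath = {
        .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                          LANDLOCK_ACCESS_FS_READ_DIR,
        .parent_fd = open("/usr", O_PATH | O_CLOEXEC),
    };
    syscall(SYS_landlock_add_rule, ruleset_fd,
            LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
    close(path_beneath.parent_fd);

    /* Lock it in: from here on, this process and its children can no
     * longer read ~/.ssh, ~/.gnupg, or anything else outside /usr. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
        perror("landlock_restrict_self");
        return 1;
    }

    if (open("/etc/hostname", O_RDONLY) < 0)
        perror("open /etc/hostname");   /* denied: outside the allowed tree */
    return 0;
}
```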
Thank you! And, yes, I agree. I don't want Firefox or Chrome reading ~/.ssh or ~/.gnupg or any other directories in my home that it has no business reading.
Maybe one day we'll have web browsers that don't have any C code. Nothing against C. It's a great systems language, but I'd rather my web browser not use it.
Browsing the web is probably the most dangerous thing the average computer user does.
> I don't want Firefox or Chrome reading ~/.ssh or ~/.gnupg or any other directories in my home that it has no business reading.
Both browsers already do this for the processes that are exposed to the internet. The software shown here additionally does it for the entire browser (with the caveat wrt uploading/downloading that I explained, and maybe some more gotchas that aren't immediately obvious).
(You may understand this nuance, but I wanted to point it out, as it's literally what the browser sandboxes do)
> Maybe one day we'll have web browsers that don't have any C code. Nothing against C. It's a great systems language, but I'd rather my web browser not use it.
Both Firefox and Chrome are primarily written in C++, not C. They do use C libraries, though, including libc.
Flatpak does exactly that. You can even fine-tune what folders you give to the app (Flatseal is great for this). xdg-desktop-portal makes things even better: app can only access files that you explicitly choose when prompted - kinda like on iOS.
> xdg-desktop-portal makes things even better: app can only access files that you explicitly choose when prompted - kinda like on iOS.
Ho-hum. I can understand the appeal of that idea, but in practice quite a few file formats and applications rely on implicit and/or file-format-specific relationships between multiple files. I.e. I as the user pick one file for opening, but in order to successfully carry out that task, the program actually needs to access quite a few additional files based on the initially opened file.
None of the sandboxing approaches I've seen so far has a really great story for that usecase.
AFAIK Android and Windows don't offer anything in that regard, no idea about Flatpak, and Apple at least seems to handle related files with differing file extensions, like movie.mp4 and movie.srt, but would still break down for more complex file formats where related/associated files don't share the same file name sans extension.
Plus it means you always have to go through the official OS file dialogues and can't e.g. just manually edit a path directly in the app's UI if that would be more convenient…
Flatpak's sandbox is pretty weak and there are many subtle loopholes that allow for trivial escapes. Relying on it as a security measure is not a great idea.
An example of such a thing, related to this thread, is the X11 socket being accessible even when the directory it appears to be contained in isn't bind mounted into the flatpak sandbox's mount namespace. This is because regular file system permissions do not apply to abstract sockets (which X11 can and does listen on).
I think unsharing the network namespace would fix this, and configuring X11 to not listen with any abstract sockets might be possible, but this is just one of many examples of trivial sandbox escapes that most people would never even consider.
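A minimal sketch of the abstract-socket point: the leading NUL byte in sun_path means no path on disk is ever opened, so bind mounts and file permissions simply never come into play:

```c
/* Sketch: connect to the X server's *abstract* socket for display :0
 * (Xorg listens on this name by default on Linux).  Nothing under
 * /tmp/.X11-unix is opened, so hiding or unmounting that directory inside
 * the sandbox doesn't help -- only a separate network namespace does.
 * Build with: cc abstract.c */
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    const char name[] = "/tmp/.X11-unix/X0";       /* display :0 */
    addr.sun_path[0] = '\0';                       /* NUL prefix = abstract namespace */
    memcpy(addr.sun_path + 1, name, sizeof(name) - 1);
    socklen_t len = offsetof(struct sockaddr_un, sun_path) + 1 + (sizeof(name) - 1);

    if (connect(fd, (struct sockaddr *)&addr, len) == 0)
        puts("reached the X server despite the bind-mount sandbox");
    else
        perror("connect");
    close(fd);
    return 0;
}
```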
The best bet to isolate untrusted software is to run it under a different UID (less safe) or inside of a VM (probably very safe).
Unix applications run as a user, so it's not like they have that permission. Looking at that profile, it restricts write access to the home directory to only the Firefox profile and some config files.
I guess that makes sense, but you'd have to be aware of it when uploading and downloading stuff (it would only work from a specific designated folder).
And where are all the valuable files stored, like family pictures, other browsers’ cache, ssh keys, etc.? In the same user’s home dir, so in practice most desktop apps do have uncontrolled access to everything on the hard drive, as per the now quite old xkcd comic ( https://xkcd.com/1200/ ).
Ideally, a “shadow” Download folder would be accessible to the process, and its content would be mirrored one-way into the real Downloads folder. Upload should display a file chooser dialog which runs in an entirely different process, and the chosen files should be in effect copied to the process’s file handles list.
AIUI this is basically how flatpak does it; the file picker is called a "portal" and is indeed how you pass in files that the program couldn't reach by default.
And that is a welcome change. What I dislike about the project is that it wants to be a packaging solution as well, and it is simply not a good one at that compared to the new-generation one, which is Nix. Linux really shouldn’t copy Mac and Windows on everything.
That sounds interesting. There must be apps that do require full access (like a finder alternative), so I wonder if you can know to trust an app to be isolated just by looking at it.
"New" is a relative term. Sandboxing has been required on the Mac App Store for roughly ten years now -- I doubt there's anything left from before that requirement was enforced.
It has some interesting side effects. For example, Zoom links don't work in the Flatpak version of Firefox. Ironically, the Zoom app Firefox tries to open is itself a Flatpak app. Not sure if this is by design or if there is a way to fix it.
It's a real shame that platform-native widgets had to be sacrificed for this to work. The easy way out was to switch to arbitrary app-drawn widgets across all platforms (which Firefox did for all basic DOM HTML elements); the "we'll do one better" alternative is to use app-drawn imitations of the system-native widgets (a la Qt), which Firefox is now doing for the scrollbar. But no one ever gets these widgets right (unnatural scrolling, non-identical behavior when clicking in the gapped region, different scrollable-to-nonscrollable widget ratios, colors not respecting certain subtle theming choices, etc., etc.).
I wonder if there was an alternative specifically for the scrollbar here - some way of obtaining an outer "shell" (via win32k, but then basically "orphaning" it so that you can't do anything besides kill it when you're finished) that just provides the window with an empty scrollable element that is then populated by the restricted process.
That ship has really sailed though; I think the days of native widgets are quickly coming to an end.
There already is a safe API with Wayland + some sandboxing layer. Problem is, for the longest time the Nvidia drivers didn't work with Wayland. They now do, so the reasons to use X11 are becoming vanishingly small.
Firefox already used uxtheme for that. A lot of programs and toolkits do, since the heavyweight HWND-based comctl32 controls aren't suitable for things like browsers because they're slow, hard to render without flicker, there's a limit on how many you can have, they don't play well with effects like opacity and transform3d() and so on. Even Internet Explorer did[1].
I guess uxtheme is unsuitable for win32k lockdown for some reason, probably because it's GDI-based.
(I used to work on the blog author’s team, and in fact dealt with some of the COM issues mentioned in the post)
You’re right, uxtheme can’t be called directly from content because of win32k dependencies. One option could be for content to request the parent to draw those controls, but as you can imagine there are drawbacks to doing that. As usual, there are a lot of trade offs involved.
So here we have a feature of Windows 8/10 that prevents the Win32U/Win32K system calls from being made.
When you write a Windows program, you call APIs from User32.dll, GDI32.dll, and Kernel32.dll. Those are the user mode libraries, and the main entry point to call the Windows API functions.
What's actually inside of those? User32 and GDI32 are pretty much stubs. Mostly, they have a small amount of code, then proceed to call functions in Win32U.dll. Then Win32U.dll makes system calls, causing Win32k (Kernel Mode) to carry out the functions. So everything from BeginPaint to GetWindowText is going to be a system call that's placed from within Win32U, then handled by Win32K.
Meanwhile, Kernel32.dll is a user-mode library (despite the name being "kernel32"), which mostly makes calls to NTDLL.dll. Then NTDLL makes system calls that get handled by kernel-mode components.
The isolation thing that Mozilla is using here does not stop the NTDLL system calls that Kernel32.dll uses, just the calls to Win32U/Win32K (GDI32.dll and User32.dll). So there needs to be other mitigation methods in place for the Kernel32/NTDLL stuff, such as reduced user privileges.
But preventing all the Win32U/Win32K stuff from being called does greatly reduce the attack surface.
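For reference, the lockdown itself is just a process mitigation policy that Windows 8 introduced. A minimal sketch of a process opting in at runtime (real browsers apply it at child-process creation, before the child runs any code, via PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY):

```c
/* Sketch: turn on the win32k syscall filter for the current process.
 * After this, any syscall into win32k.sys (i.e. everything routed through
 * Win32U.dll by User32/GDI32) fails; NTDLL syscalls into the NT kernel
 * still work.  Requires Windows 8 or later. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY policy = {0};
    policy.DisallowWin32kSystemCalls = 1;

    if (!SetProcessMitigationPolicy(ProcessSystemCallDisablePolicy,
                                    &policy, sizeof(policy))) {
        printf("failed: %lu\n", GetLastError());
        return 1;
    }
    /* From here on, a call like MessageBoxW() fails, because the underlying
     * win32k system calls are blocked in the kernel itself. */
    return 0;
}
```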
I can think of several things you could do to Kernel32 to try to lock it down:
* Patch out all the entry points of Kernel32 you don't want to be used. Then patch their corresponding entry points out of NTDLL. Then remove the code that invokes the system calls inside of NTDLL. Yes, you can simply overwrite user-mode code willy-nilly as long as you have write access to the process memory.
(Regular programs use the entry points. Someone trying to do something fancy might skip the entry points. If your hackers are using things like ROP chains and gadgets, chances are they don't care about entry points.)
* Create some kind of NT security context so that nothing interesting works anymore. I don't know how this works, but many NT functions want a security context passed in.
* When you are executing code and you have no idea where Kernel32 is, you can read the loader's module list out of the PEB (reachable via the TEB), and that finds you Kernel32 and NTDLL. Maybe there's something that could thwart that without breaking legitimate code.
In the end though, what could stop the process from attempting to SYSENTER from somewhere besides NTDLL?
A lot of people don’t know this, but Firefox uses the Chromium sandbox. I should know: I rubber-stamped the commit that first imported that code into the Gecko repo (it is regularly kept in sync with upstream).
That sandbox (and others, for that matter) more or less follows the steps outlined below:
How does this square with Daniel Micay (of GrapheneOS) saying[0] that
> Even with [Firefox's] attempt at a sandbox […], sites aren't ever cleanly separated into different processes. […] There is no sandbox containing anything afterwards beyond the app sandbox. All sessions and data for other sites is compromised. […] [Compared to Chrome,] Firefox is easier to exploit, [has] lots more low-hanging vulnerabilities and a half-baked weak sandbox. On Android, it has no sandbox at all.
Not saying you are wrong. But maybe you could elaborate and put things into context for me: How does Firefox's sandboxing differ from Chrome's if it's using the Chrome sandbox?
While googling for an answer I found another comment of yours[1] from 5 years ago. Could you possibly speak to what has changed since then and what the current status is? (At least to your knowledge since I understand you no longer work at Mozilla.)
Keep in mind that, in May of 2017, desktop Firefox did not yet even have full multiprocess support under some configurations. Obviously you cannot sandbox content processes that don't exist, so there were some prerequisites that needed to be dealt with first. That was pretty much wrapped up by the Firefox Quantum 57 release.
The other thing to note is that the Chromium Sandbox is more like a "sandbox construction kit." It features all kinds of knobs and dials that allow the developer to configure how strict it is. We used to compare adding multiprocess and sandboxing to Gecko to swapping out the warp drive on the Enterprise, while it was still at warp. You can't just flip a switch one day and be sandboxed; too much code existed under the assumption that it could access whatever OS resources it wanted, without restriction. It took time to modify that code to be aware of that restriction and act accordingly. Each time an iteration of modifications was completed, we were able to tighten the screws on the sandbox a little bit more.
Not only has that been very useful as Gecko has been migrated to running in a fully sandboxed environment, it is also a necessity even at the final stage. For example, modern browsers constrain their interactions with the GPU (and its driver) to a dedicated process. That process is sandboxed, but because that process needs access to graphics devices, it obviously needs a weaker sandbox than the other processes hosting web content.
IMHO the win32k lockdown is not "mission accomplished," but is a HUGE indicator of how much code has been migrated to support sandboxing. On desktop, Firefox processes are now site isolated and are disconnected from their platform GUI systems. That's huge -- I'm sure there are still deviations between the Chrome and Firefox sandbox configurations, but they're a lot closer now.
Firefox for Android still has a long way to go (I was working on that when I left Mozilla), but unfortunately it hasn't been treated with the same urgency as desktop. I know that they're still working on that and have been able to make a lot of progress now that site isolation for desktop has been released.
Finally, I should point out that sandboxing is a defense-in-depth measure, but a lot of armchair quarterbacks seem to only ever want to focus on comparing the sandbox between Chrome and Firefox, while completely ignoring how much more of Firefox is written using memory-safe languages. That's important too.
> Firefox for Android still has a long way to go (I was working on that when I left Mozilla), but unfortunately it hasn't been treated with the same urgency as desktop. I know that they're still working on that and have been able to make a lot of progress now that site isolation for desktop has been released.
This might explain Daniel Micay's perspective on Chrome vs. Firefox (on Android) then.
Yes, it does, at least if you go by how much money the sketchy vulnerability brokers are offering to pay. On https://zerodium.com/program.html a Chrome RCE+LPE is "Up to $500k", while the other browsers are all less.
That's another sampling of actual web visits, though it skews more tech-oriented of course, so it's going to be away from Safari/IE and more toward Firefox and Chrome.
I mean in the rewards. A lot of those "desktop apps" have embedded "WebViews", if you know what I mean, which would effectively be running outdated Chrome versions, which would be easier to exploit than the real thing.
I wonder how those $ amounts are arrived at; I don't see it in the FAQ. Maybe a third-party study of potential factors and prices (a quick search isn't turning up anything promising)? Surely market share/adoption is very significant, but something else must explain, e.g., 2.5x more for an Apache RCE than an Nginx RCE?
There are several factors that may affect per-app supply and demand.
- How expensive is it to discover a new vulnerability in a given app? (This may depend on code base maturity but also on choice of programming language, its inherent memory safety, and supply chain.)
- What privileges does a typical installation of the app grant once RCE is achieved?
- How hard is it to write a working exploit for a newly-discovered vulnerability, taking into account the security architecture that protects the app?
- Given a zero-day exploit, how many times will you have the opportunity to use it? How quickly will other parties discover it, is the vendor willing to provide patches, how long it is going to take, how much do the updates cost, and how difficult is it to upgrade the software in the field?
- Apps and computers tend to come in packs, and attackers love to move laterally. What opportunities would an attacker gain from lateral movement after gaining persistence in a given system?
- Market share and adoption may be skewed, as attackers may be interested in specific targets such as journalists or politicians, who may form a specific demographic with particular adoption rates, which can differ from those of the general population.
Ahhh, the breath of fresh air coming from a truly free market doing what markets do best: processing information in the face of uncertainty to the benefit of all! Don't you feel the soft touch of the invisible hand, gently working to raise the tide of security for all?
A constant stream of newly discovered (but long existing) remote code execution vulnerabilities has been the norm for years and no quick change in sight. Depending on who you ask, catastrophic, or manageable.
Practically speaking there's probably not a ton of difference. ITW (in-the-wild) exploits are pretty uncommon for both, and they're typically somewhat targeted. I haven't paid attention in years for this reason, but I'd guess that Chrome is ahead - mitigation techniques like site isolation are quickly improving in it and were adopted earlier as well.
In general my recollection is that Chrome splits up more of its components. It sounds like, based on this post, the gpu process is now separated and there's sandboxing efforts going into that, but I'm pretty sure Chrome has had that for years now.
The current mitigations seem to be doubling-down on getting rid of C++ memory safety errors (still the main source of security holes), with Mozilla pulling the Rust and WebAssembly card. So there's some divergence here, rather than parallel paths with one ahead.
It doesn't and even if it did it would still be a better browser than Firefox security-wise. Google poured a lot of money into it with really good results.
You can probably still have a meaningful bpf sandbox even without the setuid root helper (or user namespaces), but I'm not sure Chrome supports running that way (because it doesn't make much sense from a security tradeoff point of view). As ugly as it is, the odds of a bug in the setuid helper are probably smaller than the odds of a bug in the attack surface its mitigations remove.
But the setuid helper and user namespaces are used for other mitigations that can't be done through bpf. The guys who wrote the bpf sandbox have some nice articles about the distinction which you can probably Google.
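For anyone curious what the bpf layer actually looks like, here's a minimal sketch of a seccomp-bpf filter; real sandboxes whitelist syscalls and check the architecture field first, this just blocks one call for illustration:

```c
/* Sketch: install a seccomp-bpf filter that makes openat() fail with EPERM
 * and allows everything else.  No setuid helper or user namespace needed --
 * those exist for other mitigations, like hiding the filesystem and network.
 * Build with: cc bpf.c */
#include <errno.h>
#include <fcntl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it's openat, return EPERM; otherwise allow. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
        perror("prctl(SECCOMP)");
        return 1;
    }

    if (openat(AT_FDCWD, "/etc/hostname", O_RDONLY) < 0)
        perror("openat");   /* EPERM: the filter is in effect */
    return 0;
}
```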
Well looking at this specific feature, I believe chrome got site isolation mid-2018 and enabled it by default mid-2019. From what I can tell Firefox got it mid-2021.
I don't know much about the specifics of the implementations, but that seems like a significant difference for such a crucial security feature, especially post meltdown/spectre.
This is not correct - like the person you're responding to, you're mixing up security features. For one, Chrome predates Windows 8, where this mitigation was introduced, so it can't have supported it before it existed.
I think you're confusing the general "sandbox that restricts access to the kernel" with the specific mitigation here. Firefox also had the generic kernel mitigations long ago: https://news.ycombinator.com/item?id=31362935
But it's true Chrome had them first. The same thread points out that Firefox reuses Chrome's sandbox, so like in the first sentence, there are causality constraints that would make it hard for Firefox to support it before Chrome did :-)
Not as efficient, but uBlock Origin in advanced mode lets you get fairly close, although not with quite as fine-grained control from the main UI (however, you do get very detailed control via the logger and by directly writing rules).

The one thing I've had an issue with is that replacing a frame with a link to pop up the frame in a new window doesn't seem to work the same, at least last I checked maybe a year ago (the easily available method didn't do quite the same thing as uMatrix, and I gave up before figuring out whether it was possible to get uBO to do the same thing via some means). This mostly means that I've had to run Google JavaScript on sites with embedded YouTube that I didn't before, but otherwise there hasn't been much difference in practice once I got used to the uBO way of doing things.

The other major issue for me is the lack of "ruleset recipes", which as far as I can tell means you need to reload a bunch of times if you start from blocking and only allow certain stuff (such as the YouTube videos) for a particular session. I have left more scripts always enabled because of that, even when I don't need them much of the time.
I'm so glad they're working on this instead of unimportant minutiae like copy/cut frequently outright refusing to deposit its contents in the system clipboard on all three major platforms, or the browser simply freezing for up to ten minutes after the system wakes up from sleep while it syncs all changes made since then.
My understanding is that same-old memory safety errors in C++ are still the main source of initial exploits for these browsers. Chromium has been looking at things like MiraclePtr, whereas Mozilla has been moving to Rust and RLBox (and Chromium is trying the former, too): https://security.googleblog.com/2021/09/an-update-on-memory-...
I think I agree with the other poster saying that "practically speaking there's probably not a ton of difference".
There used to be a wider gap, but both browsers have all the important stuff now, with any differences being much more incremental. I assume neither is going to get rid of their shitty old C++ codebase anytime soon, which would be the real win.
For sandboxing you're also restricted by what the OS offers in the first place. So expect innovation on Android, or ARM macOS, not Chrome/Firefox on Windows.