Does anybody know if they are at least working toward separate processes per tab? Continuing to focus on a single-process, 32-bit browser seems like an absurd strategic move.
It's a matter of priorities. 64-bit is important, but for almost all users on Windows, staying on 32-bit isn't an immediate concern, nor would they see a big benefit from moving to 64-bit there.
The devs are focused on other things that have more tangible immediate benefits. That's all it comes down to. At some point, though, I'd imagine 64-bit will become a higher priority.
edit: Regarding multiprocess, it is already used in FirefoxOS, for example. Enabling it on desktop would be hard because of all the existing addons; on mobile, though, it has been used for a while (the Android browser was multiprocess and is now multithreaded, and the FirefoxOS browser is, as I said, multiprocess). Since most of the growth in the browser market today is in mobile (sales of desktop PCs actually declined this year), I'd say that is a forward-looking approach.
I wondered how they went so far off the rails and let this fall by the wayside, not even updating the project status.
Mozilla had to find a replacement for C++ and chose Rust.
. . .
[Rust] Release 0.2, 2012-03-29:
I'm trying to think of a nice way to say that it's maybe not a good idea to invest in inventing a new language instead of focusing on your flagship product.
Electrolysis was "put on hold" because there were better ways to make Firefox a more responsive browser now: projects Snappy and Supersnappy. See https://wiki.mozilla.org/Performance/Snappy and http://taras.glek.net/ and https://bugzilla.mozilla.org/show_bug.cgi?id=718121 -- the final bug being the one I think will make Firefox a great browser again.
Unlike the quip about working on Rust above, this is a matter of prioritization of effort. I wouldn't call myself experienced enough to make the decisions involved in running a project as large, and with as many users, as Firefox, but I don't think Mozilla is wrong here.
Rust, Typescript, Go, Dart are all awesome. I'd love to see any one of them change the world of development.
Very few people actually hit the point where a 64-bit address space is useful to them, so it's not a focus for Firefox at the moment.
From the article:
> At this point, the Mozilla project does not have the resources to actively support this use case.
$300 million a year for an open source project, and they can't afford to support Win64, the predominant platform? Someone needs to get in there and start cleaning house.
Very true. One has to experience it before it hits you. I've completely stopped using Chrome/Iron now.
The fact is that single-process and multiprocess architectures carry different tradeoffs with respect to memory. A single process has an advantage in the short run, in that it will consume less memory initially. However, in terms of private working-set memory across multiple tabs, you're simply not going to see anything like the 2x claimed in that link, given that much of the address space maps to shared pages. Most likely the authors just weren't properly measuring per-browser memory usage.
On the flip side, a multiprocess architecture has its own strong advantages for memory consumption. Any complex, long-running application is going to have to contend with memory leaks and fragmentation, which exert increasing memory pressure over the life of an active process. This gets even more painful when considering the multitude of heaps in your average browser, chrome layers written in Web technologies, and (IIRC) Firefox using a non-moving, non-generational GC. A multiprocess browser has a vastly cleaner solution to all of that: it simply disposes of an entire address space (including all fragments and leaks) when it's done with a rendering context.
You're overstating the impact of JS here. JS memory is allocated in large "chunks" in Firefox. This minimizes external fragmentation. When a rendering context is destroyed, all of the chunks are immediately destroyed with it; because of compartments, we know that there are no chrome JS objects interspersed with the content JS objects, so we can just destroy these large chunks. So the amount of JS-related fragmentation resulting from closing rendering contexts is minimal in practice.
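To make the chunk argument concrete, here's a hypothetical Python sketch of the idea (not Firefox's actual allocator; the chunk size, class names, and bump-allocation scheme are all my own assumptions): each compartment bump-allocates inside large chunks it owns outright, so destroying the compartment releases whole chunks at once rather than leaving fragmented holes in a shared heap.

```python
# Hypothetical sketch of per-compartment chunk allocation. Not Firefox's
# real implementation -- just an illustration of why freeing whole chunks
# avoids external fragmentation.

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (assumed size)

class Compartment:
    def __init__(self, name):
        self.name = name
        self.chunks = []               # large blocks owned only by this compartment
        self.bump_offset = CHUNK_SIZE  # forces a fresh chunk on first alloc

    def alloc(self, nbytes):
        # Bump-allocate within the current chunk; grab a new chunk when full.
        if self.bump_offset + nbytes > CHUNK_SIZE:
            self.chunks.append(bytearray(CHUNK_SIZE))
            self.bump_offset = 0
        offset = self.bump_offset
        self.bump_offset += nbytes
        return (len(self.chunks) - 1, offset)

    def destroy(self):
        # Because no chrome objects are interspersed with content objects,
        # every chunk can be released wholesale: no per-object sweeping,
        # and no leftover holes in a shared heap.
        freed = len(self.chunks)
        self.chunks.clear()
        return freed

content = Compartment("content-tab-1")
for _ in range(100):
    content.alloc(40_000)          # simulate content-JS allocations
freed_chunks = content.destroy()   # whole chunks returned at once
```

The key property is the last line: teardown cost and fragmentation are per-chunk, not per-object, which is the point being made about closing rendering contexts.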
Regarding chrome layers written in Web technologies, this is somewhat orthogonal, since in all browsers chrome allocations stick around for the lifetime of the browser. C++ code has no compaction story, and, unlike JS, it has no reasonable path to achieve compaction in the future.
Seriously, there are other tangible security benefits to 64 bits as well. For example, ASLR has more bits with which to randomize DLL base addresses.
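Back-of-the-envelope arithmetic shows why the extra bits matter. This is an illustrative upper bound, not Windows's actual ASLR policy (real implementations randomize fewer bits than this, and the usable-address-space figures here are assumptions): candidate base addresses scale with address space divided by alignment granularity.

```python
# Illustrative ASLR entropy arithmetic (assumed figures, not exact
# Windows behavior): more address bits means more candidate DLL bases.

def aslr_positions(usable_address_bits, alignment_bits=16):
    # Image bases are typically 64 KiB-aligned (2**16 bytes), so the
    # number of candidate positions is address space / alignment.
    return 2 ** (usable_address_bits - alignment_bits)

positions_32 = aslr_positions(31)  # ~2 GiB user space on 32-bit Windows
positions_64 = aslr_positions(47)  # ~128 TiB user space on newer x64 Windows
ratio = positions_64 // positions_32
```

Even as a crude upper bound, the 64-bit case offers tens of thousands of times more candidate positions, which is why brute-forcing or spraying against ASLR gets so much harder there.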