
It still sounds like their focus really is not on 64-bit. That may have been fine in the past, and perhaps even today, but it doesn't seem forward-looking at all.

Does anybody know if they are at least working toward separate processes per tab? Continuing to focus on a single-process, 32-bit browser seems like an absurd strategic move.




> It still sounds like their focus really is not on 64-bit

It's a matter of priorities. 64-bit is important, but for almost all users, running 32-bit on Windows isn't an immediate concern, nor would they get a big benefit from moving to 64-bit there.

There are other things the devs are focused on that have more tangible, immediate benefits. That's all it comes down to. At some point, though, I would imagine 64-bit will become a higher priority.

edit: Regarding multiprocess, it is already used in FirefoxOS, for example. Enabling it on desktop would be hard because of all the existing addons; on mobile, though, it has been used for a while (the Android browser was multiprocess and is now multithreaded, while the FirefoxOS browser is, as I said, multiprocess). Since most of the growth in the browser market today is in mobile (sales of desktop PCs actually declined this year), I would say that is a forward-looking approach.


https://wiki.mozilla.org/Electrolysis#Status (take note of the then-future dates)

http://www.2ality.com/2011/12/firefox-electrolysis-on-hold.h...

I wondered how they went so far off the rails and let this fall by the wayside, not even updating the project status.

http://www.2ality.com/2012/02/servo.html

Mozilla had to find a replacement for C++ and chose Rust.

. . .

[Rust] Release 0.2, 2012-03-29:

I'm trying to think of a nice way to say that it's maybe not a good idea to invest in inventing a new language instead of focusing on your flagship product.


First, Rust is a handful of people working on a research project.

Electrolysis was "put on hold" because there were better ways to make Firefox a more responsive browser right now. Those are the Snappy and SuperSnappy projects. See https://wiki.mozilla.org/Performance/Snappy and http://taras.glek.net/ and https://bugzilla.mozilla.org/show_bug.cgi?id=718121 -- the last bug being the one I think will make Firefox a great browser again.


I thought people want Electrolysis or 64-bit because they have so many tabs open that they run out of address space; no amount of snappiness will fix that problem.


That's probably a valid concern, but it's at best a fringe requirement, I would suspect, compared to the huge number of people who would want a snappier browser. Electrolysis provided some benefits here, because fewer parts of the browser would run in the same processes and threads.

Unlike the quip above about working on Rust, this is a matter of prioritizing effort. I wouldn't call myself experienced enough to make the decisions needed to run a project as large, and with as many users, as Firefox, but I don't think Mozilla is wrong here.


About half of the people I know, myself included, regularly test the limits of how many tabs they can keep open. The other half can't imagine doing so. I suspect this is a sharp bifurcation of use cases that the project leads have found themselves on one side of, not a "fringe requirement."


The MemShrink project is the one geared toward reducing memory consumption across Mozilla projects (mainly Firefox, Thunderbird, and Boot2Gecko).


New programming languages have been the downfall of many a great tech company, but not nearly as many as "focusing on your flagship product" to the exclusion of a changing world.

Rust, TypeScript, Go, and Dart are all awesome. I'd love to see any one of them change the world of development.


Electrolysis also broke compatibility with many Firefox add-ons. The code is not dead, however. It's used in B2G.


Single process per tab is heavyweight, and is the reason why Chrome uses more memory than Firefox once you have a few tabs open.

Very few people actually hit the point where a 64-bit address space is useful to them, and so it's not a focus for Firefox at the moment.
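
To put rough numbers on the process-per-tab point above, here's a toy model; every figure in it is made up purely to show the shape of the curve, not measured from any real browser.

    # Purely illustrative numbers -- a process-per-tab browser pays a fixed
    # per-process cost for every tab, a single-process browser pays it once.
    PER_PROCESS_OVERHEAD_MB = 30   # hypothetical: runtime, heaps, caches per process
    SHARED_BASE_MB = 150           # hypothetical: browser UI, shared libraries
    PER_TAB_CONTENT_MB = 20        # hypothetical: page content per tab

    def process_per_tab(tabs):
        return SHARED_BASE_MB + tabs * (PER_PROCESS_OVERHEAD_MB + PER_TAB_CONTENT_MB)

    def single_process(tabs):
        return SHARED_BASE_MB + PER_PROCESS_OVERHEAD_MB + tabs * PER_TAB_CONTENT_MB

    for tabs in (5, 20, 50):
        print(f"{tabs} tabs: process-per-tab ~{process_per_tab(tabs)} MB, "
              f"single process ~{single_process(tabs)} MB")

The fixed per-process term is what grows with every tab, which is why the gap only shows up "once you have a few tabs open".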


It should be. x64 brings speed improvements (more registers, among other quirks of the architecture), and Win64 has important security improvements.

From the article:

> At this point, the Mozilla project does not have the resources to actively support this use case.

http://en.wikipedia.org/wiki/Mozilla_Foundation#Financing

$300 million a year, and they can't afford to support Win64, the predominant platform, for an open-source project? Someone needs to get in there and start cleaning house.


(($300m / numberOfUsers) * numberOfExtremeUsers) / numberOfEngineersRequired / numberOfYearsRequired


You don't have to be an extreme user to take advantage of the features above; they benefit everyone. Nor would it require years: it already works fine (and has for years, on every other OS) ... it just could use a little optimization.


> Single process per tab is heavyweight, and is the reason why Chrome uses more memory than Firefox once you have a few tabs open.

Very true. You have to experience it before it hits you. I've completely stopped using Chrome/Iron now.


It's been a while since I used Firefox as my primary browser, but in my experience, long-running Firefox sessions used far more memory than Chrome. Because of its process-per-tab architecture, Chrome has always been able to clean up unused memory much better than Firefox. To me that's more important than saving memory on individual tabs, because I leave my browser open for weeks at a time.


Firefox isn't quite as good as Chrome when it comes to reclaiming memory, but it's much better than it used to be. And it's vastly better than Chrome when you have lots of tabs open: http://techlogon.com/2012/09/21/review-of-firefox-chrome-and...


I have no idea what that link is supposed to be demonstrating. It doesn't say how they measured memory usage (private working set, address space, ouija board stacks?) and doesn't share any of their data sets. It's an arbitrary set of claims devoid of substantiating context, and as someone who has measured this before, it doesn't appear to align with reality.

The fact is that single process versus multiprocess carries different tradeoffs with respect to memory. A single process has an advantage in the short run, in that it will consume less memory initially. However, in terms of private working set memory across multiple tabs you're simply not going to see anything like the 2x claimed in that link, given that much of the address space maps to shared pages. So, most likely the authors just weren't properly measuring memory usage per-browser.
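
For anyone who wants to measure this less ambiguously, here's one way I might go about it using the third-party psutil package; the process names and the idea of summing over every browser process are my own assumptions, not what the linked review did. USS only counts pages unique to each process, so shared pages aren't double counted across a multi-process browser the way plain RSS would count them.

    # Sum RSS and USS over all of a browser's processes (psutil is a
    # third-party package: pip install psutil). USS may need elevated rights.
    import psutil

    totals = {"rss": 0, "uss": 0}
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] in ("firefox", "chrome"):  # hypothetical process names
                mem = proc.memory_full_info()
                totals["rss"] += mem.rss
                totals["uss"] += mem.uss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    for key, value in totals.items():
        print(f"{key}: {value / 2**20:.1f} MiB")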

On the flip side, a multi-process architecture has its own strong advantages for memory consumption. Any complex, long-running application is going to have to contend with memory leaks and fragmentation, which exert increasing memory pressure over the life of an active process. This gets even more painful when considering the multitude of heaps in your average browser, chrome layers written in Web technologies, and (iirc) Firefox using a non-moving, non-generational GC. Whereas a multi-process browser has a vastly cleaner solution to all of that: it simply disposes of an entire address space (including all fragments and leaks) when it's done with a rendering context.
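
A minimal sketch of that last point (not how any real browser is wired up): run a "tab's" work in a child process, and when the tab goes away the OS reclaims the child's entire address space, leaks included, with no in-process free() or GC work.

    # Toy illustration: the child "leaks" memory, and closing the tab means
    # letting the process exit -- the OS takes everything back at once.
    from multiprocessing import Process

    _leaked = []  # stands in for allocations the renderer never frees

    def render_page(url):
        _leaked.append(bytearray(50 * 2**20))  # "leak" ~50 MiB while rendering
        print(f"rendered {url}")

    if __name__ == "__main__":
        tab = Process(target=render_page, args=("https://example.com/",))
        tab.start()
        tab.join()  # tab closed: address space, fragments and leaks, all gone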

Now, I don't want to undermine the really great work that the Mozilla guys have done addressing leaks and fragmentation, with things like cycle detection, JavaScript compartments, and various detection and analysis tools. However, the simple fact is that users who keep their browsers open for many hours, days, or weeks at a time are generally going to suffer from these issues more under Firefox than Chrome or protected mode IE.


"This gets even more painful when considering the multitude of heaps in your average browser, chrome layers written in Web technologies, and (iirc) Firefox using a non-moving, non-generational GC. Whereas a multi-process browser has a vastly cleaner solution to all of that: it simply disposes of an entire address space (including all fragments and leaks) when it's done with a rendering context."

You're overstating the impact of JS here. JS memory is allocated in large "chunks" in Firefox. This minimizes external fragmentation. When a rendering context is destroyed, all of the chunks are immediately destroyed with it; because of compartments, we know that there are no chrome JS objects interspersed with the content JS objects, so we can just destroy these large chunks. So the amount of JS-related fragmentation resulting from closing rendering contexts is minimal in practice.
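
To make that concrete, here's a toy model of the chunk-per-compartment idea (a simplification of my own, not SpiderMonkey's actual allocator): content objects only ever live in their compartment's chunks, so tearing the compartment down drops whole chunks and can't leave fragments interleaved with chrome JS.

    CHUNK_SIZE = 1 << 20  # 1 MiB chunks; the real size is an implementation detail

    class Compartment:
        def __init__(self, name):
            self.name = name
            self.chunks = []                # large, compartment-private blocks
            self.used_in_last = CHUNK_SIZE  # force the first alloc to grab a chunk

        def alloc(self, nbytes):
            if self.used_in_last + nbytes > CHUNK_SIZE:
                self.chunks.append(bytearray(CHUNK_SIZE))  # bump-allocate a new chunk
                self.used_in_last = 0
            offset = self.used_in_last
            self.used_in_last += nbytes
            return (len(self.chunks) - 1, offset)          # handle into the chunk

    class Runtime:
        def __init__(self):
            self.compartments = {}

        def compartment(self, name):
            return self.compartments.setdefault(name, Compartment(name))

        def destroy(self, name):
            # No per-object sweep through a shared heap: the compartment's
            # chunks are dropped wholesale; chrome JS elsewhere is untouched.
            del self.compartments[name]

    rt = Runtime()
    tab = rt.compartment("content:example.com")
    for _ in range(1000):
        tab.alloc(512)
    rt.destroy("content:example.com")  # all of its chunks go away together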

Regarding chrome layers written in Web technologies, this is somewhat orthogonal, since in all browsers chrome allocations stick around for the lifetime of the browser. C++ code has no compaction story, and, unlike JS, it has no reasonable path to achieve compaction in the future.


We're only talking about 64-bit builds of Firefox for Windows. Firefox builds for Mac OS X and Linux are already 64-bit. Chrome is 32-bit on both Windows and Mac OS X.


From what I have read, it's a lack of a maintainer, since the current developers are concentrating on other features. The 64-bit support problem is also Windows-only, and one of the disincentives is the lack of 64-bit plugins.


Lack of plugins is a benefit in my world! :-)

Seriously, there are other tangible security benefits to 64-bit as well. For example, ASLR gets more bits with which to randomize DLL base addresses.
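
Back-of-envelope arithmetic on why those extra bits matter; the entropy figures below are placeholders for illustration, not the exact values any particular Windows version uses.

    # More randomization bits -> exponentially more candidate base addresses
    # an attacker has to guess. Figures are illustrative placeholders.
    entropy_bits = {
        "32-bit DLL base (illustrative)": 8,
        "64-bit DLL base (illustrative)": 17,
    }

    for label, bits in entropy_bits.items():
        slots = 2 ** bits
        print(f"{label}: {bits} bits -> {slots:,} possible bases, "
              f"~{slots // 2:,} guesses on average")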



