I have to say, though, that with the introduction of GGC (generational garbage collection) and all of the recent improvements to the JS runtime, Firefox has gotten much faster.
Electrolysis should bring "smoothness" to the entire UI, since one site will no longer be able to halt the UI thread for the rest of the browser.
Maybe I am just paranoid, lol
Anyway, I'll add this:
You are not paranoid if they are really watching you :D
I've got more useless points than I know what to do with so no problem with others having some.
ps. I think a better title that might not get mod-changed would be "Mozilla: Electrolysis"
There are certainly some performance things to iron out. Right now, we're just trying to make the base browser functions work properly, and then we'll start tackling the performance problems.
I'm looking forward to Electrolysis, since Firefox having only one process is what pushes me to Chrome: the renderer hangs a lot, especially with complex pages (such as TweetDeck for me), and when that happens, the whole browser hangs.
This of course assumes a compromised tab can't go on to compromise the browser kernel (i.e. the process that manages tabs and shared tab state) or trick the kernel into giving it unauthorized data from other tabs. However, formally verifying that a kernel implementation prevents this is feasible in practice.
... assuming those tabs are cross-origin.
> This of course assumes a compromised tab can't go on to compromise the browser kernel
Also assumes that a compromised tab can't go on to compromise the OS kernel.
Not sure why this has to be a condition. Perhaps tabs loading pages from the same origin will both have read/write access to some shared data in the browser kernel (like the site's cookies), but they still run in separate address spaces regardless. A compromised tab won't be able to directly access another tab's RAM, as is the case with single-process browsers.
> Also assumes that a compromised tab can't go on to compromise the OS kernel.
Very true. However, my argument was that a multi-process model limits (but obviously does not eliminate) the impact of zero-days. In the single-process model, the attacker could compromise any tab and have all tab state available with no additional effort. In the multi-process model, the attacker would have to compromise the right tab, compromise a different tab and trick the browser kernel into performing the requisite operations, or somehow bypass the OS's memory protection. Each of these requires more work than before.
For example, you can get direct access to another tab's "window" object in this way, as long as it's same-origin: https://developer.mozilla.org/en-US/docs/Web/API/window.open...
> When would it ever make sense for multiple tabs to directly access each other's runtime state?
For Web compatibility.
I'm a bit concerned about the effect this could have on RAM usage and plugin compatibility - the linked wiki page already lists two plugins that I can't live without (NoScript and Tree Style Tabs) as being incompatible. Hopefully that will be fixed before this becomes default in a release.
A crashing content process, on the other hand, can just "crash" the tab, and show an interface for reviving that tab.
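The revive-the-tab idea can be sketched with a toy Python example (not Firefox code; the page names and crash trigger are invented): the parent "browser" process survives a crash in a child "content" process and can offer to reload just that tab.

```python
# Toy sketch, not Firefox code: a crash in a "content process" kills
# only that process; the "browser process" survives and can revive it.
import multiprocessing as mp
import os

def content_process(page):
    if page == "bad-page":
        os._exit(1)  # simulate a renderer crash
    # normal page rendering would happen here

def load_tab(page):
    proc = mp.Process(target=content_process, args=(page,))
    proc.start()
    proc.join()
    # a nonzero exit code -> show the "tab crashed, reload?" interface
    return proc.exitcode == 0

if __name__ == "__main__":
    print(load_tab("good-page"))  # True
    print(load_tab("bad-page"))   # False, but the browser is still running
```

In a single-process browser the equivalent of `os._exit(1)` takes every tab down with it.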
I'm excited about e10s though, I think this is definitely the right way to go for all browsers!
Now the question is how tightly locked down the page processes are. Can they access files? Or is all file and persistent state access via some separate process? Done right, each page process should execute in a jail, lacking the privileges to alter external files or persistent state.
Seeing "how much memory is being used" is a major consideration for you when using a browser?
If a browser works fast and doesn't leak memory, one shouldn't care exactly how much memory is used.
That's a big assumption. Realistically speaking, there's only FF, Safari, and Chrome. FF leaks like crazy (sometimes on its own, but sometimes it could be a plugin). Safari doesn't support all the plugins I use, and Chrome may, but I don't like their dev console layout. And if you can't see how much memory is being used, you have no way of telling whether a newly installed plugin just caused a memory leak (unless it totally prevents you from using the browser).
Yes, certainly. My development machine has 8 GB of RAM. On a number of occasions I've recovered from a marginal memory problem/commencement of disk swap by dumping Chrome. I should have more RAM, but still.
Chrome uses a lot of memory. Unless you have a ridiculous amount of RAM, there will be times when you'll notice the load.
> If a browser works fast and doesn't leak memory, one shouldn't care exactly how much memory is used.
Not really. Not if you care about performance.
One can even see the memory consumption and CPU usage of each tab and extension.
The latter displays total memory usage at the top of the page.
edit: As child poster noted, total memory is overcounted as it seems to be quite difficult to audit exactly what memory is shared within each child process. (see https://code.google.com/p/chromium/issues/detail?id=25454 )
And this is not a condemnation of the practice. Just saying it really isn't that different than most anything else.
I will say that it is not that troubling. Far more troubling would be either a) multiple completely inconsistent browsers dominating the field or b) a homogenous ecosystem in the web. At least, I think so.
Got to keep up with Chrome after all.
If I use Chrome for normal browsing (I don't normally, because its feature set is poor, customisability is a dirty word, and I don't like the privacy risk), it can rapidly slurp 4GB or more of memory (with perhaps two key addons, adblock and DoNotTrackMe). Even Firefox at its worst rarely if ever goes over 1.5GB, even with very heavy customisation and plenty of privacy/adblocking/etc. addons.
What about the overhead from task-switching hundreds of processes and mapping all the memory, when each one takes up 50MB or more?
As for the excess memory usage in Chrome, "OneTab" might help: https://chrome.google.com/webstore/detail/onetab/chphlpgkkbo... (It's also available for Firefox: https://addons.mozilla.org/en-US/firefox/addon/onetab/)
uBlock author has extensive benchmarks about this issue.
If you want to experiment with more than one tab process, you can tweak the about:config pref "dom.ipc.processCount".
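That pref can also be set persistently from a `user.js` file in your Firefox profile directory (the value 4 below is purely illustrative; multi-process support was still experimental at this point):

```
// user.js in the Firefox profile directory (values are illustrative)
user_pref("browser.tabs.remote.autostart", true); // opt in to e10s
user_pref("dom.ipc.processCount", 4);             // number of content processes
```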
(Admittedly the author is biased towards ABP. My biases: former Adblock user, former ABP user, current Adblock Edge and µblock user.)
I've had too many stalled pages for my liking so I welcome this when it is finally stable and extensions catch up.
Somehow, I doubt disabling multiple processes will be within the scope of an addon.
Firefox does not lack for options; I'd say it is the *nix of the browser world.
ps. You can also solve all of the tab issues with TabMixPlus and the UI issues with ClassicThemeRestorer - my Firefox 33 looks virtually identical to Firefox 4
I mean, sure, it sounds great to say things are sandboxed. But when the actual exploits either a) root the machine or b) target the user directly, it seems that protecting tabs from each other really doesn't do much.
For example, see the Chromium docs: http://www.chromium.org/developers/design-documents/multi-pr...
And, still, kind of amusing that the entire point of the browser is that it is sandboxed from the whole computer. Seems if we just restricted what the browser was capable of as a whole, we'd be there.
1. The browser as a whole needs to have permission to do quite a few things, including reading from and writing to the filesystem (for uploading and downloading files), talking to your system's graphical environment so it can display windows, and accessing arbitrary hosts on the network so it can access web servers. It's just not possible to meaningfully sandbox something requiring so much access. Individual browser components, on the other hand, can be designed to do very specific tasks and are thus easier to isolate.
2. You want to protect not only your system from a browser exploit but also other parts of the browser. A site that exploits a browser vulnerability shouldn't be able to read your cookies for another site.
These reasons imply that you need to focus on isolating and restricting components inside the browser instead of the browser as a whole.
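As a toy illustration of that split (all names invented, nothing to do with real Chromium/Firefox code), a sandboxed "renderer" can be given no direct resource access at all: only a pipe to a privileged "kernel" process that checks every request against a policy.

```python
# Toy broker sketch (invented names): the sandboxed renderer never opens
# files or cookies itself; it asks the privileged "browser kernel", which
# enforces a per-origin policy before answering.
import multiprocessing as mp

ALLOWED = {"cookies:example.com"}  # what this renderer may read

def kernel(conn):
    # privileged process: validates every request against the policy
    while True:
        req = conn.recv()
        if req is None:
            break
        conn.send("ok" if req in ALLOWED else "denied")

def request(conn, resource):
    # the renderer's only way to reach privileged state
    conn.send(resource)
    return conn.recv()

if __name__ == "__main__":
    kernel_end, renderer_end = mp.Pipe()
    proc = mp.Process(target=kernel, args=(kernel_end,))
    proc.start()
    print(request(renderer_end, "cookies:example.com"))  # ok
    print(request(renderer_end, "cookies:bank.com"))     # denied
    renderer_end.send(None)
    proc.join()
```

A compromised renderer in this model can still send requests, but it can only get back what the policy allows; it can't reach the other origin's cookies directly.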
Great, buy me another 32GB then, please. People shouldn't have to have high-end gaming rigs to run a sodding browser.
Mozilla (and Google) are deluding themselves in thinking that their particular software is the be-all and end-all of a computer; they should be introduced to this strange concept called 'multitasking'.
OS: 1GB (conservative estimate)
Various background programs: 1GB
Suddenly doesn't look so cheap.
What are you using, Windows? I'm running Ubuntu 12.04 and the "OS" (X, window manager, daemons) is certainly not even close to 1GB, more like ~100MB.
Here's a thought for developers: Next time you find yourself saying "RAM is cheap" (or any variation thereof) thwack yourself about the head multiple times with a big stick, then go rinse your mouth out with soap and water.
In any case, the number one evaluation criterion for browsers is not the RAM profile.
If performance matters that much, perhaps you should instead be asking why we are writing applications and UI in an ugly evolution of SGML.
Before you fault the Moz devs, ask yourself why we have CSS for high DPI images and why designers embed videos into website headers. Maybe your needs aren't at all times reflective of the majority. The browser serves the spec. The spec serves every case you care to imagine. Because that's what we evolved a doc format to do.
(Just imagine a parallel universe where it was instead MS Word that evolved into a facebooking client!)
Sure, Firefox, and every other program should use as much RAM as it needs. But we should be mindful of keeping that need as low as possible, at least for any program that falls into the category of "runs in a multi-program environment and probably won't be the only program running from a finite RAM pool".
Otherwise, if you are idling most of those processes, they should be the same as before. Just idle.
Memory, on the other hand, I think will go up. Not sure by how much, though. I would think the rendered content would be the largest consumer of memory. And again, if you have hundreds of tabs open with high memory usage per tab, that should already be a problem. Right?
Chrome: Anywhere from 50MB/tab up.
Suspending JS execution in invisible tabs seems highly unlikely to be web-compatible; you wouldn't want YouTube or Spotify to stop playing just because you focused a different tab. On the other hand, with technologies like requestAnimationFrame, we can make it possible for well designed applications to work well when in the background.
My point about performance, though, was mainly that if you have 200+ tabs that are actively doing something, then you are thrashing even in Firefox. It isn't like they do their work for free depending on the process model.
Also, 200 is not exactly a large N when it comes to scheduling, is it? (That is, unless all 200 are CPU-bound, in which case, again, Firefox would already be thrashing.)
Not sure about the ~5MB/tab figure... This is FF 32.0.
That is, by going to a "per process" approach, the amount of memory that gets paged in almost certainly went up. No?
I feel like I must not have been clear earlier because I have made just a single point, and both of you are discussing things not related to that point. Let me try one more time: it's all about performance of the current tab.
The only issue would be if most of the memory allocations are under your system's page size (typically 4096 bytes) and distributed randomly, so that a lot of pages have data structures associated with multiple different tabs. But I think that's unlikely, and even if it is true, couldn't it be resolved by making your allocation strategy tab-aware (e.g., by giving each tab its own malloc arena, which would be way simpler than splitting the browser into multiple processes)?
Am I missing something here?
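The paging argument above can be made concrete with a toy calculation (all numbers illustrative): with small allocations interleaved across tabs, nearly every allocation for one tab lands on its own page, while a per-tab arena packs the same data into a minimal number of pages.

```python
# Toy model: count how many 4096-byte pages hold data for one tab under
# (a) allocations interleaved across tabs vs (b) a per-tab arena.
PAGE = 4096
TABS = 100
ALLOCS_PER_TAB = 1000
ALLOC_SIZE = 64  # small allocations, many fit in one page

def pages_touched_interleaved(tab):
    # allocations round-robin across tabs: tab i owns slots i, i+TABS, ...
    return len({((tab + k * TABS) * ALLOC_SIZE) // PAGE
                for k in range(ALLOCS_PER_TAB)})

def pages_touched_arena(tab):
    # each tab allocates from its own contiguous region
    base = tab * ALLOCS_PER_TAB * ALLOC_SIZE
    return len({(base + k * ALLOC_SIZE) // PAGE
                for k in range(ALLOCS_PER_TAB)})

print(pages_touched_interleaved(0))  # 1000: every allocation on its own page
print(pages_touched_arena(0))        # 16: densely packed pages
```

Same data, ~60x fewer resident pages per tab with the arena, which is the intuition behind both the tab-aware-allocator idea and the per-process model.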
Why would you think the current monolithic approach would page out a tab that was actively being used? Why would this not also happen in the "per process" approach?
That is, how would this specifically help? If you are actively using the memory for the current tab, because it is the current tab, why would it be swapped out under the monolithic case where it would not in the per process case?
I'm perfectly willing to accept there is a scenario I am not considering. I just don't see it, right off.
I'm assuming this has come up a fair bit. Any good links to read up on this?
This is the must-have feature that keeps me using Firefox -- especially when I'm doing research, which has a naturally tree-like pattern. Apparently, from the Chromium bug, it's a dealbreaker for lots of other people as well: https://code.google.com/p/chromium/issues/detail?id=344870
Another suggestion is to not manage it. I create tabs all the time, and often have many similar tabs. There's no need to manage it, only to clean up once in a while.
(I like many tabs. A few weeks ago I performed some tab-cleaning -- 550 tabs were a bit much, as it made Firefox start slower.)
Personally I just use it for things I would like to read at some point, instead of filling up my bookmarks with 50-100 entries every day.
Disabled that as soon as it was dumped on me. There is an almost endless list of reasons to have multiple copies of tabs.
Honestly, I cleanup every week or two, and it works fine. Windows are by category of different things I do, and I tend to leave frequent sites open all the time.
I find it essential to manage the mess that is my Chrome tabs.
Panorama is a life-saver if you have to context-switch between projects regularly.
TreeStyle, really, I use more as a means to move screen real-estate to horizontal usage on my laptop's 16:9 screen. Tab organization is a side benefit.