
Multiprocess Firefox - evilpie
http://billmccloskey.wordpress.com/2013/12/05/multiprocess-firefox/
======
kibwen
For those of you interested in parallelism in browsers, I suggest you keep an
eye on Mozilla's experimental new browser engine, Servo.[1] The goal is to
make use of a variety of concurrency strategies (such as a Chrome-esque
process-per-tab design[2]) and Rust's built-in support for memory-safe
concurrency abstractions (fork/join, lightweight tasks, SIMD, etc.) to produce
a ludicrously parallel[3] web browser. And even if Servo itself never happens
to make its way into production, one of its purposes is to enable Mozilla to
explore effective parallelism strategies to pursue with Gecko.

[1] [https://github.com/mozilla/servo/](https://github.com/mozilla/servo/)

[2] I wasn't exactly thrilled about this myself, but as long as Servo has to
interact with C++ code (notably, SpiderMonkey) this was judged critical for
security. Fortunately, pcwalton seems to believe that Servo's tab processes
will occupy less memory than Chrome's.

[3]
[https://github.com/mozilla/servo/wiki/Design](https://github.com/mozilla/servo/wiki/Design)

~~~
RHSeeger
I'll admit I wasn't a fan of process-per-tab originally either. However, since
every browser leaks memory, the ability to close some windows and reclaim the
memory that was lost has been very useful to me (I tend to have 50+ tabs open
at most times, and for many days at a time).

That being said, I'd also be ok with process-per-window, as that would give me
the same basic ability.

~~~
diydsp
Hahaha, you and me and about 1% of the browser users (well on HN, probably
more like 10%) have particular habits of keeping much larger numbers of tabs
open than everyone else.

Have you imagined an alternate scheme to tabs, where there are 100s of
thousands of potential "tabs" which can be organized, called up in groups,
moved in clumps together, and shared?

Imagine each group of tabs like soldiers in a Command and Conquer game...

~~~
danielfernandez
I have found that lots of people from other non-technical areas also have
the "too many tabs open" problem. We are working on a solution for that
(except for the soldiers part =) The idea is that you can keep your browser
synced (one or many browsers) and save tabs for later, search them, archive
them, restore them, and soon also share them. Take a look at
[http://listboard.it](http://listboard.it) if you are interested.

~~~
simcop2387
That looks really nice. I wonder if you could integrate this with Firefox's
tab groups[1]. I use them to manage things all the time.

[1] [https://support.mozilla.org/en-US/kb/tab-groups-organize-tabs](https://support.mozilla.org/en-US/kb/tab-groups-organize-tabs)

~~~
danielfernandez
Interesting proposal, we do not have it integrated but we will evaluate the
idea.

------
JohnTHaller
If you'd like to try this out on Windows without affecting your main Firefox
profile, we just released portable packages of the Firefox Nightly and Aurora
builds at PortableApps.com yesterday: [http://portableapps.com/news/2013-12-04--firefox-aurora-27-and-nightly-28-portable-released](http://portableapps.com/news/2013-12-04--firefox-aurora-27-and-nightly-28-portable-released)

They run self-contained in their own directory so you can quickly extract them
to your Desktop or portable device. The installer downloads the latest build
as you install it and configures it for standalone use. When you're done
testing, you can just delete the FirefoxPortableNightly directory.

Bonus: The Nightly branch also has the new Australis UI redesign that they've
been working on, which is worth checking out.

~~~
dewarrn1
Works like a charm.

------
pjmlp
I remember the days when multiprocessing was the only option and multi-
threading was only available on a few systems.

Now with the security exploits many plugins have exposed and the way a
misbehaved thread can bring the whole application down, we are moving back to
the multiprocess model as a better sandbox model.

Old becomes new as they say.

~~~
sp332
It's always been a better sandbox model, but message passing is still a pain
to orchestrate compared to shared-memory data structures.

~~~
csmuk
Personally I prefer message passing (over pipes).

Shared memory (be it shm/heap in same process) with associated mutexes,
semaphores, locks and the like is a right pain to get right without
introducing race conditions, deadlocks etc.

Go is interesting as it uses "micro threads" (goroutines) and message passing
CSP style but I haven't found a use for it yet.

~~~
rdtsc
> Go is interesting as it uses "micro threads" (goroutines) and message
> passing CSP style but I haven't found a use for it yet.

But it also has shared heaps by default, so you still have to rely on everyone
being an "adult". I wish shared heaps had been a specially enabled feature,
not the default. But I guess the language was positioned to compete with C++
and Java, and isolated heaps would have meant a performance decrease in
benchmarks -- and thus in people's willingness to adopt it.

For large concurrent systems, a lack of safety and fault tolerance often leads
to failure, but that is kind of hard to encode in a quick benchmark to impress
people.

Here are a few languages/systems with default isolated heap runtimes between
concurrency units: Dart's isolates, Erlang's processes, Nimrod's threads, Web
Workers in modern browsers. Anyone know of more?

~~~
jerf
"I wish shared heaps would have been a specially enabled feature not the
default."

I hope to someday see the language at least do the opposite, have a specially-
declarable "isolated" goroutine. In theory, the compiler ought to be able to
analyze a goroutine, determine that it shares no data at startup with another
process (i.e., nothing in a closure or something), determine that it only
communicates via value-passing channels (which courtesy of my previous
restriction, can be analyzed by simply looking at what channels are passed in
at startup time, and some analysis of the types of the channels), and thus
guarantee that at least _this_ goroutine is fully isolated. Pervasive usage of
the new keyword I'm hypothesizing would allow a diligent programmer to recover
most of the isolation advantages without having to rewrite Go entirely. It
also ought to enable some other optimizations against these guaranteed-
isolated goroutines, the biggest of which is that they no longer need to
participate in a global stop-the-world GC, both in that they can continue
running while that is occurring and that they also relieve the global GC from
the task of scanning over them.

(In fact all the analysis ought to itself be fully automateable, and the user
shouldn't _have_ to declare it; I'd want them to still have the option to make
a declaration so the compiler can tell them if they screwed up, though. I
don't like such critical functionality being behind an opaque optimizer.)

But this certainly won't happen soon; there's a large enough list of stuff
that comes before that.

~~~
pcwalton
> determine that it only communicates via value-passing channels (which
> courtesy of my previous restriction, can be analyzed by simply looking at
> what channels are passed in at startup time, and some analysis of the types
> of the channels),

That would preclude sending interfaces over channels (along with any other
existential or mutable reference type), because the type system doesn't know
whether the interface is closing over shared state. Not being able to send
interfaces over channels would mean that channels would be restricted to only
one kind of type, because Go doesn't have discriminated unions so interfaces
are the only way to perform type-switch. Those goroutines would be so
restricted as to be almost useless.

You cannot just bolt isolation on after the fact. You must design your
language for it from the start.

That said, Go's race detector is very good and it's awesome that they focused
on getting first-class support for runtime race detection so early.

~~~
jerf
"You cannot just bolt isolation on after the fact. You must design your
language for it from the start."

Yes, I agree, and I'm sure I'm going to have many years of wishing they had.
At the moment I don't have a better entrant in this field that is palatable to
my coworkers, though. They've rather disliked Erlang (and not for lack of
trying, and not for lack of good reasons, for that matter), Haskell's right
out, and I'm running low on production-quality true isolation-based languages
here. Several additional up-and-coming contenders; I'm sure if I could have
used Rust-from-2018 I'd take that in a heartbeat, but, alas, it's 2013.

Go is tolerable, at least the way we're using it.

~~~
signa11
> They've rather disliked Erlang (and not for lack of trying, and not for lack
> of good reasons, for that matter)

may you please elaborate on reasons for not using Erlang ?

~~~
jerf
Sketched: 1. As neat as the clustering is, it's very opaque and hard to debug.
And even after years of using Erlang, it's always a pain to set it up again,
and opaque when it fails. This is a critical feature for the system I've
written, and I just can't keep it stable. That others seem to have managed
doesn't help _me_ any, and I'm done pouring time down this sinkhole. 2. The
syntax is quite klunky. I've been programming in Erlang for 6 years now,
including for my job, and yes, I _still_ don't like the syntax. In addition to
",.;", it's a terribly klunky functional language, wearing a lot of the
trappings while failing to reap a lot of the benefits. And I don't just mean
this is a minor inconvenience, it seriously inhibits me from wanting to create
well-factored code, because it's so much work I can't factor away. (I have
examples, but explaining them is a blog post, not a 6th-level nested HN post.)
3. As neat as OTP is (and it _is_ neat), it tends to encourage a highly
coupled programming approach to fit into "gen_server" (or whatever), and due
to problem #2, many of the tools I'd use to solve that from _either_ the
imperative side _or_ the FP side are not present, or too hard to use. The
whole gen_X encourages very choppy and hard-to-follow code. If you pour enough
work into it, you can get around that, but the language doesn't help you
enough. It's also bizarrely hard to test the resulting OTP code considering we
started with a "functional language".

It's a brilliant language that was well ahead of its time, and I don't mean
that merely as a "I want to be nice" parting comment; it _is_ a brilliant
language that was ahead of its time and every serious language designer should
study it until they deeply understand it. Indeed, I will absolutely attribute
a significant portion of my success in programming Go to the wisdom (no
sarcasm) I learned from Erlang, and Go would be a better language today had
the designers spent more time learning about it first. (It still wouldn't be
an Erlang clone, but it would be a better language.) But it's just become
increasingly clear that it has been a drag on my project, for a whole host of
little reasons that add up. It was the right decision at the time, because
virtually nothing else could do what it did when I started, but that's not
true anymore.

Someone will be tempted to post a point-by-point rebuttal. My pre-rebuttal is,
I've been programming in it for six years (so, for instance, if there's some
"magic solution" to clustering that has somehow escaped my six years of
Googling, well, I think I did my part), yes, I know all other languages will
also have "little things" (and big things), and Erlang may be perfect for your
project, absolutely no sarcasm.

------
timothya
> _All IPC happens using the Chromium IPC libraries_

Interesting that they chose to share code with Chrome. Since the two are
competitors, I would have thought that they'd use completely separate
implementations. It's interesting that open source makes this sharing
possible.

~~~
timdiggerm
If someone's already written a perfectly good solution that's readily
available you either

- pridefully write your own

- use theirs

~~~
arsenerei
I'm curious, why does pride come into the equation?

~~~
Mindless2112
If the existing implementation is _perfectly good_, writing your own is
either pride, stupidity, or a learning exercise.

I would take for granted that the Firefox developers aren't stupid, and that
they have enough interesting work to do that they aren't going to spend time
on it as a learning exercise.

~~~
arsenerei
Ah, okay. It feels a bit hyperbolic to call it pride or stupidity, rather than
just a poor analysis of the efficacy of an existing solution. Thank you for
your response.

------
josteink
I'm very excited about this. I usually drive Firefox Beta without any sorts of
complaints, but installed nightly just to try this out live.

With my list of extensions[1] this doesn't seem to be particularly stable. It
fails to bring up my tabs from last time. That would be OK for
experimentation, had it not been for the fact that it also crashes regularly.

These two combined really are a test-stopper for me.

Note: I'm not complaining. I'm very pleased this is being worked on. I'm just
commenting first-hand experience about the state of things, so that others can
make up their minds if they want to give it a go as well.

[1] Installed extensions: Adblock Edge, Duckduckgo search, Firebug,
Flashblock, Norwegian dictionary.

~~~
cpeterso
Are the crashes listed in Firefox's about:crashes page? Filing bug reports
with those crash IDs (which reference stack traces on crash-stats.mozilla.com)
would be a big help.

Firebug might be a problem because it is tightly coupled to Firefox's internal
debugging APIs.

------
ritonlajoie
From a code maintenance point of view, how do you manage to keep this
'branch' in sync with the main one? I mean, every patch made to the real
firefox has to be carefully reviewed and backported to this multiprocess
branch. Is that a manual process? Or can it be automated like this:

1) Check if a new commit arrived on 'head'

2) Auto-backport it to the multiprocess branch

3) Try a build + run tests. Everything looks good? Keep it

4) Not good? Send an email to the multiprocess maintainer so that he has a
look

Is there another way to do that ?

~~~
coyotebush
Since multiprocess is in the regular Nightly, the code is in the main line of
development (mozilla-central). Based on the preference, it decides at runtime
how to handle the content/chrome interface.

But when Mozilla does branch off separate trees, VCS merges and lots of
automated tests are largely sufficient.

------
paulrouget
To try Electrolysis (multiprocess) in Firefox Nightly: in about:config, toggle
the browser.tabs.remote pref and restart (still work-in-progress, don't expect
a fully working browser).

Edit: you will lose your current session

~~~
frik
I tried it; sadly, FF nightly crashed completely (not just a tab) about every
30 seconds with just 3 tabs open. The crash recovery dialog sent a log to
Mozilla every time, so they can hopefully fix the bugs soon.

------
nly
I welcome this just so we can determine which tabs are using the CPU
persistently. I had to switch back to Chromium (after a good few weeks really
giving FF another go) because I was sick of this issue. Firefox is smoother
and more memory friendly than Chromium these days, a pleasure to use, but in
Chrome I can kill hoggy tabs... so that's where I'm staying for the moment.

~~~
girvo
Now, this isn't snarky, but do you really run into issues like that, where it's
noticeably bad in a particular tab, enough to need to kill it? What sort of
sites, and what processor?

~~~
nly
In Chromium it's generally runaway memory hogging tabs. Facebook is generally
awful for example. Leave any Facebook page open all day (background of course,
possibly on another virtual desktop) and you'll be staring at a multi-GB tab
by evening.

I'm not sure what's causing the CPU utilisation in Firefox. It's common to
blame extensions in the FF community because there's no easy way to determine
where the problem is.

------
aquadrop
It also brings the possibility of overcoming the ~4GB memory limit per 32-bit
process. Which is nice.

~~~
com2kid
Very nice indeed, I crash Firefox a few times a day from hitting the memory
limit.

Granted this is due to AB+ and Reddit Enhancement Suite. Although imo.im
leaking memory over time doesn't help! (I am somewhat annoyed that an IM
client takes up 500MB of memory, I miss Meebo!)

Right now FF is at a fairly svelte 1.8GB. Heh.

The other problem is that performance degrades dramatically as the number of
open tabs increases. Once I hit 50 or so tabs scrolling becomes horribly
jerky. From the sounds of it, this change may very well fix that as well.

(For reference I am on an insanely fast home built machine!)

~~~
ferongr
>Once I hit 50 or so tabs scrolling becomes horribly jerky

That could be a GPU driver issue. GPU scheduling is generally horrible outside
the "run a fullscreen game as fast as you can and screw everything else" use
case.

~~~
com2kid
Well in hopeful theory land those other tabs that aren't active shouldn't be
hitting my GPU. :)

I can also pop over to IE and it scrolls a-ok! (To be fair, IE11 has beautiful
scrolling, everything else looks jerky in comparison, it really is quite a
lovely effect!)

But all my plugins are in FF, so.... with an SSD it is not like FF takes too
long to come back up anyway! Still annoying though!

------
fenesiistvan
My biggest problem with Firefox is its startup time. It takes much longer to
start firefox.exe than IE and Chrome, which are both very fast. I am using
Win7 on an i7-3960X, Intel SSD, 16 GB RAM. If I install any plugin then the
startup gets much worse. (For this reason I am not using any plugins, which is
a big loss.)

It is weird that this issue is seldom mentioned, but I think that it is much
more important than other performance benchmarks such as javascript
performance.

~~~
cpeterso
Tom's Hardware Guide's "Web Browser Grand Prix" measured Firefox's startup
times being much faster than Chrome, for both cold and warm starts and single
and multiple tabs.

[http://www.tomshardware.com/reviews/chrome-27-firefox-21-opera-next,3534-4.html](http://www.tomshardware.com/reviews/chrome-27-firefox-21-opera-next,3534-4.html)

~~~
unknownian
As a Firefox user, I must say that these tests don't mean much, considering
web browsers are updated anywhere from daily to monthly depending on the
version used. Each browser wins at some point.

------
shmerl
What about IPC embedding API? It already separates the UI from the heavy Gecko
processing:
[https://wiki.mozilla.org/Embedding/IPCLiteAPI](https://wiki.mozilla.org/Embedding/IPCLiteAPI)

I just hope Mozilla won't go extreme, and won't use a separate process for
each tab like Chrome does. It produces memory bloat if you have many tabs
open. While they say they'll mitigate memory issues, this should be balanced.

------
efuquen
It's unfortunate how behind the curve Mozilla is on this. There's no denying
this was a huge undertaking, but the length of time it's taken has obviously
been detrimental to Firefox usage; it's the only real reason I still use
Chrome as my primary browser. Though I'll give Electrolysis a shot with
Firefox Nightly and see how it works out.

~~~
why-el
Huh? I don't get this. Mozilla is _switching_ from a single-process model to a
multi-process one. Chrome was built that way from day one. I hope you see that
switching between models costs more time than picking one and supporting it
forever.

~~~
sanxiyn
You have a good point about switching cost. On the other hand, Chrome always
could be run with --single-process (mainly there to measure the overhead of
multiprocess), so it didn't really "pick one".

~~~
reubenmorais
Once you have a multi-process setup in place, running in a single process is
relatively simple, IPC just routes messages internally instead of across
processes. Having a single-process setup and going to a multi-process one is a
way, way, _way_ larger effort.

------
mariusmg
Not sure why we need this. Without plugins (flash, java, silverlight),
Firefox IS pretty stable.

~~~
pekk
Do you honestly think it is reasonable to expect everyone not to use any
plugins? Let me know how they are going to, say, get audio from webapps.

I guess everything is stable, if you just avoid all the features everyone
uses.

~~~
hsivonen
> Let me know how they are going to, say, get audio from webapps.

HTML <audio>? Web Audio API?

------
Groxx
Testing it out now; so far so good! I might make this my default profile. I
would love not having rogue tabs freeze the entire system. It's very nearly
the one remaining thing I prefer Chrome for; Firefox has really improved
lately.

------
acjohnson55
As a dedicated FF user, I've been waiting for this for so long. I may finally
be able to isolate which tab is grinding my computer to a halt! As usual, this
is amazing work.

------
Too
Can this also solve the issue of flash objects taking over all keyboard input
and breaking the standard hotkeys?

~~~
nephyrin
Gecko plugin peer here: Unfortunately, no. NPAPI has two modes: windowed and
windowless. In windowless mode (roughly): we proxy input to flash, and flash
renders into a buffer we provide it. In windowed mode, we create a native OS
child window for flash and let it handle input and rendering directly. In this
mode, without a way for flash to pass "unused" keys back to us, it will
require some ugly hacks to steal hotkeys from it reliably.

Most sites run flash in windowed mode, and for good reason - flash's
performance sucks in windowless mode, and it cannot make use of hardware
acceleration (IIRC). Since Adobe's NPAPI flash seems to be essentially in
stability mode, it's unlikely this will be improved :(

Now, in current multiprocess mode, we actually force flash to use windowless
mode -- because support for windowed mode isn't finished yet (bug 923746). But
the aforementioned performance issues mean that we'll probably remove that
restriction once we support windowed mode in multiprocess.

------
ksec
I really wish all this could be sped up by us Kickstarting it or donating to
it.

It has taken far too long for e10s.

------
goggles99
This is a terrible idea. There is no justifiable reason to do this. The
reasons given are weak, this is just more over engineering that will add an
enormous amount of complexity and add no real value.

Lets look at the reasons given as to why they want to do this.

> _Performance. Most performance work at Mozilla over the last two years has
> focused on responsiveness of the browser. The goal is to reduce "jank"—those
> times when the browser seems to briefly freeze when loading a big page,
> typing in a form, or scrolling. _

You can do all of this with proper threading and task delegation. Putting
things in separate processes will not magically make things better. The answer
to "jank" is proper coding, not over engineering. Last time I checked there
was the same "jank" in IE and Chrome even though they use MPs.

> _Security. Technically, sandboxing doesn’t require multiple processes.
> However, a sandbox that covered the current (single) Firefox process
> wouldn’t be very useful. Sandboxes are only able to prevent processes from
> performing actions that a well-behaved process would never do.
> Unfortunately, a well-behaved Firefox process (especially one with add-ons
> installed) needs access to much of the network and file system._

This is BS. You could have three processes and have Firefox sandboxed
completely. The main process runs in a low integrity mode which limits its
resource access to a single directory. The second process is a download
delegation process (takes a file after it is downloaded and moves it to the
requested location while also promoting its integrity) running in normal
integrity mode. The third process is a network communication delegate/proxy
running in normal or possibly even low integrity. These two delegate processes
I mentioned will still be needed for the MP Firefox, so it is no more work to
create them.

> _Stability_

This is the only true benefit, but it is of very little value. Firefox almost
never crashes, and when it does, session restore brings you back to where you
left off in seconds.

Cons? More complexity means more bugs. This is a workaround instead of really
fixing Firefox. I am going to have 150+ extra processes in my task manager
now. More memory use. More context switches in the operating system eating up
resources and causing more system latency and overall slowdown (context
switches at the kernel level, which will affect the whole OS).

~~~
goggles99
LOL, I posted this same message on the blog comments and they deleted it.
Nothing like censorship to stifle opposing opinions.

------
davidbielen
breaks LastPass add-on

~~~
cpeterso
Can you clarify how LastPass breaks? I installed LastPass and it seems to work
for me with browser.tabs.remote=true.

