
This bit is also somewhat surprising/interesting:

> Improved performance and reduced crash rates by doubling web content loading processes from 4 to 8 [1]

From personal experience, I believe that things usually get more buggy, not less, as you add more parallelism/concurrency. I think there's supposed to be a link to more explanation or the relevant ticket, but it looks like they forgot to actually add the link. Can somebody fill it in here?





Thanks!

While that post addresses the reason why we can double the number of processes, I think I'm still missing the reason why we should double the number of processes (aside from performance reasons).

I mean, the release notes made it sound like using more processes is more stable than using fewer (unless I'm misreading it). How does doubling the number of processes result in fewer crashes? If we manually force FF 66 to use only 4 processes, would it result in more crashes than using 8 processes?
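(If anyone wants to try that comparison themselves: going from memory, I believe the pref that caps the number of content processes is dom.ipc.processCount in about:config, so you should be able to set it back to 4 and compare crash behaviour for yourself.)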


Fewer tabs running in more processes reduces the likelihood of OOM crashes, increases security (because it's less likely that a security vulnerability will allow JavaScript on one page to read memory from another page in the same process), and might improve responsiveness because Firefox can lower the OS priority of processes that only contain background tabs.


That does not really explain why it would be more stable, or did I miss it?


Increasing the number of processes decreases the number of tabs sharing each process. So if a process crashes it will bring down fewer tabs.
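Rough illustration with made-up numbers: with 32 tabs spread evenly,

    32 tabs / 4 processes = ~8 tabs lost per content-process crash
    32 tabs / 8 processes = ~4 tabs lost per content-process crash

Same number of process crashes, but each one hurts half as much.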


That would mean a smaller impact per crash, not fewer crashes, though.


Yes, technically. But the user experience is fewer tabs crashing, which I'd argue is what users care about. You notice a crash because of the tab's "oh no" screen, not by watching `ps`.


I see. That starts to make sense if you explain it that way. Users who don't think about the multiprocessing model will just see fewer dead tabs when a FF process crashes, and that could feel like "fewer crashes".


Which is to say, about 99.9% of them. "Crash" in common parlance is understood to mean a glitch, that is, user-visible unintended behavior. If fewer user tabs glitch in that way, that means fewer crashes. User experience is what matters first and foremost.


Could reduce out-of-memory crashes if fewer sites are sharing the same process.


I don't quite understand how that can be possible.

At the end of the day, you're still aiming to load, in FF 66, just as many tabs as in FF pre-66. FF's total memory usage should be about the same whether you're using 4 processes or 8. Sure, if each FF process now takes care of fewer tabs, then when OOM does happen, the FF processes have a lower OOM score and are less likely to get killed. But something will get killed regardless, just maybe not FF. That's like trying to avoid punishment after a prison brawl by keeping your head low: someone will get punished regardless, just maybe not you.


You have to keep in mind that a large fraction of Firefox users are still using 32-bit builds on Windows.

For those users, the memory a process can use is capped at 2-4GB (depending on whether the OS itself is 32-bit or 64-bit and a few other things).

The most common OOM crashes on Windows are running out of virtual address space, not running out of actual physical RAM.

In that context, having more processes in fact gives you more address space and reduces the chance that you will run out.


On 32-bit Windows, processes have maximum address space limitations which can be easily hit by web browsers. Having more content processes makes it less likely that any particular one will hit that limit. Note that content process OOM crashes are the #1 source of crash reports Firefox gets (and you can see plenty of other OOM crashes in that list further down too):

https://crash-stats.mozilla.com/topcrashers/?product=Firefox...


Oh, so you mean that on 32-bit Windows, processes (can) have an address space that is smaller than the sum of physical RAM + page file? I didn't know that.

It makes sense then.


On any 32-bit OS, processes have at most 4GB of address space, because that's how much you can address with 32 bits.

In practice some of that is reserved for the kernel, so you get less for use by the process itself. Historically 2GB on Windows, though there were some non-default compilation/linking options you could set to get 3GB.

A 32-bit process running on a 64-bit kernel can get 4GB of address space.

And yes, lots of computers have >4GB physical RAM, even if you don't count swap/page files.
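If you want to see that cap directly, here's a rough standalone sketch (mine, nothing to do with Firefox's code) that just keeps asking the allocator for 64 MB chunks. Built as a 32-bit binary it gives up somewhere around the 2-4GB mark regardless of how much physical RAM the machine has:

    fn main() {
        const CHUNK: usize = 64 * 1024 * 1024; // ask for 64 MB at a time
        let mut held: Vec<Vec<u8>> = Vec::new();
        loop {
            let mut buf: Vec<u8> = Vec::new();
            // try_reserve_exact reports failure instead of aborting the process
            if buf.try_reserve_exact(CHUNK).is_err() {
                break;
            }
            held.push(buf);
        }
        println!("got ~{} MB of address space before the allocator gave up", held.len() * 64);
    }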


Just spitballing: maybe there's a race condition in some kind of resource handle used in those shared content-loader workers, such that the longer a worker lives, the higher the probability of some kind of resource leak or deadlock. More workers = shorter lifetime per worker = lower probability of triggering it.


>> From personal experience, I believe that things usually get more buggy, not less, as you add more parallelism/concurrency.

Mozilla/Firefox is writing more stuff in Rust, where concurrency is easier to do right. I'm not sure how much that effort has improved reliability so far, but it's happening:

https://wiki.mozilla.org/Oxidation
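To give a flavor of what "easier to do right" means (toy example of mine, not Mozilla code): the compiler won't let you share mutable state across threads unless you spell out how it's shared and how it's synchronized, so a whole class of data races never compiles in the first place.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc says how the counter is shared, Mutex says how it's synchronized.
        // Drop either one and this fails to compile instead of racing at runtime.
        let counter = Arc::new(Mutex::new(0u32));
        let handles: Vec<_> = (0..8)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || *counter.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // always 8, never a torn update
    }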


Thanks for catching this - the link is in the notes correctly now.



