What’s Next for Multi-Process Firefox (blog.mozilla.org)
355 points by cpeterso on Aug 2, 2016 | 240 comments



I feel like my case is an anomaly compared to others on this thread, but this has been a consistent experience for me.

I hoard tabs and easily reach 40 tabs in a single session. It is typical to have 60-100 tabs open at any point in time. I find that firefox handles this much better than chrome does. Any time the tab count exceeds 15 or so in Chrome, my whole system slows down and freezes. I am forced to close some tabs and restart chrome to get some sanity. Often system reboot is the only option.

In Firefox the same system slowdown happens, but much more rarely than in Chrome, and usually only when I have nearly 200 tabs open and running for a long time. In such cases, I just have to restart Firefox. Firefox presents me with the restore window, which is a much saner way to select only the tabs I want. Even then, when I restore, it does not load all the tabs, only the active ones, so it does not impact system performance.

The only time Firefox becomes slow, even on startup, is when I have restore windows several levels deep (yes, there is such a thing :) ), and I believe I am wholly responsible for that, not Firefox.

So it really surprises me when people say Chrome is more performant.

Also, maybe because of familiarity, I find firefox Developer Tools more usable than Chrome's. But what do I know? I am a server side person dabbling in UI for my personal projects, not a web designer :)

I dread Firefox following the footsteps of Chrome!

Oh, and not to mention the scores of chrome processes that pollute the process list!

When I run the `ps` command or the `top` command, it is difficult to locate the process that I am actually looking for. Also, it is much, much harder to estimate how much memory and processor time Chrome is really consuming when there are so many Chrome processes running. Maybe I am doing it wrong. Someone please enlighten me.


I keep switching between Firefox and Chrome. I want to be using Firefox, except I cannot.

Chrome is designed for web apps. If you keep several apps open in Firefox, say Gmail, FastMail, Slack, Gitter, Facebook Messenger, WhatsApp Web and Google Music or YouTube, pretty soon Firefox ends up being really, really sluggish. And since I keep these apps pinned, the startup experience for Firefox gets horrible.

Firefox also has problems with many services, where loading a single link can make your whole browser unresponsive for a long time or even crash it. To see what I mean, load this Travis log in Firefox and compare with Chrome: https://travis-ci.org/monixio/monix/builds/148774867 ; and this is just one sample, there are others that bother me, like the OpenTSDB UI. Personally I don't keep 200 tabs open, because I'm focusing on just a few at a time. If I want to go back to something valuable, I use bookmarks (or more recently Pinboard.in). This is because tabs are hard to manage and search, unless you have one of those fancy extensions, but I dislike those as well.

On add-ons, I very much appreciate Mozilla having a good review process, but those add-ons have no isolation and everything is allowed to run even in private mode.

The fact is, if I can't comfortably run my web apps in Firefox, I'll keep going back to Chrome. And yes, I keep trying to use native apps, like Thunderbird and Adium/Pidgin. It isn't working out.


Notice also that the Raw Log loads instantly and is very responsive, in the same Firefox that struggles so much with the fancy travis log page.

I think it is unfortunate that insanely heavy web applications, which can kinda get away with it in Chrome, are forcing everyone to Chrome.


It's not just insanely heavy web applications that are a problem.

It's people writing stuff, finding it slow in all browsers, then optimizing it in Chrome only (basically changing the stupid things they did that ended up slow in Chrome but maybe fast in other browsers, while not changing the stupid things that were fast in Chrome but slow in other browsers) and shipping the results.


It's hard... though if you work in a mixed environment where different devs use different browsers, some things turn out better.

Just the same, it's hard to convince people not to bring in the jungle for the banana (to appropriate an analogy). With today's tooling, it's very easy to use smaller frameworks and piece together what you need... but in go Angular, jQuery, lodash and a few other large libraries for good measure. About the only one I'm guilty of bringing in these days is moment... and most of that is because the internals of Date are sorely lacking (maybe it's time to standardize some non-mutating, moment-like ES extensions to Date already).


Ugh, I just had this debate about moment recently. I opted out of using it, as I felt it was just too big for the use case.


For what it's worth, https://bugzilla.mozilla.org/show_bug.cgi?id=1291707 is tracking the travis-ci issue.

And if you have concrete examples of other pages where there are performance problems, that would be very much appreciated!


> On add-ons, I very much appreciate Mozilla having a good review process, but those add-ons have no isolation and everything is allowed to run even in private mode.

To me, that's a good thing. I don't like walled gardens because then you, the commoner, have to beg your benevolent ruler to please allow a particular use-case to be supported in their fiefdom.


I have been using Firefox nightly builds, with 33 active add-ons including GreaseMonkey and some user scripts, as my main browser for a few months now because of the multiprocess support. It's perfect: fast, responsive and pretty stable now.

What you say about web apps and Chrome you can say about FF with multiprocess on as well. If some page takes too much CPU, you can find it in your OS task manager and kill it, and you will see which tab crashes. (TabData is a useful add-on showing how much memory pages take: https://github.com/bobbyrne01/tab-data-firefox. Don't sample too often; it slows FF down if it's active, you have dozens of tabs open, and it samples every few seconds.)

I don't know how many processes they will use by default; I configured 128. I regularly use 20-60 tabs with pinned Twitter, Gmail, Reddit, WhatsApp and some more (TabMixPlus with multi-row tabs makes it no problem) and it's a really nice browsing experience.

I used to have to restart FF every morning, sometimes multiple times a day, and when something crashed, the whole browser went down. Now you just reload the crashed tab or plugin; crashes were common a few months back and are rare nowadays. It shows a warning if some add-on is slowing FF down. I hope add-on authors will update them, but right now I just ignore the warnings because I don't see any subjective slowdown.


> To see what I mean, load this Travis log in Firefox and compare with Chrome

Wow, that one brought Firefox Dev Edition (OSX El Capitan) to a dead stop over here.


That did indeed freeze my Firefox. I had to kill it (on Linux).


Alright, I'm confused. This doesn't do anything for me. I even tried it in a fresh profile to ensure that it wasn't fixed by some add-on or setting that I was using, and tried it with e10s on and off, but it loads perfectly fine in any case. Granted, it's a bit slower than the average site for me, too, but it still loads within about 3 seconds.

I'm on Firefox Nightly, Linux, which should be sort of the combination of the two use-cases that you guys have...


The fun begins when you click on the logs below. It starts loading the log and Firefox slows down a lot.


FWIW, I tried this link in FF 47.0.1 on OS X and it loaded quickly without blocking anything else. Maybe check your plugins or try a fresh profile.


Did you click on the build job on the bottom? The page linked above is fast for me, but the actual log[1] blocks FF for a bit.

[1] https://travis-ci.org/monixio/monix/jobs/149097798


On Safari, this loads instantly for me, butter smooth.


I don't think your usage is an anomaly at all. People use open tabs like a read-it-later list, and so far I couldn't get used to a service like that (or bookmarks) because I forget that I even added them in the first place (out of sight, out of mind). Right now, Tree Style Tabs, Tab Mix Plus, Session Manager and loading tabs on demand are what help me manage my tabs well.

I have had several hundred tabs open in Firefox many a time (yes, there are valid reasons), and that affects startup time (even with loading tabs on demand) and exit time (the time taken for the process to terminate after the windows disappear). I have read about people who do the same (or a lot more).

In my observation over several years, Firefox has been able to handle many more tabs with less CPU and memory usage (aka sluggishness) than Chrome ever could. Chrome is always sluggish after opening several tabs and consumes a lot more memory. What's worse is recovery in Chrome. Restoring sessions and tabs in Firefox has been a breeze for a long time, and even as recently as a few days ago, I had to struggle doing that in Chrome (the session manager extension there is not reliable at all, nor is Chrome's own crash recovery).

On the multi-process model, Firefox is now using one process for the entire browser chrome (UI) and one process for the content. I have also dreaded that Firefox multi-process would become sluggish like Chrome, but we'll have to wait and see how the development progresses. The Firefox developers do know a lot more about Chrome's issues on this front, and I'm sure they will tread cautiously and adopt trade-offs that provide adequate security (better sandboxing), stability and performance.

To check memory use in Chrome, go to about:memory and you should be able to get details of all Chrome processes and the totals. Pressing Shift+Esc when in Chrome would also show all the Chrome tab stats (similar to a task manager window).
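
If you prefer the command line on Linux, a rough sketch like this sums resident memory across all Chrome processes (RSS double-counts memory the processes share, so treat the result as an upper bound; the process may be named chromium on some distros):

  # rough sketch: sum RSS (KB) of every process named "chrome", print MB
  ps -C chrome -o rss= | awk '{ sum += $1 } END { print sum / 1024 " MB" }'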


Periodically I keep coming back to HN to say pretty much the same thing, which is that tabs are used as a replacement for bookmarks, and that's pretty much because of the failure of browsers to provide a good bookmark UI. I've tried many tab extensions and haven't been happy with any of them.

You can easily leapfrog Chrome with a better web interface and web page helpers. I see no radically visible innovation with Firefox or Chrome for that matter.

I had an Apple PowerBook that, with age, couldn't cope with 'modern' web sites. So I took great pains to make browsing easier. Now I notice the same creep on 'modern' machines. JS may be processed faster, and pages rendered quicker, but many web resources are becoming more bloated and this ruins the browsing experience.

I have gone back to having one browser with no javascript running, just so I can read some pages, as I'm fed up with the inherent slow down. Yeah some 'appy' sites/pages are impressive.

But the bog-standard web pages that eat my computer's resources to display a few adverts alongside a passage of text or a picture are pretty dreadful. Being able to easily identify and manage these laggy sites would be a help. There are tools, but they aren't that accessible or easy to read or use.

And after running Windows 10 for the first time at the weekend, I can see how the Desktop UI and admin interface is still comparatively stuck in the stone age.

So in short I think there is plenty of room for improvement, across the board.


Session files of tabs are also better than bookmarks since they save per-tab history so you can see how you arrived somewhere.

The per-tab process switch is bad for users like us (I have 300+ tabs right now) but what is even worse in FF48 is the implementation of a walled garden for extensions.


That's a good point about tab/session history. That's another area that could see improvement. I frequently open a link in a new tab, so also having that web of history between tabs could be useful.

I've been terribly unproductive today, darting back and forth between sites. I have a tendency to close tabs if I can. But sometimes, when distracted, I open something back up, like a forum, to see if there is any change, when in truth I may have only opened it about 10 minutes earlier. A last-visited/popular-site list helper could be of use there. Or, of course, not procrastinating in the first place.


>Last visited/popular site list

Don't both Firefox and Chrome have it, right in the URL bar?


Where?

Assume you mean:

Chrome -> burger menu -> History and recent tabs.

Firefox -> History -> Recently closed tabs/windows.

Not the best or easiest method to get to that info. Nested menus are only useful for occasional functions. They are a bit of a faff.


In Firefox, go to URL bar and click drop-down list button (or press Alt+Down).

Not sure about Chrome now, I thought it was the same but turns out it doesn't work.


I get a list of sites and have no idea why they are there. They differ from most visited, and are not last visited.


The algorithm is the same as for displaying tiles in a new tab[1]. For me, it works perfectly.

1. https://support.mozilla.org/en-US/kb/about-tiles-new-tab


Oh, it's called frecency, is it? That's different from what I wanted. But at least I have an idea of what it is now. Ta.


Thanks so much for about:memory and shift+esc ... I always forget about them.


I know many Mozilla developers are "tab hoarders" with hundreds of tabs, so I am confident that your use case will get plenty of attention. :)


How do they manage all these tabs without Tab Groups then (which they removed)?


Tab Groups was moved to an extension, and has had a lot of nice improvements: https://addons.mozilla.org/en-US/firefox/addon/tab-groups-pa...

A lot of folks use https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...

Personally I use separate windows for separate tasks, so I can close a whole window when I am done with something (research, work, tabs from HN, etc.)


60-100!! That's (personally) so many. How do you know which tab is which and where it is? My colleague has about 20 or 30 tabs open in Chrome, and each time he wants me to look over his Chrome for any debugging purpose, I get frustrated that I can't find the right tab! (I'm assuming there is a better tab management tool out there.)


I use TreeStyleTab [1] and usually have more than 150 tabs open at the same time on my PC and around 60-100 on my professional computers (at home and at work). I sort them by category; then, when I have a spare day, I read dozens of news items one by one, or, when I'm in the mood, I read what I like by category. In the last 3 years I tried to switch to Opera and Chrome a few times, but they... they just suck for my requirements. Firefox lets you search by bookmark, history and open tabs, which is really useful in my case (I also have hundreds of bookmarks).

[1] https://addons.mozilla.org/en-GB/firefox/addon/tree-style-ta...


> and where it is?

Firefox has UI for searching your tabs. You focus the location bar, type "% " followed by a string (but don't hit enter!) and it searches the urls and titles of the things in your tabs for that string and shows you a list of results. Selecting one of those results will select the corresponding tab.

If you leave out the "% " it will still search your open tabs, but also your history; some of the history results can appear above some of the tab results, depending on how often you visit pages vs selecting their tabs and whatnot.

>How do you know which tab is which

Firefox always shows the site icon and first word or two of the page title in the tab; unlike Chrome it has a minimal tab size (precisely so you can see what the tab is!) and a scrolling UI so you can go swipe through the list when there are too many to fit in the window and look for the one you want if you don't want to use the text search option.

There are also various extensions that help people manage tabs, but even the built-in setup is quite usable (speaking as someone who has a lot of tabs open and does not use those extensions).

I agree that Chrome's non-scrolling UI, which just shrinks the tabs until nothing is visible in the tab and then starts not showing the later ones at all so you have to use keyboard shortcuts to even consider trying to get to them, makes using more than about 10 tabs pretty much impossible. But that's just because the UI sucks, not an inherent property of having many tabs open.


TabMixPlus to the rescue :) I set all tabs to have fixed width (so around 8 tabs per row) and to show 3 rows of tabs. Also unread tabs have title in red italic, current tab in blue bold and read tabs in black regular. Works for me :)


I usually leave several hundred tabs open across multiple browsers at the same time. New window for each project, and tabs within them. Between my own projects, and clients, and research on the go, I find it works for me. I would get little done if I wasn't using the vimium plugin to search my tabs, much like Alfred lets me search my Mac.

Another use case for lots of tabs is using the great Spaces feature in Firefox. You can fire up and maintain multiple groups of tabs and windows per project. It's a great way to switch contexts. The main reason I used Chrome/Chromium was to have a sense of which tab was a runaway process. Hoping Firefox v48 is a step in the right direction.


I use a tab suspender that will suspend background tabs after a couple minutes... I'll whitelist internal apps, and lighter pages (like HN). It helps quite a bit.

I have to agree on it being harder to track how much memory chrome is really using... also, hard to tell which process is for what tab or extension.

I haven't been a heavy Firefox user for a while; I find the opposite to be the case when trying to debug in Firefox.

For me the bigger draw to Chrome is that I happen to like the UI more... I no longer see the need for the separate search input, though sometimes the fast-paced changes to Chrome's UI can be cumbersome. That said, I work in OSX, Windows and Linux (Ubuntu Unity) pretty regularly, so it's always different somewhere.


I noticed that Chrome is more performant when heavy Javascript is involved. Firefox works better with heavy tab usage.

I usually run Firefox for general browsing and Chrome for one-off dedicated app.


Yes, I too find that Google Drive apps work better under Chrome and lead to weird errors in Firefox. So I have Chrome hanging around just for this case.


Not terribly representative, though, that browser A would do worse than browser B on browser B's vendor's web property...


I am another user with similar habits. I think separating the UI process from content is still required (since you use so many tabs as well, you must have faced UI blocking problems), though I'm not so sure about having a separate process for each page.


You can use the excellent OneTab extension for Chrome: https://www.one-tab.com/

Just click on the button if you notice a slowdown and restore the tabs later, either selectively or as a group.


Why not use bookmarks instead? That's what they're for.


Trust me, I tried it! Only the bookmarks became a forest I could not find my way into or out of. At least when they are tabs, when Firefox starts slowing down, I realize there are too many and trim some down.

That said, I use Evernote to clip URLs that will be useful in the future. The tabs that are open are transient ones, required only in the context of current research or a feature implementation, and not needed once that's concluded.

I suppose, I am a pathological reader and researcher who needs to look at all opinions, find all apps that provide a feature and try them all, dig deep into each solution etc etc. Sometimes that leads to analysis paralysis. Not at all a good thing.


Bookmarks don't save the per-tab history. With tab sessions you can go back and see what search terms and sites you used to get somewhere.


You can't see the page from the bookmark, and you forget the content given just the bookmark title. Well, maybe not you, but many people, including me, do.


I regularly have 200~300 tabs open in a single session in Chrome. Yes, it gets a little hard to find what I want, but it still runs great.


Don't dread it, it got better :) See my other post here.


Looking at https://areweslimyet.com it looks like memory consumption has been on a slight upward trend. Are these tests conducted with e10s or should we expect a massive increase?

As one of those pathological users that often keeps 50+ tabs open (somewhat manageable with the tabgroups addon), I'd hate to see one of the principal Firefox advantages go away.


I'm also one of the heavy tab users (though even more on the high end: I routinely close hundreds of tabs when I tidy up my browser) and I can wholeheartedly recommend the Tree Style Tab extension [1]. It automatically orders your tabs in a parent-child relationship in a sidebar on the side of your screen, which is invaluable when you, e.g., search for a product on Amazon and have all of the competing products open in child tabs below your search parent tab. For Hacker News, I have the front page as the parent tab, the discussion pages as children and all the links in the discussions as grandchildren, which neatly orders each context together.

[1] https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...


Tree-Style Tabs is an excellent example of implicit and automatic state structuring.

Without you as a user having to do anything other than simply open up new tabs as is natural in your workflow, you create an explicit structure which shows the relation of various browser tabs to one another. The best thing about this is that you can treat an entire tab tree or subtree as a task, and when you are done, clear it out.

Another element of TST is that it can be placed as a sidebar, which, in this world of Very Wide Screen Displays, takes up space on the side of your screen that is extraneous (for textual presentation) anyway, and frees up the precious and limited vertical space for more text.

I've tried several times, and largely given up, on explaining to Google's Peter Kasting (one of the core Chrome devs) why this is so useful. Google are apparently adopting the GNOME and systemd view that:

1. If you're a technically advanced user, you're not our target user base.

2. If you're not a technically advanced user, you're not qualified to comment on what's wrong with the product.

Somehow, that never quite seems to work out right.

Firefox's flexibility, particularly with tabs, is why I continue to use it despite some performance and functional advantages to Chrome.

(On Android, Chrome's lack of adblock makes it almost entirely useless. I've salvaged it slightly through putting a hosts + DNSMasq adblock on my router -- DD-WRT. But that's only just barely sufficient. Adverts are a complete pox on the Web now.)


What do you use for session management / syncing? I've found that the Chrome Session Buddy extension is highly preferable to the Firefox Session Manager. Why something so crucial is a hobbyist project on both ends and not internalized by the respective companies - I still wonder myself.


For session management (not sync), Session Manager [1] is an excellent extension. It's compatible with Tab Mix Plus, and I use it all the time.

[1]: https://addons.mozilla.org/firefox/addon/session-manager/


I'm a big fan of the old Panorama mode for my hundreds of tabs. Ctrl + Shift + E to display groups of tabs arranged how you like them. Panorama is now dead, but replaced by the 'Tab Groups' addon, keeping the dream alive.

https://addons.mozilla.org/en-US/firefox/addon/tab-groups-pa...


Currently e10s is only using two processes, so there should not be a massive increase in memory. Particularly, if you have many tabs open, the pages are still all just stored in a single process, so there should not be much overhead. Part of the work towards having multiple content processes will be ensuring the memory overhead is not too high.


> Currently e10s is only using two processes

There is something that I've wondered about for a while on this Electrolysis fanciness: currently Firefox is said to be single-process. So why do I see tens of "firefox" threads in htop?


You actually said it yourself without realizing it: because it's single process but multi-threaded.
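
A quick way to see this for yourself, assuming a Linux box (nlwp is the per-process thread count, so a single-process Firefox shows up as one PID with a large number next to it):

  # one PID, many threads
  ps -C firefox -o pid,nlwp,comm

(In htop itself, pressing H toggles between showing and hiding those userland threads.)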


Ah, of course.

So this means any performance increase will be coming from rewriting old code in a more efficient way, which could be done independently of e10s, right?

I mean, since the new multiple processes will not share address space, there will have to be explicit communication between processes with corresponding overhead and blocking? So a priori one would expect the new code to be slightly slower? And use more memory, since some data needs to be duplicated between processes?

What also comes to mind is that introducing e10s is like taking an OpenMP-parallel code and turning it into an MPI+OpenMP-parallel code.


"Performance" in the context of e10s means improving user responsiveness rather than throughput. For instance, you can usually smoothly scroll a page or interact with the browser UI even if the page is some kind of heavyweight mess.


Why couldn't that be achieved with threads? You can have asynchronous threading, I believe. (I get the arguments in favor of e10s, like sandboxing, and I have no beef with e10s, I'm just curious.)


What first comes to mind is the various shared memory structures and the synchronization on them.


Yes. The only advantage of e10s is that a compromised tab cannot read other tabs' memory. IMO, instead of pushing e10s they should fix the UI hiccups during loading of really big pages and on infinite loops in JS (currently it blocks the whole browser while it waits to ask the user about stopping an unresponsive script).


Does your htop display threads?


I made a simple extension for Firefox and Chrome that might be of some use to you. Like you, I had many tabs open and a hard time finding them, so I made an extension that opens a new tab listing all the tabs you currently have open; when you click one, it navigates to that tab. Simple, but very useful in my day-to-day activities.

If you are anything like me you have a number of windows open for some project or research thing and you had one extra tab that you left open as the last active tab in the window but it's unrelated to the other 10 tabs in that window. The windows menu doesn't help you because that is the tab that is listed as the title of that window.

It's free and opensource[1] for Chrome[2] and Firefox[3].

1. https://github.com/fiveNinePlusR/tabist

2. https://chrome.google.com/webstore/detail/tabist/hdjegjggiog...

3. https://addons.mozilla.org/en-US/firefox/addon/tabist/

If you try it and like it/hate it let me know what you might want put into future versions to keep it good/change your opinion of it to the better. It's still early in its development but what's implemented works well so far. Cheers!

Edit: There is a bug in Firefox for which I have submitted a patch that will allow tab titles to show up correctly even when they have been unloaded; for now it shows the URL of the tab until it has been loaded. The patch should land by Firefox 51, hopefully.


Hi, I've just installed it. I like it. I'm a tab hoarder too and I've never found a solution that works for me.

I've raised a few issues on github, feel free to ignore them!


You may not be aware of this, but although I also installed the extension and I am trying it out, Firefox has a way to manage the tab hoarder use case, built into the default install.

Using the Awesomebar and typing "%" and a tab name will find the tab. Navigating to it and clicking (or pressing enter) will open the tab in its existing window.

Docs here: https://support.mozilla.org/en-US/kb/awesome-bar-search-fire...


Creator here... the problem I had was mainly with discovery. Listing the title of whatever tab was last focused didn't really indicate what was there, at least for me. Sometimes I'd check the weather and forget to close the tab, while the other 9 tabs are related to whatever project I am working on.

If you liked the extension I'd love to know what you'd like to see. If you hated it I'd love to know why. Either way thanks for trying it out!


Glad you like it! So far I am liking the issues you raised and will likely implement them when I have some time. cheers!


From the Are We Slim Yet FAQ:

  > For each change to Mozilla's inbound repository, our build infrastructure
  > generates a build which we test and include in the graphs here.
Sounds to me like they are testing with e10s (since it is enabled in the newest releases)


Somewhat tangential, but I think that many of the difficulties they are facing with Electrolysis and this whole multi-process movement is that, in older versions of FF, everything is so tightly integrated.

IIRC, Firefox has (had?) a lot of old code written long before modern concerns were even fathomed; add-ons back in the day had access to the entire process: every tab, all the DOM, and everything shared the same namespace (shady calls from webpages were blocked somehow; not sure if it was true sandboxing or not), so AMO mods would block add-ons (or updates) that didn't adhere to their namespacing policy. I'd bet a lot of that integration is still there, and most of what we're seeing is the result of what is probably a very messy untangling.


Firefox is always fighting an uphill battle against IE and Chrome, which have major advantages in both resources and integration with their respective platforms.

Firefox thrived and grew in an environment where competitors were terrible. They lost the lead to Chrome, and unless Google really drops the ball, I don't see how they can get it back by just being better. They'd have to dramatically leapfrog the competition in a very compelling way, and since both MS and Google are investing heavily in their browsers, it isn't clear to me how they could ever do that.


Actually, on Android Firefox works much better than Chrome exactly because they are independent.

Mobile Firefox allows extensions, crucially adblockers. I don't think mobile Chrome will allow those anytime soon.


The original release was pretty bad, and I ignored it from then on until a couple of weeks ago when I gave it another shot. It's so far superior to Chrome on Android, even without extensions like Adblock. That extension really does add value though, especially with the reduction in power demands.


Agreed, it's running so much better than a few months ago when I last tried it... LastPass can now at least see the website I'm on; if they can get form filling worked out, I'd be very happy. uBlock Origin seems to be working well now too...


grr... can't click into a textarea to edit existing text... :-( back to chrome.


The original release was the only thing that turned my potato tablet into something somewhat useful.


It looks to be working MUCH better for me; I just tried it again. The last time I tried it (just a couple of months ago) it was slow as molasses and didn't work with LastPass; now it seems to be working with it, which I appreciate, to say the least (though it still won't fill the form; I may try the FF plugin directly if there is one).

They also seem to have cleaned up the tabbed interface, closer to current Chrome (though I actually preferred having Chrome tabs with the app views). I may also give the current Opera another look. All in all, much improved even in just the past 3-4 months or so since I last tried it.


I've used Firefox for a very long time on Android and I can agree with this.

Opera Mobile is another tech that I'm enjoying using more and more.


It could dramatically leapfrog the competition by returning to its roots -- modularity, power, extensibility... something for the power users.

Instead Moz executives seem set on forging all sorts of weird new business partnerships, and matching Chrome/Safari feature-for-feature. So FF becomes the sad little me-too browser, users everywhere wondering just how long their addons will continue to work...


Agreed. Firefox falls short when it competes directly with the better-backed browsers. Its niche (and current advantage) really lies in working well for power users and letting the community find new, innovative ways to enhance a browser.


It's so bad that they will actually push their loyal users to Chrome. The chromeization of Firefox has been awful and it seems like it's going to continue.


Well, I think Servo is their leapfrog but in a very different way.

Both IE/Edge and Chrome (and the current Firefox) are based on really old tech stacks with years of legacy code and design decisions. Servo, if it pans out, will be the only "real" modern browser, which should provide a boost in performance and experience that it will take Chrome and IE/Edge years to match.

It would also make iterating on the browser much faster, which could slowly erode Chrome and IE/Edge's lead.

If Firefox can capitalize and successfully market their new browser it could mean becoming the lead browser for years.


> it will take Chrome and IE/Edge years to match.

Or just a few months to fork and improve. I'd really like to see the Chrome and Firefox teams converge over Servo. But maybe Google suffers from too much NIH syndrome to do that at this point.


So, I was wondering about licensing, if this would mean that Google/Microsoft would have to open-source their browsers in order to use Servo, but seems like Servo's license (MPL) does permit linking to it from code which is under a different license.

The MPL is similar to the LGPL in that way, and KHTML is licensed under LGPL, which is what WebKit was forked from, so should be legally settled at this point, too.

I'm not a lawyer, though, so no guarantees that this is correct.


Servo is written in Rust. That isn't going to improve performance.


It should improve performance in the sense that Rust helps to produce code which can be easily parallelized.

But Servo being written in Rust isn't the argument that he's making.

Servo is built with a modern architecture, which all current browsers lack, because they all started out sometime pre-2000, before multi-core processors were really a thing, and before the web was really graphical.

What Servo mainly improves is that the rendering of HTML itself is parallelized, and that with WebRender it utilizes graphics cards quite similarly to the way videogames do, both of which result in noticeable performance improvements.


Your comment implicitly assumes that the programming language has a major impact on browser performance.

My experience with other product development tells me otherwise, and I trust that the people behind Mozilla, who are specifically building a more performant browser using Rust, are aware of what does or does not greatly affect performance.


That's not the only difference, or even an important one for performance. Servo's WebRender engine gets 60fps (or in the hundreds if uncapped) on many pages where Chrome, Firefox, Edge, etc. all struggle in the 5-15fps range.

(Of course, Servo is still a long way from production ready).


Part of me thinks it's a lack of direction which has limited Mozilla/Firefox in the past/currently(?). Like the dropping of 64-bit support (which they reversed) which I think showed a clear lack of direction on what they wanted to do. Granted this was a few years ago now but it was still clear at that point that 64-bit provided a number of useful (security) features over 32-bit software.

Maybe I'm wrong but that's just how it appears as someone looking in from the outside and who doesn't use Firefox on a daily basis.


Moz makes a few hundred million a year, should be enough to develop a browser, no? At a certain point enlarging a dev team becomes counter-productive.


Well, Google, Microsoft and Apple make a few billion a year. Undoubtedly, those three invest a much lower percentage of that into their respective browsers than Mozilla does, but it could still easily be more than Mozilla has.

Also just in general, Google can display an advertisement 24/7 on the world's most popular webpage, and bundle their browser on Chrome OS and Android. Microsoft can bundle their browser with the most popular desktop OS. And Apple can bundle their browser with iOS and OSX, which are both still pretty popular in their space.

Mozilla has none of that, and would have to invest a lot of money to get it. They can afford the occasional billboard and organize the occasional PR event, but that's about it. The entire rest of their user gain depends on them being decisively better than the competition, so that people themselves recommend Firefox over other browsers.


They have a strong image advantage too.

They are the only company to put the user's rights, privacy and best interests first and foremost.

For years they were the only ones to carry the open source flag in the browser space. They had the Mozilla Suite when other browsers were closed, they came out with Firefox which was a blast at the time, and now Servo.

They don't play to a corporate agenda, but tend to do «what is right».


While there is merit to what you say, there are also plenty of places where divide and conquer can happen with appropriate management of the browser as a whole.

You have the core rendering engine, ancillary features to the application, and the shell of the application itself. From the core rendering engine, you have several areas that are subsets, from svg parsing/rendering, parsing/processing of text/xml/html/css, audio/video, almost all the APIs that the browser offers outside rendering can be implemented by different teams (web-audio, websocket, webrtc, etc).

The communication libraries themselves can be broken off too. There is really a lot that can be handled by many developers.

Now if there were, say, 15+ developers on any one feature, that would probably be too many to be very productive. On the flip side, given the push towards Rust, there's even room for some duplication of effort.


Sadly I agree. Which is a shame, because in my opinion they _do_ actually leapfrog them both in privacy.


I see your point, but they don't leapfrog Chrome in privacy when the browser is pwned and leaks its entire process state to an attacker.

Any browser is a massive, rich attack surface with bugs, but Chrome has been cutting edge in its sandboxing, privilege separation, and overall security.

True, Chrome is and has always been a Google marketing vehicle, where the user is the product. But has Mozilla always been at the forefront of privacy, even? Something as basic as private browsing, let's see:

April 29, 2005 Safari 2.0 Private Browsing

December 11, 2008 Google Chrome 1.0 Incognito

March 19, 2009 Internet Explorer 8 InPrivate Browsing

June 30, 2009 Mozilla Firefox 3.5 Private Browsing

Apple, Google, and Microsoft all beat Mozilla to the punch.


Well, private browsing really functions as a mechanism to discourage people from routinely deleting their cookies. (Which, according to a mid-2000s report, was an extremely common user activity.) Firefox always had an option to delete history/cookies on exit.


Firefox had plugins for private browsing since 2008. Only Apple beat them.

One of Chrome's many small victories was including sensible plugins by default, whereas Firefox's best functionality had to be downloaded separately.


What kind of moving goalpost is that? Chrome 1.0 came with Incognito built in. Safari had had it for three years. Because there may have been a plugin for Firefox a few months before Chrome's release, and Firefox followed IE in shipping it built in, it's the superior privacy option?


1) I didn't say Firefox was better for privacy. I was only pointing out that comparing the browsers based on built-in privacy mode isn't apples-to-apples.

2) On a related note, Firefox left out lots of features because there were well-supported, popular plugins that covered the same thing. Whenever a browser gets built-in ad blocking, should we say that all the other browsers are worse privacy options because they chose to leave those as third-party plugins?


On 2, I would say yes. The default experience matters.


> unless google really drops the ball

They did for me recently when they changed the behaviour of the backspace key without a valid option to set it back to the way it was, so I went back to Firefox instead of futzing about with half-broken 3rd party extensions.


This probably isn't a direct response, but I've been thinking about it for a while. In my mind there are a few major reasons that Firefox is losing market share to Google Chrome.

1) Performance

These might be select regressions (Google Maps, Slack; I'll skip the Bugzilla references), but every time a user has to open Slack or Google Maps or Google Docs in Chrome, they are that much more likely to switch from Firefox entirely (ultimately due to sync issues: bookmarks, history; it just makes so much sense to use a single browser).

2) Developer Tools

I don't know how it is now, but half a year ago, when I tried to switch to the Firefox developer tools (again, for the Nth time), it would take 2-4 seconds for the console to open. Comparatively, Chrome's dev tools opened in under 0.5 seconds. As a result, even when trying to use Firefox for everything else, I still end up in Chromium almost daily.

3) Sync/Mobile

Using Google Chrome has distinct advantages when syncing with an Android device and other Google services (performance issues aside) - most of the things that I considered to be an advantage in Firefox mobile (the top button with <number-of-tabs> for tab switching) Chrome has actually copied.

Minor point: I'm almost sure there is some somewhat underhanded user-memory choice (it's just too smart and annoying to be a bug), because as someone who is often in foreign countries, my Firefox (where I am signed into Gmail, btw) always tends to give search results in the local language, while Chromium (where I'm not signed into anything) displays English ones.

Overall I really think Mozilla should focus on getting an advantage on 3). I don't think Google, and definitely not Pocket (which I wish they would unbundle from Firefox), provide a good service for syncing information / personal links / knowledge. There was a browser concept that floated by HN some time ago allowing users to organize their tabs into sessions / subjects; that could be great! (E.g., check a button to sync only my places/people/events window to my phone and leave my work stuff out.)


Stubborn FF user here:

I recently

1. Refreshed my Windows install and got rid of the Lenovo crap (automatic; just make sure you have backups of settings, incl. GPG keys etc. Documents were totally fine.)

2. Reintroduced a hosts file that redirects a few thousand domains to 0.0.0.0 (format sketched at the end of this comment).

3. Started using NoScript liberally. IIRC, both the Google front page and search results would poke my processor every few seconds even when left alone. Now Firefox's CPU usage is more sane. When you do have a number of search result tabs open most of the time, this can tax even a modern CPU.

It took me quite a while to discover this, as I don't know a good way to get per-domain processor usage, and I honestly did not believe that Google would let such major performance problems pass QA.

Now I just use the Firefox search box (Ctrl+K) to get autocomplete.
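
For reference, the hosts entries are just the null address followed by a domain, one per line (ads.example.com and tracker.example.net are placeholders here, not entries from a real blocklist):

  # %SystemRoot%\System32\drivers\etc\hosts on Windows, /etc/hosts elsewhere
  0.0.0.0 ads.example.com
  0.0.0.0 tracker.example.net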


> It took me quite a while to discover this as I don't know a good way to get per domain processor usage and I honestly did not believe that Google would let such major performance problems pass QA.

If you type "about:performance" into the URL-bar and hit enter, that should give you at least a rough overview of how individual tabs are eating up resources...


Wow, thanks!

Has this been available for a long time, or is it a recent improvement?


It is somewhat more recent. From what the internet tells me, this was introduced with Firefox 40.


You're missing a few important factors, like:

1) Advertising/promotions budget

Microsoft and Google can both easily outspend Mozilla in TV and online advertising as well as pay companies to bundle and promote their software.

2) Network effects of related products

Microsoft and Google are able to leverage their control over their own products (Windows, Google Search) to promote their software. All Mozilla has is a browser.


> Performance

I use Firefox about 80% of the time. I use Chrome for a few sites I need for work that still use Flash.

I really don't notice this performance issue. I don't use Slack, but do use Google Maps, Google Drive/Docs/Sheets and Gmail and it's all just fine in Firefox from my perspective.

I tend to not keep large numbers of tabs open (just habit more than anything) so that may be part of it. E.g. 10 tabs would be a lot for me.

I also don't sync settings. In fact I have no settings: I have firefox set to basically dump everything (cache, history) upon exit.


It's been my experience over the past couple of years, dealing with ugly web apps that abuse JS, have huge DOMs, thousands of tiny images, etc., that such abominations (hacks) will run reasonably well in Chrome while verging on unusable in Firefox. This is from someone who was ideologically wedded to Mozilla for years and more or less refused to use IE regardless of how good it got; I switched to Chrome basically solely on the basis of performance, and secondarily the strength of the dev tools and the impressive security model.

It's freaking sad that Firefox is stumbling out multi-process in 2016. This has plainly been the way to go for years. Multithreading is a disaster, it's just not a workable model for browser-scale applications.

But hey, if they could recover Firefox from the ashes of the bloated disaster that Mozilla Suite became, I have hope that the community can catch up eventually. Maybe.


> Multithreading is a disaster, it's just not a workable model for browser-scale applications.

All browsers, Chrome as much as any other, are heavily multithreaded, so this is clearly untrue.


Ok, you got me. What I said was nonsense, taken literally, I was being lazy; you're right, Chrome's heavily multithreaded too. But, there is process-level separation both between tabs/browser contexts with information of differing security sensitivity, and between nasty stuff like parsing and rendering versus basic UI etc. All the good defense-in-depth sandboxing that others have alluded to, that is the stuff of many papers.


https://github.com/i-rinat/freshplayerplugin deals with the Flash on Firefox problem.


FF performance seems to vary greatly per platform. It's nearly on-par with Chrome on Windows and good video drivers, but on Linux with crap drivers, or on an OSX laptop, it's pretty slow.


> Developer Tools

Give it another go, especially Firefox Developer Edition: https://www.mozilla.org/en-US/firefox/developer/

Its tools are fast and amazing, arguably better than Chrome's.


Except that it doesn't handle source maps in JS stack traces, which is a major pain...


Regarding language, I'm going to guess that you downloaded a localized build that set that locale as the preferred language, or that it is coming from your OS: https://support.mozilla.org/en-US/questions/949545

Chrome has this also: https://support.google.com/chrome/answer/173424?hl=en

Suggesting an "underhanded user-memory choice" is a serious claim. It might be worth comparing your browsers' language settings.


To clarify the minor point, I'm talking about the primary language of Google search results, on Chromium the front-page of search results is always English. On Firefox it's entirely Spanish or whatever other local language of my location. Yes my locale / OS defaults are set correctly and unrelated.

> Suggesting an "underhanded user-memory choice" is a serious claim. It might be worth comparing your browsers' language settings.

I'm saying two things: a) Chromium / Chrome prioritizes recognizing the same user. b) Maybe they are (accidentally?) very bad at remembering non-Chrome users. As I said, if it's an accident, I'd be surprised; try it next time you visit a foreign country.


Is it redirecting you to a local site, or using a different language on Google.com? Maybe you've done https://www.google.com/ncr (No Country Redirect) on Chrome, but not on Firefox?


No (not for /ncr). I am saying the default search (think mobile-phone home-screen search bar), if directed to Chrome, almost always tends to work (and if I tell it to switch to English once, it will remember across Wifi networks), whereas Firefox is more iffy. I'm not sure it's as serious as the OP conveys; I mean, are you really going to complain about Google failing to track you? But I've been quite annoyed to search for something and get all of my results repeatedly in a foreign language.

Anecdatal: I have seen and paid large price differences on airline tickets (Google Flights referrer?) with and without /ncr ($500++).


FF and Chrome both gave me Japanese in Japan until I did the ncr trick. I think both browsers sniff your locale in various ways.


Whoa! Firefox 48 opens the Windows proxy settings now! That is not good. I especially use Firefox so that in a corporate environment I can use different proxy settings than IE.


Was in the same corporate boat as you a while back. The FoxyProxy addon [1] is much more flexible, I recommend you switch to that since it enables quick toggling between the IE proxy and your configured ones.

[1] https://addons.mozilla.org/en-US/firefox/addon/foxyproxy-sta...


FoxyProxy also lets you set different proxies for different pages. So you can run one proxy to be able to access the corporate intranet from outside, and another to access a non-public testing webpage, etc.


It does? I just upgraded to Firefox 48 and it's still the Firefox proxy settings dialog. Windows 10 here.


Just did a quick test with a clean profile, and then I get the normal proxy window. I previously had several add-ons installed to do some more advanced proxy settings; that probably bit me with the upgrade to Firefox 48.


That window is completely different from the proxy window in IE11. At least on Windows 7.

It gives you the choice to use the System proxy, but gives you options to specify your own.


Does it still heed the HTTP_PROXY env variable under Windows?

The proxy dialog is (was?) convenient for switching between proxies quickly; the env var, of course, cannot offer that.
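
A quick way to check, sketched for cmd.exe (the proxy address is a placeholder; point it at any local proxy and watch whether requests arrive; assumes firefox.exe is on PATH):

  REM set the variable for this shell only, then launch Firefox from it
  set HTTP_PROXY=http://127.0.0.1:8080
  start firefox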


Moving to a multi-process architecture has a number of benefits, and security is certainly one of them.

While currently it doesn't really add anything from a security perspective, the ability to run rendering in a separate process will allow Firefox to very naturally support a number of operating system mitigations that it otherwise would not have, in addition to adding restrictions on the process itself (sandboxing).

I believe this will allow Firefox to start to adopt a more defense-in-depth philosophy, although I admit it's probably still a long way from actually getting there. There's no getting around the fact that security in Firefox has been rather stale, but e10s gives something to look forward to, since the multi-process architecture allows you to do so much more for defense in depth.


True. If running Firefox on Linux, I'd also recommend using the application sandboxing facilities provided by the kernel.

For example, with Firejail [1] (completely unrelated to Firefox) one can do firejail <application> to run a sandboxed instance. Firejail comes bundled with rules for many applications that limit which files each sandboxed container can see. In case of a security breach, that keeps data leaks under control. E.g., an attacker might be able to steal stuff from your Firefox instance, but not from your home directory.

[1] https://firejail.wordpress.com/
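
A minimal sketch (both invocations use standard Firejail options; the bundled firefox profile lives in /etc/firejail):

  # run with the bundled firefox profile
  firejail firefox
  # or additionally give it a throwaway home directory
  firejail --private firefox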


I think you can do some things with chroots[1] that you otherwise couldn't although I could be wrong. My knowledge on Chrome's linux sandboxing (just for comparison) is kind of limited. I'm sure there are other protections such as seccomp which you can heavily restrict in renderer processes that you otherwise couldn't.

[1] With grsecurity they can become much more chroot-jail-like. Chrome uses chroots, I believe, so it's very beneficial there.


Honestly, I can't really come up with one reason that would make a multi-process Firefox more secure than a single-process one.

The multi-process trend is sold as "crash-proof" which reads like "can't fix exception handling".


With multiple processes you can apply more strict OS controls to one of the processes. That doesn't work with threads.


Which are ...?

See the problem is always local code execution. Once you have that, it's over. There are real mitigations (W^X, ASLR, stack cookies) and then there seems to be lore ("multi-process", "sandboxing").


Sandboxing absolutely does reduce attack surface, and that isn't just "lore". Nobody in the security field considers W^X superior to kernel-enforced sandboxing.


> Which are ...?

seccomp.

Local code execution doesn't mean a thing (except for wasted CPU cycles and/or memory) if number of syscalls the process can do is limited to the bare minimum.


And the Windows equivalent (win32k lockdown)[1]. This restricts a number of win32k syscalls including all GDI ones. Chrome already uses this mitigation for renderer processes and has support for PPAPI plugin processes on their dev/beta channels.

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/hh8...


> There are real mitigations (W^X, ASLR, stack cookies)

We've had these for nearly a decade with other software. Have they completely stopped bugs/exploits in that time?

I'm not trying to take away from the usefulness of those mitigations against certain classes of exploits but the point of "lore" such as sandboxing is to promote defense in depth. If there's a buffer overflow which is exploitable then containing that in a restricted sandbox with no permissions to do anything requires more work to break out of.


> Which are ...?

I'm not an expert, but I wonder if this means you could restrict GPU access to the master process only (for the sake of hardware-accelerated compositing) without needing to expose it to the website-facing process (due to the security vulnerabilities in GPU drivers).


ANGLE (the OpenGL -> DirectX thingy) actually acts as a sanitizer for GPU commands, so in a sense browsers which use ANGLE already have some protection here. While I think complete protection is impossible through the use of ANGLE and a GPU process, I think you reduce the risk as much as you can, since you can't control the driver itself aside from what protections Windows gives you by default.


That implies running Windows, though.

My biggest problem with GPU drivers is that they stick out like a sore thumb on a hardened system. All the protection and isolation in the world won't help you when you have a stock-compiled, PaX-disabled blob loaded into your binary that communicates directly with the kernel.

For this reason and this reason alone, I am forced to basically limit OpenGL access to X.org and mpv.


> That implies running Windows, though.

Indeed. My knowledge on Linux is pretty limited but I seem to remember that Nvidia fixed something which let you use Pax/grsecurity protections you otherwise couldn't. This still implies loading a binary blob but certain kernel protections could still help you IIRC (DEP?). I could be misremembering. I can't check since grsecurity set their twitter to protected.

I'm not sure what Chrome does aside from having a separate GPU process and whether or not any sanitizing takes place. They're pretty good with stuff like that so it would surprise me if some amount of protection wasn't offered.

Edit: There are some patches from the Pax folks for Nvidia drivers which I believe help with PAX_USERCOPY[1][2]? Although that may just be for getting it working...

[1] https://grsecurity.net/~paxguy1/

[2] https://grsecurity.net/~paxguy1/nvidia-drivers-367.35-pax.pa... (example)


Those patches are for running the nvidia kernel driver in a PaX-enabled kernel. It doesn't help you protect the actual libGL.so, which my concern was about.

(Indeed, I have to use those patches otherwise the nvidia kernel module wouldn't compile)


Did you not think of sandboxing?


Multi-process is not crash-proof; it contains any code-execution vulnerability to the process that hit the edge case in question.


It's quite shocking to think how we used to get by with such small amounts of RAM and CPU power. Firefox is currently consuming 518.4 MB of RAM (7 tabs) and using 0.6% of a 12-core 3.4 GHz CPU. But I guess this is progress...


I don't really do anything with it professionally, but just for "feels" and to not completely lose touch with the hard reality (of registers and ports) I took the UT Austin course on embedded systems (https://www.edx.org/course/embedded-systems-shape-world-utau...) - which required the purchase of some real hardware for the automatically graded labs. I also signed up for the follow-up to this excellent course (https://www.edx.org/course/real-time-bluetooth-networks-shap...) starting in September - which is also going to be done on real hardware.

It's just a refresher for me, I did a lot of low-level and assembler in the 8 bit days, so it's nice to see what's current. It's a wonderful counterweight to otherwise doing very high level programming in dynamic languages and learning (more) FP (Scala). It's nice to be able to find plenty of use cases for 32 kByte of RAM on a tiny board (http://www.ti.com/tool/ek-tm4c123gxl).

I think some low-level embedded programming (directly in C on the chip, not on a high-level board that runs a full Linux OS) is ideal to keep me grounded and to remember how wasteful those many abstraction layers actually are. Yes, I know what they do and I appreciate their service - but when I compare what I get vs. how much more I put in (in gigahertz and gigabytes), I'm not convinced that there isn't a lot of waste going on that cannot, and should not, be justified and sold as the "price of progress".
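
For flavor, this is the kind of code those labs involve - a sketch for the LaunchPad linked above that blinks the red LED by poking memory-mapped registers (addresses as I remember them from the TM4C123 datasheet; treat this as illustrative, not a lab solution):

    #define SYSCTL_RCGCGPIO (*((volatile unsigned long *)0x400FE608))
    #define GPIO_PORTF_DIR  (*((volatile unsigned long *)0x40025400))
    #define GPIO_PORTF_DEN  (*((volatile unsigned long *)0x4002551C))
    #define GPIO_PORTF_DATA (*((volatile unsigned long *)0x400253FC))

    int main(void) {
        SYSCTL_RCGCGPIO |= 0x20;      /* enable the clock for Port F      */
        (void)SYSCTL_RCGCGPIO;        /* read back; the clock needs a few
                                       * cycles before the port is usable */
        GPIO_PORTF_DIR |= 0x02;       /* PF1 (red LED) as output          */
        GPIO_PORTF_DEN |= 0x02;       /* enable PF1 as a digital pin      */
        while (1) {
            GPIO_PORTF_DATA ^= 0x02;  /* toggle the LED                   */
            for (volatile int i = 0; i < 400000; i++) { /* crude delay */ }
        }
    }

The whole program fits in a few dozen bytes of flash, which is exactly the counterweight I mean.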


We also didn't used to have nice things like JITs and hardware accelerated graphics.


We didn't need them, since websites rendered just fine without all the shit they have today.

I used a site that rendered a grid of images. Literally, that was the only user-visible function. It took 5s to render when throttled to a 1 GHz i5; unthrottled, the i5 still required 2.5s.

It is lunacy. It is an artifact of having a terrible language (JS) and a terrible layout system (HTML) mixed up into a modern "standard". Plus naive developers abstracting so far away they have no idea what's going on, and partially don't care.

Wasn't too long ago someone asked (I think unironically) why HN was so "fast". As if such a page shouldn't load basically instantly.

Can you honestly say info browsing (not apps like GDocs) overall is better than it was 10-20 years ago? Ignore the increase in content. Images load faster, sure, but that's offset by all the other junk. Twitter, for instance - it's actually sluggish on a nice ThinkPad! WTF.


I mostly agree. But I don't think that JS and HTML (and CSS) are terrible for what they were meant to do, the problem is that what they were meant to do was lunacy in context. The core architecture of webpages got corrupted somewhere in their development.

What we wanted was a way to describe interactive hyperlinked documents backed by a distributed collection of servers. What we got was a virtual machine running on a virtual network. I would call the web the third generation of application layer and it's an even larger step back for some fundamental promises of computing (composition, interchangeability, others) than window managers were (second) from shells (first). Window managers gave us a better way to interact, but we lost things like piping applications together, easy to use environment variables, sharing other programs (most applications I install these days just ship with their own copies of everything; I blame Microsoft), etc. The web gives us cross-platform access anywhere features, but has terribly degraded any promise of applications working together unless they are owned by the same company. Not at all the shared interactive interlinked document layer we wanted.


HTML isn't a layout system.

And yeah, of course a grid of images is going to be fast, especially if they aren't even scaled. It's literally just a blit. What gets expensive is when you add bilinear filtering, alpha compositing, text, shadows with blur, path filling and/or tessellation...

All of those things are things people now expect from apps, native or otherwise.


There weren't any fancy shadows, compositing, etc. The CPU time was all spent calculating layouts and doing "stuff" before the images even rendered. My guess is some suboptimal code buried under a kilometre's worth of abstractions. It's not that FF is doing something slow (that I know of); it's the mess people have built on top of HTML/JS/CSS that ends up with non-junior developers creating monstrosities.


Not to mention decoding the image format; if there are lots of images, the decoded bitmaps will get swapped out, and it may be quicker to just decode them again.


>Can you honestly say info browsing overall is better than it was 10-20 years ago?

Yes, just block ads.


Actually, that's not enough.

The expectation and dynamics of ad-driven publishing have created a huge proliferation of sites that have no reason to exist. Taboola, NewsMax, Outbrain, and a slew of others, pimping nothing but crap. It's anti-information. You're dumber for having read or even seen it.

I've been keeping up a 60k+ entry blocklist (sorted and deduplicated, blocking entire domains via dnsmasq), which helps some. But even that just chips away gently.

I've got another tab open with a Guardian article, "How Technology Disrupted the Truth". It dives deep into a space I've been pointing at for a while -- that advertising isn't merely bad for having created bad UI/UX and malware distribution networks, but it's actively screwing with media's primary role of communicating true facts.

https://www.theguardian.com/media/2016/jul/12/how-technology...

Until and unless authors and publishers take the heat for dumping crap on people -- I'm increasingly a fan of the block -- that's not going to turn around.

(And yes, figuring out how to finance quality information also matters -- admitting it's a public good, and should probably be financed like one, would be a good first step.)

Ad blocking is necessary, but insufficient.


It could be argued that media's primary role is not to communicate true facts, but rather to communicate information their owner wants communicated. Regardless of whether the media is a news outlet or, I don't know, your own mouth.


I'll invoke another HN post -- Donella Meadows speaking of systems highlighted the importance of accurate feedback.

You can go further and note that all models are wrong, but some are useful. And note research which suggests that perceptual systems evolve to fitness rather than accuracy, a crucial distinction.

But the perception still needs to provide useful predictive or explanatory power over the range of experienced conditions. And if you're deliberately violating that condition, you're going to end up with some less than beneficial behaviors.

Contemporary politics in various parts of the world, and interactions with media, demonstrate this well.


I'm talking about browsing with an ad blocker. I can only imagine how bad it is without one. Twitter is still sluggish as hell, especially given that it's basically one big table of text and images.


You should compare 1999 Yahoo Mail to Gmail or Fastmail of today. I don't think you'd prefer the former.


>> Can you honestly say info browsing (not apps like GDocs) overall is better than it was 10-20 years ago?

> You should compare 1999 Yahoo Mail to Gmail or Fastmail of today.

I don't think your examples are relevant, because they are well within the realm of "apps" rather than the "info browsing" the OP talks about.


And that's mainly due to what, though? AJAX? (Ignoring other enhancements.) I'm excluding a lot of "apps", because the browser has certainly gotten more powerful (and software dev has progressed). I mean mostly content sites that aren't heavy on any real interaction, i.e. sites I "surf". Things just feel... sluggish at a UI level (not the old sluggishness of watching progressive JPEGs slowly refine).


I agreed with the top half of your post, but the bottom half is crazy talk. JavaScript and HTML by themselves aren't the problem. You mentioned the problem, but only as an addendum: shitty programmers will write shitty programs regardless of the language they write in. People like to blame JavaScript, but if the web ran on Haskell, you would be complaining about how awful Haskell (the language) is. It's not the tool, it's the person that uses the tool.


I don't think that's quite true. A huge, huge amount of effort has to go into JS engines to make them competitive with any sane language.

I'm also implying that the attitude of JS and HTML - ignore errors - seems to carry over into the users of those things. The fact that wrappers like jQuery are needed to accomplish basic tasks that should have been part of the standard makes it worse. Look at all the "shadow DOM" and other hacks around terrible performance due to HTML's model.

Even poorly written desktop apps don't seem to be as bad as common web dev. I'm not sure that most desktop devs are better trained or care more. It's just harder to screw up.


> A huge, huge, amount of effort has to go into JS engines to make them competitive with any sane language.

Most people consider Python and Ruby "sane languages", and JS has been running circles around them for years and years. Even the most trivial JS JIT beats CPython and MRI/YARV hands down.

> I'm also implying that the attitude of JS and HTML - ignore errors - seems to carry over into users of those things.

JS doesn't ignore errors. It throws exceptions for all kinds of illegal operations.

And error recovery is not a reason for CSS's performance problems. Of all the wrong reasons I've heard suggested for CSS's performance issues, that is one of the weirdest.

> Look at all the "shadow dom" and other hacks around terrible performance due to HTML's model.

Shadow DOM isn't really about performance. Are you thinking of virtual DOMs as implemented by React, etc.?


Not my idea of sane (Ruby has a command-line option for how it handles string printing...); I was thinking of more strictly typed, compiled languages.

By error handling I mean scripts don't break the page. Same as HTML - the browser tries to ignore source errors. This sloppiness spills over, I think, into the attitudes of developers. But maybe this is an invalid personal projection.

I suppose I mean virtual DOM. In other GUI platforms, can't you usually just provide a command to temporarily stop painting, then resume after you've made changes that need recalculating?
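
(On Win32, for instance, I'm thinking of the WM_SETREDRAW idiom - real messages, but sketched usage with a hypothetical helper:)

    #include <windows.h>

    /* Suspend painting, batch many layout-affecting changes,
     * then repaint once at the end. */
    void batch_update(HWND hwnd) {
        SendMessage(hwnd, WM_SETREDRAW, FALSE, 0);
        /* ... mutate lots of child controls / content here ... */
        SendMessage(hwnd, WM_SETREDRAW, TRUE, 0);
        InvalidateRect(hwnd, NULL, TRUE);  /* one repaint for everything */
    }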

But perhaps web dev isn't special; sure there's plenty of terrible server software written and maybe I'm poorly extrapolating from too little experience.

The end result is that, day-to-day, I can count on basically every desktop app to be fairly responsive. Whereas loading any given content (low interaction) website has a high chance of feeling sluggish to operate. When the actual functionality of said websites is fairly unremarkable, it makes me think perhaps the platform has some responsibility for it, somehow.

Edit: I'll admit I'm armchair engineering. I haven't done serious web dev in over 10 years. Just extrapolating based on some observations and how the JS community seems to be, overall. I might be way off here.


That and much of it is cache that is aware of total RAM.


I remember the days of Firefox 2, when it consumed even a bit more than that, which was a disaster considering the amount of RAM available back then (and I'm not even talking about speed here).


CSS and JS make for big sites...


Multi-process or not, 48 definitely feels a lot faster here! Looking forward to more...


I forced e10s on 47, and it's still enabled in 48. In both I see random beachballs (OS X) hitting refresh in one tab and moving to another, and sometimes spinning wheels trying to navigate to a new tab.

I don't think there are any real performance improvements in 48 other than the improvement you'll get from restarting a long running process.


Have you tried with addons disabled? Your experience with E10s largely revolves around which addons you're using.

Some addons use compatibility shims with multi-process, and this can significantly harm performance in some cases (even potentially making it worse than with e10s disabled).

This site is useful regarding the addon compatibility: http://arewee10syet.com/


I tried it on 49 Dev. It seemed to work nicely and felt much faster than 47 in my quick testing, but that was just a first impression.

Unfortunately I couldn't use my addons with it, and together with the constant updates it was a no-go for me.


I wonder where Rust and Servo fit in here. Will they be able to "swap out" some parts of Firefox for ones written in Rust?


There are already bits of Rust code in newer builds of Firefox; I think they'll ship first in 47 or 48. https://wiki.mozilla.org/Oxidation

Servo is a fantastic research project but it will be a while before it is "production ready".



They have already done so: mp4parse-rust is a replacement for the old MP4 metadata parser.

https://github.com/mozilla/mp4parse-rust


I'm waiting for someone to fork Firefox and strip out anything not totally necessary to strictly browse 99% of the web, to make it light, fast, simple and memory-efficient. They could even rename it something familiar... something that envisions lightness and speed... like "Fire Bird".... Or better yet, "Phoenix"!


> I'm waiting for someone to fork Firefox and strip out anything not totally necessary to strictly browse 99% of the web, to make it light, fast, simple and memory-efficient.

A version of Gecko that contains only the things necessary to browse 99% of the Web exists. It's called Gecko.

Sites use the Web features.


What stuff can they remove to make Twitter.com fast and light? I'm guessing a lot of code is there to mitigate the insane shit modern "web developers" come up with.


That's SeaMonkey, and paradoxically, it's the old full Mozilla suite pre-Phoenix but with current features, and it's much faster than Firefox.


I love SeaMonkey (I have to reinstall it on this machine as I haven't used it in a while), but it's also behind the Firefox dev cycle and doesn't offer the same level of extension coverage that Firefox does.

It'd certainly be very interesting if they could maintain the latest browser fixes (even with less extension compatibility), but I doubt the team has the bandwidth to keep that up.

Just installed it. It doesn't support High DPI displays on Mac OS X. Unfortunately, that's a bit of a blocker for modern Mac laptops (everything looks fuzzy).


Report a bug, or look at about:config settings.

https://fedoramagazine.org/how-to-get-firefox-looking-right-...

Apply those to SeaMonkey.


Nobody got the joke. :/


Can anyone comment on what the future is for single-process FF? Will it remain the default for a long time to come, be behind a feature flag, or will it be unsupported like `--single-process` is in Chrome?

As a personal preference, I like my browser (and most of my applications) to be a single process.


> As a personal preference, I like my browser (and most of my applications) to be a single process.

Honest question: Why is this your preference? What difference does it make whether an app is using 1 process or 10 behind the scenes?


Depending upon your operating system of choice, there are many things that are easier to apply to single processes as opposed to multiple (killing, affinity/priority, sandboxing/permissions, etc). Also, anecdotally I've found there is overhead for each tab process (of which I may have a ton of very tiny ones in my tree tabs). One of my big use cases is that I want to embed the browser in my software. Granted Gecko is not very embeddable in its current state, but the general move towards multi-process browsers often prevents my app from being self-contained (e.g. Electron apps).
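
For example, a POSIX sketch of the "easier to manage" point (the PID is hypothetical; the point is that it's one call for one process, versus enumerating a whole process tree):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    int main(void) {
        pid_t browser_pid = 12345;  /* hypothetical single-process app */
        /* Lower its scheduling priority in one call; a multi-process
         * browser needs this applied to every child it spawns. */
        if (setpriority(PRIO_PROCESS, browser_pid, 10) != 0)
            perror("setpriority");
        return 0;
    }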

There are tradeoffs. If all of the evergreen browsers are headed this route, so be it, I just want to know.


> there are many things that are easier to apply to single processes as opposed to multiple (killing, affinity/priority, sandboxing/permissions)

Sandboxing is easier with more processes, at least if the processes are split up to make this easier. One of the reasons for e10s is to allow for sandboxing, so it is now easier to apply sandboxing rules to Firefox.

> anecdotally I've found there is overhead for each tab process (of which I may have a ton of very tiny ones in my tree tabs)

This is understood and one reason why Mozilla is conservative here.


I use tab tree and typically have a couple of hundred open tabs...


But that doesn't really answer my question: Why does it matter if each one uses its own process?


A separate process has an overhead over just using a separate thread. If you put each individual tab into an individual process, you will have this overhead for each individual tab, meaning that you're much more limited in the total number of tabs that you can have, no matter what those tabs have loaded.
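
Spawn cost is only part of it (each process's own heap, allocator arenas, etc. dominate in a browser), but even that part is easy to measure; a rough POSIX micro-benchmark sketch (compile with -pthread; numbers vary wildly by OS):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    static void *noop(void *arg) { return arg; }

    static double ms(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void) {
        enum { N = 200 };
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {       /* spawn + reap N processes */
            pid_t pid = fork();
            if (pid == 0) _exit(0);
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d processes: %.1f ms\n", N, ms(t0, t1));

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {       /* spawn + join N threads */
            pthread_t th;
            pthread_create(&th, NULL, noop, NULL);
            pthread_join(th, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d threads:   %.1f ms\n", N, ms(t0, t1));
        return 0;
    }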


Is that overhead inherently high, or does it just happen to be high given browsers' current implementations? My understanding is that the inherent overhead of an OS process is fairly low, and that it would be possible to implement process-per-tab in an efficient way, but I don't know enough about the low-level details of how that works.


You'll find the e10s roadmap here: https://wiki.mozilla.org/Electrolysis#Schedule_and_Status

The plan is to make it the default in Firefox 51 on desktop, assuming the staged roll out is successful.

Also, the current approach will not considerably increase resource usage for heavy tab users as FF will only run one content process shared across all tabs.


Thanks. I was also hoping for information on the future for the single-process side beyond just when it will no longer be the default.


I've been keeping an eye out for any such announcements, and from what I can tell, there is no current plan. They do have a GUI-toggle for it, so it should stay around at least for a little while, and with Mozilla even still supporting Windows XP, I think, it's valid to assume that they will continue supporting it even for a longer while, but yeah, can't seem to find anything official at this point.


Default Firefox user here, although I do have Chrome installed on many of my machines. All around me people are now (or have been for some time) Chrome users. I just cannot make the switch. I don't know why - maybe it's the sharp angled tab UI, maybe it's the slightly 'off' font rendering. Maybe I don't trust google. Whatever it is, I keep using Firefox as my main Browser, and mostly I'm happy.

A few things would make my Firefox life much easier though:

- The ability to launch multiple Firefoxes with the same profile but completely separate processes, so when one Firefox crashes, the others are left alone. At the moment, no matter how many Firefox windows you have open, they are all children of the first Firefox you started.

- An equivalent of Chrome's --app=[URL] startup parameter (and the accompanying no-URL-bar, custom icon, and unique window location/size settings). This is a wonderful feature for web apps, and effectively transforms them into (almost) desktop apps in appearance. If Firefox had that, I'd be over the moon.

- A way of allocating a session when you start it for "X project" and then being able to save off all those tabs in one action to a folder or tab "startup list". I'd like this to be native, not an add-on I'd have to keep track of or suffer incompatibilities with when the maintainer loses interest in it.

Finally, now that Windows and Linux ship with good default browsers, and Firefox (in the main) is a manual install, I can't see it recouping its percentage share. It feels like the slow death of the original Opera, all over again.


> - A way of allocating a session when you start it for 'X project' and then be able to save off all those tabs in one action to a folder or tab 'startup list'.

The existing "Bookmark all tabs" option that creates a bookmark folder from all the tabs in a window and the existing option to open all the tabs in a bookmark folder sort of address this use case, unless 'X project' needs multiple windows...


Just makes me more excited for Servo


Why? Servo is just a proof of concept, it won't be the future of Firefox.


It's definitely not a proof of concept.

> Our long-term plan is to:

> - Incrementally replace components in Gecko with ones written in Rust and shared with Servo.

> - Determine product opportunities for a standalone Servo browser or embeddable library (e.g., for Android).

https://github.com/servo/servo/wiki/Roadmap


Servo is definitely going places but it's not Firefox. It's not looking to be a Gecko replacement, and Gecko is too deeply embedded in FF to swap it out. Sharing code sounds like a good idea though.


For example, another Mozilla intern I know is working on the project to replace Gecko's CSS style system with Servo's (Stylo).

Check the "Oxidation" page on the wiki: https://wiki.mozilla.org/Oxidation


I get how a multi-process architecture is good for stability, but why is multi-process necessary for UI responsiveness? Surely the UI is already rendered on its own thread; why would moving that thread to a separate process help anything?


Firefox has a UI that is rendered from markup and JavaScript just like the pages displayed in the tabs. This technology is called XUL. The Gecko layout engine renders both XUL and HTML styled with CSS.

Traditionally this was all handled in one process. The tabs themselves are like fancier iframes.

If JavaScript in a window did not yield (say, it was stuck in a tight loop), it could make the interactive elements of the UI unresponsive (menus, modal dialogs, the right-click menu, etc.).

To combat this, Firefox has a global timer in the whole-browser JavaScript that fires and interrupts long-running scripts to display the dialog box that says: "A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete." You might even have run into it. This lets you kill the script, which gives you the UI back but may break the offending webpage.

By making Firefox multi-process, this hack is no longer needed.

Also, you can imagine that a very JavaScript-heavy web application like Google Docs, Slack, or Gmail fires so many JS events and runs so much code that, when the tab is active on slower systems, Firefox's own UI lags. The web application is always doing something, or doing it too frequently, and Firefox has a hard time getting a word in edgewise.

Such heavy interactive pages will no longer negatively impact the browser, especially on multi-core systems.
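
A toy way to see the old problem (my own sketch; nothing like Gecko's real event loop): UI and content share one queue, so a greedy task starves the click handler queued behind it.

    #include <stdio.h>

    typedef void (*task_fn)(void);

    static void page_script(void) {          /* a JS-like tight loop */
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 1000000000UL; i++) x += i;
        printf("page script finally done\n");
    }

    static void ui_open_menu(void) { printf("menu opened\n"); }

    int main(void) {
        /* One process, one loop: the menu cannot open until the page
         * script yields. With e10s they live in separate processes,
         * so the UI loop keeps servicing its own queue. */
        task_fn queue[] = { page_script, ui_open_menu };
        for (int i = 0; i < 2; i++) queue[i]();
        return 0;
    }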


Without e10s, the UI is in fact not rendered on its own thread.

There was a lot of discussion about whether to try to move the UI to a separate thread or separate process, and the decision was that separate process has a _much_ lower risk of bugs due to racing memory accesses and whatnot. Basically, it removes a bunch of ways that you could accidentally shoot yourself in the foot while trying to retrofit threading onto a large existing C++ codebase....


The annoying problem of being unable to use the Enter key to select the first item shown in the address bar while typing is back.

Using OldBar on FF48 on a Mac:

Type text into the address bar - press Down to select the first entry.

Hit Enter - nothing happens.


It's ridiculous that they still don't have sandboxing years after Chrome, Safari, and IE shipped it. It's just negligent as a browser developer to not have that basic measure of security in place.


I don't think you understand what you're complaining about. Each browser has had different types of sandboxing. IE's approach is to run the whole process (just the one) in Protected Mode. Chrome goes a step further and sandboxes each tab in its own process. Safari just sandboxes plugins. Firefox, prior to Electrolysis, sandboxes JavaScript, media playback containers, and plugins; post-Electrolysis it sandboxes processes as well.


It's the lack of defense in depth which I would say is disappointing more than anything. Firefox prior to e10s pretty much ran everything save for NPAPI plugins in the same process. Perhaps it is "sandboxed" in the code, but in practice I'm not sure you could call it a sandbox from the defense-in-depth standpoint. Simply running stuff in a separate process doesn't count for much if that process is not restricted in any way.

When you contrast that with Chrome, which uses basically every operating-system mitigation in addition to its sandboxing, the difference really is striking.

I'm looking forward to the future of e10s Firefox, since it now enables them to move forward with more advanced security mitigations and better defense in depth. I believe Mozilla released a plan for the future of these things which showed what they want to do step by step (e.g. plugins first, etc.).


> I believe Mozilla released a plan for the future of these things which showed what they want to do step by step (e.g. plugins first, etc.)

Flash and Media Plugins (video decoders, EME/DRM) have already been sandboxed for several releases. There is a content sandbox in the development versions of Firefox. Of course it won't ship before e10s is considered stable, because that's a hard prerequisite for it. The amount of protection also varies by operating system (Windows and Mac OS X are pretty OK, Linux is still pretty crappy) but obviously that is improving week by week.


> Flash and Media Plugins (video decoders, EME/DRM) have already been sandboxed for several releases.

Firefox provides its own sandboxing now? Flash used to use a subset of the Chrome sandbox, but that was restricted to the 32-bit version of the browser. As far as I was aware, Firefox just ran it in the plugin-container process for crash protection and nothing else (if Protected Mode wasn't being used, or if you were on 64-bit Firefox). Does Firefox now make use of OS mitigations and integrity levels for sandboxing the plugin process?


Like you say, Adobe's Flash sandbox (aka "Protected Mode", based on Chrome's sandbox library) only supports 32-bit Windows. Mozilla wrote its own plugin sandbox for 64-bit Windows because we didn't want Firefox users to lose sandboxing just because they switched from 32-bit to 64-bit. Adobe's and Mozilla's sandboxes don't use all the same mitigations, and some Flash content is currently broken in 64-bit Firefox.

Here is the Firefox bug tracking the 64-bit sandbox work:

https://bugzilla.mozilla.org/show_bug.cgi?id=1165891


> Perhaps it is "sandboxed" in the code but in practice I'm not sure you could call it a sandbox from the defense in depth standpoint. Simply running stuff in a separate process doesn't count for much if the separate process is not restricted in any way.

Exactly this.


By that definition Chrome is the only browser that sandboxes. But your comment was on how ridiculous it was that Firefox was the only one not doing it.

People's hate of Mozilla is very similar to, and as misguided as, their hate for Microsoft, and it really shows in your original statement that they can do no right.

Instead of a congratulatory "welcome to the club (of one)", it's "why weren't you a member all along?"


> By that definition Chrome is the only browser that sandboxes.

I believe most Chromium-based browsers could also fall under that category, although I admit that's just being pedantic.

Furthermore, at least Edge and, to a lesser extent, IE(11) do have some sandboxing whose purpose is to enhance security. Their (renderer) processes run at a low integrity level and are run within an AppContainer. On IE11 this is enabled through the use of Enhanced Protected Mode with 64-bit processes, which allows it to use AppContainers even with the desktop browser. Edge always uses AppContainers AFAIK.

I'm not sure it's sandboxed to the same extent as Chrome, but it is a level of defense in depth. Edge also uses some security mitigations that Chrome does not, such as CFG (Control Flow Guard), although that's not dependent on a sandbox, so CFG, barring performance issues, could be used in any browser, sandboxed or not.


Are you not aware of how Safari works?

https://trac.webkit.org/wiki/WebKit2


I am, would you like to clarify whatever point it is that you're trying to make?


Why are you claiming that Safari uses a single process for everything except plugins?


I was talking about sandboxing. Multi-process rendering is not the same thing and does not intrinsically sandbox an app or make it more secure.


But Safari uses multiple processes and sandboxes those processes. You keep claiming that there is some difference between it and Chrome but you can't say what it is.


Safari is a sandboxed app, but that covers everything; I have never read about individual process-level sandboxing in Safari beyond the plugin sandbox. Are they using XPC, or did they roll something custom?


WebProcess does have its own, more restrictive sandbox, although it's not as tight as Chrome's:

https://github.com/WebKit/webkit/blob/master/Source/WebKit2/...

For IPC they use something custom (part of the WebKit repo) called "CoreIPC".


That makes sense given that it's open source.


> Safari just sandboxes plugins

That hasn't been true for at least 5 years. Safari has used a sandboxed, multi-process architecture for Web content since version 5.1.


My understanding is that kernel-level sandboxing on OS X is per-application, not per-process, unless you dispatch XPC services. Does Safari utilize XPC for rendering? If not, then the processes aren't sandboxed from one another except by some mechanism internal to Safari, which is entirely possible.

I was referring to the internal sandboxing Safari does to isolate plugins from everything else.


> Does Safari utilize XPC for rendering?

Web content and plug-in processes are XPC services, yes.


IE has done process isolation too since IE8, not just relying on low-integrity mode for a single process.


The IE process has run in Protected Mode since IE7. IE10 introduced Enhanced Protected Mode and AppContainer sandboxing.

The multiprocess architecture in IE8 isn't an isolation layer as the browser can and does frequently render multiple tabs under the same process. It's not uncommon to hear reports of 40 processes for 2 tabs or 40 tabs for 2 processes.


It's certainly unfortunate, but "ridiculous" is a bit much. It's just a fact of history. FF is based on a very old codebase, from before sandboxing and multi-process architectures were a thing in browsers. Chrome is much younger and was designed with those features in mind. And while IE is also an old codebase, MS can devote far more resources to making these complex changes than Mozilla can.


This. Greenfield development makes it far easier to implement multiprocess and sandboxing. Try contending with a decade-old extension ecosystem and a legacy extension API that provides direct access to browser internals.


Are there any studies which show process sandboxing to improve security in practice?


I'm not sure about any studies, only my own observations and attempts at breaking browsers. It depends on how the sandbox is actually constructed, and what information or level of control the attacker is attempting to obtain. But if we go by "being able to spawn calc.exe on someone's computer": if you simply look at the amount of effort needed to attack a program like Chrome versus Firefox and successfully break out of the sandbox - yes, when done correctly, it adds a good layer of defense in depth.

Chrome for example doesn't even let render processes touch OpenGL. All the rendering and layout logic for the page is computed in a separate process, and draw commands are instead sent over an IPC pipe to a controlled process, which actually issues the draw commands to the GPU. (If my memory is still correct, anyway.) Given the complex layout and rendering engines in browsers, this is good to have - they're almost purely computational logic, so they don't need many capabilities like "open file" or "spawn process". It's a sort of forced realization of real capability-based security, like EROS or Capsicum.
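
The shape of that design, as a toy POSIX sketch of my own (a one-byte "command buffer"; the real thing is a full validated command-buffer protocol): the renderer child only serializes commands into a pipe, and the trusted side validates them before touching the GPU.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    enum cmd { CMD_CLEAR = 1, CMD_DRAW_QUAD = 2 };

    int main(void) {
        int fd[2];
        if (pipe(fd) != 0) return 1;

        if (fork() == 0) {           /* "renderer": sandboxed, no GPU */
            close(fd[0]);
            unsigned char cmds[] = { CMD_CLEAR, CMD_DRAW_QUAD, 99 };
            write(fd[1], cmds, sizeof cmds);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                /* "GPU process": trusted side */
        unsigned char c;
        while (read(fd[0], &c, 1) == 1) {
            switch (c) {             /* validate before executing */
            case CMD_CLEAR:     puts("glClear(...)");      break;
            case CMD_DRAW_QUAD: puts("glDrawArrays(...)"); break;
            default:            puts("rejected bogus command");
            }
        }
        wait(NULL);
        return 0;
    }

A compromised renderer can only emit commands; anything malformed (like the 99 above) is rejected on the trusted side.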

Getting outright code execution (calc.exe) is only useful once you also have a way to escape the sandbox, after you've got code execution in the process you exploited. And on (all?) OSs, this is enforced pretty much by the kernel and many other things. So you need a kernel exploit, with a viable triggering mechanism from within the sandbox, on top of the browser exploit if you actually want to break out further.

In contrast, in Firefox et al., once you've exploited the single process rendering your page, you have full access to the whole system at the privilege level of the application. There are no restrictions on what your payload can do, so spawning calc.exe is trivial. This is also why multi-process is a necessary, but not sufficient, part of a sandboxed design. Firefox still has a huge, huge amount of ground to cover to catch up to Chrome, even once it's gone fully multi-process.

That said, none of these attacks are impossible, even with Chrome. They only mitigate/ban certain exploit mechanisms as a consequence of design, making things much harder, but fundamentally you can still get by. With a few infoleaks and one or two good bugs, you can take the cake. And, purely by the fact these projects are so large, those things exist. But Chrome has a much higher barrier to full compromise, I'd say, and it had the advantage of being designed that way from the start.

The next step is to do things like enforce very fine-grained control flow integrity over the whole browser, which will help stop code-reuse attacks (e.g. ROP/JOP), thus killing a whole class of exploitation mechanisms outright. grsecurity's RAP work has already been tested on the whole of the Chromium code base, and has excellent performance in general. Hopefully in time something similar can come to a wider audience.


> Chrome for example doesn't even let render processes touch OpenGL.

Except on Android, where I believe it does in fact let them do that.


No, not even on Android[1]. What was different before was that there wasn't a separate GPU process but OpenGL was executed in the browser process (in process command buffer). In any case the renderer hasn't been working with GL directly on all platforms (outside of tests) for at least a few years now.

[1] https://cs.chromium.org/chromium/src/content/renderer/render...


Ah, I stand corrected.


Chrome took code from WebKit, and WebKit itself descends from KHTML, the engine of the Konqueror browser (a KPart) from the KDE folks on Linux/Unix.


Yep, WebKit's lineage dates to 1998, and Firefox (2002) was such a radical departure from the Mozilla suite that it's more than a little ridiculous in 2016 to handwave away Chrome's (2008) advantages as "greenfield". Microsoft just released a brand spanking new browser. Firefox is a great product, but its deficiencies are its deficiencies.


Chrome didn't have extensions when it came out. Neither did Edge.


The rendering engine is not the part that makes multiprocess Firefox hard. Gecko has had the ability to do multiprocess stuff for years; all the content processes on B2G were multiprocess Gecko.

The hard part that took this long has been updating the browser UI to not directly poke at the web content (which is now in a different process), and not breaking too badly all the extensions which like to do that sort of thing.


It was common for security-savvy users of Firefox to combine it with third-party solutions like Sandboxie, DefenseWall, and AppGuard, and later browser VMs. A sandbox plus NoScript, Flashblock, Adblock, etc. actually made it a stronger option than competitors that were just sandboxing renderers, JS, and so on.


Use a VM and sandbox it yourself


Roll your own Android? What a stupid statement.

How about airgapping every website you visit by buying a new computer?

I hope there's an invisible ;) in there somewhere.


A bit of a ;)

My advice used to be "run it as its own user"



