I hoard tabs and easily reach 40 in a single session; it is typical for me to have 60-100 tabs open at any point in time. I find that Firefox handles this much better than Chrome does. Any time the tab count exceeds 15 or so in Chrome, my whole system slows down and freezes, and I am forced to close some tabs and restart Chrome to regain some sanity. Often a system reboot is the only option.
In Firefox the same system slowdown happens, but much more rarely than in Chrome, and usually only after I have had nearly 200 tabs open and running for a long time. In such cases I just restart Firefox, which presents me with the restore window, a much saner way to select only the tabs I want. Even then, when I restore, it does not load all the tabs, only the active ones, so system performance is not impacted.
The only time firefox becomes slow even on startup is when I have restore windows several levels deep (yes, there is such a thing :) ) and I believe I am wholly responsible for that and not firefox.
So it really surprises me when people say Chrome is more performant.
Also, maybe because of familiarity, I find Firefox's Developer Tools more usable than Chrome's. But what do I know? I am a server-side person dabbling in UI for my personal projects, not a web designer :)
I dread Firefox following in the footsteps of Chrome!
Oh, and not to mention the scores of chrome processes that pollute the process list!
When I run the `ps` or `top` command, it is difficult to locate the process I am actually looking for. It is also much, much harder to estimate how much memory and processor time Chrome is really consuming when there are so many Chrome processes running. Maybe I am doing it wrong; someone please enlighten me.
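For what it's worth, you can sum the per-process numbers yourself. A minimal sketch (the `ps` flags and the process name `chrome` are assumptions; adjust for your system), using canned sample output so it is self-contained:

```python
# Sum Chrome's resident memory (RSS) across all of its processes.
# On a live system you would capture real output, e.g.:
#   import subprocess
#   ps_output = subprocess.check_output(
#       ["ps", "-C", "chrome", "-o", "rss="], text=True)
# Here we parse canned sample output (RSS values in KB, one per process).
ps_output = """\
102400
51200
204800
"""

total_kb = sum(int(line) for line in ps_output.split())
print(f"Chrome total RSS: {total_kb} KB ({total_kb // 1024} MB)")
# → Chrome total RSS: 358400 KB (350 MB)
```

Note that naively summing RSS overcounts somewhat, since the processes share pages of the same binary and libraries.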
Chrome is designed for web apps. If you keep several apps open in Firefox, like, say, Gmail, FastMail, Slack, Gitter, Facebook Messenger, WhatsApp Web and Google Music or YouTube, pretty soon Firefox ends up being really, really sluggish. And since I keep these apps pinned, the startup experience for Firefox gets horrible.
Firefox also has problems with many services, where loading a single link can make your whole browser unresponsive for a long time or even crash it.
To see what I mean, load this Travis log in Firefox and compare with Chrome: https://travis-ci.org/monixio/monix/builds/148774867 ; and this is just one sample, there are others that bother me, like the OpenTSDB UI.
Personally I don't keep 200 tabs open, because I'm focusing on just a few at a time. If I want to go back to something valuable, I use bookmarks (or more recently Pinboard.in). This is because tabs are hard to manage and search, unless you have one of those fancy extensions, but I dislike those as well.
On add-ons, I very much appreciate Mozilla having a good review process, but those add-ons have no isolation and everything is allowed to run even in private mode.
The fact is, if I can't comfortably run my web apps in Firefox, I'll keep going back to Chrome. And yes, I keep trying to use native apps, like Thunderbird and Adium/Pidgin. It isn't working out.
I think it is unfortunate that insanely heavy web applications, which can kinda get away with it in Chrome, are forcing everyone to Chrome.
It's people writing stuff, finding it slow in all browsers, then optimizing it for Chrome only (basically changing the stupid stuff they did that ended up slow in Chrome but maybe fast in other browsers, but not the stupid stuff that was fast in Chrome but slow in other browsers) and shipping the results.
Just the same, it's hard to convince people not to bring in the jungle for the banana (to appropriate an analogy). With today's tooling, it's very easy to use smaller frameworks and piece together what you need... but in go Angular, jQuery, lodash and a few other large libraries for good measure. About the only one I'm guilty of bringing in these days is Moment, and most of that is because the built-in Date API is sorely lacking (maybe it's time to standardize some non-mutating, Moment-like ES extensions to Date already).
And if you have concrete examples of other pages where there are performance problems, that would be very much appreciated!
To me, that's a good thing. I don't like walled gardens because then you, the commoner, have to beg your benevolent ruler to please allow a particular use-case to be supported in their fiefdom.
What you say about web apps and Chrome you can say about FF with multiprocess enabled as well. If some page takes too much CPU, you can find it in your OS task manager and kill it, and you will see which tab crashed. (Tab Data is a useful add-on showing how much memory pages take: https://github.com/bobbyrne01/tab-data-firefox. Don't sample too often; it slows FF down when it's active and you have dozens of tabs open with it sampling every few seconds.)
I don't know the number of processes they will use by default; I configured 128. I regularly use 20-60 tabs with pinned Twitter, Gmail, Reddit, WhatsApp and some more (TabMixPlus with multi-row tabs makes it no problem) and it's a really nice browsing experience.
I used to have to restart FF every morning, sometimes multiple times a day, and when something crashed the whole browser went down. Now you just reload the crashed tab or plugin; crashes were common a few months back and are rare nowadays. It shows a warning if some add-on is slowing FF down. I hope add-on authors will update them, but right now I just ignore the warnings because I don't see any subjective slowdown.
Wow, that one brought Firefox Dev Edition (OS X El Capitan) to a dead stop over here.
I'm on Firefox Nightly, Linux, which should be sort of the combination of the two use-cases that you guys have...
I have had several hundred tabs open in Firefox many a time (yes, there are valid reasons), and that affects startup time (even with loading tabs on demand) and exit time (the time taken for the process to terminate after the windows disappear). I have read about people who do the same (or a lot more).
In my observation over several years, Firefox has been able to handle many more tabs with less CPU and memory usage (and hence less sluggishness) than Chrome ever could. Chrome is always sluggish after opening several tabs and consumes a lot more memory. What's worse is recovery in Chrome. Restoring sessions and tabs in Firefox has been a breeze for a long time, while even as recently as a few days ago I had to struggle doing that in Chrome (the session manager extension there is not reliable at all, nor is Chrome's own crash recovery).
On the multi-process model, Firefox is now using one process for the entire browser chrome (UI) and one process for the content. I have also dreaded that multi-process Firefox would become sluggish like Chrome, but we'll have to wait and see how the development progresses. The Firefox developers know a lot more about Chrome's issues on this front, and I'm sure they will tread cautiously and adopt trade-offs that provide adequate security (better sandboxing), stability and performance.
To check memory use in Chrome, go to about:memory and you should be able to get details of all Chrome processes and the totals. Pressing Shift+Esc when in Chrome would also show all the Chrome tab stats (similar to a task manager window).
You can easily leapfrog Chrome with a better web interface and web page helpers. I see no radically visible innovation with Firefox or Chrome for that matter.
I had an Apple PowerBook that, with age, couldn't cope with 'modern' web sites, so I took great pains to make browsing easier. Now I notice the same creep on 'modern' machines. JS may be processed faster and pages rendered quicker, but many web resources are becoming more bloated, and this ruins the browsing experience.
But the bog-standard web pages that eat my computer's resources to display a few adverts alongside a passage of text or a picture are pretty dreadful. Being able to easily identify and manage these laggy sites would be a help. There are tools, but they aren't that accessible or easy to read or use.
And after running Windows 10 for the first time at the weekend, I can see how the desktop UI and admin interface are still comparatively stuck in the stone age.
So in short I think there is plenty of room for improvement, across the board.
The per-tab process switch is bad for users like us (I have 300+ tabs right now) but what is even worse in FF48 is the implementation of a walled garden for extensions.
I've been terribly unproductive today, darting back and forth between sites. I have a tendency to close tabs if I can, but sometimes when distracted I open something back up, like a forum, to see if there is any change, when in truth I may have only opened it about 10 minutes earlier. A last-visited/popular-sites helper could be of use there. Or, of course, not procrastinating in the first place.
Don't both Firefox and Chrome have it, right in the URL bar?
Assume you mean:
Chrome -> burger menu -> History and recent tabs.
Firefox -> History -> Recently closed tabs/windows.
Not the best or easiest method to get to that info. Nested menus are only useful for occasional functions. They are a bit of a faff.
Not sure about Chrome now; I thought it was the same, but it turns out it doesn't work.
A lot of folks use https://addons.mozilla.org/en-US/firefox/addon/tree-style-ta...
Personally I use separate windows for separate tasks, so I can close a whole window when I am done with something (research, work, tabs from HN, etc.)
Firefox has UI for searching your tabs. You focus the location bar, type "% " followed by a string (but don't hit enter!) and it searches the urls and titles of the things in your tabs for that string and shows you a list of results. Selecting one of those results will select the corresponding tab.
If you leave out the "% " it will still search your open tabs, but also your history; some of the history results can appear above some of the tab results, depending on how often you visit pages vs selecting their tabs and whatnot.
>How do you know which tab is which
Firefox always shows the site icon and the first word or two of the page title in the tab; unlike Chrome it has a minimum tab size (precisely so you can see what the tab is!) and a scrolling UI, so you can swipe through the list when there are too many to fit in the window and look for the one you want if you don't want to use the text search option.
There are also various extensions that help people manage tabs, but even the built-in setup is quite usable (speaking as someone who has a lot of tabs open and does not use those extensions).
I agree that Chrome's non-scrolling UI, which just shrinks the tabs until nothing is visible in them and then stops showing the later ones at all, so you have to use keyboard shortcuts to even try to get to them, makes using more than about 10 tabs pretty much impossible. But that's just because the UI sucks, not an inherent property of having many tabs open.
Another use case for lots of tabs is using the great Spaces feature in Firefox. You can fire up and maintain multiple groups of tabs and windows per project. It's a great way to switch contexts. The main reason I used Chrome/Chromium was to have a sense of which tab was a runaway process. Hoping Firefox v48 is a step in the right direction.
I have to agree that it's harder to track how much memory Chrome is really using... also, it's hard to tell which process is for which tab or extension.
I haven't been a heavy Firefox user for a while, and I find the opposite when trying to debug in Firefox.
For me the bigger draw to Chrome is that I happen to like the UI more... I no longer see the need for the separate search input, though sometimes the fast-paced changes to Chrome's UI can be cumbersome. That said, I work in OS X, Windows and Linux (Ubuntu Unity) pretty regularly, so it's always different somewhere.
I usually run Firefox for general browsing and Chrome for one-off dedicated apps.
Just click on the button if you notice a slow down and restore the tabs later either selectively or as group.
That said, I use Evernote to clip URLs that will be useful in the future. The tabs that are open are transient ones, required only in the context of current research or a feature implementation, and not needed once that's concluded.
I suppose, I am a pathological reader and researcher who needs to look at all opinions, find all apps that provide a feature and try them all, dig deep into each solution etc etc. Sometimes that leads to analysis paralysis. Not at all a good thing.
As one of those pathological users that often keeps 50+ tabs open (somewhat manageable with the tabgroups addon), I'd hate to see one of the principal Firefox advantages go away.
Without you as a user having to do anything other than simply open up new tabs as is natural in your workflow, you create an explicit structure which shows the relation of various browser tabs to one another. The best thing about this is that you can treat an entire tab tree or subtree as a task, and when you are done, clear it out.
Another element of TST is that it can be placed as a sidebar, which, in this world of Very Wide Screen Displays, uses otherwise extraneous horizontal space on the side of your screen (for textual presentation) and frees up the precious and limited vertical space for more text.
I've tried several times, and largely given up, on explaining to Google's Peter Kasting (one of the core Chrome devs) why this is so useful. Google are apparently adopting the GNOME and systemd view that:
1. If you're a technically advanced user, you're not our target user base.
2. If you're not a technically advanced user, you're not qualified to comment on what's wrong with the product.
Somehow, that never quite seems to work out right.
Firefox's flexibility, particularly with tabs, is why I continue to use it despite some performance and functional advantages to Chrome.
(On Android, Chrome's lack of adblock makes it almost entirely useless. I've salvaged it slightly through putting a hosts + DNSMasq adblock on my router -- DD-WRT. But that's only just barely sufficient. Adverts are a complete pox on the Web now.)
There is something that I've wondered about for a while on this Electrolysis fanciness: currently Firefox is said to be single-process. So why do I see tens of "firefox" threads in htop?
So this means any performance increase will be coming from rewriting old code in a more efficient way, which could be done independently of e10s, right?
I mean, since the new multiple processes will not share address space, there will have to be explicit communication between processes with corresponding overhead and blocking? So a priori one would expect the new code to be slightly slower? And use more memory, since some data needs to be duplicated between processes?
What also comes to mind is that introducing e10s is like taking an OpenMP-parallel code and turning it into an MPI+OpenMP-parallel code.
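The core difference the analogy points at can be sketched in a few lines (a toy illustration, nothing to do with Firefox's actual code): threads share one address space, so a mutation is visible everywhere, while separate processes each work on their own copy and must ship state over an explicit channel.

```python
import multiprocessing as mp
import threading

counter = {"n": 0}

def bump():
    counter["n"] += 1

if __name__ == "__main__":
    # A thread runs in the parent's address space: the mutation is visible.
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print(counter["n"])  # 1

    # A separate process works on its own copy of the data: the child's
    # mutation never reaches the parent. Sharing would need an explicit
    # channel (mp.Pipe / mp.Queue), which costs serialization and
    # context switches -- the overhead e10s has to pay for isolation.
    p = mp.Process(target=bump)
    p.start()
    p.join()
    print(counter["n"])  # still 1
```

So yes, a priori the IPC and duplicated data cost something; the bet is that isolation, parallelism and crash containment are worth it.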
If you are anything like me you have a number of windows open for some project or research thing and you had one extra tab that you left open as the last active tab in the window but it's unrelated to the other 10 tabs in that window. The windows menu doesn't help you because that is the tab that is listed as the title of that window.
It's free and open source for Chrome and Firefox.
If you try it and like it/hate it let me know what you might want put into future versions to keep it good/change your opinion of it to the better. It's still early in its development but what's implemented works well so far. Cheers!
Edit: There is a bug in Firefox for which I have submitted a patch; it will allow the tab titles to show up correctly even when they have been unloaded. For now it shows the URL of the tab until it has been loaded. The patch should land by Firefox 51, hopefully.
I've raised a few issues on github, feel free to ignore them!
Using the Awesomebar and typing "%" and a tab name will find the tab. Navigating to it and clicking (or pressing enter) will open the tab in its existing window.
Docs here: https://support.mozilla.org/en-US/kb/awesome-bar-search-fire...
If you liked the extension I'd love to know what you'd like to see. If you hated it I'd love to know why. Either way thanks for trying it out!
> For each change to Mozilla's inbound repository, our build infrastructure
> generates a build which we test and include in the graphs here.
IIRC, Firefox has (had?) a lot of old code written long before modern concerns were even fathomed; add-ons back in the day had access to the entire process: every tab, all the DOM, and everything shared the same namespace (shady calls from web pages were blocked somehow; I'm not sure if it was true sandboxing or not), so AMO mods would block add-ons (or updates) that didn't adhere to their namespacing policy. I'd bet a lot of that integration is still there, and most of what we're seeing is the result of what is probably a very messy untangling.
Firefox thrived and grew in an environment where the competitors were terrible. They lost the lead to Chrome, and unless Google really drops the ball I don't see how they can get it back by just being better. They'd have to dramatically leapfrog the competition in a very compelling way, and since both MS and Google are investing heavily in their browsers, it isn't clear to me how they could ever do that.
Mobile Firefox allows extensions, crucially ad blockers. I don't think mobile Chrome will allow those anytime soon.
They also seem to have cleaned up the tabbed interface, closer to current Chrome (though I actually preferred having Chrome tabs with the app views). May also give the current Opera another look. All in all, much improved even in just the past 3-4 months or so since I last tried it.
Opera Mobile is another tech that I'm enjoying using more and more.
Instead Moz executives seem set on forging all sorts of weird new business partnerships, and matching Chrome/Safari feature-for-feature. So FF becomes the sad little me-too browser, users everywhere wondering just how long their addons will continue to work...
Both IE/Edge and Chrome (and the current Firefox) are based on really old tech stacks with years of legacy code and design decisions.
Servo, if it pans out, will be the only "real" modern browser, which should provide a boost in performance and experience that it will take Chrome and IE/Edge years to match.
It would also make iterating on the browser much faster, which could slowly erode Chrome and IE/Edge's lead.
If Firefox can capitalize and successfully market their new browser it could mean becoming the lead browser for years.
Or just a few months to fork and improve. I'd really like to see the Chrome and Firefox teams converge over Servo. But maybe Google suffers from too much NIH syndrome to do that at this point.
The MPL is similar to the LGPL in that way, and KHTML, which WebKit was forked from, is licensed under the LGPL, so that should be legally settled at this point, too.
I'm not a lawyer, though, so no guarantees that this is correct.
But Servo being written in Rust isn't the argument that he's making.
Servo is built with a modern architecture, which no current browser has, because they all started out sometime pre-2000, before multi-core processors were really a thing and before the web was really graphical.
What Servo mainly improves is that the rendering of HTML itself is parallelized, and that with WebRender it utilizes graphics cards quite similarly to the way video games do; both result in noticeable performance improvements.
My experience with other product development tells me otherwise, and I trust that the people behind Mozilla who are specifically building a more performant browser in Rust are aware of what does or does not greatly affect performance.
(Of course, Servo is still a long way from production ready).
Maybe I'm wrong but that's just how it appears as someone looking in from the outside and who doesn't use Firefox on a daily basis.
Also just in general, Google can display an advertisement 24/7 on the world's most popular webpage, and bundle their browser on Chrome OS and Android. Microsoft can bundle their browser with the most popular desktop OS. And Apple can bundle their browser with iOS and OSX, which are both still pretty popular in their space.
Mozilla has none of that, and would have to invest a lot of money to get it. They can afford the occasional billboard and organize the occasional PR event, but that's about it. The entire rest of their user gain depends on them being decisively better than the competition, so that people themselves recommend Firefox over other browsers.
They are the only company to put the user's rights, privacy and best interests first and foremost.
For years they were the only ones to carry the open source flag in the browser space. They had the Mozilla Suite when other browsers were closed, they came out with Firefox which was a blast at the time, and now Servo.
They don't play to a corporate agenda, but tend to do «what is right».
You have the core rendering engine, ancillary features of the application, and the shell of the application itself. Within the core rendering engine there are several subset areas: SVG parsing/rendering; parsing/processing of text/XML/HTML/CSS; audio/video. And almost all the APIs the browser offers outside rendering can be implemented by different teams (Web Audio, WebSocket, WebRTC, etc.).
The communication libraries themselves can be broken off too. There is really a lot that can be handled by many developers.
Now, if there were, say, 15+ developers on any one feature, that would probably be too many to be very productive. On the flip side, given the push towards Rust, there's even room for some duplication of effort.
Any browser is a massive, rich attack surface with bugs, but Chrome has been cutting edge in its sandboxing, privilege separation, and overall security.
True, Chrome is and has always been a Google marketing vehicle, where the user is the product. But has Mozilla always been at the forefront of privacy, even? Something as basic as private browsing, let's see:
April 29, 2005 Safari 2.0 Private Browsing
December 11, 2008 Google Chrome 1.0 Incognito
March 19, 2009 Internet Explorer 8 InPrivate Browsing
June 30, 2009 Mozilla Firefox 3.5 Private Browsing
Apple, Google, and Microsoft all beat Mozilla to the punch.
One of Chrome's many small victories was including sensible plugins by default, whereas Firefox's best functionality had to be downloaded separately.
2) On a related note, Firefox left out lots of features because there were well-supported, popular plugins that covered the same thing. Whenever a browser gets built-in ad blocking, should we say that all the other browsers are worse privacy options because they chose to leave those as third-party plugins?
They did for me recently when they changed the behaviour of the backspace key without a valid option to set it back to the way it was, so I went back to Firefox instead of futzing about with half-broken 3rd party extensions.
These might be select regressions (Google Maps, Slack, so I'll skip the Bugzilla references), but every time a user has to open Slack or Google Maps or Google Docs in Chrome, they are that much more likely to switch away from Firefox entirely (ultimately due to sync: bookmarks, history); it just makes so much sense to use a single browser.
2) Developer Tools
I don't know how it is now, but half a year ago, when I tried to switch to the Firefox developer tools (again, for the Nth time), it would take 2-4 seconds for the console to open. Comparatively, the Chrome dev tools opened in under 0.5 seconds. As a result, even when trying to use Firefox for everything else, I still end up in Chromium almost daily.
Using Google Chrome has distinct advantages when syncing with an Android device and other Google services (performance issues aside) - most of the things that I considered to be an advantage in Firefox mobile (the top button with <number-of-tabs> for tab switching) Chrome has actually copied.
Minor point: I'm almost sure there is some somewhat underhanded user-memory choice (it's just too smart and annoying to be a bug), because as someone who is often in foreign countries, my Firefox (where I am signed into Gmail, btw) always tends to give search results in the local language, while Chromium (where I'm not signed into anything) displays English ones.
Overall, I really think Mozilla should focus on getting an advantage on 3): I don't think Google, and definitely not Pocket (which I wish they would unbundle from Firefox), provides a good service for syncing information / personal links / knowledge. There was a browser concept that floated by HN some time ago allowing users to organize their tabs into sessions / subjects; that could be great! (E.g., check a button to sync only my places/people/events window to my phone and leave my work stuff out.)
1. Refreshed my Windows install and got rid of the Lenovo crap (the refresh is automatic; just make sure you have backups of settings, incl. GPG keys etc. Documents were totally fine).
2. Reintroduced a hosts file that redirects a few thousand domains to 0.0.0.0.
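For anyone unfamiliar with the trick: the hosts file maps domain names to addresses before DNS is consulted, so pointing an ad/tracker domain at 0.0.0.0 makes requests to it fail immediately. The domains below are placeholders; published blocklists contain thousands of entries in exactly this format:

```
# /etc/hosts (on Windows: %SystemRoot%\System32\drivers\etc\hosts)
0.0.0.0  ads.example.com
0.0.0.0  tracker.example.net
```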
3. Started using NoScript liberally. IIRC, both the Google front page and its search results would poke my processor every few seconds even when left alone. Now Firefox's CPU usage is more sane. When you have a number of search results open most of the time, this can tax even a modern CPU.
It took me quite a while to discover this, as I don't know a good way to get per-domain processor usage, and I honestly did not believe that Google would let such major performance problems pass QA.
Now I just use the Firefox search box (Ctrl-K) to get autocomplete.
If you type "about:performance" into the URL-bar and hit enter, that should give you at least a rough overview of how individual tabs are eating up resources...
Has this been available for a long time, or is it a recent improvement?
1) Advertising/promotions budget
Microsoft and Google can both easily outspend Mozilla in TV and online advertising as well as pay companies to bundle and promote their software.
2) Network effects of related products
Microsoft and Google are able to leverage their control over their own products (Windows, Google Search) to promote their software. All Mozilla has is a browser.
I use Firefox about 80% of the time. I use Chrome for a few sites I need for work that still use Flash.
I really don't notice this performance issue. I don't use Slack, but do use Google Maps, Google Drive/Docs/Sheets and Gmail and it's all just fine in Firefox from my perspective.
I tend to not keep large numbers of tabs open (just habit more than anything) so that may be part of it. E.g. 10 tabs would be a lot for me.
I also don't sync settings. In fact I have no settings: I have firefox set to basically dump everything (cache, history) upon exit.
It's freaking sad that Firefox is stumbling out multi-process in 2016. This has plainly been the way to go for years. Multithreading is a disaster, it's just not a workable model for browser-scale applications.
But hey, if they could recover Firefox from the ashes of the bloated disaster that Mozilla Suite became, I have hope that the community can catch up eventually. Maybe.
All browsers, Chrome as much as any other, are heavily multithreaded, so this is clearly untrue.
Give it another go, especially Firefox Developer Edition: https://www.mozilla.org/en-US/firefox/developer/
Its tools are fast and amazing, arguably better than Chrome's.
Chrome has this also: https://support.google.com/chrome/answer/173424?hl=en
Suggesting an "underhanded user-memory choice" is a serious claim. It might be worth comparing your browsers' language settings.
> Suggesting an "underhanded user-memory choice" is a serious claim. It might be worth comparing your browsers' language settings.
I'm saying two things: a) Chromium/Chrome prioritizes recognizing the same user; b) maybe they are (accidentally?) very bad at remembering non-Chrome users. As I said, if it's an accident I'd be surprised; try it next time you visit a foreign country.
Anecdatal: I have seen, and paid, large price differences on airline tickets (Google Flights referrer?) with and without /ncr ($500+).
It gives you the choice to use the System proxy, but gives you options to specify your own.
The proxy dialog is (was?) convenient for switching between proxies quickly; the env var, of course, cannot offer that.
While it currently doesn't really add anything from a security perspective, the ability to run rendering in a separate process will let Firefox naturally support a number of operating-system mitigations it otherwise would not have, in addition to adding restrictions on the process itself (sandboxing).
I believe this will allow Firefox to start adopting a more defense-in-depth philosophy, although I admit it's probably still a long way from actually getting there. There's no getting around the fact that security in Firefox has been rather stale, but e10s gives us something to look forward to, since the multi-process architecture allows you to do so much more for defense in depth.
For example, with Firejail (completely unrelated to Firefox) one can do firejail <application> to run a sandboxed instance. Firejail comes bundled with rules for many applications to limit the files each sandboxed container can view. In case of a security breach, that keeps data leaks under control. E.g., an attacker might be able to steal stuff from your Firefox instance, but not from your home directory.
With grsecurity they can become much more chroot-jail-like. Chrome uses chroots, I believe, so it's very beneficial there.
The multi-process trend is sold as "crash-proof" which reads like "can't fix exception handling".
See the problem is always local code execution. Once you have that, it's over. There are real mitigations (W^X, ASLR, stack cookies) and then there seems to be lore ("multi-process", "sandboxing").
Local code execution doesn't mean a thing (except for wasted CPU cycles and/or memory) if the number of syscalls the process can make is limited to the bare minimum.
We've had these for nearly a decade with other software. Have they completely stopped bugs/exploits in that time?
I'm not trying to take away from the usefulness of those mitigations against certain classes of exploits but the point of "lore" such as sandboxing is to promote defense in depth. If there's a buffer overflow which is exploitable then containing that in a restricted sandbox with no permissions to do anything requires more work to break out of.
I'm not an expert, but I wonder if this means you could restrict GPU access to the master process only (for the sake of hardware-accelerated compositing) without needing to expose it to the website-facing process (due to the security vulnerabilities in GPU drivers).
My biggest problem with GPU drivers is that they stick out like a sore thumb on a hardened system. All the protection and isolation in the world won't help you when you have a stock-compiled, PaX-disabled blob loaded into your binary that communicates directly with the kernel.
For this reason and this reason alone, I am forced to basically limit OpenGL access to X.org and mpv.
Indeed. My knowledge of Linux is pretty limited, but I seem to remember that Nvidia fixed something which lets you use PaX/grsecurity protections you otherwise couldn't. This still implies loading a binary blob, but certain kernel protections could still help you, IIRC (DEP?). I could be misremembering; I can't check since grsecurity set their Twitter to protected.
I'm not sure what Chrome does aside from having a separate GPU process and whether or not any sanitizing takes place. They're pretty good with stuff like that so it would surprise me if some amount of protection wasn't offered.
Edit: There are some patches from the PaX folks for the Nvidia drivers which I believe help with PAX_USERCOPY? Although that may just be for getting it working...
 https://grsecurity.net/~paxguy1/nvidia-drivers-367.35-pax.pa... (example)
(Indeed, I have to use those patches otherwise the nvidia kernel module wouldn't compile)
It's just a refresher for me, I did a lot of low-level and assembler in the 8 bit days, so it's nice to see what's current. It's a wonderful counterweight to otherwise doing very high level programming in dynamic languages and learning (more) FP (Scala). It's nice to be able to find plenty of use cases for 32 kByte of RAM on a tiny board (http://www.ti.com/tool/ek-tm4c123gxl).
I think some low-level embedded programming (directly in C on the chip, not on a high-level board that runs a full Linux OS) is ideal to keep me grounded and remind me how wasteful those many abstraction layers actually are. Yes, I know what they do and I appreciate their service - but when I compare what I get with how much more I put in (in gigahertz and gigabytes), I'm not convinced there isn't a lot of waste going on that cannot, and should not, be justified and sold as the "price of progress".
I used a site that rendered a grid of images. Literally, that was its only user-visible function. It took 5s to render with the i5 throttled to 1 GHz; unthrottled, it still took 2.5s.
It is lunacy. It is an artifact of having a terrible language (JS) and a terrible layout system (HTML) mixed up into a modern "standard". Plus naive developers abstracting so far away they have no idea what's going on, and partially don't care.
Wasn't too long ago someone asked (I think unironically) why HN was so "fast". As if such a page shouldn't load basically instantly.
Can you honestly say info browsing (not apps like GDocs) overall is better than it was 10-20 years ago? Ignore the increase in content. Images load faster, sure, but that's offset by all the other junk. Twitter, for instance - it's actually sluggish on a nice ThinkPad! WTF.
What we wanted was a way to describe interactive hyperlinked documents backed by a distributed collection of servers. What we got was a virtual machine running on a virtual network.

I would call the web the third generation of application layer, and it's an even larger step back for some fundamental promises of computing (composition, interchangeability, others) than window managers were (second) from shells (first). Window managers gave us a better way to interact, but we lost things like piping applications together, easy-to-use environment variables, sharing other programs (most applications I install these days just ship with their own copies of everything; I blame Microsoft), etc.

The web gives us cross-platform access-anywhere features, but has terribly degraded any promise of applications working together unless they are owned by the same company. Not at all the shared interactive interlinked document layer we wanted.
And yeah, of course a grid of images is going to be fast, especially if they aren't even scaled. It's literally just a blit. What gets expensive is when you add bilinear filtering, alpha compositing, text, shadows with blur, path filling and/or tessellation...
All of those things are things people now expect from apps, native or otherwise.
Yes, just block ads.
The expectation and dynamics of ad-driven publishing have created a huge proliferation of sites that have no reason to exist. Taboola, NewsMax, Outbrain, and a slew of others, pimping nothing but crap. It's anti-information. You're dumber for having read or even seen it.
I've been maintaining a 60k+ entry blocklist (deduplicated, with entire domains blocked via DNSMasq), which helps some. But even that just chips away gently.
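For anyone curious what the DNSMasq side of that looks like: one line per domain blackholes every host under it. The domains below are made-up placeholders, not real ad hosts:

```
# /etc/dnsmasq.d/blocklist.conf
# Every hostname under these domains resolves to 0.0.0.0:
address=/ads.example.com/0.0.0.0
address=/tracker.example.net/0.0.0.0
```

This blocks at the resolver level, so it covers every browser and app on the network, not just one machine's browser.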
I've got another tab open with a Guardian article, "How Technology Disrupted the Truth". It dives deep into a space I've been pointing at for a while -- that advertising isn't merely bad for having created bad UI/UX and malware distribution networks, but it's actively screwing with media's primary role of communicating true facts.
Until and unless authors and publishers take the heat for dumping crap on people -- I'm increasingly a fan of the block -- that's not going to turn around.
(And yes, figuring out how to finance quality information also matters -- admitting it's a public good, and should probably be financed like one, would be a good first step.)
Ad blocking is necessary, but insufficient.
You can go further and note that all models are wrong, but some are useful. And note research which suggests that perceptual systems evolve to fitness rather than accuracy, a crucial distinction.
But the perception still needs to provide useful predictive or explanatory power over the range of experienced conditions. And if you're deliberately violating that condition, you're going to end up with some less than beneficial behaviors.
Contemporary politics in various parts of the world, and interactions with media, demonstrates this well.
> You should compare 1999 Yahoo Mail to Gmail or Fastmail of today.
I don't think your examples are relevant, because they are well in the realm of "apps" and not the "info browsing" OP talks about.
I'm also implying that the attitude of JS and HTML - ignore errors - seems to carry over into users of those things. The fact that wrappers like jQuery are needed to accomplish basic tasks that should have been part of the standard makes it worse. Look at all the "shadow dom" and other hacks working around terrible performance due to HTML's model.
Even poorly written desktop apps don't seem to be as bad as common web dev. I'm not sure that most desktop devs are better trained or care more. It's just harder to screw up.
Most people consider Python and Ruby "sane languages", and JS has been running circles around them for years and years. Even the most trivial JS JIT beats CPython and MRI/YARV hands down.
> I'm also implying that the attitude of JS and HTML - ignore errors - seems to carry over into users of those things.
JS doesn't ignore errors. It throws exceptions for all kinds of illegal operations.
And error recovery is not a reason for CSS's performance problems. Of all the wrong reasons I've heard suggested for CSS's performance issues, that is one of the weirdest.
> Look at all the "shadow dom" and other hacks around terrible performance due to HTML's model.
Shadow DOM isn't really about performance. Are you thinking of virtual DOMs as implemented by React, etc.?
By error handling I mean scripts don't break the page. Same as HTML - the browser tries to ignore source errors. This sloppiness spills over, I think, into the attitudes of developers. But maybe this is an invalid personal projection.
I suppose I mean virtual DOM. In other GUI platforms, can't you usually just issue a command to temporarily stop painting, then resume after you've made the changes that need recalculating?
But perhaps web dev isn't special; sure there's plenty of terrible server software written and maybe I'm poorly extrapolating from too little experience.
The end result is that, day-to-day, I can count on basically every desktop app to be fairly responsive. Whereas loading any given content (low interaction) website has a high chance of feeling sluggish to operate. When the actual functionality of said websites is fairly unremarkable, it makes me think perhaps the platform has some responsibility for it, somehow.
Edit: I'll admit I'm armchair engineering. I haven't done serious web dev in over 10 years. Just extrapolating based on some observations and how the JS community seems to be, overall. I might be way off here.
I don't think there are any real performance improvements in 48 other than the improvement you'll get from restarting a long running process.
Some addons use compatibility shims with multi-process and this can significantly harm performance in some cases (even potentially making it worse than e10s disabled).
This site is useful regarding the addon compatibility: http://arewee10syet.com/
Unfortunately I couldn't use my addons with it, and together with the constant updates it was a no-go for me.
Servo is a fantastic research project but it will be a while before it is "production ready".
A version of Gecko that contains only the things necessary to browse 99% of the Web exists. It's called Gecko.
Sites use the Web features.
It'd certainly be very interesting if they could maintain the latest browser fixes (even with less extension compatibility), but I doubt the team has the bandwidth to keep that up.
Just installed it. It doesn't support High DPI displays on Mac OS X. Unfortunately, that's a bit of a blocker for modern Mac laptops (everything looks fuzzy).
Apply those to SeaMonkey.
As a personal preference, I like my browser (and most of my applications) to be a single process.
Honest question: Why is this your preference? What difference does it make whether an app is using 1 process or 10 behind the scenes?
There are tradeoffs. If all of the evergreen browsers are headed this route, so be it, I just want to know.
Sandboxing is easier with more processes, at least if the processes are split up to make this easier. One of the reasons for e10s is to allow for sandboxing, so it is now easier to apply sandboxing rules to Firefox.
Anecdotally, I've found there is overhead for each tab process (of which I may have a ton of very tiny ones in my tree tabs).
This is understood and one reason why Mozilla is conservative here.
The plan is to make it the default in Firefox 51 on desktop, assuming the staged roll out is successful.
Also, the current approach will not considerably increase resource usage for heavy tab users as FF will only run one content process shared across all tabs.
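For reference, the number of content processes is governed by a Firefox pref; the value shown below corresponds to the single shared content process described above (pref name as of the e10s rollout):

```
// prefs.js / about:config
user_pref("dom.ipc.processCount", 1);
```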
A few things would make my Firefox life much easier though:
- The ability to launch multiple Firefox instances with the same profile, but as completely separate processes, so when one Firefox crashes, the others are left alone. At the moment, no matter how many Firefox windows you have open, they are all children of the first Firefox you started.
- An equivalent of Chrome's --app=[URL] startup parameter (and the accompanying no-URL-bar, custom icon, and unique window location/size settings) - this is a wonderful feature for web apps, and effectively transforms them into (almost) desktop apps in appearance. If Firefox had that, I'd be over the moon.
- A way of allocating a session when you start it for 'X project' and then be able to save off all those tabs in one action to a folder or tab 'startup list'. I'd like this to be native, and not an add-on I'd have to keep track of or suffer incompatibilities when the maintainer has lost interest in it.
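For comparison, roughly what the Chrome side of the second wish looks like today, plus the closest Firefox workaround for the first (profile-separated, not same-profile; the URL and profile name are placeholders):

```
# Chrome: open a site as a chromeless pseudo-desktop app
chrome --app=https://app.example.com

# Firefox: a second, fully independent process -- but only with a
# *different* profile; same-profile multi-instance isn't supported
firefox -no-remote -P work
```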
Finally, now that Windows and Linux ship with good default browsers, and Firefox (in the main) is a manual install, I can't see it recouping its percentage share. It feels like the slow death of the original Opera, all over again.
The existing "Bookmark all tabs" option that creates a bookmark folder from all the tabs in a window and the existing option to open all the tabs in a bookmark folder sort of address this use case, unless 'X project' needs multiple windows...
> Our long-term plan is to:
> - Incrementally replace components in Gecko with ones written in Rust and shared with Servo.
> - Determine product opportunities for a standalone Servo browser or embeddable library (e.g., for Android).
Check the "Oxidation" page on the wiki: https://wiki.mozilla.org/Oxidation
Traditionally this was all handled in one process. The tabs themselves are like fancier iframes.
By making Firefox multi-process, this hack is no longer needed.
Such heavy interactive pages will no longer negatively impact the browser, especially on multi-core systems.
There was a lot of discussion about whether to try to move the UI to a separate thread or separate process, and the decision was that separate process has a _much_ lower risk of bugs due to racing memory accesses and whatnot. Basically, it removes a bunch of ways that you could accidentally shoot yourself in the foot while trying to retrofit threading onto a large existing C++ codebase....
Using OldBar on FF48 on a Mac:
type text into address bar - Key down to select first entry
Hit enter - nothing happens
When you contrast it with Chrome, which uses basically every single operating-system mitigation in addition to its sandboxing, the difference really is striking.
I'm looking forward to the future of e10s Firefox, since it now enables them to move forward with more advanced security mitigations and better defense in depth. I believe Mozilla released a plan for the future of these things which showed what they want to do step by step (e.g. plugins first, etc.).
Flash and Media Plugins (video decoders, EME/DRM) have already been sandboxed for several releases. There is a content sandbox in the development versions of Firefox. Of course it won't ship before e10s is considered stable, because that's a hard prerequisite for it. The amount of protection also varies by operating system (Windows and Mac OS X are pretty OK, Linux is still pretty crappy) but obviously that is improving week by week.
Firefox provides its own sandboxing now? Flash used to run in a subset of the Chrome sandbox, but that was restricted to the 32-bit version of the browser. As far as I was aware, Firefox just ran it in the plugin-container process for crash protection and nothing else (if protected mode wasn't being used, or if you were on 64-bit Firefox). Does Firefox now make use of OS mitigations and integrity levels for sandboxing the plugin process?
Here is the Firefox bug tracking the 64-bit sandbox work:
People's hate of Mozilla is just as misguided as their hate of Microsoft, and it really shows in your original statement that they can do no right.
Instead of a congratulatory "welcome to the club (of one)", it's "why weren't you a member all along?"
I believe most Chromium based browsers could also fall under that category although I admit that's just being pedantic.
Furthermore, at least Edge and, to a lesser extent, IE11 do have some sandboxing whose purpose is to enhance security. Their (renderer) processes run at a low integrity level and within an AppContainer. On IE11 this is enabled through Enhanced Protected Mode with 64-bit processes, which allows it to use AppContainers even in the desktop browser. Edge always uses AppContainers AFAIK.
I'm not sure it's sandboxed to the same extent as Chrome, but it is a level of defense in depth. Edge also uses some security mitigations that Chrome does not, such as CFG (Control Flow Guard), although that's not dependent on a sandbox, so CFG, barring performance issues, could be used in any browser, sandboxed or not.
For IPC they use something custom (part of the WebKit repo) called "CoreIPC".
That hasn't been true for at least 5 years. Safari has used a sandboxed, multi-process architecture for Web content since version 5.1.
I was referring to the internal sandboxing Safari does to isolate plugins from everything else.
Web content and plug-in processes are XPC services, yes.
The multiprocess architecture in IE8 isn't an isolation layer as the browser can and does frequently render multiple tabs under the same process. It's not uncommon to hear reports of 40 processes for 2 tabs or 40 tabs for 2 processes.
Chrome, for example, doesn't even let render processes touch OpenGL. All the rendering and layout logic for the page is computed in a separate process, and draw commands are sent over an IPC pipe to a controlled renderer, which actually issues draw commands to the GPU. (If my memory is still correct, anyway.) Given the complex layout and rendering engines in browsers, this is good to have - they're almost purely computational logic, so they don't need many capabilities like "Open File" or "Spawn Process". It's sort of a forced realization of real capability-based security, like Eros or Capsicum.
Getting outright code execution (calc.exe) is only useful once you also have a way to escape the sandbox after you've got code execution in the process you exploited. And on (all?) OSes, this is enforced pretty much by the kernel and many other things. So you need a kernel exploit, with a viable triggering mechanism from within the sandbox, on top of the browser exploit, if you actually want to break out further.
In contrast, in Firefox, etc, once you've exploited the singular process rendering your page, you have full access to the whole system, at the privilege level of the application. There are no restrictions on what your payload can do, so spawning calc.exe is trivial. This is also why multi-process is a necessary, but not sufficient, part of a sandboxed design. Firefox still has a huge, huge amount of ground to cover to catch up to Chrome, even once it's gone full multi-process.
That said, none of these attacks are impossible, even with Chrome. They only mitigate/ban certain exploit mechanisms as a consequence of design, making things much harder, but fundamentally you can still get by. With a few infoleaks and one or two good bugs, you can take the cake. And, purely by the fact these projects are so large, those things exist. But Chrome has a much higher barrier to full compromise, I'd say, and it had the advantage of being designed that way from the start.
The next step is to do things like enforce very fine-grained control flow integrity over the whole browser, which will help stop code-reuse attacks (e.g. ROP/JOP), thus killing a whole class of exploitation mechanisms outright. grsecurity's RAP work has already been tested on the whole of the Chromium code base, and has excellent performance in general. Hopefully in time something similar can come to a wider audience.
Except on Android, where I believe it does in fact let them do that.
The hard part that took this long has been updating the browser UI to not directly poke at the web content (which is now in a different process) and not breaking all the extensions which like to do that sort of thing too badly.
How about airgap every website you visit by buying a new computer?
I hope there's an invisible ;) in there somewhere.
My advice used to be "run it as its own user"
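That advice is easy to sketch in a couple of commands (the account name is arbitrary, and on X11 you'd also need to grant display access, e.g. via xhost):

```
# Create a dedicated unprivileged account once...
sudo useradd --create-home browserjail
# ...then run the browser under it, isolated from your own files
sudo -u browserjail -H firefox
```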