Yeah, I also noticed that, save for pages with tons of images, Firefox has actually been better than Chrome in that respect for the last few releases. Now with this fix, I'm seriously thinking of jumping back. Then I'll finally have my sweet DownThemAll again. Many kudos to the Firefox devs; performance and memory usage are very important, and it seems they realized that before it was too late. Even with my 16GB I'd really like to run an extra VM for development as opposed to letting my browser gobble it all up.
Or even without thrashing. When one tab asks for a username/password with a pop-up (the "Authentication Required" one) and you have it saved in some email opened in another tab, you can't switch to the email tab until you kill the login one.
I call it "god window" (god tab in this case) pattern. Very typical for most old and some new Windows desktop apps.
As I understand it, windows are responsible for drawing their own window decorations on Windows and OS X, but on X they're drawn by the window manager. So it's harder to mess with the title bar because you're starting from nothing and don't even know which of a dozen decoration engines is actually being used. Upside is that decorations are remarkably consistent under X.
Wayland matches the Windows/OS X behavior (probably because it's easier to just accelerate one big rectangular canvas the process draws on), so who knows what will happen as that catches on.
If you can't do it through extensions, you may be able to remove window decorations using the window manager. The configuration is dependent on the specific window manager that you are using, but it should be possible.
Sorry I took a while, but here:
App Button Remove
the RSS reader in the top-right is Digest
The rest is just moving stuff around/removing stuff with the built-in customization, reachable by right-clicking blank space at the top. An even more vertical-space-saving configuration I use is putting the URL bar up with the tabs; in that case I remove the extra stuff from it, like that star, with another extension whose name I forget, but it's easy to find.
Installing extensions may require a restart, but that doesn't seem like a big issue: you only install any given extension once. If you want to install a bunch at the same time, you only need to restart once.
Which extension are you using? I use Mate and found a Gnome extension that was known to be unreliable and indeed did not work...
I agree the restart is a minor issue, as is the menu bar showing by default, as is the bookmarks bar being always/never there (rather than the more sensible Chrome behavior), as is the interruption of my workflow when Firefox decides to check for updates when I start it, as is the relative laggy UI, ... But these issues add up to a worse experience. Worse by enough that despite wanting to avoid Chrome, I came back to it :(
That's actually Hacker News being terrible. Their font declaration is just "font-family: Verdana;" so if you don't have Verdana installed (and it is just about never installed by default on Linux), the browser falls back to the one specified in the browser settings (which tends to be a serif font).
Hacker News could've made their font declaration have a fallback (like "font-family: Verdana, sans-serif;") and things would look way better by default.
As for me, I solved it by editing userContent.css and adding a font-declaration for Hacker News myself. (or you could use something like the Stylish addon)
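For reference, a per-site override in userContent.css looks something like this. This is only a sketch: the selectors and the fallback fonts are examples, not what the commenter actually used.

```css
/* In <profile>/chrome/userContent.css.
   @-moz-document scopes the rule to Hacker News only. */
@-moz-document domain(news.ycombinator.com) {
  body, td, .comment {
    font-family: Verdana, "DejaVu Sans", sans-serif !important;
  }
}
```

The `!important` is needed because the site's own "font-family: Verdana;" declaration would otherwise win over user styles.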
All in all, I'd say it's the fault of website creators.
I too wish it had split processes (although I haven't had a freeze in months), but other than that it beats Chrome in every single way. I am not affected by your UI problems, and I install extensions every other week so I'm not bothered.
So, yeah, I guess you didn't really try. Perhaps you're just attached to Chrome.
How do you get rid of Firefox's window decorations in Linux? All I could find was for Windows, except one gnome-based extension which is known not to work reliably.
Also, checking for updates when I start my browser is the worst possible time UI-wise. I start my browser when the site I want to visit is at my fingertips, and my working memory is fading as I am waiting for my computer. How do you customize this away?
Sure, the bookmarks bar being always-visible or never-visible is a minor defect, but it is still worse than Chrome's show-in-new-tabs behavior. Showing menu bars and window decorations by default shows bad judgement, but indeed that wasn't the reason I couldn't stay with Firefox. It felt slower, had annoying wasteful window decorations, started with annoying extension version checks, and let bad web apps interfere with all my tabs.
Do you have actual responses for my actual points besides "you didn't really try"?
Maybe multi-process is the reason... I've recently started using Chrome more on my sub-notebook because Firefox gets really choppy even though it has lower memory usage. Perhaps the explanation is that Chrome can be scheduled across all four of my 1GHz CPU cores, whereas Firefox runs on just one?
I switched to Firefox from Chrome a couple of months ago, and liked it so far. The UI is fine, and the speed is also fine. But the MacOS+Firefox+Gmail bug (https://bugzilla.mozilla.org/show_bug.cgi?id=341886) drives me nuts. Can't jump to the start or the end of line while editing the message (OK, it is possible with emacs shortcuts instead of standard Mac shortcuts). Can't select text from the current position to start/end of line with keyboard (emacs shortcuts don't help). I haven't appreciated how much I used these shortcuts for email editing before switching to Firefox. And the bug is open since 2006.
 http://i.imgur.com/eBiIZHc.png (Also, no close button on tabs and they shrink down to the size of pinned tabs. All other browsers do that shrinking stuff by default. There used to be a property for that, but one developer decided to remove it. Now you need a custom userChrome.css and other voodoo.)
Sounds close, but 50% of that space is actually unused. There is a lot less room for tabs.
I'm not in KDE, I'm in Mate which has no such feature by default.
Thanks for the second tip, if I try Firefox again I'll do that. Is it possible to get auto-updates at browser shutdown or in the background, when it doesn't interrupt my flow? I would probably forget to ever update them if it's completely disabled...
Not when most of that 16GB is in use. At the moment, chrome is using up 3.4GB. Yea, I have a lot of tabs open. Yes, I should work on that. Yes, I'd also like being able to open that many tabs and have my browser perhaps release some of that memory and just reclaim it when I open the tab for reading again.
Docker is real nice, although I might go for pure LXC the next time I build something. But I'm developing on OSX (as many of us, I believe), so _not_ running a VM is not really possible here. I deploy on Linux, and like to have my environment very consistent, hence the VM. Granted, with Docker/LXC I could probably get by with just one VM and a lot of little apps running side-by-side, but that's not how my current setup and projects work, yet.
At any rate, less memory used by browser is more memory for me, to use any way I want.
I already love the firefox behaviour that when it restores your tabs, it doesn't actually load them until you click on them. Awesome startup speed improvement.
The joke being that Docker requires LXC and aufs, which means you are stuck running Ubuntu for now. Fedora is soon going to announce Docker support when 0.7 is released.
For now, if you want to run Docker on your Mac, the best bet is to use Vagrant + VirtualBox (or Fusion if you have the provider). This means you have to run a VM to run Docker, which defeats the purpose of the original suggestion: running Docker so you don't have to run another VM instance.
This has been a myth floating around since the early days of Chrome. I am and have for a long time been a big fan of Chrome, but it has never been light weight. It's just fast so people think of it that way. But you've always been able to check on the memory usage of tabs and see it just eating up your memory.
That seems to be the desirable outcome, if the trade-off for perceived speed is a larger memory footprint. Users directly feel the speed, but are less likely to be affected by a larger memory footprint.
Just closing the tab isn't enough to release that memory though.
Let's say you have 200 tabs open, check your computer's memory usage, close all of the tabs but one, and see how much your memory usage drops. Then close that last tab (or fully quit the program on a Mac), and see how much memory you get back.
I typically get about 1/4-1/2 of my total RAM back after closing the program entirely.
The half-blank pages you're seeing are a feature that was recently enabled. This is a case of a latency improvement that visually appears to be worse.
When rendering can't keep up with your rate of scrolling, Chrome will display the part that has been rendered so far and display a checkerboard pattern for the not yet rendered part. Previously, Chrome would just flip the display buffers less often if rendering can't keep up.
Yep. In fact, I've noticed performance appearing to get worse and worse in both my Canary browser and regular Chrome. Canary always seems to perform quite a bit worse than regular Chrome. Not sure if that's just due to bugs because of the unstable build, or further changes they're making that is continuously hurting performance.
Like many of the other Firefox memory changes, this one leaves me feeling a little ambivalent. On the one hand, obviously keeping memory usage down is good. On the other hand, waiting to decompress/decode images until they are visible to the user means that there are obviously going to be performance trade-offs (and also, obviously, design/engineering trade-offs).
Speaking just from my personal usage: I don't think there has ever been a time where delaying the image decompression on a page would ever have had any utility for me (at least from a memory standpoint). On the other hand, there have been plenty of times when eagerly performing image decompression would make my browsing a bit smoother.
All in all, it seems (from a not-particularly-in-touch outsider perspective) as if Mozilla went from being totally dismissive of memory complaints to being hyper-responsive to memory complaints with no (publicly stated) rationale behind the change. It would be interesting to see the reasoning behind this (if only from an intellectual standpoint), particularly given that in many cases I tend to feel that it would have been better to simply be a bit more diplomatic when rejecting memory-based bug reports.
...it seems ... as if Mozilla went from being totally dismissive of memory complaints to being hyper-responsive to memory complaints with no (publicly stated) rationale behind the change.
I don't know what their internal rationale is.
"Memory is cheap," we like to say, but it's really only true if you're not wasting it like crazy all the time. Ten years ago you could surf the web with 512MB of RAM; now we're getting to the point where 4GB isn't considered enough. If 4GB now costs the same as 512MB then, then in the sense of "dollars worth of memory required to perform function X acceptably," memory hasn't actually gotten any cheaper.
If you can get performance that used to require 4GB down into 1GB, you've upgraded the performance of a bunch of older and low-budget devices from "barely usable" to "fine."
Take the big-picture view: updates like this can actually result in a slower flow of "obsolete" computers and devices into the landfills. This is one of those bottlenecks where a little good engineering can have really disproportionate positive aggregate effects.
This sort of thinking has been one of the selling points of desktop Linux for a long time. I wish Microsoft thought more like this.
Right. Also, no matter how cheap memory is, you run into limits like maximum RAM capacity. I'm there now... I have 8GB of RAM in this laptop, and that's all it can hold. I can't spend more money to put more RAM in here even if I want to, as it just won't take it. I'd love to go up to 16GB, but that won't happen until I buy a whole new machine.
Needless to say, I'm very happy about this Firefox news. As somebody who routinely runs a lot of apps (and more than a few memory-hungry ones), I want every app to be as freaking memory-frugal as it possibly can be, and to be a good citizen when RAM has to be shared between apps.
Playing the whole "Unused RAM is wasted, take up as much RAM as you can" mindset is fine if you only run one app at a time, but, really, who the f%!# actually does that???
If image decoding is asynchronous (don't know - guessing it is already?) and you decode them when they get near to the viewport and not just when they become visible, then it should always be decoded by the time you scroll to it and yet never jank the page. Scrolling a long way really fast probably means there's a small delay while it decodes images, but surely that's worth it to save gigabytes of memory?
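The "decode when near the viewport, not just when visible" idea can be sketched as a pure predicate over page coordinates. This is only an illustration of the logic being proposed, not how any browser actually implements it; all names and numbers here are made up.

```javascript
// Decide whether an image should be decoded, given its position on the
// page and the current scroll state. `margin` is how far beyond the
// viewport (in pixels) we are willing to decode speculatively, so the
// image is ready before the user scrolls to it.
function shouldDecode(imageTop, imageHeight, scrollTop, viewportHeight, margin) {
  const decodeTop = scrollTop - margin;                     // start of decode window
  const decodeBottom = scrollTop + viewportHeight + margin; // end of decode window
  const imageBottom = imageTop + imageHeight;
  // Decode if the image overlaps the expanded window at all.
  return imageBottom > decodeTop && imageTop < decodeBottom;
}

// Viewport is 800px tall, scrolled to 1000px, with a 400px decode margin.
console.log(shouldDecode(1500, 300, 1000, 800, 400)); // inside viewport -> true
console.log(shouldDecode(5000, 300, 1000, 800, 400)); // far below -> false
console.log(shouldDecode(2100, 300, 1000, 800, 400)); // just below, within margin -> true
```

Images that fail the predicate keep only their compressed data; the decoded buffer is discarded, which is where the memory savings come from.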
It's a huge advance for those of us who usually have 50-60 tabs open for weeks at a time, where hitting a really heavy image page would tip Firefox over the top and crash it.
I switched from Chrome to Firefox a couple of months ago for its ability to handle more tabs without slowing/crashing, and for its better performance with image-heavy pages. So, for me, it's not just "who cares?" it's a killer feature.
Why should one use bookmarks rather than many tabs? I have many concurrent projects going, and things I refer to regularly, so I just leave them grouped in my tree of tabs on the sidebar. Much more efficient than digging through nested bookmark menus, or trying to make sure the bookmarks are properly classified.
Basically, my tabs are bookmarks, except tabs are integrated into the web-browser better.
If they would make the tree-style tabs very aggressively swap to disk, or even discard state entirely and seamlessly become a bookmark that would reload when clicked (perhaps this behavior could work only for white-listed domains, like wikipedia.org), that would be perfect.
The main usability problem with bookmarks over tabs AFAIC is that the bookmark UI does not encourage pruning bookmarks, so most become stale over time.
Another idea I've been kicking around for a while is "tab decay". Have tabs start to decay and close themselves after not being viewed for a configurable amount of time. 1 month would probably do it for me, it would save me a lot of cleanup time.
So store the state on disk if the user hasn't been to the page for a while. If I last visited a tab three days ago, I don't mind a little disk delay when I get back to it, in exchange for not having disk churn everywhere when I'm trying to get work done.
> I'm not sure _save_ is the right word here...using available memory has no particular cost to me over not using it.
The browser not using available memory means that other processes on the system have the memory available to use. For those of us who may have the browser running while something else is actively running on the computer, that's potentially pretty important.
You would be right if Firefox's memory usage was opportunistic, like the OS disk cache. Problem is, Firefox (or any browser for that matter) will tend to bloat in memory as it sees fit, which for any device with 4GB or less of RAM may mean swapping to disk, which is still the same killer as it always has been.
4GB seems to be the threshold right now. Windows consumes maybe 0.5-1GB, budget 0.5-1GB for a handful of other programs, bloat Firefox up to 1-1.5GB, and suddenly you're swapping. (Naturally the OS does not wait until all 4GB of RAM is consumed to start swapping; it seems to start around 3GB.)
Image decompression is pretty easy, CPU-wise. Well over a decade ago, computers were happily rendering JPEGs on-the-fly without storing a decompressed version (RISC OS apps did this). Your desktop/laptop/phone has plenty of ability to do this.
https://wiki.mozilla.org/Performance/MemShrink has some reasons. One reason is explicitly "Fewer people will complain that Firefox is a memory hog and that Mozilla ignores memory usage". Even if complaints are technically nonsense, addressing complaints is still important.
We've been using stuff like Infinity.js (http://airbnb.github.io/infinity/) for quite a while now, and its basic functionality is to improve long scrolling panes by dynamically adding and removing elements (including images) in real time as you scroll, to keep the number of elements low and the page responsive. Sites that use this technique usually feel just as responsive as if the site really did have a page with 10,000 items on it, instead of a dynamic viewport that only shows 100 items at a time.
But in the meantime browser workarounds like this one from Firefox will help avoid the huge memory usage in sites that don't use smart techniques like Infinity.js
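The core of that windowing technique is just computing which item indices should exist in the DOM at a given scroll position. Here is a minimal sketch assuming fixed-height items (Infinity.js itself handles variable heights; the function name and parameters are invented for illustration):

```javascript
// Given the scroll position, compute the inclusive range of list items
// that should actually be in the DOM. Everything outside [first, last]
// is detached, keeping the element count (and image memory) low.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, buffer) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - buffer);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + buffer
  );
  return { first, last };
}

// 10,000 items, 100px each, 600px viewport, scrolled down 5000px,
// with a 5-item buffer on each side.
console.log(visibleRange(5000, 600, 100, 10000, 5)); // { first: 45, last: 61 }
```

So out of 10,000 items only 17 are ever in the DOM at once; a scroll handler re-runs the calculation and attaches/detaches the difference.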
I don't know that I'd characterize this as a workaround. Infinity.js is definitely a good approach, but you should expect good baseline performance from the browser even without more complicated techniques.
The reason why I call the Firefox fix a workaround is because the proper way to solve the problem of 2 GB of image data on a page is by using a UITableView style wrapper.
But HTML 5 doesn't have such an element for us to make use of. The proper fix would be to add such an element to HTML itself, but obviously that's something that takes years to do and to get it adopted across all browsers.
So in the meantime tricks like what Firefox is doing are nice to help prevent inexperienced web developers who create abominably inefficient webpages from consuming too much system resources when an unfortunate victim downloads the page.
Honestly, this is pretty similar in effect to a UITableView-style wrapper, except that it's handled automatically by the platform. Sophisticated JS-only webapps are perhaps a different case, but a site that consists primarily of HTML and CSS should work just fine across a variety of different devices with different performance characteristics and memory capacities, automatically. This is one of the advantages of a declarative model like HTML/CSS: it gives the implementation a lot of freedom to apply optimizations and adapt to the limitations of the platform it's running on.
I guess what I'm trying to say is that this isn't a workaround - it's one of the primary reasons HTML and CSS are architected in the way that they are. This has nothing to do with helping inexperienced web developers and everything to do with preserving the declarative nature of basic web content.
Have you tried the new version? You should try it before you dismiss it due to theoretical concerns. As I wrote in the blog post, the time before the page displayed dropped from 5 seconds to 1 second. And scrolling performance is unaffected AFAIK; if you have evidence otherwise, Timothy and I would love to hear about it.
Firstly... thanks! I love firefox, and appreciate any memory savings.
Did you do any tests for animation performance, and test that preloaders are not affected? Are these issues addressed?
Hardware-accelerated decoding (like IE and image viewers use) would help. Much modern hardware can do the whole decode-resize step in dedicated hardware.
For software-only decoding, an improvement is resizing in JPEG space to the exact size required. It is much faster than decoding the entire JPEG (e.g. a 5-megapixel, 2MB JPEG into 20MB of uncompressed image data) and then only displaying it at 320x200 pixels (1MB at the 16 bits needed for this YUV data). You can resize the smaller JPEG-space data, and then never have to allocate that 20MB (perhaps never going back to the OS).
As well, some images compress very nicely with hardware texture compression (but many look awful). There are heuristics that can be used to decide the best type of texture to use based on the image data.
Why turn 6- to 16-bit-per-pixel JPEG data into 32-bit-per-pixel data? It only takes more memory and is slower to draw. Surely JPEG usage is common enough to special-case its handling?
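The memory figures in the comment above are easy to check: decoded size depends only on pixel dimensions and bytes per pixel, never on the compressed file size. A quick sketch of the arithmetic (the dimensions are an assumed 5-megapixel sensor size):

```javascript
// Decoded (in-memory) size of an image in bytes:
// one fixed-size entry per pixel, regardless of how small the file was.
function decodedBytes(width, height, bytesPerPixel) {
  return width * height * bytesPerPixel;
}

const MB = 1024 * 1024;
// A "5 megapixel" JPEG (2592x1944) decoded to 32-bit RGBA:
console.log((decodedBytes(2592, 1944, 4) / MB).toFixed(1)); // "19.2" MB
// The same picture decoded only at thumbnail size (320x200, 16-bit):
console.log((decodedBytes(320, 200, 2) / MB).toFixed(2)); // "0.12" MB
```

That ~160x difference is why decoding at the displayed size (rather than the full size) matters so much for image-heavy pages.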
This was a follow-up to an earlier change which also delayed image decompressing. In the description of that change the author wrote: "Andrew McCreight wrote a comment that I can’t find now, but basically he said that on one of his machines, that page was extremely slow and painful to scroll through before the change, and after the change it was smooth and pleasant."
If you have any examples where the linked changes causes performance degradation, you should try reporting it to the developers.
I have been running aurora for a while so I already have this fix. I just tested it and there was no noticeable performance impact for the memory saved.
Chrome, on the other hand, used all my memory and caused me to start swapping. Any website with lots of images loaded will be impacted by this (Facebook, G+, Tumblr, ...), so I think it's a rather important change.
With the capability built, if there is a performance trade-off (I didn't look through the bugs, but I imagine it was already thought about), then it's now possible to do "hey, you have a load of spare memory" pre-decode-all-the-things type optimisations.
> All in all, it seems (from a not-particularly-in-touch outsider perspective) as if Mozilla went from being totally dismissive of memory complaints to being hyper-responsive to memory complaints with no (publicly stated) rationale behind the change.
I suspect that being a company that wants to have a unified code base for their browser and that is now newly pushing a browser-centric OS for smartphones may explain this, at least in part.
In fact, it's fortunate that MemShrink had been up and running for a while because it meant we had already fixed a lot of leaks and inefficiencies, and we had good tools for identifying memory problems in Firefox OS.
Everyone's crapping on me for the choice of graph. I originally tried a 2d plot but the lines obscured each other.
More importantly, it wasn't intended to be a rigorous data representation because the results vary significantly between runs. I was just trying to communicate the results in an approximate fashion; the improvements are dramatic enough that approximate is good enough, IMO.
And in case anyone is thinking about suggesting errors bars: please don't.
Technically it's three trials of 2D data, with the axes being Time and Memory Usage. The width of the lines given by the version axis is for visibility, not to encode data.
A 2D line graph with 3 colors for the different versions would be an appropriate visualization. This one suffers spatial distortion and is only useful in confirming that yes, they use less memory in step 2 now.
> The width of the lines given by the version axis is for visibility, not to encode data.
Bar graphs do exactly the same thing in a 2D graph. Would you suggest that all bar graphs be replaced by a one pixel wide stacked bar graph?
Of course not, because even though the samples in a bar graph are discrete, they convey order. It's the same here. The width of the lines is only for visibility, but the Z position of the lines conveys version order. A 2D line graph would only convey that by labels. And if labels were as useful as visual position, we could just replace all graphs with lists of numbers.
I personally found this graph trivial to read and comprehend. The spatial distortion is canceled out by the fact that the differences we're looking at are very large, and the steps axis is spaced out sufficiently to line up the points visually.
My only complaint is that it shouldn't be a line graph, as the steps are not continuous.
I do agree that a style with discrete steps would also be appropriate.
To the other points, no, not all bar graphs do the same thing. Bar graph bars are often the width they are due to data binning, where the width has significance.
The graph was trivial, but made less clear by the 3D isometric projection. The spatial distortion is reduced, but not canceled out, by the spacing.
3D pie charts share the same failure. Yes, as long as the tilt is slight, they are still mostly readable. But it will make "closer" wedges appear larger and "farther" wedges appear smaller. It's visual dishonesty and unnecessary.
Just because one can finagle understanding out of a visualization does not make it a good visualization.
Firefox will never ask you to sign in "for the best browsing experience" like Chrome does. That's when I switched back. You can tell Google is an ad company and users are secondary by their default choice of allowing third-party cookies. I'd rather not have an ad company developing my browser. The incentives are all wrong.
I never saw a reason to switch to Chrome to begin with. Performance was acceptable for me, while other benefits always made Firefox more useful (privacy with encrypted Sync, better add-ons and etc.). And some time ago, Firefox clearly became a leader in performance as well.
All this "jumping to Chrome" was simply a hype thing IMHO, and I generally don't care about hype.
For me, it is time. I switched back about a year ago, when Firefox started to not be painful any more, but still was a step back from Chrome. Some tabs would still cause stuttering in the whole app and sometimes scrolling was less than smooth.
These days however, I have not noticed any of these things for months in Firefox. Really I can't recommend it enough!
I regularly compared browsing performance. I looked at how fast websites were loading and how fluidly they scrolled. Initially, I switched from FF to Chrome because it was so much faster. But then FF improved, while Chrome got worse and worse. So at some point, I switched back.
I never accrued much state in any browser. A handful of bookmarks is all I keep. All the rest is in several services that are independent of the browser. So I don't know about syncing, really. That said, I have since learned to love FF sync, which not only syncs bookmarks and plugins, but also the history and passwords. I even use it with Firefox Mobile on my Android.
> The "many tabs" situation where Firefox performs better than Chrome is more like 50+ or 100+ tabs than 20.
For me it's much sooner... on my work machine with limited memory, firefox is almost always noticeably better after 3-4 tabs, or even fewer when they're memory-hungry sites like gmail. With only 1GB on that machine, I tend to be hyper-sensitive to memory usage, and unfortunately chrome tends to basically use up all memory on the system once I reach 8-9 tabs. FF gets a fair bit further.
The upside of Chrome on that machine, however, is that the process-per-tab thing makes it much easier to control memory usage: although tabs chew up memory quickly, closing a tab gives you back all the memory it was using, whereas with FF the relationship between closing tabs and giving memory back to the system is much fuzzier....
I am addicted to tabs and have 260 tabs (yes, I have a problem) open and running smoothly in Firefox. That was not possible in Chromium last time I used it.
Firefox currently uses about 3GB memory, which means I have plenty of memory left on my computer. So I do not think memory usage in the general case is so bad for Firefox and neither is scalability or performance. The problem Firefox historically has had and still to a degree has is lack of responsiveness.
20 represents an extreme slimming down for me. I usually run somewhere between 80 and 200 spread across a couple of windows, but can often go above. I installed a tab counter in FF just for fun. I'm not sure what site you have open in your one FF tab, but it must be a very specific one.
I've been using FF lately for cross browser testing. Until tabs run in their own process / thread I won't be switching back. Also, while the dev tools have come a long way and are pretty good now they're still not quite as good as Chrome.
I have 4 different Chromium profiles, and two different firefox profiles, with different levels of privacy / security. Each profile is also linked to a particular context, for example, a tab each for project A, project B, personal, etc.
Tried the page with Opera (12.xx) and it peaked at around +850MB, then within seconds backed down to the initial value (+0MB). No change during scrolling. How is that possible? Does the browser hold a bigger pool and just get rid of other stuff?
Completely unscientific test: if I load the following website in Firefox 24 (on Windows 8), the page scrolls very sluggishly (as it did in previous Firefox versions). The same page scrolls fine in Chrome. I don't know what is causing the difference. Is it the images? Or something else? For info, Firefox is my main web browser and I have never switched to Chrome (nor do I intend to).
Warning - the website is ridiculously image heavy (it's a gallery of Android app screenshots)
I returned to Firefox (Aurora) a couple of months ago, but I am currently starting to inch towards Chrome again.
I really don't want to remake my FF profiles every 2 weeks just to avoid performance issues.
> When loading the page, physical memory consumption would still jump to almost 3 GB, because the images are still decoded. But it would soon drop down to a few hundred MB, as the decoded data for non-visible images was discarded, and stay there (with some minor variations) while scrolling around the page.
This is great news. The application that uses the most memory on my machine is the web browser.
I wonder if this fix would make scrolling the page less smooth.
Interestingly, this optimization is already unnecessary if you are using the common technique of setting the CSS background-image on an element rather than using IMG src - on Chrome, Firefox and even Internet Explorer memory usage will be stable on a grid of images that way and, as a bonus, you trivially get cross-browser scaling and centering for vertically scaling thumbnails to be the same size.
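For anyone unfamiliar with the technique described above, a minimal sketch of such a thumbnail grid looks like this (class name and sizes are made up):

```css
/* Thumbnail cell using background-image instead of <img src>.
   background-size: cover scales the image to fill the box;
   background-position: center centers the crop. Both work
   cross-browser without any per-image layout math. */
.thumb {
  width: 200px;
  height: 150px;
  display: inline-block;
  background-size: cover;
  background-position: center;
}
```

Each cell then just sets its own image, e.g. `<div class="thumb" style="background-image: url(photo1.jpg)"></div>`.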
I switched to Chrome recently because Firefox was growing overnight to the point where it started thrashing and becoming unusable. I just checked and it says I already have the latest version. This recent change must not be the whole story.
The latest changes done as part of this are only present on Aurora (Firefox alpha) and the feature is turned off by default.
Also, I have had my Firefox (Aurora again) running for over a week now on my dev machine with > 50 tabs open, and I have had no issues with memory. I suppose there is some problem with one of your add-ons. Try creating a clean profile and see if it's any different.
On one hand true; on the other, it isn't that those fixes are some magic, extra-smart hackery (at least at the idea level, dunno about the implementation). Don't load an image the user can't see, and remove an image the user saw but can't see anymore: those seem like rather obvious solutions.
Just determining whether or not the user can see an image is quite tricky when you consider that there are a number of things which could cause it to be invisible to the user and thus safe to remove:
* It might be placed as a normal block element on the page but out of the current viewport.
* It might have had visibility: hidden CSS statement applied to it.
* It might have had display: none CSS statement applied to it.
* It might have the alpha channel set to fully transparent and then fade in.
* It might be inside a <div> that has overflow: scroll and then have that specific div scrolled such that the image is outside the viewport of that <div>
* It might be using top and left CSS properties to animate the position of the element such that it is off the screen.
* It might have its CSS width or height set to 0 so that it can start out "hidden" and then animate open.
So I wouldn't discount this as just a simple solution because I see a lot of edge cases where things could get confusing just due to the complexity of how many ways there are for a page element to be visible versus not visible.
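To make those edge cases concrete, here is a toy predicate over a flattened style/geometry record covering several of the conditions listed above. This is a deliberate oversimplification: a real engine has to resolve computed styles, clipping ancestors, stacking, and in-flight animations, so every field name here is hypothetical.

```javascript
// Rough check of whether an image could currently be visible to the user.
// `el` is a simplified record of already-resolved style and layout values.
function couldBeVisible(el, viewport) {
  if (el.display === "none") return false;             // removed from layout
  if (el.visibility === "hidden") return false;        // laid out but not painted
  if (el.opacity === 0) return false;                  // transparent (but may fade in!)
  if (el.width === 0 || el.height === 0) return false; // collapsed, may animate open
  // Out of the viewport: covers normal flow, top/left animation, and inner
  // scrollers, once coordinates have been resolved to page space.
  const outOfView =
    el.top + el.height <= viewport.top ||
    el.top >= viewport.top + viewport.height;
  return !outOfView;
}

const vp = { top: 0, height: 800 };
console.log(couldBeVisible({ display: "block", visibility: "visible", opacity: 1,
                             width: 100, height: 100, top: 300 }, vp));  // true
console.log(couldBeVisible({ display: "none", visibility: "visible", opacity: 1,
                             width: 100, height: 100, top: 300 }, vp));  // false
console.log(couldBeVisible({ display: "block", visibility: "visible", opacity: 1,
                             width: 100, height: 100, top: 2000 }, vp)); // false
```

Note that several of the "false" cases (opacity 0, zero size) may become visible a frame later, which is exactly why discarding their decoded data is risky: the browser has to be able to re-decode fast enough that the fade-in doesn't flash blank.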
Except, selectively targeting (bad) cells in a heavily complex system (physiology) that you didn't design and have limited access to makes it tough.
Implementing these fixes in FF is easy. They didn't even have to be geniuses to realize it, just keeping an eye on friends (/rivals) would have given them the idea a few years ago.
NB: Not criticizing FF devs. They do awesome work! And sometimes, such silly little optimizations can just stay hidden at the back of everyone's minds for ages until showing themselves through a veil of obviousness.
Right on. Please spend as much time reading the bug referenced above as you have this thread, and think about the obvious problems/hard solutions proposed, then make a better one! Just keeping a few million WinXP machines off a recycling barge in Bangladesh for another year would be worth the trouble. And hopefully smarter image handling by everybody's browser/phone/whatever will be a long-term result.
Yeah the original comment isn't all that wrong technically (though not very nice either). Firefox is my main browser but loading long and heavy pages takes noticeably longer than with Chrome or IE and causes laggy scrolling. Glad they fixed it with v26 though, hopefully should land in the beta channel soon.