Twenty five what? Megabytes? Mebibytes? Percent? Libraries of Congress? Furlongs per fortnight? Inverse femtobarns?
I can guess you mean "MiB" (mebibytes) from the charts, but units are always important. Bare numbers lead to confusion! It's good practice to always include units, even if it's a simple "All numbers are in MiB" at the top.
I've just updated the table header to reflect that it's MiB - plus it's sorted now. Interesting to see Google Inbox at the top vs Gmail (vintage) at the bottom.
My FastMail accounts (personal and work) tend to sit stably around 10–12MB (when open for days) if I have only opened the mail part, and nudge up to 15–17MB if I have also opened the calendar, settings and address book (which are modules loaded separately on request). As you would imagine, large emails will inevitably increase the memory footprint while they’re loaded, until they perhaps get evicted from the LRU cache after viewing many more emails.
This is what comes of caring about performance and memory footprint. It also doesn’t hurt that almost all of it has been done by one guy rather than having fifty or more people all adding, adding, adding in uncontrolled fashion.
And Topicbox, our group email (mailing lists) product, is using 7–8MB for browsing archives. (It’s definitely simpler than FastMail.)
Somehow, large teams like Gmail’s, which have vastly more resources than us, are never good at memory usage, and seldom good at performance. I have some vague ideas about why this is, but it’s initially quite counterintuitive. It does seem to be a fairly consistent observation, though: small teams actually have a big advantage in such matters.
I’m almost sad that this was all under control before I started working at FastMail early last year, because it’s hard to justify improving it further, and I do find optimising things like memory usage and running performance to be such fun. (I know of a couple of ways memory usage and startup performance could be reduced; but the main thing for startup performance will be service workers and a persistent data cache.)
Meanwhile, I often have to interact with a Jenkins instance with a plugin that has a habit of redrawing a large table from scratch every second or two when a build is running, and keeping a reference to the orphaned DOM node. It can consume almost a gigabyte an hour.
I feel a desire to switch back to the original HTML version - credit to Google for keeping it going. Here's the handy support page [0] with a link to convert back.
Edit: it appears you just need the `/h` appended to the URL [1]
The very lightweight HTML Gmail lacks all of "normal" Gmail's latency-hiding features, which is one reason it uses so little memory. Gmail preloads all of the messages in the thread list so when you click them they are displayed instantly. HTML Gmail doesn't, and when you click a message it fetches the body from the origin. The tradeoff is yours to make. I find the HTML version infuriating when I'm tethered on mobile because every mouse click takes 10 seconds. On the same tether I can leave normal Gmail open all the time and it's fast. Ironically the lightweight Gmail is more usable on a fast, reliable wired connection.
I think that the contents of the emails in a thread are a minor part of the 150MiB taken by the full version. I suppose my entire inbox (the not-archived part) is much smaller than that.
Indeed, it's not the data but the code to support it. There's a bazillion features in there designed to avoid the user having to do additional page loads. For example in my Gmail memory profile there's 20+ MB of code to support the real-time chat feature in the sidebar. You can argue about whether there should be a middle ground implementation that has the email preloading but not the real-time chat. Developer and project manager time is unfortunately finite.
Wasn't this the initial reason Gmail was designed the way it was? It had a small footprint and wasn't a performance hog, which made it fast, responsive and easy to use.
I've read lately that Inbox is probably going away after the recent Gmail redesign, which incorporates some of Inbox's features.
This is interesting, but it's not the whole story. Apps that are media heavy can often use large amounts of memory outside the JavaScript heap.
E.g., if you load and decode a 3MB MP3 with the Web Audio API you can easily find yourself swallowing 30MB of RAM, depending upon the uncompressed sample rate. Another example: image decompression can lead to large amounts of GPU memory being swallowed.
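To see where that multiplier comes from: decoded audio is held as uncompressed PCM (Float32 per channel in the Web Audio API), so the in-memory size is just duration × sample rate × channels × 4 bytes. A back-of-the-envelope sketch with assumed, illustrative numbers:

    // Rough arithmetic for audio decoded via decodeAudioData().
    // All numbers here are illustrative assumptions, not browser measurements.
    const durationSeconds = 180; // roughly what a ~3MB, 128kbps MP3 holds
    const sampleRate = 44100;    // decoded sample rate
    const channels = 1;          // mono; stereo doubles the figure
    const bytesPerSample = 4;    // Web Audio stores Float32 samples
    const decodedBytes = durationSeconds * sampleRate * channels * bytesPerSample;
    console.log((decodedBytes / (1024 * 1024)).toFixed(1) + " MiB"); // ~30.3 MiB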
You can see the effect of situations like this by using Chrome Task Manager, which will give you a more realistic view of total memory usage by a page.
Nice to see The Guardian so low, especially compared to the NYTimes. I often come across issues with the NYT where the page starts to load and then just goes completely blank.
I think overall this list is a good indication of sites that have respect for their users.
> where the page starts to load and then just goes completely blank.
That's almost certainly a bug on your end. You do have 50MB of free RAM available, right? Other than actually running out of memory (and swap), normal usage should never result in a no-render.
My best guess would be a content blocker interfering.
By making it visible, people would be more cognizant of which sites have poor experiences or have a big impact on their computer.
You can't simplify this problem down to "This uses more RAM than an arbitrary threshold, therefore it's a problem." If I spend 99% of my time using an app then I want it to cache hundreds of megabytes of data into memory so I can work fast. Saying it's a bad experience if it does that is wrong.
Take a particularly bad example: Gmail uses over 150MB of RAM, and achieves similar results to what FastMail does with 11MB. The main difference is that Gmail includes the Hangouts chat system—and indeed, last time I checked Hangouts was responsible for most of the megabytes downloaded, most of the time spent and most of the memory consumed. Suppose then we decide to let Hangouts off scot-free for its profligacy, calling it 50% of the footprint, and just focus on comparing the email part of the system.
Gmail is still using an estimated 80MB for what FastMail needs only 11MB for.
Where is this 80MB coming from? Is it caching 80MB of emails? I think not. You’ll be lucky if that 80MB corresponds to even half a megabyte of email.
Most high memory usage is not because it’s caching things so it can work fast. Most high memory usage is simply because the app is inefficient.
I'd be interested to see how the offline capability of Gmail ties into this. Whatever it uses, localstorage or service workers or whatever, it does a pretty dang good job.
I can’t speak about how Gmail’s offline functionality works, but I would expect it to be broadly similar to the basic approach we’re planning on in FastMail, which is that you start by simply adding a persistent cache. There’s already a robust object syncing protocol in place (JMAP), and you already have what amounts to simple tables (Mailbox, Email, CalendarEvent, &c.), so all that’s really changing is that you’re moving most of that cache onto disk rather than keeping it in memory. Most likely, content is moved out of the JS heap into an IndexedDB. I’m not sure what the memory characteristics of IndexedDB are yet, but at least theoretically that should be memory that can easily be freed, because it’s all flushed to disk. There’s the possibility that the browser may keep more in memory than is necessary, but I would actually expect memory to be a little lower in the presence of service workers, rather than higher.
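As a minimal sketch of what "moving the cache onto disk" looks like (hypothetical store and object shapes, not FastMail's actual code), synced objects get written to an IndexedDB object store instead of being kept in a heap map:

    // Minimal sketch of a persistent object cache in IndexedDB.
    // The "objectCache"/"Email" names are hypothetical, for illustration only.
    function openCache(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("objectCache", 1);
        req.onupgradeneeded = () =>
          req.result.createObjectStore("Email", { keyPath: "id" });
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    async function putEmails(db: IDBDatabase, emails: { id: string }[]): Promise<void> {
      const tx = db.transaction("Email", "readwrite");
      const store = tx.objectStore("Email");
      for (const email of emails) store.put(email);
      return new Promise((resolve, reject) => {
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
    }

The win is exactly what the parent describes: once an object is flushed to disk, the browser is free to drop its in-memory copy.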
That's not about right or wrong. And by the way, the proposed indicator reports numbers, not a wrongness level.
I'd add to it a "cache this app more/less" slider, because you know how you usually use a given app or page.
And a "complain" button with an auto-redirect to some support or uservoice form for the app, if there is one, for when I've moved the slider as far toward "less" as possible and even then it still eats 200 megabytes of RAM.
I'm not convinced that such a number could be calculated in such a way that wouldn't be utterly meaningless to all but the biggest nerds.
The resources a 'web app' 'should' use are highly context-dependent. As a web developer, I can determine some of that context, as I know what functionality is resource-intensive. I don't think that you can distill that down in any useful way.
Compute this from the start of page load. Adjust the scale of the final output if desired.
# automatically scaled to local hardware
# in units the user actually experiences
t := "total wall-clock CPU time used (in ms)"
# yes, allocations - NOT total usage
m := "Total heap allocations (in kB/kiB)"
# magnitude of about:blank
a := typical_m_for("about:blank") *
typical_t_for("about:blank")
# score is computed similar to decibel
score := 10 * log10( (m * t) / a )
A variant of the score that excludes media data in <audio> or <video> tags should also be computed. Both versions should be presented.
Continuously update these numbers, so the user can see the impact of any background JS/etc.
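If browsers ever exposed those two counters, the score itself would be trivial to compute. A runnable sketch of the formula above (the about:blank baselines are assumed inputs; nothing like typical_t_for/typical_m_for exists today):

    // Sketch of the proposed decibel-style bloat score.
    function bloatScore(
      cpuMs: number,         // total wall-clock CPU time since page load
      allocKiB: number,      // total heap allocations, NOT current usage
      blankCpuMs: number,    // typical CPU time for about:blank
      blankAllocKiB: number, // typical allocations for about:blank
    ): number {
      return 10 * Math.log10((allocKiB * cpuMs) / (blankAllocKiB * blankCpuMs));
    }

    // A page using 10x the CPU and 10x the allocations of about:blank:
    console.log(bloatScore(500, 5000, 50, 500)); // 10 * log10(100) = 20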
> utterly meaningless to all but the biggest nerds
Please don't assume people are stupid. If you give people real data consistently and it affects their life, they will figure out how to use it. Their interpretation may not be technically rigorous, but it will mean something to them. Also, most people will understand and expect the score for youtube to be (much) larger than a messaging service like twitter and both larger than a simple static text-only page.
> as I know what functionality is resource intensive
You only know this in an isolated, abstract sense. You do NOT know how much network bandwidth, CPU time, or RAM the user actually has available and wishes to spend on your page. They may be using their bandwidth for other important things on another computer. Their RAM and CPU might be needed for other uses - inside or outside the browser.
Approximately nobody only uses one application at a time. Your app or webpage is almost always going to be competing for resources in an environment you do not control and can never truly understand (you do not know the user's ultimate goals, environment, or requirements/restrictions).
I'm wondering if this could be made a relative score instead of an absolute one: a score that measures the current site's memory consumption relative to your available free memory. It might be more useful for seeing the impact a site has on you specifically.
That seems like a convoluted scheme to get "normal" people to care about something that you care a lot about, and they don't.
They don't care because it's not a problem for them. Yeah, maybe they could have bought a cheaper phone if everyone had spent twice as much time writing the code. But hardware advances have actually caught up with requirements (and then some), and all these sites work fine on even lower-end current phones.
You care about it like a watchmaker cares about the mechanical drive of his watch.
They don't need a watch. They have a smartphone. It's eight orders of magnitude more precise than your mechanical watch.
I care about bandwidth because I don't have unlimited data on my mobile plan.
I wish I knew how much each site/domain was costing me. Then I could have a better idea which sites are profligate wastrels to be avoided and which sites I can feel free to visit at will.
I agree, but I believe this is referring to memory usage, not data transfer/usage over the network. But that number would be interesting if more readily available.
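The network number is actually approximable from inside the page today. A sketch using the Resource Timing API (note that transferSize reads as 0 for cross-origin resources unless the server sends Timing-Allow-Origin, so this is a lower bound):

    // Sum of bytes transferred for resources loaded by the current page.
    // Undercounts: cross-origin entries without Timing-Allow-Origin report 0.
    const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
    const totalBytes = entries.reduce((sum, e) => sum + e.transferSize, 0);
    console.log("~" + (totalBytes / 1024).toFixed(0) + " KiB transferred so far");

A browser extension would be needed to aggregate this per domain across your whole browsing, but the primitive exists.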
It is absolutely a problem for them. It's what makes them pissed off, ruining my day with anger fits over how their phone is shit (it's not their phone - it's the accumulated bloat of the apps they use). It's what makes them buy a new computer ("my old one was too slow").
It's just that regular users know so little about technology that they accept what they're given without any question. "Things work slow, so it's definitely the fault of my computer - maybe it has viruses."
I use the uBlock origin badge in my browser as a proxy for making page judgements - I avoid revisiting sites with a high number (>5?) of blocked elements...
It's definitely noticeable on the Google/Facebook/Twitter properties, which is unfortunate because they're the ones behind a lot of the tech (and ads) commonly used across the web.
I wonder how much of the bloat comes from everyone using React/Angular/Polymer/Bootstrap and the layers and layers of libraries and another DOM and rendering engine.
FastMail uses our in-house framework Overture, which doesn’t use a VDOM, but rather uses computed properties with explicitly-declared dependencies, observers, and things like that. It’s definitely more effort to work in, but the results speak for themselves: it’s markedly faster than anything done in React or Angular or similar, and uses around 11MB of memory.
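For those unfamiliar with the pattern, the rough idea is this (a hand-rolled sketch of the general technique, not Overture's actual API): a computed value names its dependencies up front and recomputes only when one of them changes, so there's no tree diffing at all.

    // Sketch of computed properties with explicitly declared dependencies.
    class Observable {
      private values = new Map<string, unknown>();
      private observers = new Map<string, Array<() => void>>();

      get(key: string) { return this.values.get(key); }

      set(key: string, value: unknown) {
        this.values.set(key, value);
        for (const fn of this.observers.get(key) ?? []) fn();
      }

      // Recompute `key` only when one of its declared dependencies changes.
      computed(key: string, deps: string[], compute: () => unknown) {
        const update = () => this.values.set(key, compute());
        for (const dep of deps) {
          const list = this.observers.get(dep) ?? [];
          list.push(update);
          this.observers.set(dep, list);
        }
        update();
      }
    }

    const state = new Observable();
    state.set("firstName", "Ada");
    state.set("lastName", "Lovelace");
    state.computed("fullName", ["firstName", "lastName"],
      () => state.get("firstName") + " " + state.get("lastName"));
    state.set("firstName", "Grace");
    console.log(state.get("fullName")); // "Grace Lovelace"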
That's nice, but now please do the same after having that tab open in the browser for 24 hours... it's unbelievable how much memory some crap (I'm looking at you, Google Docs) can leak.
One of the sources of bloat may be not just apps, but also extensions.
E.g. I see the Okta authentication extension (which is a required part of my work setup) consuming nearly a hundred megs after prolonged usage in Firefox. Among other things, it appears to allocate a lot of identical strings (like 500M of them), likely via a thoughtless `substring` call somewhere.
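If that guess is right, the classic mitigation is interning: keep one canonical copy per distinct value instead of a fresh allocation per call. A hypothetical sketch (I have no visibility into the extension's actual code, and `header`/`token` are made-up names for illustration):

    // Intern repeated strings so identical values share one allocation.
    const interned = new Map<string, string>();
    function intern(s: string): string {
      const existing = interned.get(s);
      if (existing !== undefined) return existing;
      interned.set(s, s);
      return s;
    }

    // instead of: token = header.substring(0, 6)        // fresh string per call
    // use:        token = intern(header.substring(0, 6)) // one shared copy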
Awesome analysis, quite instructive. I am even considering adding it to my server performance tool, as a frontend performance metric, since I guess it could be easily automated. Imagine little README badges saying "quite bloated" and "pretty good" :p
If feeding this back to use as a general performance metric, you would have to be very careful to make sure you were measuring the same thing each time, which for a complex application could be difficult unless you are only measuring initial page load (which might not be as useful as you are hoping). Without this control you would need a lot of results to make any average or other analysis of the metric meaningful.
For controlled tests run by yourself in dev (rather than a performance metric for your app in production) it could be useful though.
Opened https://www.reddit.com/ in Private Browsing and measured it after some ten or fifteen seconds, 92.69MB (38MB of objects—of which 16MB is ArrayBuffer—33MB of scripts, 7MB of DOM nodes, 5MB of other, 2MB of strings).
https://old.reddit.com/, 13.70MB (1MB of scripts, 1MB of other, 2MB of objects, 6MB of DOM nodes, 782KB of strings).
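For anyone wanting to take these measurements programmatically rather than from the devtools memory panel, Chromium-based browsers expose an API for it. A sketch (it only works in cross-origin-isolated contexts, and the breakdown it returns is browser-defined):

    // Programmatic page-memory measurement (Chromium; needs COOP/COEP headers).
    async function reportMemory(): Promise<void> {
      if (!("measureUserAgentSpecificMemory" in performance)) {
        console.log("measureUserAgentSpecificMemory not available");
        return;
      }
      const result = await (performance as any).measureUserAgentSpecificMemory();
      console.log((result.bytes / (1024 * 1024)).toFixed(2) + " MiB in use");
    }
    reportMemory();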
Their desktop counterparts will more likely than not be the exact same shoddily thrown together heap of disjointed third party bloat, for the occasion bundled with the affront to professional developer pride and integrity known as 'Electron'.
The web has come a long way and consumer expectations have evolved over the years. While admittedly a lot of the bloat likely comes from negative things like trackers, ads, etc., there are also assets loaded on a typical page that genuinely enhance the user experience. On the list, Google Mail scores the highest in terms of bloat, but it's a full web app, and daily users have probably come to be spoiled by some of the UX niceness that makes Google Mail enjoyable to use. Users would rather have that vs a plain HTML page with links. The empirical evidence is clear on that, otherwise the web would not have evolved as it did.
Software is built on layers of abstraction[0], and necessarily, there will be bloat as a byproduct of layering. If we had to map out complexity of what's really going on in a typical computer, the "bloat" floating at the top layer caused by JavaScript would be put to shame by the complexity in the underlying browser and the OS underneath that, all of which arguably are too "bloated" for 90% of daily use.
> Users would rather have [bloated JS webmail] vs a plain HTML page with links. The empirical evidence is clear on that, otherwise the web would not have evolved as it did.
There's got to be a special name for this kind of logical fallacy.
> Software is built on layers of abstraction, and necessarily, there will be bloat as a byproduct of layering.
Except we're layering less powerful, less abstracted APIs over an already high-level document model, or we're layering a primitive record access API over SQL.
> There's got to be a special name for this kind of logical fallacy.
Voting-with-your-wallet fallacy?
Users choose from what's available on the market. Between high marketing, network effects and lack of technical understanding of the average user, there is close to zero feedback going back to service providers. The providers get to unilaterally decide what's on the market, and users have no choice but to take it.
> There's got to be a special name for this kind of logical fallacy.
There also needs to be a Fallacy fallacy: the irrational belief that merely dropping the "fallacy" moniker is an argument.
In this case: Yes, considering there are many hosted email services competing for customers, with quite a lot of variation between "huge web app" and "minimalistic list of links". And considering Google is known to excessively A/B test any change (including more-complex v. just-html interfaces), it's quite valid to conclude that people generally prefer what GMail is doing.
GMail even has a fallback html interface it switches to when noticing slowdowns. The number of people keeping that mode on manually (few) is another indicator of people's preferences.
> Except we're layering less powerful, less abstracted APIs over an already high-level document model[...]
"Abstract" != "good". Abstraction layers can be stupid, or deliberately constricted. They are still, tautologically, by definition, one step more "abstract" than the API they access.
Funny you use Gmail as the example, since I use the basic HTML version of Gmail[0] whenever I can, and find it a vastly superior experience for just, yknow, reading my emails.
I do the same, especially as my laptop dies a slow and painful death... do you know of any browser extensions etc. that reduce a page’s overhead? I want a basic HTML version of the entire web...
Other than using ublock/umatrix to block as much 3rd party crud as possible, there's not much else you can do since most sites will be tightly coupled to their javascript.
If you're lucky, certain popular websites have their own extensions, such as Alternate Player for Twitch.tv[0], which help make the experience quicker and nicer, but you'll have to search for these if they exist.
I wonder how Gmail as it originally came out (during the invite-only stage) compares to Gmail now? That already had a lot of the functionality (but not, say, chat).
Of course, it's rather difficult to go back in time to measure the old interface now.
> While admittedly a lot of the bloat likely comes from negative things like trackers, ads, etc there are also assets loaded on a typical page that genuinely enhance the user experience.