Tell HN: RAM Requirements are going through the roof
102 points by babebridou on Jan 28, 2012 | 89 comments
There has been some talk about how Moore's law is coming to an end, about how you can now build servers with close to a TB of RAM, how you can scale clusters of servers and achieve great metrics for raw data storage or memcaching, and how incredible feats in both parallelization and miniaturization will reverse the downhill trend for high-end computing.

Then there's the desktop. My desktop PC, for instance. It's a great six-month-old work computer, with 8 cores and 16GB of RAM. "This ought to be enough for all my needs", I thought. The Bill Gates in me has been right so far, but unfortunately this won't be enough for long.

Yesterday I took a look at my memory usage, out of curiosity: 11.4GB. Woah. I had a look at the breakdown.

  - Chrome : ~3GB
  - Firefox : ~1.5GB
  - Java (eclipse) : ~1.2GB
  - The rest: tons of various work-related apps.
I'm of course responsible for letting this accumulate over several days, but still: a third of my RAM taken up by web browsing tabs? Chrome on its own hogging more than twice as much as Eclipse?

What worries me now is how hard we are getting hit by RAM-hogging web pages. Since I began writing this post, my freshly restarted Chrome browser with 9 open tabs (Hacker News (x2), Coding Horror, a Google search for "Mac osX Lion review 6 months", MoPub ad service monitoring, Google Analytics Visitors Overview, Android Developer Console, Twitter, Gmail) is taking up roughly 500MB of RAM. That's insane. On my 2010 Macbook Pro with 4GB of RAM, that means an eighth of my total memory taken up by my core web needs. Needless to say, I can't use my macbook anymore.

Last summer I was working on a little Java experiment - a cross-platform 3D labyrinth. I wanted the overall memory and data footprint to be as low as possible so it could be played on a low-end Android phone with really fast loading, yet I wanted to keep the game space as big as I could, so I designed my own dedicated data structures overnight. I made the following calculation: in 500MB of raw, uncompressed data, I could store the description of an area roughly equivalent to a map of Europe at a resolution of 2.5 meters per pixel.
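
I don't show the actual structure here, but as a rough illustration of the kind of bit-packed grid that fits such a budget, here is a minimal, hypothetical Java sketch (the names and the 2-bits-per-cell encoding are invented for illustration; the real design may differ):

  // Hypothetical bit-packed maze grid: 2 bits per cell (e.g. floor,
  // wall, door, exit), packed into a long[]. At 2 bits per cell,
  // 500MB holds roughly 2 billion cells.
  final class PackedGrid {
      private final long[] bits;
      private final int width;

      PackedGrid(int width, int height) {
          this.width = width;
          long cells = (long) width * height;
          this.bits = new long[(int) ((cells * 2 + 63) / 64)];
      }

      // A 2-bit field never straddles a 64-bit word, so one array
      // access suffices for both get and set.
      int get(int x, int y) {
          long i = ((long) y * width + x) * 2;
          return (int) (bits[(int) (i >>> 6)] >>> (i & 63)) & 3;
      }

      void set(int x, int y, int value) {
          long i = ((long) y * width + x) * 2;
          int word = (int) (i >>> 6);
          int shift = (int) (i & 63);
          bits[word] = (bits[word] & ~(3L << shift)) | ((long) (value & 3) << shift);
      }
  }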

I'm not claiming any feat here, just stating the obvious: web programmers are doing something that's definitely not cool for our current RAM budget. We have lost any sense of measure. It's unnecessary for a Twitter feed following 175 people for an hour or so to claim a 70MB RAM footprint. You're neither the only nor the worst offender, Twitter. But then I had 26 new tweets to display; I clicked, and the footprint suddenly grew to 76MB. 26 tweets = 6MB. We're talking about 140-character tweets. Let's be generous and multiply that amount by 100 to account for each tweeter's profile (which are all 10,000-character essays, as we all know), and we get a total of 364,000 new characters, which end up claiming more than 6 million bytes in RAM. That means the RAM impact of adding 26 new tweets to a webpage is at the very least 10 times higher than it could be, and probably more like a thousand times too high.
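
The arithmetic, spelled out (every number here is an assumption from the paragraph above, not a measurement):

  public class TweetMath {
      public static void main(String[] args) {
          long tweets = 26;
          long charsPerTweet = 140 * 100;        // x100 to cover profiles etc.
          long chars = tweets * charsPerTweet;   // 364,000 characters
          long utf16Bytes = chars * 2;           // ~728 KB even stored as UTF-16
          long observedBytes = 6L * 1000 * 1000; // ~6 MB of observed growth
          // Prints 8: nearly an order of magnitude over the raw text,
          // and ~16x if the text were stored as single-byte characters.
          System.out.println(observedBytes / utf16Bytes);
      }
  }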

Like I said, I'm only using Twitter to make a point. I mean no disrespect to Twitter's web devs - I'm myself a very poor web developer - and I'm pretty sure the blame could also be put on Google Chrome instead. But I remember a day in 2002 when my Internet Explorer was trying to load a 4MB webpage from the hard drive, causing a RAM footprint above 40MB and a failure after 20 minutes of waiting in front of a white screen. Back then I was merely a junior consultant working on QA, and I was the one to tell the devs that they were definitely doing something not cool at all for the user's computer. True, we were showing rather complex and impressive amounts of "data" in our webapp with much simpler but oh so wrongly implemented "UI" (in that case, hundreds of unnecessary nested table anchors), but I can't help thinking back to those times when it was simply impossible to ship a product that was not cool for the user's computer, because the computer would refuse to run it at all. We're way past that line today.

My Twitter tab has now garbage collected some data. It's back to a 73MB RAM footprint. There are 32 new tweets to show. I click them, and the footprint bumps back up to 78MB. Meanwhile, my overall Chrome footprint is now showing 550MB private memory. That's 50MB for two clicks on Twitter and ~4500 characters in a Hacker News submit form.

Moore's Law nowadays describes the exponential growth of software's requirements rather than of hardware's performance, and this is killing us.




Chrome is designed to be memory hungry. It doesn't NEED the memory, but if it's there it sure as hell will use it to try and speed up your internet experience.

What's the point of having a tonne of RAM only to let it sit idle?

I "only" have 4 gb of ram and chrome is taking up 1.2 gb of that, even though i have 12 windows open with ~5 tabs in each.


"Idle" memory is used by the operating system as cache. On my computer, 800 MiB of my RAM is in use by programs, but 1.7 GiB of my 2 GiB RAM is used in total, by programs, buffers and cache.

The problem is that an extension like "1-Up for Google+" doesn't need 20 MiB of RAM just to play a sound and display a green mushroom, for a total of about 150 KiB of assets and maybe a few KiB of code.


I've never used "1-Up for Google+", but are you sure it's really only 150KiB worth of assets? If you are measuring file sizes, you are looking at compressed data, which is useless to your computer.

You'd be surprised how big uncompressed data can get, compare a .BMP with a .PNG of the same image for example.


I had uncompressed data in mind, but it was only a rough guess. I'll do a more in-depth calculation.

2 seconds of uncompressed sound, at 44.1 kHz, in stereo and 8 bits per sample, is about 170 KiB. 17 images of on average 24 by 24 pixels at 32 bits per pixel is about 38 KiB. Three other, bigger images give another 40 KiB of uncompressed data. A total of roughly 250 KiB. There's a few KiB of HTML, but I don't know whether and how I should count that. Even if we take into account headers, padding, objects and higher quality data (e.g. 16 bits per sample), I assume all assets will still account for less than half a megabyte.
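
The same estimate as a worked calculation (sizes are guesses, not measurements of the actual extension):

  public class AssetMath {
      public static void main(String[] args) {
          int sound = 2 * 44100 * 2 * 1; // 2 s, 44.1 kHz, stereo, 8-bit: ~172 KiB
          int icons = 17 * 24 * 24 * 4;  // 17 icons, 24x24 px, 32 bpp:   ~38 KiB
          int other = 40 * 1024;         // three bigger images:          ~40 KiB
          // Prints ~250 KiB, far below the ~20 MiB the extension holds.
          System.out.printf("%d KiB%n", (sound + icons + other) / 1024);
      }
  }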

Now that I look at the code, I see it's including a bundled version of jQuery. I don't know how big the representation of Javascript code is in memory, but 29 MiB seems a bit much (correction, while I was typing this, it went to 31 MiB).

It might seem disingenuous to single out this extension, but truth be told, all my extensions use a similarly, way too high amount of private memory.


Most likely it's the JavaScript; it may also be holding syntax trees and the like in memory too.

If it causes another library to be loaded that nothing else was using, that could count too, especially if it's a different version of jQuery.

If two processes are both using the same library and it is linked into their address spaces, whose memory usage does it count against?


Neither, as I am only looking at private memory, which is by definition not shared. Interestingly, most of my extensions have 0K shared memory, which implies they're all keeping their own version of JavaScript libraries in their own memory space. That might explain most of the problem. It doesn't seem easy to solve a problem like this, without reinventing the shared library and the accompanying "DLL hell".


And in fact the memory isolation between javascript execution contexts is one of Chrome's most touted security features.


Why is the default to let any app take as much as it can? Or, perhaps more to the point, why can't we force apps to live within X megs of RAM, regardless of what we have available (for whatever reason we choose)? If it's possible to do that, it's pretty well hidden from everyone.


You have to give an app whatever memory it asks for, or it will crash. If it doesn't use that memory, the OS will swap it to disk.

Apps don't generally change their behavior based on the size of physical RAM, but maybe Chrome does this as a very advanced optimization.


We should be able to tell the OS "don't give app X more than 300MB" and let the OS handle swapping directly, regardless of the actual RAM.


How would you know how much RAM is going to be needed by a process?

Let's say you load up your video editing program; maybe it only needs 512MB of memory to fit the whole program into memory. Now what happens when you load in the LOTR director's cut?

Especially if you aren't running any other large programs, would it still make sense to restrict it to 300MB (or 1GB, or 2GB, etc.)?

I think it's safer to assume that OS developers know what they are doing and can be trusted to handle memory contention issues better than most of us can.

As an aside, some programs (like Redis, memcached, etc.) do allow you to restrict the amount of RAM they use, but this is because they are explicitly greedy with memory and are probably used by people with enough skill to measure actual usage and tune accordingly.


"How would you know how much RAM is going to be needed by a process?"

I don't necessarily. I just know how much RAM I want to be able to give it.

I'd love to be able to test apps in lower-RAM scenarios. Just because I've got 8GB or 16GB doesn't mean everyone does, but I have no way to test what that experience will be like.

Additionally, I may have two processes that are both rather RAM hungry that I need to run. Give me the option of constraining one of them so that I can give RAM priority to one or the other rather than "oh, well the OS people are smarter than me, so I guess I have to trust that app 1 will get RAM priority over app 2, or they'll just both have to be equal".


If you want to artificially constrain memory in order to test performance on low-memory systems, then that's a different use case entirely, and I'd be surprised if there is no software available to do this for you.

Although you should bear in mind that computers with less memory probably have less of everything else (including slower memory buses), so your best bet is to actually find a slower computer and test with that.

Most OSes will let you prioritize some processes over others (e.g. Linux has "nice"); whether this is just for CPU preemption priority or covers other resources as well, I'm not sure.

However, I think most swapping systems will try to keep the most frequently used pages in RAM, so if one process has preemption priority over another, this problem would solve itself, as its memory area would just get accessed more often.

Again this is kind of finger in the air stuff so anyone more familiar with OS design should feel free to correct me.


The operating system can use this to cache files, which greatly improves read and write performance.


Exactly, free memory is wasted memory.


For the open-source Chromium, there's a command-line option to set how quickly it should release memory to the OS:

https://wiki.archlinux.org/index.php/Chromium_Tips_and_Tweak...


RAM is allocated and managed by the operating system. You may use 11 of 16 GB at once, but someone could browse the internet with the same tabs at 3 or 4 GB and be fine. Hell, they could at 2GB and be fine.

The operating system plays a huge role in memory usage; many modern ones cache indiscriminately, preload the most-used applications, and so on.

Comparing RAM usage is almost apples and oranges these days, unless we're talking similar setups.


To add to this, I think it's important that OP knows this is a good thing. I built my computer just over 3 years ago when DDR2 prices bottomed out, so I have 8GB, and this thing is still more computer than I need. I boot up, load all my applications into RAM, and everything is super snappy - because RAM is fast. Prior to this rig I found myself compelled to upgrade at least every 3 years because things would get slow. Times change.


I browse with 1.5 gigs, with both firefox and chrome open. Firefox will have up to 30 tabs, and chrome up to 20. I can still browse fine.

Edit: Debian minimal install using the Awesome window manager.


I routinely open more than 30 tabs on Chrome on both Windows and Ubuntu. I have 4 GB. I have had problems on Ubuntu, but never on Windows.


I would say that it's not so much that RAM requirements are going through the roof as it is that the benefit you can get from having more RAM is going through the roof.

Having said that, many of the languages we are using, such as JavaScript, are not exactly storage efficient - combine that with the fact that many people who would probably not have considered themselves "programmer" enough to write native apps in C++ back in the day are now writing things in JavaScript and putting them online.

I often see people "benchmark" things like operating systems or browsers based on RAM usage (people concluding that 32-bit Win XP is more "efficient" than Win7, for example), but I think it is far more important to measure swap file usage, as that is what actually hurts performance. AFAIK there is no performance hit in overwriting a flip-flop in memory that already has a value stored, but there is a huge penalty for hitting a disk. So I'd be quite happy to have all my RAM utilized all the time if I have zero swapping.

First of all, if you are using a 64-bit platform, your pointers are going to be twice as big as they used to be, but you get to access so much more memory that this issue solves itself.

Also, IIRC, Windows 7 does things like arranging your frequently used data into a contiguous area of the disk and then reading all that stuff into virtual memory at bootup. This results in higher RAM usage (due to many pages being loaded into physical RAM) but better overall performance, since if other stuff needs the RAM more urgently it can either swap the cache back to disk or overwrite the cache entirely.

Let's say you have 8GB of RAM and are watching an HD video which is 7GB uncompressed, and the rest of your system only really needs 1GB of your RAM. Surely it makes sense that whilst you are watching the video you can use spare CPU cores to uncompress the rest of the video and put it into RAM.

I think it's more the case that computing will always expand to fill the amount of memory available. Bearing in mind that even the cost of swapping is dramatically reducing with SSDs etc.


> Let's say you have 8GB of RAM and are watching an HD video which is 7GB uncompressed, and the rest of your system only really needs 1GB of your RAM. Surely it makes sense that whilst you are watching the video you can use spare CPU cores to uncompress the rest of the video and put it into RAM.

It makes a ton of sense for all those people (and other species, from different time/space geometries) who can comprehend a movie by experiencing its time dimension as a single event. Ordinary humans may find it less useful, albeit having no delay when skipping back and forth may have some utility for those of us who experience time in a more traditional way ;-)


> Let's say you have 8GB of RAM and are watching an HD video which is 7GB uncompressed, and the rest of your system only really needs 1GB of your RAM. Surely it makes sense that whilst you are watching the video you can use spare CPU cores to uncompress the rest of the video and put it into RAM.

That's not clear to me. Why do something ahead of time that can be done faster than needed on the fly anyway? You're basically assuming that when system resources are needed for something else halfway through your video, processing power will be more useful than space.


True, with video decompression that is probably the case.

I'm sure there are lots of examples where doing complex calculations whenever you have spare CPU, and swapping the results out to disk as needed, would be preferable to having to contend with whatever else needs the CPU at the time.

I think the key is how the OS deals with memory contention issues, rather than just minimizing memory usage.

Someone with a better understanding of OSes than me could probably clarify.

I guess the performance hit comes when you write a page of RAM to the disk; perhaps there are some cases where you don't bother to do this if memory contention is high?


Really, you don't want to randomly waste CPU cycles for nothing, like pre-decompressing video. It just increases the power usage and heat of the computer. It would also make it harder to skip around, since the program needs to finish the portion it is working on.

One thing people often don't realise is that there isn't really 'wasted' RAM; your OS usually uses the rest to cache things, especially files. File caching can greatly speed up disc reads and writes.

The following only applies to Linux (I'm not sure about other OSes; all have very different memory models). When memory pressure gets high, a few things can happen, depending on whether you have swap. The file cache is preferentially cleared first, but swapping pages to disc is also considered readily. An interesting detail is that swapped pages aren't deleted from swap straight away, only if they are modified (so swap usually doesn't decrease in size). There is also the Out Of Memory killer, which can kill applications using too much memory. It really is an interesting area. You can read more at http://tldp.org/LDP/tlk/mm/memory.html. Just remember all OSes are different.
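
On Linux you can watch this accounting yourself; a minimal sketch, reading the kernel's counters from /proc/meminfo (field names as in mainline Linux; this file does not exist on other OSes):

  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.Paths;

  public class MemInfo {
      public static void main(String[] args) throws IOException {
          for (String line : Files.readAllLines(
                  Paths.get("/proc/meminfo"), StandardCharsets.UTF_8)) {
              // MemFree is truly idle; Cached is file cache the kernel
              // reclaims under pressure, so it isn't really "used up".
              if (line.startsWith("MemTotal") || line.startsWith("MemFree")
                      || line.startsWith("Cached") || line.startsWith("SwapFree")) {
                  System.out.println(line);
              }
          }
      }
  }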


If you can spin up the HD, read the file into RAM, and spin down the HD, it will take a lot less energy than reading the file slowly over the course of an hour.


Spinning down the hard drive is a pretty bad idea, as it doesn't save much power and there is usually a noticeable lag in spinning it back up again. Also, most applications and OSes these days expect it to be always on. Your carefully crafted scenario goes up in smoke if your OS likes to log stuff, your browser wants to dump its cache, or your word processor wants to autosave.


That example isn't really good, but think back to the original post: if your browser caches every element of every already-rendered page, it can be super fast on a page reload or tab reopen.


You can only store about a minute's worth of decompressed RGB24 1080p video in 7GB of RAM. There really is no point to doing this.
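
Back-of-the-envelope, using generic 1080p numbers rather than anything measured from a real player:

  public class VideoMath {
      public static void main(String[] args) {
          long bytesPerFrame = 1920L * 1080 * 3;       // RGB24: ~6.2 MB per frame
          long bytesPerSecond = bytesPerFrame * 24;    // 24 fps: ~150 MB per second
          long seconds = 7000000000L / bytesPerSecond;
          System.out.println(seconds + " s");          // ~46 seconds in 7 GB
      }
  }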

The only time this would make sense is in a content-creation environment where quick access to uncompressed frames makes a difference. However, those environments are already CPU starved. Typical consumers would not see a benefit from this scenario. More benefit will come from the transition to SSDs, which will eventually be much, much faster and require less power than current hard drives.


Are your numbers the VM size, or the RSS?

On the laptop I'm writing this from, I've got a Chrome process with 827 MB of VM size -- but only 62 MB of pages actually in use. The rest is large memory-mapped regions consisting mostly of untouched pages -- which makes sense, because on a 64-bit machine, virtual address space is practically free.
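
On Linux the two numbers are easy to compare; a minimal, Linux-only sketch reading a process's own counters from /proc/self/status (other platforms report these elsewhere):

  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.Paths;

  public class VmVsRss {
      public static void main(String[] args) throws IOException {
          for (String line : Files.readAllLines(
                  Paths.get("/proc/self/status"), StandardCharsets.UTF_8)) {
              // VmSize: reserved address space. VmRSS: pages actually
              // resident. A large VmSize with a small VmRSS is normal
              // on 64-bit systems.
              if (line.startsWith("VmSize") || line.startsWith("VmRSS")) {
                  System.out.println(line);
              }
          }
      }
  }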


Figures are taken from the "private" column of chrome://memory-redirect/

Chrome says: "This is the best indicator of browser memory resource usage"


Ok, that should be the RSS -- unless it's broken on your platform, which is entirely possible, of course.


When I visit chrome://memory-redirect/, I see this message:

  (Bug: We seriously overcount our own memory usage: Issue 25454.)
Issue 25454: http://code.google.com/p/chromium/issues/detail?id=25454


I agree that this is completely getting out of hand. Memory was (and still is) very cheap, but we are reaching a limit. The problem is that there is little we can do about it. I doubt that Mozilla and Google don't know their browsers consume disproportionate amounts of memory. But developing a web browser isn't easy, and I assume a huge memory footprint is unavoidable. I've often wondered whether the problem lies not with the web developers nor with the browser programmers but in the expressiveness of the HTML+CSS+JavaScript combination. It allows an enormous diversity of websites, but I assume that storing an exhaustive set of metrics for each element on a page adds up to quite some data.

I wouldn't say your macbook is useless though. I have only one computer, a 2008 laptop with 2 GB of RAM, and I cope very well, although I experience the occasional lockup (I don't use swap).


I'm wondering where the memory usage really comes from. I come from a dev culture where, if I do things right, I can trust the underlying system I build on to run fast and use as few resources as it can, so most of the performance and optimization work falls to me.

Are the underlying systems running web applications (aka browsers) doing it wrong? Or are the webapp devs not doing what needs to be done in general? Or am I raising a false alarm and is everything alright?

As for my macbook, it has an unrelated problem, that's for sure, but all it does is magnify the memory issues for me.


With memory prices as low as they are right now, you should upgrade if possible.

8GB of DDR2/3 from a good vendor has been less than $50 for a while now.


I've considered it, but I wonder whether I wouldn't be better off buying a whole new laptop instead of patching this one up. That's a lot of money, though, and this one works fine, so I keep postponing it.

But really, buying more RAM is only a temporary solution. I sometimes feel like the memory demands of web browsers are out of line and I wonder what causes that.


Just out of curiosity, but why don't you have a swap partition?


Because I used to think I didn't need it and it's a bit of hassle. I also thought swap was almost useless because of the comparatively slow speed of hard drives and that I'd rather have the memory hogging process killed than my computer lock up completely.

I might try it out sometime though.


SSD swap is an excellent way of mitigating memory requirements. It's a huge factor in the MBA being a viable development machine.


A lot of apps will take more memory than they need so they can keep more data "live" and be more responsive when they need to perform actions.

If you ran the same set of apps on a 2GB machine you'd get different numbers.


Still I fail to see why this "live" data takes so much space.

Besides, I'll have to take your word on that, since my 4GB macbook pro has become completely unusable due to constant swapping, but it could be completely unrelated.


I'm at the same point as you. I have 8 GB of RAM in my mid-2009 MacBook Pro and usually have one browser running with a number of tabs. I quickly run out of RAM and go to swap, and the only solution I've found so far is to use the purge tool that came with Xcode and manually free up the "inactive" RAM that seems to just sit there.


I can't speak to your swapping problem, but for example an uncompressed (ready to display) image at 640 x 480 resolution and 24 bits per pixel takes 900 KB of memory. The mentioned 3 GB memory consumption of Chrome equals the size of some 3,300 such images, which can easily be the residue of the several days of browsing the OP is talking about: 330 pages with ten medium-sized images each. Of course this is an oversimplification for the sake of easy calculation, and the browser caches many other things besides images, but you can see that a few GB is not that much, considering the modern "need" for that much graphics.


I have a windows machine I use for web browsing and it only has 2 GB of RAM. I often have 30+ tabs open for days on end and it works just fine - I barely get any swapping (in fact, I had the swap file turned off for a few weeks before Christmas and had no problems at all).


> A lot of apps will take more memory than they need so they can keep more data "live" and be more responsive when they need to perform actions.

Which is annoying, because it leads to double-buffering.


For those with the RAM to spare, this seems like sensible behaviour -- if there are ways to speed up your web browsing by using otherwise idle RAM, then great!

Unfortunately, the flipside is that if you're using a computer more than a few years old, anything beyond your set of 'core' sites tends to grind your machine to a halt as your browser causes everything to constantly swap in and out of memory.


PROTIP: Use NoScript to take RAM requirements back to 2007.

I'm on a 2GB laptop with XP typing this and the fan is silent. SILENT. The second I turn NoScript off, it overloads and then, of course, crashes. Some scripts run constantly in the background, and click tracking and even mouse motion tracking are becoming more commonplace.


Also, using the RequestPolicy addon in Firefox will dramatically cut down on stuff loaded from third-party websites (including Google Analytics, the Facebook Like button, and ad and tracking services of all kinds).


For Chrome users on the dev channel, try http://silentorbit.com/kiss/


And as an added bonus, you're reclaiming a lot of your online privacy. Unless that was your main reason to install all these addons in the first place (as is the case for myself). It also tends to speed up page load times, often by an order of magnitude, because of all the crap some sites want to load.

I've been doing this for years and I seriously do not understand when people complain about extreme RAM usage of their browser. I have 4GB of RAM and often sit at 50-100+ tabs for weeks and don't really have any issues - Firefox Nightly is currently hogging ~16% of my memory according to htop - which is perfectly okay.


Any more tips for speeding things up/keeping RAM low?

And when those 50-100 tabs crash, do you have a backup strategy for your open links, or do you just hope session restore catches it all in time?


I don't have any strategies, really. I just never had problems with clogged RAM to begin with.

Concerning crashes (which became incredibly rare after I decided to just not have Flash installed anymore[1]), Firefox usually is able to just restore the session. I can't actually remember ever "losing" all my opened tabs.

A lot of my open tabs are children (I use Tree Style Tabs) of sites like HN, which I just keep open and refresh every now and then to see if the discussion has progressed, or a tree of tabs dealing with a specific topic (for example when I am researching something, or a web shop where I left open some interesting articles). And then there are tabs which are just open because I forgot to close them. I clean up every now and then.

[1] This is seriously baffling. I'd wager that >90% of all browser crashes can, in some way or another, be attributed to Flash. I have nothing to back up that claim, it's just anecdotal experience, but I thought this was worth mentioning.


Turn off Flash and JavaScript. That should reduce your browser's memory use by 90%. And use Opera. I can easily fit 200 tabs in under a GB.


Or turn off your computer completely to have 100% free RAM. This advice is not as helpful as it seems if your line of work is, let's say, JavaScript or Flash development, or if you want to use anything more advanced than a blog.


I use a second browser just for JS and flash. It never has more than 3 tabs open and is quite light on memory usage.


That does not address lots of use cases. It also introduces an inconvenience that is pretty much a no-go for the regular Joes out there. The problem is that an average user is now forced to buy a new computer every 3 years because Flash and JavaScript take up too much RAM. Sure, an above-average user might go through the inconvenience of setting up two browsers and learning what JavaScript and Flash are, but your standard spherical bear will not.

As for developers, matters are worse. For one, I have the pleasure of working with (debugging, deploying, writing) JS and Flash code. I would love to use the extra 2 GB of RAM to run 3 more virtual machines for testing, but I cannot.

The solution is not to bury our heads in the sand, but to fix the memory leaks and/or memory requirements. As the OP points out, there is no reason that a web page which can be encapsulated in half a meg of HTML needs to take up hundreds of megs of RAM once parsed and running.


Your operating system uses the RAM available to it. You see 11.4GB of RAM usage because you have 16GB of RAM. My laptop has 4GB and, short of running virtual machines, I've never wanted for more.


The apps simply try to make the best use of the memory available, for example they could use it for caching. It would be bad if they weren't using up all the memory available.

That said, there might be the occasional rogue app or web site, which should be uninstalled or avoided.


>there might be the occasional rogue app or web site, which should be uninstalled or avoided.

How would I know if a web site is "rogue"?


I guess "rogue" is a term describing both malicious and poorly coded sites. As a rule of thumb, if scrolling isn't perfectly instant, a site is "rogue". I found a recent example of an unintentionally rogue website: a tumblr blog using a preview service that extracts OpenGraph metadata and displays images as a caption to the link. One of the links was an article on a news website where the og:image incorrectly pointed at a 3MB JPG. The whole tumblr blog would be totally unresponsive to scrolling due to a 50x50 image with a 4096x2048 src.

This was absolutely unintentional and involves at least two intermediates over which the original blogger had no control whatsoever.

Link to the page in question: http://fh2012.tumblr.com/page/6 It so happens it's the tumblr blog of the current favorite to the French Presidential election, and it was the first post so I noticed it :P
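
For scale - assuming the browser decodes the full image before scaling it down to the 50x50 slot (a plausible guess about the rendering pipeline, not a measured fact):

  public class DecodedImage {
      public static void main(String[] args) {
          long decoded = 4096L * 2048 * 4; // RGBA, 32 bits per pixel
          // Prints 32: a 3MB JPG can balloon to ~32 MiB once decoded.
          System.out.printf("%d MiB%n", decoded / (1024 * 1024));
      }
  }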


Thanks.

I would have preferred a web in which scrolling (and certain other operations, like the Back button) was always instant regardless of how the page is coded, and consequently I did not have to watch out for rogue pages or sites.

ADDED. Or at least a web in which there were fewer ways for a page to go unresponsive, which would have made it easier for readers and writers to anticipate responsiveness problems.


You are correct to be calling out applications that are indifferent to memory usage. The commenters who answer "RAM is cheap", "your time is worth more than the RAM", etc. are missing critical points:

1) RAM is a finite resource.

2) Managing excessive RAM usage (even without swapping) requires OS effort.

3) Indifferent memory management is a sign of sloppy coding and usually indicates a memory leak.

4) Casual memory usage does not fly in the mobile, power-constrained world that is now upon us.

#1: RAM is finite

A computer with 1 terabyte of memory, all but 1 GB consumed by a memory-indifferent program, is going to behave poorly. The OS spends a lot of time trying to find free space every time the program or the OS needs new memory for any reason.

#2: Managing excessive RAM usage is not free

The OS is constantly shuffling stuff around in memory, updating pointers, etc. If the OS has more active memory to manage, the CPU spends more time on this overhead work, time that is then not available to actually run the program.

#3: Excessive memory usage = sloppy programming (and programmers)

Whenever I have found memory-indifferent code, I have also found bugs beyond memory usage:

1) Massive HTTP session size. On a web server, if each session takes "only" 2 megabytes, one high-end server can handle about 4,000 users (8GB of session data) before bogging down.

2) Memory leaks - temporary data created but never released to the OS.

#4: Mobile

Memory uses power. Remember that this is a key reason why Flash is dead on mobile: Adobe could never reduce the memory/power hunger of Flash.

As a Java developer, I routinely constrain the JavaVM's memory during testing to just above the target footprint. I want to know if there is a memory issue before it blows up in my face on a busy day.
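
A sketch of that practice (the class name is invented; exact flags vary by JVM): launch the app under test with a capped heap, e.g. "java -Xmx64m HeapCheck", and the cap is visible from inside:

  public class HeapCheck {
      public static void main(String[] args) {
          long max = Runtime.getRuntime().maxMemory(); // honors -Xmx
          System.out.printf("max heap: %d MiB%n", max / (1024 * 1024));
      }
  }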


I have 25 tabs open in Firefox right now, and that's taking up 600MB. So, about 24MB/page. Seems a bit high, but considering the number of images and amount of JS, not so high that I'm panicking about it.


Try to think about this from an optimizer's standpoint. Here are some excellent reasons why giant memory footprints are a good thing:

1) Freeing memory is expensive. In order to do it properly in a running process, you need to check that each piece of memory is not in use, and that typically requires traversing large graphs. It's only something you want to do when absolutely necessary. (Hint: it's not necessary when you have 16 GB of RAM.)

2) RAM is really fast. Things in memory can be read in a few CPU clock cycles. On disk, you need to wait for a plate of metal moving at 50 mph to swing by.

3) It's totally free! It's not like unused RAM can enter power-saving mode or anything. If your RAM usage isn't at 100% you are wasting valuable space. This isn't a checkout lane where lower usage is a win for everyone. It's like an everlasting candy fountain: it all rots if you leave it, so get the whole neighborhood in for a piece.

So honestly, what are you complaining about? So chrome wants to cache hour-old pictures just in case you hit the back button 45 times. What's the big deal? YOU HAVE 6,600,000,000 FREAKIN' BYTES LEFT! You could probably fit Project Gutenberg in there. If and when you run out of memory, and pages take 15 seconds to load because your hard drive is swapping like crazy, then you have a problem. But you won't have that problem. And just because chrome is taking up 3 GB at the moment doesn't mean it won't be a nice citizen when the operating system starts to cry about low memory, which it won't do because YOU HAVE 6 MORE GIGABYTES.


Certainly all true points. However, for the people without the luxury of 16GB (I'm running on an old machine, with 3gb of RAM), Chrome consistently freezes the computer due to the resource hogging. I've learned my lesson and switched to Firefox.


"I've learned my lesson and switched to Firefox."

Good luck with that.


Doesn't 1) contradict 3)? If freeing memory is expensive, and memory is full, then there is certainly a cost to having full memory.


Instead of complaining that your RAM is 75% allocated, you should be complaining that 25% of it is unused. (Imagine buying a processor and noting that it never went above 75% utilization.) Memory deallocation isn't free (it's often not even cheap), and it's completely wasted work if the application is closed. In addition, for every allocation that's not unnecessarily deallocated, you don't waste cycles redrawing bitmaps, reparsing text, redecompressing images, and so on.


While I agree in general, I think you're pointing at the wrong target here. It's not that "web programmers are doing something that's definitely not cool for our current RAM budget". The JavaScript is loaded and probably JITted to a smaller space than its text (depending on the number of paths, etc. - I'm ignoring lots of stuff here), and the actual text passing over the wire is also pretty small. So where's the memory going? All the supporting stuff...

There's the whole JavaScript framework with its own allocator and memory pools, which will stay around for a long time. Some of the browser elements use the same kind of VM to actually display the UI to you, adding some more memory usage. Then there's the whole page, which had to be read, parsed into a tree, and processed to create the appropriate model for display (all stages are preserved in an editable way). This has to be a live model the whole time, since JavaScript might need to interact with it - you can't just flatten it into a screenshot and display that.

Now the images - they also need to be loaded and decompressed for display. The text isn't simple either: there's a whole engine behind loading fonts and analysing each letter / pair / triple / ... for special spacing rules. There are multiple glyphs, font variants, sizes, etc. to rasterise. That text needs to be distributed properly inside the DOM model generated above, and the text flow actually affects the model again - they're linked, so they get processed again (text metrics need to be recomputed on each page resize, which may make some elements longer or shorter). Then the theme goes on top of this - since the application displays itself, the needed libraries and theme elements get loaded too. But of course not all browsers use the OS theme for displaying form elements, so another layer needs to be loaded here. ... I could go on forever about the elements needed to render a page.

So yes - lots of that could be much lighter. Lots of that could be made into modules or pulled some levels higher to make them more general. Lots of that might just be the result of sloppy coding no one ever got around to fixing.

But just remember that what you're actually using is a system on top of a system on top of a system on top of... If you used gopher on a real 80x25 terminal, you'd use kilobytes. If you used the first web browser, with no image support and no scripting, you'd be under a megabyte. But you don't - you're looking at a realtime rendering of 10 different models coming together and interacting with each other with next to no lag. And it all probably renders with hardware support to give you a very smooth scrolling experience. So no, web developers themselves have almost nothing to do with this whole mess. Unless they made a silly mistake and are leaking memory like crazy... they're not the ones to look at first.

(I simplified in many places, not all facts are relevant to every OS/browser combination, etc. just trying to make a point here)


Shouldn't it be the responsibility of the web dev to realize this and work to minimize it? In a way, it's not much different from building on top of any VM. And here it definitely impacts the UX. I've certainly closed tabs from sites that I know are bogging my system down.


Yes, it should be, but in reality the UI usually sees the least attention for this sort of optimization. For one, it tends to be worked on by people who aren't necessarily programmers first - more designers who do some JavaScripting. Also, companies are much more concerned with not wasting memory on their own servers; they figure that if it's just load on a client machine, it's not as important. It's a bad approach, and I will often refuse to use a site if it has bad client-side performance or seems like a memory hog, but that's just the way things are now.


I implied in another comment that my FUD about browser memory came from being wrong by an order of magnitude, in the wrong direction, when applying my notion of memory footprint to web browsing.

You and others just filled that gap, thank you, I feel less stupid now.


Your reasoning is technically flawed.

That aside, I recently hired the cheapest VPS I could find: 128MB RAM. I could not run apt-get; I had to upgrade to 256MB.


Like I said I'm in no way a browser or web expert. I only have a notion of what could potentially fit in a certain amount of memory, and when I look at webpages I'm at least an order of magnitude off, in the wrong direction. This is why I'm worried. I sure hope I'm completely wrong!


had to blog: http://williamedwardscoder.tumblr.com/post/16634123844/128mb...

To be honest, I don't really address where your 3GB for Chrome is going. All those memory figures you see should be taken with a pinch of salt. Reserving address space, using it, and so on are not so straightforward to untangle.

Your OS doesn't offer anything like http://williamedwardscoder.tumblr.com/post/13363076806/buffc... sadly.


Similar to your caching FS trick, it would be cool if Cleancache was exposed to the userspace somehow. (http://lwn.net/Articles/386090/)


Yes, the official blessing of a mechanism for caching would open the door to browsers using it.

I like file semantics because they give us access control and an API that is straightforward to use.


To get better performance, the operating system will use as much of the RAM as possible. Why use only 4GB when you have 16GB? Cache as much of it as possible. Remember locality from CS and how it relates to caching; this is exactly why your OS does this.


Exactly, OP does not understand the role RAM plays in computers.

It is a disappointment for me to see the link get so many points in this community, where you would expect the baseline knowledge of technology to be above average.


surely op knows what he's talking about, he even wrote his own data structure. it must be the moores' law. i read on wikipedia it uses too much memory...


It isn't a problem with Twitter or Gmail; it is a bug in Chrome.

Chrome leaks memory. Check memory usage again after force-quitting Chrome and relaunching it. I have to restart Chrome daily to regain 3-5 gigs of swap.


I hate these kinds of complaints about RAM usage. OS developers and browser developers are smarter than you give them credit for. They will do garbage collection when memory becomes tight. You are only slowing yourself down by clearing RAM you might need later while you still have free RAM.

Browsers have sub-second back buttons; how much pre-rendered RAM space do you think that takes up?


> Browsers have sub-second back buttons; how much pre-rendered RAM space do you think that takes up?

I'm sure the os & browser devs are all good and great. I'm sure the web devs are all fine and accurate; that the subject is a complex one and we're already all doing the best we can.

But that's not my point: I'm worried that one day we'll run out of RAM because we're designing our webapps as if client RAM were no longer a finite resource; we simply forgot it even existed in the first place.


Pretty sure the browsers follow a similar approach to Windows: if the memory is there, it gets filled on the off chance that it shaves a few ms off somewhere because the needed item is already in memory.

I think wiskers is right though: It depends on the box. On my PC (2gigs) the browsers rarely make it to 1gig.


It's one thing for the OS to be making this sort of decision, since it has a global view of resource needs; a single application does not.

Actually, I don't even get the sense that Chrome has an adequate view of its own resource needs. My wife often finds her browser and computer unresponsive because Chrome is using so much memory for background tabs.


What stack are you guys running?

I'm running Ubuntu 11.10 with the Awesome WM; I have Chrome open with 11 tabs, plus Thunderbird, Pidgin, Rhythmbox, and a number of terminals.

I'm using 1 gig of RAM according to htop.


Here: Debian Sid with Awesome, Firefox with 12 tabs right now (but open for days), a bunch of terminals, MySQL, mpd, and a torrent client: 473MB of actually used memory.


What are you panicking about? A gig of RAM costs less than a fast food meal nowadays.

EDIT: and the cost of the programmer hours required to save a gig of RAM is orders of magnitude more.



