
How to fill 90% of free memory? - yiedyie
http://unix.stackexchange.com/questions/99334/how-to-fill-90-of-the-free-memory
======
mikeryan
My first "tech job" was doing support in the late 90s for DOS games at EA. At
that point most folks had computers with 8 or 16MB of RAM. One of the old
Jane's Flight simulator games had a check for minimum RAM requirements which
for some reason would overflow at 128MB and say you didn't meet minimum memory
requirements. So these folks would spend around $1000 on memory to build
premium machines with 128MB and then get these old games which would say they
couldn't play the game due to insufficient memory. The fix was to create a RAM
drive, which would carve off part of the available RAM for storage and leave a
limited amount of available RAM to run the game, which would pass the memory
validation.

I believe at one point someone was able to actually install a game into the
RAM Drive and play it on the left over available RAM but it required a game
that could be installed and played without a reboot.

[http://en.wikipedia.org/wiki/RAM_drive](http://en.wikipedia.org/wiki/RAM_drive)

~~~
sampk
Dick move to refuse to run the game at all; just show a notice and have a
"Continue" button.

~~~
Wingman4l7
Well, maybe a message more along the lines of "Hey, this game isn't built to
run on so little RAM, you're going to have a terrible gaming experience, so
don't you dare blame us and tell people our game sucks."

------
jve
Infinite loop: HN links to this question, StackExchange links to HN for an
answer.

~~~
dhughes
It's almost as fun when you ask a question on a forum and maybe figure it out
or not, but months or years later run into the same problem, Google it, and
find your own question and answer as a top search result.

------
ck2
Just fill /dev/shm via dd or similar.

    
    
        dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
    
    

(option #2 is to limit the amount of ram available to the kernel via
grub.conf, see my comment below)

~~~
levosmetalo
But would that really help for "low resource" testing, since the OS will just
swap out those unused filled zeroes to free up space for actually running
programs?

Maybe creating a VM with just as much memory as the target system would be a
better solution to get more predictable results?

~~~
ck2
There is a way from grub to limit the amount of memory the kernel boots with.

Probably easiest way if you want low memory without fiddling.

    
    
        mem=1G
    
        mem=512M
    

etc.

~~~
pilif
that reminds me of that day when I set

    
    
        mem = 512
    

before rebooting. Ever since then, I know why my primary school teacher always
got pissed when we left off the units :-)

On a related note: Yes. The kernel needs more than 512 bytes of RAM to boot -
even back in the 2.2 days (when this took place)

------
WestCoastJustin
I recently recorded a screencast [1] about Linux cgroups and how you can
restrict/shape various program resources. For one of the examples, I wrote the
following C program to take 50 MB of memory; at around the 15:45 mark in the
screencast you can see it in action. It could easily be modified to add a
sleep and hold the memory for a while. You would most likely need to disable
swap if you wanted to use 90% of the free memory: run "free -m; swapoff -a;
free -m" as root, then use "swapon -a" to enable it again.

    
    
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      
      int main(void) {
    
          int i;
          char *p;
    
          /* intro message */
          printf("Starting ...\n");
    
          /* loop 50 times, try and consume 50 MB of memory */
          for (i = 0; i < 50; ++i) {
    
              /* failure to allocate memory? */
              if ((p = malloc(1<<20)) == NULL) {
                  printf("Malloc failed at %d MB\n", i);
                  return 0;
              }
    
              /* take memory and tell user where we are at */
              memset(p, 0, (1<<20));
              printf("Allocated %d to %d MB\n", i, i+1);
    
          }
    
          /* exit message and return */
          printf("Done!\n");
          return 0;
    
      }
    

[1] [http://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups](http://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups)

------
takefive
Or you could use 'stress':

[http://linux.die.net/man/1/stress](http://linux.die.net/man/1/stress)

------
Morgawr
Just run several instances of Firefox for a couple of days non-stop.

~~~
josteink
I thought that was Chrome. I've had Chrome consume 512MB per tab for 15+ tabs.

I have 32GB ram so I can cope, but calling out Firefox as the worst offender
in class is a wee bit rich.

~~~
broodbucket
They're both ridiculous. I know browsers are really feature-intensive
nowadays, but really, 500 MB for a few tabs?

I think the story is that Firefox uses less, but because Chrome has an
individual process for each tab they don't all die at once, and Chrome has a
nicer about:memory page.

I've tried using lighter browsers like Midori but can't get away from having
10+ extensions for various things.

------
guerrilla
I'm genuinely curious, why is this being posted to HN? This seems like
something any good systems programmer (i.e. C on UNIX) would know and I'm sure
there are plenty of people like that on StackExchange.

~~~
michaelhoffman
Because there are a lot of people on Hacker News who aren't good systems
programmers.

I imagine most StackExchange posts here are because people might be interested
in the answers rather than know them already.

~~~
guerrilla
I'm not against it per se, I just thought that's what StackExchange was for. I
like the idea that it might have helped contribute a better answer, but I just
thought it was really odd to see on HN when I already get alerts from SE on
things I may be interested in.

------
chollida1
I answered a very similar question on stackoverflow here:

[http://stackoverflow.com/q/1229241/25981](http://stackoverflow.com/q/1229241/25981)

In this case the user wanted the program to run out of memory and the best
solution I could come up with was to use ulimit, to limit the amount of memory
available to the process.

------
raverbashing
Option 1: tmpfs or another memory backed fs

Option 2: Quick C program, but one gotcha: make sure you touch every page
after allocating (and keep touching them, otherwise they will be swapped out)

(Also turning off or limiting swap space may be helpful)

~~~
pritambaral
That is why POSIX provides the mlock() and mlockall() system calls, to prevent
memory pages from being swapped out.

~~~
bnegreve
Right, but Linux won't back an allocation with physical pages until you
actually read or write them. So if you malloc() 1GB of memory without
reading/writing in it, that will use hardly any physical memory. (mlock(), on
the other hand, does fault the locked pages in.)

~~~
pritambaral
Will writing just one byte one time suffice? Genuinely unaware and curious.

~~~
bnegreve
If you only touch one byte, the system will only allocate one memory page. A
memory page is typically 1024 KB so that wouldn't suffice.

~~~
nkurz
On Linux, 4K is still a much more common page size. Most "Huge Pages" ("Large
Pages" in Windows speak) are 2 or 4 MB, and have been available since 2.6, but
I don't think they are widely used yet. x86_64 also supports 1GB pages, but
these are even less frequently used.

[http://lwn.net/Articles/374424/](http://lwn.net/Articles/374424/)

~~~
bnegreve
Yes 4K is default page size. Sorry.

------
Theodores
Virtualbox running a Windows VM seems pretty good at clinging onto memory and
not swapping out. You also get a nice little graphical slider to determine how
much memory is allocated to a given VM.

------
allanb
Zeno's little known memory paradox?

------
memracom
Nowadays I would just run Linux in a VirtualBox configured with the amount of
RAM that I wanted to simulate. I've done the same thing with CPU cores to
compare performance with 1, 2, 4 and 8 cores. Of course I run VirtualBox on a
16-core server...

------
rplacd
I wonder whether methods that rely on zeroing or /dev/zero would be staved off
for longer on Mavericks - perhaps it would compress the recurring patterns
that result?

------
singular
Would a malloc (alone) work? Doesn't it typically act as if the memory were
allocated but not actually use physical RAM until the data is filled?

~~~
0x0
I'm pretty sure that even if you malloc an enormous amount of memory, it will
occupy close to no resources as long as the contents are not touched. Also
related: "overcommit memory".

~~~
e12e
Linux is a bit strange when it comes to memory allocation, see eg:

[http://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6](http://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6)

------
escaped_hn
A Java hello-world application should do the trick.

