Hacker News
Ask HN: Keep binaries in system memory never removed till manually done so
14 points by dogol 3 months ago | 23 comments
How can we make a Linux OS keep some small executable binaries resident in a system memory location, never evicting them until manually instructed to, in order to avoid the inefficiency of repeatedly reading and loading a binary from storage and then unloading it again?



https://en.wikipedia.org/wiki/Sticky_bit

> superuser could tag these files to be retained in main memory, even when their need ends, to minimize swapping that would occur when another need arises, and the file now has to be reloaded from relatively slow secondary memory. This function has become obsolete due to swapping optimization.

... "the Linux kernel ignores the sticky bit on files."


> This function has become obsolete due to swapping optimization.


Yeah, this has been a thing for as long as computers have existed.

I had a lock feature in a distributed-client cache back in the day. You could lock in a tool, file, etc. that you used frequently, so performance would stay stable over some long operation, e.g. a build.

I always wondered if people were really any better at choosing than my cache algorithm. Hard to say. Can't really take statistics, because you can't measure the pattern they didn't use!


Even further back in the day, on systems with ferrite core memory, the contents of memory were retained even when the machine was powered down. So if you could toggle some bootstrap code into core, leave it there, and make sure nothing overwrote it, you wouldn't have to toggle the bootstrap in next time.


vmtouch may be helpful: https://hoytech.com/vmtouch/

Just lock the file into memory using:

```vmtouch -l /path/to/binary```


I think you could make a program to mmap() all the files you want and then call mlockall() and never exit.


A bit clunky, but can you mount a RAM disk and then put your executables in there? This would at least eliminate the disk I/O.


I thought only disabling the swap partition would guarantee no subsequent disk I/O.


Yup, I'm pretty sure that Linux will swap out tmpfs pages if the files are not used. RAM is just a big disk cache if you have swap.


This is probably the easiest way.

You could also write a small C program that mmap()s and mlock()s all the executables and shared objects you need and then stays around in the background.


The easiest way is to load the whole OS into RAM and run it from there, with no swap at all. That's absolutely possible with unbloated distros, especially on current hardware; even 8 GB is sufficient, and only 4 GB begins to feel uncomfortable if more than a few applications are open. OTOH, if 'the browser' is the application, 4 GB is still sufficient for about 40 tabs, so...

shrug?


/dev/shm is convenient, if available


Do you know that you actually need this? Have you measured how much time you're losing to this process?


Seems like it could be useful for a remote access trojan


The filesystem already has a cache


This. Today, every modern OS is pretty good at automatically figuring out which files (e.g. binaries) get used most often and will cache them in RAM. Having to manually specify which ones to cache in RAM wastes time and resources.


Any real-world experience to back that up? E.g., are there Linux tools that can trace/benchmark this, so as to prove it?



Best to write a launcher that simply forks an existing process that is in the right initial state. Other folks mentioned caching the ELF file, but that will be slower than cloning an already-loaded executable into a new process.


Unless you manually lock the executable in memory, it will be evicted from RAM just like any other file on the filesystem. When you start the usually-idle launcher, it will load everything from disk (well, load what it needs; the new process then loads the rest as it starts up). I doubt the performance difference is significant unless your process does a lot of work at startup, and you lose out on ASLR by always forking the same memory layout.


On Windows I believe one can call an API like VirtualLock to prevent swap/eviction from RAM. Should be possible on Linux too?

Yeah, it'd only really be useful if one were running some type of web service that called an executable, with no source code available to do FastCGI or whatnot.


If the binaries are small, then to what extent do the inefficiencies of read/load actually affect the workload?

Modern kernels are probably smart enough to already handle this for repetitive behavior.

Of course you could always profile things to determine what’s happening and whether any of the suggestions actually improve anything.


tmpfs?



