How can we make Linux keep some small executable binaries resident in memory, never evicting them until manually instructed to, in order to avoid the inefficiency of repeatedly reading and loading a binary from storage, then closing and unloading it again?
> superuser could tag these files to be retained in main memory, even when their need ends, to minimize swapping that would occur when another need arises, and the file now has to be reloaded from relatively slow secondary memory. This function has become obsolete due to swapping optimization.
.... "the Linux kernel ignores the sticky bit on files."
I had a lock feature in a distributed-client cache back in the day. You could lock in some tool, file, etc. that you used frequently, so performance would stay stable over a long operation, e.g. a build.
I always wondered if people were really any better at choosing than my cache algorithm. Hard to say. You can't really gather statistics, because you can't measure the access pattern they didn't use!
Even further back in the day, on systems with ferrite core memory, the contents of memory were retained even when the machine was powered down. So if you toggled some bootstrap code into core, left it there, and made sure nothing overwrote it, you wouldn't have to toggle the bootstrap in next time.
You could also write a small C program that mmap()s and mlock()s all the executables and shared objects you need and then stays around in the background.
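A minimal sketch of such a pinner, assuming the files to keep resident are passed on the command line (note that locking more than a small amount of memory typically requires CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK):

```c
#define _GNU_SOURCE             /* for MAP_POPULATE */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        int fd = open(argv[i], O_RDONLY);
        if (fd < 0) { perror(argv[i]); continue; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); continue; }

        /* Map the whole file; MAP_POPULATE pre-faults it from disk now
         * rather than on first access. */
        void *p = mmap(NULL, st.st_size, PROT_READ,
                       MAP_PRIVATE | MAP_POPULATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); continue; }

        /* Pin the mapped pages so the kernel will not evict them. */
        if (mlock(p, st.st_size) < 0)
            perror("mlock");
        else
            printf("locked %s (%lld bytes)\n", argv[i], (long long)st.st_size);

        close(fd);              /* the mapping outlives the descriptor */
    }

    pause();                    /* stay resident; the locks die with this process */
    return 0;
}
```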
The easiest way is to load the whole OS into RAM and run it from there, with no swap at all. That is absolutely possible with unbloated distros, especially on current hardware: even 8 GB is plenty, and only at 4 GB does it begin to feel uncomfortable with more than a few applications open. OTOH, if 'the browser' is the application, 4 GB is still sufficient for about 40 tabs, so...
This. Today, every modern OS is pretty good at automatically figuring out which files (e.g. binaries) get used most often and will cache them in RAM. Having to manually specify which ones to cache in RAM wastes time and resources.
Best to write a launcher that simply forks an existing process that is already in the right initial state. Other folks mention caching the ELF file, but that will be slower than cloning the already-loaded executable into a new process.
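Roughly the "zygote" pattern, sketched below under some assumptions: the parent pays the startup cost once, then forks a pre-initialized child whenever it receives SIGUSR1. do_expensive_init() and do_work() are hypothetical placeholders for the real program's startup and task.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t launch_requested = 0;

static void on_sigusr1(int sig) { (void)sig; launch_requested = 1; }
static void on_sigchld(int sig) { (void)sig; while (waitpid(-1, NULL, WNOHANG) > 0) {} }

/* Hypothetical placeholders for the real program's slow startup and task. */
static void do_expensive_init(void) { /* load libraries, parse config, warm caches */ }
static void do_work(void)           { /* whatever a launched instance actually does */ }

int main(void)
{
    do_expensive_init();        /* paid once; shared with every fork via copy-on-write */

    signal(SIGUSR1, on_sigusr1);
    signal(SIGCHLD, on_sigchld);
    printf("zygote ready: kill -USR1 %d to launch an instance\n", (int)getpid());

    for (;;) {
        pause();                /* sleep until a signal arrives */
        if (launch_requested) {
            launch_requested = 0;
            if (fork() == 0) {  /* child starts already initialized */
                do_work();
                _exit(0);
            }
        }
    }
}
```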
Unless you manually lock the executable in memory, it will be evicted from RAM just like any other file on the filesystem. When you start the usually-idle launcher back up, it will load everything from disk (well, load what it needs; the new process will page in the rest as it starts). I doubt the performance difference is significant unless your process does a lot of work at startup, and you'll lose out on ASLR by always forking the same memory layout.
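You can actually watch that eviction happen: mincore() reports which pages of a mapped file are currently resident in the page cache without faulting anything in (this is essentially what the vmtouch tool does). A quick sketch:

```c
#define _DEFAULT_SOURCE         /* for mincore */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map without MAP_POPULATE so the mapping itself pulls nothing in. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long pagesz = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + pagesz - 1) / pagesz;
    unsigned char vec[pages];   /* low bit set => page resident; stack is fine for small files */
    if (mincore(p, st.st_size, vec) < 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < pages; i++)
        resident += vec[i] & 1;
    printf("%zu of %zu pages of %s are resident\n", resident, pages, argv[1]);
    return 0;
}
```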
On Windows I believe one can call an API like VirtualLock to prevent pages from being swapped or evicted from RAM. Should be possible on Linux too?
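It is: the Linux counterparts are mlock() for a specific range and mlockall() for the entire process. For example, pinning everything a process has mapped (and will map), assuming it has the needed privileges:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Pin every page this process currently has mapped, and every page it
     * maps in the future. Needs CAP_IPC_LOCK or enough RLIMIT_MEMLOCK. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }
    puts("all pages pinned; nothing below here can be swapped out");
    /* ... latency-sensitive work ... */
    return 0;
}
```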
Yeah, it'd only really be useful if one were running some type of web service that calls an executable, and there's no source code available to do FastCGI or whatnot.