I'm starting to believe that memory usage may be a red herring in modern operating systems. Memory prices have been crashing, every day on Slickdeals I see 8GB of notebook (and netbook) memory for less than $30. Is this a problem worth solving anymore?
Memory usage is reflected in general performance, even when the whole system fits into RAM, because RAM doesn't have infinite bandwidth. Larger memory usage -> more traffic between CPU and RAM. And cache effects make the issue even more important. Modern systems may have gigabytes of RAM, but still only a couple of megabytes of CPU cache.
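A rough way to see the footprint point: the same million floats stored compactly versus as boxed Python objects. Every pass over the larger representation moves several times the data between CPU and RAM. This is a minimal sketch using CPython-specific sizes, not a claim about any particular OS.

```python
import sys
from array import array

n = 1_000_000
compact = array('d', range(n))           # 8 bytes per element, contiguous
boxed = [float(i) for i in range(n)]     # one pointer + one heap object each

# Total bytes each representation occupies (approximate for the boxed case).
compact_bytes = compact.buffer_info()[1] * compact.itemsize
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)

print(compact_bytes)   # ~8 MB
print(boxed_bytes)     # roughly 4x larger on CPython
```

The compact version also fits far more elements per cache line, which is exactly the cache-effects argument above.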
Memory usage is pretty significant. All those tens of megabytes add up. Remember: 50 MB less used by the OS = 50 MB more for your running app.
Bigger issue: the low-priority memory. AVs are a big memory hog and cause problems, and this will mitigate that. MS is acknowledging that AVs are a necessary evil (and they're not the only offenders), and the feature lets you write programs that are friendlier to the rest of the system.
Yes, more RAM improves performance, but using less RAM in your program also improves performance independently of RAM volume. As an extreme example, a program that fits in L2 will flay the (excuse my french) living sh*t out of a 2GB program loaded in RAM.
When I read that, I wondered how long we will keep using DRAM in mobile devices. If Moore's law holds, we will easily be able to put 10GB of DRAM in a phone in ten years. I think having 1 GB of static RAM might be preferable.
If 1 GB of static ram were even remotely cheap enough and dense enough, our processors would have more than a few megs of L1-3 combined cache. Especially since high-end processors cost hundreds of dollars already.
CPUs have a few megs of L1-3 cache because any more would slow them down. If you made the caches larger, it would increase the distance a signal has to travel, meaning more latency. At the clock speeds of modern processors, this really does matter.
The best possibility would be fully realising the NUMA architecture, and giving each core a stack of dedicated SRAM or DRAM at sizes of 1GB (these would have to be off-die though).
Yes, that is the thing about SRAM. It is not 10x more expensive than DRAM. Oh no. No, no, no, no, no. If it were to ever fall to only 10x more, it would be like the 2nd coming of Memory.
On modern CPUs, HALF or MORE of the silicon is used to afford 4-16MB of L3 cache. A CPU die is not much smaller than a DRAM chip, and a 1GB chip of DRAM is less than $10 these days, judging by the prices of 16GB, 16-chip sticks of DRAM.
Besides the direct performance issues already discussed, it is my opinion that:
1) A programmer who writes something of decent size and ceases to concern themselves with memory entirely will write code that continues to bloat unnecessarily for the life of the software. At least some attention to memory is necessary to keep usage reined in. You don't have to fight for KiB, but think about it. It seems to be a resource you could use 5% of, but without proper attention you rapidly wind up consuming 100% of.
2) A programmer who disposes of the idea of using memory efficiently has probably discarded the idea of algorithmic efficiency in any form whatsoever. Pursuing memory optimization is a decent proxy for all forms of optimization.
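A tiny sketch of point 2: the memory-conscious choice is often the algorithmically sensible one too. Summing a million squares through a materialized list versus a generator gives the same answer, but only one of them holds a million objects alive at once. (Illustrative example, not from the article.)

```python
import sys

n = 1_000_000
as_list = [i * i for i in range(n)]      # O(n) memory held at once
as_gen = (i * i for i in range(n))       # O(1) memory, values produced lazily

print(sum(as_list) == sum(as_gen))                      # True: same result
print(sys.getsizeof(as_gen) < sys.getsizeof(as_list))   # True: tiny vs. MBs
```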
This seems largely misguided. Stuff like merging duplicate memory pages (such as those reserved for future use by applications) seems to second-guess programmer judgment just to save available memory.
If the memory is available, you'd do better to use it, no?
> If the application tries to write to the memory in future, Windows will give it a private copy
That's textbook copy-on-write. To me it seems no judgment is being made about the programmer.
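Copy-on-write is easy to sketch: sharers read from one backing store until the first write, at which point the writer quietly gets a private copy. This is a toy model of the mechanism, not any real Windows API; the class and method names are made up.

```python
class CowPage:
    """A 'page' shared between readers until one of them writes."""

    def __init__(self, data):
        self._shared = data        # backing store shared by all holders
        self._private = None       # becomes a private copy on first write

    def read(self, i):
        store = self._private if self._private is not None else self._shared
        return store[i]

    def write(self, i, value):
        if self._private is None:
            # First write: take a private copy so other sharers are unaffected.
            self._private = list(self._shared)
        self._private[i] = value

shared = [0, 0, 0]
a = CowPage(shared)
b = CowPage(shared)

a.write(1, 42)        # a gets its own copy at this moment
print(a.read(1))      # -> 42
print(b.read(1))      # -> 0: b still sees the untouched shared page
```

Until the write happens, both holders cost one page of memory between them, which is exactly the saving the article describes.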
>If the memory is available, you'd do better to use it, no?
I don't understand... if I have 2 GB of RAM, I should always use all of it? The case being made in the article is to minimize memory consumption to increase battery life - something that will be crucial on tablets I assume.
If they use version x of a DLL for component A and version y for component B, they've raised memory usage purely for the convenience of not updating one of the components to use the latest version of the DLL.
And it's a really good one. It's easy to ensure that all the system software, or all of a particular package (e.g. Office), uses the same version of a DLL, but approximately impossible to ensure that all third-party software is always refreshed to use precisely the version that's loaded on a particular machine (where that version can vary based on Windows release, service pack deployment, etc.).
This is especially a problem in enterprise deployments where there may be a variety of spottily maintained internal and third-party applications loaded on a machine. Having to choose between refreshing every single one of them or throwing the unrefreshed ones out is an impractical choice. Thus DLL versioning.
Avoid resulting problems by either maintaining your apps in a way that allows them to all use the same build of a DLL or simply by running as few apps as possible to minimize library loading in general, duplicate or otherwise.