Reducing runtime memory in Windows 8 (msdn.com)
84 points by ghurlman 929 days ago | comments


emp_ 929 days ago | link

Relevant: http://www.youtube.com/watch?v=lgU9cjUtrD4 Windows 8 running on 128MB RAM.

-----

nextparadigms 928 days ago | link

"(loading time ~2-3h)."

So, pointless demo. It was probably running everything from the HDD.

-----

blntechie 928 days ago | link

I'm not sure whether he meant the boot time here. Maybe he meant the Windows 8 installation time. I could be wrong.

-----

sliverstorm 928 days ago | link

S3 state?

-----

mcastner 929 days ago | link

I'm starting to believe that memory usage may be a red herring in modern operating systems. Memory prices have been crashing; every day on Slickdeals I see 8GB of notebook (and netbook) memory for less than $30. Is this a problem worth solving anymore?

-----

zokier 928 days ago | link

Memory usage is reflected in general performance, even when the whole system fits into RAM, because RAM doesn't have infinite bandwidth. Larger memory usage -> more traffic between CPU and RAM. And cache effects make the issue even more important. Modern systems may have gigabytes of RAM, but still only a couple of megabytes of CPU cache.
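
You can see this yourself with a toy benchmark (my own sketch, nothing from the article; the buffer sizes and timing method are arbitrary choices):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Walk a buffer one cache line at a time. Once the buffer outgrows
       the CPU caches, the same number of accesses takes noticeably
       longer, even though everything is "in RAM" either way. */
    static double walk(volatile char *buf, size_t size, size_t accesses)
    {
        clock_t start = clock();
        for (size_t i = 0; i < accesses; i++)
            buf[(i * 64) % size] += 1;   /* 64-byte stride */
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        size_t accesses = 100 * 1000 * 1000;
        /* 256 KiB fits in L2 on most CPUs; 256 MiB fits in no cache. */
        size_t sizes[] = { 256 * 1024, 256 * 1024 * 1024 };
        for (int i = 0; i < 2; i++) {
            char *buf = calloc(sizes[i], 1);
            if (!buf) return 1;
            printf("%9zu bytes: %.2f s\n", sizes[i],
                   walk(buf, sizes[i], accesses));
            free(buf);
        }
        return 0;
    }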

-----

dlikhten 928 days ago | link

Memory usage is pretty significant. All those tens of megabytes add up. Remember: 50MB less for the OS = 50MB more for your running app.

Bigger issue: the low-priority memory. AVs are a big memory hog and cause problems; this will mitigate that. MS is acknowledging that AVs are a necessary evil and that they're not the only ones that do this, and this allows you to just write friendly programs.
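
If I'm reading the Windows 8 docs right, a process can opt into this itself; a minimal sketch (assuming the SetProcessInformation / MEMORY_PRIORITY_LOW additions work as documented):

    #define _WIN32_WINNT 0x0602   /* Windows 8 */
    #include <windows.h>
    #include <stdio.h>

    /* Mark this process's pages as low priority, so under memory
       pressure they are evicted before other processes' pages. */
    int main(void)
    {
        MEMORY_PRIORITY_INFORMATION mpi;
        mpi.MemoryPriority = MEMORY_PRIORITY_LOW;

        if (!SetProcessInformation(GetCurrentProcess(),
                                   ProcessMemoryPriority,
                                   &mpi, sizeof(mpi))) {
            fprintf(stderr, "SetProcessInformation failed: %lu\n",
                    GetLastError());
            return 1;
        }
        /* ... do background work whose pages are cheap to drop ... */
        return 0;
    }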

-----

magic_haze 928 days ago | link

> Larger memory usage -> more traffic between CPU and RAM.

Well, with less memory, the access would go CPU->HDD instead, which is orders of magnitude slower, so you'd expect more RAM to improve general performance.

-----

sliverstorm 928 days ago | link

Yes, more RAM improves performance, but using less RAM in your program also improves performance independently of RAM volume. As an extreme example, a program that fits in L2 will flay the (excuse my French) living sh*t out of a 2GB program loaded in RAM.

-----

mcastner 929 days ago | link

Oh, I should've read the whole article first... apparently reduced memory usage leads to longer battery life, which is a worthy goal.

-----

binarycrusader 928 days ago | link

Right. CPUs and other components of the system now use relatively little power in many cases, or at least far less than they used to.

However, RAM remains one of the largest power consumers, and so reducing memory usage also reduces power usage, which improves battery life.

Tom's hardware has a great article on this here: http://www.tomshardware.com/reviews/lovo-ddr3-power,2650-2.h...

Note in particular the figures at the bottom of the article page.

-----

Someone 928 days ago | link

When I read that, I wondered how long we will keep using DRAM in mobile devices. If Moore's law holds, we will easily be able to put 10GB of DRAM in a phone in ten years. I think having 1 GB of static RAM might be preferable.

-----

adgar 928 days ago | link

If 1 GB of static RAM were even remotely cheap enough and dense enough, our processors would have more than a few megs of combined L1-L3 cache. Especially since high-end processors already cost hundreds of dollars.

-----

keeperofdakeys 928 days ago | link

CPUs have only a few megs of L1-L3 cache because any more would slow them down. Making the caches larger would increase the distance a signal has to travel, meaning more latency. At the clock speeds of modern processors, this really does matter.

The best possibility would be fully realising the NUMA architecture and giving each core a dedicated stack of SRAM or DRAM on the order of 1GB (these would have to be off-die, though).

-----

sliverstorm 928 days ago | link

Yes, that is the thing about SRAM. It is not 10x more expensive than DRAM. Oh no. No, no, no, no, no. If it were to ever fall to only 10x more, it would be like the 2nd coming of Memory.

On modern CPUs, half or more of the silicon is used to afford 4-16MB of L3 cache. A CPU die is not much smaller than a DRAM chip, and a 1GB DRAM chip is less than $10 these days, judging by the prices of 16GB, 16-chip sticks of DRAM.
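
Rough arithmetic (my own ballpark figures, not from the article): transistor count understates the gap. A DRAM cell is one transistor plus a capacitor built vertically, around 6F² of die area; a 6T SRAM cell runs roughly 120-150F². That's ~20-25x per bit before you count the tag arrays and peripheral logic a cache needs, and CPU-grade logic wafers cost more per mm² than commodity DRAM wafers.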

-----

Someone 928 days ago | link

I am not saying your conclusion is wrong, but:

- CPU caches are wired in a much more complex way than SRAM memory modules would need to be. A cache may be shared between CPUs, and being n-way set-associative surely requires silicon, too.

- if the ratio is way more than 10, why, then, do I find zillions of references stating that a) DRAM needs a transistor per bit, and b) SRAM can be built with 6 transistors per bit?

-----

sliverstorm 928 days ago | link

Besides the direct performance issues already discussed, it is my opinion that:

1) A programmer who writes something of decent size and ceases to concern themselves with memory entirely will produce code that bloats unnecessarily for the life of the software. At least some attention to memory is necessary to keep usage reined in. You don't have to fight over KiB, but think about it. Memory seems to be a resource you could use 5% of, but without proper attention you rapidly wind up consuming 100% of it.

2) A programmer who disposes of the idea of using memory efficiently has probably discarded the idea of algorithmic efficiency altogether. Pursuing memory optimization is a decent proxy for all forms of optimization.

[/not a programmer by trade]

-----

sid0 928 days ago | link

> 2) A programmer who disposes of the idea of using memory efficiently has probably discarded the idea of algorithmic efficiency altogether.

Space and time very often have to be traded off against each other in algorithms.

-----

sliverstorm 928 days ago | link

For all the negative reputation Windows has acquired, it really seems like the architects of 7 & 8 are taking this seriously and doing some real Computer Science. This is some really good stuff.

-----

gburt 928 days ago | link

This seems largely misguided. Stuff like merging duplicate memory pages (such as those reserved for future use by applications) seems to question programmer judgment in the name of saving available memory.

If the memory is available, you'd do better to use it, no?

-----

singh 928 days ago | link

How is this misguided?

> If the application tries to write to the memory in future, Windows will give it a private copy

It's textbook copy-on-write. To me it seems no judgment is being made about the programmer.

>If the memory is available, you'd do better to use it, no?

I don't understand... if I have 2 GB of RAM, I should always use all of it? The case being made in the article is to minimize memory consumption to increase battery life - something that will be crucial on tablets, I assume.
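
For anyone unfamiliar with the mechanism, copy-on-write is easy to demonstrate with fork() on any POSIX system (a generic illustration, not Windows' actual memory-combining code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* After fork(), parent and child share physical pages copy-on-write.
       The child's write triggers a private copy of just that page; the
       parent's view is untouched. Windows' memory combining applies the
       same trick to pages that merely happen to have identical contents. */
    int main(void)
    {
        char *page = malloc(4096);
        strcpy(page, "original");

        pid_t pid = fork();
        if (pid == 0) {                  /* child */
            strcpy(page, "modified");    /* kernel copies the page here */
            printf("child sees:  %s\n", page);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent sees: %s\n", page);   /* still "original" */
        return 0;
    }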

-----

dorian-graph 928 days ago | link

It seems like everyone and his grandma know better than Microsoft engineers.

-----

wmf 928 days ago | link

> ...question programmer judgment in the name of saving available memory.

These are Windows programmers we're talking about. The culture doesn't seem to encourage fastidiousness.

> If the memory is available, you'd do better to use it, no?

Yes, but what if you run out later? Should you do the dedupe when some other process is blocked trying to allocate memory, or should you do it earlier?

-----

acqq 929 days ago | link

Interestingly, they don't mention their latest incarnation of "DLL hell": the multiple versions of the same libraries used by different pieces of their own system.

http://blogs.technet.com/b/askcore/archive/2008/09/17/what-i...

If they use DLL version x for component A and version y for component B, they've raised memory usage merely for the convenience of not updating one of the components to use the latest version of the DLL.

-----

jeffreymcmanus 929 days ago | link

That's not "DLL Hell," it's an attempt to alleviate it.

-----

wanorris 928 days ago | link

And it's a really good one. It's easy to ensure that all the system software, or all of a particular package (e.g. Office), uses the same version of a DLL, but approximately impossible to ensure that all third-party software is always refreshed to use precisely the version that's loaded on a particular machine (where that version can vary based on Windows release, service pack deployment, etc.).

This is especially a problem in enterprise deployments where there may be a variety of spottily maintained internal and third-party applications loaded on a machine. Having to choose between refreshing every single one of them or throwing the unrefreshed ones out is an impractical choice. Thus DLL versioning.

You can avoid the resulting problems either by maintaining your apps so that they all use the same build of a DLL, or simply by running as few apps as possible to minimize library loading in general, duplicate or otherwise.

-----

barista 929 days ago | link

The article talks about size on disk, not how it gets loaded into memory.

-----

sid0 928 days ago | link

Well, multiple versions of code being loaded means there are fewer common pages to share across processes. However, I think it's the right tradeoff to make.

-----



