
Preparing for nonvolatile RAM - willvarfar
http://lwn.net/SubscriberLink/498283/0470778058f0ed2c/
======
gouranga
Crazy futurist rant...

Traditional operating systems such as Linux and Windows are 100% dead once
nonvolatile memory arrives in force. Paradigm shift time.

There is pretty much no reason to use filesystem APIs, or a filesystem at all,
any more. You just keep your data in the process address space - it's just not
going to go anywhere. Make a data segment persistent across restarts and
you can survive them. If you want a backup, you just dump the address space.
Screw hard disks as well. I imagine some form of RPC will be in place between
processes so they can talk to each other, and that is it. Lots of small Redis
instances would be a similar concept.

Imagine an MP3 server process that provides persistence in the heap, plus
metadata and decoding services, and you're there.

It'll be like a small internet inside your machine.

Lisp would fit nicely in this world. Imagine a persistent root environment:
load a defun once and it's there forever. Terracotta does something similar
with Java.

Then again I could just be insane.

~~~
ajross
I'll buy the crazy part. The "100% dead" bit loses me right off the bat.
Surely there will be space for hardware abstraction, process models, memory
protection mechanisms, networking, etc... in your futurist OS. And, amusingly,
that code is already there in the OSes of today! Storage management is merely
a big subsystem.

And as for dropping "files", I think that's missing the point. Files are just
pickled streams, and there will always be stream metaphors in computing. How
else do you represent "inputs" to your program (which, remember, still has to
run "from scratch" most of the time, if only on test sets)?

I guess I see this as a much narrower optimization. A computer built out of
perfectly transparent NVRAM storage is merely one with zero-time
suspend/restore and zero suspended power draw. That doesn't sound so paradigm
breaking to me, though it sure would be nice to have.

~~~
Cushman
I think you're assuming a level of opacity to the OS that is true
theoretically, but not realistically. Conceptually we can treat the computer
as a black box which does the same thing, eventually, whether it's using
cache, RAM, HDD, or the network, but realistically the limitations leak out
all over the place and are embedded all over user-facing workflows in the form
of opening, saving, uploading, and such things. They may always be happening
in _some_ sense, but there is no intrinsic need to involve the user in them.

Assuming that NVRAM becomes dense enough to replace storage in practice --
which is a big assumption, but it's happened to tape and is happening to hard
drives right now -- concepts like launching a program, opening and closing a
file, even _booting_ will become mostly academic. Certainly they'll be of no
crucial interest to users, to whom the distinction between what something _is_
and what it _does_ has never made that much sense.

Sure you _could_ apply all the same abstractions over the top, but if you were
designing your OS from a blank slate, why on Earth would you? And it will only
be a matter of time before one of those blank-slate OSes is compellingly
superior enough to the old-school paradigm, and users will start switching en
masse.

~~~
ajross
Fine, fine. Let's just say that the last "blank-slate" OS to achieve
commercial success did so, what, 35 years ago? If I'm assuming too much
transparency in
the NVRAM technology (and honestly, I don't think I am -- DRAM is hardly
transparent already, c.f. three levels of cache on the die of modern CPUs),
then you're assuming far more agility in the OS choice than is realistic in
the market.

~~~
snprbob86
I think that OS choice agility is increasing rapidly. Consider the number of
people whose primary computers are mobile phones (replaced every 1 to 3 years)
and whose secondary computers are glorified web browsers. This is rapidly
becoming true for businesses as well, as they adopt more web-based tools.

~~~
ajross
All mobile phone OSes are still based on a filesystem. If you want to claim
that the _user's_ perspective of the computer is going to move away from a
"file", then I agree. If you think the underlying software is going to do so
simply because it got no-power-to-maintain memory, I think you're crazy.

~~~
snprbob86
Straw man. I said nothing of my opinion on non-volatile memory. I was only
pointing out that more and more users are less and less tied to any particular
operating system.

~~~
ajross
Uh... the whole subthread was _about_ NVRAM and the likelihood of it replacing
the filesystem with different storage models. You'll have to forgive me for
inferring an opinion about _the subject we were discussing_; I just don't see
how that can be a straw man. It's just what happens when you inject a non
sequitur into an existing discussion.

~~~
snprbob86
Not a non sequitur at all. You wrote "you're assuming far more agility in the
OS choice than is realistic in the market," which was a point in support of
your case about NVRAM. I was merely saying that point was weak because,
realistically, agility in OS choice is increasing within the market.

I actually wrote in defense of the traditional filesystem model in another
post on this thread. Just because I don't agree with your reasoning doesn't
mean I don't agree with your conclusion.

------
guard-of-terra
What you really need is to stick a virtual memory manager on top of NVM
instead of RAM. In this case NVM works like disk and RAM at the same time. In
your filesystem you just have a /ram file which is used as system memory.

Then, fun things start to happen: You no longer need disk buffers, since your
RAM is your disk. And mmap()ing no longer consumes "RAM" because you just map
a part of your file to whatever virtual address you want. You need no swap.
You never have to swap in or out. You never have to sync your disks or track
buffer dirtiness (except at the CPU-cache level), because your buffer is part
of your file: once you write to it, the file is already updated.

Of course, software will have to adapt: prefer mmap()ing a file to read()ing
it; use persistent data structures. The goal is to minimize copy-on-write,
via either read-only access or safe in-place data transformation.
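
The mmap()-first style described above can already be sketched in ordinary
Python; a minimal illustration, where a regular file (`counter.bin`, a
hypothetical name) stands in for the NVM region:

```python
# A sketch of the style described above: map a file into the address
# space and mutate it in place. A store through the mapping *is* the
# update to the "file" -- no separate read()/write()/sync step.
# (Illustrative only: "counter.bin" is a stand-in for an NVM region;
# a real setup would map a DAX device or similar.)
import mmap
import os
import struct

PATH = "counter.bin"

# Ensure the backing store exists and holds one 8-byte counter.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)          # "RAM" and "disk" are the same bytes
    (count,) = struct.unpack("<q", mem[:8])
    mem[:8] = struct.pack("<q", count + 1)  # in-place update: the file is already current
    mem.close()
```

Run it twice and the counter survives the process exiting - the same property
the thread imagines for all of a program's state.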

~~~
carterschonwald
While I look forward to that future, realistically NVM is going to be at
least as pricey as SSDs or volatile RAM, which means the more likely
in-between approach is one that tries to minimize the amortized I/O between
fast NVM (our RAM replacement) and cheap, relatively slow NVM (HDD or SSD).

~~~
guard-of-terra
If it costs what SSDs do, it's pretty simple: you mount NVM as /, and
/var & /srv are still on disk. All your programs lose their start-up time and
most of their swapping.

If it costs more, you can use it as a persistent block cache (one that
survives reboot), but that doesn't make much economic sense, since nobody is
willing to pay just for faster start-up and warm-up after a reboot. If you
have 8G of NVM, you could instead just read 8G off disk in something like 30
seconds and be settled with regular RAM.

------
sweis
Widespread adoption of NVRAM may require a significant change in security
models, since data once assumed ephemeral may be persistent. For example, it
may be trivial to recover cryptographic keys from a running system.
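
As a rough illustration of the risk (not of the poster's solution): if memory
never loses power, secrets linger until overwritten, so one mitigation is to
wipe key material as soon as it is no longer needed. A minimal Python sketch,
with the caveat that interpreters and GCs can make hidden copies (e.g. the
`bytes(key)` call below), so this only sketches the idea:

```python
# Illustration of the concern above: in persistent memory, a key left
# in a buffer is effectively a key written to durable storage. Zero it
# in place once it has served its purpose.
# (Not a complete defense -- Python may copy buffers behind your back.)
import hashlib
import hmac

def sign_then_wipe(key: bytearray, message: bytes) -> bytes:
    """HMAC the message, then zero the caller's key buffer in place."""
    try:
        return hmac.new(bytes(key), message, hashlib.sha256).digest()
    finally:
        for i in range(len(key)):
            key[i] = 0
```

After `sign_then_wipe(key, msg)` returns, `key` is all zero bytes, so a later
dump of the persistent address space no longer contains it.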

(Disclaimer: I'm working on a solution to this.)

~~~
aidenn0
There are already solutions out there for the paranoid who are afraid of
cryogenic (cold-boot) attacks on DRAM.

~~~
sweis
Yes and no. There are some solutions out there, but they are typically not
ready for production, require custom hardware, or make assumptions about
physical security controls.

(Disclaimer: I am biased since I am working on this.)

------
raverbashing
I will only believe this once it's sitting on my desk.

And, for now, it won't change filesystems much, unless you can get a similar
amount of it as a disk (or maybe a compromise: today around 8GB of RAM and
1TB of HD are common, so if you can get around 128GB of NVM it can be your
new 'SSD').

It is, of course, a very important development, and may make things faster.

~~~
sp332
How about 240 GB almost as fast as DDR 200 (PC-1600) RAM?
[http://www.newegg.com/Product/Product.aspx?Item=N82E16820227...](http://www.newegg.com/Product/Product.aspx?Item=N82E16820227744)

Edit: according to reviews it uses compression to boost its bandwidth (but not
capacity). Still seems like a decent tradeoff.

~~~
egillie
Crazy! I was looking into how much it would cost to get 500GB of RAM, looked
like it was going to be around $3500. I hadn't even considered this...

------
oh_sigh
Can anyone tell me why we don't just hook normal RAM up to a small
rechargeable battery so that it can maintain its state during a power loss?
Alongside that, there is an equivalent amount of flash memory. The flash is
never used except when you get within, let's say, 5% battery life, at which
point the entire contents of RAM are dumped to the flash. Then, on system
start-up, if the RAM is still loaded, swell. If there is a RAM image on the
flash, load that and continue as normal.

Isn't this essentially NVRAM? What are the downsides to it?
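
The scheme in the question, sketched as plain control flow (hypothetical
names throughout; real hardware would do this in the memory controller or
firmware, not in application code):

```python
# Sketch of the proposed battery-backed RAM scheme: dump RAM to flash
# when the battery is nearly drained, restore from flash on boot if
# the RAM contents did not survive.

FLASH_DUMP = {}            # stands in for the flash region
ram = {"pc": 0x1234}       # stands in for DRAM contents

def on_battery_level(percent):
    # Dump all of RAM to flash at the 5% threshold.
    if percent <= 5:
        FLASH_DUMP["image"] = dict(ram)

def on_boot(ram_survived):
    # Prefer live RAM; fall back to the flash image; else cold boot.
    if ram_survived:
        return ram
    if "image" in FLASH_DUMP:
        return dict(FLASH_DUMP["image"])
    return {}
```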

~~~
shabble
We do: something like <http://techreport.com/articles.x/16255> or
<http://www.anandtech.com/show/1697/5>, although I don't think those actually
have a full NV backing store. They definitely exist, though, and did even
before SSDs; back then they just had a normal disk hanging off them.

Battery-backed cache has probably been around even longer in write caches for
large RAID systems.

The 5% power threshold and shutdown is also how MacBook laptops handle sleep.
Sleep mode is a low-power nothing-but-RAM mode, and when the battery gets too
low, it goes into 'safe sleep', basically a dump-to-disk, all-powered-off
hibernate.

The main reason it's not all that common is that for the sort of workloads
where you're prepared to pay for a shitload of RAM, you're probably just
using it as a cache for a DB or some monster app, and actually keeping it
around isn't that much of a priority. You've got failover somewhere else in
the stack, and it's one less thing to buy and maintain.

The other critical flaw is that there is a (potentially huge) performance hit
in presenting as a disk versus hanging off the northbridge memory controller.
Even the latest fancy SATA is hilariously slow compared to the actual memory
bus (6Gbps for SATA3 vs maybe 100Gbps for DDR3[1]), and layering all the
filesystem abstraction on top, as the article this thread is about mentions,
is a whole lot more overhead.

So yeah. We can. Sometimes people do. But it's probably easier and better to
just stick it in the actual RAM slots, and use it differently for everything
except 5-second boot times.

[1]
[https://en.wikipedia.org/wiki/List_of_device_bandwidths#Stor...](https://en.wikipedia.org/wiki/List_of_device_bandwidths#Storage)

------
vicaya
Commodity NVM is huge for big data transactions.

      Bye bye WAL
      Bye bye fsync
      Hello NVM replication
      I think I'm gonna cry

------
leoc
Obligatory mention that the DG Nova
<http://en.wikipedia.org/wiki/Data_General_Nova> was the star of _The Soul of
a New Machine_ <http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine>.

------
mmphosis
What about processors that rather than accessing external RAM and levels of
cache, instead a large amount of (register) memory (nonvolatile or not) is
directly included within each CPU?

