Preparing for nonvolatile RAM (lwn.net)
76 points by willvarfar on May 30, 2012 | 90 comments



Crazy futurist rant...

Traditional operating systems such as Linux and Windows are 100% dead when non-volatile memory comes along in force. Paradigm shift time.

There is pretty much no reason to use any filesystem APIs or a filesystem any more. You just keep your data in the process address space - it's just not going to go anywhere. Make a data segment persistent across processes and you can survive restarts. If you back up, you can just dump the address space. Screw hard disks as well. I imagine some form of RPC will be in place between processes so they can talk to each other, and that is it. Lots of small Redis instances would be a similar concept.

Imagine an MP3 server process which can provide persistence in the heap, metadata, and decoding services, and you're there.

It'll be like a small internet inside your machine.

Lisp would fit nicely in this world. Imagine a persistent root environment: load a defun once and it's there forever. Terracotta do something similar with Java.

Then again I could just be insane.


I'll buy the crazy part. The "100% dead" bit loses me right off the bat. Surely there will be space for hardware abstraction, process models, memory protection mechanisms, networking, etc... in your futurist OS. And, amusingly, that code is already there in the OSes of today! Storage management is merely a big subsystem.

And as for dropping "files", I think that's missing the point. Files are just pickled streams, and there will always be stream metaphors in computing. How else do you represent "inputs" to your program (which, remember, still has to run "from scratch" most of the time, if only on test sets)?

I guess I see this as a much narrower optimization. A computer built out of perfectly transparent NVRAM storage is merely one with zero-time suspend/restore and zero suspended power draw. That doesn't sound so paradigm breaking to me, though it sure would be nice to have.


I think you're assuming a level of opacity to the OS that is true theoretically, but not realistically. Conceptually we can treat the computer as a black box which does the same thing, eventually, whether it's using cache, RAM, HDD, or the network, but realistically the limitations leak out all over the place and are embedded all over user-facing workflows in the form of opening, saving, uploading, and such things. They may always be happening in some sense, but there is no intrinsic need to involve the user in them.

Assuming that NVRAM becomes dense enough to replace storage in practice -- which is a big assumption, but it's happened to tape and is happening to hard drives right now -- concepts like launching a program, opening and closing a file, even booting will become mostly academic. Certainly they'll be of no crucial interest to users, to whom the distinction between what something is and what it does has never made that much sense.

Sure you could apply all the same abstractions over the top, but if you were designing your OS from a blank slate, why on Earth would you? And it will only be a matter of time before one of those blank-slate OSes is compellingly superior enough to the old-school paradigm, and users will start switching en masse.


Fine fine. Let's just say that the last "blank-slate" OS to achieve commercial success did so, what, 35 years ago? If I'm assuming too much transparency in the NVRAM technology (and honestly, I don't think I am -- DRAM is hardly transparent already, cf. the three levels of cache on the die of modern CPUs), then you're assuming far more agility in the OS choice than is realistic in the market.


Well, there hasn't been a major upset in the PC paradigm in 35 years :) Really, I agree with that bit-- marketing the thing would be a nightmare. But whether it's the first company to try it or the tenth, at some point the utility benefits will become too great for users to ignore.

> DRAM is hardly transparent already, c.f. three levels of cache on the die of modern CPUs

Keep in mind I'm talking about interface, not implementation. My code might care about cache misses, but my users have no reason to (except in the very, very aggregate). We leak a user-facing distinction between storage and memory because the difference is too significant to pretend it doesn't exist.


I think that OS choice agility is increasing rapidly. Consider the number of people whose primary computers are mobile phones (replaced every 1 to 3 years) and whose secondary computers are glorified web browsers. This is rapidly becoming true for businesses as well, as they adopt more web-based tools.


All mobile phone OSes are still based on a filesystem. If you want to claim that the user's perspective of the computer is going to move away from a "file", then I agree. If you think the underlying software is going to do so simply because it got no-power-to-maintain memory, I think you're crazy.


Straw man. I said nothing of my opinion on non-volatile memory. I was only pointing out that more and more users are less and less tied to any particular operating system.


Uh... the whole subthread was about NVRAM and the likelihood of it replacing the filesystem with different storage models. You'll have to forgive me for inferring an opinion about the subject we were discussing; I just don't see how that can be a straw man. It's just what happens when you inject a non sequitur into an existing discussion.


Not a non sequitur at all. You wrote "you're assuming far more agility in the OS choice than is realistic in the market" which was a point to support your case about NVRAM. I was merely stating that point was weak because, realistically, the agility in OS choice is increasing within the market.

I actually wrote in defense of the traditional filesystem model in another post on this thread. Just because I don't agree with your reasoning doesn't mean I don't agree with your conclusion.


The stream metaphor isn't appropriate for all types of data. Memory allows a storage paradigm to be picked for the task at hand.

Consider Redis, which is a great example of this. How do you store Redis' data efficiently? Well, it turns out that AOF files are slow to start up from and disk-backed virtual memory is slow. The problem goes away instantly with NVM - the job is done with no filesystem API used.


I dare say that the stream metaphor is a better fit for more types of data than Redis is. To first approximation, all data in the modern world is video files. You really want to store those in a raw memory space?


That's OK until you need to read hundreds of streams from a stream device. You end up with random access, at which point the stream paradigm breaks down and you have to use memory.


Filmmakers would kill for that ability.


Filmmakers don't work with compressed video files (which I picked precisely because they're inherently a stream, and because I'm not kidding when I say that they're basically all the data there is in this world). And they seem to be doing quite well with DRAM anyway.


Fair point. Although, I'd wonder if it's right that video is inherently a stream, and not that that's a limitation imposed by storage speeds. I might only want to watch video front-to-back, but (without knowing much about encoding) I could easily imagine that my experience might be improved by my computer having random access.


Modern video encoding is based on predicting from the previous frame, that is, only storing a full frame occasionally, and most of the time just storing diffs from the last frame. ^+ This means that the data is inherently a stream -- a decoder needs to have finished decoding the previous frame before it can start on the next.

(^+ and B-frames, but let's not overcomplicate things)
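
To make the dependency concrete, here's a toy sketch in C -- not any real codec's API, just the shape of delta decoding: a predicted frame can only be reconstructed once the whole previous frame exists, which is what makes the data a stream.

  #include <stddef.h>
  #include <stdint.h>

  /* Toy "P-frame" decoder: here a predicted frame is just a per-pixel
   * difference from the previous frame. Real codecs add motion
   * compensation, B-frames and entropy coding, but the dependency on
   * the previously decoded frame is the same. */
  void decode_p_frame(const int8_t *diff, const uint8_t *prev,
                      uint8_t *out, size_t npixels)
  {
      for (size_t i = 0; i < npixels; i++)
          out[i] = (uint8_t)(prev[i] + diff[i]);
  }

Seeking means jumping to the nearest full (I) frame and decoding forward from there, which is why skipping around is still mostly sequential reading.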


It's still effectively a stream even if you skip around in time. Each frame is so large that it is a stream itself. So even if you watch a few seconds here and a few there, your use case is still pretty much optimized for streaming data.


This seems like one of those "sounds good on paper" ideas, like shells that pipe objects. In practice, file and process oriented systems are separate from REPL systems on purpose.

The most important property of a filesystem or the Unix-style pipes & filters that permeate them is the fact that serialization is a fundamental component of persistence and communication. Any finite quantity of space can be addressed by a linear scheme and any unbounded sequence is inherently linear. Consider what happens as soon as you want to send a stream of data across the wire. Or checkpoint an operation. You need to store stuff in a non-volatile and linear form. And since data outlives code, you'd better invest time in thinking about that linear form.

Now, if you use immutable data structures (i.e. non-cyclical) with a proper linearization and a corresponding reader/printer (like in Clojure), you can get many of the same benefits, but you still have a whole bunch of other things to worry about. Just look at the weird things that Clojure needs to do with print-dup and the like.

I'm not saying that there isn't a kernel of a good idea in here. I'm just saying that it's going to have a lot more in common with filesystems and traditional shells than you'd first expect coming from a Lisp REPL school of thought.
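
As a minimal illustration of the serialization point above (the record and layout here are made up, not taken from any real system): even a trivial in-memory structure has to be flattened into an agreed linear byte form before it can cross a wire or be checkpointed.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* A hypothetical in-memory record. */
  struct user {
      uint32_t id;
      char     name[32];
  };

  /* Flatten the record into a fixed linear layout: a 4-byte
   * little-endian id followed by the 32 name bytes. Pointers and
   * in-memory layout don't survive the trip; only this agreed-upon
   * linear form does, and it has to stay stable because the data
   * outlives the code. */
  size_t serialize_user(const struct user *u, uint8_t *buf)
  {
      buf[0] = (uint8_t)(u->id);
      buf[1] = (uint8_t)(u->id >> 8);
      buf[2] = (uint8_t)(u->id >> 16);
      buf[3] = (uint8_t)(u->id >> 24);
      memcpy(buf + 4, u->name, sizeof u->name);
      return 4 + sizeof u->name;
  }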


> shells that pipe objects

Powershell rocks!

No wait, sorry it doesn't. I agree!


> There is pretty much no reason to use any filesystem APIs or filesystem any more

Don't agree. Filesystems are used more for organizational purposes than anything. Most of my folders are named and organized logically, rather than physically, based on the kinds of things they contain. Probably the filesystem API will evolve into something more tag-based than hierarchical, but you never know. The filesystem's hierarchical folder metaphor is re-used in tons of places where there's really no need for it physically. Why? Because there's a logical need for it.

Additionally, filesystem APIs provide lots of useful abstractions such as opening resources, closing them, reading from, writing to, appending to, etc. When you go to do your backup in process space, which data is in a self-consistent state that can be copied? You'll need something like a filesystem API to coordinate.


Tagging works on a heap. Hierarchies are terribly inefficient for categorising information. The metadata is more important than the file location. Look at the way music is stored on smartphones.

Ever heard of STM?


> The metadata is more important than the file location.

The web is a great example of a system where there's no particular need to store things like you do in a filesystem. The data for most sites is stored in various SQL and NoSQL databases... yet we still predominantly see hierarchical paths used for resource identification. I wonder why?



But you're the one claiming an absolute with your original futurist prediction. Finding a counterexample doesn't detract from my argument that the filesystem API and its notion of hierarchical resource identification and management is extremely useful regardless of the storage.

I.e., current operating systems won't die and filesystem or filesystem-like APIs won't be going anywhere for a long, long time.


> There is pretty much no reason to use any filesystem APIs or filesystem any more. You just keep your data in the process address space

The Windows NT kernel is primarily a filesystem-backed address space for committed RAM. Originally you actually had to have a pagefile at least as large as physical RAM. Except for nonpageable kernel structures, all the program accessible RAM was part of a memory-mapped file. [[EDIT: There's plenty of text from Microsoft that implies this, (e.g. "you should set the size of the paging file to the same size as your system RAM plus 12 MB...because Windows NT requires "backing storage" for everything it keeps in RAM") however, offline discussions have convinced me that it was never strictly true.]]

This was back when drive capacity was "more larger" than RAM capacity and disk bandwidth was "less slower". The kernel has evolved away from this design a bit, but it did bring a certain purity. For example, the filesystem cache and virtual memory paging system could be largely the same thing.

> You just keep your data in the process address space - its just not going to go anywhere. Just make a data segment persistent across processes and you can survive restarts.

This is more or less what happens when the kernel bluescreens and the page file is at least as large as RAM. It makes debugging kernel crashes easier. (Spare me the Windows jokes, please; I'm not advocating for it, just saying that this part of it, which no one ever sees, had a relatively elegant design, opposite to what the crazy futurist rant suggests.)


One might think about the paradigm shift when hard disks became so cheap that tape drives no longer made sense.

• Tape drives still make sense to some people who store insanely large data sets with extremely infrequent (or no) access.

• I do not mourn my tape based backups for a single instant when using my rsync based backups. Life is good.

• Despite tape being dead for most of us, we all know a program called "tar". We still make tape archives, just on other storage.

The paradigm survives on virtual tapes because it is a useful cognitive model. Sure I could make a block file be a virtual disk and put a filesystem in it and send you the files that way, but you'd rather have a good old "tape" archive.

Likewise, disk filesystems are not going to go away if disks go away. They are too useful for reasoning about problems.

Scratch the surface of an iPad. It is full of files, yet empty of disks. Go to Linux, land of speciation, try to find a persistent storage system for flash memory which does not treat it as a disk. You will find some filesystems optimized for flash, but you will come up (nearly) lacking for a completely new way of looking at storage.

Persistent full speed RAM should be enough of a change to spawn that new thing, but I'll bet people keep the "real" copy in a filesystem for a long time. When that alpha particle corrupts a bit and trashes your clever RAM based data structure, what are you going to do? I'll reload from my file.


Being able to reboot is nice in case you screw up really bad. Having a transparent storage (file system) instead of opaque cross-linked in-process structures is a good thing too.


I disagree.

If you use a safe programming language which doesn't piss over memory (e.g. Haskell, Python, Ruby, Lisp, etc.) you won't need to reboot. Just reload the broken function into your environment and carry on. As for data, the same thing can happen to your disk...

The filesystem only exists because we couldn't keep data in process due to the cost of memory. We cram everything through the filesystem API whether it fits or not because of this.

It's why our machines are slow and primitive.

Consider the case of video - it is better represented as streams of audio, picture and metadata. To get this from a filesystem, we have to mux them all into a single stream and then demux them afterwards and hand them over to other APIs to process. This just wouldn't need to exist.


Yeah, and none of those VMs have any bugs in them.

One point of a filesystem is to have a consistent state that you can recover from after an errant process stomps on memory, or your machine suffers a kernel panic, or your memory becomes so fragmented that you can't even read a 100mb data source, or any other number of issues that can only be resolved by rebooting or reloading.

Once you've committed to non-volatile memory and ditched files, you're tightrope walking without a safety net, at the mercy of the next system level bug. I'd rather know that my data is safe and double-backed up at multiple physical locations, with recoverable history. Files give me that in a well supported, (mostly) system agnostic manner.


> Consider the case of video - it is better represented as streams of audio, picture and metadata.

Not true. Video components are packaged together because they go together logically. When you want to send the video to another machine or give a copy to a friend, it's entirely logical and useful that the various components are packaged together in some way.


I'm talking about storage, not distribution. They are two different things.


> Traditional operating systems such as Linux and windows are 100% dead when non volatile memory comes along in force

Seems like you were talking about a lot more than just one particular facet of filesystems when you made the futurist proclamation above.


Yes I admit that but for the scope of discussion, it's rather hard to separate the two.

My idea is that communication should be transparent.


Figuring out which function has broken might take a year (literally). Sometimes a reboot is just a reboot.


Bugs happen. Sometimes the cheapest bugfix is to reboot.


You don't set your standards very high, do you?

I've worked on kit that is never upgraded or rebooted. It's active 24/7, 365 days a year, and is expected to work for 30 years.

I expect the same thing from a normal computer, especially considering the engineering budget is larger for them.


You don't use Microsoft products, do you? :-)


I do which makes me want for more :)


>If you use a safe programming language which doesn't piss over memory (I.e. Haskell, Python, Ruby, Lisp etc) you won't need to reboot.

This is a man who's never used Xmonad (which is written in Haskell) for an extended period of time before. If you use it for a long enough time, it gets slower and slower until workspace changes start taking whole seconds.

Eventually you say fuck it and reboot...

...'cept you can't do that without a filesystem with a "base state" to reboot from.

GG


I've had this happen before with xmonad.

It turned out I had a memory leak in my configuration file :(


As someone who's had an sbcl process running for just over a year on one box, I disagree.

xmonad is probably just a turd.


So what, in the future you're relying on everyone to magically start coding well?


Don't we already expect that from kernel contributors? :-P


So we're going to expect this from the entirety of userspace?


No, userspace can be wrapped quite nicely:

In fact, we don't really need processes, just suitable environments set up for the execution of code. For example, to stop a Lisp definition calling eval:

   (defun no-eval-wrapper (form)
      ;; shadow eval, then call the wrapped code
      (let ((eval nil)) (funcall form)))
I've not tried this, btw, and it'll probably only work with Scheme dialects.


If all the non-volatile in-process data structures have the same semantics, with type annotations etc, one can imagine those structures being just as transparent and ready for inspection as modern filesystems are.


> one can imagine those structures being just as transparent and ready for inspection as modern filesystems

Then why not use filesystem APIs to talk about these structures? There's nothing about opendir() that requires it to be backed by a block device instead of NVRAM.


Modern file systems are not very transparent, but any process guts are much worse. My debugging experience suggests so.

Files are data and structures are partly code. You can't do much with code 'cause of Turing completeness.


http://www.cis.upenn.edu/~KeyKOS/ looked pretty much like that. The implementation treated RAM as a cache of the disk, IIRC. This seems like a better way to live than Unix, to me, but you can't say it hasn't been tried.

See also http://en.wikipedia.org/wiki/Single-level_store


That's exactly the sort of thing I'm thinking.


Crazy futurist rant...

Haha, this is exactly how the IBM AS/400 works!

http://en.wikipedia.org/wiki/Single_level_store


Thanks for the pointer.

I bow to you, IBM (but not the COBOL bit).


This could be two decades away. NVM will first be introduced with limited amounts of storage, so there will still be other hardware.

Hopefully current OSes will be re-imagined anyway, just given how much time that is.


I agree about it being a way off, but I think more like 5 years than 20. Look at the adoption rate of SSDs.


SSDs haven't replaced spinning HDs yet, and probably won't for slow-read but massive storage. They handle general use, where a 300GB HD would do, but that's some way off current multi-terabyte spinning disks, and those might increase storage capacity faster than SSDs. So server OSes will have to maintain compatibility.

And NVM tech will probably be more limited relative to current RAM than SSDs are relative to spinning HDs. Replacing even 30% of RAM/storage in 5 years seems like a stretch.


Slight modification...

HDs haven't replaced tapes yet, and probably won't for slow-read but massive storage. They handle general use, where a 20GB tape would do, but that's some way off current multi-terabyte tapes, and those might increase storage capacity faster than HDs. So server OSes will have to maintain compatibility.


Traditional operating systems such as Linux and windows are 100% dead when non volatile memory comes along in force.

This implies there is something rearing at the starting gate to replace them.


There will be and I'm going to jolly well be the person writing it :)

(I'm going to try - 10 years of embedded followed by 10 years of business facing is a good foundation and I've spent the last 15 years on the problem in my mind waiting for the technology to arrive)


Hmmm, I'm remembering back to the Apple Newton's "soup" memory… (You don't want each process to have exclusive access to its own data; sometimes you want different processes to access the same data.)


> Traditional operating systems such as Linux and windows are 100% dead when non volatile memory comes along in force. Paradigm shift time.

I doubt they'd be dead, they just won't be able to do the new things that NVM would allow you to do (at first, anyway).

At a minimum, mobile battery life could be better, depending on how much power is usually used to keep things going. Get an event from a radio or button and the system can instantaneously wake up.

The magic trick that the NDS does when you close its lid will become universal.


Is it possible that the switch to no filesystem in iOS userspace (and probably OS X in the future) is in anticipation of this to some degree?

<tinfoil hat on>


In this world, how do programs on different computers reference the same piece of data? IPv6 address/memory address?

The whole path/file thing is really useful for distributed computing.

Also, what would users use for object naming (especially across address spaces)? How do I search for a presentation or spreadsheet without a reference to it?


a) global address space.

b) no it's not. Distributed computing is normally message-based.

c) you use an index in (a) and pull it into your local scope.


a) global address space for all computer data for all time? how big does that need to be?

b) yes, but in those messages are references to data. How do you reference that data across computer address spaces? Right now, we use a URI that is based on host/path/resource semantics. How do you get path/resource without a filesystem-like construct?

c) I'm not following you.


a) Large, sorry, HUGE. Current CPUs cannot address it. 128 bits is probably enough to encode most of the universe and is not an insurmountable bit width. Paxos consensus algorithm + hybrid virtual memory system for management. Literally a GLOBAL P2P heap. Your CPU resides in a unique section of it.

b) The above solves that. You just read an address and it pulls that block over the network into your local address space. You write to it, and it pushes it back.

(STM comes in here as you can semantically wrap such things in transactions).

c) Someone gives you a pointer to the root of a catalogue, or there is one built in; then you can navigate the data structure, be it a full-text index or a linear linked list, to find the data you need. There is no filesystem.

I think someone related my ideas to AS/400 which TBH after doing some reading is a pretty good comparison, although I'd do it on a larger scale.

This is really "fringe" computer science, if you want to call it that. It's intentionally pushing the boundaries of what is possible.
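
Purely to sketch the shape of the idea -- nothing below is a real API, and the Paxos/STM/fetch machinery above is left out entirely -- a 128-bit global address might split into an owning node and an offset into that node's memory:

  #include <stdint.h>

  /* Hypothetical 128-bit global pointer for a global P2P heap:
   * the high 64 bits name the owning node, the low 64 bits are an
   * offset into that node's local memory. */
  struct global_ptr {
      uint64_t node;
      uint64_t offset;
  };

  static inline struct global_ptr make_global_ptr(uint64_t node,
                                                  uint64_t offset)
  {
      struct global_ptr p = { node, offset };
      return p;
  }

  /* A real system would trap accesses to non-local addresses, fetch
   * the backing block over the network, and push it back on write
   * (wrapped in a transaction) - none of that machinery is shown. */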


Yeah, I think I get it now. Thanks for explaining it to me. It is a bit mind-blowing. It's like taking the AS/400 model and applying it to the entire internet (or all of computing).


A descendant of Smalltalk seems like it might fit the bill here...


>Traditional operating systems such as Linux and windows are 100% dead when non volatile memory comes along in force. Paradigm shift time. There is pretty much no reason to use any filesystem APIs or filesystem any more. You just keep your data in the process address space - its just not going to go anywhere.

Yes, only we want to have those data shared among many processes, with abstractions, names, rich metadata etc. (Not to mention different machines, backups, etc).

It's not like we are currently forced to use filesystems because memory is volatile.

Actually, you've got it backwards: we use filesystems specifically on NON-VOLATILE memory, that is, hard disks.


Personally I don't like the idea of "image" based computing very much. I mean, it's fine in a lot of circumstances, but the file system provides a nice broadly compatible database that is consistent across programs.

I think the need for such a well defined and accepted user organizable data store isn't going away regardless of the underlying storage medium.

I like that my file system can be reasoned about in very concrete ways. I'm okay with using a tag based system like gmail, but I find it very flat and more difficult to organize vs traditional hierarchy of folders.

And btw, GET OFF MY LAWN!


What you really need is to stick a virtual memory manager on top of NVM instead of RAM. In this case NVM works like disk and RAM at the same time. In your file system you just have a /ram file which is used as system memory.

Then, fun things start to happen: you no longer need disk buffers, since your RAM is your disk. And mmap()ing no longer consumes "RAM", because you just map a part of your file to whatever virtual address you want. You need no swap. You never have to swap in or out. You never have to sync your disks or track buffer dirtiness (except at the CPU cache level), because your buffer is a part of your file - once you write to it, the file is already updated.

Of course, software will have to adapt: prefer mmap()ing a file to read()ing one; use persistent memory structures. The goal is to minimize copy-on-write by either read-only access or safe in-place data transformation.
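
A rough sketch of the mmap()-first style under today's APIs, assuming the NVM shows up as a memory-mappable file (the mount point and layout here are invented): the mapping itself is the data, so there's no separate buffer copy to write back.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Hypothetical persistent application state, used directly in the
   * mapping rather than read into a separate heap copy. */
  struct app_state {
      char magic[8];
      long counter;
  };

  int main(void)
  {
      /* /mnt/nvm is an invented mount point for NVM-backed storage. */
      int fd = open("/mnt/nvm/state", O_RDWR | O_CREAT, 0600);
      if (fd < 0 || ftruncate(fd, sizeof(struct app_state)) < 0)
          return 1;

      struct app_state *st = mmap(NULL, sizeof *st, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
      if (st == MAP_FAILED)
          return 1;

      if (memcmp(st->magic, "APPSTATE", 8) != 0) {   /* first run: initialize */
          memcpy(st->magic, "APPSTATE", 8);
          st->counter = 0;
      }
      st->counter++;                 /* survives the process exiting */
      printf("run #%ld\n", st->counter);

      /* On real persistent memory you would still need msync() or CPU
       * cache flushes to guarantee durability at a given point; with
       * NVM treated as plain RAM that detail may change. */
      munmap(st, sizeof *st);
      close(fd);
      return 0;
  }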


While I look forward to that future, realistically NVM RAM is going to be at least as pricey as SSDs or volatile RAM, which means a more likely in-between approach is one that tries to minimize the amortized I/O between fast NVM (our RAM replacement) and cheap, relatively slow NVM (HD or SSD).


If it cost what SSDs do, it's pretty simple: you mount NVM as /, and /var & /srv stay on disk. All your programs lose their start-up time and most of their swapping.

If it cost more, you could use it as a persistent block cache (one that survives reboot), but that doesn't make much economic sense, since nobody is willing to pay money just for faster start-up and warm-up after a reboot. If you have 8G of NVM, you could just read 8G off disk in something like 30 seconds and be settled with regular RAM.


Widespread adoption of NVRAM may require a significant change in security models, since data once assumed ephemeral may be persistent. For example, it may be trivial to recover cryptographic keys from a running system.

(Disclaimer: I'm working on a solution to this.)


This. I'm surprised nobody else mentioned security. Instead of chilling DRAM sticks immediately after shutdown and trying to read their contents with specialized tools, people could just take the NVRAM out and analyze it at leisure.

I don't think it will be too difficult for security software to wipe its keys from memory before shutting down, and many programs already do this. But so much more would remain vulnerable unless the decrypted data structures were also wiped from memory. Implementing effective security on NVRAM-equipped computers might therefore negate much of the benefit of using NVRAM in the first place.
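
The wiping part at least has a well-known building block today: an explicit clear the compiler can't optimize away (explicit_bzero() on glibc >= 2.25 and the BSDs; memset_s() or equivalents elsewhere). The struct here is just an illustration.

  #include <string.h>   /* explicit_bzero() on glibc >= 2.25 and the BSDs */

  /* Hypothetical session state holding key material. */
  struct session {
      unsigned char key[32];
  };

  /* Wipe the key before the memory is released. A plain memset() can
   * be removed by the compiler as a dead store; explicit_bzero() is
   * guaranteed not to be. With NVRAM this matters more, because
   * anything left unwiped may sit in the module indefinitely. */
  void session_close(struct session *s)
  {
      explicit_bzero(s->key, sizeof s->key);
  }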


There are already solutions out there for the paranoid who are afraid of cryogenic attacks on DRAM.


Yes and no. There are some solutions out there, but they are typically not ready for production, require custom hardware, or make assumptions about physical security controls.

(Disclaimer: I am biased since I am working on this.)


That's fairly easy to deal with - the TPM in PCs does this already.


The TPM does not provide full memory protection. TPMs are primarily used to attest to the state of running software; they do not prevent someone from reading memory contents.


I will only believe this once it's sitting on my desk.

And, for now, it won't change filesystems much, unless you can get a similar amount of it as a disk (or maybe a compromise: let's say today around 8GB of RAM is common, and 1TB of HD; then if you can get around 128GB of NVM, this can be your new 'SSD').

It is, of course, a very important development, and may make things faster.


How about 240 GB almost as fast as DDR 200 (PC-1600) RAM? http://www.newegg.com/Product/Product.aspx?Item=N82E16820227...

Edit: according to reviews it uses compression to boost its bandwidth (but not capacity). Still seems like a decent tradeoff.


Crazy! I was looking into how much it would cost to get 500GB of RAM, looked like it was going to be around $3500. I hadn't even considered this...


Wow, very tempting

Too bad it doesn't fit my MBP =)


You can easily run a desktop PC with a full-featured distro like Ubuntu Desktop off a 16GB disk. You certainly don't need 128GB.


Can anyone tell me why we don't just hook normal RAM up to a small rechargeable battery so that it can maintain its state during a power loss? Alongside that, there is an equivalent amount of flash memory. The flash is never used, except when you get within, let's say, 5% battery life, at which point the entire contents of the RAM are dumped to the flash. Then, on system start-up, if the RAM is still loaded, swell. If there is a RAM image on the flash drive, load that and continue as normal.

Isn't this essentially NVRAM? What are the downsides to it?


We do, something like http://techreport.com/articles.x/16255 or http://www.anandtech.com/show/1697/5 although I don't think they actually have a full NV backing store. They definitely exist though, and did so even before SSDs; back then it was just a normal disk hanging off the card.

Battery-backed cache has probably been around even longer in write caches for large RAID systems.

The 5%-battery dump and shutdown is also how MacBook laptops handle sleep. Sleep mode is a low-power nothing-but-RAM mode, and when the battery gets too low, it goes into 'safe sleep', basically a dump-to-disk, all-powered-off hibernate.

The main reason it's not all that common is that for the sorts of workloads where you're prepared to pay for a shitload of RAM, you're probably just using it as a cache for a DB or some monster app, and actually keeping it around isn't that much of a priority. You've got failover somewhere else in the stack, and it's one less thing to buy and maintain.

The other critical flaw is that there is a (potentially huge) performance hit in presenting as a disk vs hanging off the northbridge memory controller. Even the latest in new fancy SATA is hilariously slow compared to the actual memory bus (6Gbps for SATA3 vs maybe 100Gbps for DDR3[1]), and having all the filesystem abstraction on top, as the titular article of this thread mentions, is a whole lot more overhead.

So yeah. We can. Sometimes people do. But it's probably easier and better to just stick it in the actual RAM slots, and use it differently for everything except 5-second boot times.

[1] https://en.wikipedia.org/wiki/List_of_device_bandwidths#Stor...


If you want battery-backed RAM you might as well use a UPS. There are also corruption issues: a failure can leave data structures in an inconsistent state, something applications typically don't worry about in RAM.


Commodity NVM is huge for big data transactions.

  Bye bye WAL
  Bye bye fsync
  Hello NVM replication
  I think I'm gonna cry


Obligatory mention that the DG Nova (http://en.wikipedia.org/wiki/Data_General_Nova) was the star of The Soul of a New Machine (http://en.wikipedia.org/wiki/The_Soul_of_a_New_Machine).


What about processors that, rather than accessing external RAM and levels of cache, instead have a large amount of (register) memory (nonvolatile or not) included directly within each CPU?



