
If they really wanted to make waves, they would unveil the world's fastest AND the world's largest hard drive, two in one: an onboard battery plus 64, 128, or 256 GB of RAM (not SSD) in 2x, 4x, or 8x 32 GB DIMMs, exposed as a physical drive and costing roughly $800, $1600, and $3200 respectively, alongside a second 16 TB physical drive. All of it would be integrated in one package, so you couldn't disconnect the battery and nuke your lightning-fast drive without being extremely aware that you were doing so.

The hard drives would have ironclad firmware that keeps the RAM refreshed until the battery drops to 15% (or whatever conservatively leaves ten minutes of power), at which point it spends those ten minutes dumping the contents of RAM to the SSD and reverts to serving that drive from SSD until power is reconnected long enough to charge the battery back up to 80%. Then it reads the data back into RAM and continues as a lightning-fast 64 GB drive plus a very fast 16 TB drive.
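A rough sketch of the firmware behavior described above, written as a simple state machine. Everything here is illustrative: the thresholds, state names, and dump/restore functions are assumptions for the sketch, not a real controller API.

    #include <stdbool.h>

    /* Hypothetical drive states matching the behavior described above. */
    typedef enum {
        MODE_RAM,        /* serving the fast volume from battery-backed RAM */
        MODE_DUMPING,    /* copying RAM contents to the onboard SSD */
        MODE_SSD_ONLY,   /* battery too low; serving the volume from SSD */
        MODE_RESTORING   /* power is back; reading contents into RAM */
    } drive_mode;

    /* Illustrative thresholds from the description above. */
    #define DUMP_THRESHOLD_PCT    15  /* ~10 minutes of reserve power */
    #define RESTORE_THRESHOLD_PCT 80

    /* Implemented by the controller; declared as stubs for this sketch. */
    int  battery_percent(void);
    bool external_power_present(void);
    void dump_ram_to_ssd(void);    /* uses the ~10 minutes of reserve */
    void restore_ssd_to_ram(void);

    drive_mode step(drive_mode mode) {
        switch (mode) {
        case MODE_RAM:
            if (!external_power_present() && battery_percent() <= DUMP_THRESHOLD_PCT)
                return MODE_DUMPING;
            return MODE_RAM;
        case MODE_DUMPING:
            dump_ram_to_ssd();
            return MODE_SSD_ONLY;
        case MODE_SSD_ONLY:
            if (external_power_present() && battery_percent() >= RESTORE_THRESHOLD_PCT)
                return MODE_RESTORING;
            return MODE_SSD_ONLY;
        case MODE_RESTORING:
            restore_ssd_to_ram();
            return MODE_RAM;
        }
        return mode;
    }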

You would store your operating system on the lightning-fast drive.

The absolute nightmare failure state isn't even that bad. Even though the RAM drive should be as ironclad as an SSD, in case it ever loses power unexpectedly (someone opening the device and disconnecting the battery, say), it can still be backed up periodically, so that if you pick up the short end of six sigma, you just revert to reading the drive from SSD rather than RAM and lose, at most, a day of work.

Thoughts? I bet a lot of people would be happy to pay an extra $800 to have their boot media operate at DIMM speed, as long as the non-leaky abstraction is that it is a physical hard drive, and the engineering holds up to that standard.

There is a lot of software out there that is very conservative about when it considers data to be fully written; it would be quite a feat for Samsung to preserve that abstraction by delivering six or seven sigma availability on a RAM drive with a battery and an onboard SSD to dump to.
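To make "conservative" concrete: the sketch below shows the standard POSIX pattern for a durable write, where a program refuses to consider data fully written until fsync() returns. Any RAM drive behind this abstraction has to honor that same guarantee. The function name and usage are made up for the example.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a record and wait until the device reports it durable.
     * Databases, mail servers, etc. do the equivalent of this on
     * every commit, which is why storage write latency matters. */
    int durable_write(const char *path, const char *data) {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return -1; }

        if (write(fd, data, strlen(data)) < 0) {
            perror("write"); close(fd); return -1;
        }

        /* The data is NOT considered written until this returns:
         * fsync() blocks until the kernel and the drive have pushed
         * it to stable storage. */
        if (fsync(fd) < 0) { perror("fsync"); close(fd); return -1; }

        return close(fd);
    }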




The basics of your idea were captured in a device in 2009: http://2xod.com/articles/ANS_9010_ramdisk_review/

It would be very interesting to see a similar product being introduced using contemporary technology, though. One question is what sort of interface it would communicate over to leverage the higher transfer speed.


I wonder if they have a patent. The idea seems important and good enough to patent, and if they do have one, I would defer to them on it. I wonder why they're not making anything these days.

I think it is fine not to have any faster interface to leverage the transfer speed. RAM latency and bandwidth can obviously saturate disk interfaces, and I doubt SSDs come close, so it should be a large jump in performance regardless.


It goes way back to 2005: http://www.anandtech.com/show/1742


I think there have been RAM-based storage devices even older than that, connected over IDE. I have no idea how one would go about finding a reference these days, though.


Yeah, I'm pretty sure I remember a similar card being announced when DDR was being widely adopted, but I can't find anything. (Which is pretty interesting in itself: I once searched the Internet for specs on a dial-up modem and could not find a single mention of it, like it never existed :-))...


Wouldn't relying on closed software on the drive's controller chips just make things more complicated? With RAM being as cheap as it is, why not just build a series of systems with 100 GB+ of RAM and cache away?

Relying on a drive controller might seem like the right way to go, but especially for corporate installations, I believe it would be beneficial to have the fine-grained control a dedicated server could provide.


Because systems, including servers, don't expose an abstraction of permanent (always-on) storage that never gets suddenly lost (rebooted, loses power) for any reason at any time. How would you run a laptop with a server inside caching 100 GB+ of RAM? You would have to wait for the cache to load into RAM after every boot. Not so with the device I outline: if it has 48 hours of trickle charge (my calculations could be wrong, but apparently it takes mere milliwatts to keep a DIMM refreshed), then as long as it has received power in the last 48 hours it would be instant-on. Everything you do is instant, EVEN IF it's written against software that writes to permanent storage.
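A back-of-the-envelope check on that trickle-charge claim, with assumed (not measured) numbers; DDR self-refresh draw is usually quoted in the tens to hundreds of milliwatts per DIMM:

    #include <stdio.h>

    int main(void) {
        /* Assumptions, not measurements. */
        const double watts_per_dimm = 0.2;   /* 200 mW self-refresh, assumed */
        const int    dimms          = 8;     /* the 256 GB configuration */
        const double hours          = 48.0;  /* desired hold-up time */

        printf("Energy needed: %.1f Wh\n", watts_per_dimm * dimms * hours);
        /* 0.2 W x 8 DIMMs x 48 h = 76.8 Wh, roughly a laptop battery.
         * At 50 mW per DIMM it drops to ~19 Wh, a phone-sized cell, so
         * the 48-hour figure lives or dies on the real refresh power. */
        return 0;
    }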


Sounds very much like the suspend-to-RAM functionality that every laptop already has, recreated at great expense and complexity.


Only if you think having every disk access occur at RAM speed rather than SSD speed makes no difference. CPUs and RAM are so fast that when starting an application or the like, disk access really is the limiting factor. I suppose you can disagree, but things like compilation could easily end up two to five times as fast, in my estimation. Know anyone with a compilation step in their workflow?

I should have said "C: drive"/"hda1", but I wrote "boot media" to save having to think about my phrasing. I meant that's where you would install anything that is primary to your workflow and might read and write lots of files, because that's how it was programmed: git, your IDE, compiler, test suites, database, web server and log files, or whatever programs you create and manage your workspace with, whatever that may be (Photoshop, design software, etc.).

The point is, these are things you would never risk not having on permanent storage, and which are written with the expectation that they will be. If it's ironclad (six/seven sigma, and backed up to real permanent storage behind the scenes in case worse comes to worst), you wouldn't have to give up this abstraction. It would still be a hard drive and not, you know, merely the current contents of your RAM since you booted.


I just can't seem to figure out where you are coming from on this. My first question is: why would you plug a device with transfer rates in the 20-40 GB/s range into a SATA 3 (6 Gbps) port? Next, although we can wax poetic about the best case for everyone's use cases, how are you going to guarantee that the microcontroller works the way you want it to? Databases with properly configured indexes already retain the important data in RAM without further modification, and how would you ensure that the records you feel should be cached are cached, when a small microcontroller would barely have the resources to analyze the data stream to begin with?

Lastly, if you do care about data retention during power outages and sags, then you would likely want an APC/backup battery anyway. Even if the SSD/RAM hybrid has enough backup power to flush itself to disk, what about the data currently sitting in system RAM, still waiting to be flushed to it?


I still don't understand. If you have tons of RAM, your OS uses it for cache, so disk access is at RAM speed. I only reboot my computer a few times a year, so if I had 256GB of RAM in my computer everything I use at least once a month would be in cache.


I doubt very much that this is the case. Your OS can't report to, say, your database that something is written if in fact it is still being written. Likewise, if your compiler produces a bunch of object files before linking them, your OS won't just stick them in RAM and say "well, there's your file, it's written" while they're not actually written. I just don't think it works that way!

If it did, SSDs wouldn't be so much faster than spinning-platter HDDs...


A buffer cache in write-back mode would do this, but DBMSs are usually very strict about waiting for data to hit long-term storage. Many implementations access the disk directly, bypassing such mechanisms in the process.

http://www.tldp.org/LDP/sag/html/buffer-cache.html
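For illustration, here's a minimal sketch of the bypass being described: opening a file with O_DIRECT (Linux-specific) so I/O skips the kernel's buffer cache entirely, as many DBMSs do. The alignment requirement is real; the file name is made up.

    #define _GNU_SOURCE   /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* O_DIRECT tells the kernel to bypass the buffer cache, which
         * is how many DBMSs manage caching themselves. */
        int fd = open("datafile.db", O_RDWR | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Direct I/O requires buffer, offset, and length to be aligned
         * (typically to the 512-byte or 4 KiB sector size). */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) { close(fd); return 1; }
        memset(buf, 0, 4096);

        if (pwrite(fd, buf, 4096, 0) < 0) perror("pwrite");

        free(buf);
        close(fd);
        return 0;
    }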


Right, and this is just one example.


SSDs are still faster for non-cached reads, which are significant, since most people don't have as much RAM as they have permanent storage.

By the way, what you're proposing in terms of software has been available for a long time; multiple distros (including Ubuntu) can/could be booted completely to RAM, using tmpfs as the filesystem. For example:

At the boot prompt, type "knoppix toram". Knoppix will load the contents of the CD into ram and run from there. After boot up, the CD can be removed and the cd drive will be available for other uses. Because this will take up a lot of ram, it is recommended for those with at least 1 GB of ram.

It's definitely faster, I just don't have the RAM needed to fit my whole system in there.
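For anyone curious what tmpfs looks like programmatically, here's a minimal sketch (Linux, run as root) that creates a RAM-backed filesystem via mount(2); the mount point and size are arbitrary choices for the example, not anything the distros above use:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* Mount a 1 GiB RAM-backed filesystem at /mnt/ramdisk
         * (the directory must already exist). */
        if (mount("tmpfs", "/mnt/ramdisk", "tmpfs", 0, "size=1G") != 0) {
            perror("mount");
            return 1;
        }
        printf("tmpfs mounted at /mnt/ramdisk; files there live in RAM\n");
        /* Anything written here vanishes on unmount or power loss,
         * which is exactly the guarantee the proposed device tries to
         * preserve with its battery and SSD dump. */
        return 0;
    }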


Not only that: at the first power loss you would lose all your data. I wouldn't boot anything critical straight into RAM! It's just not the kind of guarantee we're used to.

If it were all in a sealed package that 'guarantees', at a very low firmware level, that the RAM will never power down, that would be a different matter.


TMS RamSans; even they migrated from pure RAM to SSDs.



