For the most part this assumes traditional disks and the problems that come with them. The problem with RAMCloud is that it's expensive.

The problem is that memory is expensive, and when you can only put ~64GB in a 1U server, the rack space costs add up too. For about the same money as 64GB of RAM you can buy 1TB of Intel SSD storage (6x160GB drives), and that will fit in a 1U server as well. SSDs have very good random read performance, and it is likely to get significantly better in the coming years. RAM is already fast enough for this type of job; the real problem is lack of capacity and high cost.
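A quick back-of-the-envelope using those figures (the dollar amount below is a placeholder assumption; only the ratio matters):

    # Rough cost-per-GB comparison from the numbers above.
    price = 3000.0               # assumed total spend in dollars (hypothetical)
    ram_gb, ssd_gb = 64, 1000    # capacities quoted above

    ram_per_gb = price / ram_gb
    ssd_per_gb = price / ssd_gb
    print(f"RAM costs ~{ram_per_gb / ssd_per_gb:.0f}x more per GB than SSD")
    # -> RAM costs ~16x more per GB than SSD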

I suspect that companies like Facebook, who have relied mostly on RAM so far, will start moving to SSDs to cut costs - they have already moved away from NetApp for their storage for the same reason.




This isn't an either/or proposition.

The 5 minute rule gives us a simple planning rule of thumb for comparing storage technologies:

Given some access frequency: if an item is accessed more often than the break-even time, IOPS dominate the cost calculation; if it is accessed more rarely, capacity cost dominates.

For current technology, roughly speaking, the break-even time between RAM and SSD is 5 minutes, and between SSD and spinning disk it is about 6 hours.
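A minimal sketch of the break-even arithmetic (after Gray & Putzolu's five minute rule); all device prices and IOPS figures below are illustrative assumptions, not measurements:

    # Break-even interval: how rarely can an item be accessed before
    # holding it in the faster tier stops paying for itself?
    def break_even_seconds(device_price, device_iops,
                           faster_price_per_gb, item_size_gb):
        cost_per_iops = device_price / device_iops          # $/IOPS on the slow tier
        cost_to_cache = faster_price_per_gb * item_size_gb  # $ to pin item in fast tier
        return cost_per_iops / cost_to_cache

    page_gb = 4096 / 2**30  # a 4KB page, expressed in GB

    # RAM vs SSD: assumed $700 SSD doing 20k random IOPS, RAM at $30/GB
    print(break_even_seconds(700, 20_000, 30, page_gb))   # ~306s, about 5 minutes

    # SSD vs disk: assumed $75 disk doing 100 random IOPS, SSD at $3/GB
    print(break_even_seconds(75, 100, 3, page_gb))        # ~65,000s, about 18 hours

The exact thresholds swing around with prices, which is why these are rules of thumb rather than constants.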

To cost optimize a particular system you need to know the distribution of access times. For something like Facebook the accesses are heavily biased towards recently created items (i.e., almost all users start walking the data graph from the top of their news feeds, and days-old data is very unlikely to be touched).

This can make it difficult to get utility out of something like SSD, which straddles a thin band in the middle of the storage hierarchy. Your dollars may be better spent on RAM and disk in the proper proportions.
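As a toy illustration of that allocation problem (thresholds taken from the assumed numbers above):

    # Assign an item to the cheapest tier given its mean access interval.
    RAM_SSD_BREAK_EVEN = 300       # seconds, assumed (~5 minutes)
    SSD_DISK_BREAK_EVEN = 21_600   # seconds, assumed (~6 hours)

    def tier_for(access_interval_s):
        if access_interval_s < RAM_SSD_BREAK_EVEN:
            return "RAM"
        if access_interval_s < SSD_DISK_BREAK_EVEN:
            return "SSD"
        return "disk"

    # With a Facebook-like skew, most items are either very hot or very
    # cold, so little of the working set lands in the SSD band.
    for interval in (10, 3_600, 864_000):   # 10s, 1 hour, 10 days
        print(interval, "->", tier_for(interval))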


But as the article points out, capacity per dollar/watt/cm^3 is increasing exponentially for all of these storage media, and is expected to keep doing so for many years to come. Throughput and latency per dollar/watt/cm^3, however, are not improving nearly as quickly.

This means that throughput and latency will be the scarcest resources in the future. Regardless of the demands of your application, a point will eventually be reached where SSD has enough capacity at $X but HDD no longer has good enough throughput or latency at $X, so you switch to SSD. Eventually the same logic will apply for SSD -> RAM.
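A sketch of that crossover, provisioning enough devices to meet both a capacity target and an IOPS target (all device specs assumed for illustration):

    import math

    def provisioning_cost(need_gb, need_iops, dev_gb, dev_iops, dev_price):
        # You must buy enough devices to satisfy BOTH constraints.
        n = max(math.ceil(need_gb / dev_gb), math.ceil(need_iops / dev_iops))
        return n * dev_price

    need_gb, need_iops = 10_000, 50_000
    # assumed: 2TB HDD / 150 IOPS / $150, and 160GB SSD / 20k IOPS / $500
    print("HDD:", provisioning_cost(need_gb, need_iops, 2_000, 150, 150))   # $50,100 (IOPS-bound)
    print("SSD:", provisioning_cost(need_gb, need_iops, 160, 20_000, 500))  # $31,500 (capacity-bound)

Once the workload's IOPS demand grows past what cheap spindles can deliver, the HDD bill is set entirely by throughput rather than capacity, and SSD wins.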


Well, HDD access times have never really improved in the last 20 years [as they state], because the problem is basically rotational latency: to get better speeds than a 15K RPM disk you need 30K RPM, 60K RPM, 120K RPM disks, etc., which would be crazy.
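The arithmetic makes the point: average rotational latency is half a revolution, so even absurd spindle speeds only claw back milliseconds:

    # Average rotational latency = time for half a revolution.
    for rpm in (7_200, 15_000, 30_000, 60_000, 120_000):
        ms = 60_000 / rpm / 2   # 60,000 ms per minute, half a turn
        print(f"{rpm:>7} RPM -> {ms:.2f} ms")
    # 7200 RPM -> 4.17 ms ... 120000 RPM -> 0.25 ms

Even a (physically implausible) 120K RPM drive would still be far slower than flash, which responds in tens of microseconds.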

SSDs are pretty new and are already very fast, despite several problems the makers haven't quite worked out yet. Random access performance can be improved by adding more chips in parallel, and access times will no doubt improve with faster clock speeds and reduced feature sizes, the same way CPUs and memory improve now.

SSDs are lower power than HDDs, while memory is high on power usage - not to mention the savings from having fewer servers.

SSDs are simply too new for people to have designed for them. If you look at what http://www.rethinkdb.com/ is doing, or the TRIM command, or log-structured file systems, there is quite a lot left to be done to update the software people use to take advantage of SSDs. It has all been written with the limitations of HDDs in mind.


To put it another way: spinning disks can do ~1MB/sec of random writes, SSDs can do ~40MB/sec, and with only 3 disks (or mirrored pairs, if RAIDed) you can saturate a 1Gbit network connection.

http://www.anandtech.com/storage/showdoc.aspx?i=3631&p=2...
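The saturation claim checks out as rough arithmetic:

    # How many ~40MB/sec drives does a 1Gbit link absorb?
    link_mb_per_s = 1000 / 8            # 1Gbit/s ~= 125MB/s
    ssd_random_write_mb_per_s = 40      # figure from the comment above
    print(link_mb_per_s / ssd_random_write_mb_per_s)  # ~3.1 drives (or mirrored pairs)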



