Samsung unveils 2.5-inch 16TB SSD (arstechnica.com)
292 points by twsted on Aug 13, 2015 | 166 comments

Every piece of storage news I've seen for the past year or two reinforces my opinion that there is a great deal of price-fixing happening in the consumer storage market. The price trend of 2TB HDDs, for example, just does not make sense.

When I see that a company can now create SSDs with ~16x more capacity than the best consumer option, I feel like something fishy is going on that is artificially slowing the pace of larger capacity drives making it into the hands of consumers at a reasonable price.

In the HDD market there are actually just two companies: WD and Seagate. There are other brands (Toshiba, HGST) but these are not completely independent (HGST is owned by WD, Toshiba still exists mostly because WD feeds it with technology whenever they are threatened by anti-monopoly rulings, so they can say competition still exists).

In the SSD market there are lots of brands, but only a few flash chip makers - so it's the same lack of competition but better hidden to the consumer.

For SSDs, there are four independent flash manufacturers, three of which are pretty big. They all have an in-house controller product line, and there are at least three major independent controller providers. It's a lot more competition than hard drives.

And the main reason the number of hard drive manufacturers is so low is that SSD is the competition and is winning for many use cases.

Cloud storage providers must be sucking up a huge amount of HDD supply... I wonder if that is putting a floor under consumer HDD prices.

That could be true, since most cloud providers don't use "enterprise" HDDs; they just buy whatever they've tested.

Nah, it was only really WD and Seagate even before SSDs came around at all.

> Toshiba still exists mostly because WD feeds it with technology whenever they are threatened by anti-monopoly rulings

From what I have read, Toshiba's 2.5" consumer/enterprise and 3.5" enterprise HDD business was acquired from Fujitsu [1] and has nothing to do with WD/HGST. WD had to transfer some/all (reports diverge) HGST 3.5" consumer HDD assets to Toshiba on the insistence of the EU Commission.

The Toshiba/HGST thing is a little opaque to me: HGST still sells a few DeskStar HDDs, notably DeskStar NAS, and newer Toshiba 3.5" consumer HDDs (MD-series 4TB+) look more like the enterprise HDDs that Toshiba has been selling for a while [2], not like the DT01ACA... models that were relabeled HGST DeskStar 7K3000 [3], although there are also pictures of the very same MD models that do look HGST-style [4]. Makes me unsure what to buy 3.5"-wise when I want the Hitachi reliability that I am used to (buying IBM/HGST for over a decade and never had a failure).

Then again, maybe I'm just naive and Toshiba is more dependent on WD than I think.

[1] http://www.cnet.com/news/toshiba-buys-fujitsu-hard-disk-driv... [2] https://www.alternate.de/p/o/a/Toshiba_PX3009E_1HP0_4_TB__Fe... [3] http://cdn1.goughlui.com/wp-content/uploads/2013/02/IMG_4989... [4] http://www.toshiba.eu/hard-drives/internal-hard-drives/sata-...

Interestingly, Seagate also owns Samsung's HDD manufacturing business. Samsung is focusing on chips as the future of storage, and obviously, they're right...

Keep in mind that this 16TB SSD costs $20,000 or more; it's not comparable to a $200 consumer hard disk.

Where in the article did it say that? The only mention of price is:

"We don't have a price for Samsung's 16TB PM1633a, but we can't imagine that it'll be cheaper than £5,000."

Based on that, you could reasonably say it will cost $7800 or more per current GBP->USD exchange rate.
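In plain numbers (Python sketch; the exchange rate is an approximation of the Aug 2015 GBP->USD rate implied above):

```python
# Rough floor on the USD price, converting the article's £5,000 figure.
gbp_floor = 5000      # lower bound quoted in the article, in GBP
gbp_to_usd = 1.56     # approximate Aug 2015 exchange rate (assumption)
print(round(gbp_floor * gbp_to_usd))  # ~7800
```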

Also, you might need to subtract VAT.

Do we know it will cost this much? A 4TB one is $2200 or less.

Price often goes asymptotic at the cutting edge.

Nitpick, but you almost certainly mean to add "asymptotically vertical"; "asymptotic" alone doesn't tell you anything about which direction the price curve is heading, only that it is approaching some limit.

Context makes it quite clear that it's going up. "Asymptotic" is a perfectly good description of an acceleration.

You can always add more words to take something that is perfectly clear and try to make it more clear. Add enough and you make it harder to understand.

So I prefer it written the way it is.

y = (-3x^2 + 2)/(x-1) is asymptotic both as x->1 and x->\infty.
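A quick numeric check of that example, just to show that "asymptotic" alone doesn't determine the direction:

```python
def y(x):
    # The example function above: asymptotic at x -> 1 and x -> infinity.
    return (-3 * x**2 + 2) / (x - 1)

# Approaching the vertical asymptote at x = 1 from either side:
print(y(0.999))  # large and positive
print(y(1.001))  # large and negative
```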

Precise language matters, one should not rely on the crutch of context if one wants to be clear.

you're reducing precision of the expression by adding extraneous data. The orientation of the asymptote is fully expressed by the context of price and the notion of high-end computing.

It's obvious from the context. Adding "vertical" is unnecessary. This is conversational writing, not legal nor instructional writing.

But this is not Reddit either.

Isn't that a tautology?

He wrote "often", not "always".

I meant the "asymptotic at the cutting edge" part; ignore the price. An asymptote is the cutting edge.

One thing about HDDs is that the marginal cost to the manufacturer is basically constant regardless of size. It costs them just over $100 per drive to make a drive. The cost of making 6 or 10TB drives is all in the R&D. Large cloud providers buy drives in such large volumes that they get access to pricing that normal people don't which is why they can provide things like Youtube, S3 and Glacier without going broke.
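A sketch of that cost structure (the R&D figure is a made-up placeholder; only the ~$100 build cost comes from the comment above):

```python
# Per-drive cost = constant build cost + R&D amortized over units sold.
def per_drive_cost(units_sold, build_cost=100, rd_cost=500e6):
    return build_cost + rd_cost / units_sold

for units in (1_000_000, 10_000_000, 100_000_000):
    print(units, per_drive_cost(units))
```

At high enough volume the amortized R&D nearly vanishes and the per-unit cost approaches the build cost, regardless of capacity.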

If it costs $100 to make an HDD, how are they selling 1TB drives for ~$50, and 3TB drives for ~$85? I sincerely doubt drive manufacturers are working with such terrible (i.e. negative) margins in the consumer space...

I think you're neglecting fabrication yield.

What's the marginal price on a part you can only make successfully one time out of ten?

I couldn't find info on SSD yields, do you know where to find it? I found that HDD factories generally target 97+% yield. So, if SSD yields are much lower, you bring up a good point.

I was referring more to the NAND fabrication yield, eg:



I'm making the assumption that NAND flash is binned and derated based on defects. ie, to make this drive, Samsung needs to fab 256Gbit dies. As this is on the leading-edge of their production capabilities, I'm guessing that the defect rate is not negligible, and that they accidentally make lower-capacity chips a significant portion of the time.

SSDs probably have very very good yield because unlike a CPU if any parts are damaged they can just work around them.

They can even sell them as lower capacity drives. I'd be surprised if the yield was not 100%.

The pricing isn't much better than what consumers get. They can charge 200% of cost for the product, though.

Have you seen the price of 10Gbit Ethernet ports? A technology that's over 10 years old, and hardly more advanced than things like Thunderbolt...

Not just storage. If you go backwards through the supply chains, you inevitably end up with a limited number of companies supplying core parts.

I feel that as well. We are told it will be a while until we see anything beyond single-digit TB, and suddenly Samsung drops a massive bomb with 16TB? I doubt that an average consumer would use 16TB, but who knows. Also, I have some qualms about storing on SSD knowing the data won't last forever.

Ignoring that it's a wholly new flash/storage chip just being introduced to the market, scaling up flash storage is not a great mystery or difficulty -- add more chips. The reason consumer drives are at the sizes they are is a matter of consumer tolerance for pricing levels, and we saw all of the noise when 1TB flash drives were announced, with people annoyed that they weren't less than $100.

I don't see much justification for the "consumer tolerance" argument. It seems like most people are holding off on SSD purchases, or choosing lower capacities because they think the drives are too expensive.

Your conspiratorial argument is that they're holding off higher capacity consumer drives (note that in the Enterprise market you have been able to get multi-TB solutions for years) for some reason, but that the existing options are too expensive. This is a very entitled argument where your core complaint is that you can't have difficult, expensive technology for cheap.

>your core complaint is that you can't have difficult, expensive technology for cheap.

That's a circular way of putting it, where you incorporate "expensive" (the thing that is under question) into your argument.

His core argument is that it shouldn't be that expensive, and maybe they are using their monopoly / cartel power to fix the prices, so they can milk the < 2TB consumer market first AND charge way higher in the enterprise for "premium" drives that aren't.

That might or might not be the case (they could still be as expensive to manufacture as they claim), but price fixing is something that disk and memory companies have been found guilty of doing in the past (in courts et al).

His core "argument" is that it's fishy that a device can appear that has 16x the storage space. That is precisely what they said, so this really isn't open to debate. But that device will be unfathomably expensive. We all know it will be. It's meant for extraordinarily rich tech companies that need hyper-density, and has positively nothing to do with the consumer market.

What it has to do with the consumer market is that they have the ability to manufacture a drive in the same form factor as the consumer space, at a large enough scale to offer it for sale. That has a lot of implications for the capabilities of their manufacturing processes.

In the past (and I think still currently), the true "bleeding edge" of the SSD market is made up of drives that take up full rack units. So, if this 16TB SSD was in a funky form factor, or even just 3.5", it would make way more sense to me and be less fishy.

So now we're on to completely different argument #3, which is that you don't like the form factor they use.

Again, aside from the fact that it's a new manufacturing process of flash cells, which is rather an important point, the people willing to pay the big dollars -- enterprises -- didn't want flash in hard drive/SSD form. When you're paying 10s of thousands for an extremely high performance system, you don't drop it behind an abstraction of SAS/ATA, but stick it in a PCI-E slot to get extraordinary performance and direct access. The choice of form factor was targeting the market, just as the choice of how much flash to put in a SSD drive is targeting a market.

And the truth is that Samsung didn't even announce a price for this, and it's unlikely this iteration ever becomes a real product. They put it in an SSD case to get press because it's neat, not because it makes sense.

>So now we're on to completely different argument #3, which is that you don't like the form factor they use.

Please stop putting words in his mouth (first you dismissed his accusation of price fixing as "your core complaint is that you can't have difficult, expensive technology for cheap", which was far from his "core complaint").

And, no, we're not "on to completely different argument".

This isn't a rhetorical competition, and he didn't put forward a false argument to trick anyone.

He just explained further the reason behind his original objection.

I've put no words in anyone's mouth. I have no idea why you keep responding while adding literally nothing of substance, but their first argument was that there's a conspiracy because how could densities jump so significantly, clearly showing that they were holding back. The next, in the face of the reality that the consumer market has no stomach for the price of devices with magnitudes less storage, is that the conspiracy is that prices are too high, making larger devices untenable, completely contrary to the first argument. The third is that the conspiracy is that enterprise flash devices come in non-consumer form factors (see points #1 and #2).

This is profoundly ignorant. Save trying to amble up to the high ground when you simply add more noise. It is the sort of tiring entitled nonsense you see on every Facebook post by Anandtech, everyone stomping their feet that they can't have a 2TB SSD for less than $100.

Actually, being the originator of the words in question, I do agree that you are "putting them in my mouth." At the very least, you seem to be inferring a lot of things which I did not imply or intend to imply.

The previous commenter is correct. I view all of these statements as clarification of a single argument and its originations. That argument involves no conspiracy theories, simply a suspicion of price fixing (that's not an outlandish suspicion, in my opinion. I mean...drive manufacturers have been found guilty of it before...)

I don't think it is a conspiracy that some enterprise drives come in odd form factors, you completely misread that part of what I said. My point was that, normally the most advanced enterprise drives come in strange (i.e. much larger) form factors. This is because they have not yet worked the size and power requirements down to fit the smaller form factor, which makes complete sense. However, this drive uses a standard form factor, which tells me it is not "bleeding edge." There are probably rack mount SSDs with much higher capacities than 16TB.

So, the reason it seems weird is that they have obviously passed that hurdle with the 16TB offering. They can now fit 16TBs of SSD storage in a standard form factor. It is odd because if they can fit 16TBs, and we assume that at scale, the variable costs are almost exclusively due to the capacity of the chips, why is there not a consumer drive even a factor of 4 smaller (i.e. 4TB) available? I understand there not being a 16TB drive...or a 12TB, 8TB, etc... available. But a factor of 16 seems oddly large.

I'm reading the Innovator's Dilemma right now, and I just finished the chapter about the storage industry. The author draws the conclusion that solid-state drives may eventually move upmarket from cash registers and embedded applications to PCs and such.

Having seen the move from 5.25" HDDs to 3.5" HDDs, then the move from desktops to laptops, and now seeing SSDs becoming extremely common in laptops, tablets, and phones, I have to believe that the author predicted the future when he wrote the book.

Since PC sales have dropped, people are not buying as many HDDs, and buying more SSDs, usually indirectly. Cloud infrastructure has likely gobbled up the existing HDD supply.

But even there, SSDs are preferred for many applications, such as databases, since they're faster overall, storage limitations be damned.

And now we're seeing the first SSD that has a capacity greater than HDDs, in a similar sized package. And no current HDD company has an SSD offering worth mentioning.

It's disruption happening right before our eyes. History seems to repeat itself all too often!

You do realize that the example of the hard disk industry in The Innovator's Dilemma is actually one of the weakest in the book:

"In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined,” according to an industry report. In 1997, the year Christensen published “The Innovator’s Dilemma,” Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars."


"Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the “Herculean managerial effort” of its C.E.O., Stuart Mahon. (“Mahon remembers the experience as the most exhausting of his life,” Christensen writes.) But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed."

Note that he published his book long after this stuff had been shown to be hopelessly wrong, without making any corrections.

Many of his examples are similarly terrible. And those are the ones he cherry-picks to back his thesis.


I found The Innovator's Dilemma to be infuriating. It's a classic 20/20 hindsight -- just vague enough to apply to everything but excuse itself from any conspicuous counter-examples (Apple, say). Like most management fads, I guess.

(Prediction: wait until electric, self-driving car tech becomes practical. Ford and GM will crush the "nimble" disruptors like bugs. Well, unless it's Apple. Then all bets are off.)

Now, the hard disk makers may well be going under, but let's set aside Clay Christensen's over-rated terminology.

Samsung is hardly a small startup "disrupting" big, slow stalwarts. Samsung dwarfs the hard disk makers -- it's more like Google "disrupting" libraries, or Walmart "disrupting" local businesses.

BTW in Christensen's terms, this new disk is a "sustaining" innovation -- it's a better, faster, more complicated, harder-to-manufacture iteration of an existing, successful product.

Since when is Samsung not an HDD company?

~2011, when Seagate acquired their HDD manufacturing business: http://www.notebookreview.com/news/seagate-acquires-samsungs...

Samsung has focused on SSD storage ever since...

Holy that is a lot of storage in a very small amount of space. Besides the fact that I want one right now I am starting to wonder how much heat this will generate.

A lot of 1U rack servers can fit about eight 2.5" drives. 128TB of storage in 1U is pretty crazy storage density.

Every time they reveal a larger-capacity drive I just wonder what the backup strategy is going to be. Longer tapes?

Redundant sets of drives... or high density backup tapes. Tape backups have come along with drive size increases:


looks like LTO-10 is planned to be 48TB per cartridge.

LTO6 is current. LTO7 is "future". LTO8 may never happen, let alone LTO9 or LTO10 (If SSDs prove to be superior to tapes... investments will change for sure.)

The chief benefit of LTO is that it still remains the cheapest and does in fact have huge sequential read/write speeds. Random read/write is even worse than discs but for backup purposes, sequential is king.

SSDs may have issues with long term 'disuse', as in you unplug the drive and leave it on the shelf, data loss may occur. With most tapes you can put them in cold storage and they should last decades.

> With most tapes you can put them in cold storage and they should last decades.

This is _hopefully_ true but operational history is so full of unpleasant surprises that I would hesitate to trust any type of storage which isn't regularly verified.

With previous generations of LTO, a colleague had encountered fun failure modes like the media degrading rapidly (unrecoverable in less than a year) when a tape was stored on its side, which turned out to be an “everyone knows” fact not mentioned anywhere in the tape or drive documentation. A different coworker had encountered some issues with a batch which had a defective lubricant causing the surface to break down over a couple of years.

One place we worked with had to carefully de-tune a new tape drive after learning that the old one had drifted out of alignment for at least a year before physically failing, which meant that most of their tapes were no longer readable by a drive in standard calibration.

This is not to say that tape doesn't have a place - analogous failures happen for everything else and the cost-per-GB is appealing. I just don't think we actually have a toss-in-on-the-shelf storage media which can be assumed to work over a long term. You can address those issues with a regimented approach for rotation and mixing physical devices, media, and location but that increases the cost of adding a new storage technology into the mix since you need to develop that operational confidence for each class.

"One place we worked with had to carefully de-tune a new tape drive after learning that the old one had drifted out of alignment for at least a year before physically failing, "

This sounds like a story two decades old; LTO and all modern tape drives have servo tracks on the tapes, i.e. the drive realigns itself to the tape track on the fly as the media passes the head. If the drive cannot do this, you get track-following check conditions during the write.

That's certainly possible – I thought that was their first generation LTO system but it might have been the previous one which was being replaced. It was a decade ago so both would still have been in service at that time.

The main point in mentioning it wasn't to say that tape is terrible but just that each unique class of hardware brings unique challenges which might not be obvious at first until you have a fair amount of operational time. (Thinking about the people who learned the hard way why RAID arrays should mix hard drives across batches and manufacturers)

Yeah but with drives and machines so cheap, most users can keep all their data live, on active drives. Actually on 3 or more drives geographically dispersed.

We asked Dell for backup solutions a while ago, and the only thing they told us was that LTO-based backup will decline and people will increasingly focus on offsite cold disk storage. However, they forgot that there are customers who don't have datacenters all over the place...

For flash devices, power consumption / thermal dissipation is usually a small constant for the control logic, plus a very small amount per read, plus a small amount per (page) write.

If you are limited to the write rate allowed by the SSD interface, then that will serve to limit the heat dissipation as well.

Backup to disk; replicate to an offsite disk array for DR purposes.

EDIT: I should have specified `backup to an SSD storage array.` Disk is just embedded in my brain.

Tape?? You can get 6TB spinning HD for 300 bucks. Call it 12 grand for 1:1 redundancy on spinning disks @128TB. Spinning disk sequential throughput is more than big enough to do this overnight every day, and probably enough to backup almost live.

Brand new LTO4 tapes can be had for about $30, and those go up to 1.6TB compressed, and LTO4 is old tech at this point.

So... 10 tapes per SSD if your data compresses well. I don't see that as practical at all.
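Penciling out the numbers quoted in this exchange (Python sketch; hitting the full 1.6TB compressed capacity is an optimistic assumption):

```python
import math

# Figures quoted in this subthread.
ssd_tb = 16
tape_tb, tape_cost = 1.6, 30   # LTO4 cartridge, compressed
hdd_tb, hdd_cost = 6, 300      # 6TB spinning disk

tapes_needed = math.ceil(ssd_tb / tape_tb)
hdds_needed = math.ceil(ssd_tb / hdd_tb)
print(tapes_needed, tapes_needed * tape_cost)  # 10 tapes, $300
print(hdds_needed, hdds_needed * hdd_cost)     # 3 drives, $900
```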

You mean the tens of thousands of $$$ SSD in the topic?

This can be had right here, right now. Upgrade it to modern tech (LTO6), and you get ~2.5TB/tape for about $60/tape. That's still the absolute best capacity-to-cost ratio for bulk storage that you'll get in 2015.

LTO7 is just around the corner and that tops out at 6.5TB/tape.

Nobody is arguing that tape doesn't provide the best capacity per dollar. However, I get the sense that tape hasn't kept up with spinning disks, let alone SSDs (on Moore's law).

It's just not the 50x factor it once was. More like 10x. Tape is on nobody's radar anymore, whereas as recently as 15 years ago it was still discussed by the mainstream IT commentariat. Geez, I even remember PC Magazine recommending desktop users buy some DAT-based tape system. Now, tape is nowhere to be seen.

And that's not really surprising to me when you think that a petabyte of highly responsive spinning disk array can be done for less than a 150 grand, hotswap redundancy included. That's for a superfast 2-d medium as opposed to an eternity-seek-time 1-d medium. Spinning disk is a medium which can be used for live redundancy, a crucial requirement in the internet age.

Sure I think the NSA and GOOG might have need to archive exabytes onto tape. But that's not a mainstream market and for precisely 100% of hacker news readers, tape doesn't exist anymore.

Considering that I just picked up a 24-unit LTO5 stacker and a box of tapes for ~$750, you may wish to decrement that 100% by a few points :)

The problem with live redundancy is just that, it's live. If your backups are live and online, they are not backups. (For the same reason that RAID is not backup, it's redundancy).

Think less "deployed broken code to prod" and more "this special snowflake server crashed" or "the datacenter caught on fire".

Disaster recovery.

The use case for tape is the same as the use case for something like Amazon Glacier. Perhaps you don't want multiple terabytes of your personal data being sent over the wire to a company that's no doubt been infiltrated by TLAs. Perhaps you want total control of your own data. Perhaps you don't want nasty surprises when it comes to Glacier's retrieval/data in/data out/you-didn't-wait-long-enough fees.

A single LTO5 drive can be had for about $300, tapes for $40/$50, and that'll get you around 2TB of storage each, more if your data compresses well.

Let's not pretend that there aren't real benefits to be had, here. Your use case isn't everyone else's use case, and certainly not enough to be making broad sweeping statements that nobody on HN uses it.

Can someone explain to my why SSDs still cost more than HDDs?

When I look at all the moving parts in an HDD, I'm shocked they can still be produced for less.

You're paying for $5 billion in lithography steppers and other fab equipment for one factory (you'll LOL at the cost of 1 deep-UV immersion litho stepper). That's amortized over 5 years. There's also masks, and process research costs, in addition to the basic materials costs.

Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap.

And if one atom is in the wrong place, you have to throw away the chip.

That kind of core expense doesn't exist in a hard drive factory. The disks in a hard drive don't have to be perfect crystals, for example. It's a LOT more expensive to produce chips.

Add in all the distribution and sales costs, and you'll understand why it's so expensive.

> And if one atom is in the wrong place, you have to throw away the chip

This is certainly the case for CPUs.

But DRAM & NAND? These are the typical case of designs where you can add redundancy to accommodate manufacturing defects.

I thought the reason that there are so many different models (i3, i5, and i7, for example) is that the more defects a chip has, the more of the surface becomes unusable, thus reducing the number of transistors in use (which is why an i3 is less powerful than an i5). Don't they technically have some sort of redundancy?

The most obvious is that they have multiple cores, and it's easy to completely disable a non functional one.

Now, whether Intel can be more granular than the core level, like running a core with a defective ALU, I really don't know.

What's publicly known from the binning process is that it involves disabling cores, reducing total cache size, and finding the maximum working frequency.

The bottom line is that it requires more effort to deal with defects in complex logic, for DRAM they would reduce the total memory size.

At the cost of increased die-size.

If it helps any just think of the costs as buying diamonds.

"Wow that's 16TB of diamonds!"


"this GPU uses a bigger diamond than that GPU"

It doesn't exactly help your cause that many people also believe the cost of diamonds is artificially inflated as well.

The cost of diamonds IS artificially inflated. They are useful for purposes other than the one in which the price inflation is a big deal, but it is a demonstrable fact that the price is inflated.

The cost of industrial diamonds is -one presumes- not subject to much artificial inflation.

Only problem with that is that a silicon wafer is like $50 off the shelf individually. They are incredibly cheap.

That's still a lot more expensive than aluminum platters. And you will need a lot of silicon to produce an SSD. You don't do it with just one chip.

That price is for single wafers off the shelf. They get a lot cheaper if you buy in bulk, and you can slice them a lot thinner than the sizes I used to use (where I know the price from).

> If it helps any just think of the costs as buying diamonds.

This is an unhelpful analogy.

Comparing the retail price of diamonds to the retail price of CPUs, RAM boards, and GPUs, I am led to believe that whatever is used as the substrate for modern high-performance ICs is actually rather cheap. I can -after all- get a reasonably fast combination CPU and GPU for $45.

If we ask the USGS, we discover that in 2003, the price of synthetic diamond suitable for reinforcing saws and drills sold for $1.50->$3.50 per carat. However, large synthetic diamonds with "excellent structure" suitable for -one presumes- processes that rely on the crystal's fine structural properties -just as CPU manufacture relies on silicon wafers with fine structural properties-, sold for "many hundreds of dollars per carat". [0]

One carat is 200 milligrams. An entire Core i3 appears to weigh 26,800mg [1]. Let's be generous and assume that the CPU die is 1/100th of that weight, or 268mg, or 1.34 carats. Given that CPU manufacture requires a substrate with excellent structure, just how much of a substance that costs many hundreds of dollars per carat can there be in a 1.34 carat device? (Especially when ones of similar weight constructed with similar materials can be had for $45 per, retail?) :)
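Re-running that carat arithmetic (the 1/100th die-weight guess is, as stated, generous and arbitrary):

```python
carat_mg = 200
cpu_package_mg = 26_800          # packaged Core i3 weight, per [1]
die_mg = cpu_package_mg / 100    # generous guess at die share of package weight
print(die_mg)                    # 268.0 mg
print(die_mg / carat_mg)         # 1.34 carats
```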

[0] http://minerals.usgs.gov/minerals/pubs/commodity/diamond/dia...

[1] http://www.cpu-world.com/CPUs/Core_i3/Intel-Core%20i3-2100%2...

Yes, a wafer is something like a hundred dollars, that's not the expensive part.

I know that the substrate isn't the expensive part. Even a cursory gut-check reveals a claim to the contrary to be bunk.

I felt that a somewhat detailed analysis of the inappropriateness of the analogy was better than a "Nuh uh! You're wrong!" response.

Well the thing is you're analyzing in a way that's both in-depth and shallow at the same time. It doesn't matter that they have 'excellent structure' unless you care about actual wafer costs. Just use diamond prices and dimensions.

> ...you're analyzing in a way that's both in-depth and shallow at the same time.

I can't really dispute that. I'm no expert in the field.

> Just use diamond prices and dimensions.

Isn't that more or less what I did?

Diamond price per gram depends on the quality of the diamond. If we're gonna address an opinion that includes statements like "Think of the cost of a modern high-performance IC as if it was made of diamonds, because diamonds and silicon are both crystalline structures, and silicon is chemically much like carbon, therefore the substrate manufacturing costs are bound to be very similar." [0], then it seems that we need to look at the cost of high-quality diamonds that are used for their crystalline properties, rather than just for their hardness.

I'm not at all sure, but I would suppose that it would be far more expensive to make one high-quality diamond sheet the size of a silicon wafer than it would be to make a bunch of high-quality diamonds each the size of a CPU die, or maybe cut down a larger one. If it is, then an analysis based just on like-sized crystals would be dramatically unfair. Perhaps you know far more about this than I do? Industrial crystal production is not exactly in my wheelhouse. :)

[0] Direct quote: "Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap." via [1]

[1] https://news.ycombinator.com/item?id=10056870

What I'm saying is: it might be appropriate to look at specific kinds of diamond because of the complexity of lithography and such. But the purity of the wafer doesn't matter because that has nothing to do with chip cost.

That post was wrong about that being a driver of costs, and it's not fruitful to build on that wrongness.

An analogy that leads you to the right conclusion for the wrong reason is a toxic thing.

Do you mean to say that silicon wafers with higher guaranteed purity are not more expensive than those with lower guaranteed purity? I'm seriously asking; I don't know.

To speak to the rest of your comment:

mozumder made an incorrect argument and backed it up with a dangerously misleading analogy. I attacked the analogy by demonstrating its inappropriateness.

In my most recent post, I have attacked his argument with an analysis of what appear to be the actual costs of the thing he's talking about.

A wafer cost isn't insignificant. It's still a lot more expensive than aluminum platters in a hard drive, especially when you're dealing with gobs of chips in an SSD.

Add in processing costs and it really becomes a mess.

So, yes, wafer costs matter when you have to produce tons of silicon for an SSD.

> A wafer cost isn't insignificant.

This [0] seems to indicate that in mid 2009, one could get a 300mm silicon wafer for -worst case- ~$120.

Likely usable wafer area: 90,000mm^2

Largest Intel i3 processor (Haswell) die area: 181mm^2

Max dies per wafer: 497

Silicon wafer cost per die:

* Assuming 0% defect rate: $0.24

* Assuming 50% defect rate: $0.48

* Assuming 99% defect rate: $30.00

Cheapest (Celeron) Haswell on sale at Newegg today: $44.99. Average i3 Haswell price: $140. [1]

Unless Reuters is misinformed, or wafer costs have exploded in the past six years[2], the cost of the wafer truly does appear to be insignificant, even if we assume that wholesale prices are 50% of retail prices.

[0] http://www.reuters.com/article/2009/07/21/shinetsu-idUSBNG50...

[1] https://pcpartpicker.com/trends/price/cpu/

[2] This seems unlikely, as memory and chip costs haven't exploded in the past six years.
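The arithmetic above can be sketched in a few lines (a rough sketch using the figures quoted in this comment; it assumes simple area division with no edge loss, and that only whole working dies count):

```python
# Back-of-envelope silicon cost per good die, using the figures quoted above.
wafer_cost = 120.0       # USD, worst-case 300mm wafer price from the 2009 article
usable_area = 90_000     # mm^2, likely usable wafer area
die_area = 181           # mm^2, largest Haswell i3 die

dies_per_wafer = usable_area // die_area   # 497, ignoring edge loss

def cost_per_good_die(defect_rate):
    # Only whole working dies count toward yield.
    good_dies = int(dies_per_wafer * (1 - defect_rate))
    return wafer_cost / good_dies

for rate in (0.0, 0.5, 0.99):
    print(f"{rate:.0%} defect rate: ${cost_per_good_die(rate):.2f} per die")
```

The 99% case lands at $30 rather than $24 because at that defect rate only 4 whole dies per wafer survive.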

This thread is about SSDs. You're forgetting that you need a hundred of these to make a 1TB SSD.

Consider the $0.24/die, and multiply that by 100 to get a 1 TB SSD drive.

Your SSD now has a minimum cost of $24, just for the silicon. That's extremely expensive. You can never sell your SSD for less than that, just to cover the silicon costs of a 1TB drive, never mind processing, manufacturing, distribution, sales, and profit. And you're competing against 5TB hard drives that sell for $100. (The 16TB SSD meanwhile apparently uses 500 chips...)

This is why wafer costs are like diamonds, instead of aluminum platters.
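Taking this comment's figures at face value (the $0.24-per-die number from upthread and the assumed 100 dies per TB), the floor argument works out to:

```python
# Silicon-only floor price for a 1 TB SSD vs. a whole 5 TB hard drive.
# Both inputs are the assumptions stated in the comment above, not measured data.
die_cost = 0.24          # USD per good die, from the wafer math upthread
dies_per_tb = 100        # assumed die count for 1 TB

ssd_silicon_per_tb = die_cost * dies_per_tb    # $24/TB before any other costs
hdd_retail_per_tb = 100 / 5                    # $100 5TB drive -> $20/TB, all-in

print(f"SSD raw silicon: ${ssd_silicon_per_tb:.2f}/TB")
print(f"HDD finished product: ${hdd_retail_per_tb:.2f}/TB")
```

Under those assumptions the raw silicon alone costs more per TB than an entire retail hard drive.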

> This thread is about SSDs.

This sub-thread is about your unhelpful and misleading equivalence and analogy. :)

Silicon wafer costs are like silicon wafer costs. Your diamond analogy is simply inappropriate.

We don't say "Aircraft grade aluminium costs are like diamonds, rather than hard drive aluminium platters." or "Fission reactor grade steel costs are like diamonds, rather than..." because this is an immensely silly thing to say that obfuscates the true cost of the material in question.

What's more, we can generally discover the high end of the true price of the material in question with a little work. As I demonstrated in my replies to you, silicon wafer costs are substantially cheaper than equivalent diamond costs.

If you had said something along the lines of "Due in part to the cost of silicon wafers, silicon-based data storage technologies are now and will be for the foreseeable future substantially more expensive on a per-GB basis than spinning rust or tape-based technologies.", I would have had absolutely nothing at all to object to.

> Consider the $0.24/die...

That figure is based on a particular die area. I would expect a flash memory die to be substantially smaller than a CPU die. This would drive the base cost per die down even further. Moreover, that figure was from 2009. Up to date figures are required to really put a floor on chip prices. :)

> And [that 1TB SSD] competes against 5TB hard drives that sells for $100.

Sort of. For every use that I have except for bulk data storage, I recognize the vast superiority of an SSD. The only HDDs in my computers are the ones I got for free with my laptops-turned-servers that don't do much disk IO, and the disk array that holds my 5TB-and-growing Postgres database.

For the average computer user, I would strongly recommend replacement of the HDD in their computer with an SSD. If you don't need to store more than 1TB of data[0], the performance gains over HDDs are just too great to use anything else.

I'm fairly confident that HDDs will be substantially cheaper per GB than SSDs for the foreseeable future. I'm -however- not convinced by your implicit argument that SSDs will always be -price-wise- unattractive when compared to HDDs. SSDs seem to be sold at the price-per-GB of the HDDs of ~3->5 years ago. We will inevitably see 500GB SSDs at the $80 retail price point.[1] This will make them a no-brainer for every big computer manufacturer. A really fast disk makes slow kit feel really fucking fast.

[0] In my experience, almost no non-technical user has more than 500GB of data that they care about on their machine at any one time.

[1] They're only a little more than twice that price now.

Unfortunately, SSDs are going to be more expensive for the foreseeable future, and part of that is because the material costs are much higher.

We need to make sure that everyone understands why, and part of the reason is that we're using lots of silicon crystals, which have the same lattice structure as diamonds, and those are always going to be more expensive than aluminum platters.

If you take it to the limit, an SSD won't be cheaper than hard drives even as processing costs go down, because they use so much silicon.

You say the silicon costs are insignificant, but it will be a limit as prices go down.

The diamond analogy works appropriately, and it's unhelpful and inappropriate to claim material costs are insignificant.

And people are always going to end up using as much space as given, so that's another mistake you're making. They will find ways, especially given high-res smartphones everywhere with cameras.

> ...people are always going to end up using as much space as given...

I used to be certain of that, based on my personal space usage habits. Based on my ongoing survey of both technical and non-technical computer users, I no longer believe that to be true.

The rise of The Cloud(TM) means that there are shockingly few users who intentionally keep a local copy of their data. Media streaming and synced storage means that a wide swath of the computer-using population store that shit remotely and throw away data when The Cloud(TM) gets full.

> ...it's unhelpful and inappropriate to claim [silicon wafer] costs are insignificant.

When the analysis demonstrates that the costs are an insignificant fraction of the total cost, then it is entirely appropriate to make that claim. :)

Silicon wafers may well remain more expensive than harddrive platters. The price of silicon wafers may well mean that SSDs will never reach price parity with HDDs. These facts don't magically make $0.25 per chip a significant factor in the manufacturing cost of a product that also required substantial original research and development to come to market. :)

You need to look at the amount of moving parts in the factory, not in the product.

Exactly. I just went to a talk by someone from ASML: a modern processor may go through 700 sequential steps of (UV) illumination, "sanding" (to molecular flatness), and adding a new layer of lithographic material. It's insane, and there are hardly any working specimens in the first batches.

>there are hardly any working specimens in the first batches

Why is that?

Way too many atoms in the wrong places.

Building a transistor on a chip is like making a building by bombarding meteors from space and hoping the craters form the shapes you want.

Building a chip is like making a city with that process.

I think the post was asking why the first batches are special in that regard (atoms in the wrong places), as opposed to the (presumably successful) later batches.

process tuning

It's ultraviolet voodoo, and every successive generation is tackling new problems of minuteness that have never been dealt with before.

When was the last time your first program in a language you've never used before compiled & ran on the first try?

Back in 1963, and it was my first dozen programs, one a week for 12 weeks. The 13th week, I wrote my first assembly language program, and it had a comma where it should have had a period (or was it the other way around?:-) ). I never did get that program working, because that was the week NASA sent up a new weather satellite, and there was no more computer time to waste on high school students and their trivial programs.

A large piece of spinning rust and some mechanics are cheaper to produce than billions of NAND cells.

Humans are pretty damn good at making small precise mechanical movements.

And it's a more mature arena; mechanical watches are I'm sure a better example (Gerry Sussman likes to tinker with them to relax, or at least did that while doing his Ph.D. thesis to stay (semi-)sane), but in the area I know best, everything came together for high power rifles in the 1898 Mauser (although reliably getting the steel right took longer world-wide for the first world).

Or look at internal combustion engines. The two absolutely critical technology improvements that changed things from WWI to WWII were "merely" refinements in those and radios. WWI IC engines were so rough they pretty much required wooden frames, and engine power was anemic, as WWI tanks show. Fast forward not very many years and we have much smoother and reliable powerful engines, and had developed the seeds of today's jet engines before the first transistor was demonstrated in 1947.

SSDs cost more because HDDs consist of technologies that have been understood and implemented in factories for a long time. Manufacturing of SSDs is different, and it'll take some time for manufacturing companies to get up to speed, at least to overtake the scaled assembly-line operations of HDD manufacturing.

My bet is by the end of next year, SSDs will be cheaper than equally sized HDDs. HDDs are certainly on their way out.

SSDs are just chips on boards, which are manufactured using very mature, very high volume technology and yet they improve at best 2x/year. There is no reason why SSD prices would drop 10x in one year.

Hetzler et al. have calculated that the industry would have to spend over $800B to build enough fabs to replace hard disks with SSDs. http://storageconference.us/2013/Papers/2013.Paper.01.pdf

I think the big story here is the number of chips they can stack. In the article you pointed to, they were using 8 stacked dies.

"The later capacity is accomplished using 8 stacked and thinned (< 75 um) NAND chips in a 1.2 mm package"

The Samsung one has 48 layers per the article. So that's a 6x improvement.

You're kind of mixing up a few things. 3D NAND (aka V-NAND) has many (48) layers per die, and then they also stack those dies. But they made each cell much larger, so going from 1 layer to 48 layers is only a net density increase of 2x.

You are absolutely correct. Thanks.

It's not always intuitive! Another incredible one: the silicon industry absolutely bends over backwards to keep using silicon and UV, instead of GaAs, EUV, and other exotic processes. As much as silicon & UV can be a colossal billion-dollar pain in the butt, it's still easier & cheaper than EUV.

Or automotive engines. We have all sorts of technology tacked on around the basic ICE gasoline engine to make them better- the combustion chamber is just a tiny piece of the machine, which is tended by countless devices managing temperature, airflow, fuel flow, air velocity, etc. Tremendously complex compared to electric motors- but in the end, they are still more popular than electric motors because of the fundamental problem of batteries.

A version of Moore's Law seems to apply to storage, which is very much a good thing. The first IBM Winchester I used cost a couple of years' salary and stored 30MB on 14" platters. The next I used was an 8" ~150MB and only cost a couple of months' salary. Forward 30 years and I can buy a 500GB drive the size of a stick of gum for a couple of hours' salary. 30 more years? Can't wait to see. I assume I will eat the stick of gum and by doing so know everything in the Library of Congress.

c:> park.exe

Moore's law is passing the baton from GHz to the storage stack. Whereas you once had a simple RAM + HD setup, you now have a teamworking hierarchy of storage technologies: Cache / 3d stacked mem / DRAM / X-point / SSD / HD. Each one of these is behaving just like GHz did: doubling in speed/capacity every 18 months. Given that this is where the performance bottleneck has been, we're looking good on exponential performance upside for a long time to come if we extrapolate the recent trend. Excellent.

Won't be long till the network is the bottleneck again. 10Gb is still relatively expensive for the end user, and 40 and 100Gb are out of reach for most budgets.

Funnily enough I was just this morning googling for 10Gbit Ethernet cards and a whole bunch of "Copper wire 10Gbit will hit primetime in 2015" results came up. Totally agree. Throughput is where it's at.

Moore's Law has always been concerned about transistor size. Solid State Drives are nothing but a big pile of transistors, so it's no surprise we're seeing incredible capacity growth.

However, some manufacturers have been backpedaling on NAND process size recently, since larger cells have better endurance. Capacity still scales well thanks to tech like Triple Level Cells (TLC) and 3D-stacked dies.

The really amazing thing is one of their other announcements [0]:

Samsung has designed the PM1725 to cater towards next-generation enterprise storage market. This new half-height, half-length card-type NVMe SSD offers high-performance data transmission in 3.2TB or 6.4TB storage capacities. The new NVMe card is quoted with random read speed of up to 1,000,000 IOPS and random writes up to 120,000 IOPS. In addition, sequential reads can reach up to an impressive 5,500MB/s with sequential writes up to 1,800MB/s. The 6.4TB PM1725 also features five DWPDs for five years, which is a total writing of 32TBs per day during that timeframe.

[0] http://www.storagereview.com/samsung_announces_tcooptimized_...

That does kick it up a notch versus Intel's current top of the line. The P3700 is max 2.0TB, 450,000 read iops, 175,000 write iops, and 2,800 / 2,000 MB/s.

It's also $3.25/gb for 800GB vs the Samsung PM1725's $2.15/gb for 800GB.

Hopefully there is a P3710 waiting in the wings that is competitive with Samsung's new offerings. I have had infinitely better luck in terms of reliability and performance consistency with Intel than any other SSD brand, and I think I'm not alone on that front.

"6.4TB is rated to handle five drive writes per day (32TB) for five years"

~10K cycles sounds good

Interesting given the reliability news Facebook posted on their SSDs. With a UBER of one error per 5x10^11 bits, you could not even read all the sectors on a 16TB disk reliably. Something I'll be looking at when I get my hands on one.

what's that in station wagons full of LTO6 tapes?

Beats me. I've never dealt with fractional station wagons before...

A compressed LTO6 is 6.25 TB, right? Let's just go with 3 of them.

So, I figured out how many carts we need. You calculate the fractional station wagon part. My math was never _that_ good ;-)

Just as a point of reference, a single LTO6 tape costs only around $25-$35. So we're talking about $105 at absolute most for three.

That is likely to be cheaper than a 16 TB SSD for a very long time to come. Tapes aren't going anywhere.

That's the problem, tape isn't going anywhere. A decade back, it boasted a huge capacity advantage. Now with all the newer tech, the spinning media storage advantages, etc., it can barely hold a big drive's worth of data. And the tape drives are expensive... LTO7 should hold like 50T on a thousand-dollar tape drive to look really interesting again.

At scale, tape still has the lowest cost per byte of any storage medium.

It's true for sufficiently large values of “at scale” but tape has uniquely high overhead costs – hardware, software, staffing – which have to be balanced out by those lower storage costs. HDD/SSD costs have been declining at a much faster rate so we're already at the point where only the largest storage consumers are going to reach the point where they see a return from the initial investment in tape.

LTO-7 is 6.4TB uncompressed. LTO-10 is 48TB.

Hopefully we'll see LTO-7 this year but probably in 2016. That puts the diff between LTO6 and LTO7 at 4 years.

Extrapolating...

LTO8 in 2020

LTO9 in 2024

LTO10 in 2028

LTO-10 would have ideally come in 2020, in keeping with LTO's earlier pace of 2 years per revision, which is also more in line with storage increases.

Not to be a killjoy, but given that LTO6's and SSDs are so similar in size and shape, and you're assuming they pack into a station wagon equally well, it seems like it would be a lot simpler to just compare their storage per unit volume directly. :)
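The direct comparison is a one-liner (cartridge and drive dimensions below are approximate assumptions of mine, not figures from the article):

```python
# Rough storage-per-volume comparison. Dimensions are assumed:
# LTO cartridge ~102 x 105.4 x 21.5 mm, 15mm-tall 2.5" drive ~100 x 70 x 15 mm.
def tb_per_liter(capacity_tb, w_mm, d_mm, h_mm):
    liters = (w_mm * d_mm * h_mm) / 1_000_000
    return capacity_tb / liters

lto6_density = tb_per_liter(6.25, 102, 105.4, 21.5)  # compressed capacity
ssd_density  = tb_per_liter(16, 100, 70, 15)

print(f"LTO6: ~{lto6_density:.0f} TB/L, 16TB SSD: ~{ssd_density:.0f} TB/L")
```

Under those assumptions the SSD wins by roughly 5-6x per unit volume, station wagon or not.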

You're not thinking hard enough! How hot is it outside? Can you pack some of the LTO6's on the dashboard? Or will they melt in the sun? I bet the SSDs can handle higher storage temps than the tapes, and therefore you can pack more of them in.

I'm off to research the weight differences, and see if the wagon's suspension factors in to this...

If anybody ever asks me what Hacker News is, I'm going to screenshot this thread and send it to them.

tip to tip efficiency... :-D

Yeah, my calculation only takes into account the actual cargo space in a station wagon. If you can fit 5 people in the station wagon too and a human is about 66 liters in volume...

Yeah, they're probably more heat resistant, but heating them up makes those trapped electrons more eager to flee.

I wonder how this will compare to Intel's 3D NAND flash chips (http://www.ipwatchdog.com/2015/08/12/intel-micron-develop-3d...). Some competition on similar technologies is never wrong!

You linked to an article about Intel's 3D XPoint memory, which isn't NAND flash or any other kind of flash. They are also doing 3D NAND flash, and that's what will be competing against Samsung's 3D NAND flash.

I'm slightly surprised by the numbers given for IOps. The example they give is 48 drives giving 2MM IOps:

2,000,000 / 48 = 41,666.66… IOps

45k IOps for 16TB limits its use cases a bit. I don't know enough about storage to make an educated guess, but anyone know what the constraint there might be? Aren't there controllers that can do 1MM IOPS on single EFDs? 45k is still a ton of operations, but I expected more somehow.

45k iops is not terrible, but it's not competitive with current Intel enterprise SSDs (S3500 is 70k+, S3710 is 85k). I suspect that Samsung had to make huge sacrifices to the controller and DRAM portions of the drive to fit that many NAND chips into the 2.5" form factor. They're basically trying to create a new class of flash storage, which is space-optimized rather than performance-optimized.

I'm sure there's a market there, but I don't know how big it is. This is denser than current hard drives, but total cost is probably heavily in favor of hard drives for most use cases.

I find it particularly confusing that Samsung seems to have gone for a SAS SSD versus NVMe. NVMe would allow them to do a PCIe card form factor, which would surely be easier from a physical space perspective. And it's not like anyone has a PCIe flash product at 16TB either -- Fusion-io tops out at 6.4TB.

NVMe also might allow them to improve the iops. Intel's P3500 NVMe is 430k iops at 2TB. Night and day compared to this Samsung drive. So in one 2U chassis you could have any of:

  24x2TB Intel P3500
  = 48TB
  = 10,320,000 iops (read 4k)

  24x1.6TB Intel S3500
  = 38TB
  = 1,572,000

  24x16TB Samsung PM1633a
  = 384TB
  = 1,000,000 iops

  (meanwhile HDD would have far lower iops, but also probably a lot cheaper)
While the Samsung one is alluring from a space perspective, I can't really see replacing either the 'fast SSD' tier or the 'slow HDD' tier with it in my deployments.
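The chassis totals above fall out of simple multiplication (per-drive figures taken from the posts; the Samsung per-drive number is inferred from the article's 48-drive, 2MM-IOPS example):

```python
# Per-2U-chassis capacity and IOPS for two of the options listed above.
samsung_iops = 2_000_000 / 48      # ~41.7k per drive, from the 48-drive example

def chassis_totals(drive_count, tb_each, iops_each):
    # Returns (total TB, total IOPS) for one chassis, assuming linear scaling.
    return drive_count * tb_each, drive_count * iops_each

print(chassis_totals(24, 2.0, 430_000))       # Intel P3500: 48 TB, 10.32M IOPS
print(chassis_totals(24, 16.0, samsung_iops)) # Samsung PM1633a: 384 TB, ~1M IOPS
```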

> I suspect that Samsung had to make huge sacrifices to the controller and DRAM portions of the drive to fit that many NAND chips into the 2.5" form factor.

Really? I've got a couple of 128GB SDHC cards here -- and while they might be less performant than SSDs... I just tried to stack them on the back of a 2.5" hdd -- and I guesstimate that you'd at least be able to fit 6x6=36 of them (plastic frame and all) on the back of a 2.5" drive -- and stacking them 5 high would still be way below the width of a 2.5" hdd.

And that's not just 128GB of storage, but including 36x5 controllers etc? (Not to mention lots of plastic).

I'm prepared to be dead wrong -- but "fitting" 16TB flash into the behemoth size that is a 2.5" hdd -- doesn't seem like much of a challenge?

I don't know what the Samsung drive looks like internally, and obviously they did figure some way to do it. For comparison, here's a teardown of an Intel S3710: http://www.tomsitpro.com/articles/intel-dc-s3710-enterprise-...

It has 16 NAND packages, the controller, two 1GB DRAM chips and capacitors. No idea if the Samsung drive includes capacitors, but I sure hope it does.

The Intel board fits in a 7mm enclosure, but 2.5" enclosures can go up to 15mm. To be generous, let's say that Samsung fit two double-sided circuit boards into the enclosure and also squeezed another 4 NAND packages in per-board. The NAND dies are 256Gbit vs Intel's 128Gbit, so with similar NAND packages that gets them to 10TB.

So now you either need to fit more NAND per-package -- no idea what die size they are -- or add more packages. Maybe their packages are physically smaller or maybe they're able to get >256GByte per-package. Either would help tremendously.

But regardless, that is a lot of packages for your controller to handle and if you're constrained on physical space you aren't going to be able to put additional DRAM chips on the board. You could replace the 1Gbit chips with 8Gbit chips in a similar footprint and maintain your 1,000:1 ratio of NAND:DRAM, but those chips will obviously cost a substantial amount more. I feel like this drive is going to really blow minds in terms of cost.

> I'm sure there's a market there, but I don't know how big it is. This is denser than current hard drives, but total cost is probably heavily in favor of hard drives for most use cases.

I'm not an expert on this, but my impression is that a lot of organizations that need a lot of space would be much happier with larger-capacity-but-slower drives because those drives can be so much cheaper than trying to build out more space.

Density is nice, but I always look at that from a total cost perspective. So the real question is how much will this drive cost. I suspect it will be at least $1.20/gbyte -- not unreasonable considering that the Intel enterprise SSD lineup ranges from $0.80 - $1.60/gbyte.

With the Samsung 16TB SSD, I could fit 384TB in a 2U chassis and a total of 8.8PB in a rack (of 23 hosts). That's $10.6mm in disks in that one rack.

Or I could go with hard drives (8TB, 7200rpm, enterprisey, $700) and fit 288TB in a 4U chassis and 3.1PB in a rack. I would need three racks instead of one rack to equal the storage capacity. However, it costs me $832,000 in disks.

There's really no way that your fixed costs for 2 racks can make a dent in $9.7mm, even factoring in the differences in power utilization between the two. So you'd have to get a substantial benefit from the performance differential between a HDD and this SSD, but not to the point where you need the 82x performance improvement of a faster NVMe drive (such as the Intel P3500).
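Spelling out that comparison (the per-GB prices are the assumptions stated above; the 11-chassis-per-rack count is my assumption, chosen because it reproduces the post's ~3.1PB/rack and ~$832k figures):

```python
# Rack-level disk costs for the SSD vs HDD build-outs described above.
ssd_rack_tb = 23 * 24 * 16                 # 23 2U hosts x 24 bays x 16 TB
ssd_disk_cost = ssd_rack_tb * 1000 * 1.20  # at an assumed $1.20/GB

hdd_drives = 3 * 11 * 36                   # 3 racks x 11 4U chassis x 36 bays
hdd_disk_cost = hdd_drives * 700           # $700 per 8TB enterprise drive

print(f"SSD rack: {ssd_rack_tb / 1000:.1f} PB, ${ssd_disk_cost:,.0f}")
print(f"HDD (3 racks): {hdd_drives * 8 / 1000:.1f} PB, ${hdd_disk_cost:,.0f}")
```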

if they really wanted to make waves, they would unveil the world's fastest AND the world's largest hard drive, two in one, with an onboard battery and a hybrid 64, 128, or 256 GB of RAM (not SSD) in 2x, 4x, or 8x 32-gig DIMMs exposed as a physical drive, costing +/- $800, $1600, and $3200 respectively, in addition to the 16 TB second physical drive, all integrated in one package so you can't disconnect the battery and nuke your lightning-fast drive without being extremely aware that you're doing so.

The hard drives would have ironclad firmware that keeps the RAM refreshed until its battery goes down to 15% (or whatever the conservative 10 minutes of power is), at which point it takes the ten minutes to dump the contents of that RAM to SSD, and reverts to having that drive also be SSD until the power is reconnected long enough to charge the battery back up to 80%. Then it reads it back into RAM and continues as a lightning-fast 64 GB + very fast 16 TB drive.

You would store your operating system on the lightning-fast drive.

The absolute nightmare failure state isn't even that bad, as even though the RAM drive should be as ironclad as SSD, in case it ever should lose power unexpectedly through someone opening the device and disconnecting the battery or something, it can still periodically be backed up, so that if you pick up the short end of six sigma, you can just revert to reading the drive from SSD rather than RAM and lose, say, at most 1 day of work.

thoughts? I bet a lot of people would be happy to pay an extra $800 to have their boot media operate at DIMM speed, as long as the non-leaky abstraction is that it is a physical hard drive, and the engineering holds up to this standard.

There is a lot of software out there that is very conservative about when it considers data to be fully written - it would be quite a hack for Samsung to hack that abstraction by doing six or seven sigma availability on a ramdrive with battery and onboard ssd to dump to.

The basics of your idea were captured in a device in 2009: http://2xod.com/articles/ANS_9010_ramdisk_review/

It would be very interesting to see a similar product being introduced using contemporary technology, though. One question is what sort of interface it would communicate over to leverage the higher transfer speed.

I wonder if they have a patent. The idea is important and good enough to patent, and if they do I would defer to them on it. I wonder why they're not making anything these days?

I think it is fine not to have any higher interface to leverage the transfer speed. RAM latency and speeds can obviously saturate disk interfaces, but I doubt SSDs come close. So it should be a large jump in performance regardless.

it goes way back to 2005: http://www.anandtech.com/show/1742

I think there have been RAM-based storage devices even older than that, connected to IDE. I have no idea how one would go about finding a reference these days though.

Yeah, I'm pretty sure I remember a similar card being announced when DDR was being widely adopted, but I can't find anything (which is pretty interesting in itself on the Internet, I once searched for specs on a dialup modem and could not find a mention of it, like it never existed :-))...

By relying on closed software on the drive's controller chips, wouldn't it just get more complicated? With the price of RAM being as cheap as it is, why not just build a series of systems with 100GB+ of RAM and cache away?

Relying on a drive controller might seem the right way to go but especially for corporate installations I would believe it would be beneficial to have the fine grained control a dedicated server could provide.

Because systems, including servers, don't expose an abstraction of being permanent (always-on) storage that never gets suddenly lost (rebooted, loses power) for any reason at any time. How could you run a laptop with a server inside that cached 100GB+ of RAM? You would have to wait for it to load from RAM. Not so with the device I outline: if it has 48 hours of trickle charge (my calculations could be wrong but apparently it just takes mere milliwatts to refresh a DIMM), then as long as it has received power in the last 48 hours it would be instant-on. Everything you do is instant, EVEN IF it's written against software that writes to permanent storage.

Sounds very much like the current suspend to RAM functionality that every laptop has, at great expense and complexity.

only if you think having every disk access occur at ram speed rather than SSD speed makes no difference. CPU's and RAM are so fast, I think often in starting an application or the like, disk access really is the limiting factor. I suppose you can disagree. things like compilation could easily end up twice or five times as fast in my estimation. know anyone with a compilation in their workflow?

I should have said "C: drive"/"HDA1" but wrote boot media so I could save having to think about my phrasing. I meant that's where you would install anything that is primary to your workflow and might read and write lots of files, because that's how it was programmed, git, your ide, compiler, test suites, database, webserver and log files, or whatever programs you create and handle your workspace with, whatever that may be (photoshop, design software, etc).

the point is, things you would never risk not having on permanent storage, and which are written with the expectation that they will be. if it's ironclad (six/seven sigma, and backed up to real permanent storage behind the scenes in case worse comes to worst), you wouldn't have to give up this abstraction. it would still be a hard drive and not, you know, the current contents of your ram since you booted.

I just can't seem to figure out where you are coming from on this. My first question is, why would you plug a device with transfer rates in the 20-40GB/s range into a SATA3 (6Gbps) port? Next, although we can wax poetic on what exactly is the best case for everyone's use cases, how are you going to guarantee that the microcontroller will work the way you want it to? Databases with properly configured indexes will retain the important data in RAM without further modifications, and again, how would you ensure that the records you feel should be cached are cached, since the small microcontroller would barely have the resources to analyze the data stream to begin with?

Lastly if you do care about data retention during power outages and sags then you would likely want an APC/backup battery. Even though the data stored in the SSD/RAM hybrid might have enough backup power to flush to disk how about the data that is currently in RAM waiting to be flushed as well?

I still don't understand. If you have tons of RAM, your OS uses it for cache, so disk access is at RAM speed. I only reboot my computer a few times a year, so if I had 256GB of RAM in my computer everything I use at least once a month would be in cache.

I doubt very much that this is the case. Your OS can't possibly report to, say, your database, that something is written, if in fact it is still being written. Likewise if your compiler produces a bunch of object files before linking them, your OS won't just stick them in RAM and say "well, there's your file, it's written" while not actually being written. I just don't think it works that way!

If it did, SSD's wouldn't be so much faster than spinning-platter HDD's...

A buffer cache in write-back mode would do this, but DBMSs are usually very strict when it comes to waiting for data to hit long-term storage. Most of the implementations access the disk directly, bypassing such mechanisms in the process.


right, and this is just one example.

SSDs are still faster for non-cached reads, which are significant, since most people don't have as much RAM as they have of permanent storage.

By the way, what you're proposing in terms of software has been available for a long time; multiple distros (including Ubuntu) can/could be booted completely to RAM, using tmpfs as the filesystem. For example:

At the boot prompt, type "knoppix toram". Knoppix will load the contents of the CD into ram and run from there. After boot up, the CD can be removed and the cd drive will be available for other uses. Because this will take up a lot of ram, it is recommended for those with at least 1 GB of ram.

It's definitely faster, I just don't have enough RAM to fit my whole system in there.

Not only that: at the first power loss you would lose all your data. I wouldn't boot anything critical straight to RAM! Those just aren't the guarantees we're used to.

If it were all in a sealed package that 'guarantees' the RAM will never power down, at a very low firmware level, that is a different matter.

TMS RamSans: even they migrated from pure RAM to SSDs.

Am I right to assume that NAND flash has higher storage density than magnetic disks? I've been trying to find some definitive data on this but have failed so far. I'd really appreciate it if someone could point me in the right direction.

I found a paper from 2013 that compares storage densities up to 2012:


In 2012 they list the density of magnetic disks as 750 Gb/in^2 and NAND flash as 550 Gb/in^2. I'm not sure how the numbers have changed with 2D NAND, but 3D NAND probably pushes the density way past magnetic.


No, 3D NAND is just a few layers, for heat-dissipation reasons, not a giant chunk of storage.

Yes, flash density is much higher. Partly that's because putting circular platters in a rectangular case wastes a lot of space and partly because you can stack eight flash dies into a package around 1 mm thick.

Easy solution: square platters

Complete with read/write heads hanging off an X-Y plotter mechanism.

Either that or keep the platters circular but make the case hexagonal.

The smallest-height 2.5" HDD is 100 x 70 x 5 mm (35,000 mm^3), max capacity 4 TB.

MicroSD is 15 x 11 x 1 mm (165 mm^3), max capacity 200 GB.

MicroSD is less than 0.5% of the volume of a 2.5" HDD, but 5% of the capacity. So MicroSD is an order of magnitude more dense than a 2.5" HDD.

MicroSD still has to fit a little controller in there, so the comparison isn't particularly fair to flash. I expect heat management to be a problem when scaling up, though, apart from any manufacturing difficulties.

I was screwing around with writing firmware images to a micro SD card last night. Placed in a small USB adapter, the card got surprisingly hot after just writing a 4GB image to it. Fill a 2.5" form factor with the things and you'd probably start a fire.

Whatever the price, you need to double the cost, because you'll need to run them in at least RAID 10 to be remotely safe.
