When I see that a company can now create SSDs with ~16x more capacity than the best consumer option, I feel like something fishy is going on that is artificially slowing the pace of larger capacity drives making it into the hands of consumers at a reasonable price.
In the SSD market there are lots of brands, but only a few flash chip makers - so it's the same lack of competition but better hidden to the consumer.
From what I have read, Toshiba's 2.5" consumer/enterprise and 3.5" enterprise HDD business was acquired from Fujitsu and has nothing to do with WD/HGST. WD had to transfer some/all (reports diverge) HGST 3.5" consumer HDD assets to Toshiba at the insistence of the EU Commission.
The Toshiba/HGST thing is a little opaque to me: HGST still sells a few DeskStar HDDs, notably the DeskStar NAS, and newer Toshiba 3.5" consumer HDDs (MD-series, 4TB+) look more like the enterprise HDDs that Toshiba has been selling for a while, not like the DT01ACA... models that were relabeled HGST DeskStar 7K3000 drives, although there are also pictures of the very same MD models that do look HGST-style. Makes me unsure what to buy 3.5"-wise when I want the Hitachi reliability that I am used to (I've bought IBM/HGST for over a decade and never had a failure).
Then again, maybe I'm just naive and Toshiba is more dependent on WD than I think.
"We don't have a price for Samsung's 16TB PM1633a, but we can't imagine that it'll be cheaper than £5,000."
Based on that, you could reasonably say it will cost $7,800 or more at the current GBP->USD exchange rate.
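As a quick sanity check on that figure (the 1.56 GBP->USD rate is an assumed 2015-era value, not from the article):

```python
# Convert the quoted floor price; the exchange rate is an assumption.
price_gbp = 5_000       # "can't imagine that it'll be cheaper than £5,000"
GBP_USD = 1.56          # assumed 2015-era rate
print(f"${price_gbp * GBP_USD:,.0f}")   # $7,800
```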
You can always add more words to take something that is perfectly clear and try to make it more clear. Add enough and you make it harder to understand.
So I prefer it written the way it is.
Precise language matters, one should not rely on the crutch of context if one wants to be clear.
What's the marginal price on a part you can only make successfully one time out of ten?
I'm making the assumption that NAND flash is binned and derated based on defects. ie, to make this drive, Samsung needs to fab 256Gbit dies. As this is on the leading-edge of their production capabilities, I'm guessing that the defect rate is not negligible, and that they accidentally make lower-capacity chips a significant portion of the time.
They can even sell them as lower-capacity drives. I'd be surprised if the effective yield was not close to 100%.
That's a circular way of putting it, where you incorporate "expensive" (the thing that is under question) into your argument.
His core argument is that it shouldn't be that expensive, and maybe they are using their monopoly / cartel power to fix the prices, so they can milk the < 2TB consumer market first AND charge way higher in the enterprise for "premium" drives that aren't.
That might or might not be the case (they could still be as expensive to manufacture as they claim), but price fixing is something that disk and memory companies have been found guilty of doing in the past (in courts et al).
In the past (and I think still currently), the true "bleeding edge" of the SSD market is made up of drives that take up full rack units. So, if this 16TB SSD was in a funky form factor, or even just 3.5", it would make way more sense to me and be less fishy.
Again, aside from the fact that it's a new manufacturing process of flash cells, which is rather an important point, the people willing to pay the big dollars -- enterprises -- didn't want flash in hard drive/SSD form. When you're paying 10s of thousands for an extremely high performance system, you don't drop it behind an abstraction of SAS/ATA, but stick it in a PCI-E slot to get extraordinary performance and direct access. The choice of form factor was targeting the market, just as the choice of how much flash to put in a SSD drive is targeting a market.
And the truth is that Samsung didn't even announce a price for this, and it's unlikely this iteration ever becomes a real product. They put it in an SSD case to get press because it's neat, not because it makes sense.
Please stop putting words in his mouth (first you dismissed his accusation of price fixing as "your core complaint is that you can't have difficult, expensive technology for cheap", which was far from his "core complaint").
And, no, we're not "on to completely different argument".
This isn't a rhetorical competition and he didn't put forward false argument to trick anyone.
He just explained further the reason behind his original objection.
This is profoundly ignorant. Spare us the attempt to claim the high ground when you simply add more noise. It is the sort of tiring, entitled nonsense you see on every Facebook post by Anandtech: everyone stomping their feet that they can't have a 2TB SSD for less than $100.
The previous commenter is correct. I view all of these statements as clarification of a single argument and its originations. That argument involves no conspiracy theories, simply a suspicion of price fixing (that's not an outlandish suspicion, in my opinion. I mean...drive manufacturers have been found guilty of it before...)
I don't think it is a conspiracy that some enterprise drives come in odd form factors, you completely misread that part of what I said. My point was that, normally the most advanced enterprise drives come in strange (i.e. much larger) form factors. This is because they have not yet worked the size and power requirements down to fit the smaller form factor, which makes complete sense. However, this drive uses a standard form factor, which tells me it is not "bleeding edge." There are probably rack mount SSDs with much higher capacities than 16TB.
So, the reason it seems weird is that they have obviously passed that hurdle with the 16TB offering. They can now fit 16TBs of SSD storage in a standard form factor. It is odd because if they can fit 16TBs, and we assume that at scale, the variable costs are almost exclusively due to the capacity of the chips, why is there not a consumer drive even a factor of 4 smaller (i.e. 4TB) available? I understand there not being a 16TB drive...or a 12TB, 8TB, etc... available. But a factor of 16 seems oddly large.
Having seen the move from 5.25" HDDs to 3.5" HDDs, then the move from desktops to laptops, and now seeing SSDs becoming extremely common in laptops, tablets, and phones, I have to believe that the author predicted the future when he wrote the book.
Since PC sales have dropped, people are not buying as many HDDs, and buying more SSDs, usually indirectly. Cloud infrastructure has likely gobbled up the existing HDD supply.
But even there, SSDs are preferred for many applications, such as databases, since they're faster overall, storage limitations be damned.
And now we're seeing the first SSD that has a capacity greater than HDDs, in a similar sized package. And no current HDD company has an SSD offering worth mentioning.
It's disruption happening right before our eyes. History seems to repeat itself all too often!
"In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined,” according to an industry report. In 1997, the year Christensen published “The Innovator’s Dilemma,” Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars."
"Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the “Herculean managerial effort” of its C.E.O., Stuart Mahon. (“Mahon remembers the experience as the most exhausting of his life,” Christensen writes.) But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed."
Note that he published his book long after this stuff had been shown to be hopelessly wrong, without making any corrections.
Many of his examples are similarly terrible. And those are the ones he cherry-picks to back his thesis.
I found The Innovator's Dilemma to be infuriating. It's a classic 20/20 hindsight -- just vague enough to apply to everything but excuse itself from any conspicuous counter-examples (Apple, say). Like most management fads, I guess.
(Prediction: wait until electric, self-driving car tech becomes practical. Ford and GM will crush the "nimble" disruptors like bugs. Well, unless it's Apple. Then all bets are off.)
Now, the hard disk makers may well be going under, but let's set aside Clay Christensen's over-rated terminology.
Samsung is hardly a small startup "disrupting" big, slow stalwarts. Samsung dwarfs the hard disk makers -- it's more like Google "disrupting" libraries, or Walmart "disrupting" local businesses.
BTW in Christensen's terms, this new disk is a "sustaining" innovation -- it's a better, faster, more complicated, harder-to-manufacture iteration of an existing, successful product.
Samsung has focused on SSD storage ever since...
A lot of 1 unit rack servers can fit about 8 2.5" drives. 128TB of storage in 1U is pretty crazy storage density.
Every time they reveal a larger-capacity drive I just wonder what the backup strategy is going to be. Longer tapes?
looks like LTO-10 is planned to be 48TB per cartridge.
The chief benefit of LTO is that it remains the cheapest option and does in fact have huge sequential read/write speeds. Random read/write is even worse than on disks, but for backup purposes, sequential is king.
This is _hopefully_ true but operational history is so full of unpleasant surprises that I would hesitate to trust any type of storage which isn't regularly verified.
With previous generations of LTO, a colleague had encountered fun failure modes like the media degrading rapidly (unrecoverable in less than a year) when a tape was stored on its side, which turned out to be an "everyone knows" fact not mentioned anywhere in the tape or drive documentation. A different coworker had encountered issues with a batch that had a defective lubricant causing the surface to break down over a couple of years.
One place we worked with had to carefully de-tune a new tape drive after learning that the old one had drifted out of alignment for at least a year before physically failing, which meant that most of their tapes were no longer readable by a drive in standard calibration.
This is not to say that tape doesn't have a place - analogous failures happen for everything else and the cost-per-GB is appealing. I just don't think we actually have a toss-in-on-the-shelf storage media which can be assumed to work over a long term. You can address those issues with a regimented approach for rotation and mixing physical devices, media, and location but that increases the cost of adding a new storage technology into the mix since you need to develop that operational confidence for each class.
This sounds like a story two decades old; LTO and all modern tape drives have servo tracks on the tapes, i.e. the drive realigns itself to the tape track on the fly as the media passes the head. If the drive cannot do this, you get track-following check conditions during the write.
The main point in mentioning it wasn't to say that tape is terrible but just that each unique class of hardware brings unique challenges which might not be obvious at first until you have a fair amount of operational time. (Thinking about the people who learned the hard way why RAID arrays should mix hard drives across batches and manufacturers)
If you are limited to the write rate allowed by the SSD interface, then that will serve to limit the heat dissipation as well.
EDIT: I should have specified `backup to an SSD storage array.` Disk is just embedded in my brain.
This can be had right here, right now. Upgrade it to modern tech (LTO6), and you get ~2.5TB/tape for about $60/tape. That's still the absolute best capacity-to-cost ratio for bulk storage that you'll get in 2015.
LTO7 is just around the corner and that tops out at 6.5TB/tape.
It's just not the 50x factor it once was. More like 10x. Tape is on nobody's radar anymore, whereas as recently as 15 years ago it was still discussed by the mainstream IT commentariat. Geez, I even remember PC Magazine recommending desktop users buy some DAT-based tape system. Now, tape is nowhere to be seen.
And that's not really surprising to me when you think that a petabyte of highly responsive spinning disk array can be done for less than a 150 grand, hotswap redundancy included. That's for a superfast 2-d medium as opposed to an eternity-seek-time 1-d medium. Spinning disk is a medium which can be used for live redundancy, a crucial requirement in the internet age.
Sure I think the NSA and GOOG might have need to archive exabytes onto tape. But that's not a mainstream market and for precisely 100% of hacker news readers, tape doesn't exist anymore.
The problem with live redundancy is just that, it's live. If your backups are live and online, they are not backups. (For the same reason that RAID is not backup, it's redundancy).
Think less "deployed broken code to prod" and more "this special snowflake server crashed" or "the datacenter caught on fire".
The use case for tape is the same as the use case for something like Amazon Glacier. Perhaps you don't want multiple terabytes of your personal data being sent over the wire to a company that's no doubt been infiltrated by TLAs. Perhaps you want total control of your own data. Perhaps you don't want nasty surprises when it comes to Glacier's retrieval/data in/data out/you-didn't-wait-long-enough fees.
A single LTO5 drive can be had for about $300, tapes for $40/$50, and that'll get you around 2TB of storage each, more if your data compresses well.
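At the prices quoted, the cost per TB drops quickly as the one-time drive cost amortizes across tapes (using the $40-$50 midpoint and the ~2TB native figure):

```python
# Amortized cost of an LTO5 setup at the prices quoted above; tape capacity
# is native (~2 TB), so compressible data does even better.
drive_cost = 300
tape_cost = 45          # midpoint of the $40/$50 quoted
tape_tb = 2

def usd_per_tb(n_tapes):
    return (drive_cost + n_tapes * tape_cost) / (n_tapes * tape_tb)

print(round(usd_per_tb(10), 2))   # 37.5 -- the drive cost amortizes away
print(round(usd_per_tb(50), 2))   # 25.5
```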
Let's not pretend that there aren't real benefits to be had, here. Your use case isn't everyone else's use case, and certainly not enough to be making broad sweeping statements that nobody on HN uses it.
When I look at all the moving parts in an HDD, I'm shocked they can still be produced for less.
Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap.
And if one atom is in the wrong place, you have to throw away the chip.
That kind of core expense doesn't exist in a hard drive factory. The disks in a hard drive don't have to be perfect crystals, for example. It's a LOT more expensive to produce chips.
Multiply all the distribution and sales costs, and you'll understand why it's so expensive.
this is certainly the case for CPU
but DRAM & NAND ? this is the typical case of designs where you can add redundancy to accomodate for manufacturing defects.
Now can Intel be more granular than the core level, like running a core with some defect ALU, I really don't know.
What's publicly known from the binning process is that it involves disabling core, reducing total cache size, and finding the maximum working frequency.
The bottom line is that it requires more effort to deal with defects in complex logic, for DRAM they would reduce the total memory size.
If it helps any just think of the costs as buying diamonds.
"Wow that's 16TB of diamonds!"
"this GPU uses a bigger diamond than that GPU"
This is an unhelpful analogy.
Comparing the retail price of diamonds to the retail price of CPUs, RAM boards, and GPUs, I am led to believe that whatever is used as the substrate for modern high-performance ICs is actually rather cheap. I can -after all- get a reasonably fast combination CPU and GPU for $45.
If we ask the USGS, we discover that in 2003, synthetic diamond suitable for reinforcing saws and drills sold for $1.50 to $3.50 per carat. However, large synthetic diamonds with "excellent structure" suitable for -one presumes- processes that rely on the crystal's fine structural properties -just as CPU manufacture relies on silicon wafers with fine structural properties- sold for "many hundreds of dollars per carat".
One carat is 200 milligrams. An entire Core i3 appears to weigh 26,800mg. Let's be generous and assume that the CPU die is 1/100th of that weight, or 268mg, or 1.34 carats. Given that CPU manufacture requires a substrate with excellent structure, just how much of a substance that costs many hundreds of dollars per carat can there be in a 1.34-carat device? (Especially when ones of similar weight constructed from similar materials can be had for $45 per unit, retail?) :)
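Putting numbers on it: if the substrate really were priced like structurally excellent diamond, the die alone would cost far more than the whole retail CPU. The $500/carat figure below is an assumed midpoint of the USGS "many hundreds of dollars per carat":

```python
# Diamond-equivalent substrate cost for a generously sized CPU die.
MG_PER_CARAT = 200
die_mg = 268                         # generous die-weight estimate from above
carats = die_mg / MG_PER_CARAT       # 1.34 carats
usd_per_carat = 500                  # assumed midpoint of "many hundreds"
print(round(carats * usd_per_carat))  # ~$670 of "diamond" vs a $45 retail CPU
```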
I felt that a somewhat detailed analysis of the inappropriateness of the analogy was better than a "Nuh uh! You're wrong!" response.
I can't really dispute that. I'm no expert in the field.
> Just use diamond prices and dimensions.
Isn't that more or less what I did?
Diamond price per gram depends on the quality of the diamond. If we're gonna address an opinion that includes statements like "Think of the cost of a modern high-performance IC as if it was made of diamonds, because diamonds and silicon are both crystalline structures, and silicon is chemically much like carbon, therefore the substrate manufacturing costs are bound to be very similar." , then it seems that we need to look at the cost of high-quality diamonds that are used for their crystalline properties, rather than just for their hardness.
I'm not at all sure, but I would suppose that it would be far more expensive to make one high-quality diamond sheet the size of a silicon wafer than it would be to make a bunch of high-quality diamonds each the size of a CPU die, or maybe cut down a larger one. If it is, then an analysis based just on like-sized crystals would be dramatically unfair. Perhaps you know far more about this than I do? Industrial crystal production is not exactly in my wheelhouse. :)
Direct quote: "Did you know that a silicon wafer is a perfect crystal, structured like a diamond? Silicon is right underneath Carbon in the periodic table, which means it shares the same outer electron shell configuration. Making that ain't cheap."
That post was wrong about that being a driver of costs, and it's not fruitful to build on that wrongness.
An analogy that leads you to the right conclusion for the wrong reason is a toxic thing.
To speak to the rest of your comment:
mozumder made an incorrect argument and backed it up with a dangerously misleading analogy. I attacked the analogy by demonstrating its inappropriateness.
In my most recent post, I have attacked his argument with an analysis of what appear to be the actual costs of the thing he's talking about.
Add in processing costs and it really becomes a mess.
So, yes, wafer costs matter when you have to produce tons of silicon for an SSD.
This Reuters report seems to indicate that in mid-2009, one could get a 300mm silicon wafer for -worst case- ~$120.
Likely usable wafer area: 90,000mm^2
Largest Intel i3 processor (Haswell) die area: 181mm^2
Max dies per wafer: 497
Silicon wafer cost per die:
* Assuming 0% defect rate: $0.24
* Assuming 50% defect rate: $0.48
* Assuming 99% defect rate: $24.14
Cheapest (Celeron) Haswell on sale at Newegg today: $44.99. Average i3 Haswell price: $140. 
Unless Reuters is misinformed, or wafer costs have exploded in the past six years, the cost of the wafer truly does appear to be insignificant, even if we assume that wholesale prices are 50% of retail prices.
 This seems unlikely, as memory and chip costs haven't exploded in the past six years.
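Recomputing those per-die figures (2009 numbers; treat the defect rates as illustrative, not measured yields):

```python
# Wafer cost per good die at various assumed defect rates.
WAFER_COST = 120            # worst-case 300 mm wafer price, mid-2009
USABLE_MM2 = 90_000
DIE_MM2 = 181               # largest Haswell i3 die
dies = USABLE_MM2 // DIE_MM2            # 497 dies per wafer

def cost_per_good_die(defect_rate):
    return WAFER_COST / (dies * (1 - defect_rate))

for rate in (0.0, 0.5, 0.99):
    print(f"{rate:.0%}: ${cost_per_good_die(rate):.2f}")
# 0%: $0.24, 50%: $0.48, 99%: $24.14
```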
Consider the $0.24/die, and multiply that by 100 to get a 1 TB SSD drive.
Your SSD now has a minimum cost of $24, just for the silicon. That's extremely expensive. You can never sell your SSD for less than that, just to cover the silicon costs of a 1TB drive, never mind processing, manufacturing, distribution, sales, and profit. And you're competing against 5TB hard drives that sell for $100. (The 16TB SSD, meanwhile, apparently uses 500 chips.)
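The floor-cost arithmetic, using the ~$0.24/die wafer estimate and the round 100-dies-per-TB assumption:

```python
# Raw-wafer cost floor per TB of SSD vs the per-TB price of a $100 5TB HDD.
wafer_cost_per_die = 0.24     # from the wafer estimate above
dies_per_tb = 100             # round figure for a 1 TB drive
ssd_silicon_floor = wafer_cost_per_die * dies_per_tb
hdd_usd_per_tb = 100 / 5      # the $100 5 TB hard drive
print(round(ssd_silicon_floor, 2), hdd_usd_per_tb)   # 24.0 20.0
```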
This is why wafer costs are like diamonds, instead of aluminum platters.
This sub-thread is about your unhelpful and misleading equivalence and analogy. :)
Silicon wafer costs are like silicon wafer costs. Your diamond analogy is simply inappropriate.
We don't say "Aircraft grade aluminium costs are like diamonds, rather than hard drive aluminium platters." or "Fission reactor grade steel costs are like diamonds, rather than..." because this is an immensely silly thing to say that obfuscates the true cost of the material in question.
What's more, we can generally discover the high end of the true price of the material in question with a little work. As I demonstrated in my replies to you, silicon wafer costs are substantially cheaper than equivalent diamond costs.
If you had said something along the lines of "Due in part to the cost of silicon wafers, silicon-based data storage technologies are now and will be for the foreseeable future substantially more expensive on a per-GB basis than spinning rust or tape-based technologies.", I would have had absolutely nothing at all to object to.
> Consider the $0.24/die...
That figure is based on a particular die area. I would expect a flash memory die to be substantially smaller than a CPU die. This would drive the base cost per die down even further. Moreover, that figure was from 2009. Up to date figures are required to really put a floor on chip prices. :)
> And [that 1TB SSD] competes against 5TB hard drives that sell for $100.
Sort of. For every use that I have except for bulk data storage, I recognize the vast superiority of an SSD. The only HDDs in my computers are the ones I got for free with my laptops-turned-servers that don't do much disk IO, and the disk array that holds my 5TB-and-growing Postgres database.
For the average computer user, I would strongly recommend replacement of the HDD in their computer with an SSD. If you don't need to store more than 1TB of data, the performance gains over HDDs are just too great to use anything else.
I'm fairly confident that HDDs will be substantially cheaper per GB than SSDs for the foreseeable future. I'm -however- not convinced by your implicit argument that SSDs will always be -price-wise- unattractive when compared to HDDs. SSDs seem to be sold at the price-per-GB of the HDDs of ~3->5 years ago. We will inevitably see 500GB SSDs at the $80 retail price point. This will make them a no-brainer for every big computer manufacturer. A really fast disk makes slow kit feel really fucking fast.
 In my experience, almost no non-technical user has more than 500GB of data that they care about on their machine at any one time.
 They're only a little more than twice that price now.
We need to make sure that everyone understands why, and part of that is because we're using lots of silicon crystals, which have the same lattice structure as diamonds, which are going to be more expensive than aluminum platters.
If you take it to the limit, an SSD won't be cheaper than hard drives even as processing costs go down, because they use so much silicon.
You say the silicon costs are insignificant, but it will be a limit as prices go down.
The diamond analogy works appropriately, and it's unhelpful and inappropriate to claim material costs are insignificant.
And people are always going to end up using as much space as given, so that's another mistake you're making. They will find ways, especially given high-res smartphones everywhere with cameras.
I used to be certain of that, based on my personal space usage habits. Based on my ongoing survey of both technical and non-technical computer users, I no longer believe that to be true.
The rise of The Cloud(TM) means that there are shockingly few users who intentionally keep a local copy of their data. Media streaming and synced storage means that a wide swath of the computer-using population store that shit remotely and throw away data when The Cloud(TM) gets full.
> ...it's unhelpful and inappropriate to claim [silicon wafer] costs are insignificant.
When the analysis demonstrates that the costs are an insignificant fraction of the total cost, then it is entirely appropriate to make that claim. :)
Silicon wafers may well remain more expensive than harddrive platters. The price of silicon wafers may well mean that SSDs will never reach price parity with HDDs. These facts don't magically make $0.25 per chip a significant factor in the manufacturing cost of a product that also required substantial original research and development to come to market. :)
Why is that?
Building a transistor on a chip is like making a building by bombarding meteors from space and hoping the craters form the shapes you want.
Building a chip is like making a city with that process.
When was the last time your first program in a language you've never used before compiled & ran on the first try?
Humans are pretty damn good at making small precise mechanical movements.
Or look at internal combustion engines. The two absolutely critical technology improvements that changed things from WWI to WWII were "merely" refinements in those and radios. WWI IC engines were so rough they pretty much required wooden frames, and engine power was anemic, as WWI tanks show. Fast forward not very many years and we have much smoother and reliable powerful engines, and had developed the seeds of today's jet engines before the first transistor was demonstrated in 1947.
My bet is that by the end of next year, SSDs will be cheaper than equally sized HDDs. HDDs are certainly on their way out.
Hetzler et al. have calculated that the industry would have to spend over $800B to build enough fabs to replace hard disks with SSDs. http://storageconference.us/2013/Papers/2013.Paper.01.pdf
"The later capacity is accomplished using 8 stacked and thinned (< 75 um) NAND chips in a 1.2 mm package"
The Samsung one has 48 layers per the article. So that's a 6x improvement.
Or automotive engines. We have all sorts of technology tacked on around the basic ICE gasoline engine to make them better- the combustion chamber is just a tiny piece of the machine, which is tended by countless devices managing temperature, airflow, fuel flow, air velocity, etc. Tremendously complex compared to electric motors- but in the end, they are still more popular than electric motors because of the fundamental problem of batteries.
Samsung has designed the PM1725 to cater to the next-generation enterprise storage market. This new half-height, half-length card-type NVMe SSD offers high-performance data transmission in 3.2TB or 6.4TB storage capacities. The new NVMe card is quoted with random read speed of up to 1,000,000 IOPS and random writes up to 120,000 IOPS. In addition, sequential reads can reach up to an impressive 5,500MB/s with sequential writes up to 1,800MB/s. The 6.4TB PM1725 also features five DWPD for five years, which works out to a total of 32TB written per day during that timeframe.
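The endurance arithmetic from those quoted figures (the lifetime total is my own extrapolation, not a quoted spec):

```python
# 5 drive writes per day (DWPD) on a 6.4 TB drive, over a 5-year warranty.
capacity_tb = 6.4
dwpd = 5
per_day_tb = capacity_tb * dwpd            # 32 TB written per day
lifetime_pb = per_day_tb * 365 * 5 / 1000  # ~58.4 PB over the warranty
print(per_day_tb, round(lifetime_pb, 1))
```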
It's also $3.25/gb for 800GB vs the Samsung PM1725's $2.15/gb for 800GB.
Hopefully there is a P3710 waiting in the wings that is competitive with Samsung's new offerings. I have had infinitely better luck in terms of reliability and performance consistency with Intel than any other SSD brand, and I think I'm not alone on that front.
~10K cycles sounds good
A compressed LTO6 is 6.25 TB, right? Let's just go with 3 of them.
So, I figured out how many carts we need. You calculate the fractional station wagon part. My math was never _that_ good ;-)
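The cartridge count, for the record (using the 6.25TB compressed figure from above):

```python
import math

# How many LTO6 cartridges does one 16 TB PM1633a need?
carts = 16 / 6.25            # 2.56 cartridges
print(math.ceil(carts))      # 3 -- "let's just go with 3 of them"
```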
That is likely to be cheaper than a 16 TB SSD for a very long time to come. Tapes aren't going anywhere.
Hopefully we'll see LTO-7 this year but probably in 2016. That puts the diff between LTO6 and LTO7 at 4 years.
LTO8 in 2020
LTO9 in 2024
LTO10 in 2028
LTO-10 would have ideally come in 2020 in keeping with LTO's earlier pace of 2 years per revision which is also more in line with storage increases
I'm off to research the weight differences, and see if the wagon's suspension factors in to this...
2,000,000 / 48 = 41,666.66… IOps
45k IOps for 16TB limits its use cases a bit. I don't know enough about storage to make an educated guess, but anyone know what the constraint there might be? Aren't there controllers that can do 1MM IOPS on single EFDs? 45k is still a ton of operations, but I expected more somehow.
I'm sure there's a market there, but I don't know how big it is. This is denser than current hard drives, but total cost is probably heavily in favor of hard drives for most use cases.
I find it particularly confusing that Samsung seems to have gone for a SAS SSD versus NVMe. NVMe would allow them to do a PCIe card form factor, which would surely be easier from a physical space perspective. And it's not like anyone has a PCIe flash product at 16TB either -- Fusion-io tops out at 6.4TB.
NVMe also might allow them to improve the iops. Intel's P3500 NVMe is 430k iops at 2TB. Night and day compared to this Samsung drive. So in one 2U chassis you could have any of:
24x2TB Intel P3500
= 10,320,000 iops (read 4k)
24x1.6TB Intel S3500
24x16TB Samsung PM1633a
= 1,000,000 iops
(meanwhile HDD would have far lower iops, but also probably a lot cheaper)
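Totals for the two fully specified options above (the Samsung per-drive figure is the ~2M/48 estimate from elsewhere in the thread, not a quoted spec):

```python
# Per-chassis capacity and IOPS totals for a 24-bay 2U chassis.
options = {
    "24x 2TB Intel P3500":      (24, 2,  430_000),
    "24x 16TB Samsung PM1633a": (24, 16, 41_667),   # ~2M/48 per the thread
}
for name, (n, tb, iops) in options.items():
    print(f"{name}: {n * tb} TB, {n * iops:,} IOPS")
```

The P3500 chassis wins on IOPS by an order of magnitude; the PM1633a chassis wins on density by 8x.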
Really? I've got a couple of 128GB SDHC cards here -- and while they might be less performant than SSDs... I just tried to stack them on the back of a 2.5" hdd -- and I guesstimate that you'd at least be able to fit 6x6=36 of them (plastic frame and all) on the back of a 2.5" drive -- and stacking them 5 high would still be way below the width of a 2.5" hdd.
And that's not just 128GB of storage, but including 36x5 controllers etc? (Not to mention lots of plastic).
I'm prepared to be dead wrong -- but "fitting" 16TB of flash into the behemoth size that is a 2.5" hdd -- doesn't seem like much of a challenge?
It has 16 NAND packages, the controller, two 1GB DRAM chips and capacitors. No idea if the Samsung drive includes capacitors, but I sure hope it does.
The Intel board fits in a 7mm enclosure, but 2.5" enclosures can go up to 15mm. To be generous, let's say that Samsung fit two double-sided circuit boards into the enclosure and also squeezed another 4 NAND packages in per board. The NAND dies are 256Gbit vs Intel's 128Gbit, so with similar NAND packages that gets them to 10TB.
So now you either need to fit more NAND per-package -- no idea what die size they are -- or add more packages. Maybe their packages are physically smaller or maybe they're able to get >256GByte per-package. Either would help tremendously.
But regardless, that is a lot of packages for your controller to handle and if you're constrained on physical space you aren't going to be able to put additional DRAM chips on the board. You could replace the 1Gbit chips with 8Gbit chips in a similar footprint and maintain your 1,000:1 ratio of NAND:DRAM, but those chips will obviously cost a substantial amount more. I feel like this drive is going to really blow minds in terms of cost.
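That capacity arithmetic, spelled out (the 16-package count and the 2TB baseline are assumptions from my comparison with the Intel drive, not confirmed specs for the Samsung):

```python
# Sketch of the generous-packaging capacity estimate.
base_tb = 2.0                       # assumed Intel-class baseline capacity
base_packages = 16
boards = 2                          # two double-sided boards
packages = boards * (base_packages + 4)    # 4 extra packages per board -> 40
density = 256 / 128                 # 256Gbit dies vs 128Gbit
capacity = base_tb * (packages / base_packages) * density
print(capacity)  # 10.0 -- still 6 TB short of the PM1633a's 16
```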
I'm not an expert on this, but my impression is that a lot of organizations that need a lot of space would be much happier with larger-capacity-but-slower drives because those drives can be so much cheaper than trying to build out more space.
With the Samsung 16TB SSD, I could fit 384TB in a 2U chassis and a total of 8.8PB in a rack (of 23 hosts). That's $10.6mm in disks in that one rack.
Or I could go with hard drives (8TB, 7200rpm, enterprisey, $700) and fit 288TB in a 4U chassis and 3.1PB in a rack. I would need three racks instead of one rack to equal the storage capacity. However, it costs me $832,000 in disks.
There's really no way that your fixed costs for 2 racks can make a dent in $9.7mm, even factoring in the differences in power utilization between the two. So you'd have to get a substantial benefit from the performance differential between a HDD and this SSD, but not to the point where you need the 82x performance improvement of a faster NVMe drive (such as the Intel P3500).
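A rough check of the rack math above. The ~$19.2k per-SSD price is back-solved from the $10.6mm rack figure, not a quoted price:

```python
# SSD rack: 23 2U hosts x 24 bays of 16 TB drives in one rack.
ssd_drives = 23 * 24                 # 552 drives
ssd_tb = ssd_drives * 16             # 8,832 TB ~= 8.8 PB
ssd_cost = ssd_drives * 19_200       # assumed per-drive price -> ~$10.6mm

# HDD alternative: three racks of 11 4U chassis, 36 x 8 TB drives each.
hdd_drives = 3 * 11 * 36             # 1,188 drives
hdd_cost = hdd_drives * 700          # ~$832k
print(f"SSD: {ssd_tb} TB, ${ssd_cost:,}")
print(f"HDD: {hdd_drives * 8} TB, ${hdd_cost:,}")
```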
The hard drive would have ironclad firmware that keeps the RAM refreshed until its battery goes down to 15% (or whatever the conservative 10 minutes of power is), at which point it takes the ten minutes to dump the contents of that RAM to SSD, and reverts to having that drive also be SSD until the power is reconnected long enough to charge the battery back up to 80%. Then it reads it back into RAM and continues as a lightning-fast 64 GB + very fast 16 TB drive.
You would store your operating system on the lightning-fast drive.
The absolute nightmare failure state isn't even that bad. The RAM drive should be as ironclad as an SSD, but in case it ever loses power unexpectedly (say, someone opens the device and disconnects the battery), it can still be backed up periodically, so if you pick up the short end of six sigma you just revert to reading the drive from SSD rather than RAM and lose at most, say, a day of work.
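The battery-threshold behavior described above can be sketched as a tiny state machine. This is purely illustrative: the thresholds, mode names, and method names are assumptions taken from the comment, not any real drive's firmware interface.

```python
# Hypothetical sketch of the proposed hybrid-drive firmware policy.
DUMP_THRESHOLD = 0.15    # start flushing RAM to SSD at 15% battery
RESUME_THRESHOLD = 0.80  # charge needed before re-enabling RAM mode

class HybridDrive:
    def __init__(self):
        self.mode = "ram"  # serve I/O from battery-backed RAM

    def on_battery_level(self, level, external_power):
        if self.mode == "ram" and not external_power and level <= DUMP_THRESHOLD:
            self.flush_ram_to_ssd()   # the ~10-minute dump
            self.mode = "ssd"         # behave as a plain SSD meanwhile
        elif self.mode == "ssd" and external_power and level >= RESUME_THRESHOLD:
            self.load_ssd_to_ram()    # read contents back into RAM
            self.mode = "ram"

    def flush_ram_to_ssd(self):  # placeholder for the actual dump
        pass

    def load_ssd_to_ram(self):   # placeholder for the restore
        pass
```

The key property is that every transition out of RAM mode is preceded by a full flush, so the "it's just a hard drive" abstraction holds as long as the battery estimate is conservative.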
Thoughts? I bet a lot of people would be happy to pay an extra $800 to have their boot media operate at DIMM speed, as long as the abstraction doesn't leak (it still behaves like a physical hard drive) and the engineering holds up to that standard.
There is a lot of software out there that is very conservative about when it considers data to be fully written. It would be quite a feat for Samsung to uphold that abstraction by delivering six- or seven-sigma availability on a RAM drive backed by a battery and an onboard SSD to dump to.
It would be very interesting to see a similar product being introduced using contemporary technology, though. One question is what sort of interface it would communicate over to leverage the higher transfer speed.
I think it is fine not to have any faster interface to leverage the transfer speed. RAM latency and bandwidth can obviously saturate disk interfaces, and I doubt SSDs come close, so it should be a large jump in performance regardless.
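Some rough numbers back this up. The figures below are nominal link and channel rates from around the time of this discussion, approximate and for illustration only:

```python
# RAM bandwidth vs. common disk interfaces (nominal, approximate MB/s).
sata3 = 600          # SATA 6 Gb/s after 8b/10b encoding overhead
sas3 = 1200          # SAS 12 Gb/s
ddr3_1600 = 12_800   # a single DDR3-1600 channel

print(ddr3_1600 / sata3)  # ~21x a SATA link
print(ddr3_1600 / sas3)   # ~11x a SAS link
```

So even one DRAM channel has an order of magnitude more bandwidth than the disk interface it would sit behind, which is the commenter's point: the existing interface is the bottleneck either way.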
Relying on a drive controller might seem the right way to go, but especially for corporate installations I believe the fine-grained control a dedicated server could provide would be beneficial.
I should have said "C: drive"/"HDA1", but I wrote "boot media" to save myself some thought about phrasing. I meant that's where you would install anything primary to your workflow that reads and writes lots of files because that's how it was programmed: git, your IDE, compiler, test suites, database, web server and log files, plus whatever programs you create and manage your workspace with (Photoshop, design software, etc.).
The point is: things you would never risk not having on permanent storage, and which are written with the expectation that they will be. If it's ironclad (six/seven sigma, and backed up to real permanent storage behind the scenes in case worse comes to worst), you wouldn't have to give up this abstraction. It would still be a hard drive and not, you know, the current contents of your RAM since you booted.
Lastly, if you do care about data retention during power outages and sags, then you would likely want an APC/backup battery anyway. Even if the SSD/RAM hybrid has enough backup power to flush itself to disk, what about the data currently sitting in system RAM waiting to be flushed to it?
If it did, SSDs wouldn't be so much faster than spinning-platter HDDs...
By the way, what you're proposing in terms of software has been available for a long time; multiple distros (including Ubuntu) can/could be booted completely to RAM, using tmpfs as the filesystem. For example:
At the boot prompt, type "knoppix toram". Knoppix will load the contents of the CD into RAM and run from there. After boot-up, the CD can be removed and the CD drive will be available for other uses. Because this takes up a lot of RAM, it is recommended for those with at least 1 GB of RAM.
It's definitely faster; I just don't have enough RAM to fit my whole system in there.
If it were all in a sealed package that 'guarantees' the RAM will never power down, at a very low firmware level, that is a different matter.
In 2012 they list the density of magnetic disks as 750 Gb/in^2 and NAND flash as 550 Gb/in^2. I'm not sure how the numbers have changed with 2D NAND, but 3D NAND probably pushes the density way past magnetic.
MicroSD is 15x11x1 mm (165 mm^3), max capacity 200 GB.
MicroSD is less than 0.5% the volume of a 2.5" HDD, but 5% of the capacity, so microSD is an order of magnitude denser than a 2.5" HDD.
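Spelling that comparison out (the 2.5" drive dimensions and the 4 TB HDD capacity are my assumptions for the comparison, using a standard 9.5mm-height shell):

```python
# Volumetric density: 200GB microSD vs. an assumed 4TB 9.5mm 2.5" HDD.
microsd_mm3 = 15 * 11 * 1            # 165 mm^3
hdd_mm3 = 100 * 69.85 * 9.5          # ~66,358 mm^3 (standard 2.5" shell)

vol_ratio = microsd_mm3 / hdd_mm3    # ~0.25% of the volume
cap_ratio = 200 / 4000               # 5% of the capacity

print(cap_ratio / vol_ratio)         # ~20x the volumetric density
```

A ~20x advantage is consistent with the "order of magnitude" claim above.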
MicroSD still has to fit a little controller in there, so the comparison isn't particularly fair on flash. I expect heat management to be a problem scaling up though, apart from any manufacturing difficulties.