
I'm slightly surprised by the numbers given for IOps. The example they give is 48 drives giving 2MM IOps:

2,000,000 / 48 = 41,666.66… IOps

45k IOps for 16TB limits its use cases a bit. I don't know enough about storage to make an educated guess, but anyone know what the constraint there might be? Aren't there controllers that can do 1MM IOPS on single EFDs? 45k is still a ton of operations, but I expected more somehow.
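
Quick sanity check on that division, assuming the 2MM figure really is an aggregate 4k-read number across all 48 drives (the article doesn't say):

  # per-drive IOPS if 2MM is the aggregate across 48 drives (assumption)
  total_iops = 2000000
  drives = 48
  print(total_iops / drives)  # ~41,667 IOPS per drive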




45k iops is not terrible, but it's not competitive with current Intel enterprise SSDs (S3500 is 70k+, S3710 is 85k). I suspect that Samsung had to make huge sacrifices to the controller and DRAM portions of the drive to fit that many NAND chips into the 2.5" form factor. They're basically trying to create a new class of flash storage, which is space-optimized rather than performance-optimized.

I'm sure there's a market there, but I don't know how big it is. This is denser than current hard drives, but total cost is probably heavily in favor of hard drives for most use cases.

I find it particularly confusing that Samsung seems to have gone for a SAS SSD versus NVMe. NVMe would allow them to do a PCIe card form factor, which would surely be easier from a physical space perspective. And it's not like anyone has a PCIe flash product at 16TB either -- Fusion-io tops out at 6.4TB.

NVMe also might allow them to improve the iops. Intel's P3500 NVMe is 430k iops at 2TB. Night and day compared to this Samsung drive. So in one 2U chassis you could have any of:

  24x2TB Intel P3500
  = 48TB
  = 10,320,000 iops (read 4k)

  24x1.6TB Intel S3500
  = 38TB
  = 1,572,000 iops

  24x16TB Samsung PM1633a
  = 384TB
  = 1,000,000 iops

  (meanwhile HDD would have far lower iops, but also probably a lot cheaper)
While the Samsung one is alluring from a space perspective, I can't really see replacing either the 'fast SSD' tier or the 'slow HDD' tier with it in my deployments.
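
For the curious, those totals are just per-drive multiplication; a rough sketch in Python, with the per-drive figures back-solved from the approximate numbers quoted above:

  # rough 2U totals from per-drive specs (approximate figures quoted above)
  bays = 24
  drives = {
      # name: (TB per drive, 4k read iops per drive)
      "Intel P3500 2TB NVMe":     (2.0,  430000),
      "Intel S3500 1.6TB SATA":   (1.6,   65500),
      "Samsung PM1633a 16TB SAS": (16.0,  41666),
  }
  for name, (tb, iops) in drives.items():
      print(f"{name}: {bays * tb:.0f}TB, {bays * iops:,} iops")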


> I suspect that Samsung had to make huge sacrifices to the controller and DRAM portions of the drive to fit that many NAND chips into the 2.5" form factor.

Really? I've got a couple of 128GB SDXC cards here -- and while they might be less performant than SSDs, I just tried stacking them on the back of a 2.5" hdd. I guesstimate you could fit at least 6x6=36 of them (plastic frame and all) on the back of a 2.5" drive, and stacking them 5 high would still come in under the 15mm maximum height of a 2.5" hdd.

And that's not just bare flash -- that stack would include 36x5 controllers etc. (not to mention lots of plastic).

I'm prepared to be dead wrong -- but "fitting" 16TB of flash into the behemoth size that is a 2.5" hdd doesn't seem like much of a challenge?
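
For what it's worth, that guesstimate works out roughly as follows (card count and capacity per the numbers above; connectors, cooling and controller speed conveniently ignored):

  # rough capacity of the hypothetical SD-card stack described above
  cards_per_layer = 6 * 6   # the 6x6 guesstimate
  layers = 5
  card_gb = 128
  print(cards_per_layer * layers, "cards,",
        cards_per_layer * layers * card_gb / 1000, "TB")  # 180 cards, ~23 TB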


I don't know what the Samsung drive looks like internally, and obviously they figured out some way to do it. For comparison, here's a teardown of an Intel S3710: http://www.tomsitpro.com/articles/intel-dc-s3710-enterprise-...

It has 16 NAND packages, the controller, two 1GB DRAM chips and capacitors. No idea if the Samsung drive includes capacitors, but I sure hope it does.

The Intel board fits in a 7mm enclosure, but 2.5" enclosures can go up to 15mm. To be generous, let's say that Samsung fit two double-sided circuit boards into the enclosure and also squeezed another 4 NAND packages in per board. The NAND dies are 256Gbit vs Intel's 128Gbit, so with similar NAND packages that gets them to 10TB.

So now you either need to fit more NAND per-package -- no idea what die size they are -- or add more packages. Maybe their packages are physically smaller or maybe they're able to get >256GByte per-package. Either would help tremendously.

But regardless, that is a lot of packages for your controller to handle, and if you're constrained on physical space you aren't going to be able to put additional DRAM chips on the board. You could replace the 1GB chips with 8GB chips in a similar footprint and maintain your 1,000:1 ratio of NAND:DRAM, but those chips will obviously cost a substantial amount more. I feel like this drive is going to really blow minds in terms of cost.
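
Spelling out that back-of-the-envelope math (the board count, packages per board, and dies per package are my guesses, not a teardown of the Samsung drive):

  # guesses, not a teardown: scaling the S3710 layout up toward 16TB
  intel_packages  = 16      # NAND packages on the S3710 board
  boards          = 2       # assume two double-sided boards fit in 15mm
  extra_per_board = 4       # assume 4 extra packages squeezed onto each board
  packages = boards * (intel_packages + extra_per_board)        # 40 packages

  die_gb = 256 / 8          # 256Gbit dies -> 32GB per die
  dies_per_package = 8      # assume Intel-like package stacking
  print(packages * die_gb * dies_per_package / 1000, "TB raw")  # ~10.2 TB

  # DRAM at the ~1000:1 NAND:DRAM ratio mentioned above
  nand_gb = 16 * 1000       # the full 16TB drive
  print(nand_gb / 1000, "GB of DRAM to keep the ratio")         # 16 GB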


> I'm sure there's a market there, but I don't know how big it is. This is denser than current hard drives, but total cost is probably heavily in favor of hard drives for most use cases.

I'm not an expert on this, but my impression is that a lot of organizations that need a lot of space would be much happier with larger-capacity-but-slower drives because those drives can be so much cheaper than trying to build out more space.


Density is nice, but I always look at that from a total cost perspective. So the real question is how much will this drive cost. I suspect it will be at least $1.20/GB -- not unreasonable considering that the Intel enterprise SSD lineup ranges from $0.80-$1.60/GB.

With the Samsung 16TB SSD, I could fit 384TB in a 2U chassis and a total of 8.8PB in a rack (of 23 hosts). That's $10.6mm in disks in that one rack.

Or I could go with hard drives (8TB, 7200rpm, enterprisey, $700) and fit 288TB in a 4U chassis and 3.1PB in a rack. I would need three racks instead of one to match that capacity, but the disks would only cost me $832,000.

There's really no way that the fixed costs of two extra racks can make a dent in $9.7mm, even factoring in the differences in power utilization between the two. So to justify this drive you'd have to get a substantial benefit from the performance differential between a HDD and this SSD, but not so much that you need the ~82x iops-per-TB advantage of a faster NVMe drive (such as the Intel P3500).
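
The whole comparison in one place, with every assumption visible (the $1.20/GB and $700-per-HDD prices are guesses, and I'm assuming ~11 4U chassis per rack to match the 3.1PB figure above):

  # rack-level cost comparison; all prices and densities are the guesses above
  # SSD rack: 24 x 16TB per 2U, 23 hosts per rack, guessed $1.20/GB
  ssd_tb   = 24 * 16 * 23                   # 8,832 TB (~8.8 PB)
  ssd_cost = ssd_tb * 1000 * 1.20           # ~$10.6M in disks

  # HDD racks: 36 x 8TB per 4U, ~11 chassis per rack, ~$700 per disk
  hdd_per_rack = 36 * 11                    # 396 disks, ~3.1 PB per rack
  racks    = 3                              # roughly matches the SSD rack's capacity
  hdd_cost = hdd_per_rack * racks * 700     # ~$832k in disks

  print(f"SSD: {ssd_tb/1000:.1f}PB for ${ssd_cost/1e6:.1f}M")
  print(f"HDD: {hdd_per_rack*8*racks/1000:.1f}PB for ${hdd_cost/1e6:.2f}M")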



