
Samsung unveils 2.5-inch 16TB SSD - twsted
http://arstechnica.com/gadgets/2015/08/samsung-unveils-2-5-inch-16tb-ssd-the-worlds-largest-hard-drive/
======
jaawn
Every piece of storage news I've seen for the past year or two reinforces my
opinion that there is a great deal of price-fixing happening in the consumer
storage market. The price trend of 2TB HDDs, for example, just does not make
sense.

When I see that a company can now create SSDs with ~16x more capacity than the
best consumer option, I feel like something fishy is going on that is
artificially slowing the pace of larger capacity drives making it into the
hands of consumers at a reasonable price.

~~~
miahi
In the HDD market there are actually just two companies: WD and Seagate. There
are other brands (Toshiba, HGST) but these are not completely independent
(HGST is owned by WD, Toshiba still exists mostly because WD feeds it with
technology whenever they are threatened by anti-monopoly rulings, so they can
say competition still exists).

In the SSD market there are lots of brands, but only a few flash chip makers -
so it's the same lack of competition but better hidden to the consumer.

~~~
wtallis
For SSDs, there are four independent flash manufacturers, three of which are
pretty big. They all have an in-house controller product line, and there are
at least three major independent controller providers. It's a lot more
competition than hard drives.

~~~
justincormack
And the main reason the number of hard drive manufacturers is so low is that
SSD is the competition and is winning for many use cases.

~~~
jimmcslim
Cloud storage providers must be sucking up a huge amount of HDD supply... I
wonder if that is putting a floor under consumer HDD prices.

~~~
merb
That could be true, since most cloud providers don't use "enterprise" HDDs;
they just buy stuff they've tested.

------
HorizonXP
I'm reading the Innovator's Dilemma right now, and I just finished the chapter
about the storage industry. The author draws the conclusion that solid-state
drives may eventually move upmarket from cash registers and embedded
applications to PCs and such.

Having seen the move from 5.25" HDDs to 3.5" HDDs, then the move from desktops
to laptops, and now seeing SSDs becoming extremely common in laptops, tablets,
and phones, I have to believe that the author predicted the future when he
wrote the book.

Since PC sales have dropped, people are not buying as many HDDs and are buying
more SSDs, usually indirectly. Cloud infrastructure has likely gobbled up the
existing HDD supply.

But even there, SSDs are preferred for many applications, such as databases,
since they're faster overall, storage limitations be damned.

And now we're seeing the first SSD with a capacity greater than any HDD's, in a
similarly sized package. And no current HDD company has an SSD offering worth
mentioning.

It's disruption happening right before our eyes. History seems to repeat
itself all too often!

~~~
__float
Since when is Samsung not an HDD company?

~~~
jotm
~2011, when Seagate acquired their HDD manufacturing business:
[http://www.notebookreview.com/news/seagate-acquires-
samsungs...](http://www.notebookreview.com/news/seagate-acquires-samsungs-
hard-drive-business-for-1-4-billion/)

Samsung has focused on SSD storage ever since...

------
jtchang
Holy, that is a lot of storage in a very small amount of space. Besides the
fact that I want one right now, I am starting to wonder how much heat this will
generate.

A lot of 1U rack servers can fit about eight 2.5" drives. 128TB of storage in
1U is pretty crazy storage density.

Every time they reveal a larger-capacity drive, I just wonder what the backup
strategy is going to be. Longer tapes?

~~~
pyre
Redundant sets of drives... or high-density backup tapes. Tape capacities have
kept pace with drive size increases:

[https://en.wikipedia.org/wiki/Linear_Tape-
Open#Generations](https://en.wikipedia.org/wiki/Linear_Tape-Open#Generations)

Looks like LTO-10 is planned to be 48TB per cartridge.

~~~
dragontamer
LTO6 is current. LTO7 is "future". LTO8 may never happen, let alone LTO9 or
LTO10 (If SSDs prove to be superior to tapes... investments will change for
sure.)

The chief benefit of LTO is that it remains the cheapest and does in fact have
huge sequential read/write speeds. Random read/write is even worse than disks,
but for backup purposes, sequential is king.

~~~
pixl97
SSDs may have issues with long-term 'disuse': if you unplug the drive and
leave it on the shelf, data loss may occur. With most tapes you can put them
in cold storage and they should last decades.

~~~
acdha
> With most tapes you can put them in cold storage and they should last
> decades.

This is _hopefully_ true but operational history is so full of unpleasant
surprises that I would hesitate to trust any type of storage which isn't
regularly verified.

With previous generations of LTO, a colleague had encountered fun failure
modes like the media degrading rapidly (unrecoverable in less than a year)
when a tape was stored on its side, which turned out to be an “everyone knows”
fact not mentioned anywhere in the tape or drive documentation. A different
coworker had encountered some issues with a batch which had a defective
lubricant causing the surface to break down over a couple of years.

One place we worked with had to carefully de-tune a new tape drive after
learning that the old one had drifted out of alignment for at least a year
before physically failing, which meant that most of their tapes were no longer
readable by a drive in standard calibration.

This is not to say that tape doesn't have a place - analogous failures happen
for everything else and the cost-per-GB is appealing. I just don't think we
actually have a toss-it-on-the-shelf storage medium which can be assumed to
work over a long term. You can address those issues with a regimented approach
for rotation and mixing physical devices, media, and location but that
increases the cost of adding a new storage technology into the mix since you
need to develop that operational confidence for each class.

~~~
StillBored
"One place we worked with had to carefully de-tune a new tape drive after
learning that the old one had drifted out of alignment for at least a year
before physically failing, "

This sounds like a story two decades old; LTO and all modern tape drives have
servo tracks on the tapes, so the drive realigns itself to the tape track on
the fly as the media passes the head. If the drive cannot do this, you get
track-following check conditions during the write.

~~~
acdha
That's certainly possible – I thought that was their first generation LTO
system but it might have been the previous one which was being replaced. It
was a decade ago so both would still have been in service at that time.

The main point in mentioning it wasn't to say that tape is terrible but just
that each unique class of hardware brings unique challenges which might not be
obvious at first until you have a fair amount of operational time. (Thinking
about the people who learned the hard way why RAID arrays should mix hard
drives across batches and manufacturers)

------
IanDrake
Can someone explain to my why SSDs still cost more than HDDs?

When I look at all the moving parts in an HDD, I'm shocked they can still be
produced for less.

~~~
mozumder
You're paying for $5 billion in lithography steppers and other fab equipment
for one factory (you'll LOL at the cost of 1 deep-UV immersion litho stepper).
That's amortized over 5 years. There's also masks, and process research costs,
in addition to the basic materials costs.

Did you know that a silicon wafer is a perfect crystal, structured like a
diamond? Silicon sits right below carbon in the periodic table, which means it
has the same number of outer-shell electrons. Making that ain't cheap.

And if one atom is in the wrong place, you have to throw away the chip.

That kind of core expense doesn't exist in a hard drive factory. The disks in
a hard drive don't have to be perfect crystals, for example. It's a LOT more
expensive to produce chips.

Factor in all the distribution and sales costs, and you'll understand why it's
so expensive.

~~~
maxhou
> And if one atom is in the wrong place, you have to throw away the chip

this is certainly the case for CPU

but DRAM & NAND? That's the typical case of designs where you can add
redundancy to accommodate manufacturing defects.

~~~
mozumder
At the cost of increased die-size.

If it helps any just think of the costs as buying diamonds.

"Wow that's 16TB of diamonds!"

or:

"this GPU uses a bigger diamond than that GPU"

~~~
hbosch
It doesn't exactly help your cause that many people also believe the cost of
diamonds is artificially inflated as well.

~~~
kedean
The cost of diamonds IS artificially inflated. They are useful for purposes
other than the one in which the price inflation is a big deal, but it is a
demonstrable fact that the price is inflated.

~~~
simoncion
The cost of industrial diamonds is -one presumes- _not_ subject to much
artificial inflation.

------
intrasight
A version of Moore's Law seems to apply to storage, which is very much a good
thing. The first IBM Winchester I used cost a couple of years' salary and
stored 30MB on 14" platters. The next I used was an 8" ~150MB drive that only
cost a couple of months' salary. Forward 30 years and I can buy a 500GB drive
the size of a stick of gum for a couple of hours' salary. 30 more years? Can't
wait to see. I
assume I will eat the stick of gum and by doing so know everything in the
Library of Congress.

~~~
jawngee
c:> park.exe

------
vegabook
Moore's law is passing the baton from GHz to the storage stack. Whereas you
once had a simple RAM + HD setup, you now have a teamworking hierarchy of
storage technologies: Cache / 3d stacked mem / DRAM / X-point / SSD / HD. Each
one of these is behaving just like GHz did: doubling in speed/capacity every
18 months. Given that this is where the performance bottleneck has been, we're
looking good on exponential performance upside for a long time to come if we
extrapolate the recent trend. Excellent.

~~~
pixl97
Won't be long till the network is the bottleneck again. 10Gb is still
relatively expensive for the end user, and 40 and 100Gb are out of reach for
most budgets.

~~~
vegabook
Funnily enough, I was just this morning googling for 10Gbit Ethernet cards and
a whole bunch of "copper-wire 10Gbit will hit primetime in 2015" results came
up. Totally agree. Throughput is where it's at.

------
MaysonL
The really amazing thing is one of their other announcements [0]:

 _Samsung has designed the PM1725 to cater towards next-generation enterprise
storage market. This new half-height, half-length card-type NVMe SSD offers
high-performance data transmission in 3.2TB or 6.4TB storage capacities. The
new NVMe card is quoted with random read speed of up to 1,000,000 IOPS and
random writes up to 120,000 IOPS. In addition, sequential reads can reach up
to an impressive 5,500MB/s with sequential writes up to 1,800MB/s. The 6.4TB
PM1725 also features five DWPDs for five years, which is a total writing of
32TBs per day during that timeframe._

[0]
[http://www.storagereview.com/samsung_announces_tcooptimized_...](http://www.storagereview.com/samsung_announces_tcooptimized_highperformance_ssds_pm1633_pm1725_and_pm953)
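The quoted endurance figures are easy to sanity-check: 5 drive writes per day (DWPD) on a 6.4TB drive is exactly the 32TB/day in the quote. A quick sketch, using only the numbers from the quote above:

```python
# Sanity-check the quoted PM1725 endurance figures.
# DWPD = drive writes per day; capacity and warranty period from the quote.
capacity_gb = 6400   # 6.4 TB drive
dwpd = 5             # quoted endurance rating
warranty_years = 5

writes_per_day_tb = capacity_gb * dwpd / 1000           # TB written per day
lifetime_writes_pb = writes_per_day_tb * 365 * warranty_years / 1000

print(writes_per_day_tb)   # 32.0 -- matches the "32TBs per day" in the quote
print(lifetime_writes_pb)  # 58.4 PB of total writes over the warranty period
```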

~~~
skuhn
That does kick it up a notch versus Intel's current top of the line. The P3700
is max 2.0TB, 450,000 read iops, 175,000 write iops, and 2,800 / 2,000 MB/s.

It's also $3.25/GB for 800GB vs the Samsung PM1725's $2.15/GB for 800GB.

Hopefully there is a P3710 waiting in the wings that is competitive with
Samsung's new offerings. I have had infinitely better luck in terms of
reliability and performance consistency with Intel than any other SSD brand,
and I think I'm not alone on that front.

------
ChuckMcM
Interesting, given the reliability news Facebook posted about their SSDs. With
a 5x10^11 UBER you could not even read all the sectors of a 16TB disk reliably.
Something I'll be looking at when I get my hands on one.
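Whatever the exact spec turns out to be, the scale of the problem is simple arithmetic: one full read of a 16TB drive touches ~1.28x10^14 bits, and the expected error count is just bits read times the error rate. The UBER values below are illustrative spec-sheet-style figures, not Samsung's published numbers:

```python
# Expected unrecoverable bit errors for one full read of a 16 TB drive.
# The UBER values are illustrative: 1e-14/1e-15 are common HDD/consumer-SSD
# spec figures, 1e-16 is typical for enterprise drives.
drive_bytes = 16e12
bits_read = drive_bytes * 8   # 1.28e14 bits per full pass

for uber in (1e-14, 1e-15, 1e-16):
    expected_errors = bits_read * uber
    print(f"UBER {uber:g}: ~{expected_errors:.3f} expected bit errors per full read")
```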

------
markhahn
what's that in stationwagons full of LTO6 tapes?

~~~
tired_man
Beats me. I've never dealt with fractional station wagons before...

A compressed LTO6 is 6.25 TB, right? Let's just go with 3 of them.

So, I figured out how many carts we need. You calculate the fractional station
wagon part. My math was never _that_ good ;-)
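For what it's worth, the cartridge math works out as follows, assuming the 6.25TB compressed capacity mentioned above:

```python
import math

# Cartridges needed to hold one 16 TB drive on LTO-6 tape,
# at the 6.25 TB compressed capacity cited above.
drive_tb = 16
lto6_compressed_tb = 6.25

cartridges = math.ceil(drive_tb / lto6_compressed_tb)
print(cartridges)  # 3 -- 16 / 6.25 = 2.56, rounded up to whole tapes
```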

~~~
Someone1234
Just as a point of reference, a single LTO6 tape only costs around $25-$35. So
we're talking about $105 at absolute most for three.

That is likely to be cheaper than a 16 TB SSD for a very long time to come.
Tapes aren't going anywhere.

~~~
TheCondor
That's the problem: tape isn't going anywhere. A decade back, it boasted a
huge capacity advantage. Now, with all the newer tech and the advances in
spinning-media storage, it can barely hold a big drive's worth of data. And
the tape drives are expensive... LTO-7 would need to hold something like 50T
on a thousand-dollar tape drive to look really interesting again.

~~~
jl6
At scale, tape still has the lowest cost per byte of any storage medium.

~~~
acdha
It's true for sufficiently large values of “at scale” but tape has uniquely
high overhead costs – hardware, software, staffing – which have to be balanced
out by those lower storage costs. HDD/SSD costs have been declining at a much
faster rate so we're already at the point where only the largest storage
consumers are going to reach the point where they see a return from the
initial investment in tape.

------
Gladdyu
I wonder how this will compare to Intel's 3D NAND flash chips
([http://www.ipwatchdog.com/2015/08/12/intel-micron-
develop-3d...](http://www.ipwatchdog.com/2015/08/12/intel-micron-
develop-3d-xpoint-as-an-eventual-successor-to-nand-flash-memory/id=60268/)).
Some competition on similar technologies is never wrong!

~~~
wtallis
You linked to an article about Intel's 3D XPoint memory, which isn't NAND
flash or any other kind of flash. They are also doing 3D NAND flash, and
that's what will be competing against Samsung's 3D NAND flash.

------
AlexEatsKittens
I'm slightly surprised by the numbers given for IOps. The example they give is
48 drives giving 2MM IOps:

2,000,000 / 48 = 41,666.66… IOps

45k IOps for 16TB limits its use cases a bit. I don't know enough about
storage to make an educated guess, but anyone know what the constraint there
might be? Aren't there controllers that can do 1MM IOPS on single EFDs? 45k is
still a ton of operations, but I expected more somehow.
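Backing the per-drive figure out of the announcement's aggregate number (48 drives, 2M IOPS) gives a bit under the 45k mentioned:

```python
# Per-drive IOPS implied by the announcement's example configuration:
# 48 drives delivering a combined 2,000,000 IOPS.
total_iops = 2_000_000
drives = 48

per_drive = total_iops / drives
print(round(per_drive))  # 41667 -- well below typical enterprise-SSD figures
```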

~~~
skuhn
45k iops is not terrible, but it's not competitive with current Intel
enterprise SSDs (S3500 is 70k+, S3710 is 85k). I suspect that Samsung had to
make huge sacrifices to the controller and DRAM portions of the drive to fit
that many NAND chips into the 2.5" form factor. They're basically trying to
create a new class of flash storage, which is space-optimized rather than
performance-optimized.

I'm sure there's a market there, but I don't know how big it is. This is
denser than current hard drives, but total cost is probably heavily in favor
of hard drives for most use cases.

I find it particularly confusing that Samsung seems to have gone for a SAS
SSD versus NVMe. NVMe would allow them to do a PCIe card form factor, which
would surely be easier from a physical space perspective. And it's not like
anyone has a PCIe flash product at 16TB either -- Fusion-io tops out at 6.4TB.

NVMe also might allow them to improve the iops. Intel's P3500 NVMe is 430k
iops at 2TB. Night and day compared to this Samsung drive. So in one 2U
chassis you could have any of:

    
    
      24x2TB Intel P3500
      = 48TB
      = 10,320,000 iops (read 4k)
    
      24x1.6TB Intel S3500
      = 38TB
      = 1,572,000 iops
    
      24x16TB Samsung PM1633a
      = 384TB
      = 1,000,000 iops
    
      (meanwhile HDD would have far lower iops, but also probably a lot cheaper)
    

While the Samsung one is alluring from a space perspective, I can't really see
replacing either the 'fast SSD' tier or the 'slow HDD' tier with it in my
deployments.
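The chassis figures above can be recomputed from per-drive specs. The PM1633a per-drive IOPS here is just the quoted 1M aggregate divided by 24 (Samsung hasn't published a per-drive number), and the other figures follow the list above:

```python
# Recomputing the 2U chassis comparison: (name, drive count, TB/drive,
# IOPS/drive). Per-drive IOPS figures are backed out of the aggregates above.
configs = [
    ("Intel P3500",     24,  2.0, 430_000),
    ("Intel S3500",     24,  1.6,  65_500),
    ("Samsung PM1633a", 24, 16.0,  41_666),
]

for name, n, tb, iops in configs:
    print(f"{name}: {n * tb:.0f} TB, {n * iops:,} iops")
```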

~~~
e12e
> I suspect that Samsung had to make huge sacrifices to the controller and
> DRAM portions of the drive to fit that many NAND chips into the 2.5" form
> factor.

Really? I've got a couple of 128GB SDHC cards here -- and while they might be
less performant than SSDs... I just tried to stack them on the back of a 2.5"
hdd -- and I guesstimate that you'd at _least_ be able to fit 6x6=36 of them
(plastic frame and all) on the back of a 2.5" drive -- and stacking them 5
high would still be way below the width of a 2.5" hdd.

And that's not just raw 128GB chips -- that stack includes 36x5 controllers
etc. (Not to mention lots of plastic.)

I'm prepared to be dead wrong -- but "fitting" 16TB of flash into the behemoth
size that is a 2.5" hdd -- doesn't seem like much of a challenge?

~~~
skuhn
I don't know what the Samsung drive looks like internally, and obviously they
did figure some way to do it. For comparison, here's a teardown of an Intel
S3710: [http://www.tomsitpro.com/articles/intel-
dc-s3710-enterprise-...](http://www.tomsitpro.com/articles/intel-
dc-s3710-enterprise-ssd,2-915-2.html)

It has 16 NAND packages, the controller, two 1GB DRAM chips and capacitors. No
idea if the Samsung drive includes capacitors, but I sure hope it does.

The Intel board fits in a 7mm enclosure, but 2.5" enclosures can go up to
15mm. To be generous, let's say that Samsung fit two double-sided circuit
boards into the enclosure and also squeezed another 4 NAND packages in per-
board. The NAND dies are 256Gbit vs Intel's 128Gbit, so with similar NAND
packages that gets them to 10TB.

So now you either need to fit more NAND per-package -- no idea what die size
they are -- or add more packages. Maybe their packages are physically smaller
or maybe they're able to get >256GByte per-package. Either would help
tremendously.

But regardless, that is a lot of packages for your controller to handle and if
you're constrained on physical space you aren't going to be able to put
additional DRAM chips on the board. You could replace the 1Gbit chips with
8Gbit chips in a similar footprint and maintain your 1,000:1 ratio of
NAND:DRAM, but those chips will obviously cost a substantial amount more. I
feel like this drive is going to really blow minds in terms of cost.

------
logicallee
if they really wanted to make waves they would unveil the world's fastest
_AND_ the world's largest hard-drive, two in one, with an onboard battery and
hybrid 64, 128, or 256 GB of RAM (not SSD) in 2x, 4x, or 8x 32gig dimms
exposed as a physical drive, costing +/- $800, $1600, and $3200 respectively,
in addition to the 16 TB second physical drive, all integrated in one package
so you can't disconnect the battery and nuke your lightning-fast drive without
being extremely aware that you're doing so.

The hard drives would have ironclad firmware that keeps the RAM refreshed
until its battery goes down to 15% (or whatever the conservative 10 minutes of
power is), at which point it takes the ten minutes to dump the contents of
that RAM to SSD, and reverts to having that drive also be SSD until the power
is reconnected long enough to charge battery back up to 80%. Then it reads it
back into RAM and continues as a Lightning Fast 64 GB + Very fast 16 TB drive.
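The firmware policy described here is essentially a two-threshold state machine. A minimal sketch; the 15% and 80% thresholds come from the proposal above, while everything else (class and method names) is hypothetical:

```python
# Sketch of the proposed battery policy: dump RAM to SSD below 15% charge,
# restore to RAM once recharged past 80%. dump_to_ssd / restore_from_ssd
# stand in for firmware routines.
DUMP_THRESHOLD = 0.15
RESTORE_THRESHOLD = 0.80

class HybridDrive:
    def __init__(self):
        self.mode = "ram"   # serving the fast drive from battery-backed RAM

    def on_battery_level(self, level):
        if self.mode == "ram" and level <= DUMP_THRESHOLD:
            self.dump_to_ssd()
            self.mode = "ssd"   # fall back to serving from the onboard SSD
        elif self.mode == "ssd" and level >= RESTORE_THRESHOLD:
            self.restore_from_ssd()
            self.mode = "ram"   # power is stable again; back to RAM speed

    def dump_to_ssd(self):
        pass  # firmware would stream the RAM contents to the backing SSD

    def restore_from_ssd(self):
        pass  # and read them back into RAM once recharged

d = HybridDrive()
d.on_battery_level(0.10)
print(d.mode)  # ssd
d.on_battery_level(0.85)
print(d.mode)  # ram
```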

You would store your operating system on the lightning-fast drive.

The absolute nightmare failure state isn't even that bad, as even though the
RAM drive should be as ironclad as SSD, in case it ever should lose power
unexpectedly through someone opening the device and disconnecting the battery
or something, it can still periodically be backed up, so that if you pick up
the short end of six sigma, you can just revert to reading the drive from SSD
rather than RAM and lose, say, at most 1 day of work.

thoughts? I bet a lot of people would be happy to pay an extra $800 to have
their boot media operate at DIMM speed, as long as the non-leaky abstraction
is that it is a physical hard drive, and the engineering holds up to this
standard.

There is a lot of software out there that is very conservative about when it
considers data to be fully written - it would be quite a hack for Samsung to
hack that abstraction by doing six or seven sigma availability on a ramdrive
with battery and onboard ssd to dump to.

~~~
rzzzt
The basics of your idea were captured in a device in 2009:
[http://2xod.com/articles/ANS_9010_ramdisk_review/](http://2xod.com/articles/ANS_9010_ramdisk_review/)

It would be very interesting to see a similar product being introduced using
contemporary technology, though. One question is what sort of interface it
would communicate over to leverage the higher transfer speed.

~~~
jotm
it goes way back to 2005:
[http://www.anandtech.com/show/1742](http://www.anandtech.com/show/1742)

~~~
nitrogen
I think there have been RAM-based storage devices even older than that,
connected to IDE. I have no idea how one would go about finding a reference
these days though.

~~~
jotm
Yeah, I'm pretty sure I remember a similar card being announced when DDR was
being widely adopted, but I can't find anything (which is pretty interesting
in itself on the Internet -- I once searched for specs on a dial-up modem and
could not find a single mention of it, like it never existed :-))...

------
riobard
Am I right to assume that NAND flash has higher storage density than magnetic
disks? I've been trying to find some definitive data about this but failed so
far. I'd really appreciate it if someone could point me in the right direction
to search.

~~~
jyxent
I found a paper from 2013 that compares storage densities up to 2012:

[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6...](http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6558421)

In 2012 they list the density of magnetic disks as 750 Gb/in^2 and NAND flash
as 550 Gb/in^2. I'm not sure how the numbers have changed with 2D NAND, but 3D
NAND probably pushes the density way past magnetic.

------
ck2
Whatever the price, you need to double the cost, because you'll need to run
them in at least RAID 10 to be remotely safe.

