

Micron Demos PCIe RealSSD P320h, Achieves Over 3GB/s Of Sustained Throughput - bigwophh
http://hothardware.com/News/Micron-Demos-RealSSD-P320h-PCIe-SSD-Achieves-Over-3GBs-Of-Sustained-Throughput/

======
rektide
Fuck yes.

This isn't about GB/$.

This is about IOPS/($ x density) and Throughput/($ x density). Databases and
datastores used to be limited by IOPS, and SSDs are altering the balance of
where that bottleneck occurs: we've already seen absurd 9x SSD systems[1] and
24x SSD systems[2] running off three or four SATA/SCSI cards; this offers
those levels of performance in a single package.

3GBps is 24Gbps. A mere two of these will max out the IO Hub on a Nehalem,
which is good for 40Gbps[3]. As for getting that data into the CPU, the top-of-
the-line MP Xeon offers four QPI links good for 25.6Gbps apiece. That's
roughly 100Gbps, which four of these could provide, and it stands as a fixed
upper bound on how much data a 4-way SMP system can be fed (and thus crunch)
at once, using expensive 4-way CPUs to do it. The throughput limit is now
(erm, will be) the platform, not the IO storage. This is huge. Wait, no,
strike that: it's incomprehensibly small, small and fast.
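The arithmetic above is easy to sanity-check in a few lines (every figure
below is a claim from this comment, not a verified spec):

```python
# Back-of-the-envelope platform-bandwidth check.
# All constants are the parent comment's claimed figures.
GBPS_PER_CARD = 3 * 8        # 3 GB/s sustained per card -> 24 Gbps
IOH_GBPS = 40                # claimed Nehalem IO Hub ceiling
QPI_LINK_GBPS = 25.6         # claimed per-link QPI bandwidth
QPI_LINKS = 4                # top-end MP Xeon

platform_gbps = QPI_LINKS * QPI_LINK_GBPS           # ~102.4 Gbps
cards_to_max_ioh = IOH_GBPS / GBPS_PER_CARD         # ~1.7 -> two cards
cards_to_max_qpi = platform_gbps / GBPS_PER_CARD    # ~4.3 -> four-ish cards

print(f"per card: {GBPS_PER_CARD} Gbps")
print(f"cards to saturate the IO Hub: {cards_to_max_ioh:.2f}")
print(f"cards to saturate 4x QPI: {cards_to_max_qpi:.2f}")
```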

References:

[1] "Battleship mtron" <http://www.nextlevelhardware.com/storage/battleship/>

[2] "Samsung SSD Awesomeness"
<http://www.youtube.com/watch?v=96dWOEa4Djs&fmt=22>

[3] "Building a Single Box 100Gbps Router" via
<http://shader.kaist.edu/packetshader/>

~~~
rektide
I'd love to hear about the ASIC this runs on.

Will it present SATA's AHCI interface? Will it eschew SATA altogether?

~~~
wmf
AHCI is a serious bottleneck because it requires PIO reads and has a queue
depth limit of 32. PCIe SSDs should use the NVM Express interface, although
this card is probably too old to support it.
<http://www.intel.com/standards/nvmhci/index.htm>

~~~
otterley
AHCI supports DMA, which has been the preferred transfer standard in ATA since
the mid-1990s. Nobody in their right mind would intentionally use the PIO
modes except on very very old hardware.

And for a device that can handle 320,000+ IOPS, a short queue depth won't be a
bottleneck.

~~~
wmf
Sorry, I meant that AHCI requires the driver to do a PIO read to set up each
command, even though the data is transferred using DMA.

This card has 32 flash channels and 128 or more NAND dies, so a queue depth of
32 would leave most of the dies idle.
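The starvation math, assuming the channel/die counts above, is simple: with at
most 32 commands outstanding, at most 32 dies can be busy at once.

```python
# Best-case die utilization under a fixed queue depth.
# 128 dies is the parent comment's "128 or more" lower bound.
dies = 128
queue_depth = 32  # AHCI's limit

max_utilization = min(queue_depth, dies) / dies
print(f"best-case die utilization: {max_utilization:.0%}")  # 25%
```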

~~~
otterley
I'm not sure it's relevant. Queueing in the transfer protocol is a useful
feature to address the limitations of physical hard disks (i.e., to delegate
the ordering of temporally-proximate I/O operations to the device, which
usually can do it better than the host OS can). With flash disks, optimization
of seeks is unnecessary (since there are no seeks), and they are usually so
fast that the command queues rarely grow.
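One way to frame the disagreement is Little's law: sustainable IOPS = queue
depth / per-command latency. A sketch with assumed latencies (the 100us figure
is illustrative, not a measured number for this card):

```python
# Little's law: throughput = concurrency / latency.
def max_iops(queue_depth, latency_s):
    return queue_depth / latency_s

# At an assumed 100 us per command, QD 32 caps out right around this
# card's rated 320k IOPS -- no headroom. Any added latency drops it
# below the rating, which is where the queue depth starts to matter.
print(max_iops(32, 100e-6))  # ~320,000 IOPS
print(max_iops(32, 150e-6))  # ~213,000 IOPS, under the rated figure
```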

------
rosser
What's the crash-safety/power-loss durability on these guys?

The press release mentions OLTP workloads, but if they don't have supercaps or
a battery or something, you'd have to be an absolute idiot to trust your data
to them, no matter how compelling the performance numbers. The pictures don't
show any connectors for a battery or capacitor-looking doodads (or at least
anything I recognize as such), so I'm extremely skeptical. Unfortunately, the
linked site is down for "maintenance", and some quick googling yields nothing
on-point.

(Edit: formatting)

~~~
jpitz
If they don't cache writes, and they honor fsync calls, then not having a
supercap isn't such a liability. That said, when a vendor announces an
enterprise-targeted storage product without mentioning reliability, it likely
isn't there.

There's never any excuse for failing to perform your own power-plug tests.

------
huxley
Would be amazing to see one of these built for Thunderbolt since it is
essentially PCIe over a serial interface.

------
rorrr
We're at a point where regular hard drives are ridiculously cheap, $60-80 for
2TB (2.9-3.9 cents per GB), while regular SSDs run $1.5-$2 per GB.

There's a huge gap in between.

Hybrid SSD/HDD drives were supposed to fill that niche, but they're still
pretty crappy and don't really solve any of the HDD problems.

~~~
Nrsolis
One of my biggest questions is the lifetime of these drives. I know that flash
memory has a certain number of write-cycles but has there been any study done
on how long these drives last in an almost constant write environment like
OLTP?

~~~
pmh
IIRC, SLC NAND has ~100,000 write cycles and MLC ~10,000 (enterprise-level
SSDs almost exclusively use the former). I'm not aware of any recent studies
specifically on durability, but given that there are no moving parts I'd
imagine that SSDs last at least as long as mechanical drives. There's also
this bit from the press release: "[The] 700GB drive [is] able to write 28
terabytes of data every day for five years."
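Taking the press-release figure at face value, the implied write-cycle budget
works out like this (ignoring write amplification and spare area):

```python
# Sanity-check the press release's endurance claim against the
# ~100k-cycle SLC rating mentioned above.
capacity_gb = 700
tb_per_day = 28
days = 5 * 365

total_tb = tb_per_day * days                  # 51,100 TB over five years
full_writes = total_tb * 1000 / capacity_gb   # full-drive write cycles

print(f"total written: {total_tb} TB")
print(f"implied P/E cycles: {full_writes:.0f}")  # 73,000
```

That's about 73,000 full-drive writes, which fits under a ~100k SLC rating
only if write amplification stays below roughly 1.37x.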

~~~
wtallis
There's a big tradeoff between reliability and density: 50nm MLC is good for
around 10k cycles, but 34nm and 25nm MLC are rated in the 3k-5k range, and
it's not clear yet whether 20nm MLC will be able to stay in that range. SLC is
much more reliable, but also much less dense than MLC.

It's easy to maintain overall device reliability (and performance) by setting
aside more spare area, but that negates a lot of the benefit of moving to a
smaller process. (And for a given drive size, having fewer flash chips to
stripe across also eats into the gains from the smaller process.)
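A rough model of the tradeoff, in terms of total host bytes writable per unit
of raw flash (cycle counts from above; the write-amplification figures are
assumptions for illustration, with the newer process given more spare area and
hence a lower WA):

```python
# Total host TB writable before the rated P/E cycles are exhausted.
def drive_tbw(raw_capacity_tb, pe_cycles, write_amplification):
    return raw_capacity_tb * pe_cycles / write_amplification

# Same 1 TB of raw flash, parent comment's cycle ratings, assumed WA:
print(drive_tbw(1, 10_000, 2.0))  # 50nm MLC: 5000 TB
print(drive_tbw(1, 4_000, 1.5))   # 25nm MLC, more spare area: ~2667 TB
```

Even with extra spare area pulling write amplification down, the newer process
ends up with roughly half the write budget per raw byte, which is the point of
the parent comment.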

