

SSD with New HSDL Interface Boasts Gobs of Bandwidth - MojoKid
http://hothardware.com/Reviews/OCZ-IBIS-HSDL-Solid-State-Drive-Preview/

======
junkbit
The single port version is akin to a dedicated RAID card with 4 Sandforce-1200
SSDs.

The main benefit of this interface is the four-port version in a single PCI
Express 2.0 x16 slot. AnandTech predicts it might reach 2.5GB/s.

Hopefully, if this takes off, OEMs can drop the internal SATA RAID inside the
device for even further gains.

EDIT: also very encouraging is the internal garbage collection that goes on
while the disk is idle. TRIM does not work through RAID, so it is great to see
that performance can still be regained after prolonged use.
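
As a rough sanity check on that figure (using the standard PCIe 2.0 per-lane
rate and the ~700MB/s single-drive speed quoted elsewhere in this thread, not
anything from the article):

    # PCIe 2.0: ~500 MB/s usable per lane after 8b/10b encoding
    echo "x16 slot ceiling: $((500 * 16)) MB/s"   # 8000 MB/s
    # four HSDL ports at roughly the single-drive read speed
    echo "4-port estimate:  $((700 * 4)) MB/s"    # 2800 MB/s

So a four-port card would be HSDL-limited at roughly 2.8GB/s, well under the
slot's ceiling, which squares with AnandTech's 2.5GB/s prediction.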

------
jtchang
SSD technology is a game changer for data centers. Traditional hard drives
generate loads of heat, so outfitting a whole data center with SSDs reduces
cooling needs significantly, which directly affects the bottom line (cooling
is the #1 expense for the vast majority of data centers).

Now I'm just waiting for our broadband to inch up to 740MB/sec :)

~~~
reitzensteinm
Actually, this SSD in particular consumes about as much power as a 10k RPM
Raptor desktop drive when active
([http://www.tomshardware.com/charts/2009-3.5-desktop-hard-dri...](http://www.tomshardware.com/charts/2009-3.5-desktop-hard-drive-charts/Power-Requirement-Video,1026.html)),
although because it serves requests quickly, it can return to its idle state
much sooner.

In terms of online gigabytes per watt, SSDs will lose out badly to traditional
hard drives. IOPS per watt, of course, is a whole different story, and servers
bottlenecked on hard drive IO will potentially run an order of magnitude
faster.
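
To put very rough, assumed numbers on that (neither figure below comes from
the article; they are just ballparks for current hardware):

    # assumed ballparks: ~150 random IOPS at ~6 W for a 10k RPM drive,
    # ~20000 IOPS at ~10 W for a SandForce-class SSD
    echo "HDD: $((150 / 6)) IOPS/W"      # ~25
    echo "SSD: $((20000 / 10)) IOPS/W"   # ~2000

That is closer to two orders of magnitude on random IO, even while losing
badly on capacity per watt.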

So rather than outfitting whole datacenters with them, both technologies will
live side by side for quite a few years to come. In fact, I wouldn't be
surprised if spinning rust lives on longer in data centers than mainstream PCs
and laptops.

~~~
jcroberts

      >In terms of online gigabytes per watt, SSDs will 
      >lose out badly to traditional hard drives.
    

Incorrect. You forgot time. Think about it this way (with very rough numbers):
the traditional Raptor drive will move data at 70 MiByte/s. These new SSDs
move data at 700 MiByte/s. Assuming they both consume equivalent power, this
new SSD will have a gigabyte-per-watt rating TEN TIMES BETTER than traditional
drives.
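
Spelling that arithmetic out (same very rough numbers, and assuming ~10 W
active for both drives, which is my simplification):

    # MiB moved per watt-hour of active power
    echo "Raptor: $((70 * 3600 / 10)) MiB per Wh"    # 25200
    echo "IBIS:   $((700 * 3600 / 10)) MiB per Wh"   # 252000

Same watts, ten times the data moved.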

~~~
atonse
I think the parent poster is talking about the amount of storage, not the rate
of transfer. And by that metric, seeing how you can get 1TB hard drives for
$70, I don't see that changing anytime soon.

Of course, it seems wasteful to need insanely fast access to things like
movies or media/archival data in your home library.

~~~
jcroberts
Even if reitzensteinm was talking about capacity-per-watt, he's still wrong.
The specific drive he mentioned was the "Raptor" model, but that's an old,
slow, small (150GB), and power-hungry (9.5W) disk. The newer WD "VelociRaptor"
10K RPM disk is more likely what he meant, and is a fairer comparison. The new
VelociRaptor only has a 600 GB capacity. The highest-capacity OCZ IBIS drive
is 960 GB.

<http://www.wdc.com/en/products/Products.asp?DriveID=821>
[http://www.ocztechnology.com/products/solid-state-
drives/hsd...](http://www.ocztechnology.com/products/solid-state-
drives/hsdl.html)

OCZ IBIS power: 6.6 Watts idle, 9.5 Watts active

WD VelociRaptor power: 4.30 Watts idle, 6.20 Watts active

Now we do the math. A total of 9600 GB would be 10 OCZ disks or 16
VelociRaptor disks.

16 x 6.20 W = 99.2 W -- WD VelociRaptor

10 x 9.5 W = 95 W -- OCZ IBIS

The same is true for idle.

If we were not limited to the VelociRaptor, and you get into some of the very
slow but very huge disks (1-2 TB), sure, you could beat the rather specific
OCZ IBIS models on capacity-per-watt. But it is unfair to open only one side
of the rotating-vs-SSD comparison to every disk made, and there are
higher-capacity SSDs with even better power consumption numbers than the OCZ
IBIS.

As you noted, the metric of Cost-Per-Capacity is often 10 or more times more
favorable for rotating disks. If you don't have a valid _need_ for the speed
offered by SSDs, they are certainly not worth the added costs.

~~~
reitzensteinm
I honestly don't think we'd disagree on much if we sat down over a beer and
discussed the issue. I'm as excited about SSDs as you are, and it seems, from
your post above at least, that you agree speed is the primary driver of SSD
adoption right now.

It was not my intent to be argumentative - I just read your post and thought:
power consumption as a benefit? Are you sure you've done the math on this?

By the way, I couldn't find any 2 TB SSDs that were not PCIe cards, so I do
stand by my original analysis. With 1 TB SSDs, you'll still need 2x the
servers. If you can piece together an SSD solution, ignoring cost and
including server wattage, that can beat out an array of 2 TB drives on
wattage, I will retract my original statement. You can use the idle power as
the active power for the drives.
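
As a sketch of why I expect the big drives to win that contest (the ~6 W
figure for a 2 TB drive is my assumption, not a spec sheet number; the 6.6 W
idle figure for the IBIS is from your post above):

    # 8 TB of online storage, idle-as-active per the handicap above
    echo "4 x 2TB HDD: $((4 * 6)) W"                        # assumed ~6 W each
    echo "8 x 1TB SSD: $(awk 'BEGIN { print 8 * 6.6 }') W"  # 52.8 W

And that is before counting the wattage of the extra servers needed to house
twice as many drives on the SSD side.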

------
zokier
Why is it better to have the RAID controller integrated on the disk instead of
connecting several (cheaper) disks separately and using motherboard or
software RAID?

~~~
bcl
Faster throughput, due to fewer buses to pass through. The internal RAID,
which spreads the load across multiple groups of flash chips, likely also has
something to do with why it appears to be so fast.

In addition, by using the SI3124 chip, I think this thing will 'Just Work(tm)'
with recent Linux kernels -- it's the same chipset as in the SATA card I use
for SW RAID. I'd be happy to test my theory if someone would send me a drive ;)
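
In the meantime, here's a quick way to check any recent box (sata_sil24 is the
stock Linux driver for the Silicon Image 3124 family; nothing below is
IBIS-specific):

    # does this kernel ship the SiI 3124 driver?
    modinfo sata_sil24 | head -n 3
    # is a 3124-based controller on the PCI bus right now?
    lspci -nn | grep -i 3124

If modinfo finds the module, odds are good the HSDL card would be picked up
like any other sil24-based SATA controller.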

------
ck2
Why not give them a window right into memory (a la HIMEM or EMM), so that SSDs
can also have "execute in place" ability someday, when they are fast enough?

~~~
Andys
Intel's looking at doing this, with an interface from the southbridge and
probably soon from the CPU itself. I think eventually for low-end PCs and
netbooks, CPUs will just have big L3 (DRAM) caches, and flash behind that, and
no system DRAM at all.

~~~
pjscott
If they can get the power of the CPU down low enough, they could probably bolt
on a DRAM layer, right on top of the CPU die, and connect them with through-
silicon vias. Your L3 cache could have the same die size as the CPU, and the
connection to the CPU could be very fast, both in bandwidth and in latency.

Actually, these "low-end PCs and netbooks" are sounding like speed demons, the
more I think about them.

~~~
Andys
On-die DRAM cache is a solved problem (though IBM probably owns the patent).
DRAM itself is not so power intensive that it would present a problem for the
CPU's TDP.

~~~
pjscott
I checked, and apparently they've now invented on-die DRAM that doesn't
require extra processing steps. Nice! Die stacking would still have advantages
in reducing wire delay (thus reducing the L2 miss penalty) and increasing the
yield.

------
jfb
This is utterly bonkers. I badly want one.

------
MojoKid
The 240GB model offers up to 740MB/sec maximum read throughput and 720MB/sec
maximum write throughput. Of course, blazing-fast SSD technology like this
also comes at a premium, as you'd expect.

~~~
mhansen
For some perspective:

    cat /dev/sda | pv > /dev/null

This reads at 188MB/s on my new ThinkPad's Samsung SSD.

~~~
mrb
cat'ing and piping into pv is a poor way to benchmark, as the CPU overhead is
too high (the buffer size for a pipe kernel object is 4kB, forcing a high
number of context switches). Try "dd bs=32k </dev/sda >/dev/null" and monitor
with "iostat -m 1"; you will see a higher throughput.

~~~
mhansen
Thanks for the tip!

~~~
icefox
what was your result?

~~~
mhansen
The same - my CPU was only about 50% saturated on the pv command (if the disk
were twice as fast, I guess the CPU would become the bottleneck).

~~~
mrb
So your SSD is slow-ish... FYI most achieve ~280MB/s these days, which is the
practical ceiling of a SATA 300MB/s link: 8b/10b encoding turns the 3Gb/s line
rate into 300MB/s of payload, and protocol overhead eats the rest.
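
If you want to confirm what link speed you're actually negotiating, the kernel
logs it at boot (the exact SStatus/SControl values below are just an example):

    dmesg | grep -i "SATA link up"
    # e.g.: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)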

