

Cloud Block Storage Now In Unlimited Availability - bretpiatt
http://www.rackspace.com/blog/cloud-block-storage/

======
staunch
Unless they're doing dedicated disks per user (which they aren't, it seems)
and dedicated networking, there's really no reason to think they'll do better
than AWS, once there are a significant number of users on the system.

The idea of pooling large numbers of drives, dividing them up by space, and then
sharing them over a network (IP or not) will never get you the kind of
performance you get from dedicated locally-attached drives.

For server storage, it just makes more sense (IMHO) to have local disks with
unquestionably reliable performance characteristics. The flexibility of EBS-
type solutions is nice, but it comes at too high a cost.

~~~
ehutch79
It's reasonable to think that this will have similar performance to
'cloudfiles', as from what I've read it's pretty much just a block file on
there.

Their cloud servers already have locally attached storage, but it's limited,
because they're not just going to chuck disks into servers on user demand. If
you want that, you need to go with a dedicated server.

I really see this product as being more for people who want to use something
like cloudfiles but whose software can't deal with it.

~~~
jrarredondo
(Disclosure: I work for Rackspace)

Hi there ehutch79, just a quick clarification. Cloud Files is object storage,
whereas Cloud Block Storage is block storage. The performance characteristics
of both types of storage are very different. One way I sometimes use to
explain the difference is by talking about how they are accessed: Cloud Files
objects are accessed using HTTP (think REST), whereas Cloud Block Storage
blocks are accessed using low-level OS I/O operations (think block read and
write operations). Because of that, you could implement a database on Cloud
Block Storage, but not on Cloud Files, as it would not be very performant.

Think Cloud Files when you have a website that needs media, large objects,
application-specific content, files, etc., or when you need CDN for improved
performance at the edge (CDN is a great feature of Cloud Files). Think
Cloud Block Storage when you could have used a regular old hard drive: you
provision it, you format it, you may stamp a file system on it, or use the
block storage for MongoDB or a relational database (with the difference that
with Cloud Block Storage you can pick Standard drives or Solid State Drives).
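To make that access-pattern difference concrete, here is a rough Python sketch that uses an in-memory buffer as a stand-in for a block device; the offsets and payloads are purely illustrative:

```python
import io

# Stand-in for a block device: a fixed-size, seekable run of bytes.
device = io.BytesIO(bytes(4096))

# Block storage: the OS reads and writes at arbitrary byte offsets,
# which is exactly what a database needs for in-place page updates.
device.seek(512)
device.write(b"updated-page")
device.seek(512)
print(device.read(12))  # b'updated-page'

# Object storage (Cloud Files): each object is written and read whole
# over HTTP, e.g. PUT /v1/{account}/{container}/{object}. There is no
# partial in-place update, so a database's random page writes would
# each re-upload an entire object -- hence "not very performant".
```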

~~~
donavanm
How many customers asked you for a network block device? I can't think of
anyone who uses raw block devices these days. Qmail? Some db storage engines?

I'd posit that what most consumers actually want is a persistent filesystem.
Clear consistency semantics are probably a bonus at this point.

Vending a fs shim to your domUs eliminates a whole mess of abstractions.

~~~
oijaf888
You realize this is essentially the same offering as EBS? It just shows up as
a raw device in your operating system, you can create your preferred FS on
it, and it's a persistent file system. Some people might want to use XFS, some
might want to use ZFS, so just providing a raw block device is the best way to
have widespread compatibility.

~~~
donavanm
I asked "How many customers asked you for a network block device", but I don't
know what EBS or a "raw device" is?

So yes, I'm well aware that "compatibility" is the general reasoning for
building a block device interface. But how many unique domU kernels do these
providers support? Two, maybe three on the outside? And minus win32 they're all
POSIX with shockingly similar VFS interfaces.

Which comes back to Henry Ford and his faster horse. Do customers actually
want another layer on the abstraction fest so they can stack their mount
option of choice on top? Or do they want a persistent file system with well-
defined consistency semantics?

I've never heard a customer actually request Yet Another Leaky Block Device
Abstraction. And if that customer's out there, what's their use case? Because
building an fs shim to the dom0 seems to eliminate a whole mess of underlying
infrastructure. So why isn't anyone doing that?

------
gtaylor
As difficult a time as EBS is having, it's good to see some other alternatives
pop up. This will definitely put pressure on the EBS team to clean up their
act, if they weren't already under immense pressure to do so.

A word of caution, though: Rackspace is just getting into this ballgame of on-
demand virtual block devices. There will probably be gaffes (though, hopefully
not as bad as EBS of late), so, as per the virtualization commandments, build
expecting failure.

Edit: Also, Rackspace had perfect timing for this announcement, the day after
catastrophic EBS failure. Coincidence, or astute choice in launch dates? :)

~~~
jrarredondo
(Disclosure: I work for Rackspace in Cloud Block Storage)

Hey gtaylor, I can tell you that this was purely a coincidence. We were
actually going to ship this a couple of weeks ago but decided to delay a few
days to get some updates from OpenStack. I think what happened yesterday is
unfortunate for all those customers who were affected, and certainly not cause
for celebration. We do believe, however, that we have a great block storage
service to offer, and are looking forward to competing.

~~~
epistasis
Can you share anything about the backend architecture, for example whether
you're using Ceph to provide block devices?

~~~
jrarredondo
(Disclosure: I work for Rackspace in Cloud Block Storage)

Hi epistasis, here is what I can say. Let's talk about two things: what
happens at provisioning time and then what happens at runtime.

Our provisioning engine is based on OpenStack Cinder. At provisioning time, we
provision an SSD or Standard volume on our storage backend. This storage
backend is a storage system we built (called Lunr) on top of standard Linux
and commercially available hardware. Once the volume is created in Lunr, it is
then attached to the Cloud Server compute host, which exposes the volume to
the guest as a virtual device.

At runtime, the volume appears as a regular device to the compute node over
iSCSI. Snapshots are created against Cloud Files, our object storage service
that is based on OpenStack Swift.

I hope that is useful.
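One way to picture that snapshot path (block volume contents landing in an object store like Swift) is a toy chunking scheme. To be clear, the 4 MiB chunk size and the object naming below are invented for illustration and are not Lunr's actual format:

```python
CHUNK = 4 * 1024 * 1024  # illustrative chunk size, not Lunr's real layout

def snapshot_to_objects(volume_bytes, snapshot_id):
    """Split a block volume's bytes into named objects, roughly as an
    object store would receive them during a snapshot upload."""
    objects = {}
    for i in range(0, len(volume_bytes), CHUNK):
        # Zero-padded index so lexicographic order == byte order.
        name = "%s/%08d" % (snapshot_id, i // CHUNK)
        objects[name] = volume_bytes[i:i + CHUNK]
    return objects

def restore_from_objects(objects):
    """Reassemble the volume by concatenating chunks in name order."""
    return b"".join(objects[name] for name in sorted(objects))

volume = bytes(10 * 1024 * 1024)  # a 10 MiB toy "volume"
objs = snapshot_to_objects(volume, "snap-001")
assert restore_from_objects(objs) == volume
```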

~~~
epistasis
Thank you! That's exactly what I wanted to know, my apologies for missing
'cinder' in the article. (Though in my very meager defense, page up/page down
is broken on the blog because the top framing hides content.)

------
cagenut
This is perfect timing on their part. How many people are having their
post-mortem meeting today and going back to their desks to look for options?

------
cnlwsu
Definitely been waiting for this feature; it was frustrating firing up a new
instance for every 300 GB. We have 100+ instances at Rackspace and it's been
interesting to watch them over the last couple of years. They have had their
own share of troubles, of course, but actually having support when trouble
hits is huge.

------
Erwin
I'd be curious about the practical durability. EBS claims a failure rate of
0.5% per 20 GB per year as per <http://aws.amazon.com/ebs/> -- I wonder how
an e.g. 200 GB volume on RS would be provisioned (how many drives are
involved?). The cheap storage seems to be SATA disks.
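Taking that quoted figure literally, and assuming each 20 GB slice fails independently (which the AWS page doesn't actually promise), the back-of-the-envelope for a 200 GB volume looks like this:

```python
# Assumption: 0.5% annual failure rate per independent 20 GB slice --
# one (pessimistic) reading of the EBS durability claim above.
afr_per_slice = 0.005
slices = 200 // 20  # a 200 GB volume as ten 20 GB slices

# P(at least one slice fails in a year) = 1 - P(no slice fails)
p_fail = 1 - (1 - afr_per_slice) ** slices
print(round(p_fail * 100, 2))  # ~4.89% per year under these assumptions
```

That is roughly ten times the per-slice rate, which is why how many drives back a larger volume matters.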

------
akh
This looks like it is pointed directly at the AWS EBS and EBS PIOPS offerings,
so here are some quick cost comparisons:

\- Rackspace standard storage vs AWS EBS (for 1 TB of storage with 100 IOPS):
\-- AWS US-East = $126/month
\-- Rackspace USA = $150/month

\- Rackspace SSD vs AWS PIOPS (for 1 TB of storage with 1000 IOPS):
\-- AWS US-East = $225/month
\-- Rackspace = $700/month

[http://blog.planforcloud.com/2012/10/cost-comparison-rackspa...](http://blog.planforcloud.com/2012/10/cost-comparison-rackspace-cloud-block.html)
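Those totals can be reproduced from the per-GB and per-request rates published at the time; the rates below are my recollection of the 2012 price lists, so treat them as assumptions rather than gospel:

```python
GB = 1000  # a 1 TB volume

# --- Standard tier, ~100 IOPS ---
# AWS EBS (assumed rates): $0.10/GB-month plus $0.10 per million I/O requests.
ebs_storage = 0.10 * GB
ebs_io = 0.10 * (100 * 86400 * 30) / 1e6   # 100 IOPS sustained for a 30-day month
aws_standard = ebs_storage + ebs_io        # ~ $126/month
# Rackspace standard (assumed rate): flat $0.15/GB-month, I/O included.
rs_standard = 0.15 * GB                    # ~ $150/month

# --- SSD / provisioned-IOPS tier, 1000 IOPS ---
# AWS PIOPS (assumed rates): $0.125/GB-month plus $0.10 per provisioned IOPS.
aws_piops = 0.125 * GB + 0.10 * 1000       # ~ $225/month
# Rackspace SSD (assumed rate): flat $0.70/GB-month.
rs_ssd = 0.70 * GB                         # ~ $700/month

print(round(aws_standard), round(rs_standard), round(aws_piops), round(rs_ssd))
```

Note the flat-rate pricing cuts both ways: Rackspace costs more at these IOPS levels, but doesn't meter I/O separately.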

