Hacker News

That's a very unreasonable expectation at cloud price levels.

Does any cloud provider have SAN-backed VMs? If any do, at what price?




Not a huge company like DO, but iwStack [1] provides a SAN-backed cloud with selectable KVM/Xen instances, custom ISOs, and virtual network support. The prices are similar to DO's. [1] http://iwstack.com/


That's pretty damn interesting for more traditional hosting type usage.

Seems like they sell 50 GB of real SAN-backed storage for just 3.6€ per month.

How can they afford to be so cheap?


I think one of the reasons is that they have only a small number of datacenters and a small (very friendly) staff, they aren't going after the mass market like DO, and I'd guess they don't spend anything on marketing.


That's not even particularly cheap per-GB - BuyVM ( http://buyvm.net/storage-vps/ ) will sell you 250GB for 7USD/mo.

Disclaimer: happy customer
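The per-GB claim is easy to sanity-check. A quick sketch using the prices quoted in this thread (the currencies differ, so this is only indicative, not an exact comparison):

```python
# Per-GB cost of the two offers mentioned in this thread.
# Note: EUR vs USD, so the numbers aren't directly comparable.
iwstack_eur_per_gb = 3.6 / 50   # 3.6 EUR for 50 GB
buyvm_usd_per_gb = 7.0 / 250    # 7 USD for 250 GB

print(f"iwStack: {iwstack_eur_per_gb:.3f} EUR/GB")  # 0.072 EUR/GB
print(f"BuyVM:   {buyvm_usd_per_gb:.3f} USD/GB")    # 0.028 USD/GB
```

So BuyVM is roughly 2-3x cheaper per GB, with the caveat (raised below) that it isn't SAN-backed storage.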


> That's not even particularly cheap per-GB - BuyVM ( http://buyvm.net/storage-vps/ ) will sell you 250GB for 7USD/mo.

Except that is not SAN-backed storage.

From the website:

> We only use RAID-60 drive arrays with a minimum of 16 drives

That's somewhat scary. RAID-6 might be almost OK, but striped? No thank you. I bet they also don't have block-level checksums.
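To put a number on the RAID-60 worry: the page doesn't say how the 16 drives are grouped, so assume (hypothetically) two 8-drive RAID-6 spans striped together. Each span survives any 2 drive failures, but a 3rd concurrent failure in the same span loses the entire striped array:

```python
from itertools import combinations

# Hypothetical layout: 16 drives as two 8-drive RAID-6 spans, striped (RAID-60).
# A span tolerates 2 failed drives; 3 failures in ONE span kills the array.
drives = range(16)
span = lambda d: d // 8  # drives 0-7 in span 0, drives 8-15 in span 1

fatal = sum(
    1 for combo in combinations(drives, 3)
    if any(sum(span(d) == s for d in combo) >= 3 for s in (0, 1))
)
total = len(list(combinations(drives, 3)))
print(f"{fatal}/{total} of 3-drive failure combinations are fatal")  # 112/560
```

Under that assumed layout, 20% of possible 3-drive failure combinations take out the whole array, and without block-level checksums, silent corruption in either span propagates to the stripe.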


That's what AWS's "Elastic Block Store" (EBS) is. You can turn it off and just use instance storage (and I personally prefer to, for truly ephemeral nodes), but that increases spawn time, since your disk image actually has to be copied over to the VM host machine rather than just "attached" over EBS.


Then why is the EBS failure rate several orders of magnitude higher than in SAN deployments? A SAN provider would quickly be out of business with a 0.1-0.5% annual failure rate.

SAN reliability ratings start at 99.999%.
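As a sanity check on those figures (keeping in mind that availability and annualized failure rate measure different things: one is downtime, the other is data loss):

```python
# "Five nines" availability expressed as downtime per year, next to the
# 0.1-0.5% annual volume failure rate quoted above. These are different
# metrics; this just puts both on a per-year scale.
minutes_per_year = 365 * 24 * 60
five_nines_downtime = (1 - 0.99999) * minutes_per_year
print(f"99.999% availability ~= {five_nines_downtime:.2f} min downtime/year")
print(f"0.5% annual failure rate ~= 1 in {int(1 / 0.005)} volumes lost per year")
```

Five nines works out to about 5.3 minutes of downtime a year, while 0.5% annual failure means losing one volume in two hundred every year, which is a very different class of guarantee.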


Just because it's a SAN doesn't mean a given abstract block device from it is backed by RAID. It's literally just a multiplexed and QoSed network-attached storage cluster.

I actually prefer the lower-level abstraction: if you want a lower failure rate (or higher speed), you can RAID together attached EBS volumes yourself on the client side and work with the resultant logical volume.
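Back-of-envelope on the client-side RAID point: mirroring two volumes that fail independently (a strong assumption for volumes in the same zone) roughly squares the annual loss probability. A hypothetical sketch, using the mid-range of the EBS figure quoted upthread:

```python
# If a single volume fails with annual probability p, a RAID-1 mirror of
# two INDEPENDENT volumes loses data only when both fail: roughly p**2.
# Independence is a strong assumption; correlated failures make it worse.
p = 0.002  # hypothetical 0.2% annual failure rate, mid-range of 0.1-0.5%
print(f"single volume: {p:.4%} / mirrored pair: {p**2:.6%}")
```

That squaring effect is why building the redundancy client-side over cheap unreliable block devices can be a reasonable trade.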


On AWS, an EBS volume is only usable from one availability zone. You still need to use application-level replication to get geographic redundancy for important data, and when you have that, EBS just lets you be lazy rather than eager about copying a snapshot to local instances.


I guess I was thinking in terms of using EBS for ephemeral business-tier nodes, rather than as the backing store of your custom database-tier. (I usually use AWS's RDS Postgres for my database.)

For ephemeral business-tier nodes, EBS gives you a few advantages, but none of them are that astounding:

• the ability to "scale hot" by "pausing" (i.e. powering off) the instances you aren't using rather than terminating them, then un-pausing them when you need them again;

• the ability for EC2 to move your instances between VM hosts when Xen maintenance needs to be done, rather than forcibly terminating them. (Which only really matters if you've got circuit-switched connections without auto-reconnect—the same kind of systems where you'd be forced into doing e.g. Erlang hot-upgrades.)

• the ability to RAID0 EBS volumes together to get more IOPS, unlike instance storage. (But that isn't an inherent property of EBS being network-attached; it's just a property of EBS providing bus bandwidth that scales with the number of volumes attached, where the instance storage is just regular logical volumes that all probably sit on the same local VM host disk. A different host could get the same effect by allocating users isolated local physical disks per instance, such that attaching two volumes gives you two real PVs to RAID.)

• the ability to quickly attach and detach volumes containing large datasets, allowing you to zero-copy "pass" a data set between instances. Anything that can be done with Docker "data volumes" can be done with EBS volumes too. You can create a processing pipeline where each stage is represented as a pre-made AMI, where each VM is spawned in turn with the same "working state" EBS volume attached; modifies it; and then terminates. Alternately, you can have an EC2 instance that attaches, modifies, and detaches a thousand EBS volumes in turn. (I think this is how Amazon expected people would use AWS originally—the AMI+EBS abstractions, as designed, are extremely amenable to being used in the way most people use Docker images and data-volumes. The "AMI marketplace" makes perfect sense when you imagine Docker images in place of AMIs, too. Amazon just didn't consider that the cost for running complete OS VMs, and storing complete OS boot volumes, might be too high to facilitate that approach very well. Unikernels might bring this back, though.)
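The attach-modify-detach pipeline in that last bullet can be sketched with a toy stand-in for the volume (in real AWS each step would be a boto3 `attach_volume`/`detach_volume` call plus a mount; the stage names here are purely illustrative):

```python
# Toy model of passing one "working state" volume through pipeline stages.
# A plain dict stands in for the shared EBS volume; each stage "attaches"
# it, mutates it, and "detaches" by returning.

def stage_extract(vol):
    vol["raw"] = [3, 1, 2]          # first stage writes raw data

def stage_transform(vol):
    vol["sorted"] = sorted(vol["raw"])  # next stage reads and refines it

volume = {}  # stand-in for the EBS volume handed between instances
for stage in (stage_extract, stage_transform):
    stage(volume)  # in AWS: attach volume, run the stage's AMI, detach

print(volume["sorted"])  # [1, 2, 3]
```

The zero-copy property is the point: each stage sees the previous stage's output without any snapshot or network copy, exactly as with Docker data volumes.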




