Not a huge company like DO, but iwStack [1] provides a SAN-backed cloud with selectable KVM/Xen instances, custom ISOs, and virtual-network support. The prices are similar to DO's.
[1] http://iwstack.com/
I think one of the reasons is that they have only a small number of datacenters and a small number of (very friendly) staff, and they aren't going after the mass market like DO. I'd also guess they don't spend anything on marketing.
That's what AWS's "Elastic Block Storage" is. You can turn it off and just use instance storage (and I personally prefer to, for truly ephemeral nodes), but it increases spawn time since your disk image actually has to get copied over to the VM host machine in that case, rather than just "attached" over EBS.
Then why is EBS's failure rate several orders of magnitude higher than in SAN deployments? A SAN provider would quickly be out of business with a 0.1-0.5% annual failure rate.
Just because it's a SAN doesn't mean a given abstract block device from it is backed by RAID. It's literally just a multiplexed and QoSed network-attached storage cluster.
I actually prefer the lower-level abstraction: if you want a lower failure rate (or higher speed), you can RAID together attached EBS volumes yourself on the client side and work with the resultant logical volume.
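To make that concrete, here's a sketch of striping two attached EBS volumes together with mdadm on the instance (device names are assumptions; they vary by instance type, e.g. /dev/xvdf vs /dev/sdf):

```shell
# Assumes two EBS volumes are already attached as /dev/xvdf and /dev/xvdg.
# RAID0 stripes them for more IOPS; use --level=1 instead for a mirror
# (lower effective failure rate).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/striped
sudo mount /dev/md0 /mnt/striped
```

The resulting /dev/md0 is the "resultant logical volume" you then work with as if it were a single disk.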
On AWS, an EBS volume is only usable from one availability zone. You still need to use application-level replication to get geographic redundancy for important data, and when you have that, EBS just lets you be lazy rather than eager about copying a snapshot to local instances.
I guess I was thinking in terms of using EBS for ephemeral business-tier nodes, rather than as the backing store of your custom database-tier. (I usually use AWS's RDS Postgres for my database.)
For ephemeral business-tier nodes, EBS gives you a few advantages, but none of them are that astounding:
• the ability to "scale hot" by "pausing" (i.e. powering off) the instances you aren't using rather than terminating them, then un-pausing them when you need them again;
• the ability for EC2 to move your instances between VM hosts when Xen maintenance needs to be done, rather than forcibly terminating them. (Which only really matters if you've got circuit-switched connections without auto-reconnect—the same kind of systems where you'd be forced into doing e.g. Erlang hot-upgrades.)
• the ability to RAID0 EBS volumes together to get more IOPS, unlike instance storage. (But that isn't an inherent property of EBS being network-attached; it's just a property of EBS providing bus bandwidth that scales with the number of volumes attached, where the instance storage is just regular logical volumes that all probably sit on the same local VM host disk. A different host could get the same effect by allocating users isolated local physical disks per instance, such that attaching two volumes gives you two real PVs to RAID.)
• the ability to quickly attach and detach volumes containing large datasets, allowing you to zero-copy "pass" a data set between instances. Anything that can be done with Docker "data volumes" can be done with EBS volumes too. You can create a processing pipeline where each stage is represented as a pre-made AMI, where each VM is spawned in turn with the same "working state" EBS volume attached; modifies it; and then terminates. Alternately, you can have an EC2 instance that attaches, modifies, and detaches a thousand EBS volumes in turn. (I think this is how Amazon expected people would use AWS originally—the AMI+EBS abstractions, as designed, are extremely amenable to being used in the way most people use Docker images and data-volumes. The "AMI marketplace" makes perfect sense when you imagine Docker images in place of AMIs, too. Amazon just didn't consider that the cost for running complete OS VMs, and storing complete OS boot volumes, might be too high to facilitate that approach very well. Unikernels might bring this back, though.)
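The "pass a data set between instances" pattern in the last bullet can be driven entirely from the AWS CLI; a sketch (volume and instance IDs are hypothetical, and everything must live in the same availability zone):

```shell
# Stage 1: attach the working-state volume, let the instance process it.
aws ec2 attach-volume --volume-id vol-0abc123 \
    --instance-id i-0stage1 --device /dev/sdf
# ... instance i-0stage1 mounts the device, modifies the data set,
#     unmounts it, and terminates ...

# Hand the same volume (zero-copy) to the next stage in the pipeline.
aws ec2 detach-volume --volume-id vol-0abc123
aws ec2 attach-volume --volume-id vol-0abc123 \
    --instance-id i-0stage2 --device /dev/sdf
```

Each stage sees the exact blocks the previous stage left behind, which is what makes this the moral equivalent of a Docker data volume shared across containers.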
Does any cloud provider have SAN-backed VMs? If any do, at what price?