
What if Microsoft and Danger #Cloudfail turns out to be a #SANfail? - rizzn
http://siliconangle.net/ver2/2009/10/12/what-if-msdanger-cloud-fail-turns-out-to-be-a-san-fail/
======
generalk
Why can't it be both?

As far as I can tell, "cloud" generally means someone else stores my data and
the only way I can get to it is via the Net. In that case it's absolutely a
failure of the "cloud".

And if the data was stored on a SAN, it's a SAN failure. If someone didn't do
backups at all, it's a backup failure.

What's with all the labeling? Look, if I pay someone else to manage my data,
and they screw it up and I suffer data loss, it doesn't really matter what
piece of gear or level of process is responsible -- someone still screwed up.

~~~
eli
Agreed. Sidekick users aren't blaming SANs because, well, they don't care
about SANs. After all, isn't the whole point of this system that you're paying
someone else to figure out how to store and back up your data?

------
gfodor
The fact that this whole thing is being considered a "cloud" failure is
ridiculous. The word has become so meaningless that if I ran over a neighbor's
dog I could probably blame it on cloud computing.

------
simonw
I know virtually nothing about SANs, other than that they are frighteningly
expensive. Can anyone clarify how they compare to GFS, HDFS, MogileFS, and
other systems that attempt to store data safely and redundantly across lots
of cheap servers?

~~~
tptacek
SANs aren't filesystems. They're block devices that run over the network
instead of over a SCSI cable. They connect to servers by special HBA cards
which look like SCSI devices, but actually run the SCSI protocol over the
network to a rack full of disks. In a SAN configuration, the admins of those
disks carve out a volume of storage for you, give it a name, and you configure
it on your server; now it looks like you have a couple terabytes of storage
directly connected. You format it like any other disk.
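
Here's roughly what that looks like in practice: a minimal sketch, assuming
an iSCSI SAN and Linux's open-iscsi tools. The portal address, target IQN,
and device name below are made up.

    # Attach a SAN LUN over iSCSI so it shows up as a local disk.
    # Assumes Linux with open-iscsi installed and root privileges;
    # the portal address and target IQN are hypothetical.
    import subprocess

    PORTAL = "10.0.0.50:3260"                      # hypothetical SAN portal
    TARGET = "iqn.2009-10.com.example:rack1.lun0"  # hypothetical target IQN

    # Ask the SAN which targets it exports, then log in to one.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
                    "--login"], check=True)

    # The LUN now appears as an ordinary block device (here /dev/sdb);
    # format and mount it like any directly attached disk.
    subprocess.run(["mkfs.ext4", "/dev/sdb"], check=True)
    subprocess.run(["mount", "/dev/sdb", "/mnt/san-volume"], check=True)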

Backup works like any other system; you do it at the app layer. In
high-volume, high-sensitivity apps, the people running the disk server also
back it up by running protocols that mirror block-level changes to a second
rack full of disks somewhere in Tennessee.

(yes, yes, or IDE or FC, yes yes, or software initiator, etc etc).
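
To make the mirroring idea concrete, here's a toy sketch with ordinary files
standing in for block devices. Real SAN replication tracks dirty blocks in
hardware instead of rescanning everything, but the shape is the same:

    BLOCK = 64 * 1024  # 64 KB chunks; an arbitrary choice for the demo

    def mirror(primary_path, replica_path):
        """Copy every block that differs from primary to replica."""
        with open(primary_path, "rb") as src, open(replica_path, "r+b") as dst:
            offset = 0
            while True:
                block = src.read(BLOCK)
                if not block:
                    break
                dst.seek(offset)
                if dst.read(len(block)) != block:
                    dst.seek(offset)   # rewind and overwrite the stale block
                    dst.write(block)
                offset += len(block)

    mirror("primary.img", "replica.img")  # stand-ins for /dev/... devices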

------
joe_the_user
Uh,

Sorry, but "cloud" or not, SAN or not, the MS/Danger fail was a stunning
failure of _process_. Who cares what the underlying technology was or how it
failed? What matters is that a multi-billion dollar software company actually
managed to lose, _permanently_, their customers' data through obvious
carelessness: the lack of a backup in some fashion or other. And don't even
try to argue they could have had an excuse. _"Multi-billion dollar company."_
_"Reputation."_ Backups might be sort-of hard, maybe, perhaps, but MS is
supposed to _know what it's doing_. Lightning and asteroids wouldn't strike
five different carefully chosen locations...

My guess is that this will hit MS really hard over time. Even if they actually
were hoping Danger would dry up and blow away, they've now inflicted the
worst-case scenario on _customers_. Repeat after me: "never let MS near your
data..."

~~~
eli
But that is a failure of the cloud storage/computing pitch, no?

All along critics have been saying that you'd be nuts to trust someone else to
back up and secure your important data.

~~~
wattersjames
It's easier to trust someone with your data if they have triple redundancy
built into their architecture. S3 has that built in; a single MS SAN box does
not. That's why it's an important distinction.

You trust people with the right, next-generation, trustworthy architecture.

~~~
rbanffy
If the architecture is trustworthy, I am fine with it being a couple of
generations behind the leading edge.

When you go for the leading edge you often end up with a mix of unproven
stuff, assorted approaches, uneven performance, and problems nobody has ever
had. It's fine to experiment, but you should only trust your customers' data
to new technology when you find a combination that works well all the time
(or, at the very least, when you have mapped all the circumstances in which
it doesn't).

This whole disaster could have been avoided if there were no single point of
failure - at the very least, three identical clusters, each taken off-line
one at a time for the upgrade and rebuilt from mirrored data, so that service
to users is never disrupted. I am astonished a telco did not know this.
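
The discipline fits in a few lines. A sketch of the one-cluster-at-a-time
loop; every helper here is a hypothetical stand-in for real operational
tooling:

    CLUSTERS = ["cluster-a", "cluster-b", "cluster-c"]

    def step(action, cluster):
        # Stand-in for real ops tooling; just logs what would happen.
        print(f"{cluster}: {action}")

    def rolling_upgrade(clusters):
        # Never touch more than one cluster; the others keep serving.
        for cluster in clusters:
            step("drain traffic to the other clusters", cluster)
            step("take a full backup first", cluster)  # the step Danger skipped
            step("apply the upgrade", cluster)
            step("rebuild data from a live mirror", cluster)
            step("verify before moving to the next one", cluster)
            step("restore traffic", cluster)

    rolling_upgrade(CLUSTERS)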

------
jsz0
When it comes to important data, users should mistrust everyone and
everything. It is more difficult to back up data from most cloud sources.
There's no easy way to back up my entire Flickr account, Facebook profile, or
Google account, for example. To back up Gmail I have to rely on desktop
applications via IMAP. Is this really acceptable? Why don't we have more
sites offering easy offline backups? No matter how you look at it, the user
has to be responsible for their own data. If cloud services are not making
this easy to do, then we have a legitimate problem.
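
For what it's worth, Gmail is the one case on that list where a
do-it-yourself offline backup is at least feasible. A minimal sketch using
only the Python standard library, assuming IMAP access is enabled on the
account:

    # Pull every message from Gmail over IMAP into a local mbox file.
    # Assumes IMAP is enabled in the Gmail settings for this account.
    import imaplib
    import mailbox

    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.login("you@gmail.com", "your-password")      # placeholder credentials
    imap.select('"[Gmail]/All Mail"', readonly=True)  # Gmail's catch-all folder

    backup = mailbox.mbox("gmail-backup.mbox")
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")     # full raw message
        backup.add(msg_data[0][1])
    backup.flush()
    imap.logout()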

------
rit
Can we just point out that it was clearly a #BackupFail?

No matter _WHAT_ the deployment mode was, the lack of backups is what has
turned this into an issue, NOT the SAN or "Cloud" failure.

------
jf781
This industry is embryonic and more innovation will occur. I love the cloud
market; it's very compelling and important for entrepreneurs and big
businesses alike.

We will all be renting computing and storage in the near future.

~~~
jf781
How can I tell who knocked my comment down to -2?

