For our long-running stateful processing, and for apps that need to be available 24/7 with no variability in load, we have purchased our own hardware (a process that has been going on for over 18 months). Owning the equipment plus the data center will run us approximately $1.2 million, including the growth needed to build a hot backup.
It should be noted that staffing cost was not a factor; we must have staff to manage thousands of servers whether they are at AWS or in our own data centers. The biggest factor was paying for compute on boxes that crashed and yielded nothing we could use to move our business forward. Well, I take that back: we got really good at checkpoints and rollbacks. Other than that, not much.
No matter how you slice it, AWS and the other cloud providers are a great fit for the right types of processing and applications.
Paying for 70-100 hours of compute and having the server crash in the middle of calculating your predictive analytics is not exactly all that bright either. So, we bought our own gear.
AWS works and worked with us pretty damn closely as we pulled apps out of the cloud.
(Oh, and thanks for the information you've already shared - even if you can't answer my curiosity here…)
As an aside, though, AWS has everyone beat when it comes to regions. We can be close to our customers in Europe, the US, and so on.
To date, no one is spinning up cloud regions and services in more areas than AWS. It will take the MSFTs, IBMs, and the like to move the global cloud along. MSFT just needs to realize not everyone wants SharePoint, SQL Server, and .NET.
Which service you use is really situational. Plenty of good ones out there. AWS is just one!
I run technical operations for a company with a fraction of your footprint (but growing quickly). We're at the point where we are growing out of the RAX public cloud but by my calculations, the decision to run our own private cloud in colocation vs. lease one from a service provider is (financially) a wash.
From a practicality standpoint, the scales tip towards leasing bare metal from a provider. I'm curious to hear your experiences with colo. How many folks do you have working in your colocation facilities doing hardware maintenance? What about network engineering? I presume you also keep a sizable stock of spares?
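For what it's worth, the "financially a wash" conclusion falls out of a simple amortization comparison. A minimal sketch of that math in Python, where every number (capex, amortization period, rack rent, remote-hands fees, lease pricing, server count) is a made-up placeholder rather than a real quote:

```python
# Back-of-the-envelope comparison: owned hardware in colocation vs.
# leased bare metal. All figures below are hypothetical placeholders;
# plug in your own vendor quotes.

def colo_monthly_cost(capex, amortize_months, rack_rent, remote_hands):
    """Monthly cost of owned gear: capex spread over its useful life,
    plus recurring colocation rent and remote-hands fees."""
    return capex / amortize_months + rack_rent + remote_hands

def lease_monthly_cost(per_server, servers):
    """Monthly cost of leasing an equivalent bare-metal fleet."""
    return per_server * servers

colo = colo_monthly_cost(capex=600_000, amortize_months=36,
                         rack_rent=8_000, remote_hands=2_000)
lease = lease_monthly_cost(per_server=450, servers=60)

print(f"colo:  ${colo:,.0f}/mo")
print(f"lease: ${lease:,.0f}/mo")
```

With these placeholder numbers the two land within a few percent of each other, which is exactly the "wash" scenario; the decision then hinges on the softer factors like staffing and spares.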
Now for the long-winded version:
1. We do have a decent amount of spares, but not a ton. Our contracts require replacement parts within hours to a day. For some items, like our F5 gear, we run two units and keep no spare; F5 just replaces their gear within hours.
2. Each data center has 24/7 support that can do some minor tasks.
3. Yes, networking is a pain and you need the right people to do it. It is not cheap either! Luckily our VP of Tech Ops is a networking guru. Mess up networking and you are hosed. Our first networking guy wasn't exactly tops, so we know first hand.
Having said all of that, for us there are economies of scale. If we want to test different machines, databases, or any other combination at scale, it could cost us several hundred thousand dollars just to run the tests. Yep, we have dropped over $100k on testing at scale. It's simply not sustainable, and an irresponsible way to spend investors' cash. Also, when you add in multiple environments for dev, test, staging, and integration, you can quickly see we consume a lot of boxes. So many, in fact, that a lot of colocation/cloud services will not work with us unless we plop down large amounts of cash.

Let's also factor in that AWS wants a large up-front spend for reserved instances. Thus, if you are going to spend that kind of capital to get the compute you need to run your business, it is not very hard to just call Dell, Cisco, Nimbix, Equinix, or any other vendor and negotiate your own deals. If any of these companies can get half our annual spend, they are willing to at least talk.
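The reserved-instance point above is just arithmetic: once you are writing a large up-front check to AWS anyway, you can compare it directly against writing that check to a hardware vendor. A rough sketch, where all prices are illustrative placeholders and not actual AWS or vendor quotes:

```python
# Three-year total cost: reserved instance vs. owned server in colo.
# Every price here is a made-up placeholder for illustration only.

def reserved_total(upfront, hourly, hours):
    """Total cost of a reserved instance over the term: the up-front
    fee plus the discounted hourly rate for every hour."""
    return upfront + hourly * hours

def owned_total(server_price, monthly_colo, months):
    """Total cost of an owned server over the same term: purchase
    price plus colocation costs."""
    return server_price + monthly_colo * months

HOURS_3Y = 3 * 365 * 24  # hours in a 3-year reservation term

ri = reserved_total(upfront=4_000, hourly=0.25, hours=HOURS_3Y)
own = owned_total(server_price=6_000, monthly_colo=100, months=36)

print(f"reserved: ${ri:,.0f} over 3 years")
print(f"owned:    ${own:,.0f} over 3 years")
```

The crossover depends entirely on your utilization: boxes that sit busy 24/7 favor owning, while bursty workloads favor renting, which is the same split described at the top of the thread.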