> How many organizations have the millions of dollars of capex needed to get that off the ground and keep it all running?
I think you are exaggerating when talking about "millions". Companies running in an old-school DC will not build the actual DC. They will rent a 42U cabinet, or part of one.
You can typically rent 1/4 of a cabinet for a few hundred bucks a month.
If you operate at a small scale, you typically don't have that many servers, maybe 4 or 5. A decent server is in the range of $4,000 to $6,000 and will generally last around 5 years.
And these servers rarely break (we have a fleet of more than 300 servers, and maybe 1 or 2 "wake the on-call" crashes a year).
Throw in 1 or 2 decent switches at about $1,000 to $2,000 each and you are mostly good to go.
You end up with a capex of ~$50,000 to get you started, depreciated over 5 years.
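Back-of-envelope, a minimal sketch in Python; the line items are my assumptions (your quotes will differ), and padding for spares and overhead gets you to roughly that figure:

    # Rough capex sketch; every price here is an assumption, not a quote.
    servers = 5 * 6_000       # five mid-range 1U servers
    switches = 2 * 2_000      # a pair of decent ToR switches
    extras = 6_000            # rails, cabling, spare disks, PDUs, shipping
    capex = servers + switches + extras   # 40,000; pad ~25% and you hit ~50k
    monthly = capex / (5 * 12)            # ~$670/month amortized over 5 years
    print(capex, round(monthly))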
Power is the dominant cost in datacenters unless you have very, very expensive switches. Power drives direct electricity consumption, the need for backup batteries and UPSes, cooling, and fans.
You can get pretty high-powered Supermicro machines for only $1,000 or $2,000 these days. Over 5 years, a $1,000 machine works out to only about $17/month ($1,000 / 60 months).
A 42U cabinet from HE (Hurricane Electric) that's "on sale" (http://he.net/colocation.html) runs $400/month. You need 20-30 servers before the spend on machines starts to overtake power. And I honestly doubt you'd be able to put 30 machines in that cabinet before hitting their power ceiling. I walked through an Equinix facility a while ago and if you hit 10 kW/rack that's considered "hot". It's not hard to do if you stuff an entire 42U rack with 1U multi-core machines that each have CPUs drawing 50-100 W/core (not unlikely with high-end Xeons). 15 kW/rack is really hot.
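A quick sanity check on both claims, with assumed per-server numbers:

    # Crossover point between rack rent and amortized machine spend,
    # plus the rack power budget. All inputs are assumptions.
    rack_rent = 400                  # $/month, HE 42U cabinet
    server_price = 1_000             # cheap Supermicro 1U box
    per_server = server_price / 60   # ~$16.7/month over 5 years
    print(rack_rent / per_server)    # ~24 servers to match the rent
    print(30 * 400 / 1000, "kW")     # 30 boxes at ~400 W each: 12 kW, "hot"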
What kind of specs are you getting for that kind of money? Just played around with the AWS pricing calculator and honestly it seems like EC2 is even more cost-effective than I thought, depending on specs...
Edit: Try using https://awstcocalculator.com and see what it claims your savings would be (I'm curious how accurate it is)
+ dual power supply + rails with cable management + 5-year support
Over 5 years, that's $6,000 / (5 × 12) = $100 per month.
Something comparable like an i3.2xlarge (fewer vCPUs but more storage) is $455.52 per month at the On Demand price.
With a 3-year full-upfront reservation it's $192.72/month: better, but still more expensive.
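A minimal comparison, using the hardware price and the EC2 rates quoted above:

    # Amortized colo box vs. i3.2xlarge, numbers from this thread.
    server = 6_000 / (5 * 12)   # $100/month over 5 years
    on_demand = 455.52          # i3.2xlarge On Demand, $/month
    reserved = 192.72           # i3.2xlarge 3-yr full upfront, $/month
    print(on_demand / server)   # ~4.6x the colo box
    print(reserved / server)    # ~1.9x, even fully reserved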
And i3 instances are less convenient than your own server because they can go down at any time, taking their local NVMe storage with them. If you want some persistence, you have to come up with a mix of EBS volumes plus local NVMe, or heavy clustering where losing a node is not a big deal.
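A minimal sketch of that EBS + NVMe mix, assuming hot data lives on the instance-store NVMe and a periodic job syncs it to an EBS-backed mount; the paths and volume ID are hypothetical:

    # Hypothetical cron job: sync the ephemeral NVMe to EBS, then snapshot.
    import subprocess
    import boto3

    NVME_PATH = "/mnt/nvme/data"          # instance store, lost on terminate
    EBS_PATH = "/mnt/ebs/data"            # EBS-backed, survives the instance
    VOLUME_ID = "vol-0123456789abcdef0"   # hypothetical EBS volume

    subprocess.run(["rsync", "-a", "--delete",
                    NVME_PATH + "/", EBS_PATH + "/"], check=True)
    boto3.client("ec2").create_snapshot(
        VolumeId=VOLUME_ID,
        Description="periodic NVMe -> EBS persistence snapshot")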
AWS is really expensive if you are using it wrong. And a lot of us are using it wrong ('"move <INSERT LEGACY APP> to the cloud," says management' mode). Using it right is in fact quite difficult: it takes a lot of engineering complexity to stay resilient when AWS chooses to shoot the hypervisor out from under your feet. And truly leveraging the elasticity provided by things like Auto Scaling groups or Lambda, especially at the storage layer, is far from simple. I've seen instances where attempts to build a "SERVICE THAT SCALES" ended up being even worse than <LEGACY APP IN THE CLOUD> in terms of cost.
This has been my experience as part of a team managing a large mixed deployment: ~5,000 EC2 instances on one side, and on the other, 300 physical servers in legacy DCs, each handling ~10 LXC containers.
Where AWS shines is the flexibility you get. Need more capacity? Increase the instance size and you are good to go. Need to create 300 ELBs in a rush, with matching DNS records? It's done in 1 or 2 hours and 80 lines of boto. That level of instrumentation is not maintainable by any company except cloud providers and the biggest internet actors.
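A hedged sketch of what those 80 lines might look like with boto3 (using the newer elbv2 ALB API rather than classic ELB); the subnet IDs, hosted zone, and naming scheme are all hypothetical:

    # Create N ALBs and point a CNAME at each one. Illustrative only.
    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")
    SUBNETS = ["subnet-aaaa1111", "subnet-bbbb2222"]   # hypothetical
    ZONE_ID = "Z0000000EXAMPLE"                        # hypothetical zone

    for i in range(300):
        lb = elbv2.create_load_balancer(
            Name=f"svc-{i:03d}", Subnets=SUBNETS,
            Scheme="internet-facing", Type="application",
        )["LoadBalancers"][0]
        route53.change_resource_record_sets(
            HostedZoneId=ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"svc-{i:03d}.example.com",
                    "Type": "CNAME", "TTL": 300,
                    "ResourceRecords": [{"Value": lb["DNSName"]}],
                },
            }]},
        )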
But if your load is fairly static and/or predictable, if you have the capital to buy the servers upfront, if your customer base is fairly localized, and if you can manage the added complexity of doing some capacity planning and hardware inventory once or twice a year, then legacy DCs are still cheaper. I know and understand that's a lot of ifs.
That has been exactly our experience when deciding between AWS and self-hosting. For our environment, it makes sense in both time and money to go self-hosted. I get that it may be different for others, but not for us.