>> That’s $324,000 a year. But for just $120,000, the company could buy all the physical servers it needed for the job
What about power, cooling, rent in the building for the room you're keeping them in, backup generator, internet connectivity, administration costs (setup, repairs, installation), etc?
>> “The public cloud is phenomenal if you really need its elasticity,” Frenkiel says. “But if you don’t — if you do a consistent amount of workload — it’s far, far better to go in-house.”
$120,000 worth of servers is maybe 2-3 cabinets, at $1,100/mo w/ 40A @ 120V each in most major markets, so let's just say $3,300/mo. Add in a 1Gb commit on a 10Gb link of quality bandwidth for another $2,000/mo, and all in all you're looking at $5,300/mo, or $63,600/yr, in operating expenses. Considering the hardware will probably last you 3 years, the hardware cost is actually only $40,000/yr, for a total of $103,600 vs. $324,000. You'd easily be cutting your costs by two-thirds.
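The arithmetic above, as a quick sketch (all figures are this comment's assumptions, not vendor list prices):

```python
# Colo-vs-cloud math from the comment above. Every number here is
# the commenter's assumption, not a quote from any price list.
hardware_cost = 120_000          # upfront server purchase
hardware_lifetime_years = 3
colo_per_cabinet_month = 1_100   # 40A @ 120V, major market
cabinets = 3
bandwidth_month = 2_000          # 1Gb commit on a 10Gb link

opex_year = (colo_per_cabinet_month * cabinets + bandwidth_month) * 12
hardware_year = hardware_cost / hardware_lifetime_years
colo_total_year = opex_year + hardware_year

cloud_year = 324_000             # figure quoted from the article
print(f"colo:  ${colo_total_year:,.0f}/yr")
print(f"cloud: ${cloud_year:,.0f}/yr")
print(f"savings: {1 - colo_total_year / cloud_year:.0%}")
```

Which lands at $103,600/yr vs. $324,000/yr, i.e. roughly the two-thirds saving the comment claims.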
There's also an opportunity cost to consider if your actual demand turns out to be higher than projected and you're not able to scale up to meet it.
I think we'll see more companies using a hybrid model where you run your steady-state workload on dedicated gear but have the ability to burst into a cloud to handle peaks and flash demand.
My experience with cloud overflow configurations is that it never works as nicely as you expect, due to a lot of the drawbacks of cloud infrastructure such as variable performance and latency issues.
And how many extra 'DevOps' people has the company now had to employ to manage these physical servers, and what is their annual salary? This never seems to be factored into the calculations.
That was my first thought, too. Physical hardware == more staff. Also consider the risk management of keeping your security up to date, and the opportunity costs of making it easy to use for developers. Spend some time in old-school enterprise, and you'll see just how much pain and grief can happen.
Remember, Amazon built its cloud not for the public, but for itself! The cloud gave them a clean separation of duties and cost management between application development and operations. That sort of thing can save large amounts of money and bureaucratic headache. They started selling AWS to the public only after they had a stable system that could produce excess capacity, which could then be resold.
The "excess capacity" is a pure myth.
When AWS started, it was using a set of Data Centers which had nothing to do with Amazon.com (the Retail part of it).
Now Amazon.com is just another (big) customer of AWS.
One? Your colo will probably also rent you help to rack, wire, and handle anything else physical. How many days do you expect to be "downtown"? Apart from the physical/racking aspect, it's not like your ops team has any less work to do in the cloud. You still have to manage your own software services, security, etc. on either platform.
You could triple your provisions with the savings of not being on the cloud, so it's hard to believe you won't see the train coming, if the concern is having to rack more than 20-60 servers a day. Arguably, that would be a good problem to have!
This, this, this. And what is the turnaround time on getting new servers provisioned for development? Are they using Docker, or are they stuck in Windows land, where there's no open source private PaaS solution that makes any sense? How many of those servers have virtual machines provisioned that then sit idle? How long until the cruft virtual machines are so populous that they take up an ever-increasing amount of your internal infrastructure and become a nightmare to manage?
Basically, the cloud is a stepping stone on your way to greater things. You may outgrow the cloud, or you may find you have economies of scale where the cloud becomes the expensive option. Recognise it and change your setup.
TBH you should be doing this with all aspects of the business. Start in a room/garage, move to serviced offices, then rent your own building, then buy your own building (should it become financially viable).
I think this is basic Return On Investment analysis.
It's a stepping stone for some companies. For many others, it's not (e.g. Netflix, Heroku).
I see a false dichotomy here: start off with the cloud, then move to bare metal servers. If you've got an OLTP database that demands high performance, you may need bare metal. Many companies don't, though.
I've been a consultant to companies who've over-extended themselves on EC2 and ended up with huge monthly bills they didn't anticipate; in every instance, it was because they hand-built servers, cloned them, and then hand-modified them. The result was a set of servers they would like to be able to destroy (when they didn't need them) and then respawn (when they did), but couldn't because of the modifications. So, yeah, you can get in trouble with cloud services. But that doesn't mean you can't use it cost-effectively on a permanent basis. Like anything else, it requires planning.
Switching to bare metal? Sure: as hypervisors for your private cloud. Projects like OpenStack, an EC2 clone supported by Rackspace and Red Hat (among others), make it straightforward and compelling to virtualize your environment.
Netflix is fundamentally a different business than most. Their whole business model is a tenant relationship - they control nothing. Demand for their services can only be met by licensing deals, which also happen to generate demand for their service. All components of their business (IT infrastructure, network transit, end-user networks) are owned by competitors.
Content owners can and do screw Netflix with hardball negotiation tactics. Unlike cable companies, Netflix doesn't have a utility billing regime or local monopoly on services. So at any time, the need for large swaths of their IT infrastructure can disappear.
So what do you do? Put lots of liabilities on your books (datacenters, computers, SANs, etc)? Or rent it from someone?
Yeah, I almost flagged this. There's nothing there except a title that might be provocative and a bunch of "Can you use math? If so, you might make decisions based on it" prose. (I didn't flag it because, frankly, the HN audience isn't what it used to be, and to many this actually might be news. Plus, the comments promised to be useful to many.)
I honestly don't want to insult the publication or author here. I'm sure for many people who don't use math, the cloud might seem like the best choice at all times.
However I'd really like to believe that if you're creating a product involving technology, and you're busy creating a business model, that at some point you're going to be figuring out things like cost of a new customer acquisition, or overhead cost per user. This stuff isn't exactly arcane or even terribly complicated.
Well, let's define "cloud": in this instance they seem to only mean EC2, which can be expensive if not used carefully.
So can bare metal. And what's worse, bare metal means a whole lot of care and feeding that you may not be prepared for.
There is no one prescription that will fit every company in every situation. But this article describes an anecdotal situation, not something typical. And, "the cloud" does not automatically mean someone else's services. For the vast majority of typical computing applications, companies should be using private virtualization (OpenStack, Vsphere, what-have-you) rather than just buying a bunch of pizza boxes or blades.
This all comes down to "do things that make sense to your situation".
I live in New York. If I go to the West coast 2-3 times a year for a few days, I rent a car by the day. It costs like $120/day because I don't book in advance.
If instead I fly to the west coast every month for a week or more, it may actually be more cost effective to rent the car by the month OR lease one and park it in California. By committing to a full month of use, I can actually save money versus paying the day rate for 7-12 days.
If I move to California, I move my car, or buy a car there.
If you have limited funds and aspirational goals, renting IT infrastructure from Amazon makes a lot of sense... you pay as you go and reduce your upfront overhead. If you have a solid customer/utilization base, it may be cheaper to build your own.
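The same break-even logic can be sketched with the car numbers (only the $120/day walk-up rate comes from the comment; the monthly rate here is a made-up figure for illustration):

```python
# Rent-vs-commit break-even from the car analogy. The daily rate is
# the comment's figure; the monthly rate is assumed for illustration.
day_rate = 120        # walk-up daily rental
month_rate = 900      # hypothetical monthly lease-equivalent

break_even_days = month_rate / day_rate
print(f"monthly commitment pays off beyond {break_even_days:.1f} days/month")
```

With these numbers the commitment wins past 7.5 days of use per month, which matches the comment's "7-12 days" range; on-demand vs. reserved cloud pricing follows the same curve.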
I think it really depends on your solution and what you are trying to accomplish. The cloud can offer levels of redundancy, speed and security at a much lower cost than if you needed to build it yourself. The process to scale requires less engineering and resources. The ability to link to globally distributed networks and a multitude of technologies with the "flip" of a switch will never be replaced by hardware. I think it takes a combination of the two to architect rock solid infrastructure. Just choose your providers wisely!
Well, yeah, an in-memory database company is going to be underwhelmed by EC2. Their RAM pricing isn't competitive. The Amazon sweet spot tends more towards storage and internal I/O bandwidth.
The article completely ignores this point, but if you're comparing a cloud offering to colo or building a datacenter, the extent to which the cloud system corresponds to the nature of your workload is critical. Cloud gives you one sort of flexibility (size-over-time and provisioning) and takes away another (physical architecture design).
AWS abstracts IT infrastructure, and as programmers we have to love it for that. A server becomes an object that you can play with instead of a scary bare-metal Linux box; you don't worry about configuring nginx as a load balancer or setting up iptables. You don't worry about configuring SAS drives and hardware RAID cards, which is hairy, scary stuff. With the cloud, you can be up and running with an IT infrastructure without having a specialized/dedicated admin.
High cost aside (which can make sense up to a point, because the cloud saves on operations costs at small scales), the other big problem with the cloud is that for many types of applications the virtualization, storage, and network topology destroy performance. Some types of server engines in particular do not play well with these cloud environments, and it is not a defect in the server engine design.
This is the main reason we stopped using the cloud and built our own server infrastructure. We could get 2-3x performance out of the same hardware simply by taking control of the physical machines and the network topology. The combined performance loss and high markup made it difficult to justify the cloud price performance relative to our own clusters.
There is no panacea for infrastructure. Every company needs to consider their own requirements.
When you're just starting out the cloud is significantly cheaper. When you're growing, it's good to have that support network. When you get large enough, it's more cost effective to actually own hardware.
My question is, how large is large enough to warrant owning your own hardware? What's the breaking point?
I found the break-even to be less than one complete server. Compare the monthly expense of a dedicated cloud server at Rackspace, and the performance one gets from it, against a single $2,400 server with 8 cores (16 logical CPUs) placed in a colo: there is no comparison. Colo price-to-performance buries Rackspace. Equivalent performance from Rackspace requires 4 of their dedicated servers, with nowhere near the disk space. Difficulty of setup? Pretty much none. After a few months, we bought a hardware firewall due to paranoia. Now we have a full rack of fully paid-for servers, with a monthly expense of $600 and effectively unlimited bandwidth. (We've had periods where we briefly delivered 10 gigs of data to the public per minute from our server rack.) We are 2 geeks and 1 biz dev guy. Smart geeks, but still. The cloud is an expensive joke.
There's also the matter of how long you expect to maintain certain things, and how fast you're changing stuff around. The cloud is optimized for changing things really quickly, where hardware is optimized for long-term cost effectiveness of relatively static assets.
For example: The Obama campaign famously ran almost entirely in the cloud, because it was run like a startup that knew exactly when it would be shutting down. But the DNC runs mostly on physical hardware, because it's expected to function indefinitely.
Well, it's actually a very easy thing to quantify, but the cost/benefit analysis is different for every kind of company and computing requirement.

The REAL problem is that it takes experience running your own datacenter, or at least colocated hardware, to perform this analysis, and almost nobody has that experience, because they all work for large internal IT orgs or for the cloud providers themselves.
You'd be surprised how fast that experience is gained. Try putting a server in a colo and see. There is nothing to it. NOTHING. Don't buy the hype. Try it and see for yourself.
1 or 2 is easy, but that's the sweet spot for cloud services.
A half cabinet + sufficient power and smart power distribution + real networking + remote hardware-level access + backup is where it starts to get complicated, but profitable.
What I look at is how consistent your demand is: if you're spinning cloud instances up based on bursty traffic, it'll probably make sense to buy servers only to meet the consistent load and figure out the best way to share data with on-demand instances.
The 1/3 rule applies well: If you are only using the server one third of the time then AWS is a good deal. That could mean 8 hours per day or 10 days per month or 4 months per year. In those cases AWS/public cloud is more efficient and convenient than buying servers, but there are additional exceptions.
Certain applications with lower uptime requirements (test environments) can be run on decommissioned older prod server hardware if you have that.
Legacy enterprise applications that rely on extremely fast database access with large relational databases and poor caching are better suited to bare metal DB servers than virtual servers of any kind (public or private).
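The 1/3 rule above can be put as back-of-envelope arithmetic (both hourly prices are assumed for illustration, not taken from any provider):

```python
# The "1/3 rule" as arithmetic. Assumption: on-demand cloud costs
# roughly 3x the amortized hourly cost of owned hardware, but you
# pay for owned hardware every hour whether it runs work or not.
OWNED_PER_HOUR = 0.10   # amortized hardware + colo (assumed)
CLOUD_PER_HOUR = 0.30   # on-demand rate, paid only while running (assumed)

def cheaper(utilization):
    """Which option wins for a given fraction of hours actually used?"""
    cloud = CLOUD_PER_HOUR * utilization  # pay only for hours used
    owned = OWNED_PER_HOUR                # pay every hour regardless
    return "cloud" if cloud < owned else "owned"

print(cheaper(0.25))   # used a quarter of the time -> "cloud"
print(cheaper(0.60))   # used most of the time -> "owned"
```

With a 3x price multiple, the crossover sits at exactly 1/3 utilization, which is where the "8 hours per day or 10 days per month" figures come from.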
I think cloud efficiency is about time saving. Is it cost-efficient to rent expensive hardware on AWS with the worst uptime on the market? Nope.
But the cloud is not only that: with a proper solution like a PaaS, you can focus on what is important to you: your code. The point is to find a provider who is responsible for keeping your service up and running, and forget about it. At Clever Cloud we detect the load and adjust resources, we deploy for you, we monitor for you... This is the real goal of cloud computing: forget about hosting (it just works) and focus on your own added value.
New companies also rent office space but at a certain point they buy/build their own. Should we say 'Why some companies say office rental is a waste of money'?
Cloud services are marketed as extremely scalable solutions that grow with your business. I doubt that everybody starting out realizes that the costs can grow at a much faster rate than your revenue.
The article title is sensationalist at best. It describes what most data-heavy startups do: start in the cloud and move to bare metal. IMO Amazon's pricing is confusing and expensive. Look into Rackspace or DigitalOcean if you want better pricing and more options. There are also a lot of hybrid cloud options now, too, where you could host web heads in a cloud but have your heavy lifters on bare metal.
This is the correct path. Also, something like Heroku or Google App Engine can be a first step if you have only one developer for frontend/backend/"dev ops". I prefer a VPS like Linode as a first step, but there is some specialized knowledge involved in setting up and maintaining a VPS correctly.
For the business person:
- Heroku first (if your dev hasn't set up a VPS or dedicated server before)
- Once something like Heroku starts costing you more than $500 a month, it's time to pay to move it to a VPS like Linode or DigitalOcean (newer than Linode, but cheaper). You can start out with just one server for the app and one for the DB.
I don't think anybody is recommending small companies host their own hardware, just that they own their own hardware. Renting a full cabinet at a premier datacenter costs under $2k/month, and bandwidth there costs far less than bandwidth from AWS.
The only time EC2 pricing comes close to physical equipment is with reserved instances, and in that case you lose money if you spin those down.

If you want to maintain that flexibility, you can't use reserved instances, and therefore your costs rise proportionally.
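A rough sketch of that reserved-instance trade-off (all three prices here are hypothetical, not actual AWS rates):

```python
# Reserved-vs-on-demand trade-off. Prices are made up for
# illustration: reserving lowers the hourly rate but the upfront
# fee is sunk whether or not the instance runs.
ON_DEMAND_HOURLY = 0.20    # assumed
RESERVED_HOURLY = 0.08     # assumed effective hourly rate
RESERVED_UPFRONT = 350     # assumed 1-year upfront fee

HOURS_IN_YEAR = 365 * 24

def yearly_cost(utilization, reserved):
    """Total yearly cost for a given fraction of hours actually run."""
    hours = HOURS_IN_YEAR * utilization
    if reserved:
        # the upfront fee is paid regardless of usage
        return RESERVED_UPFRONT + RESERVED_HOURLY * hours
    return ON_DEMAND_HOURLY * hours

# Running flat out, the reservation wins...
print(yearly_cost(1.0, reserved=True), yearly_cost(1.0, reserved=False))
# ...but spun down most of the year, the sunk fee makes it a loss.
print(yearly_cost(0.15, reserved=True), yearly_cost(0.15, reserved=False))
```

The crossover is exactly the flexibility point the comment describes: reserve and you're committed, stay on-demand and your hourly rate is higher.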
I'm trying to think of someone, other than Netflix, that has weathered cloud outages, and even THEY frequently have to deal with the weight of supporting a cloud topology. How many times has AWS's provisioning fabric gone out?
Cloud is expensive. It's a simple fact.

The future is more about hybrid, and I don't mean colo. More like OVH, where you can run your baseline on dedicated servers and use the public cloud to scale up and down during peaks.

That is why I have been asking why all these cloud providers don't offer dedicated machines.
>> What about power, cooling, rent in the building for the room you're keeping them in, backup generator, internet connectivity, administration costs (setup, repairs, installation), etc?
>> “The public cloud is phenomenal if you really need its elasticity,” Frenkiel says. “But if you don’t — if you do a consistent amount of workload — it’s far, far better to go in-house.”
Yup.