Say you have 10 machines at SoftLayer and use 30TB a month. Each machine comes with 3TB and you pool your bandwidth for $25 per server so you can allocate the whole 30TB to your proxies. It's unknown what fraction of your server cost is applied to bandwidth, but we know the point where you start saving.
At Amazon, 30TB of US-East EC2 bandwidth costs 10,000 GB x $0.12 + 20,000 GB x $0.09 = $3,000.
If you estimate the bandwidth portion of your SoftLayer server cost at less than $275 you're saving money when using your full bandwidth allocation. With servers starting at $159, sub-$100 seems realistic.
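To make the break-even explicit, here's a minimal sketch in Python using only the figures quoted above (the tier boundaries and prices are the 2011-era US-East rates from the calculation above, so treat them as assumptions):

```python
# Rough break-even sketch using the EC2 US-East egress tiers quoted above
# (first 10TB at $0.12/GB, the next 20TB at $0.09/GB) -- 2011-era prices, assumed.
def ec2_bandwidth_cost(gb):
    tiers = [(10000, 0.12), (40000, 0.09)]  # (tier size in GB, $/GB)
    cost, remaining = 0.0, gb
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * price
        remaining -= used
    return cost

servers = 10
total_gb = 30000
ec2_cost = ec2_bandwidth_cost(total_gb)   # -> $3,000
per_server = ec2_cost / servers           # -> $300
pooling_fee = 25                          # SoftLayer bandwidth pooling fee per server
break_even = per_server - pooling_fee     # -> $275 of each server's monthly price
print(ec2_cost, per_server, break_even)
```

If the bandwidth share of a server's monthly price comes in under that last number, the pooled dedicated bandwidth wins.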
In our case with dozens of servers and ~60 TB of bandwidth, we're saving thousands a month compared to EC2.
Dedicated servers with massive bandwidth plans are very easy to come by, on top of the perks of dedicated I/O and RAID (which also start very cheap).
If you do happen to have a lot of inbound traffic though, it's a nice little arrangement, as the bandwidth may be almost free to the provider (symmetry requirements) and still very useful to you.
IMHO, a startup should not start with dedicated, but when you get to a certain size, perhaps dedicated hardware, team and bandwidth is the way to go.
Just like you will not have a dedicated chef to cook your meals at the start, but rather outsource meals, etc.
What's the biggest technology mistake you've ever made - either
at work or in your own life?
Prior to Facebook, I was the chief executive of a small internet
startup called FriendFeed.
When we started that company, we were faced with deciding whether
to purchase our own servers, or use one of the many cloud hosting
providers out there like Amazon Web Services.
At the time we chose to purchase our own servers. I think that was
a big mistake in retrospect. The reason for that is despite the
fact it cost much less in terms of dollars spent to purchase our
own, it meant we had to maintain them ourselves, and there were
times where I'd have to wake up in the middle of the night and
drive down to a data centre to fix a problem.
In my experience, the highest operational cost with running services is managing the application itself - deployment, scaling, and troubleshooting. None of that goes away with the cloud.
The costs are dirt cheap these days. You can get a full rack, power and a gigabit feed for about $800 in many colos in Texas. We opted for Equinix in San Jose, which is all fancy with work areas, meeting rooms, etc. when you are there, but the funny part is, we're never there!
I do like virtualization for some maintenance/flexibility, so we have a few servers that are hosts and we run our own private cloud where we get to decide what runs where. Database servers go on bare metal with SSD drives in other cases.
Best of both worlds.
It's so cheap you can get a second colo in a different part of the country to house a second copy of your backups, and some redundant systems just in case something really bad happens.
Oh yeah, and don't get me started on storage. We store about 100TB of data. How much is that on S3 per month? $12,000/month! A fancy enterprise storage system pays for itself every couple of months of S3 fees.
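For a rough sanity check of that figure, a sketch using approximate tiered S3 standard-storage prices of that era (the per-GB rates below are assumptions, not quoted from the comment):

```python
# Approximate 2011-era S3 standard storage tiers (assumed): first 1TB at
# $0.14/GB, next 49TB at $0.125/GB, next 450TB at $0.11/GB.
def s3_monthly_cost(tb):
    tiers = [(1, 0.14), (49, 0.125), (450, 0.11)]  # (tier size in TB, $/GB)
    cost, remaining = 0.0, tb
    for size, price in tiers:
        used = min(remaining, size)
        cost += used * 1024 * price
        remaining -= used
    return cost

print(round(s3_monthly_cost(100)))  # roughly $12,000/month for 100TB
```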
Consider yourself lucky. We thought the same thing, but when a RAID controller died on us recently we really didn't know what hit us. It didn't just stop working, it started by hanging the server every now and then, then after a day slowly corrupting drives, then after a day or two it stopped completely.
When you get evicted from an EC2 instance you just switch to a new one; the cost is constant. When your piece of hardware at the datacenter goes down, unless you had the resources for a spare one, you are hosed.
For us, and this is worth keeping in mind, the big takeaway is consistent performance, not just price.
To be honest, I've never had more sleep in the past 2 years ;).
co-lo: 1x cost
managed: 1.5-2x cost
cloud: 2.25-4x cost
This was with headcount changes figured into the pricing. (We did not see a head count reduction when using EC2)
The primary advantage the cloud offered was that it was an operating expense without a contract, and that you could turn systems off when not used.
If you don't have a contract there is nothing to prevent a provider from raising prices on you. The reason to have a contract is not just to get the best price. It's to have a price guarantee. Edit: Moving 100% of your machines won't be something you will want to do. If you have a contract you can renegotiate well in advance of any price increase.
Prices always drop?
People thought housing prices always go up as well.
How long is the price guaranteed for? I'm assuming Mixpanel has this issue covered, but it's important to keep in mind. Not having a contract goes both ways.
SLA, including compensation for outages and outs if they have too many. Sure, without a contract you can leave anytime, but a contract isn't necessarily a permanent trap. They are negotiable. You can push for lower incident allowances, an opt-out partway through the contract, and so on.
Support... more in the sense of actual human assistance with the move and with issues. Depending on your size and the terms, that contract could be worth 6 to 7 figures to the company. That is some serious motivation to make the initial experience good and to help along the way.
You're still paying someone else for servers that you don't own (unless softlayer ships you those machines after 3 years).
This is why I hate the term "the cloud" -- because it is too nebulous and non-descriptive.
Leasing physical servers could be regarded as "cloud", but usually wouldn't be, because that method of hosting tends not to meet the resource pooling, rapid elasticity and measured service (esp. "automatically control and optimize resource use") characteristics (of course, one can argue that it can do those things, but generally it doesn't).
"Cloud" isn't* an engineering term, and thinking about it in absolutist, engineering terms makes as much sense as thinking about "web 2.0" in engineering terms 5 years ago.
True, but the blog post didn't say if they are using virtualization or not.
Unfortunately, 'cloud' implies nothing. Apple's iCloud isn't necessarily storing my data using VMs and EBS/BigTable type FS. They could be using dedicated W2K/IIS 4.0 boxes storing BLOBs in MSSQL and still be considered cloud by everyone.
You will take an I/O hit when instead of a single physical machine asking for a set of sequential blocks off the disks, you have 20 virtual machines asking for seemingly random blocks off the disks.
Replace disks with a storage array if you'd like, but the fact remains: more VMs will mean more storage contention. If you have the funds to have dedicated arrays per VM, hats off to you. Most people never do this, and I/O suffers a penalty. Virtualization has its price, and even that being said, I think it's worth it for most people.
Basically, on any reasonably sophisticated hosting infrastructure those aren't a problem.
If your problem is just seq->random conversion due to additional VMs, then bcache/flashcache does a surprisingly good job of making that just plain go away with little additional cost.
On any reasonably sophisticated host you have a DSAN (distributed SAN), which benefits from a neat little statistical effect: as you aggregate additional VMs, the relative variance of the total I/O load drops, and the load pattern itself becomes more and more normal the larger and more uncorrelated the pool gets.
That gives each VM more 'burst' capacity when required, with many fewer failures (i.e. cases where a VM asks for more I/O than the system currently has capacity for).
This leads to a bunch of interesting stuff when you try to apply it in real world systems, either in HPC or in clouds.
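A toy simulation of the statistical effect being described, assuming each VM's I/O demand is an independent random load (the numbers are purely illustrative):

```python
# Toy illustration: as more uncorrelated VM I/O loads are pooled, the
# relative spread of the total load shrinks, so a fixed amount of headroom
# covers bursts more and more reliably.
import random

def relative_spread(n_vms, samples=10000):
    totals = []
    for _ in range(samples):
        totals.append(sum(random.expovariate(1.0) for _ in range(n_vms)))
    mean = sum(totals) / samples
    var = sum((t - mean) ** 2 for t in totals) / samples
    return (var ** 0.5) / mean  # coefficient of variation of the aggregate load

for n in (1, 4, 16, 64):
    print(n, round(relative_spread(n), 3))  # falls roughly as 1/sqrt(n)
```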
I work over at orion, and our standard cloud VMs dominate dedicated hardware in I/O. There really isn't much competition.
You find most of the 'virtualisation is bad for I/O' claims are really either 'oversold I/O is bad for I/O' or 'trying to push all I/O for every VM on a box, as well as all WAN traffic, through one GigE port is bad for I/O'.
Agree but wouldn't you say that sole use of physical hardware (whether or not you own or lease that hardware and regardless of physical location) is really the issue that separates cloud from non cloud?
We have servers that we own in our offices.
We have servers that we own in colocation.
We have some dedicated virtual servers at MT http://mediatemple.net/webhosting/dv/
I consider the MT servers cloud since the hardware is shared. And what we are paying for is simply memory, transfer and disk space. I don't even know what hardware we are running on there and I don't know anyone else that is running on that hardware either.
But I would also consider the servers in the office and the colocated servers non-cloud even if they were leased (which they are not).
Actually, nebulous means cloud-like.
Would you prefer the older term "the grid"? They're all just items for buzzword bingo.
(ASCII-art drawing of a cloud)
For example, Amazon could estimate 99.95% availability, because of physical and geographical redundancy, etc. But this analysis would be faulty, as their outage earlier this year showed.
There are a litany of long-tail black swan events that could bring down entire datacenters that people just can't anticipate: not just earthquakes, terrorist attacks, etc., but even simple upgrades or misconfigurations like the one that took down their East Coast datacenter. Yet they still advertise an SLA of 99.95% availability. Is the risk of downtime really only 0.05%? Was the event that occurred really a three-standard-deviation event? I highly doubt it.
This complete inability to accurately estimate risk means that customers also have an essentially inaccurate view of what their risks are. Take the one commenter who said a small business ran their POS device over the cloud: if you told them they would be down 2 days out of the year, would they really be interested in that? Probably not.
In a similar vein, the authors were likely promised great uptime, but no guarantees on I/O or CPU performance, which is something you don't think of. The cloud provider doesn't have to be down for your web service to be drastically affected. I suppose since this is all new, the customers are learning which questions to ask, and the cloud providers are learning which things to guarantee, so hopefully this is worked out in the next year or so.
99.95% availability means your site should be "available" 99.95% of the time. It does not mean you have a 0.05% chance of a disaster; it means you will not have more than about 21.5 minutes per month of outages. Those 21.5 minutes might be during your most critical time. They might even all be added together into one 4-hour downtime right before you're demoing to VCs and still not violate your SLA for the year.
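The arithmetic behind those figures, for reference:

```python
# Downtime budget implied by a 99.95% availability SLA.
availability = 0.9995
minutes_per_month = 30 * 24 * 60   # 43,200
minutes_per_year = 365 * 24 * 60   # 525,600

print(minutes_per_month * (1 - availability))  # ~21.6 minutes of outage per month
print(minutes_per_year * (1 - availability))   # ~262.8 minutes per year, so a single
                                               # 4-hour (240-minute) outage still fits
```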
One, datacenter failure happens with any hosted application that runs within a single datacenter. Most other types of hosting also have a litany of other failure modes that go away with cloud, to be replaced by other modes. For example, you no longer have to deal with failures of individual hard drives, but now you have to deal with failures of EBS clusters.
Two, cloud provides a standard system that makes HA easy, rather than having to do dual-DC failover in hardware and deal with BGP, anycast IPs/DNS failover, split-brains and hardware STONITH, etc. All of this is just taken care of for you and/or made a lot easier.
I'm curious why they didn't just switch to Rackspace's dedicated hosting. It would have given them the performance they needed while retaining the flexibility of being able to quickly spin up cloud machines in the same datacenter as the dedicated machines.
That's not amazing at all.
You do realise that a "cloud" host and a dedicated box are the exact same hardware sitting next to each other in a rack? One's just virtualized 10x with Xen, VMware, KVM, etc.
Now their cloud support is a different story.
If you can, I recommend using Ganeti with Xen or KVM (I use KVM). Rigorous development, very friendly developers and very well designed tools. No wonder it is used internally at Google.
http://code.google.com/p/ganeti/ - Project page.
http://notes.ceondo.com/ganeti/ - Notes on how to use it with Debian (long).
Well, I think that's more because it was written at Google.
Edit: forgot part of the sentence, stupid me.
Cloud is renting servers: low cap-ex but high op-ex, minimal risk exposure, highly nimble.
Dedicated hardware is buying servers: high cap-ex but low op-ex, more risk, and more consistent.
There's nothing inherently "better" in either strategy; they each suit a different need.
Intel Core i7-2600 quad core + 16GB DDR3 + 2 x 3TB 7200rpm for 49 euro.
You don't get these problems with SoftLayer; you pay significantly more and get significantly better, almost-guaranteed service.
It seems like the only thing cloud really provides best is for:
1) Short lived instances or "now" instances.
2) and... what?
In short, "cloud" services (aka Software as a Service in the cloud) are brilliantly useful. No one will seriously question the usefulness of Dropbox, for example.
Platform-as-a-Service is also obviously useful. Google App Engine is qualitatively different from running your own hardware.
Infrastructure as a Service is useful too, if you know how to use it, but in a more limited set of circumstances. The circumstances where it's useful include things like: having an event-based site (e.g. a site related to a sport event) which will need a lot of hardware for a short time and not much afterwards; an app where, due to your solid marketing channels, you expect fairly rapid and unpredictable growth of the usage and you will need to provision new servers quickly; an app where the load varies significantly throughout the day or the week (e.g. something that does batch processing of a lot of data on a regular basis, but is idle the rest of the time).
Joel makes some good points on why they stayed with physical hardware.
This is how I always thought it went, so stories of growing companies moving off the cloud don't seem like a big deal - just a natural progression.
smaller businesses where economies of scale don't kick in, and/or smaller businesses that want to hedge their bets on growth.
Until his Comcast went/slowed down.
Or a local non-profit whose board came up with a great way to save money - host their phone system in the cloud. A local carrier was happy to sign them to a 3-year contract. Even supplied the 42 phones. Now they're lucky if they can make calls midday. It's so bad that if 10 phones are in use, the next call will sound like you're calling from a wind tunnel. And forget about calling at peak times. What does the carrier suggest? Upgrading to a T1. Of course that carrier never mentioned this when selling the service in the first place. And personally, with 42 phones plus 50+ computers and other devices, I'm suggesting a T3 (cost down here about $500 - 600 a month).
My point is, our infrastructure (at least in South Florida) isn't there yet. Sure the cloud is a great idea. But if you can't reach it, it's useless. But that doesn't stop the marketing. Or the complaints.
the cloud is not for everyone. physical servers aren't for everyone, either. stories like these don't automatically imply that using the cloud is a bad idea for everyone.
No, no. It's about price.
Most small businesses subscribing to these services aren't tech companies. In the phone market (as an example), carriers are selling hosted "solutions" for less than $75 a month. Comcast is too. I like Comcast. But you can't run an office with 25 phones and PCs and other devices on Comcast. At least not in Florida.
The other problem is that most IT firms down here are pushing their own "hosted solutions". Everything from email to accounting services. For cheap.
Now let me be clear: some services I think make perfect sense in the cloud, even with unreliable connectivity - like email, storage, messaging. But your core business, the services you must have to run your business, need to remain under your control. Period.
And the last thing: many small businesses don't really understand just how important IT is to their business.
If you don't believe me, try it. Call the business sales units of the carriers.
Now I believe in buyer beware. But that's the problem... most small businesses are suffering cash flow problems. When something cheaper comes along there is no "buyer beware". They think about lowering their monthly bills. And of course, in the end they get bit in the ass.
It's just mind-boggling to me how many business owners are so ignorant about the tech that runs their business.
in my mind, this is the very definition of "uninformed choice". you don't have to be uninformed on purpose, you could also be kept in the dark intentionally by salespeople. you're still making a choice based on an incomplete picture.
Who in the US provides cloud with disks comparable to dedicated servers? (or otherwise 'good')?
I say this because I run a cloud company called orion in Australia; we produce cloud VMs with faster-than-dedicated disk performance. When we were last in SV pitching, nobody knew of anybody in a similar sort of space.
I'm just asking because you seemed to emphasise 'normally' and 'good'.
Where the cloud comes in useful is when you're just starting out and have no idea what kind of demand you should be planning for. Are you going to get swamped? If so, no big deal- spin up a few more machines. If not, you're sitting pretty. Making those kinds of changes with a bare metal setup takes time.
All this said, I am surprised by the number of successful, profitable companies that still use the cloud. Once you know your numbers and have a reasonable outlook for the future you should at least investigate getting some hosting of your own.
Rackspace offers a good compromise where they have cloud services like Cloud Files and Cloud Servers, but you can also have dedicated servers with fast access to your cloud stuff.
As others have mentioned, it’s a lot easier to achieve geographical diversity with a cloud provider like AWS.
Another thing to keep in mind with the AWS cloud: if you have a huge setup, you can always start the largest instances, and you will almost certainly have the physical server all for yourself. And for a fee of $10 per hour per region, across _all_ your instances, you can have totally dedicated instances where you’re guaranteed that none of your instances will be on shared hardware.
($7,200 per region per month sounds like a lot, but it is a fixed fee and a drop in the bucket for people with huge EC2 deployments.)
The ability to customize our boxes has been a big advantage for us and given the hosting facility has all the redundant power sources and bandwidth pipes we never see any problems. I will mention that most of our traffic is east coast based and given our servers are on the east coast we have not seen any problems. If we see traffic expand we would look to put some boxes on the west coast or midwest.
At one point I looked into us switching to the cloud with AWS and Rackspace; the costs were much more than we pay now.
In regards to bandwidth, most of the cloud pricing I have seen is based on total usage; our bandwidth is based on 95th percentile usage. And it's not capped, so if we have a spike of 20Mb/sec the pipe is open to fulfill it. The 95% pricing model has worked very well for us. We average a few Mb/sec and our bandwidth costs are under $50/month. I'd add, when the author talks about negotiating: do it, you can get great deals.
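For anyone unfamiliar with 95th-percentile ("95/5") billing, a minimal sketch of how it is typically computed, assuming 5-minute bandwidth samples in Mb/s (exact sampling details vary by provider):

```python
# 95th-percentile billing: sort the month's 5-minute bandwidth samples,
# discard the top 5%, and bill on the highest remaining sample.
def billable_rate_mbps(samples):
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1   # everything above this is dropped
    return ordered[max(index, 0)]

# Example: roughly a month of samples, mostly a few Mb/s with occasional 20 Mb/s spikes.
samples = [3] * 8000 + [5] * 500 + [20] * 100
print(billable_rate_mbps(samples))  # -> 5; the spikes fall in the discarded 5%
```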
I looked into AWS for another startup I am doing in the communications space and we tried it; for not a lot of users on the cloud it was very expensive. We moved to Rackspace and have limited our alpha users to $100; it's still expensive, and as we move to launch over the next year we will go with dedicated servers.
Thanks for the post.
I run a service that constantly pushes over 90mbps over the wire (about 30TB a month) and I pay just over $100 a month for two servers. The same bandwidth usage on EC2 (or any other 'cloud' provider for that matter) would cost me thousands.
"Rackspace Cloud has had pretty atrocious uptime over the year there has been two major outages where half the internet broke. Everyone has their problems but the main issue is we see really bad node degradation all the time. We’ve had months where a node in our system went down every single week. Fortunately, we’ve always built in the proper redundancy to handle this. We know this will happen Amazon too from time to time but we feel more confident about Amazon’s ability to manage this since they also rely on AWS."
There were some statements from Amazon employees that Amazon isn't hosted on AWS.
I should say that the variation this leads to is at max around two seconds. I believe this is due to App Engine doing some dynamic grouping of slow applications. So if your app has fast response times, it will be grouped with other apps having fast response times, so the maximum downside is limited.
Multi-threading for Python will not be available until the launch of Python 2.7, which is on our roadmap. In Python 2.7, multithreaded instances can handle more requests at a time and do not have to idly consume Instance Hour quota while waiting for blocking API requests to return. Since Python does not currently support the ability to serve more than one request at a time per instance, and to allow all developers to adjust to concurrent requests, we will be providing a 50% discount on frontend Instance Hours until November 20, 2011. Python 2.7 is currently in the Trusted Tester phase.
I've also noticed my app's speed pick up dramatically in the past few days. Perhaps because people are leaving before the new billing takes effect.
I don't mind the GIL much because I can just make a request to get a new thread going. :)
They may also have higher availability requirements than most companies and need 2X (more?) the infrastructure to protect against a data collection failure.
They may be counting nodes used periodically, e.g. a large Hadoop map-reduce run.
Edit: don't get me wrong -- 200 servers is a lot. :)
edit: looks like they did publish some figures :)
Analytics is server-intensive.
Also interesting is what is simply an artifact of the fact that none of the current "clouds" out there were built to deal with, well, actual loads.
Some things are small, but seem rather strange. Why does no cloud give out 95/5 billing? Why isn't there more resource limiting/etc?
I see a bunch of things leaking out of EC2. People forget that EC2 was designed to deal with large numbers of stateless servers and it's not good for much else. They take the limitations of that and the rest of the AWS platform and apply it to the 'cloud' overall.
Two examples would be from the 'variability' section. CPU limiting under Xen (the hypervisor used by both Amazon and Rackspace) is trivial. The fact that CPU is so variable, especially for the smaller tiers, is thus rather interesting.
Similarly with IO. With Rackspace, you are on local disks. As such, unlike Amazon, Rackspace has no defendable reason for being able to starve other users of disk IO.
Also, just as a general data point: there is no real reason why a cloud should be in the same order of magnitude of cost as anything you could touch. Fairly simple reasoning: everything they buy is at massive scale, and there is a very minimal, fixed-ish management cost to deal with all the hardware. You can work out that even at almost list prices you are still looking at thousands of percent ROI on cloud servers. What that says about the market is that the current situation is caused by a near-monopoly stemming from a lack of cloudsmithing know-how. Over time, I would expect cloud products to simply dominate standard dedicated/colocated servers for most applications.
With fog (http://fog.io/1.0.0/index.html), I can start up a new EC2 instance in less than a minute and tell it to run the Chef process. If it doesn't work properly, I shut it down and try it again.
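fog itself is Ruby; as a rough Python equivalent of that workflow using the boto library of the same era, with the AMI ID, key pair, and bootstrap script all placeholders rather than anything from the original comment:

```python
# Hypothetical sketch: launch an EC2 instance and hand it a user-data script
# that kicks off a Chef run, roughly the fog + Chef workflow described above.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")    # credentials from env/boto config
bootstrap = "#!/bin/bash\nchef-client --once\n"   # placeholder bootstrap script

reservation = conn.run_instances(
    "ami-12345678",             # placeholder AMI ID
    instance_type="m1.small",
    key_name="deploy-key",      # placeholder key pair name
    user_data=bootstrap,
)
print(reservation.instances[0].id)  # if the run misbehaves, terminate and retry
```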
How does that work on a dedicated machine at, say, SoftLayer?
For testing out puppet processes I use Vagrant with VirtualBox on my local machine.
I tend to agree with his points, but for backups the cloud is perfect. If he had stayed in the cloud, he wouldn't even need the server in question.
For example, why not run the disk performance sensitive DB server on a dedicated machine, while fronting the whole arrangement with proxies and app-servers hosted in the cloud? Ok, so there are latency considerations to be made, but you can see that mixed architectures can make sense.
I think what's stopping people from considering this is that there haven't been good cross-provider network virtualization solutions available. But if you could create your own network topology and your own layer 2 broadcast domains, no matter where your machines are located, things are starting to look up.
There are a number of network virtualization providers out there now, which you might want to look at to see what's possible. Disclaimer: I work for vCider ( http://vcider.com ), which provides solutions for on-demand virtualized networks, which can span providers and data centers.
That's actually exactly what we do:
"We’ve moved 100% of our machines that rely upon performant disks to dedicated servers hosted at Softlayer. Roughly speaking, this corresponds to about 80% of our hosting costs."
20% more expensive but it seems like the easiest way to fix the problem if you're already on Amazon.
That said, I totally agree that Cloud-based IaaS is not a good fit for every situation.
And that dedicated instance is likely still running inside Xen, so you got the normal virtualization overhead, and slow disks.
However the point about pricing is less valid. Cloud hosting providers must invest in lots of extra infrastructure to allow for the flexible provisioning they offer, so any comparison that assumes no need for that flexibility is flawed.
Amazon offers spot instances and various other pricing innovations to help align the customer with Amazon's internal provisioning risk.
I could see Amazon offering lower prices if the user commits to longer term provisioning. This is a simple pricing update that would likely negate any cost advantages of non-cloud services.
The bleeding edge hardware aspect of his argument is valid for some businesses but not likely applicable to most.
Single point of failure issues aren't solved by the cloud. They're solved by eliminating single points of failure. One VM is just as much of a single point of failure as one real machine.
Even if it were true about buying the beefiest VM you're betting your company on an implementation detail that you have no control over.
"After deciding to go dedicated, the next step is choosing a provider. We got competing quotes from a number of companies. One thing that I was surprised by — and this really doesn’t seem to be the case with the cloud — is that pricing is highly variable and you have to be prepared to negotiate everything. The difference between ordering at face value and either getting a competing quote or simply negotiating down can be as much at 50-75% off. As an engineer, this type of sales process is tiring, but once you have a good feel for what you should be paying and what kind of discount you can reasonably get, the negotiations are pretty quick and painless.
We ultimately decided to go with Softlayer for a number of reasons:
- No contracts. I don’t think I really need to explain the advantage. You would think that you could get better prices by signing 1 or 2 year contracts, but interestingly enough, out of the initial 5 providers we talked to the two that didn’t require contracts had the best prices.
- Wide selection. Softlayer seems to keep machines around for a while and you can get very good deals on last year’s hardware. Most of the other providers we contacted would only provision brand new hardware and you pay a premium.
- Fast deployment. Softlayer isn’t quite at the cloud level for deployment times, but we usually get machines within 2-8 hours or so. That’s good enough for our purposes. On the other hand, a lot of other hosting companies have deployment times measured in days or worse.
One last thing about getting dedicated hardware. It’s cheaper… a lot cheaper. We have machines that give us 2-4x performance that cost less than half as much as their cloud equivalents and we’re not even co-locating (which has its own set of hassles)."
However, the provider I use (Joyent) recently added some kind of disk scheduling that prevents these problems. I don't know how they do it, but hopefully more cloud providers do something similar.
RS is more expensive but the extra management and support you get is well worth it IMO.
AWS is a gigantic money pit. SoftLayer is the only way to go, IMHO.
I've been saying this for ages, and every time people would fall over backwards trying to defend/prove their cloud mistake...
"The cloud is cheaper, faster, and infinitely scalable."
Except none of those 3 is true for any real world use case, but a few.
The moment a popular site like Reddit switches to the cloud, is the moment it becomes barely usable during certain times of the day.
The conclusion I came to is that for a 'web 2.0' type setup, the break even point was about 500 'machines.' That was in part because a 'machine' today has 8 - 24 'threads' and 2 - 40T of 'storage' and (at the time) 2 - 96G 'memory.' So in terms of 'cloud' you could easy run 10 "instances" on these sorts of machine. So 500 machines might be 5000 'instances' in an AWS type cloud.
It's this '10:1' multiplier effect (which is only getting better with bigger machines), plus the management technique of running the same config everywhere, etc., that means your TCO goes up more slowly than the capacity of the resulting infrastructure, so you can 'solve for x' where the two lines cross to identify the break-even point. Everything east of that point you're coming out ahead of a 'cloud' based deployment.
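The 'solve for x' step is just finding where the two cost lines cross. A sketch with placeholder dollar figures (only the 10:1 packing ratio comes from the comment; the other numbers are invented, chosen so the crossover lands near the ~500-machine figure mentioned above):

```python
# Find the machine count where owning (with ~10 cloud-sized instances packed
# per physical machine) becomes cheaper than renting equivalent instances.
# All dollar figures below are illustrative placeholders.
def owned_monthly_cost(machines):
    fixed_ops = 100000            # colo space, network gear, ops staff baseline
    per_machine = 300             # amortized hardware + power per machine
    return fixed_ops + machines * per_machine

def cloud_monthly_cost(machines, instances_per_machine=10):
    per_instance = 50             # price of a comparable cloud instance
    return machines * instances_per_machine * per_instance

break_even = next(m for m in range(1, 10000)
                  if owned_monthly_cost(m) <= cloud_monthly_cost(m))
print(break_even)  # everything east of this point favors owning
```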
What is still a challenge however is geographic diversity. If you wanted to put 500 machines 'around the world' so 125 machines in each 90 degrees (approximately) of longitude, the economics of getting 5 - 10 'cabinets' in places around the world can work against you. (you have more negotiating power if you're putting in 100 racks than if you are putting in 10 racks)
It's interesting, right? At first, you can handle a couple of totally mixed-up machines. Then it stops scaling and you have to start doing the whole "golden + syncer" approach.
Then you go too far and get into a monoculture. When the machines do break, it's impossible for humans to go around and fix them in any reasonable amount of time because there are too many. It's amusing when this happens and the solution put forth is "more administrative controls".
Roll out a deployment to N machines (say 10), run self checks (you have those, right?), and if everything passes give them standard load. Over some period of time K, periodically check up on them. After that, roll over to N*2 or N^2 nodes, and continue until you have rolled out to your entire cluster.
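A minimal sketch of that rollout pattern; the deploy and health-check hooks are placeholders to be filled in with whatever your environment uses:

```python
# Staged rollout: deploy to a small batch, self-check, let it soak under
# standard load, then keep doubling the batch until the whole cluster is done.
import time

def staged_rollout(nodes, deploy, health_ok, batch=10, soak_seconds=600):
    done = 0
    while done < len(nodes):
        batch_nodes = nodes[done:done + batch]
        for node in batch_nodes:
            deploy(node)                    # placeholder: push the new build
            if not health_ok(node):         # placeholder: run self checks
                raise RuntimeError("rollback: %s failed self check" % node)
        time.sleep(soak_seconds)            # let the batch take standard load
        done += len(batch_nodes)
        batch *= 2                          # N, then N*2, and so on
```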
Their response? More administrative measures.
The more custom your configuration the better bare metal is.
That's completely false. The moment reddit blogged about moving to the cloud, it was *perceived* as being slower.
The cloud move happened 7 months before the blog post came out.
It makes it more challenging to load test. When there is no contention you can usually 'burst' to use more of the machine's resources, but you can't necessarily trust that you will always have that capacity.
Even for multiple boxes, you call up your provider, and order a cage with hardware filled. They'll provide everything.
95% of "Cloud" is just a new marketing term for VPS.
Properly built clouds can give you I/O faster than a dedicated server, and give you CPUs on demand that far exceed what you could otherwise afford (due to you only needing them occasionally).
And any reasonable cloud provider can discount things such that it is unreasonable for you to move off. They have a much lower bottom line than a comparable dedicated server provider.
It is interesting that this doesn't seem to be the case currently.