Someone from AWS called me out of the blue the other day to ask for input, interview me about what I need, and offer a quote. After a phone call or two and a couple of emails in which I mentioned the exact specs and price I get from Hetzner (I was genuinely interested in a competitive quote), I heard nothing for a while and then received the following:
"I'm sorry for my late response. I’ve been pretty busy these days…"
"I had a look at what your technical specifications below and I'd like to understand a bit more what type of business you're doing."
"One of AWS's main strengths lies in the scalability of our platform."
"How much is the % of usage of your dedicated server? What happens if it reaches the maximum usage? At AWS, you can start with smaller instances and use auto-scaling to scale up or down automatically according to your traffic."
"Is scalability a challenge for you?"
There was no quote attached. :)
Hetzner's price/performance is an elephant in the room it seems.
Aside from huge customers like Dropbox, which most likely get deep discounts, I don't see who gets a better deal out of AWS (or any similar cloud provider) than they would by renting dedicated servers directly, even accounting for the much-hyped scalability factor. Most customers never end up needing it anyway: there is only so much attention to go around on the net, so only a very limited number of things can "go viral", while many, many more services compete for that attention. That's true at least for the European market. AFAIK the US hosting market is a bit different and generally more expensive, so AWS may be better positioned against the competition there.
Well, how would you handle massive traffic spikes? Through a combination of vertical and horizontal scaling? Through excess capacity? Either way, I'd probably want to start with something fast and inexpensive.
If you wait until the spike hits before you spin up your VM, you're still too late.
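A rough back-of-envelope of why that lag hurts (all numbers are illustrative assumptions, not figures from anyone in this thread):

    # Back-of-envelope: requests that arrive before new capacity is ready.
    # All numbers below are illustrative assumptions.
    baseline_rps = 200            # steady-state requests per second
    spike_rps = 2000              # peak of a sudden traffic spike
    ramp_seconds = 60             # spike reaches its peak within a minute
    capacity_rps = 400            # what the currently running fleet can absorb
    boot_and_warm_seconds = 300   # detect the spike, boot a VM, warm caches

    overloaded = 0.0
    for t in range(boot_and_warm_seconds):
        # linear ramp up to the peak, then sustained load
        demand = baseline_rps + (spike_rps - baseline_rps) * min(t / ramp_seconds, 1.0)
        overloaded += max(demand - capacity_rps, 0)

    print(f"~{overloaded:,.0f} requests exceed capacity before new instances help")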
I don't follow your line of reasoning: you seem to suggest that, to build a scalable service, you yourself would prefer servers with necessarily poor price/performance?
Or are you saying that it's not possible to use dedicated machines to build a scalable service? Or that one should only use VMs, with their inefficiency and resource contention? How do you reason about disk seek performance? What happens when the spike hits, and another AWS customer on the box starts stealing CPU?
Actually, traffic spikes were the reason we moved off AWS. A single dedicated machine at Hetzner gives 10x the headroom at a fraction of the cost. That buys you time and capacity when you need it.
Precisely. This is the other elephant in the AWS room: the only way to survive a spike without service degradation while VMs spin up on that platform is through Lambdas / S3-served pages / API Gateway, but even Lambdas lag behind traffic.
But then you need to build your whole architecture around it.
Anyway, I'm running on AWS right now, but for its other advantages and services, not for its scalability or price/performance.
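For reference, the Lambda/API Gateway approach mentioned above boils down to something like this minimal sketch; the handler name and page body are made up, and the API Gateway wiring and deployment are not shown:

    # Minimal AWS Lambda handler for an API Gateway proxy integration.
    # Function name and HTML body are illustrative placeholders.
    def handler(event, context):
        """Serve a small static page; Lambda scales concurrency per request."""
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "text/html"},
            "body": "<html><body><h1>Still up during the spike</h1></body></html>",
        }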
The thing is, though, the scaling is nice but most folks just want the auto-recovery. You spread your app over 3 zones, you have autoscaling, and you can run with a couple of ops-savvy devs and largely forget about it.
You no longer need to pay for a dedicated sysadmin who knows how to manage a datacentre when you have a small technical staff. The extra hosting bill is less than hiring that extra person.
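A hedged sketch of that "app over 3 zones with autoscaling" setup using boto3; the names, AMI ID and region are placeholders, and the load balancer, scaling policies and networking are omitted:

    # Sketch: auto-scaling group spread across three availability zones.
    # Names, AMI ID, subnet layout and region are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

    autoscaling.create_launch_configuration(
        LaunchConfigurationName="app-lc",
        ImageId="ami-12345678",      # placeholder AMI baked with the app
        InstanceType="m4.large",
    )

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchConfigurationName="app-lc",
        MinSize=3,
        MaxSize=9,
        DesiredCapacity=3,
        AvailabilityZones=["eu-west-1a", "eu-west-1b", "eu-west-1c"],
        HealthCheckType="EC2",       # replace instances that fail status checks
        HealthCheckGracePeriod=300,
    )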
We do have a custom AMI that just fetches and builds the latest snapshot release when we need auto-scaling or auto-recovery, but our real problem is that we currently depend on sticky sessions, so users accumulated on the initial instances get sucky performance. (And yes, we are working on fixing it; we can't just serialize sessions to Dynamo because, reasons.)
Clearly many people want all of this outsourced for them and will pay a premium for it. I always find it a bit silly that every time an AWS service is announced, it is compared to some bare-minimum provider out there that is cheaper. Of course it is going to be cheaper, but that really is irrelevant.
The company website can take a beating, but onboarding new customers takes weeks if not months. Plenty of time to plan and scale.
In 2008 we launched a browser game that attracted more than 200,000 players. Until 2009 we had to rent big, fat servers that could cope with the load and cost over €4,000 per month. Since then hardware has become so fast that our PHP software, with 500k lines of code and 200 GB of data in databases, can run on 3-4 dedicated servers that we can rent from almost any German provider for around €300-500 per month. Since 2014 we have had everything on virtual servers hosted by Hosteurope, because we wanted managed hosting like AWS, and it was even cheaper than running our old dedicated machines. At that time we looked at AWS and were totally shocked by the prices (we were also shocked by how difficult it is to understand the AWS universe)! We're paying less than a tenth of what even the cheapest one-year reservations on AWS would cost us, for much more performance than we need. Even if our player base tripled overnight, our virtual machines would easily cope, and upgrading them takes two clicks and a reboot.
Also, we see it as a plus that we don't have to deal with all the Amazon stuff, because there are so many APIs, names and so much complexity in their system. Nobody wants to learn all of it, and nobody wants to deal with multi-location replication just because Amazon has unreliable servers. German providers seem to have datacenters and servers that don't crash every so often. I'm flabbergasted by how often I read that AWS has broken disks, crashed servers, crashed datacenters, network problems and so on. Since 2008 we've had two or three problems in the datacenters of our various providers, with a total downtime of 8 hours. That comes out to just a little under 99.99% uptime over all those years...
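For what it's worth, the arithmetic checks out if you assume roughly seven years (2008-2015):

    # Rough check of the claimed uptime over roughly seven years.
    hours_total = 7 * 365.25 * 24    # ~61,362 hours
    hours_down = 8
    uptime = 1 - hours_down / hours_total
    print(f"{uptime:.4%}")           # ~99.987%, i.e. just under 99.99%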
I really wanted to try the shiny new thing called AWS, but no matter how I looked at it, it's just a very, very costly solution for something with built-in unreliability.
Running ECS is cool and all, and I know they do some container-specific VM optimizations, but I still know I'm running a kernel on a kernel, even if my VMs happen to share a host. I'd love to see the flexibility of the ECS software, but on metal.
With containers on bare metal, they could not cohost customers.
I'm sure AWS is improving on this, as they need to with Lambda and its underlying containerization architecture. But everyone else? It's going to be a while before you can call it "secure".
Security is hard :/
SDN could easily be handled by an off-system component. I forget who, but someone presenting at ONS 2015 mentioned using FPGAs for this.
The cloud is great for distributed systems, and for those the "kernel on a kernel" cost becomes insignificant next to the extra capacity you gain from horizontal scaling across multiple machines talking over a network (not to mention network latency).
I'd be willing to bet that running on absolute bare metal is a sort of niche market in cloud computing that Amazon considers too small to get into.
I have double digit thousands of machines for one application, and have an Excel spreadsheet outlining the additional hardware that would need to be purchased to handle the same workload but virtualized. I pull it out every time some recent Stanford grad tells me the cloud is the future.
Previously, your VMs were allocated to a host as Amazon saw fit.
A lot of software in this space, for instance, requires a fixed MAC address for its license server, and you report and pay based on the number of cores. While you can sometimes get around that, doing so would void agreements and wouldn't hold up in an audit.
In some companies I've worked at, this could drastically reduce the capital cost of the overpowered workstations some engineers need: machines that are like Ferraris you only take out on weekends.
*who are already using AWS in other parts of their infrastructure.
You could use an Elastic Network Interface, which keeps a fixed MAC address for its lifetime, but that would also work without dedicated hosts.
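A minimal sketch of that idea, assuming boto3; the subnet and instance IDs are placeholders. The point is that an ENI's MAC stays the same as long as the ENI exists, so moving it between instances preserves the MAC the license server sees:

    # Sketch: create an ENI, read its (stable) MAC, attach it to an instance.
    # Subnet ID and instance ID are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    eni = ec2.create_network_interface(SubnetId="subnet-12345678")
    eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]
    print("fixed MAC:", eni["NetworkInterface"]["MacAddress"])

    ec2.attach_network_interface(
        NetworkInterfaceId=eni_id,
        InstanceId="i-0123456789abcdef0",  # placeholder instance
        DeviceIndex=1,                     # appears as eth1 on the instance
    )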
When you decide to go cloud, you need to throw away your prior investment in OS licenses and hardware. Although you have no alternative but to throw away the hardware, being able to reuse your investment in software greatly reduces migration costs.
Of course, you could just wait until all your licenses expire and then migrate, but such a hard cutover would increase risk.
It would also be nice to have a price comparison between running normal instances and bringing my own licenses to a dedicated host, but at first glance, dedicated looks cheaper.
> each Dedicated Host can accommodate one or more instances of a particular type, all of which must be the same size
I'm surprised by the "same size" requirement. It seems like even if you ask customers to stay within a single family (m3, m4, etc.) the customer could do their own hand placement of 8 vCPUs next to a pair of 4s...
Edit: Disclaimer, I work on Compute Engine.
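For concreteness, allocating a Dedicated Host and pinning an instance to it looks roughly like this with boto3; the AMI ID is a placeholder, and per the quoted restriction the instance type has to match the host's:

    # Sketch: allocate a Dedicated Host, then launch an instance onto it.
    # AMI ID is a placeholder; region and AZ are examples.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    host = ec2.allocate_hosts(
        InstanceType="m4.large",
        AvailabilityZone="us-east-1a",
        Quantity=1,
    )
    host_id = host["HostIds"][0]

    ec2.run_instances(
        ImageId="ami-12345678",           # placeholder AMI
        InstanceType="m4.large",          # must match the host's instance type
        MinCount=1,
        MaxCount=1,
        Placement={"Tenancy": "host", "HostId": host_id},
    )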
There are many software packages that are licensed to an individual piece of hardware. Tied to that are USB authentication dongles, and even parallel-port dongles for some old-school commercial software.
I think after 25 years in the industry I know a lot about software licensing. Enough to know that the fact that lots of software packages are licensed that way doesn't make those schemes any less silly.
You don't have to manage a datacenter just because you don't go with Amazon. There are tons of companies that will do it for you, at various levels. You want colo space, nothing more? Fine. You want dedicated hosts with a few of your own machines in between? No problem. On top of that, a lot of dedicated-server companies keep spare machines around, so spinning up new boxes is easy enough. Feel free to contact me if you need solutions like this.
I think there's one huge advantage of Amazon: management effectively doesn't get to set policy on the datacenter, so they don't get to screw it up. An internal company department would use something like VMware and wouldn't let you spin anything up without endless approvals, whereas Amazon treats you like a customer.
What I understand so far: it physically binds a fixed number of virtual machines to a physical host; in the example, one "dedicated host" holds 22 × m4.large.
Pinning your virtual machines to one physical host, or to a set of hosts on the same network segment, is a service you can get from quite a few hosting providers if you ask.
If you opt for a solution like this, you are most likely running an enterprise-scale setup, and you will run it for quite some time: at least six months and up.
Keeping that in mind, and given a hardware lifetime of at least two years, over those two years you end up paying roughly 8 times the hardware cost just for the management layer (storage and connectivity you pay per GB with EC2); a rough version of that calculation is sketched below.
I guess everybody will have to see how this fits their business model: for non-volatile, predictable resource demand, physical iron (colo or rented) might be the better choice.
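A rough version of that "8 times the hardware cost" figure, using the m3 Dedicated Host rate quoted elsewhere in this thread and an assumed purchase price for comparable hardware (the purchase price is an assumption, not a quote):

    # Rough cost comparison over a two-year hardware lifetime.
    host_rate_per_hour = 2.341             # m3 Dedicated Host rate quoted in the thread
    monthly = host_rate_per_hour * 24 * 30 # ~$1,685/month
    two_year_total = monthly * 24          # ~$40,450 over two years

    assumed_hw_cost = 5000                 # assumed price of a comparable server
    print(f"2-year host cost: ${two_year_total:,.0f}")
    print(f"ratio vs. buying: {two_year_total / assumed_hw_cost:.1f}x")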
For one of my clients, for whom I manage multiple racks in two locations with 150+ VMs, "managing the hardware" comes out to about 1-2 days a year in aggregate to bring new hardware in and wire it up (most of that is travel), plus 20-30 minutes to investigate the very few issues we can't diagnose and fix via IPMI. I pop a server in, attach power and ethernet, check that the IPMI is reachable and that it sees the PXE server, and beyond that "managing the hardware" amounts to yanking the occasional dead hard drive and inserting a new one, and every now and again confirming a server is dead.
Meanwhile with EC2 I see most of the same non-hardware issues (e.g. kernel panic, applications occasionally spinning out of control and taking a server down) that are just as trivial to handle via IPMI as via the EC2 console, but we also have to engineer around things like the lack of solid, stable, directly attached RAID arrays, which we don't need to worry about with the bare metal servers.
And no, EBS does not count - the number of times volumes have gotten stuck in an attached state on a failed instance terrifies me. It also can't in any way match directly attached SSD RAID setups for performance, which is another reason it ends up taking more ops time: you end up with setups that simply need more VMs to compensate for platform limitations.
I absolutely think EC2 is great for things like large batch jobs where requirements vary wildly, but most people don't even have enough daily variance for that to come anywhere near compensating for the cost of EC2. (And nothing stops you from deploying hybrid approaches; in fact I'm working on hybrid setups mixing bare-metal servers with EC2 for batch jobs and load spikes right now.)
Ones that I'm personally familiar with:
Hivelocity in Florida
ReliableSite in NY
WebNX in LA
100TB (a SL reseller in some locations, and they own their own in others)
OVH (lower quality, lower price, great for various workloads, NA data center)
For example, if I want 128GB RAM and SSD disks, their prices suddenly go up to thousands of dollars per month because they're all some kind of beefy Dell or HP, whereas Hetzner can give me a single-processor, 6-core Xeon E5-1650 3.4GHz + 128GB RAM + 960GB SSD for $123/mo. LeaseWeb has cheaper SuperMicros, but they either max out at 32GB, or they don't have SATA disks. They're skewed very differently: With Hetzner you can't pick "less RAM, lots of CPUs" and LeaseWeb doesn't have "lots of RAM, fewer CPUs, SSDs".
A vendor like this needs to offer a much wider range of specs to be worth investing one's entire infrastructure in, to be honest.
It would be interesting to see how much you can squeeze the dedicated hardware with the largest EC2 types though.
This would be a security and privacy nightmare.
> Many of our customers have asked for this feature so that they can run software that is licensed for a particular piece of actual hardware.
So it's more aimed at working with/around archaic licensing schemes, rather than technical advantages.
(Edit: I may have misunderstood the comment. This will certainly have a faster interconnect to EC2 compared to a non-EC2 dedicated server.)
Other than licensing, an advantage I'm guessing is in reducing noisy-neighbor effects. In our case, we use a lot of t2.micro instances, which seem to suffer from this.
A disadvantage is that the instances on a dedicated host might all go down together (similar to putting all your instances in us-east-1e and having 1e go down, or of course all of us-east going down). Although I'm not sure a dedicated host itself is any more likely to go down while the datacenter remains operational. That's what I'm most interested in knowing: how do these fail?
1) Have a canned VPC-based test server that's off, and only turned on for the test. VPC-based servers do not lose their IP addresses between activations (though a single server may not suit you)
2) Use AWS internal DNS: have your vendor product use hostnames rather than IPs (if possible), and when you spin up the new machine for testing, switch its IP into the hostname on the internal DNS zone (see the sketch after this list). Again, you'll need to be VPC-based to use this. You can destroy the instance between runs with this method.
This has the added benefit that if you ever need to spin up more instances to run simultaneously, you don't need to get a new whitelisted IP address.
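A sketch of option 2, assuming the "internal DNS zone" is a Route 53 private hosted zone; the zone ID, record name and instance ID are placeholders, and the instance must live in the VPC associated with the zone:

    # Sketch: point a private-zone hostname at the test instance's private IP.
    # Zone ID, record name and instance ID are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    route53 = boto3.client("route53")

    reservation = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
    private_ip = reservation["Reservations"][0]["Instances"][0]["PrivateIpAddress"]

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",                 # private hosted zone for the VPC
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "license-test.internal.example.com.",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": private_ip}],
                },
            }]
        },
    )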
$2.341 (m3 host / hour) * 24 hours * 30 days = $1685.52
before the "up to 70%" reservation discount.
Sure, your jobs aren't fighting another customer's jobs for CPU; now you're fighting your own jobs for CPU.
Check out the EC2 instance types (https://aws.amazon.com/ec2/instance-types/) to learn more.
Per the EC2 Instance Types page (https://aws.amazon.com/ec2/instance-types/), many of the instance types already include SSD storage.
Wouldn't dedicated I/O be a big plus and selling point for dedicated instances?
And your not disclosing your connection, considering 90% of your submission history is Cloudways, makes it even better.
And that you're one of the fake-review-shill spammers too.