Just a friendly reminder that AWS pricing is still insanely high compared to dedicated bare metal servers. On AWS you'd pay around $1500/mo for 24 cores ("m5a.12xlarge")[1] while e.g. Hetzner offers a 24 core AMD bare metal server for $190/mo[2].
Also consider that on AWS you pay for traffic on top of that, the prices for which are even more insane. 100TB, while free with Hetzner, would cost you somewhere around $9000 on AWS.
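Rough math behind that figure, assuming the public tiered egress rates (the per-GB prices below are assumptions and vary by region; a flat $0.09/GB over the full 100TB is roughly where the ~$9000 number comes from):

    # Back-of-the-envelope EC2 egress cost for 100 TB/month.
    # Tier prices below are assumptions (us-east-1 style tiers); check the pricing page.
    tiers = [
        (10_000, 0.09),    # first 10 TB (in GB) at $0.09/GB
        (40_000, 0.085),   # next 40 TB at $0.085/GB
        (100_000, 0.07),   # next 100 TB at $0.07/GB
    ]

    def egress_cost(gb):
        cost, remaining = 0.0, gb
        for size, price in tiers:
            used = min(remaining, size)
            cost += used * price
            remaining -= used
            if remaining <= 0:
                break
        return cost

    # ~$7,800 with these assumed tiers; a flat $0.09/GB gives the ~$9,000 quoted above.
    print(f"100 TB egress: ~${egress_cost(100_000):,.0f}")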
EC2 on-demand/hourly to Hetzner monthly pricing is not really an apples-to-apples comparison. EC2 reserved pricing is more similar and about 1/2 or less of on-demand fees. Hetzner is still cheaper, and if all you need is a low-cost dedicated server w/local disks and bandwidth, this is a great choice - but AWS provides an entire ecosystem of services including configurable EBS and S3 storage, as well as more diverse and scalable purchase/provisioning options.
Having a large ecosystem of services should be a reason AWS is cheaper since on average you'll spend money on those integrations -- not a reason to pay a premium.
Increased complexity leads to increased overhead costs. A bare metal instance is going to be cheaper to maintain than a full-service instance that interfaces with a robust ecosystem (even if that instance doesn't use any of it).
You're forgetting geographical availability. Where is their DC in Sydney or Brisbane, Australia? How about Singapore? Or Japan?
Unless you're in Germany, Germany, or, perhaps, Germany (or Finland) then you're adding a lot (yes, a lot) of latency to network requests for anyone outside of Germany and especially anyone on another landmass.
OVH has POPs all over despite being a value server provider. You get quite a bit for how little you pay, plus they have working IPv6 (really important for mobile networks) unlike the major cloud vendors.
From experience, OVH's IPv6 is sadly a bit... broken. They don't send proper router advertisements on the network, which confuses the hell out of any proper firewall or router you try to set up for it.
OVH does have its quirks (eg: pushing systemd-networkd and not using router advertisements) but their network is significantly better if you need to avoid the IPv4 CGNAT that cellphones sit behind, as IPv6 is the only means of doing so (long lived connections, 20ms or so latency savings, skip Firebase messaging, etc).
Other issues we ran into with AWS include congested peering with CenturyLink & most Asian Pacific countries. Having gigabit via CenturyLink and only being able to fill 2MB/s from AWS is crummy, esp. when competitors can reliably push many multiples of that.
OVH is also quite inexpensive and in our experience is a bit less problematic than Hetzner.
With Hetzner you get a lot of power really cheap but you also get fairly minimal tech support. We had a lot of issues with instances being randomly blocked due to services' traffic being miscategorized as a "port scan" and were able to get nowhere with their tech support so we had to go to OVH. Haven't had any similar issues at OVH or anywhere else for that matter.
In the past I had the exact same issues, with the server's network port blocked for several days by OVH... Also with DO.
The only place where this never happened for me is AWS.
Hetzner is cheap, but I'd be reluctant to run any business-critical stuff on it. I wish DigitalOcean would launch bare-metal hardware. Even at a 100% markup compared to Hetzner, it'd be worth it for the peace of mind of better support/hardware.
1. Ease of use -- Bare metal servers are harder to use than VMs; you can't trivially snapshot your bare metal server, you can't trivially clone it, and there aren't tons of AMIs available. Hetzner has fewer UX designers and less control over the platform, so it can't offer as good a UX / console experience / onboarding experience.
2. Cost/availability of personnel -- More people know AWS than know how to PXE boot machines. Sure, you might not actually have to do complicated stuff for Hetzner's bare metal, but it's still more complicated. You might have to get some real ops people (rather than just saying the word devops and pretending everything is okay).
3. Other cloud services (ELB, RDS, etc) -- Sure, you can connect a Hetzner machine to an RDS endpoint, but it's slower and harder than just using the AWS ecosystem. People want S3, ELB, RDS... They don't want to hire 5 more guys to run Ceph, HAProxy, and database clusters (see the S3 sketch after this list).
4. Ecosystem of tools -- People have used AWS and its APIs for so long that you can find pre-built tooling to do all sorts of things, from libraries to make lambda functions, to terraform modules to manage machines. I don't even know if Hetzner has a real API. It certainly has a smaller ecosystem.
5. Mindshare -- The cheaper/better service doesn't actually steamroll people who have mindshare / brand recognition in every case, so even if Hetzner were better, AWS already might have enough critical mass.
6. Easier "region expansion" -- With AWS, if you want to reduce latency to some customers for application servers, moving between regions is trivial. Hetzner, not so much. Also, Hetzner's networking is qualitatively worse for reaching other services (most of which are on AWS).
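To make point 3 concrete, here's the kind of thing people mean by not wanting to run Ceph themselves -- a minimal boto3 sketch (the bucket name and keys are placeholders, not anything real):

    # Durable object storage is one API call away on AWS, versus operating a
    # Ceph/MinIO cluster yourself on bare metal. Bucket and keys are placeholders.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file("backup.tar.gz", "my-example-bucket", "backups/2018-11-06.tar.gz")

    # Reading it back is just as trivial:
    s3.download_file("my-example-bucket", "backups/2018-11-06.tar.gz", "/tmp/restore.tar.gz")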
I think the main issue here is that people are comparing two things that can't really be compared. AWS includes S3, ELBs, RDS, etc.; Hetzner does not. AWS offers robust APIs and features. Hetzner, not so much.
AWS has brand recognition, and no one will get fired for going with AWS.
I know how to run my own servers, I know how to use AWS. For a startup that had a focus on software, not hardware, I'd still absolutely go with AWS because I know for a fact the ecosystem of services and tools, coupled with the ease of finding more people who know AWS, make it the much safer choice.
> AWS has brand recognition, and no one will get fired for going with AWS.
Now replace "AWS" with "IBM"...
Funny how the exact same thought processes that used to happen at big lazy enterprise corporations now show up at all those startups that used to mock the big lazy enterprise corporations for this sort of thinking.
Well, seeing as the executives who chose IBM in the 80s turned out to be right: you can still buy new hardware from IBM that is compatible with their systems from the 80s. Their competitors, not so much.
It would be funny if I agreed with you about who is thinking these thoughts, but I'm pretty sure this line of thought is exclusive to employees at medium / large corporations, not startups.
At a startup, these decisions are often made either collectively or from the top. In either case, no one is getting fired for picking something other than AWS, and startups are more willing to take technology risks.
I'd imagine it's because people aren't spending their own money. The pattern I see: use AWS at work, use DigitalOcean/Linode/Hetzner for hobby projects when $$ is coming out of your own wallet.
I have literally seen this happen. Non-technical management demanded forklifting everything into the cloud because "global availability" and "no more wasted time with in-house data centers." The company then hit a cash hole due to other issues it could have survived if it weren't staring down a monster AWS bill.
Owning your core infrastructure might not be hip and cool, but it sure can make or break a company. Never mind that you're at the mercy of your vendor when it comes to billing, features, maintenance windows and much more.
The catch is marketing. Most people don't make decisions based on what is universally better for them; they make decisions based on the very limited information they have and rely heavily on the people they know. All of this means that usually the companies/products/services that are the most well known will take the biggest share of the market, pretty much regardless of anything else (unless their offering completely sucks, but then they probably wouldn't be able to spend as much on marketing). So it's not about price and it's not about features or quality.
However, Hetzner has apparently been around for 21 years, as I've learned. They actually started doing this before Amazon. It's possible that they're just massively underappreciated, or people had reasons to pick something else over that extended period of time.
A lot of discussion here has pointed out that most people would rather have easy access to managed services from AWS (or GCP, Azure, etc.), rather than hiring engineers to manage and operate their own in-house copies of those services, and I tend to agree.
I had never heard of Hetzner before reading this thread on HN. Been using AWS for almost a decade. Most people I know are using either AWS or Google Cloud. A lot use Linode or Digital Ocean for personal projects.
The hardware isn't identical. Hetzner is known for having more drive failures than is typical as a result of replacing failing drives with "refurbished" ones.
TBH, that's a really lousy alternative. :/ EBS is so, so useful and reduces headaches when running a system by an amount that it's almost by itself worth using AWS (or equivalent).
Depends on your use case, but I agree Hetzner needs EBS/iSCSI-equivalent storage. I'm allergic to marrying myself to AWS (or any ecosystem, for that matter).
But it's not necessarily about the hardware of a single server. I know nothing about Hetzner. I know that AWS puts a ton of thought and money into their networking and power infrastructure, but how trustworthy is {random competitor}'s infrastructure? Is there a catch with Hetzner, or how are they this cheap?
I have a lot of experience with DigitalOcean's services. If they offered bare metal, that would be cool, as whitepoplar said. Packet is a bare-metal AWS competitor that I know a lot more about and trust quite a bit. None of this means you're going to trust my experiences.
Of course they're not identical... I am only guessing, but AWS must have tons of in-house chips tailored for networking/routing, let alone the design of their data centers...
I've used both. Hetzner definitely has attractive pricing and Amazon tends to charge extremely high amounts for relatively slow machines. For reference, one or two months of usage basically pays for the hardware cost of a lot of these instances. The profit margins on this stuff must be insane for Amazon because they run these machines for years non-stop.
However being able to pay for these machines by the minute and spin them up and down in minutes is a level of flexibility you don't have in Hetzner. You could spin up a few dozen of these instances, run some expensive batch jobs and shut them down in 40 minutes. There's no need to have them idling while you are not using them.
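A rough sketch of that pattern with boto3 (the AMI ID, instance type, and count are placeholder assumptions); the point is that the fleet only exists while the job runs:

    # Spin up a short-lived fleet for a batch job, then terminate it.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    run = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder batch-worker image
        InstanceType="m5a.4xlarge",
        MinCount=24,
        MaxCount=24,
    )
    ids = [i["InstanceId"] for i in run["Instances"]]

    # ... dispatch the batch jobs to the workers and wait for them to finish ...

    # Tear the fleet down as soon as the work is done; billing stops shortly after.
    ec2.terminate_instances(InstanceIds=ids)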
Also they have lots of stuff you simply don't get with Hetzner. And support is kind of barebones at Hetzner, their network security is not great, and if you need to be reachable worldwide, they are not an option.
If you know what you are doing they can still be a great option. But you'll need to compensate for what they are not doing with expensive devops work. People don't tend to count this, but it is by far the most expensive factor in operations these days. You might be paying pennies for the hardware, but having a single senior devops person stuck doing that work will set you back around $10K/month. And one month is of course not going to be the end of it. Doing things on Hetzner means investing in that kind of stuff. You basically get to reinvent a lot of wheels that come off the shelf in AWS.
Some competition would be nice though. AWS is overcharging because they have very few real competitors and they tend to charge similar amounts. Azure and Google cloud are also not that cheap if you need fast machines.
I think it could make sense if your traffic has a large difference between peak and trough, and you can reliably and automatically scale your system up and down throughout the day; and even more if you can make use of spot pricing. Bare metal makes a lot of sense when you're using the capabilities of a full machine (or several), but provisioning isn't usually fast enough to deal with short term load management.
But at an 8:1 price, why not just over-provision by 4x and save 50% of your HW budget? How many people really have situations where the difference between peak and trough is that extreme? It's way better to have the capacity sitting idle than to be scaling it up and down, not to mention it will help smooth out tail latency to be over-provisioned by that much too.
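The arithmetic behind that, taking the roughly 8:1 unit-price ratio from upthread as an assumption:

    # Over-provisioned bare metal vs. cloud capacity sized to average load.
    # The 8:1 unit-price ratio and 1-unit average load are assumptions from upthread.
    cloud_price_per_unit = 8.0   # relative cost of one unit of cloud capacity
    metal_price_per_unit = 1.0   # relative cost of one unit of bare-metal capacity

    avg_load_units = 1.0
    cloud_bill = avg_load_units * cloud_price_per_unit         # autoscaled to average load
    metal_bill = (avg_load_units * 4) * metal_price_per_unit   # over-provision 4x for peaks

    # -> 50%: you still spend half as much while never scaling anything.
    print(f"bare metal at 4x over-provision costs {metal_bill / cloud_bill:.0%} of the cloud bill")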
Autoscaling groups on AWS are very real, very old, and very widely used.
The Amazon Retail site scales its machines up and down. Netflix built a tool called "scryer" to guess when to spin up and down instances more accurately [0].
Jenkins has a series of plugins for provisioning workers on-demand, and multiple companies I've worked at have used that plugin (or a variant of it) to spin up CI workers when needed (during the work day usually) and shut them down when not needed.
Clearly this is something that real companies do. If you wish to spend some time googling on how people use Autoscaling groups, you'll find many other examples.
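For the scheduled case (like the CI-worker pattern mentioned above), a minimal sketch -- the group name, sizes, and schedules are placeholder assumptions, not anything a particular company uses:

    # Scale an Auto Scaling group up for the work day and back down in the evening.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="ci-workers",        # placeholder group name
        ScheduledActionName="scale-up-workday",
        Recurrence="0 8 * * 1-5",                 # 08:00 UTC, weekdays
        MinSize=4,
        MaxSize=20,
        DesiredCapacity=8,
    )

    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="ci-workers",
        ScheduledActionName="scale-down-evening",
        Recurrence="0 20 * * 1-5",                # 20:00 UTC, weekdays
        MinSize=0,
        MaxSize=20,
        DesiredCapacity=0,
    )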
We have a peak where we use 16 instances for about two hours for the throughput we want and a trough where we only use one. The T2 instances are really cheap. This is a back end ETL message processing job.
We also do an immutable deployment for our web stack - we spin up a completely new stack (VMs, autoscaling groups, load balancers, etc.), test it, slowly move users over to it, and then kill the old stack.
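One common way to do that cut-over is weighted DNS; a rough sketch with Route 53 weighted records (the zone ID, hostnames, and weights are placeholders, and this isn't necessarily how we actually do it):

    # Shift a slice of traffic from the old (blue) stack to the new (green) one
    # via Route 53 weighted records. Zone ID, names, and ELB targets are placeholders.
    import boto3

    r53 = boto3.client("route53")

    def set_weight(identifier, weight, target_dns_name):
        r53.change_resource_record_sets(
            HostedZoneId="Z0000000000000",           # placeholder hosted zone
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": identifier,     # "blue" (old) or "green" (new)
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": target_dns_name}],
                },
            }]},
        )

    # Start with 10% of traffic on the new stack, ramp up, then retire the old one.
    set_weight("green", 10, "new-stack-elb.example.amazonaws.com")
    set_weight("blue", 90, "old-stack-elb.example.amazonaws.com")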
That doesn't even include all of the managed services we use, where we don't have to worry about managing the underlying servers.
Reserved pricing is ~$900/mo for a 1-year contract or ~$600/mo for a 3-year term, so against Hetzner the 3-year term is basically 3:1. It is annoying that they don't abstract this better to make it less complicated to sign up for.
The pricing seems to be around 10% cheaper than the equivalent Intel based servers (m5a.4xlarge $0.688 per Hour vs m5.4xlarge $0.768 per Hour, similar for other instance sizes). I was expecting AMD servers to be somewhat cheaper, but given that the server CPU is only one of many components I guess the savings can't be that much more.
Would be interesting to see how the actual performance per $ is different.
Exciting to have AMD on AWS though. Still far more expensive than rolling your own hardware or getting bare metal servers.
I wonder, how much effort does that entail? Is this some TDP and firmware tweaks, or is it actually different silicon? If the latter, that sounds like a reasonably big bet that there will be a lot of AMD chips sold to AWS.
I've heard on the grapevine that nearly all of the large customers ask for (and receive) custom silicon features. The complete set of these special features technically exists on all these chips as they share the same mask, but the different private SKUs have their special features fused/lasered off or hidden behind an MSR knob like all other binning.
EDIT: These features tend to be stuff like first-class connections to in-house security chips and the like. It's more about integration with their systems than any cool feature or instruction that only "special" customers are cool enough to get.
Intel may have originally designed Xeon D for Facebook, but now it's available to everyone and the Xeon D that you buy is the same silicon as the Xeon D that Facebook buys.
Off-topic, but does anyone have per-second stock data for AMD from minute 13 through minute 17?
I only see the press release timestamped at minute 15:00 everywhere, and in minute 15 the stock also rose 6-7%. I'm curious how fast the algorithms work and/or whether early press releases go out to a private club :)
Very strange that none of these processors are rated with ECUs in the pricing table. Is AWS still working on benchmarking these, or is this an extension of whatever agreement AWS/AMD has reached?
DO has compute optimized droplets. I tried running aircrack, and I got really good performance (10k/sec on 16vCPU, vs 13k/sec on EC2 32 core dedicated).
Unfortunately, EC2 was still 8x cheaper because of spot pricing. 2 hours cost me $4 on DO but $0.50 on EC2 spot instances.
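Putting those numbers together, keys tested per dollar (rates and prices as reported above; spot prices fluctuate, so the EC2 figure is a snapshot):

    # Keys tested per dollar, using the rates and prices reported above.
    do_rate, do_cost = 10_000, 4.00     # keys/sec on a 16 vCPU droplet, $ for 2 hours
    ec2_rate, ec2_cost = 13_000, 0.50   # keys/sec on a 32-core dedicated spot instance

    seconds = 2 * 3600
    do_keys_per_dollar = do_rate * seconds / do_cost
    ec2_keys_per_dollar = ec2_rate * seconds / ec2_cost

    print(f"DO:  {do_keys_per_dollar:,.0f} keys/$")
    print(f"EC2: {ec2_keys_per_dollar:,.0f} keys/$ (~{ec2_keys_per_dollar / do_keys_per_dollar:.0f}x better)")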
Interestingly, a DO sales guy emailed me, more than once, because I used a compute droplet for 2 hours. They must be hard up for leads! Strange, he didn't get back to me when I told him what I was using it for.
Hi, a developer on the EC2 team here: the underlying CPU topology, in terms of which logical processors share L3 caches (i.e., which CPUs are part of the same CCX [1]) is provided to the instance's operating system through ACPI tables and CPUID values. The m5a.24xlarge and r5a.24xlarge instances show two sockets, six NUMA nodes, and 12 L3 cache slices.
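If you want to see that layout from inside an instance, the L3 sharing is visible through sysfs on Linux; a small sketch (index3 is conventionally the L3 cache on x86):

    # Group logical CPUs by the L3 cache slice they share, as exposed by the
    # kernel from the ACPI/CPUID topology described above (Linux sysfs only).
    import glob

    l3_groups = set()
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index3/shared_cpu_list"):
        with open(path) as f:
            l3_groups.add(f.read().strip())

    print(f"{len(l3_groups)} distinct L3 cache slices")
    for group in sorted(l3_groups):
        print("  CPUs sharing one L3:", group)

On the 24xlarge sizes described above, this should print 12 groups, matching the 12 L3 cache slices.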
Can you share more details about this CPU?
The page mentions a custom AMD EPYC, and indeed, your topology suggests this is not a standard 24-core CPU. A standard 24-core EPYC would have 3 enabled cores per CCX and 4 zeppelins (NUMA nodes) per CPU.
I guess it isn't updated. I was looking for pricing of the new t3 instances but it says pricing not available (which kinda defeats the whole purpose of sorting by price)
[1] https://aws.amazon.com/ec2/pricing/on-demand/
[2] https://www.hetzner.de/dedicated-rootserver/matrix-ax