New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances (amazon.com)
244 points by jeffbarr on Nov 6, 2018 | 116 comments

Just a friendly reminder that AWS pricing is still insanely high compared to dedicated bare metal servers. On AWS you'd pay around $1500/mo for 24 cores ("m5a.12xlarge")[1] while e.g. Hetzner offers a 24 core AMD bare metal server for $190/mo[2].

Also consider that on AWS you pay for traffic on top of that, at prices that are even more insane. 100 TB, while free with Hetzner, would cost you somewhere around $9,000 on AWS.
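Back-of-envelope, using the numbers quoted above (treat all prices as this commenter's approximations, not current list prices):

```python
# Rough comparison of the figures quoted in the comment above.
AWS_M5A_12XLARGE_MONTHLY = 1500   # USD/mo, 24-core on-demand (approx.)
HETZNER_AX_MONTHLY = 190          # USD/mo, 24-core bare metal (approx.)
AWS_EGRESS_PER_TB = 90            # USD, assuming a blended ~$0.09/GB rate

compute_ratio = AWS_M5A_12XLARGE_MONTHLY / HETZNER_AX_MONTHLY
egress_100tb = 100 * AWS_EGRESS_PER_TB  # free at Hetzner

print(f"compute premium: ~{compute_ratio:.1f}x")  # ~7.9x
print(f"100 TB egress on AWS: ~${egress_100tb}")  # ~$9000
```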

[1] https://aws.amazon.com/ec2/pricing/on-demand/

[2] https://www.hetzner.de/dedicated-rootserver/matrix-ax

EC2 on-demand/hourly to Hetzner monthly pricing is not really an apples-to-apples comparison. EC2 reserve pricing is more similar and about 1/2 or less of on-demand fees. Hetzner is still cheaper, and if all you need is a low cost dedicated server w/local disks and bandwidth, this is a great choice - but AWS provides an entire ecosystem of services including configurable EBS and S3 storage, as well as more diverse and scalable purchase/provisioning options.

Having a large ecosystem of services should be a reason AWS is cheaper since on average you'll spend money on those integrations -- not a reason to pay a premium.

Increased complexity leads to increased overhead costs. A bare metal instance is going to be cheaper to maintain than a full-service instance that interfaces with a robust ecosystem (even if that instance doesn't use any of it).

Doesn't this discount the considerable economy-of-scale advantage that Amazon should have?

Not when Amazon's scaling benefits are used to mostly benefit themselves.

Well that's a different point. Complexity, economies of scale, and high-prices charged by an incumbent market-leader, are three different matters.

You're assuming the bare metal instance doesn't nearly have the same economies of scale.

At ~10x the price, hourly billing is only competitive if you have extreme traffic spikes. Reserved pricing costs less, but it's still ~4x the price of pure hardware.

You're forgetting geographical availability. Where is their DC in Sydney or Brisbane, Australia? How about Singapore? Or Japan?

Unless you're in Germany, Germany, or, perhaps, Germany (or Finland) then you're adding a lot (yes, a lot) of latency to network requests for anyone outside of Germany and especially anyone on another landmass.

Germany and Europe are smaller than you seem to believe. The coverage of AWS in Europe is in fact quite limited:


I live in Australia. I'm from England. I understand how big Europe is, thanks.

And you missed my point: a server in Germany, despite being cheap, is no good to the communities I host for Australian businesses.

Hetzner is excellent. I've used them and OVH for years. No good for me or my clients here in Australia, though.

OVH has POPs all over despite being a value server provider. You get quite a bit for how little you pay, plus they have working IPv6 (really important for mobile networks) unlike the major cloud vendors.

From experience, OVH's IPv6 is sadly a bit... broken. They don't send proper router advertisements on the network, which confuses the hell out of any proper firewall or router you try to set up for it.

OVH does have its quirks (eg: pushing systemd-networkd and not using router advertisements) but their network is significantly better if you need to avoid the IPv4 CGNAT that cellphones sit behind, as IPv6 is the only means of doing so (long-lived connections, 20ms or so latency savings, skip Firebase messaging, etc).

Other issues we ran into with AWS include congested peering with Centurylink & most Asian Pacific countries. Having gigabit via Centurylink and only being able to fill 2MB/s from AWS is crummy, esp. when competitors can reliably push many multiples of that.

OVH is also quite inexpensive and in our experience is a bit less problematic than Hetzner.

With Hetzner you get a lot of power really cheap but you also get fairly minimal tech support. We had a lot of issues with instances being randomly blocked due to services' traffic being miscategorized as a "port scan" and were able to get nowhere with their tech support so we had to go to OVH. Haven't had any similar issues at OVH or anywhere else for that matter.

In the past I had the exact same issue, with a server's network port blocked for several days by OVH... Also with DO. The only place where this never happened for me is AWS.

Hetzner is cheap, but I'd be reluctant to run any business-critical stuff on it. I wish DigitalOcean would launch baremetal hardware. Even at a 100% markup compared to Hetzner, it'd be worth it for the peace of mind for better support/hardware.

Vultr has something like that.


What do you think about Packet.net?

It seems great! I just wish the prices were slightly better.

$720 per month, with 64GB less Memory.

Nearly 4x.

What's the catch? If Hetzner were that good at that cheap with no catches, why haven't they steamrolled AWS, Packet, DO, and everyone else?

There are multiple "catches" here.

1. Ease of use -- Bare metal servers are harder to use than VMs; you can't trivially snapshot your bare metal server, you can't trivially clone it, there aren't tons of AMIs available. Hetzner has fewer UX designers and has less control of the platform to provide a better UX / console experience / onboarding experience.

2. Cost/availability of personnel -- More people know AWS than how to PXE boot machines. Sure, you might not actually have to do complicated stuff for Hetzner's bare metal, but it's still more complicated. You might have to get some real ops people (rather than just saying the word devops and pretending everything is okay).

3. Other cloud services (ELB, RDS, etc) -- Sure, you can connect a hetzner machine to an RDS endpoint, but it's slower and harder than just using the AWS ecosystem. People want S3, ELB, RDS... They don't want to hire 5 more guys to run Ceph, HAProxy, and database clusters.

4. Ecosystem of tools -- People have used AWS and its APIs for so long that you can find pre-built tooling to do all sorts of things, from libraries to make lambda functions, to terraform modules to manage machines. I don't even know if Hetzner has a real API. It certainly has a smaller ecosystem.

5. Mindshare -- The cheaper/better service doesn't actually steamroll people who have mindshare / brand recognition in every case, so even if hetzner were better, AWS already might have enough critical mass.

6. Easier "region expansion" -- With AWS, if you want to reduce latency to some customers for application servers, moving between regions is trivial. Hetzner, not so much. Also, hetzner's networking is qualitatively worse for reaching other services (most of which are on AWS).

I think the main issue here is that people are comparing two things that can't really be compared. AWS includes S3, ELBs, RDS, etc. Hetzner does not help. AWS offers robust APIs and features. Hetzner, not so much.

AWS has brand recognition, and no one will get fired for going with AWS.

I know how to run my own servers, I know how to use AWS. For a startup that had a focus on software, not hardware, I'd still absolutely go with AWS because I know for a fact the ecosystem of services and tools, coupled with the ease of finding more people who know AWS, make it the much safer choice.

> AWS has brand recognition, and no one will get fired for going with AWS.

Now replace "AWS" with "IBM"...

Funny how the exact same thought processes that used to happen on big lazy enterprise corporations now translate to all those startups that used to mock the big lazy enterprise corporations for this sort of thinking.

Well, seeing as the executives who chose IBM in the '80s were right: you can still buy new hardware from IBM that is compatible with their systems from the '80s. Their competitors, not so much.

A lot of Burroughs stuff is still fairly well supported iirc?

Unisys just uses emulation in their newest machines.

That would seem to probably be good enough? The users probably mostly just need their systems to just go on working with no alterations.

It would be funny if I agreed with you about who is thinking these thoughts, but I'm pretty sure this line of thought is exclusive to employees at medium / large corporations, not startups.

At a startup, these decisions are often made either collectively or from the top. In either case, no one is getting fired for picking something other than AWS, and startups are more willing to take technology risks.

I'd imagine it's because people aren't spending their own money. The pattern I see: use AWS at work, use DigitalOcean/Linode/Hetzner for hobby projects when $$ is coming out of your own wallet.

No one ever got fired buying IBM, err... AWS

But the company may fold from the usurious AWS charges...

I have literally seen this happen. Non-technical management demanded forklifting everything into the cloud because "global availability" and "no more wasted time with in-house data centers." The company hit a cash hole due to other issues it could have survived if it weren't staring down a monster AWS bill.

Owning your core infrastructure might not be hip and cool, but it sure can make or break a company. Never mind that you're at the mercy of your vendor when it comes to billing, features, maintenance windows, and much more.

The catch is marketing. Most people don't make decisions based on what is universally better for them; they make decisions based on the very limited information they have, and rely heavily on the people they know as well. All of this means that usually the companies/products/services that are the most well known will take the biggest share of the market, pretty much regardless of anything else (unless their offering completely sucks, but then they probably wouldn't be able to spend as much on marketing). So it's not about price, and it's not about features or quality.

Very hard to compete with Amazon.

However, Hetzner has apparently been around for 21 years, as I've learned. They actually started doing this before Amazon. It's possible that they're just massively underappreciated, or people had reasons to pick something else over that extended period of time.

A lot of discussion here has pointed out that most people would rather have easy access to managed services from AWS (or GCP, Azure, etc.), rather than hiring engineers to manage and operate their own in-house copies of those services, and I tend to agree.

I had never heard of Hetzner before reading this thread on HN. Been using AWS for almost a decade. Most people I know are using either AWS or Google Cloud. A lot use Linode or Digital Ocean for personal projects.

Hetzner is now at about the same place where Digital Ocean was 1 year after launching.

The difference being they’ve already launched separate storage provisioning.

I'm probably not seeing the whole picture here, but location might be part of it.

Here in the UK, especially with Brexit looming, many of us insist on UK-based servers, which Hetzner don't seem to offer.

The hardware is identical.

Does AWS have better support? Only if you pay.

At 100% markup you can replicate everything 10x at Hetzner and still be ahead...

The hardware isn't identical. Hetzner is known for having more drive failures than is typical as a result of replacing failing drives with "refurbished" ones.

You can ask for a fresh drive, IIRC that will cost you about 10€ or so for the replacement in that case.

Though I've largely gotten fresh drives from Hetzner so far, only one that had been worn in a bit.

Please provide evidence that Hetzner has more drive failures than AWS?

Back your data with Backblaze.

TBH, that's a really lousy alternative. :/ EBS is so, so useful and reduces headaches when running a system by an amount that it's almost by itself worth using AWS (or equivalent).

Depends on your use case, but I agree Hetzner needs EBS/iSCSI equivalent storage. I'm allergic to marrying yourself to AWS (or any ecosystem for that matter).

I have several hetzner snapshot-enabled internal network samba mounts (they offer this as a separate service)

Sure - you do get basically the same thing with Azure or GCP, though.


And then you have only one data center.

It’s not just about your data. You need to have your entire stack in a different region.

a 100% markup = 2x cost, not 10x cost, right?

But it's not necessarily about the hardware of a single server. I know nothing about Hetzner. I know that AWS puts a ton of thought and money into their networking and power infrastructure, but how trustworthy is {random competitor}'s infrastructure? Is there a catch with Hetzner, or how are they this cheap?

I have a lot of experience with DigitalOcean's services. If they offered bare metal, that would be cool, as whitepoplar said. Packet is a bare-metal AWS competitor that I know a lot more about and trust quite a bit. None of this means you're going to trust my experiences.

Of course not identical... I am only guessing, but AWS must have tons of in-house chips tailored for networking/routing, let alone the design of their data centers...

I've used both. Hetzner definitely has attractive pricing, and Amazon tends to charge extremely high amounts for relatively slow machines. For reference, one or two months of usage basically pays for the hardware cost already with a lot of these instances. The profit margins on this stuff must be insane for Amazon, because they run these machines for years non stop.

However being able to pay for these machines by the minute and spin them up and down in minutes is a level of flexibility you don't have in Hetzner. You could spin up a few dozen of these instances, run some expensive batch jobs and shut them down in 40 minutes. There's no need to have them idling while you are not using them.
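The burst-vs-idle arithmetic can be sketched like this. The $2.064/hr figure is an assumption (roughly m5a.12xlarge on-demand at launch); the point is the ratio, not the exact price:

```python
# Per-minute billing: run a fleet for a short batch job vs. keeping it
# idle all month. The hourly rate is an illustrative assumption.
hourly = 2.064   # $/hr per instance (assumed on-demand rate)
instances = 24
minutes = 40

burst_cost = instances * hourly * (minutes / 60)
idle_month = instances * hourly * 24 * 30  # same fleet running all month

print(f"one 40-minute batch run: ${burst_cost:.2f}")
print(f"fleet idling all month:  ${idle_month:,.0f}")
```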

Also they have lots of stuff you simply don't get in Hetzner. And, support is kind of barebones in Hetzner, their network security is not great, and if you need to be reachable world wide, they are not an option.

If you know what you are doing they can still be a great option. But you'll need to compensate for what they are not doing with expensive devops work. People don't tend to count this but this is by far the most expensive factor in operations these days. You might be paying pennies for the hardware but having a single senior devops person stuck doing the devops work would set you back around 10K/month. And one month is of course not going to be the end of this. Doing things in Hetzner means investing in that kind of stuff. You basically get to reinvent a lot of wheels that you get to use off the shelf in AWS.

Some competition would be nice though. AWS is overcharging because they have very few real competitors and they tend to charge similar amounts. Azure and Google cloud are also not that cheap if you need fast machines.

Non-snarky question: can I get a Hetzner server provisioned via API? (I've never tried.)

Yes. They even support terraform, last I checked!

Terraform has support for hetzner cloud[1], not sure if that's the same as bare metal.

[1] https://www.terraform.io/docs/providers/hcloud/index.html

> not sure if that's the same as bare metal

Hetzner Cloud is more like AWS EC2, if you choose dedicated vCPU, and more like VPS, if you don't.

If it were a 3:1 ratio or even a bit higher there are situations where that would still be the right choice. But I’m having trouble justifying 8:1.

I think it could make sense if your traffic has a large difference between peak and trough, and you can reliably and automatically scale your system up and down throughout the day; and even more if you can make use of spot pricing. Bare metal makes a lot of sense when you're using the capabilities of a full machine (or several), but provisioning isn't usually fast enough to deal with short term load management.

But at an 8:1 price, why not just over-provision by 4x and save 50% of your HW budget? How many people really have situations where the difference between peak and trough is that extreme? It's way better to have the capacity sitting idle than to be scaling it up and down, not to mention it will help smooth out tail latency to be over-provisioned by that much too.
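The break-even behind the 8:1 argument is simple to state: at a cloud-to-bare-metal price ratio of r, autoscaling only wins when your average load is below peak/r. Illustrative numbers only:

```python
# At price ratio r, cloud cost is r * average_load while bare metal
# cost is 1 * peak (you pay for peak capacity around the clock).
price_ratio = 8    # cloud costs 8x per unit of capacity (assumed)
peak = 100         # peak capacity units needed

break_even_avg = peak / price_ratio  # cloud wins below this average load

print(f"cloud wins only if average load < {break_even_avg} of {peak} units")
```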

Because maybe you have a 1M:1 peak/idle traffic ratio, for example if customers run once-daily batch jobs.

If you run all your customer's daily batch jobs at the same time, I hope you charge extra for that time slot.

I'd simply spread out these jobs over time and have customers pay to get specific timeslots, which lets you cover the costs of the bare-metal servers.

And when not in direct use, you can surely find something else for the server to run on its CPU (or simply put it in standby and do WoL when needed).

It's not that uncommon actually.

E-commerce, viral, social media, news are all examples.

Is this a real thing or an urban legend? Does anyone have real examples?

Autoscaling groups on AWS are very real, very old, and very widely used.

The Amazon Retail site scales its machines up and down. Netflix built a tool called "scryer" to guess when to spin up and down instances more accurately [0].

Jenkins has a series of plugins for provisioning workers on-demand, and multiple companies I've worked at have used that plugin (or a variant of it) to spin up CI workers when needed (during the work day usually) and shut them down when not needed.

Clearly this is something that real companies do. If you wish to spend some time googling on how people use Autoscaling groups, you'll find many other examples.

[0]: https://medium.com/netflix-techblog/scryer-netflixs-predicti...

But I suppose it's still not cheaper than just renting enough Hetzner servers for the peak?

We have a peak where we use 16 instances for about two hours for the throughput we want and a trough where we only use one. The T2 instances are really cheap. This is a back end ETL message processing job.

We also do an immutable deployment for our web stack - we spin up a completely new stack - VMs, autoscaling groups, load balancers, etc. test it, slowly move users over to it and then kill the old stack.

That doesn’t even include all of the managed services we use and don’t have to worry about managing the underlying servers.

reserved pricing is ~$900 for 1 year contract or $600 for 3 years. So the 3 year term is basically 3:1. It is annoying that they don't abstract this better to make it less complicated to sign up for.

Agreed!! Insanely high! If you use a lot of bandwidth, AWS doesn't make sense at all.

I'm getting 10x cheaper bandwidth solutions on bare metal, 8x cheaper on digital ocean

Yes, you are right - Hetzner is much cheaper than AWS. But that raises the question: why is AWS so much more popular than Hetzner?

The pricing seems to be around 10% cheaper than the equivalent Intel based servers (m5a.4xlarge $0.688 per Hour vs m5.4xlarge $0.768 per Hour, similar for other instance sizes). I was expecting AMD servers to be somewhat cheaper, but given that the server CPU is only one of many components I guess the savings can't be that much more. Would be interesting to see how the actual performance per $ is different.
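Checking the "around 10%" figure from the on-demand prices quoted above:

```python
# Discount implied by the quoted on-demand prices (us-east-1 at launch).
m5_4xl = 0.768    # $/hr, Intel m5.4xlarge
m5a_4xl = 0.688   # $/hr, AMD m5a.4xlarge

discount = (m5_4xl - m5a_4xl) / m5_4xl
print(f"AMD discount: {discount:.1%}")  # ~10.4%
```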

Exciting to have AMD on AWS though. Still far more expensive than rolling your own hardware or getting bare metal servers.

> custom AMD EPYC processors

I wonder, how much effort does that entail? Is this some TDP and firmware tweaks, or is it actually different silicon? If the latter, that sounds like a reasonably big bet that there will be a lot of AMD chips sold to AWS.

I've heard on the grapevine that nearly all of the large customers ask for (and receive) custom silicon features. The complete set of these special features technically exists on all these chips, as they share the same mask, but the different private SKUs have their special features fused/lasered off or hidden behind an MSR knob, like all other binning.

EDIT: These features tend to be stuff like first-class connections to in-house security chips and the like. It's more about integration with their systems than any cool feature or instruction that only "special" customers are cool enough to get.

Wired did an article about how Intel does this https://www.wired.com/2013/05/facebook-and-intel/

It's never different silicon.

It sometimes is if the customer is buying chips in large enough quantities https://www.wired.com/2013/05/facebook-and-intel/

Intel may have originally designed Xeon D for Facebook, but now it's available to everyone and the Xeon D that you buy is the same silicon as the Xeon D that Facebook buys.

Likely with different microcode.

"AMD Next Horizon Live Blog" ( about ZEN2 ) : https://www.anandtech.com/show/13547/amd-next-horizon-live-b...

Sounds like they are doing 7nm chips with TSMC, if true they’ll surely overtake Intel in terms of performance? Wish I’d have bought AMD now...

They are still down $10 - $12 from where they were last month (aka 30%). I bought a lot at ~$2, but just bought more.

Offtopic, but does anyone have the stock data / second of AMD on minute 13 till minute 17.

I only see the press release at minute 15:00 everywhere*, and in minute 15 the stock also rose 6-7%. I'm curious about "how fast" the algorithms work and/or whether early press releases are possible for a private club :)

* One minor exception: this one is published at minute 14, but that is probably another issue with the website itself ( https://www.marketwatch.com/press-release/aws-introduces-new... )

Algorithms will usually use real-time data. For that, you're really comparing microseconds (or even nanoseconds).

You can't get that data for free though.

I don't want microseconds, i want to see seconds and how the markets perform in general.

I can't compete in microseconds (ever), i can in seconds :p

Price delayed by 15 minutes. Need to pay for realtime data.

Very strange that none of these processors are rated with ECUs in the pricing table. Is AWS still working on benchmarking these, or is this an extension of whatever agreement AWS/AMD has reached?


Step 0: Amazon adds AMD servers which are cheaper than equivalent Intel ones.

Step 1: People prefer these cheaper instances.

Step 2: Amazon AI notices the demand for AMD, goes back to Step 0.


How the mighty have fallen, if simple counting is sold as AI...

(Nothing against you, obviously, it's just a weird trend that's been going around because AI is the current buzzword.)

"AI" is the name we give to algorithms we don't understand.

However, in this case, if basic accounting is an algorithm we don't understand, then we've lost our way.

Happy to see something like the old Opteron days. Competition is good.

Any idea how the compute of a m5a.large would compare to a c5.large? They are almost identical in price, but the m5a instances have double the memory.

I can't seem to find a way to launch these instances through the AWS console. Does anyone else see them yet?

Only in selected regions at the moment. I can see the m5a and r5a instances available in the console from the Ireland and Virginia regions.

So there's Memory Optimized, General Purpose... but there's no Compute Optimized? What gives?

DO has compute optimized droplets. I tried running aircrack, and I got really good performance (10k/sec on 16vCPU, vs 13k/sec on EC2 32 core dedicated).

Unfortunately, EC2 was still 8x cheaper because of spot pricing. 2 hours cost me $4 on DO but .50 on EC2 spot instances.
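Putting this anecdote's numbers (crack rates and costs as stated in the comment, not independently verified) on a per-dollar basis:

```python
# Throughput per dollar from the figures quoted above.
do_rate, do_cost = 10_000, 4.00      # keys/sec, $ for 2 hours on DO
ec2_rate, ec2_cost = 13_000, 0.50    # keys/sec, $ for 2 hours on EC2 spot

do_per_dollar = do_rate / do_cost
ec2_per_dollar = ec2_rate / ec2_cost
print(f"EC2 spot delivers {ec2_per_dollar / do_per_dollar:.1f}x "
      f"more throughput per dollar")  # ~10.4x
```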

Interestingly, a DO sales guy emailed me, more than once, because I used a compute droplet for 2 hours. They must be hard up for leads! Strange, he didn't get back to me when I told him what I was using it for.

I guess there's less demand for that?

I would like m5da, too.

How are they managing 96 vCPUs? Is this a new custom quad socket board or is it based on HT?

Whatever AMD's version of that is called, yes — AWS has historically always been 1 vCPU = 1 hardware thread, so pretty much half a HT-enabled core.

Simultaneous Multi Threading (SMT) is the generic term for Hyper Threading

A vCPU is a logical core, not a physical core. So yes, based on HT.

Still, EPYC processors have up to 32 cores, 64 threads; where is 96 coming from?

They're not necessarily using 32 core chips. That could just as easily be a 2x24 board.
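The 96-vCPU count is consistent with that guess, since EC2 counts 1 vCPU per hardware thread:

```python
# A dual-socket board of 24-core parts with SMT enabled yields 96
# hardware threads, i.e. 96 vCPUs on EC2 (1 vCPU = 1 hardware thread).
sockets = 2
cores_per_socket = 24
threads_per_core = 2  # SMT

vcpus = sockets * cores_per_socket * threads_per_core
print(vcpus)  # 96
```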

Hi, a developer on the EC2 team here: the underlying CPU topology, in terms of which logical processors share L3 caches (i.e., which CPUs are part of the same CCX [1]) is provided to the instance's operating system through ACPI tables and CPUID values. The m5a.24xlarge and r5a.24xlarge instances show two sockets, six NUMA nodes, and 12 L3 cache slices.

  [ec2-user@ip-10-0-0-120 ~]$ likwid-topology 
  Socket 0:		( 0 48 1 49 2 50 3 51 4 52 5 53 6 54 7 55 8 56 9 57 10 58 11 59 12 60 13 61 14 62 15 63 16 64 17 65 18 66 19 67 20 68 21 69 22 70 23 71 )
  Socket 1:		( 24 72 25 73 26 74 27 75 28 76 29 77 30 78 31 79 32 80 33 81 34 82 35 83 36 84 37 85 38 86 39 87 40 88 41 89 42 90 43 91 44 92 45 93 46 94 47 95 )
  Cache Topology
  Level:			1
  Size:			32 kB
  Cache groups:		( 0 48 ) ( 1 49 ) ( 2 50 ) ( 3 51 ) ( 4 52 ) ( 5 53 ) ( 6 54 ) ( 7 55 ) ( 8 56 ) ( 9 57 ) ( 10 58 ) ( 11 59 ) ( 12 60 ) ( 13 61 ) ( 14 62 ) ( 15 63 ) ( 16 64 ) ( 17 65 ) ( 18 66 ) ( 19 67 ) ( 20 68 ) ( 21 69 ) ( 22 70 ) ( 23 71 ) ( 24 72 ) ( 25 73 ) ( 26 74 ) ( 27 75 ) ( 28 76 ) ( 29 77 ) ( 30 78 ) ( 31 79 ) ( 32 80 ) ( 33 81 ) ( 34 82 ) ( 35 83 ) ( 36 84 ) ( 37 85 ) ( 38 86 ) ( 39 87 ) ( 40 88 ) ( 41 89 ) ( 42 90 ) ( 43 91 ) ( 44 92 ) ( 45 93 ) ( 46 94 ) ( 47 95 )
  Level:			2
  Size:			512 kB
  Cache groups:		( 0 48 ) ( 1 49 ) ( 2 50 ) ( 3 51 ) ( 4 52 ) ( 5 53 ) ( 6 54 ) ( 7 55 ) ( 8 56 ) ( 9 57 ) ( 10 58 ) 
  ( 11 59 ) ( 12 60 ) ( 13 61 ) ( 14 62 ) ( 15 63 ) ( 16 64 ) ( 17 65 ) ( 18 66 ) ( 19 67 ) ( 20 68 ) ( 21 69 ) ( 22 70 ) ( 23 71 ) ( 24 72 ) ( 25 73 ) ( 26 74 ) ( 27 75 ) ( 28 76 ) ( 29 77 ) ( 30 78 ) ( 31 79 ) ( 32 80 ) ( 33 81 ) ( 34 82 ) ( 35 83 ) ( 36 84 ) ( 37 85 ) ( 38 86 ) ( 39 87 ) ( 40 88 ) ( 41 89 ) ( 42 90 ) ( 43 91 ) ( 44 92 ) ( 45 93 ) ( 46 94 ) ( 47 95 )
  Level:			3
  Size:			8 MB
  Cache groups:		( 0 48 1 49 2 50 3 51 ) ( 4 52 5 53 6 54 7 55 ) ( 8 56 9 57 10 58 11 59 ) ( 12 60 13 61 14 62 15 63 ) ( 16 64 17 65 18 66 19 67 ) ( 20 68 21 69 22 70 23 71 ) ( 24 72 25 73 26 74 27 75 ) ( 28 76 29 77 30 78 31 79 ) ( 32 80 33 81 34 82 35 83 ) ( 36 84 37 85 38 86 39 87 ) ( 40 88 41 89 42 90 43 91 ) ( 44 92 45 93 46 94 47 95 )
  NUMA Topology
  NUMA domains:		6
  Domain:			0
  Processors:		( 0 48 1 49 2 50 3 51 4 52 5 53 6 54 7 55 )
  Distances:		10 16 16 32 32 32
  Free memory:		63028.4 MB
  Total memory:		63291.6 MB
  Domain:			1
  Processors:		( 8 56 9 57 10 58 11 59 12 60 13 61 14 62 15 63 )
  Distances:		16 10 16 32 32 32
  Free memory:		63202.6 MB
  Total memory:		63375.1 MB
  Domain:			2
  Processors:		( 16 64 17 65 18 66 19 67 20 68 21 69 22 70 23 71 )
  Distances:		16 16 10 32 32 32
  Free memory:		63171.3 MB
  Total memory:		63375.1 MB
  Domain:			3
  Processors:		( 24 72 25 73 26 74 27 75 28 76 29 77 30 78 31 79 )
  Distances:		32 32 32 10 16 16
  Free memory:		63322.8 MB
  Total memory:		63375.1 MB
  Domain:			4
  Processors:		( 32 80 33 81 34 82 35 83 36 84 37 85 38 86 39 87 )
  Distances:		32 32 32 16 10 16
  Free memory:		63318.7 MB
  Total memory:		63375.1 MB
  Domain:			5
  Processors:		( 40 88 41 89 42 90 43 91 44 92 45 93 46 94 47 95 )
  Distances:		32 32 32 16 16 10
  Free memory:		63317.7 MB
  Total memory:		63374.1 MB
[1] https://en.wikichip.org/wiki/amd/microarchitectures/zen#CPU_...

Can you share more details about this CPU? The page mentions a custom AMD EPYC, and indeed, your topology suggests this is not a standard 24-core CPU. A standard 24-core EPYC would have 3 enabled cores per CCX and 4 zeppelins (NUMA nodes) per CPU.

The "Rome" EPYC processors have 64 cores and 128 threads and with a dual processor board you can have 128 cores 256 threads in a 1U server.

SuperMicro has systems with 2 dual-socket systems in a 1u form factor[0], so 256c/512t per rack unit is not that far fetched.

[0] For intel scalable: https://www.supermicro.com/products/system/1U/1029/SYS-1029T...

That's just 2 servers with a shared backplane.

Two sockets.

In standard AWS fashion, there doesn’t seem to be any pricing table anywhere.

Visit the EC2 pricing table at https://aws.amazon.com/ec2/pricing/on-demand/ , choose the US East (Northern Virginia) Region, and you will find prices.

If we have Intel reserved instances like m5.xlarge, can we move to AMD instances like m5a.xlarge?

I cannot recommend http://ec2instances.info enough - it is so much easier to filter and sort instance info.

The new instances are not there yet, though.

The problem is that the site is using a deprecated JSON file to get the instances/prices, which is no longer being updated.

AWS has a new API that is getting the updates but ec2instances.info is still working on updating. More info in this ticket:


I guess it isn't updated. I was looking for pricing of the new t3 instances but it says pricing not available (which kinda defeats the whole purpose of sorting by price)

