Byte Order: Little Endian
On-line CPU(s) list: 0
Thread(s) per core: 1
Core(s) per socket: 1
NUMA node(s): 1
Vendor ID: ARM
Model name: Cortex-A72
L1d cache: 32K
L1i cache: 48K
L2 cache: 2048K
So a vCPU is an arbitrary unit, but since vCPUs are equivalent within an instance family, they can be benchmarked for the relevant workload.
You might be thinking of ECU (EC2 compute unit), which are intended to be comparable across hardware and are normalized to some old 1.7 GHz CPU that I guess was common in the early days of EC2. Amazon doesn't promote the ECU rating for instance types much, but it's still available if you look.
One vCPU is a single hardware thread, which means that on cores without SMT, one vCPU == one core.
Since the A72 doesn't have SMT, you get one core per vCPU. So in that sense, you are getting more bang for your buck if you just count physical cores: twice as many cores per vCPU...
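On Linux you can check the vCPU-to-core mapping yourself. A minimal sketch, assuming `lscpu` from util-linux (the tool that produced the output quoted above) is available:

```shell
# vCPUs (logical CPUs) vs. physical cores: equal when there is no SMT
# (as on the A1's Cortex-A72), 2:1 on hyperthreaded Xeon instances.
vcpus=$(nproc)
tpc=$(lscpu | awk -F: '/^Thread\(s\) per core/ {gsub(/[[:space:]]/, "", $2); print $2}')
echo "vCPUs: $vcpus, physical cores: $((vcpus / tpc))"
```

On an a1.xlarge this should report 4 vCPUs and 4 cores; on an m5.xlarge, 4 vCPUs and 2 cores.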
FWIW, I tried to look up the ECU rating of the new ARM instances but they are listed as "NA" in the EC2 console.
> C5 instances feature the Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo CPU clock speed of up to 3.4GHz, and single core turbo up to 3.5 GHz using Intel Turbo Boost Technology. C5 instances offer higher memory to vCPU ratio and deliver 25% improvement in price/performance compared to C4 instances, with certain applications delivering greater than 50% improvement. C5 instances provide support for the new Intel Advanced Vector Extensions 512 (AVX-512) instruction set, offering up to 2x the FLOPS per core per cycle compared to the previous generation C4 instances.
Haven't looked up details for the new stuff yet, might take some time to be up.
An AWS vCPU is a single hyperthread of a two-thread Intel Xeon core for M5, M4, C5, C4, and R4 instances. A simple way to think about this is that an AWS vCPU is equal to half a physical core. Therefore, when choosing an Amazon EC2 instance size, you should double the number of cores you have purchased or wish to deploy with.
Don't know how true it is, but there's no reason for me not to trust what Tableau has to say.
"Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances."
I’ve benchmarked c5 and t3 against t2 and have found single thread perf to be better on c5 compared to t2. When loading all vCPUs the performance suffered on the newer instances (each HT performed worse than a single t2), so you would get more bang for your buck on t2. YMMV of course.
T3 gives you both threads of your physical core and commensurably more CPU credits per hour to utilize them. You may notice that even T3.nano offers 2 vCPUs where T2.small has 1 vCPU.
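For a quick, crude single-thread comparison across instance types (no substitute for benchmarking your real workload — hash throughput only exercises one narrow code path):

```shell
# Time hashing 64 MB of zeros on one core; run on each candidate
# instance type and compare wall-clock times. Loop it for a few
# minutes to see burst vs. sustained (CPU-credit) behavior on T2/T3.
time sh -c 'dd if=/dev/zero bs=1M count=64 2>/dev/null | sha256sum'
```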
If you believe the sales pitch, it's to help you know that 2 ECU will be twice as fast as 1 ECU (without having to worry about whether a given chip has 8 MB of L2 cache or 12).
If you're more cynical, you'll probably think they do it to help hide the fact that cloud computing tends to get trounced badly across all performance aspects (raw cpu, ram, disk io and network io) while being quite a bit more expensive.
Am I living in a bubble, or are you living under a rock?
Last hard numbers I saw, Amazon was the single largest by number of servers, but OVH wasn't too far behind. There are thousands of these providers worldwide, most older than any cloud provider. Some of the more well known: OVH, Rackspace, Softlayer, Hetzner, Hivelocity, Online.net, and on and on.
Add to that people doing their own infrastructure (Apple, Facebook, Google, ...). Companies doing huge colo and on premise (banks, governments). Then consider even larger providers and telcos: AT&T (where WoW was hosted, might still be), Telefonica, Equinix (which I believe AWS was merely a customer of in Singapore at some point), ...
That still leaves VPS, shared hosting, and whatever you want to call DO/Linode/Packet/Vultr.
EDIT: I get it, I realize that the value of AWS is in the whole basket of services you can use, not just EC2, but damn do they make it unpleasant. If DigitalOcean gets more enterprise-serious and adds more hosted services, AWS really better watch out.
It's like we stopped the "cattle vs. pets" metaphor a bit early, and some people are mad that a butcher is selling steak instead of a whole cow.
(And just for the record, my team has pretty minimal AWS usage... we're not one of the cloud kool-aid shops. We buy overflow worker server capacity on EC2 spot instances, but our primary app stack runs completely on colo'd bare metal for our day to day operations. Even on our bare metal though... individual app/worker servers are disposable and persistence is provided elsewhere)
I spent a week trying to figure out why AWS kept charging me $1.36 every month. Turns out I had a reserved instance that I forgot about. But damned if any of their "cost explorer" tools can help you with that at all.
The only way to deal with it is to open the EC2 console and then switch to each region one by one until you find the one that has any running or reserved instances.
The AWS console is a joke; I have no idea why it's so popular.
I honestly don't think this matters to most serious AWS (ie paying substantial amounts of money) customers as much as it maybe matters to individuals looking for a 5 buck VPS to host a side project on. The small players like Digital Ocean/Linode/etc exist and cater more easily for this business if you want it, and it doesn't seem to be helping them make major inroads against the big three.
When you actually need specific specs at a huge scale, it's not much more complex to do it over AWS.
they also have memory-optimized or cpu-optimized VM choices.
Azure: "Introducing DTUs, the simplified, easy-to-use pricing metric for SQL Server!"
This is what Amazon has successfully done: raised the cognitive sunk cost so fucking high you need to pay for their BS certification, which I didn't even study for but somehow passed, even after arriving at the exam 30 minutes late because the city of Vancouver are such fucking master extortionists with parking tickets and somehow everybody has decided to park... I digress.
TL;DR: use DO or Linode. Don't spend a decade on AWS only to realize the greatest trick the devil ever pulled was convincing the world he doesn't exist. Well, let me tell you, friends: unshackle yourselves from AWS and use Netlify.
Learn fundamentals, apply fundamentals. It's pretty simple.
(link may redirect to localized versions)
I have no idea where you'd get the "get a quote" thing, other than corporate customers usually needing (or wanting) some hand holding and guidance given that a) most of them already have applicable discounts as part of current enterprise agreements and b) accurate quotes require more than guesswork at sizing - we take into account usage patterns, whether or not the customer can benefit from reserved instances (which significantly lower costs 1-to-3y down the road), etc.
(I'm a Microsoft FTE working on Azure solutions, and have a personal Azure account which has detailed, per-resource cost breakdowns complete with links to the public pricing pages...)
Now they will own the entire vertical stack like Apple, but unlike Apple, Amazon can live on minimal margins.
BTW, apparently I predicted it in 2011 ;)
"Rumors about ARM-based EC2 instances from Amazon this year. Also SSD based EBS. #aws #ec2 #arm #ssd #cloud #amazon
3:14 AM - 27 Feb 2011"
I don't think that's a useful way to think about pricing a product or service. The price has to be driven by the market, not your costs, which means ultimately your costs have to be driven by the market as well or you're done for.
I suspect they're pricing this a little high because they don't have a lot of them yet, meanwhile plenty of people will pay a bit extra to get a chance to try this out and do proof-of-concepts on them. As long as the market created by the latter matches the availability due to the former, this will work out fine for Amazon. They can always lower prices as these become more mainstream. Novelty value can be real value.
FOUR ThunderX cores for THREE € per month.
AWS offers one core for TWENTY $ and with the typical extra charges… excuse me but what the actual fuck?!?!
The "cloud" industry is approaching the point of the market being fully served, with everybody starting to focus on price. Amazon surely wants to be at least prepared for that, if not planning to undercut everybody early on Arm-based hosting.
To me the most interesting thing will be comparing against t3, which gives you a surprising bang for your buck in scale out workloads, even when you factor in the fractional baseline credit allocation.
But Amazon doesn't appear to be aiming for those users with this product. T3s are probably a faster and cheaper option for the "RPi in the cloud" use-case.
I suspect the real users of this platform are going to be using them for development and testing for mobile platforms.
Cheaper? Yes. Faster? Only in bursts.
The advantage of the A1 instances is you get a full CPU core that you can run at 100% usage 24/7 for ~$18/month. A t3.small is ~$15/month, but you can only use the CPU at 100% for a fraction of the time.
If you're frequently pegging the CPU and so a T3 doesn't work for you, your cheapest option is a C5 which runs ~$61/month.
A1 fits right in between the two needs.
If they’re significantly cheaper to run at similar perf this is an easy win for arch agnostic things like bastion servers or purely interpreted scripting languages.
A vCPU on an A1 instance is a physical Arm core. There is no SMT (multi-threading) on A1 instances. In my experience on the platform, the performance is quite good for traditional cloud applications that are built on open source software, especially given the price. Since the Arm architecture is quite different than x86, we always recommend testing the performance with your own applications. There's really no substitute for that.
Are A1 instances supported as hosts for ECS clusters?
The Arm build for the ECS agent is not published to dockerhub just yet -- but we plan to start publishing soon.
I'll also point out that most docker images you build/run today are x86 and so won't run on arm machines anyway.
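A quick way to check what will actually run on a given host (a sketch — the messages are illustrative):

```shell
# x86 images won't run on an arm64 (A1) host without emulation, and
# vice versa, so check the host architecture first.
case "$(uname -m)" in
  aarch64|arm64) echo "arm64 host: need arm64 or multi-arch images" ;;
  x86_64)        echo "x86_64 host: most existing Docker Hub images work as-is" ;;
  *)             echo "other architecture: $(uname -m)" ;;
esac
```

Multi-arch images publish a manifest list, so `docker pull` resolves the right variant automatically; `docker manifest inspect <image>` shows which architectures a given image actually provides.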
In particular I'm wondering how the vCPUs compare to the equivalent m5 instances. I.e. a1.xlarge and m5.xlarge both have 4 vCPUs, though the a1 has half the memory. So for tasks that aren't memory bound, would these have similar performance?
I'm disappointed that they aren't retiring the vCPU designation, too. These Arm cores don't have anything like hyperthreading, so an A1 vCPU equals a complete core. The a1.xl has 4 cores, 4 threads. The m5.xl has 2 cores, 4 threads.
I am very curious to see how it plays out. I've been waiting for a cloud ARM offering from a major provider for a while now (there have been some smaller but notable players like Scaleway, but demand seemed to always outrun supply, and I couldn't get VMs running near me in America.)
> If your application is written in a scripting language,
> odds are that you can simply move it over to an A1 instance
> and run it as-is. If your application compiles down to
> native code, you will need to rebuild it on an A1 instance.
They're clearly putting a hundred-mile moat between them and cross-compilation from the start :D
I wonder what their support stance will be when confronted with customer cross-compile scenarios.
I’ve spent a lot of time making cross-compile work for complicated builds. It can be tedious and frustrating fighting compilers and build systems to get it right.
Sometimes it is necessary because the target didn’t have a complete native toolchain, but other times because the target was just so incredibly slow for large builds.
I haven’t built anything with them, but these Arm servers appear to be neither of those two cases.
I doubt it's that hard if you're using a language with a reasonable cross-compile toolchain. That will probably influence choice of technology for new things aimed at this kind of performance envelope.
No doubt a lot of existing software will be left out by cross-compilation being too much of a pain.
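To make the "reasonable cross-compile toolchain" case concrete: Go, for instance, targets 64-bit Arm with nothing but two environment variables, while C typically needs a full cross toolchain. A sketch — the `myapp` names are hypothetical, and a Go module is assumed in the current directory:

```shell
# Go: cross-compile for the A1's arm64 CPUs from any host.
GOOS=linux GOARCH=arm64 go build -o myapp-arm64 .

# C: needs a cross toolchain instead, e.g. on Debian/Ubuntu:
#   apt install gcc-aarch64-linux-gnu
#   aarch64-linux-gnu-gcc -O2 -o myapp-arm64 myapp.c

# Verify the output targets aarch64:
file myapp-arm64    # expect "ELF 64-bit LSB executable, ARM aarch64, ..."
```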
Put another way, there isn't anything in the base C spec (prior to C11's <stdatomic.h>) which can provide a guaranteed memory ordering barrier, which is why you absolutely have to depend on third-party specifications to get those guarantees. For example, if a program is using pthreads or OpenMP, their synchronization primitives must be used as well to assure portability.
That isn't to say that given a particular piece of code and compiler/switches/version the resulting program is wrong, just that it's quite possible changing compilers/flags may result in "incorrect" code generation.
If you want the Cheapest, go to DigitalOcean. AWS is engineering the Best. Why HN has this obsession with quibbling over a couple of dollars while hosting their hobby projects on the same cloud provider that Epic Games and Netflix pay hundreds of millions of dollars to for their massive workloads, I have no idea. It's not made for you.
Amazon can get on stage and throw valid infrastructure resiliency complaints at Google. They can't do the same thing against DigitalOcean or Linode, because that'd be like Usain Bolt making fun of a toddler's quarter mile time.
AWS is a leader in "running a web app at scale". But when it comes to Healthcare, Finance, Government, Education, Telecommunications, etc, they are small fish in a big pond.
So there you have Oracle with essentially zero growth and a failed cloud business that is not only running from behind, it's a disaster. Things are so bad, Oracle has begun trying to hide their cloud numbers when they report.
Simultaneously, Oracle's balance sheet is turning into a toxic wasteland. Net tangible assets have gone from positive $6 billion to negative $12 billion in just three quarters. They're now spending the equivalent of ~22% of their net profit on debt interest alone. Ellison will have to try to turn to another very large acquisition soon to bail out the ship that is about to sink. As the large cloud competitors get far larger in the next few years, they're going to begin not just robbing Oracle of growth but taking their existing business away. The scale is at a point where for AWS and Azure to double in size again (guaranteed to happen), Oracle is going to lose big.
I switched off of DO after they took my droplet offline because it was getting DDoS'd. I understand they want to protect their network and other customers becoming collateral damage, but it was still annoying. My little node was handling the attack just fine until DO kicked it off.
EDIT: And I've heard of DO customers getting their droplets taken offline for being DDoS'd even when they weren't under attack and were simply receiving a lot of traffic from reddit or something.
It's a similar situation with AWS. They're not /cheap/, and they're not /cheaper/ than a lot of alternatives. The advantage of AWS is flexibility and depth of tooling, not price, though a lot of people are under the general impression that "it's cheaper on AWS".
It requires care to do a comparison which actually measures the subset of features which you use — I've seen the other side of that where someone justified dropping a ton of cash on a particular option based on a specific feature but, jumping ahead a few years, never ended up using that due to performance/stability/security issues.
Now you realize the power of branding, cult of personality, marketing, getting foreign cops to beat foreign workers at warehouses... sure is hell. I shit you not, my book just arrived from Amazon.ca.
TPUs are already a better deal than GPUs for training many models. These CPUs don't seem to have a similar niche yet, but who knows what else they have ready to switch on.
The next ten years are going to be really interesting in the hardware space.
The ThunderX2-based machines aren't A72s, but they're probably easier to source and are basically in the same conceptual ballpark.
Arm has a bunch of extensions which would be really useful on these in some circumstances. And that is ignoring the "custom silicon" that Amazon claims to have.
The long term, however, is where things start to get interesting. I suspect that by the second generation we're going to see ARM servers able to deliver a better price-per-core (and perhaps therefore price for computing performance) than the x86 alternatives. This will be particularly beneficial for applications which require a lot of CPU power and are highly parallelizable. Picture a workload that can take advantage of 64 cores, with ARM you can get this many cores for a lot less cost than x86.
A76 would be pretty interesting...
processor : 0
BogoMIPS : 166.66
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd08
CPU revision : 3
Indeed. A72 is like the minimum viable core :) It's no eMAG or ThunderX2, but it's kinda comparable to the original ThunderX.
Byte Order: Little Endian
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
NUMA node(s): 1
Vendor ID: ARM
Model name: Cortex-A72
L1d cache: 32K
L1i cache: 48K
L2 cache: 2048K
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2
Will this be much cheaper than Intel instances?
I’m thinking incredibly cheap ARM instances in the future.