EC2 Instances Powered by Arm-Based AWS Graviton Processors (amazon.com)
420 points by mcrute on Nov 27, 2018 | 152 comments



Very cool. As others point out, it does seem a bit expensive, but I do find it impressive that AWS has added AMD and ARM offerings in such a short period. Setting aside the idea of using ARM servers purely for the potential cost savings, I feel this could be a boon to those wanting to run ARM CI builds on actual hardware instead of QEMU.


Do hosting companies no longer tell you the speed of a processor core? I couldn't find anything that explains how fast one of these ARM cores goes. Is it 1 GHz or 100 MHz? Seems like that would be quite important. 1 vCPU is a bit of an arbitrary figure: it could be one vCPU that runs at 4 GHz, or two vCPUs that only run at 1 GHz. I feel like there's information missing.


Just created a new instance. Processor speed is 2297 MHz. lscpu result:

  ubuntu@ip-172-31-15-145:~$ lscpu
  Architecture:        aarch64
  Byte Order:          Little Endian
  CPU(s):              1
  On-line CPU(s) list: 0
  Thread(s) per core:  1
  Core(s) per socket:  1
  Socket(s):           1
  NUMA node(s):        1
  Vendor ID:           ARM
  Model:               3
  Model name:          Cortex-A72
  Stepping:            r0p3
  BogoMIPS:            166.66
  L1d cache:           32K
  L1i cache:           48K
  L2 cache:            2048K


But clock speed doesn't really mean much these days either. 1 GHz is just as arbitrary: the instruction rate through two CPUs at the same clock rate can vary massively.

So a vCPU is an arbitrary unit, but as each one is equivalent, they can be benchmarked for the relevant workload.


vCPUs are not at all equivalent across instance types! One vCPU is a single hardware thread on an actual core (i.e., two vCPUs make up a full core of whatever the underlying machine is, if it's an SMT2 core). So the power of a vCPU is quite different across different hardware types.

You might be thinking of ECU (EC2 compute unit), which are intended to be comparable across hardware and are normalized to some old 1.7 GHz CPU that I guess was common in the early days of EC2. Amazon doesn't promote the ECU rating for instance types much, but it's still available if you look.


Just to clarify, if you read below, you'll find that vCPUs on the A1 instances are each physical ARM cores.


Right, I didn't mean to imply otherwise - but I can see how what I wrote could come off that way.

One vCPU is a single hardware thread, which means that on cores without SMT, one vCPU == one core.

Since the A72 doesn't have SMT, you get one core per vCPU. So in that sense, you are getting more bang for your buck if you just count physical cores: twice as many cores per vCPU as on the SMT-enabled x86 instance types...

FWIW, I tried to look up the ECU rating of the new ARM instances but they are listed as "NA" in the EC2 console.


GHz doesn't matter anyways. In general nothing matters since threads can be individually throttled without you noticing (apart from declining performance). See my Lightsail review: https://www.karoly.io/amazon-lightsail-review-2018/


That depends on what you're paying for. If you're using Lightsail or one of the t-series EC2 instances you'll definitely get throttled as you don't get dedicated hardware. All other EC2 instance types give you dedicated hardware, on the Intel instances you get one hardware thread per vCPU and on the ARM instances you get a physical core. Those instance types don't throttle.


Challenge accepted ;-) Will benchmark them as well.


Details are there on the appropriate pages.

> C5 instances feature the Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo CPU clock speed of up to 3.4GHz, and single core turbo up to 3.5 GHz using Intel Turbo Boost Technology. C5 instances offer higher memory to vCPU ratio and deliver 25% improvement in price/performance compared to C4 instances, with certain applications delivering greater than 50% improvement. C5 instances provide support for the new Intel Advanced Vector Extensions 512 (AVX-512) instruction set, offering up to 2x the FLOPS per core per cycle compared to the previous generation C4 instances.

https://aws.amazon.com/ec2/instance-types/c5/

Haven't looked up details for the new stuff yet, might take some time to be up.


With AWS, I found this info from Tableau recently:

An AWS vCPU is a single hyperthread of a two-thread Intel Xeon core for M5, M4, C5, C4, and R4 instances. A simple way to think about this is that an AWS vCPU is equal to half a physical core. Therefore, when choosing an Amazon EC2 instance size, you should double the number of cores you have purchased or wish to deploy with.

https://onlinehelp.tableau.com/current/server/en-us/ts_aws_v...

Don't know how true it is, but there's no reason for me not to trust what Tableau has to say.


It is in the AWS documentation:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance...

"Each vCPU is a hyperthread of an Intel Xeon CPU core, except for T2 instances."


It’s true. Also note that you get a physical core for each vCPU with t2.

I’ve benchmarked c5 and t3 against t2 and have found single thread perf to be better on c5 compared to t2. When loading all vCPUs the performance suffered on the newer instances (each HT performed worse than a single t2), so you would get more bang for your buck on t2. YMMV of course.


Yes T2 performs quite admirably! If your CPU-bound workload doesn't use much memory, smaller sizes of T2 Unlimited can be a much better deal than C4.

T3 gives you both threads of your physical core and commensurately more CPU credits per hour to utilize them. You may notice that even t3.nano offers 2 vCPUs where t2.small has 1 vCPU.


Most hosting companies do. Cloud hosting (a small subset, as far as I know) generally doesn't.

If you believe the sales pitch, it's to help you know that 2 ECU will be twice as fast as 1 ECU (without having to worry about whether this chip has 8 MB of L2 cache or 12).

If you're more cynical, you'll probably think they do it to help hide the fact that cloud computing tends to get trounced badly across all performance aspects (raw cpu, ram, disk io and network io) while being quite a bit more expensive.


> Cloud hosting (a small subset, as far as I know)

Am I living in a bubble, or are you living under a rock?


I'd be surprised if AWS/Google/Azure accounted for more than a low single-digit percentage of all "hosting".

Last hard numbers I saw, Amazon was the single largest by number of servers, but OVH wasn't too far behind. There are thousands of these companies worldwide, most older than any cloud provider. Some of the more well known: OVH, Rackspace, Softlayer, Hetzner, Hivelocity, Online.net, and on and on.

Add to that people doing their own infrastructure (Apple, Facebook, Google, ...), and companies doing huge colo and on-premise deployments (banks, governments). Then consider even larger providers and telcos: AT&T (where WoW was hosted, and might still be), Telefonica, Equinix (which I believe AWS was merely a customer of in Singapore at some point), ...

That still leaves VPS, shared hosting, and whatever you want to call DO/Linode/Packet/Vultr.


On-demand pricing starts at roughly $20/month. When they said low cost, I was hoping for something in the ~$5 range.


And it doesn't come with any storage or network transfer. You will soon require a PhD to figure out cost optimisation on AWS; or simply switch to DO for much simpler pricing.


I just can't fathom why AWS is so popular given its high mental overhead. Every other VPS provider is like, "Look! It's a server with local storage and memory and CPUs. It comes with this much bandwidth and this much storage. If you pay us you can have it!" AWS: "Hold my beer."

EDIT: I get it, I realize that the value of AWS is in the whole basket of services you can use, not just EC2, but damn do they make it unpleasant. If DigitalOcean gets more enterprise-serious and adds more hosted services, AWS really better watch out.


Most people who buy EC2 instances at scale won't want most of them to have persistent storage attached. Storage is handled by a DB or S3.


Right? I'm surprised at how many of the comments here seem to be about using AWS for hobby-scale stuff... I keep a Digital Ocean VPS for my personal website and OpenVPN AS and stuff like that, but anything I'm deploying for work on AWS is... either totally transient (I just need compute and enough working space to store the code I'm executing) or critically important (local storage is not sufficient, I need redundancy and shared state and lots of 9s). I can't think of many commercial use cases outside of prototypes where you'd want a bundle of compute + storage all on the same hardware.

It's like we stopped the "cattle vs. pets" metaphor a bit early, and some people are mad that a butcher is selling steak instead of a whole cow.

(And just for the record, my team has pretty minimal AWS usage... we're not one of the cloud kool-aid shops. We buy overflow worker server capacity on EC2 spot instances, but our primary app stack runs completely on colo'd bare metal for our day to day operations. Even on our bare metal though... individual app/worker servers are disposable and persistence is provided elsewhere)


They also do not give you any easy-to-understand, centralized view of where you're spending money.

I spent a week trying to figure out why AWS kept charging me $1.36 every month. Turns out I had a reserved instance that I forgot about. But damned if any of their "cost explorer" tools could help you figure that out.

The only way to deal with it is to open the EC2 console and then switch to each region one by one until you find one that has any running or reserved instances.

The AWS console is a joke; I have no idea why it's so popular.


While I agree that AWS pricing and UI can be a nightmare for individuals to understand, at every mid to large enterprise I've worked at they simply haven't cared - a corporate card is entered in an admin screen somewhere and forgotten about.

I honestly don't think this matters to most serious AWS customers (i.e., those paying substantial amounts of money) as much as it maybe matters to individuals looking for a 5 buck VPS to host a side project on. The small players like Digital Ocean/Linode/etc. exist and cater more easily to this business if you want it, and it doesn't seem to be helping them make major inroads against the big three.


Well, there's still an advantage to that flexibility. Average specs across the board are fine for a random project, but some projects require much more RAM than CPU, or much more bandwidth, or (most commonly) much more storage. If you need a large amount of storage on Digital Ocean, you have to take one of their beefy machines, even if you don't need that much RAM or CPU. Sure, there is object storage, but that's a whole different storage system and it doesn't play well with a database server.

When you actually need specific specs at a huge scale, it's not much more complex to do it over AWS.


For a database server (or for anything that just needs a big filesystem), DO has block storage, which just looks like an attached hard disk. Works fine with a small VM.

they also have memory-optimized or cpu-optimized VM choices.


Seems like GCP is best for sliding amounts of compute, because you can create a machine with whatever RAM/CPU combination you want (more or less).


I'd argue that buying the beefy machine at DigitalOcean would still cost far less than AWS. And you'd be getting on with your life versus banging your head on AWS for the day, week, or year.


If you think that is confusing, try ordering Microsoft SQL Server on Azure, which was measured in "DTUs" (Database Transaction Units), which globbed together disk, CPU, and RAM into one "easy-to-use" metric!


Actually, you have a per-vCore purchasing model now.


Management: "Increase revenue per user!"

Azure: "Introducing DTUs, the simplified, easy-to-use pricing metric for SQL Server!"


They offer Lightsail for exactly this purpose.


I think Amazon has reached the tipping point. So many side projects lie abandoned because if I'm not setting up my own VPC subnets and gateways manually, I feel defiled and ashamed.

This is what Amazon has successfully done: raised the cognitive sunk cost so fucking high that you need to pay for their BS certification, which I didn't even study for but somehow passed, even after arriving at the exam 30 minutes late because the city of Vancouver are such fucking master extortionists with parking tickets and somehow everybody had decided to park... I digress.

TL;DR: use DO or Linode... don't spend a decade on AWS and realize the greatest trick the devil ever pulled was convincing the world he didn't exist. Well, lemme tell you, friends: unshackle yourselves from AWS and use Netlify.


I know you're being downvoted, but this sentiment toward AWS is exactly what will eventually kill it. Why should someone waste their life away learning the complexities of AWS? It's not a lasting addition to one's life like, say, learning biology or history or how to write. It's needless complexity that won't have any value in 15 years.


Dunno about you, but I have not found many "complexities" in the parts of AWS that people actually use that don't map pretty directly to using other cloud providers or running your own systems. I look at an AWS thing, go "oh, okay," and go about my day.

Learn fundamentals, apply fundamentals. It's pretty simple.


Sounds like a startup opportunity. :) RackSpace for hobby projects?


+1 for Netlify


Dedicated AWS instances for machine learning AI-assisted AWS cost optimizations would be the next bestselling AWS product.


You mean this thing they just introduced yesterday? https://aws.amazon.com/blogs/aws/new-predictive-scaling-for-...


Product configuration can be modelled very efficiently as a CSP (constraint satisfaction problem). You should use MiniZinc instead.


'Our product line pricing is so simple you don't even need a FPGA to figure it out!'


Did Amazon hire the MS licensing folks to work on their subscription prices?


No because at least they give you the prices! MS would always say you need to 'get a quote'...


Azure prices are (notoriously) public, and there is a nice public calculator available at:

https://azure.microsoft.com/en-us/pricing/calculator/

(link may redirect to localized versions)

I have no idea where you'd get the "get a quote" thing, other than corporate customers usually needing (or wanting) some hand holding and guidance given that a) most of them already have applicable discounts as part of current enterprise agreements and b) accurate quotes require more than guesswork at sizing - we take into account usage patterns, whether or not the customer can benefit from reserved instances (which significantly lower costs 1-to-3y down the road), etc.

(I'm a Microsoft FTE working on Azure solutions, and have a personal Azure account which has detailed, per-resource cost breakdowns complete with links to the public pricing pages...)


I replied to someone making reference to MS licensing pricing (like Exchange CALs and such), not Azure pricing. MS has done great work in the cloud with Azure.


Azure prices are public, unless they've funneled you through a CSP (which Microsoft loves doing)--at which point pricing is a black-box nightmare with even worse support than Azure has by default.


Not so fast there, you need the associate certification. And if you forget to bring your ID, you basically flush a cool few hundred American dollars.


AWS is a low-level service provider; on top of that you get services like DO or Heroku that abstract the complexity away for you. But that only adds cost. Basically you exchange complexity for cost, or, pay more for having to do and know less.


Digital Ocean has their own data centers, they aren’t built on top of AWS

https://www.quora.com/Does-DigitalOcean-have-its-own-datacen...


I wonder how much of a volume discount you'd need from AWS to profitably host a Digital Ocean on top of it...


You’d be competing with Lightsail. It’d be hard to compete with Amazon on their own hardware.


Spot instances are about $3.50 a month, for what it's worth.


I wonder how long the spot price will remain this low. It does fluctuate after all.


Ooh. Thanks for reminding me about spot. That's way better than the on-demand price. (Still 1/4 the cores of the Scaleway offering though.)


The a1.medium 12-month reserved cost is $151, which is about $12.60 a month.


I think the high price is to amortize the Annapurna Labs acquisition and to pay for NRE.

Now they will own the entire vertical stack like Apple, but unlike Apple, Amazon can live on minimal margins.

BTW, apparently I predicted it in 2011 ;)

"Rumors about ARM-based EC2 instances from Amazon this year. Also SSD based EBS. #aws #ec2 #arm #ssd #cloud #amazon

3:14 AM - 27 Feb 2011"

https://twitter.com/nivertech/status/41667359487823872


> I think the high price is to amortize the Annapurna Labs acquisition and to pay for NRE.

I don't think that's a useful way to think about pricing a product or service. The price has to be driven by the market, not your costs, which means ultimately your costs have to be driven by the market as well or you're done for.

I suspect they're pricing this a little high because they don't have a lot of them yet, meanwhile plenty of people will pay a bit extra to get a chance to try this out and do proof-of-concepts on them. As long as the market created by the latter matches the availability due to the former, this will work out fine for Amazon. They can always lower prices as these become more mainstream. Novelty value can be real value.


Neither Google nor Microsoft has ARM CPUs yet, so there is as yet no price competition.


Scaleway.

FOUR ThunderX cores for THREE € per month.

AWS offers one core for TWENTY dollars, plus the typical extra charges… excuse me, but what the actual fuck?!


Only for situations where ARM is an absolute requirement, which are few and far between.


Linaro has an ARMv8 cloud, too: https://connect.linaro.cloud


Packet.net (bare metal cloud hosting vendor) have ARM instances.


Agree, this is still quite expensive.


It seems like a better value than a t2 instance because it's not a burstable core. The network performance is probably much better on this instance type as well.


I was hoping for something in the $0.10/month range. An A53-type multicore ARM CPU works out at only $5 or so to buy and only $3 per year of power to run, so a timeshare on a single core of such a machine ought not cost much!


Are the employees free? As well as memory, motherboard, etc?


t3.nano instances already start at about $4 (+ EBS and network). Their Lightsail alternative pricing model for EC2 instances also has $3.50 and $5 instances [0].

[0]: https://aws.amazon.com/lightsail/pricing/


Amazon-made CPUs now available in EC2 instances? I hope someone at Intel is paying attention....


Surely they are; all kinds of "cloud" companies were their favourite milking cows in the last decade. I believe Amazon's primary motivation for going as far as buying its own chip company was, at the least, to send Intel the signal "would you lower the price tag?"

That "cloud" industry is approaching the point of market being fully served, and everybody starting to focus on price. Amazon surely wants to be at least prepared for that, if not having a plan to undercut everybody early on ARM based hosting.


From the perspective of capabilities, I think you’re right, the market is nearly fully served. From usability, I think we still have a long ways to go.


It'll be interesting to benchmark these against existing instance types. Also, there hasn't really been anything in AWS for low sustained load but these seem to fit the bill perfectly.


Yea, I think the fact that it even has a medium instance type is the biggest win for apps that don't need a lot of RAM. c5 and m5 start at large with 4GB/8GB respectively.

To me the most interesting thing will be comparing against t3, which gives you a surprising bang for your buck in scale out workloads, even when you factor in the fractional baseline credit allocation.


Rick Branson ran a Phoronix Test Suite benchmark. a1 is ~2/3 of the cost/performance of c5. [0]

[0] https://twitter.com/rbranson/status/1067304265696202752


Awesome!


I'm still curious what the real use cases are going to be for these platforms. I think many people see "ARM in AWS" and think they're going to get an array of super-inexpensive Raspberry Pis with Amazon's network, power, storage, and APIs behind them.

But Amazon doesn't appear to be aiming for those users with this product. T3s are probably a faster and cheaper option for the "RPi in the cloud" use-case.

I suspect the real users of this platform are going to be using them for development and testing for mobile platforms.


> T3s are probably a faster and cheaper option for the "RPi in the cloud" use-case.

Cheaper? Yes. Faster? Only in bursts.

The advantage of the A1 instances is you get a full CPU core that you can run at 100% usage 24/7 for ~$18/month. A t3.small is ~$15/month, but you can only use the CPU at 100% for a fraction of the time.

If you're frequently pegging the CPU and so a T3 doesn't work for you, your cheapest option is a C5 which runs ~$61/month.

A1 fits right in between the two needs.


We run an auto-scaled pool of C5 instances that host stateless containerized services. T* instances don’t make sense since we run them pretty warm all the time. If we need less capacity, we scale down. The appeal for this type of workload is purely compute per dollar spent. They are substantially cheaper than the x86 options.


A1 is launched under ARM's new "Neoverse" branding that might help to distinguish this class of products from Raspberry Pi expectations: https://www.arm.com/company/news/2018/10/announcing-arm-neov...


Does vCPU on the chart refer to a physical ARM core, and is there any clue how they stack up against a modern x86-64 core?

If they're significantly cheaper to run at similar perf, this is an easy win for arch-agnostic things like bastion servers or purely interpreted scripting languages.


Hello from the EC2 engineering team!

A vCPU on an A1 instance is a physical Arm core. There is no SMT (multi-threading) on A1 instances. In my experience on the platform, the performance is quite good for traditional cloud applications that are built on open source software, especially given the price. Since the Arm architecture is quite different than x86, we always recommend testing the performance with your own applications. There's really no substitute for that.
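If you just want a quick first impression before testing a real application, a generic CPU benchmark makes an easy smoke test. A minimal sketch, assuming the Ubuntu AMI and sysbench's 1.0 command-line syntax (an illustration, not an official recommendation):

  $ sudo apt-get update && sudo apt-get install -y sysbench
  $ sysbench cpu --threads=$(nproc) run

Treat the numbers as a rough sanity check only; as the comment above says, benchmarking your own application is what actually matters.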


Thanks for the direct response!

Are A1 instances supported as hosts for ECS clusters?


Yes, and we have AMIs available:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/...

The Arm build for the ECS agent is not published to dockerhub just yet -- but we plan to start publishing soon.


The docker image they release for the on-host agent [0] doesn't seem to have an arm tag. That seems to point towards it being unlikely.

I'll also point out that most docker images you build/run today are x86 and so won't run on arm machines anyway.

[0]: https://hub.docker.com/r/amazon/amazon-ecs-agent/


Most of the official Docker images are now multi-arch. We have been using Packet.net to build the arm images. But there are many images that are not, and some registries don't even support multi-arch.


Yes


How come the "ECU" ratings aren't listed for the A1 types in the pricing page?

In particular I'm wondering how the vCPUs compare to the equivalent m5 instances. I.e., a1.xlarge and m5.xlarge both have 4 vCPUs, though the a1 has half the memory. So for tasks that aren't memory-bound, would these have similar performance?


ECU ratings are based on an ancient x86 benchmark and don't really compare well with each other anymore, let alone across totally different architectures. I think they've been trying to retire it for years.

I'm disappointed that they aren't retiring the vCPU designation, too. The A72 doesn't have anything like hyperthreading, so an ARM vCPU here equals a complete core. The a1.xl has 4 cores, 4 threads. The m5.xl has 2 cores, 4 threads.


Are you running Lambda functions on those?


Why does EC2 still only have read-only serial console?


Amazon making their own CPUs? Damn. Looks like if you run Python or other scripted languages you can simply port your code over...Nice!


Certainly scripted languages run fine on ARM Linux, but there are some caveats worth mentioning. ARM JIT compilers for some platforms and languages may not be as mature or performant as their x86 counterparts, since they haven't been battle-tested as much in the server landscape. Also, if you are using C extensions that don't compile on ARM, obviously that is going to affect you. And finally, if any part of your CI is building images containing native code (even if all of your code is, say, Python), you are going to need to duplicate that process for ARM, possibly requiring some retooling of your build system.
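To make that duplication concrete, here is one hedged sketch of a container workflow, assuming a hypothetical myrepo/app image, an amd64 tag already pushed by your existing pipeline, and Docker's (currently experimental) manifest CLI:

  # build and push an arm64 variant natively, e.g. on an A1 instance
  $ docker build -t myrepo/app:arm64 .
  $ docker push myrepo/app:arm64

  # stitch the per-arch tags into a single multi-arch tag
  $ docker manifest create myrepo/app:latest myrepo/app:amd64 myrepo/app:arm64
  $ docker manifest push myrepo/app:latest

Clients can then pull myrepo/app:latest and the registry serves whichever image matches their architecture.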

I am very curious to see how it plays out. I've been waiting for a cloud ARM offering from a major provider for a while now (there have been some smaller but notable players like Scaleway, but demand always seemed to outrun supply, and I couldn't get VMs running near me in America).


I use an ARM laptop for travel. I've had a lot of problems running Python code, mostly data-science related. A lot of recompilation is required, which is a hassle. Also, sometimes you'll be constrained in which libraries you can use, because some of them simply don't work on ARM due to ARM-specific bugs. Numba is one that caused a lot of headaches, but I think they've fixed it since.


Which one? I'm in the market for a super cheap one!


Asus C201P. ~180 euros. Very durable, battery lasts for 10 hours easy, more if only programming. It's preinstalled with ChromeOS. Installing Linux took some tinkering; there are some blog posts around that were very helpful.


You might also look at the Pinebook, 11" for $99. I'm considering one also.


I've looked at the Pinebook but 6 hour battery life isn't enough for me.


Yes, that's exactly what the linked article mentions:

  > If your application is written in a scripting language,
  > odds are that you can simply move it over to an A1 instance
  > and run it as-is. If your application compiles down to
  > native code, you will need to rebuild it on an A1 instance.


Heh, "rebuild it _on_"

They're clearly putting a hundred-mile moat between them and cross-compilation from the start :D

I wonder what their support stance will be when confronted with customer cross-compile scenarios.


I don’t think it is a moat, just stating the simple case.

I’ve spent a lot of time making cross-compile work for complicated builds. It can be tedious and frustrating fighting compilers and build systems to get it right.

Sometimes it is necessary because the target didn’t have a complete native toolchain, but other times because the target was just so incredibly slow for large builds.

I haven’t built anything with them, but these Arm servers appear to be neither of those two cases.


Might just be "figure it out for yourself".

I doubt it's that hard if you're using a language with a reasonable cross-compile toolchain. That will probably influence choice of technology for new things aimed at this kind of performance envelope.

No doubt a lot of existing software will be left out by cross-compilation being too much of a pain.


Golang has really nice cross compilation features, it's as easy as setting two environment variables `GOOS` and `GOARCH` when building.
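For instance, a minimal sketch (the file and binary names here are made up):

  $ GOOS=linux GOARCH=arm64 go build -o myapp-arm64 main.go
  $ file myapp-arm64
  myapp-arm64: ELF 64-bit LSB executable, ARM aarch64, ...

The resulting binary should run unmodified on an A1 instance; note that cgo is disabled by default when cross-compiling, so pure-Go code is the easy case.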


I can confirm that cross-compiling for aarch64 should work without hiccups.
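For plain C the distro cross toolchains make the simple cases easy too. A sketch assuming Debian/Ubuntu's gcc-aarch64-linux-gnu package and a hypothetical hello.c:

  $ sudo apt-get install -y gcc-aarch64-linux-gnu
  $ aarch64-linux-gnu-gcc -O2 -o hello-arm64 hello.c

As noted elsewhere in the thread, it's large builds with tangled configure/build systems where cross-compiling gets painful, not single files.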


If you use a modern GitOps CI/CD type workflow, building containers automatically, then you don't need to worry about the manual building part either - this can be done cleanly with automation.


They acquired Annapurna Labs back in 2015 - I'm sure this is based on their work, they (Annapurna) were already building chips for Synology prior to the acquisition.

https://en.m.wikipedia.org/wiki/Annapurna_Labs


It is. They mention it in the second paragraph of the linked article.


They landed in a few routers as well: https://wikidevi.com/wiki/Annapurna_Labs


The actual CPU core designs are licensed from Arm (another commenter said they're Cortex A72s).


Properly written C is portable (and much faster than Python)


Depending on how you define "properly written." It's easy to write seemingly proper code that assumes TSO and works on x86 but does not work on architectures with weak memory consistency.


Code that assumes TSO is likely broken on x86 as well. The use of the volatile keyword doesn't really change that in any meaningful way either, given that part of the language is a bit under-specified. Basically, C compilers are free to do a lot of non-obvious optimizations which can reorder around volatile accesses.

Put another way, there isn't anything in the base C spec which can provide a guaranteed memory ordering barrier, which is why you absolutely have to depend on 3rd party specifications to get those guarantees. For example, if a program is using pthreads or openMP, their synchronization primitives must be used as well to assure portability.

That isn't to say that, given a particular piece of code and compiler/switches/version, the resulting program is wrong, just that it's quite possible that changing compilers/flags may result in "incorrect" code generation.


True, but outside of memory ordering, x86_64 and ARM64 are probably among the easiest to port between. Endianness, alignment and type sizes are the same, for example. Plus a lot of code already has been ported to both.


Very interesting. Looking forward to benchmarking these for NodeJS API workloads. Does anybody with experience running node on ARM have any advice/warnings?


It all just works well. I've been actually running Node on these instances for a while and it's awesome.


2 GB / 1 vCPU for about $20. On DO you can get 4 GB / 2 vCPUs plus 80 GB storage and 4 TB of transfer for $20. (Same with linode.com and upcloud.com.)


In the same keynote where this was announced, they said that one of their regions has reached nearly 5 Pbps of inter-AZ network capacity. They also poked at Google for using phrasing like "regions usually have independent cooling and power control planes". They clarified a common misconception: regions aren't single data centers, nor are AZs single data centers, and at least one of their AZs (probably a us-east-1 AZ) has fourteen data centers.

If you want the Cheapest, go to DigitalOcean. AWS is engineering the Best. Why HN has this obsession with quibbling over a couple of dollars while hosting their hobby projects on the same cloud provider that Epic Games and Netflix pay hundreds of millions of dollars for their massive workloads, I have no idea. It's not made for you.

Amazon can get on stage and throw valid infrastructure resiliency complaints at Google. They can't do the same thing against DigitalOcean or Linode, because that'd be like Usain Bolt making fun of a toddler's quarter mile time.


Amazon could also pick on someone their own size. Oracle's annual revenue is more than double AWS's, at $40B a year. If you look at on-prem enterprise IT, or Azure, you'll find competitors much closer to AWS, with very high performance, resilience, and security guarantees. In fact, Azure is growing much faster than AWS, and Microsoft has a huge advantage in many of the industries where AWS is lagging.

AWS is a leader in "running a web app at scale". But when it comes to Healthcare, Finance, Government, Education, Telecommunications, etc, they are small fish in a big pond.


AWS by itself will be the size of Oracle in fiscal 2020. They'll do $24-$25 billion in sales for 2018. Oracle's business hasn't expanded in years: sales in 2015 were $38b, and that's still their approximate annualized sales figure today. AWS is ~64% the size of Oracle. In 2019 they'll be 85% the size of Oracle.

So there you have Oracle with essentially zero growth and a failed cloud business that is not only running from behind, it's a disaster. Things are so bad, Oracle has begun trying to hide their cloud numbers when they report.

Simultaneously, Oracle's balance sheet is turning into a toxic wasteland. Net tangible assets have gone from positive $6 billion to negative $12 billion in just three quarters. They're now spending the equivalent of ~22% of their net profit on debt interest alone. Ellison will have to try to turn to another very large acquisition soon to bail out the ship that is about to sink. As the large cloud competitors get far larger in the next few years, they're going to begin not just robbing Oracle of growth but taking their existing business away. The scale is at a point where for AWS and Azure to double in size again (guaranteed to happen), Oracle is going to lose big.


> If you want the Cheapest, go to DigitalOcean.

I switched off of DO after they took my droplet offline because it was getting DDoS'd. I understand they want to protect their network and other customers becoming collateral damage, but it was still annoying. My little node was handling the attack just fine until DO kicked it off.

EDIT: And I've heard of DO customers getting their droplets taken offline for being DDoS'd even when they weren't under attack and were simply receiving a lot of traffic from reddit or something.


I remember once someone saying something along the lines of (and I'm paraphrasing) "Amazon aren't the cheapest retailer out there, but they want you to believe they are".

It's a similar situation with AWS. They're not *cheap*, and they're not *cheaper* than a lot of alternatives. The advantage of AWS is flexibility and depth of tooling, not price, though a lot of people are under the general impression that "it's cheaper on AWS".


“It's cheaper on AWS” really depends on where you're starting from, too. If you actually need more than, say, a bare Linux box it's usually hard to even start collecting the staff time + costs for the equivalent enterprise environment with network infrastructure, monitoring, access management and audit logging, the various fault-tolerance / recovery options, etc.

It requires care to do a comparison which actually measures the subset of features which you use — I've seen the other side of that where someone justified dropping a ton of cash on a particular option based on a specific feature but, jumping ahead a few years, never ended up using that due to performance/stability/security issues.


THIS. I found out the hard way. I'm back on Linode.

You realize now the power of branding, cult of personality, marketing, getting foreign cops to beat foreign workers at the warehouse... sure is hel--I shit you not, my book just arrived from Amazon.ca.


Amazon shipping their own CPUs (which are only available on their cloud) and Google shipping their own TPUs (which are only available on their cloud).

TPUs are already a better deal than GPUs for training many models. These CPUs don't seem to have a similar niche yet, but who knows what else they have ready to switch on.

The next ten years are going to be really interesting in the hardware space.


Well, it's true about the TPUs, but you can find similar A72-based servers from other vendors if you try hard enough...

https://en.wikichip.org/wiki/hisilicon/hi16xx/hi1616

The ThunderX2-based machines aren't A72s, but they're probably easier to source and are basically in the same conceptual ballpark.


Sure, they are Arm cores, but who knows what else they have on them.

Arm has a bunch of extensions[1][2] which would be really useful on these in some circumstances. And that is ignoring the "custom silicon" that Amazon claims to have.

[1] https://www.arm.com/products/silicon-ip-cpu/machine-learning...

[2] https://www.androidauthority.com/arm-project-trillium-842770...


For someone who doesn't know much about ARM vs. Intel/AMD: what is the use case here? If both run Linux, what is the difference?


At the current point in time, this is probably limited to ARM-specific workloads (such as if I need to run ARM binaries, or generate ARM binaries without the hassle of cross-compiling).

The long term, however, is where things start to get interesting. I suspect that by the second generation we're going to see ARM servers able to deliver a better price per core (and perhaps therefore a better price per unit of computing performance) than the x86 alternatives. This will be particularly beneficial for applications which require a lot of CPU power and are highly parallelizable. Picture a workload that can take advantage of 64 cores; with ARM you can get that many cores for a lot less than with x86.


These instances already deliver better cost efficiency. The a1 instances are within 5% of the performance of similar c5 instances, but are 40% (!!!) cheaper.


Any idea what ARM cores these are based on? A5x? A7x?

A76 would be pretty interesting...


Looks like A72. Here's the cpuinfo:

  processor	: 0
  BogoMIPS	: 166.66
  Features	: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
  CPU implementer	: 0x41
  CPU architecture: 8
  CPU variant	: 0x0
  CPU part	: 0xd08
  CPU revision	: 3


Too bad; the A72 is the sort of thing found in $49 SBCs these days. For the price, it would have been nice to see at least an A73, but at least it's not an A53.


A73 isn't for servers, and I don't see any server/embedded SoCs with A75 yet, so Amazon isn't exactly behind.


True, most server ARM processors that come to mind are custom (or at least semi-custom): Ampere, Centriq, ThunderX2. Trying to map them to A72/73/75/76 is perhaps short-sighted, since those are primarily smartphone cores - we'll have to wait and see how they perform in real-world tests.


> at least it's not an A53

Indeed. A72 is like the minimum viable core :) It's no eMAG or ThunderX2, but it's kinda comparable to the original ThunderX.


Jeff has to take his cut from somewhere, after all.


Probably should have run lscpu instead. On recent ARM distros, lscpu knows how to decode the part/variant/etc., and it also pays attention to the cache and NUMA topology.


What does cpuinfo display when the CPU lacks the cpuid feature?


Arm processors don't have a cpuid instruction, but they do have a number of ID registers that provide similar information (e.g. MIDR_EL1, REVIDR_EL1).
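On arm64 Linux those registers are exposed through sysfs, so you can read the raw MIDR yourself. A sketch; the value shown is what an A72 r0p3 should report (implementer 0x41, part 0xd08, revision 3, matching the cpuinfo above):

  $ cat /sys/devices/system/cpu/cpu0/regs/identification/midr_el1
  0x00000000410fd083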


lscpu output:

  Architecture:        aarch64
  Byte Order:          Little Endian
  CPU(s):              8
  On-line CPU(s) list: 0-7
  Thread(s) per core:  1
  Core(s) per socket:  4
  Socket(s):           2
  NUMA node(s):        1
  Vendor ID:           ARM
  Model:               3
  Model name:          Cortex-A72
  Stepping:            r0p3
  BogoMIPS:            166.66
  L1d cache:           32K
  L1i cache:           48K
  L2 cache:            2048K
  NUMA node0 CPU(s):   0-7
  Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid


Here's lstopo output for the largest one, for good measure: https://instaguide.io/info.html?type=a1.4xlarge#tab=lstopo though there's nothing terribly surprising.


That seems unlikely; ARM topology reporting has been a bit messed up, so lstopo is probably confusing what are actually CPU clusters with sockets.


Wow, this is actually huge. Finally, a big player offering ARM-based servers!


Very exciting! I’m curious what type of workloads these will be used for.

Will this be much cheaper than Intel instances?


Comparing the largest instance (16 vCPUs, 32 GiB RAM) to anything else of similar size in N. Virginia, the price seems more affordable than what was purchasable until now.


This is great news for ARM in general. Hopefully they can offer massive thread count in the future, even if with less powerful cores.


Does ARM feature hardware assisted virtualization? Like Intel VT instructions?

I’m thinking incredibly cheap ARM instances in the future.


Yes, Arm does feature hardware virtualization, and it works similarly to other architectures.


Can't wait to see some specs and benchmarks of this new ARM processor! Best AWS news for me in 2018.


"we’ve worked with them to build and release two generations of ASICs (chips, not shoes)"


I see, Mr. Bezos finally got concerned by hardware makers turning AWS into their milking cow. Quite a smart move.


I was expecting a firearm based processor


Do you Google employees think that the NSA isn't watching what we do? Mmm, it looks like China is evil, so they can't do what the USA can.



