AWS Fargate – Run Containers Without Managing Infrastructure (amazon.com)
533 points by moritzplassnig on Nov 29, 2017 | 229 comments

Fargate looks really expensive compared to just running an EC2 instance. A 1 vCPU container with 2GB of RAM will run you $55/month; an m3.medium with 1 vCPU and 3.75GB of RAM is $49. The prices seem to get uncomfortably worse from there. I haven't priced them out the whole way, but a 4 vCPU container with 8GB of RAM ($222/month) is price-competitive with a freaking i3.xlarge ($227/month), which has 4 vCPUs, 30.5GB of RAM, and 10Gbit networking. Topping Fargate out at 4 vCPUs and 30GB of RAM puts it right between an r4.2xlarge and an i3.2xlarge, both with 8 vCPUs and 61GB of RAM (the i3 is more expensive because it also has 1.9TB of local SSD).
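For anyone who wants to check or extend these numbers, here is a back-of-the-envelope sketch. The rates are the 2017 per-hour Fargate prices quoted elsewhere in this thread ($0.0506/vCPU-hour, $0.0127/GB-hour), and ~730 hours per month is assumed; treat both as assumptions, not current pricing.

```python
# Back-of-the-envelope: monthly cost of an always-on Fargate task at the
# 2017 per-hour rates quoted in this thread (assumptions, not current).
HOURS = 730  # approximate hours in a month

def fargate_monthly(vcpus, mem_gb):
    """Fargate bills per vCPU-hour and per GB-hour of provisioned capacity."""
    return (vcpus * 0.0506 + mem_gb * 0.0127) * HOURS

# 1 vCPU / 2 GB task -- the ~$55/month figure above
small = fargate_monthly(1, 2)
# 4 vCPU / 8 GB task -- the ~$222/month figure above
large = fargate_monthly(4, 8)
print(f"1 vCPU, 2 GB: ${small:.2f}/mo; 4 vCPU, 8 GB: ${large:.2f}/mo")
```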

Enough people are still trying to make fetch happen, where fetch is container orchestration, that I expect that fetch will indeed eventually happen, but this is a large vig for getting around not-a-lot-of-management-work (because the competition isn't Kubernetes, which is the bin packing problem returned to eat your wallet, it's EC2 instances, and there is a little management work but not much and it scales).

If you have decided that you want to undertake the bin packing problem, AWS's ECS or the new Elastic Kubernetes Service makes some sense; you're paying EC2 prices, plus a small management fee (I think). I don't understand Fargate at all.

AWS employee here. Just want to say that we actually had a typo in the per second pricing on launch. The actual pricing is:

    $0.0506 per CPU per hour
    $0.0127 per GB of memory per hour
Fargate is definitely more expensive than running and operating an EC2 instance yourself, but for many companies the amount that is saved by needing to spend less engineer time on devops will make it worth it right now, and as we iterate I expect this balance to continue to tip. AWS has dropped prices more than 60 times since we started out.

I know AWS services put outsized fees on things that don't really have marginal costs (e.g. S3 read operations), because the fees are used to disincentivize non-idiomatic use-cases (e.g. treating S3 as a database by scanning objects-as-keys.)

Under this economic-incentive-system lens, I'm curious whether new AWS services might also be intentionally started out with high premiums, as a sort of economically-self-limited soft launch. Only the customers with the biggest need will be willing to pay to use the service at first, and their need means they're also willing to put up with you while you work out the initial kinks. As you gain more confidence in the service's stability at scale and matching to varied customer demands, you'd then lower the price-point of the service to match your actual evaluation of a market-viable price, to "open the floodgates" to the less-in-need clients.

Tangentially related: you can definitely use S3 as a database now, and they seem to encourage it: https://aws.amazon.com/blogs/aws/s3-glacier-select/

Without lock support, I wouldn't consider it a database.

Kind of a read mostly database, actually useful for data lake type things.

Yeah I'm really looking forward to this as I often have to get some part of one of many jsonl files that matches some condition.

S3 select will likely let me delete a lot of custom code.

Well, if something needs more burn-in time, the last thing you want to do is get thousands or millions of customers. The reliability of a Timex or a Toyota has to be high.

Yes, this is the standard pricing model for all new tech.

I’m your target for this service as a large consumer of ECS that hates dealing with the underlying resources. Despite Fargate being very compelling, it’s priced too damn high (even without the mistake).

I’d be willing to pay a premium but this is just not cost effective.

You hate starting an AMI and having a simple script to connect to an ECS cluster?

No, that’s the easy part. I strongly dislike that auto scaling groups do not communicate with ECS (service scaling) and that updating your nodes to run a new ECS optimized ami is tedious and error prone.

Want to scale up/down when you have high or low load? Cool, configure it at the ECS service level AND at the auto scaling group level. Will the auto scaling group take instances away before they’ve been drained of connections, or randomly? Who knows? The systems don’t talk to one another to coordinate. Fail.

What about just removing a node from the cluster when it fails? Decrease service count min temporarily -> drain connections at the ELB -> remove from target group -> kill instance. This is a 5-10 minute process to do correctly that should be one click. Same problem: stuff doesn’t talk.

Want to update to a new ECS optimized AMI to get updates and patches but ensure a minimum of core services keep running while you scale down after you’ve scaled up? Good luck!

Also, last time I checked IAM roles sucked on ECS. Not every container in the cluster should have the same role. I wonder if fargate fixes this.

I used the per-hour pricing in my numbers because I assumed the per-second was wrong, yeah.

The issue for me is we don't have funding, so we just pay out of savings until we make money.

The cheapest option seems to be more than $40, but based on the description it's only allocated $5 to $10 worth of computing. I'm using Linode or Digital Ocean pricing for that, but even with Amazon's inflated EC2 pricing it's probably $25 max.

So I can't justify paying $40 per month when I can get the same horsepower for $10 or $20. I can set up an open source system for monitoring, and containers restart automatically.

For people that have $200,000 burning a hole in their pocket, it may be a different story.

Focus on making a product that makes money and afterwards build with containers and these complex-to-maintain technologies.

Start simple and take care of your scalability problems when you have them, not before.


If you are trying to pay bottom dollar for hosting, AWS is not the solution.

You're completely right. You can get an amazing server from a very good hosting provider at a fraction of the AWS price. Really. For $200/month you can get a good server from a good host that would cost literally $5000/month or more on AWS. Azure and Google are no different.

What you don't get is all their regions, easy scalability with EBS, managed services, etc. But all of that has a very high price, and depending on who you are and what you're trying to do, these cloud providers are probably the worst option.

Curious if this also holds for spot nodes...

For build and test tasks... sure scalability would be needed.

What would you suggest? Scaleway seems to have poor APIs for automation, and Atom CPUs, so it's not useful.

Main thing was security... DO offers firewalls now, though.

A T2 will give even better real-world performance than an M3, at 2x vCPU and 4GB of RAM at ~$34/mo. Unless you're compute bound the bursting performance of T2s is perfect for web services. Add an instance reservation, with no upfront cost, and you're looking at ~$20/mo for a t2.medium. Using Fargate will reduce your instance management overhead, but not worth it at over 2x the price, at least for me.

I'd rather have two T2's than one M4 for most web services, or 8 T2's over 4 M4's etc. for both better performance, reliability and price. T2's are the best bang for your buck by far, as long as your service scales horizontally.

No way t2's are a good choice for your average webservice. You don't get the full CPU with t2's. With a t2.medium if you're using over 20% of the cpu (40% of 1 vcpu or 20% of both), you're burning CPU credits. So unless you have a webapp that uses 8gb of memory yet stays somewhere under 20% cpu utilization (maybe with some peaks), you'll eventually get throttled.

t2's are for really bursty workloads, which only makes sense for a webservice if you aren't handling much traffic in general and you have maybe 2 instances total.
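To make the credit arithmetic above concrete, here is a sketch in Python. The t2.medium constants (2 vCPUs, 24 credits earned per hour, one credit = one vCPU-minute at 100%) are from AWS's published specs at the time; treat them as assumptions.

```python
# Sketch of the T2 CPU-credit arithmetic described above, using the
# t2.medium figures from AWS's 2017 docs (assumptions): 2 vCPUs,
# 24 credits earned per hour, 1 credit = 1 vCPU-minute at 100%.
VCPUS = 2
CREDITS_EARNED_PER_HOUR = 24  # t2.medium

def credit_delta_per_hour(utilization):
    """Net credits gained (+) or burned (-) per hour at a given average
    utilization across both vCPUs (0.0 to 1.0)."""
    spent = utilization * VCPUS * 60  # vCPU-minutes consumed per hour
    return CREDITS_EARNED_PER_HOUR - spent

print(credit_delta_per_hour(0.20))  # ~0: 20% across both vCPUs is break-even
print(credit_delta_per_hour(0.50))  # sustained 50% burns 36 credits/hour
```

Anything above the 20% break-even drains the balance; once it hits zero, the instance is throttled to baseline.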

> So unless you have a webapp that uses 8gb of memory yet stays somewhere under 20% cpu utilization (maybe with some peaks), you'll eventually get throttled.

FWIW this is, unfortunately, a pretty common description of many low traffic Rails apps.

Just for a data point:

We have been serving around 10 million web requests / day per t2.medium. Not lightweight static files or small json responses, actual dynamic pages with database load. Our servers mostly sit around 20% though.

Might not be that much compared to larger web services, but we have a localised audience, so nighttime CPU credits get collected and used during peak times. So it fits our workload. They are great value when the credit system works in your setting.

Have a grafana dashboard constantly monitoring credits and have alerts in place though. But haven’t had a sudden issue that we needed to manually remedy.

> Have a grafana dashboard constantly monitoring credits and have alerts in place though.

This, otherwise your infrastructure will come grinding to a halt at the worst time. It's a potential DoS vector in that way too, if you're relying on charging up credits overnight to handle traffic during the day.

Can help avoid that with the even-newer t2.unlimited [1] instances, it seems.

[1] https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...

Counter data point: my erlang nodes on t2 often report lockups of more than 400ms. This is not acceptable when queries are handled in 10ms on a typical day. Erlang has built-in monitoring for when the underlying system scheduler makes an error.

Most web apps are IO bound, not CPU bound, and a throttled T2 has IO to spare versus its CPU usage.

t2's are great until something happens and you really need the full CPU, then you get throttled and suddenly your service goes down because it can't keep up with the load.

If you use t2's for anything important, keep an eye on the CPU credit balance.

I work on T2, and boy do I have a fix for you!: https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...

Just applying selinux policy on CentOS 7 will kill the CPU credits on a micro instance. Running updates is a risky business.

But if you get throttled in the first place that means you're going to have some kind of performance degradation since your server was using more than the baseline level. Web apps being IO bound isn't relevant here, because the only requirement for issues to arise is for your server to have consistent 20%+ cpu usage.

Well, they're relevant in that a heavily IO bound app is probably unlikely to use much CPU -- it's too busy waiting on IO to use 90% of CPU, and maybe does stay mostly under 20%. Obviously this depends on a lot of details beyond "IO-bound", but it is not implausible, and I think does accurately describe many rails apps.

t2 instances work great for many web services with daily load fluctuations if you view them as basically a billing mechanism. AWS originally offered a reserved-instance model where you didn't pay for instances that weren't running, but now that you pay for RI instance-hours regardless of usage, scaling down doesn't save you any money, so you either buy RIs to cover your peak usage (and waste money off-peak) or pay much higher on-demand prices some of the time. With t2's (which were introduced around the same time as the RI change), you can just run the right number of instances to keep your CPU Credit Balance above water (stockpiling credits off-peak) and buy RIs for them all, making them extremely cheap. And you can still auto-scale on the credit balance to avoid throttling in case of unexpected load.

I work on T2, and we just released a change that makes your CPUCreditBalance recover immediately instead of through the old complex 24h expiration model. There is now much less need to stockpile or manage your CPU credits: https://forums.aws.amazon.com/ann.jspa?annID=5196

With T2 Standard auto-scaling based on the CPUCreditBalance can be a really bad idea, because the rate at which an account can launch T2s with an initial credit balance is limited. If your application has a bug that causes a health check to fail, the ASG can quickly burn through your launch credits by cycling the instances, and then it's possible for your health checks to keep failing because the later instances start with a CPUCreditBalance of zero.

We just released T2 Unlimited in part to solve that. In this new version all instances start with a zero balance, but none are ever throttled and you can do a sustained burst immediately at launch: https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...

I'm a bit confused by the wording on the T2 unlimited post:

> T2 Unlimited instances have the ability to borrow an entire day’s worth of future credits, allowing them to perform additional bursting.

What does this mean? Specifically, what is meant by "borrow"? Do they have to be paid back? Does the next day then have fewer credits?

Instead of ever being throttled your instance accumulates a "borrowed" CPUSurplusCreditBalance. If your CPU Utilization goes back below the threshold, the earned credits will pay down the surplus balance. If you terminate an instance with a nonzero CPUSurplusCreditBalance you'll be charged for the extra usage.

We removed the direct relationship between credits and 24h cycles, so your current usage no longer affects tomorrow's balance: https://forums.aws.amazon.com/ann.jspa?annID=5196
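The surplus-credit mechanics described above can be sketched as a toy model. This is an illustrative sketch under stated assumptions (the real accounting has more subtleties, e.g. caps on how much can be borrowed), not AWS's actual implementation.

```python
# Toy model of T2 Unlimited surplus credits as described above: instead of
# throttling, usage beyond earned credits accrues a "borrowed" surplus
# balance; later earned credits pay it down; any surplus left at
# termination is billed. Constants and flow are illustrative assumptions.
def step(balance, surplus, earned, spent):
    """Advance one interval: apply earned credits, then spending."""
    balance += earned
    # earned credits first pay down any borrowed surplus
    repay = min(balance, surplus)
    balance -= repay
    surplus -= repay
    if spent <= balance:
        balance -= spent
    else:
        surplus += spent - balance  # borrow instead of throttling
        balance = 0
    return balance, surplus

# Hour 1: earn 24 credits, burn 60 -> 36 borrowed
b, s = step(0, 0, earned=24, spent=60)
# Hour 2: idle; the 24 earned credits pay down the surplus
b, s = step(b, s, earned=24, spent=0)
print(b, s)  # surplus reduced from 36 to 12
```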

Ahhh, I understand now, thanks.

You're completely correct. I was just going price-to-price.

Just wanted to share a perspective:

I think it's a mistake to call it expensive, or to compare using a more abstracted service like Fargate vs. something more granular like EC2.

If I need a service that lets me prototype, build a product, be first to market, etc. splitting hairs over compute costs seems moot. Not to say it isn't interesting to see the difference in pricing or how AWS quantified how to set pricing of the service.

FWIW, if you watched the Keynote stream, the theme of it was literally "Everything is Everything" and a bunch of catch-phrases basically meaning they've got a tool for every developer in any situation.

One other note: from my experience, it's often easier to migrate from fully-managed to more self-managed services than the other way around. By owning more of the operations, you make more decisions about how it operates. Those become the pain points of any migration project.

But does this lock you in to Amazon?

Trying to run a DCOS/marathon or K8s cluster is not trivial. Last time I looked, every service out there basically spun up a Docker machine with some auto-magic cert generation.

Surely there are other services out there which will just run containers, allowing you to deploy from a k8s or marathon template? What are the other options?

> But does this lock you in to Amazon?

Sorta. You can always do more yourself, usually for a cost.

Most AWS services IME offer a fair amount of portability. My high-level point is: I just want tools to build things. Not to obsess over the tools I use to build them.

That said, I don't have specific enough domain knowledge to answer your questions or suggest alternatives.

Sadly, even giants like Rackspace are moving to being managers of AWS services. It’s hard to beat Amazon’s pricing at scale, even if this is a touch overpriced.

Got a cite for rackspace reselling aws resources?

Looking through the pricing page I'm not sure what sort of workload this would make sense for. Just looking at the examples from the pricing page I think I'm getting sticker shock.

- https://aws.amazon.com/fargate/pricing/

Example 1:

> For example, your service uses 1 ECS Task, running for 10 minutes (600 seconds) every day for a month (30 days), where each ECS Task uses 1 vCPU and 2GB memory.
> Total vCPU charges = 1 x 1 x 0.00084333 x 600 x 30 = $15.18
> Total memory charges = 1 x 2 x 0.00021167 x 600 x 30 = $7.62
> Monthly Fargate compute charges = $15.18 + $7.62 = $22.80

So the total cost for 5 hours of running time is $22.80? Am I even reading this correctly? If so, what would this be cost effective for?

I think they mislabeled the pricing. If you look at the per-hour pricing ($0.0506/CPU-hour and $0.0127/GB-hour), that translates to $0.00084333 and $0.00021167 per minute, which is a pretty reasonable price. This also makes sense in light of their recent announcement of per-minute EC2 billing.

You are correct, we mislabelled the pricing on launch, it is corrected now. The correct values are:

    $0.0506 per CPU per hour
    $0.0127 per GB of memory per hour

Hopefully this is the correct math for their example 1:

(1 * 2 * 0.00021167 * (600/60) * 30) + (1 * 1 * 0.00084333 * (600/60) * 30) = 0.380001

Because that's much better than the original:

(1 * 2 * 0.00021167 * (600) * 30) + (1 * 1 * 0.00084333 * (600) * 30) = 22.80006
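The same check in a couple of lines of Python, treating the published per-"second" rates as what they actually were, per-minute rates, i.e. the per-hour rates divided by 60:

```python
# Checking the thread's arithmetic: the rates published as "per second"
# were really the per-hour rates divided by 60, i.e. per-minute rates.
cpu_hr, mem_hr = 0.0506, 0.0127
cpu_min, mem_min = cpu_hr / 60, mem_hr / 60
print(round(cpu_min, 8), round(mem_min, 8))  # ~0.00084333, ~0.00021167

# Example 1: 1 task, 1 vCPU, 2 GB, running 10 minutes/day for 30 days
minutes = 10 * 30
correct = (1 * cpu_min + 2 * mem_min) * minutes
wrong = correct * 60  # the same rates mislabeled as per-second
print(f"corrected: ${correct:.2f}, as launched: ${wrong:.2f}")
```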

Ah yes, that makes much more sense. Hopefully they will update the pricing page with the correct values soon :)

That is correct

Fargate seems like it's an in-between of Lambda and ECS. Lambda because it's pay-per-second on-demand functions being run (or in the case of Fargate, containers) and ECS because Fargate is ECS without having to worry about having the EC2 instances configured. I'm not sure where this falls in, but maybe developers were complaining about Lambda and wanted to just run containers instead of individual functions?

Lambda has some limitations such as cold starts, 5 min max execution time, etc because it is designed for a much more granular operational model. Fargate is designed to run long running containers that could stay up for days or weeks, and always stay warm to respond to requests, so there is no cold start.

The way I think about it is temporal and spatial control, and giving up control over them so that some common entity can optimize and drive down your costs. With Fargate, you're giving up spatial control so you can pay for just the task resources you asked for. With Lambda, you're additionally giving up temporal control so you pay for resources only when your lambda is actually servicing a request.

When I think about the offerings this way, I can start to decide when I want to use them because now I can ask myself "Do I need strict temporal/spatial control over my application?" and "Do I think I can optimize temporal/spatial costs better than Lambda/Fargate?".

I assume as much; my contention is that that's not gonna really be worth it even to the people who think they want it. Not at this price.

It's Lambda without the 5 minute limit.

Plus the ability to do custom containers. For some workloads, may be valuable.

Yes. If you're using Elastic Beanstalk, or Cloudformation with autoscaling, Fargate seems to be an incredible waste of money. Maybe if you have an extremely small workload that doesn't need a lot of resources running, I could see it, but at that point you'd be better off with Lambda instead?

Can you elaborate on what the bin packing problem is?

Kubernetes requires machines big enough to run all your containers. Those machines are the bins. Your containers are the packages. Fitting your containers in such that there is no criticality overlap (in AWS, that all instances of service X are spread across machines in different AZs) and that there is room for immediate scaling/emergency fault recovery (headroom on the machines running your containers) gets expensive. You're buying big and running little, and that comes with costs.

Meanwhile, in AWS, you already have pre-sized blobs of RAM and compute. They're called EC2 instances. And then AWS pays the cost of the extra inventory, not you. (To forestall the usual objection: the "overhead" of a Linux OS is like fifty megs these days, so it's not something I'd worry about--most of the folks I know who have gone down the container-fleet road have bins that are consistently around 20% empty, and that does add up.)

You may be the one percent of companies for whom immediate rollout, rather than 200-second rollout, is important, and for those companies a solution like Kubernetes or Mesos can make a lot of sense. Most aren't, and I think that they would be better served, in most cases, with a CloudFormation template, an autoscaling group with a cloud-init script to launch one container (if not chef-zero or whatever, that's my go-to but I'm also a devops guy by trade), and a Route 53 record.

You're basically paying overhead for the privilege of `kubectl` that, personally, I don't think is really that useful in a cloud environment. (I think it makes a lot of sense on-prem, where you've already bought the hardware and the alternatives are something like vSphere or the ongoing tire fire that is OpenStack.)

I know you're answering the question of bin-packing, but after two years of experience with it, I can say that for me, bin-packing is one of the smallest benefits (though it sells very well with management), though perhaps a baseline requirement these days. The real benefits, in my experience, stem from the declarative nature of cluster management, and the automation of doing the sensible thing to enact changes to that declarative desired state.

Sure. CloudFormation exists for that, though, and both its difficulty and its complexity are way overstated while also letting you manage AWS resources on top of that.

And it doesn't cost anything to use.

Eh, there are a lot of terrible things I'd rather put myself through than writing another CloudFormation template for any sort of complex infrastructure. It could have been made easier and more readable if my company had allowed the use of something like Monsanto's generator [1], but creating ASTs in JSON is not my idea of a good user experience.

[1] https://github.com/MonsantoCo/cloudformation-template-genera...

I maintain auster[1] and am a contributor to cfer[2] for exactly that purpose. ;) CloudFormation really isn't a rough time anymore, IMO.

[1] - https://github.com/eropple/auster

[2] - https://github.com/seanedwards/cfer

If you know those tools exist, maybe. I just put together a new project using cloudformation (technically serverless, but it turned into 90 percent cloudformation syntax anyways), and it was pretty rough.

Maybe it's just me, but as a programmer the first thing I ever asked when looking at the wall of CloudFormation JSON was "so how do we make this not suck?".

Our job is not just to automate servers, it's to automate processes, including stupid developer-facing ones.

True, but as a _programmer_, working on a _new-to-me platform or package_, I am _very_ reluctant to add an extra third-party abstraction layer which requires its own evaluation of quality and stability and has its own learning curve. It's gotta be pretty clear to me that it really is "what everyone else is doing", or I've gotta get more experience with the underlying thing to be able to judge for myself better.

I've definitely been burned many times by adding an extra tool or layer meant to make things easier, that ends up not, for all manner of reasons. I think we all have.

Worth noting that "nearly 31,000 AWS CloudFormation stacks were created for Prime Day" [1], so Amazon uses CloudFormation heavily internally. Not a guarantee that it's what 'everyone else is doing', but it's a good indicator of quality/stability and that it will remain a core service within the AWS ecosystem for some time.

[1] https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-a...

I think they're talking about a CF template generator being the third-party software (I could be wrong).

You're not wrong. But in my case, that had been basically forbidden as an option, essentially because it "wasn't supported by Amazon," and because there's just additional risk to non-standard approaches. AWS certifications cover CloudFormation, so you can hire for that with low risk pretty easily. Other nonstandard utilities, not so much.

Cloudformation templates can be written in YAML now, which is a lot less sucky than writing JSON by hand.

If your only experience with CloudFormation is hand-written JSON, it's worth another look.

We used to use troposphere, a Python library for generating CloudFormation templates, but have since switched back to vanilla CloudFormation templates now that they added support for YAML. We're finding it's much nicer to read and write in plain YAML. We're also now using Sceptre for some advanced use cases (templatizing the templates, and fancier deployment automation).

> If your only experience with CloudFormation is hand-written JSON, it's worth another look.

Strongly agree.

YAML and sensible formatting conventions really do transform the usability of CloudFormation.

And so does Terraform, which is pretty awesome!

Terraform requires significant infrastructure to get the same state management and implicit on-device access that CloudFormation's metadata service does. A common pattern in systems I oversee or consult on is to use CloudFormation's metadata service (which is not the EC2 metadata service, to be clear) to feed Ansible facts or chef-zero attributes in order to have bootstrapped systems that do not rely upon having a Tower or Chef Server in my environment.

The Terraform domain spec is not sufficiently expressive (just look at the circumlocutions you need to not create something in one environment versus another). It's way too hard to build large modules, but the lack of decent scoping makes assembling many small modules difficult too. Worse, the domain spec also requires HCL, which is awful, or JSON, which is a return to the same problem that cfer solves for CloudFormation. One of my first attempts at a nontrivial open-source project was Terraframe[1], a Ruby DSL for Terraform; I abandoned it out of frustration when it became evident that Terraform's JSON parsing was untested, broken, and unusable in practice. Out of that frustration grew my own early CloudFormation prototypes, which my friend Sean did better with cfer.

If you're looking for an alternative to CloudFormation, I generally recommend BOSH[2], as it solves problems without introducing new ones. Saying the same for Terraform is a stretch.

[1] - https://github.com/eropple/terraframe

[2] - https://github.com/cloudfoundry/bosh

Cloudformation is not without its problems, even still. I feel you overstate Terraform's issues, though there is tons of valid criticism to go around. I would say Terraform really shines in areas CF still does not.

We use Terraform for our foundation and a clever Cloudformation custom resource hack to export values out to use in Cloudformation stacks where appropriate (use with Serverless services, etc). Works great for us; Terraform has seen significant (surprising, even if you haven't looked at it in 6+ months) development over the past year.

Immutability, in other words.

It's multi-machine scheduling, basically. Given N resources and M consumers, how can I fit all M consumers most efficiently, while using the minimum N?

The bin metaphor: imagine one or several bins on the floor, and a bunch of things to place in them. Bin packing is just playing Tetris to make sure all your things are packed into as few bins as possible, because bins cost money.
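For the curious, the classic first-fit-decreasing heuristic makes the metaphor concrete. This is a toy sketch, not what any real scheduler literally runs (real schedulers consider CPU, memory, affinity, and spread simultaneously):

```python
# First-fit decreasing: a classic bin-packing heuristic. Here the items
# are container memory reservations (GB) and the bin capacity is one
# instance's RAM. Illustrative only.
def first_fit_decreasing(items, capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no bin fits: open a new "instance"
    return bins

# Seven containers packed onto 8 GB instances
containers = [4, 3, 3, 2, 2, 1, 1]
placement = first_fit_decreasing(containers, capacity=8)
print(len(placement), placement)  # -> 2 bins: [[4, 3, 1], [3, 2, 2, 1]]
```

First-fit decreasing is not optimal in general, but it is a decent approximation, and it shows why headroom for scaling and failure recovery means buying more bins than the raw container sizes suggest.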

If you have a bunch of jobs and you need to run them efficiently on a bunch of compute, you need to be careful not to oversubscribe the hardware, especially with respect to memory. There's an isomorphism between running differently sized jobs concurrently on a set of compute resources and the bin packing problem. It's a scheduling problem.

Running load efficiently on a given resource. Most VMs running a single app are under-utilized so it's more efficient to pack apps into containers and run them across a smaller pool of servers so that they all get the necessary resources without waste.

Kubernetes does really well with this, although ease of deployment using config files and the abstraction from underlying vms/servers is probably more useful for most companies.

Kubernetes emphatically does not do better at resource utilization than not using Kubernetes. You should figure on between ten and twenty percent of wastage per k8s node, plus the costs of your management servers, in a safely provisioned environment.

You can argue about the configuration-based deployment being worth it--I disagree, because, frankly, Chef Zero is just not that complicated--but it's more expensive in every use case I have seen in the wild (barring ones where instances were unwisely provisioned in the first place).

Based on what evidence? We can put hundreds of customer apps into a few servers and have them deployed and updated easily. We could try to manage this ourselves but it's much less efficient while costing much more effort. GKE also costs nothing for a master and there is no overhead.

K8S/docker also makes it easy to avoid all the systemd/init issues and just use a standard interface with declarative configs and fast deployments that are automatically and dynamically managed while nodes come and go. We have preemptible instances with fast local storage and cheap pricing that maintain 100s of apps. K8S also easily manages any clustered software, regardless of native capabilities, along with easy networking.

Why would I use chef for that - if it can even do all that in the first place?

> GKE also costs nothing for a master and there is no overhead.

Just a historical perspective: GCE used to charge a flat fee for K8s masters after 6 nodes. After the announcement of the Azure K8s service, with no master-fee, GCE has dropped the fee as well :)

Yes, the pricing seems off by 3 orders of magnitude! 2734 USD for a month of t2.micro-like capacity! Unbelievable!


Check your math. 1 vCPU at $0.0506 per hour + 1 GB RAM at $0.0127 per hour gives $0.0633 per hour. At 750 hours per month, that's $47.48 per month. A t2.micro is $8.70 per month, but not even close to a whole vCPU, so it's not a direct comparison.

Edit: I think they have a mistake on the pricing page: the per-second rate looks more like a per-minute rate. Doing the calculation with the per-second and again with the per-hour stated prices gives a 60x difference in monthly cost.

Edit2: Yep, they've now fixed the per-second price; it was originally 60x the correct price.

Maybe it's precisely only billing for cpu and memory consumed, so if your workload has a small footprint and is mostly waiting around a lot for other services to respond it would be really cheap?

nah, I asked, it's for the amount reserved, not the amount used. https://twitter.com/nathankpeck/status/935930795864211461

Can someone do the math and compare it to Cloud Foundry / OpenShift solutions? This AWS offering seems to be a step into that part of the market.

(Azure Container Instance engineer here)

This looks very similar to what we launched with Azure Container Instances last summer.

The Azure Container Instances kubernetes connector is open source and available here:


Came here to post this. To me it shows the gap between Azure and the non-enterprise world. Azure did this a while back, as well as the managed k8s thing, and neither got much run on HN.

Perhaps Azure needs to work on marketing? Is there a legitimate reason Azure isn't getting more traction in the non-enterprise world? I mean that as a totally serious question, not in a dickish way. Is it because it has the Microsoft name attached to it or just because AWS has so much traction?

As always, full disclosure that I work at MSFT as well.

We run AWS, GCP and Azure.

Devs in my team can pretty much choose their favourite cloud to deploy to. Everyone always picks AWS; it's just the easiest to navigate and feels like everything links together well.

I think the only things we use Azure for is the Directory, and Functions to run some PowerShell.

As AWS is the industry standard, I feel that a lot of people like to stick with what they know too.

I'm in the unfortunate position of being curious about the one thing that folks are best advised not to share: your security/compliance stack. Based on what I've seen to date, nothing handles all three equally well, but I'm curious if you've found anything that gets close.

We use quite a lot of custom built tools and Splunk to funnel the logs from everywhere, so we can use their AI/ML to detect anomalies etc.

> Is it because it has the Microsoft name attached to it or just because AWS has so much traction?

Yes on both counts.

Also, the perception that Microsoft = Windows Server is common, to a very high degree of bias. If you don't operate on that platform, you'd immediately disregard Azure. A lot more work is needed to convince non-Windows operators and developers to buy into Microsoft's offerings around Linux. To be clear, that says nothing about the quality of the existing offerings; the issue is one of perception. The perception is that Linux is and always will be a secondary concern at Microsoft, and, potentially worse, there is skepticism over whether Microsoft will invest in and support Linux over the very long term. If one buys into that skepticism or doubts Microsoft's commitment to Linux, AWS is immediately the superior long-term platform bet.

I work for Pivotal and have observed that Cloud Foundry has had a boatload of success in this space so far (including on Azure; we won Azure NA consumption partner of the year in 2016 AND Google Cloud partner of the year), but you wouldn't know this from HN posts and comments. It's treated as some strange big relic from 2012...

HN has strong biases towards things that are startup-targeted or individual-hacker-targeted: open, inexpensive or new and small, or ubiquitous. Kubernetes and AWS in general get a lot of play compared to Azure, GCP or VMware (or even OpenStack these days). Nothing wrong with that necessarily, just a cognitive bias of the upvoters.

"Perhaps Azure needs to work on marketing? Is there a legitimate reason Azure isn't getting more traction in the non-enterprise world? I mean that as a totally serious question, not in a dickish way. Is it because it has the Microsoft name attached to it or just because AWS has so much traction?"

The problems of support and network effects go two ways. People perceive that Azure favours Microsoft tech, and may not support other platforms as well, which may or may not be true. More definitely, the tools and libraries for non-Microsoft languages tend to support AWS as the cloud of choice by having more features and being more heavily-used and tested with AWS than GCP or Azure.

Azure is doing a lot to address this, but creating repeatable infrastructure with HashiCorp tools like Packer and Terraform required way too many steps. However, they are much closer to competing with AWS than GCP is.

Wow this looks like an exact copy of Azure Container Instances.

Or just the general idea of a cloud provider making it easy to run containers. That’s not exactly a left-field idea.

Exactly this. I think this is pretty much the use case most people envision when they think about a container orchestration service (it was for me, anyway). My understanding is that EC2 and friends didn't deliver this on day 0 because efficient container isolation is hard.

Which is an exact copy of the Joyent service from the year before :)

You mean Triton? I wouldn’t really call it an exact copy if so. There was a whole Linux syscall translation layer in there...

Copy in the sense of the product features, not the product implementation. Joyent has long provided a "run your container as a service" offering, which IMHO is the best way for a small or medium-sized shop to run container services. The whole create-VMs-to-run-containers approach creates a lot of extra work. Plus, this could be great for teams doing data analysis: just-spin-up-100-containers-for-30-seconds type workloads.

The OP is short on details anyway: does Fargate run on tuned Xen VMs, or do they have Linux servers under there (or maybe they're SmartOS ;) )?

Yeah, looks very similar. I will be interested to see how quickly containers can be provisioned on Fargate.

Maybe I was doing something wrong, but my experience so far with ACI is that it consistently took about three to four minutes until a smallish container was ready for use.

What image were you running? Our internal monitoring indicates that it is generally significantly faster than that, but image pull is almost always the long pole for container startup time.

I think you're definitely underselling your title if you are who I think you are :)

>Today’s post is by Brendan Burns, Partner Architect, at Microsoft & Kubernetes co-founder.

I think I have AWS fatigue. I have a few certifications and a few years of experience working with AWS, but it's getting difficult to even keep track of all the services.

This is a trend that will only accelerate in the future. We're finally reaching the point where the rate of change is impacting people's careers. The half-life of useful knowledge keeps shrinking, and there is no end in sight.

Your old knowledge is still useful. Why do you need to offer the newest and shiniest, rather than old, tried and true tech?

Because I intend to stay employable.

It is really the AWS Extract Money From Customers' CFOs service.

You forgot "Elastic".

Of course. The more money the CFO has, the more is extracted! Pure genius!

AWS is a drug dealer.

Maybe it's just been the last few days, but it feels like every time I look at HN there are 2 new posts announcing new AWS services!

It's just the last few days. The big AWS re:Invent conference is happening this week, with all of the new service announcements.

Re:invent is going on. That's why. They hold big announcements (like new services) until this week each year.

Notably, this appears to confirm a Kubernetes offering (EKS)!

  I will tell you that we plan to support launching containers on Fargate using Amazon EKS in 2018
[Edit] Looks like that just got announced too: https://aws.amazon.com/eks/.

AWS employee here. You are correct. Fargate is an underlying technology for running containers without needing to manage instances, and it will integrate with both the ECS and EKS container orchestration and scheduling offerings.

Do all the containers I launch run in an EC2 VM that’s isolated for my account? Or does Fargate somehow provide the security isolation without being a VM?

Fargate isolation is at the cluster level. Apps running in the same cluster may share the underlying infrastructure, apps running in different clusters won't.

Are they creating a separate cluster for each AWS account? How is the isolation happening?

A customer creates a cluster on their account. You as a customer can create one or more Fargate clusters on your account to launch your containers in.

Is that infrastructure, EC2 instances?

I love AWS and their pace of innovation, but some areas are really lagging behind.

Two new container services announced, but São Paulo still doesn't even have ECS, which was announced in 2014.

This is one of a few signals that may suggest ECS may not figure prominently in AWS future strategy.

That's an understatement! We've been watching ecs-agent development stagnate for the past 6 months until just a couple of weeks ago.

ECS has been on death's doorstep while AWS has been pushing the Lambda strategy. My guess is that their numbers show a slowdown in Lambda uptake due to the problems with Lambda, so they're now moving over to this Fargate platform and ECS is getting a few dribbles of dev time as a consequence.

I think they need to get over this NIH/Rebrand&Relabel syndrome and implement Istio (https://istio.io/).

AWS employee on the ECS team here.

First of all you are using the wrong measurement of growth vs stagnation. We've continually been releasing features (not all of which are part of ecs-agent), while also working on many interesting backend projects such as Fargate. Much of what we develop is closed source or open sourced later, so the ecs-agent repo is not a good measurement of progress or attention.

Second, the idea that ECS is on death's doorstep is just false. In the container state of the union at re:Invent, Anthony Suarez, head of engineering on ECS, shared that ECS has experienced 450% growth, with millions of container instances under management and hundreds of millions launched per week: https://pbs.twimg.com/media/DP1sWVZUMAAflSW.jpg

This matches up with my personal experience as a developer advocate for ECS talking to customers pretty much every day who are considering ECS or moving to ECS because it makes it easier to connect your containers to other AWS services.

These anecdotes are great, but, and I'm being honest here, I don't care about 450% growth or how many millions of container instances that are reportedly running. I care about long-standing bugs being fixed in a timely manner.

Take a look at any random Github project that's unmaintained. That's the image Amazon has been showing the development community when they look at ECS. They don't see the closed source work. They don't see the hundreds of millions of internal KPIs ticked per week.

Now, I haven't spoken with a developer advocate, but I'm happy to share some of my frustration with you: I've had a dedicated resource working around bugs and limitations in ECS for months. We built a service mesh because ECS lacks service discovery, and then we built wonky patches to work around weird bugs in ecs-agent regarding how containers identify themselves. We've spent serious, deep time tracking down intermittent failures in the scheduler. We've worked in and around the strange task abstraction. This hasn't been a lovely experience. It's been hard and painful, but we press onward only due to the lack of time to convert to Kubernetes/Istio.

Are you in Seattle? Shoot me an email; I'd be happy to grab a coffee and share our experience.

I don't see any contact info on your profile. I'm not in Seattle but I am available at peckn@amazon.com and I can connect you to someone in Seattle you can talk to, or we can chat remotely.

Sounds great, I'll reach out; thanks.

AWS employee here on the ECS team. A service that does not figure prominently in AWS strategy would not be the first service featured in the keynote at re:Invent. AWS Fargate and EKS were the first things introduced this morning at the keynote.

Is "Fargate" an Aqua Teen Hunger Force reference? https://youtu.be/uOd7HQoKxcU?t=38

Yeah, as soon as I saw "Fargate" I thought, "Is Amazon really naming a product after a silly reference from an episode of ATHF?"

I'm not sure if I should be surprised or not.

I think they ran out of sensible names a long time ago.

Probably. First thing I thought.

I am getting lost with all the ways to run containers on AWS. Is this the equivalent of Google Compute Engine's beta option to boot from a Docker container?

As a contractor, I come into places and use stuff for about 6 months then move onto the next place with a different setup.

The Amazon stuff is especially confusing; it seems they have reinvented just about everything with their own jargon, and it really doesn't help.

AWS employee here on the ECS team. ECS on Fargate would be the closest thing to what you are asking for. Upload a container image, create a Fargate cluster, and launch a service on that cluster that runs your container.

Is this available today? I thought that's what I heard, but I'm not seeing anything in the AWS console.

It's currently available in the us-east-1 region, under the ECS service in the console. Create a new cluster and Fargate will be an option for launching and operating the cluster.

Thanks! I must have missed that bit about us-east-1 only

It's frustrating how hit or miss their service availability is in each region. I can understand other countries with different laws and regulations but they can't even get some services multi-region in the US.

Google Employee here

Its closest analog is Azure Container Instances or Google App Engine Flex.

I'm not 100% sure about the relationship between EKS, ECS and Fargate.

Why would I deploy to Fargate over EKS? I assume it's because with Fargate I don't have to write a k8s deployment spec?

Why would I deploy to Fargate over ECS?

Legitimately curious, and looking for clarification/correction.

AWS employee here. You would deploy to Fargate because you don't want to have to manage the underlying EC2 instances. You can use Fargate with both ECS and EKS (in 2018).

ECS and EKS are just two different schedulers for orchestrating the containerized services that you want to run. Fargate is the engine behind them that executes the containers for you without you needing to worry about servers.

ECS as a scheduler will always integrate much better with other AWS services. EKS will give you the advantage of being able to run the same scheduler on an on premise or on another cloud.

Thanks a lot for the explanation.

I thought EKS was managed? Do you still have to manage the underlying instances in EKS?

Yes, EKS is just managed K8s, which is the orchestration layer. You still need to have EC2 instances for the EKS tasks to be scheduled on. Unless you run your EKS tasks on Fargate, which is coming in 2018.

So if I do "kubectl get nodes" while using Fargate, what do I see in response?

Fargate is more analogous to EC2 than ECS or EKS.

Fargate is a placement target for containers, just like EC2 instances in a cluster would be.

You use ECS and EKS to define and schedule tasks/containers on a placement target.

The primary difference between Fargate and EC2 is that with Fargate you don't need to manage physical instances and the software stack running on them (Docker daemon, etc). When you start a task, it runs...somewhere. In the AWS "cloud".

With ECS and EKS you get a managed master, then you set up your own autoscaling groups etc to deploy nodes (which you manage) into the cluster.

With Fargate, you get access to AWS-managed multi-tenant nodes. So, Fargate connects to either your ECS or EKS cluster and avoids the need for you to worry about managing the nodes as well.

After reading lots of negative comments about the pricing, I think many people don't get it: AWS Fargate is not a replacement for EC2 or ECS. It's a sort of Lambda with containers, with lots of features (HA, autoscaling, etc.) already implemented, plus pay-per-second billing, which is absolutely great! This way you could run short- and long-running container-based jobs (a django-admin job that performs migrations? dunno... just saying...), and also your "normal" services, without taking care of scaling up/down, HA, etc.

It's not for everyone, and it's not a one-size-fits-all solution; it's very specific, and what it does, it does great (only briefly tested, of course; we have to see long-term...). You no longer need to manage a cluster, which is really expensive, especially because you don't want to shut down your machines when you go home and restart them when you come back to the office (for example, in case you don't have smart autoscaling in place).

Thanks AWS for providing this service!

I strongly agree with you.

Comparing the pricing of Fargate to EC2 is the same as comparing the pricing of EC2 to bare metal. It is more expensive, but it is also more convenient, and is easier to scale up and down.

So what would be some real use cases?

I mentioned one use case, which is very similar to a batch job, I agree; so there is no real difference there between Fargate and Batch.

However, you don't have to execute only batches with it, you can also run a temporary service within a specific VPC - with the biggest advantage that you don't have to resize/manage your cluster.

For example, you could set a CloudWatch alarm that, when CPU reaches 80%, spins up a bunch of instances of a specific image with Fargate and keeps them alive until CPU drops back to 60% (at which point they can be stopped). This way you don't have to worry about optimizing your autoscaling in ECS, because sometimes peaks happen in a matter of seconds, and you may not have enough EC2 instances running because lots of containers are spinning up in parallel. With Fargate it's like having an unlimited number of EC2 instances in your cluster... (which you pay for, but certainly less than if you really had such EC2 instances always running).
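A rough sketch of that cost argument, with invented burst numbers (the per-hour rates are the corrected Fargate prices from upthread; the burst hours and task counts are assumptions for illustration only):

```python
# Back-of-envelope: paying Fargate rates only during bursts vs. keeping an
# extra on-demand instance around all month. Burst figures are made up.
FARGATE_HOURLY = 1 * 0.0506 + 2 * 0.0127  # 1 vCPU / 2 GB task, $/hour
M3_MEDIUM_MONTHLY = 49.0                  # on-demand figure quoted upthread

burst_hours_per_month = 40                # assumption: bursts total ~40 h/month
extra_tasks_per_burst = 4                 # assumption: 4 extra tasks per burst

burst_cost = FARGATE_HOURLY * burst_hours_per_month * extra_tasks_per_burst
print(f"Fargate burst capacity: ${burst_cost:.2f}/month")
print(f"Always-on m3.medium:    ${M3_MEDIUM_MONTHLY:.2f}/month")
```

With these assumed numbers the on-demand burst capacity is a fraction of an always-on instance; the break-even obviously moves as burst hours grow.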

Let's try to think about it from another point of view: you could also try to use Fargate to execute all your services, but then you would get the following features already implemented:

- autoscaling

- HA

- maintenance/cluster management

However:

- you don't like how AWS does autoscaling, because you notice that the nodes are often under stress, and you would like to use a different strategy; or maybe you have developed your own autoscaling algorithm that works great and saves you a large amount of money

- HA is trivial for you to implement, because you already have a lot of experience with it and lots of CloudFormation scripts, and so far it has worked like a charm, so why switch now? The platform is really functioning well; no need to switch to another technology

- maintenance is not a problem for you, because your ECS cluster is small and easy to manage

Maybe in such cases, yes, you don't need Fargate after all. Keep your ECS cluster and don't worry about that.

Right now, we have 2 services and 1 scheduled job running on an ECS cluster on 2 m3.medium spot instances. However, we are utilizing less than 10% of the resources available, so moving to Fargate would be cheaper and much more convenient for us.

Unfortunately Fargate isn't available on the Ireland region yet.

This is a really cool midway point between Lambda and EC2. You can have a large codebase and run it continuously, but on "serverless".

This is going to be really great for batch jobs which need isolated environments. I have been waiting for something like this for a long time. Amazon is really doing work. I'll definitely be using this.

Have you tried AWS Batch? My team moved a couple of our batch machine learning modeling jobs to it earlier this year and it's worked out great.


How would you use Batch + Fargate? Let’s assume Fargate is a supported compute environment in Batch.

(I run the containers org at AWS. I happen to run Batch as well)

My org is looking to move machine learning to batch as the underlying infra.

All I want is to be able to do this: 1) specify a DAG of tasks, where each task is a Docker image, a CMD string, and CPU and memory limits; 2) hit an API to run it for me, with each task running on a new spot instance; 3) be able to query this service about the state of the DAG and of each individual node.

Sounds like if AWS provides an API to create a batch cluster (or whatever you call it) and lets the tasks be defined in terms of what docker image to run with what command you'll satisfy this desire
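For what it's worth, the spec being described is small enough to sketch. The shape below is invented for illustration (it is not an actual Batch or Fargate API); it just captures the DAG-of-container-tasks idea and the dependency-respecting launch order a scheduler would use:

```python
# A DAG of container tasks: each node is an image + command + resource limits;
# edges are dependencies. All names and shapes here are illustrative only.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

tasks = {
    "extract":   {"image": "myorg/extract:latest",   "cmd": "python extract.py",   "vcpu": 1, "mem_gb": 2},
    "transform": {"image": "myorg/transform:latest", "cmd": "python transform.py", "vcpu": 2, "mem_gb": 4},
    "train":     {"image": "myorg/train:latest",     "cmd": "python train.py",     "vcpu": 4, "mem_gb": 16},
}

# node -> set of nodes it depends on
deps = {"transform": {"extract"}, "train": {"transform"}}

# A scheduler would submit each task (e.g. on a fresh spot instance) in this
# order, polling per-task state as it goes.
launch_order = list(TopologicalSorter(deps).static_order())
print(launch_order)
```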

That is in line with our vision for Batch: to be the engine for systems where you essentially describe a DAG and we run and hyperoptimize the execution for you. We do some of what you're asking for, but that's great feedback around what you'd like to do.

Thanks for the response!

I'm curious, what would be the interaction between Batch and Fargate? Right now I use Batch to run a container and then exit out with as little thought about the underlying machine as possible. Is Fargate a push further towards serverless?

One concern I had with Fargate from the product description is around the configuration options. Our models require more than 100GB to build, but I'm seeing "Max. 30GB".

I haven't really tried Batch. But from an initial reading of the documentation, it didn't look like it supported running Docker images. My use case requires running Docker images of static site generators and that sort of thing. Will take another look at it.

See here for the details on how to define a job, specifically around running docker images on ECS


Fargate is a very logical step. I agree Kubernetes is really nice but very complex for simplistic setups. Looking forward to using it; too bad it's only in N. Virginia.

We will be steadily rolling Fargate out across other regions starting in 2018.

I've been using hyper.sh and I really like it. In particular, I don't want a web interface: I can pull a container from the Docker repo and start it from my command line in 3 seconds, and attach IPs and storage, all in the terminal. How does this compare? I want to stay out of a web management interface.

AWS has an API, and a command line application for integrating with the API. You can (and probably should) use AWS without ever touching the web management interface.
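As a sketch of what that looks like in practice: with boto3 (the AWS Python SDK), launching a container on Fargate is a single `run_task` call against an ECS cluster. The cluster, task definition, and subnet names below are placeholders, not real resources:

```python
# Parameters for an ECS run_task call with the Fargate launch type.
# All resource names and IDs here are placeholders.
params = {
    "cluster": "my-fargate-cluster",        # hypothetical cluster name
    "launchType": "FARGATE",
    "taskDefinition": "my-app:1",           # registered family:revision
    "count": 1,
    "networkConfiguration": {               # Fargate tasks use awsvpc networking
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
}

# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("ecs", region_name="us-east-1").run_task(**params)
print(params["launchType"])
```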

For an easy getting-started command line experience for ECS I highly recommend this tool: https://github.com/coldbrewcloud/coldbrew-cli

If I understand this correctly, Fargate is similar to Elastic Container Service, without having to worry about EC2 instances? But can you also manage the EC2 instances with Fargate as well? It seems like AWS has lots of products that overlap, and it is confusing to end users.

I'd say this is exactly why Google Cloud is superior (in my opinion). AWS lacks user experience and a KISS philosophy. It just feels like AWS keeps bolting things on.

Actually, I think their approach is great: it's very modular and there's little overlap between use cases. An exception (among a few) may be ECS and EKS, where ECS was probably the wrong bet for them now that K8s is getting so much traction. Hence EKS.

But being able to use either EKS or ECS for orchestration, and then schedule those tasks on either EC2 or Fargate (depending on compute needs), opens up a lot of options. You can start simple and grow as requirements become more complex, without fundamentally changing the deployment artifact. That was the promise of containers originally, so it's nice to see it play out.

No, Fargate is just a container target, like EC2 is.

You manage the containers with ECS (Or the newly announced Kubernetes equivalent). They are placed on either EC2 instances or somewhere inside Fargate.

I wonder how they handle isolation. Linux container technologies don't normally provide sufficient isolation for multi-tenant environments, which is why most of the cloud container orchestrators require you to pre-provision VMs (ECS, GKE).

Azure Container Instances uses Windows Hyper-V Isolation that boots a highly optimized VM per container, so containers have VM isolation.

Has AWS built a highly optimized VM for running containers?

AWS employee here. Isolation is handled at the cluster level. Apps that are run in the same cluster may be run on the same underlying infrastructure, but clusters are separated.

It would be interesting to see how fast the startup time of the Docker containers will be. If it's faster than EC2, this could be used for some super-elastic job processing, somewhere between EC2 and Lambda. I doubt the startup time would be faster, since the Docker image download would hit the startup timing.

If the startup time is fast, and it can run on GPUs, a killer deep learning platform could run on this.

Everyone at Zapier was hoping for AWS managed Kubernetes.

Edit: Maybe we'll get it! https://twitter.com/AWSreInvent/status/935909627224506368

They announced that too, "AWS Elastic Container Service for Kubernetes", or EKS as they call it.

This is different, this is where your containers run, not for managing the containers.

You can use either ECS or EKS for scheduling containers on Fargate, the same as scheduling them on EC2 hardware.

It is there as well https://aws.amazon.com/eks/

Guess I should learn about Kubernetes now.

I found the talks by Brendan Burns to be very good for a high-level overview.

Fargate is complementary to the just-announced Elastic Container Service for Kubernetes (EKS)

Well that's one of the more nonsensical names to come out of AWS recently.

I'm pretty sure someone at Amazon is an Aqua Teen Hunger Force fan: https://www.youtube.com/watch?v=uOd7HQoKxcU

Oglethorpe: We have successfully traveled eons through both space and time through the Fargate. To get free cable.

Emory: I think it's a s-star gate

Oglethorpe: It's the Fargate! F! It's different from that movie, which I have never seen, so how would I copy it?

Was hoping this was close to Google's App Engine. Patiently waiting.

How does this compare to Heroku?

Would this be a direct competitor to Google Cloud's App Engine Flexible? I.e., I just upload my Docker container?

I haven't had a chance to dig through the documentation yet. Can we deploy a pod instead of just a container? One of the things we are struggling with is all the side services that have to go with a container deployment (i.e. a secure or OAuth proxy).

Fargate uses the same task definition abstraction as Amazon ECS. See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/l... So yes, you can launch multiple containers in a single logical unit.

A word of caution: ECS multi-container tasks do not have the same semantics as Kubernetes pods. In particular, there is no support for bidirectional network discovery.
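For reference, a multi-container task (an app plus a sidecar proxy) looks roughly like this in the ECS task definition shape. This is a sketch following the RegisterTaskDefinition field names; the images and the proxy itself are placeholders:

```python
# Rough shape of an ECS/Fargate task definition with a sidecar container,
# following the RegisterTaskDefinition fields. Images are placeholders.
task_definition = {
    "family": "web-with-auth-proxy",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required for Fargate; containers share the task's ENI
    "cpu": "256",              # task-level CPU units (0.25 vCPU)
    "memory": "512",           # task-level memory, in MiB
    "containerDefinitions": [
        {"name": "app",
         "image": "myorg/app:latest",
         "essential": True},
        {"name": "oauth-proxy",
         "image": "myorg/oauth-proxy:latest",
         "essential": True,
         "portMappings": [{"containerPort": 443}]},
    ],
}

# In awsvpc mode the containers in one task share a network namespace and can
# reach each other over localhost, which is what makes the sidecar pattern work.
print([c["name"] for c in task_definition["containerDefinitions"]])
```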

Is there any plan for Fargate + EKS to be able to support attached EBS volumes? Please say yes.

We are super interested in enabling EBS support for Fargate. We do not have any timelines, but would love to know what your expectations are and what you would use EBS for.

(I run the containers org at AWS)

The goal is to have a developer write up a service definition with, e.g., a web tier, service tier, and database tier, where some of those pods might need persistent data volumes, and to expect EKS to be able to run that application for them without my intervention, even if I were to have something shooting the underlying compute nodes in the head (though ideally I won't even sweat those nodes' existence, thanks to Fargate).

We'd be using services like RDS for everything we could, of course, but sometimes someone insists on persisting something to disk, and sometimes that strategy makes sense.

One good example use case would be running distributed NoSQL / KV stores, say a Riak KV cluster, or an image caching and processing service. Both of these would probably be best served by SSD EBS volumes, bringing the data storage and compute closer together rather than using RDS or similar; in certain use cases this can be significantly faster due to fewer network calls and lower latency. As an example setup, Rancher has a plugin that allows using Docker data volumes backed by EBS volumes. It handles the naming, attaching the drives to the EC2 instance, etc.

I currently run Cassandra inside a container. The data is on an EBS volume, attached to the instance through CloudFormation at stack creation time, and mounted through a systemd unit defined in UserData (Also through CloudFormation). It is then exposed to the container via a Docker volume mapping specified in the task definition (Also through CloudFormation!).

Would love to have an extension of the run-task command that specifies an EBS volume to attach and where to mount it when using Fargate.
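For context, the host-volume plumbing described above looks roughly like this in task-definition form. This works on the EC2 launch type, where the EBS volume is already mounted on the instance; the paths and names below approximate the commenter's scenario and are illustrative only:

```python
# Task-definition fragments for mapping an instance-mounted EBS volume into a
# container (EC2 launch type). Paths and names are illustrative.
volumes = [
    {"name": "cassandra-data",
     "host": {"sourcePath": "/mnt/ebs/cassandra"}},  # EBS mount point on the instance
]

container_definition = {
    "name": "cassandra",
    "image": "cassandra:3.11",
    "mountPoints": [
        {"sourceVolume": "cassandra-data",
         "containerPath": "/var/lib/cassandra"},
    ],
}

# A Fargate-native equivalent would need run-task (or the task definition) to
# accept an EBS volume ID instead of a host path, which is the feature being
# requested here.
print(container_definition["mountPoints"][0]["containerPath"])
```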

Silly question! Of course we should be able to attach EBS. How else does one provide data persistence? Please don't say "EFS".

Pet sets describing EBS requirements. This way, if a pet set instance is migrated, the volume can be moved and set up with it.

One of our use-cases involves operating a time-series database for collecting app metrics, InfluxDB.


edit: (I dont actually know, im just saying yes cause you asked)

:) and if you ever have questions or feedback we are a tweet away.

Is there any work in progress to simplify the AWS console? With so much (more) cool stuff being announced, AWS feels a bit overwhelming for some, me included. I'm referring to the UI part mostly, and some concepts like policies and user management. Forgive me if this is the wrong place to ask... then I'll try Twitter ;-)

For low-utilization, low-cost continuous applications (think a WebSocket listener with not much to do), this lowers the entry-level cost below a t2.nano, it looks like. That's a win in my book.

Wow, "hundreds of millions of new containers started each week"; these are pretty insane numbers. Insane in a very cool and mind-numbing way, that is!

It has nothing to do with that movie, or the syndicated series based on the movie...

A part of me really hopes the PM named it this as an Aqua Teen reference.

Did anyone else notice that 11/15 top stories on HN right now are Amazon announcements? Crazy.

Sorry for the offtopicish post...

re:Invent causes this to happen every year. And to be frank, AWS announcements generally have a major impact on the internet as a whole, especially on how people do business on it. So it's deserved, I would say.

Full disclosure: former AWS Employee

Yes, their big annual conference: https://reinvent.awsevents.com/

That would explain it!

Isn't that because re:invent is happening now?

It's the AWS developer conference right now. You see the same effect for other big tech firms during their respective equivalents.

How is that different from when Apple has its launch days? The announcements are mostly relevant to the major audiences here; it seems reasonable to think they'd get a surge of upvotes.

They need to chill on posting. They posted 15 posts to the front page, and that's 50% of the headlines... all Amazon.

Hope everyone loves Amazon!

How is it different from ECS? I tried to apply for it and just ended up on my ECS page.

Is it the "AWS Day" or something ? I see 5 AWS related news in the top !

AWS re:Invent is happening this week. A lot of announcements and product launches.

AWS re:Invent is happening in Vegas right now

I wonder when Fargate will hit GA and be available in the Ireland region.

Looking at the number of Amazon products on the front page, it's mind-blowing. Amazon will probably have a monopoly on developer mindshare in the future.

It's not like it's a pattern. Today is AWS re:invent. The same is true of Google and MS during their respective annual dev conferences.

12 stories on the front page of HN leading to amazon.com and their offerings? Hm....

It's the first day of re:invent and they have a 40%+ market share of IaaS. Not surprising that it's all over HN.

Note: I do not work for Amazon. :)

Announcing everything on the same day pays off :)


A third of the front page is Amazon; what's going on? Did they release a dozen products in one go? Interesting release strategy, bulking everything as opposed to spacing it out.

Yes, their big annual conference: https://reinvent.awsevents.com/

re:Invent is happening right now.
