Hacker News

Fargate looks really expensive compared to just running an EC2 instance. A 1 vCPU container with 2GB of RAM will run you $55/month. An m3.medium with 1 vCPU and 3.75GB of RAM is $49. The prices seem to get uncomfortably worse from there. I haven't priced them out the whole way, but a 4 vCPU container with 8GB of RAM ($222/month) is price-competitive with a freaking i3.xlarge ($227/month), which has 4 vCPUs, 30.5GB of RAM, and 10Gbit networking. Topping Fargate out at 4 vCPUs and 30GB of RAM puts it right between an r4.2xlarge and an i3.2xlarge, both with 8 vCPUs and 61GB of RAM (the i3 is more expensive because it also has 1.9TB of local SSD).

Enough people are still trying to make fetch happen, where fetch is container orchestration, that I expect that fetch will indeed eventually happen, but this is a large vig for getting around not-a-lot-of-management-work (because the competition isn't Kubernetes, which is the bin packing problem returned to eat your wallet, it's EC2 instances, and there is a little management work but not much and it scales).

If you have decided that you want to undertake the bin packing problem, AWS's ECS or the new Elastic Kubernetes Service makes some sense; you're paying EC2 prices, plus a small management fee (I think). I don't understand Fargate at all.




AWS employee here. Just want to say that we actually had a typo in the per second pricing on launch. The actual pricing is:

    $0.0506 per CPU per hour
    $0.0127 per GB of memory per hour
Fargate is definitely more expensive than running and operating an EC2 instance yourself, but for many companies the savings from spending less engineering time on devops will make it worth it right now, and as we iterate I expect this balance to continue to tip. AWS has dropped prices more than 60 times since we started out.
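For anyone sanity-checking the parent comment's figures, here's a quick sketch of the monthly cost at the corrected hourly rates above (assuming a 30-day, 720-hour month):

```python
# Estimating monthly Fargate cost from the corrected hourly rates.
CPU_PER_HOUR = 0.0506    # $ per vCPU-hour
MEM_PER_HOUR = 0.0127    # $ per GB-hour
HOURS_PER_MONTH = 24 * 30

def monthly_cost(vcpus, mem_gb):
    return (vcpus * CPU_PER_HOUR + mem_gb * MEM_PER_HOUR) * HOURS_PER_MONTH

print(round(monthly_cost(1, 2), 2))  # ~54.72, in line with the ~$55/month figure upthread
```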


I know AWS services put outsized fees on things that don't really have marginal costs (e.g. S3 read operations), because the fees are used to disincentivize non-idiomatic use-cases (e.g. treating S3 as a database by scanning objects-as-keys.)

Under this economic-incentive-system lens, I'm curious whether new AWS services might also be intentionally started out with high premiums, as a sort of economically-self-limited soft launch. Only the customers with the biggest need will be willing to pay to use the service at first, and their need means they're also willing to put up with you while you work out the initial kinks. As you gain more confidence in the service's stability at scale and matching to varied customer demands, you'd then lower the price-point of the service to match your actual evaluation of a market-viable price, to "open the floodgates" to the less-in-need clients.


Tangentially related: you can definitely use S3 as a database now, and they seem to encourage it: https://aws.amazon.com/blogs/aws/s3-glacier-select/


Without lock support, I wouldn't consider it a database.


Kind of a read mostly database, actually useful for data lake type things.


Yeah I'm really looking forward to this as I often have to get some part of one of many jsonl files that matches some condition.

S3 select will likely let me delete a lot of custom code.


Well, if something needs more burn-in time, the last thing you want to do is get thousands or millions of customers. The reliability of a Timex or a Toyota has to be high.


Yes, this is the standard pricing model for all new tech.


I’m your target for this service as a large consumer of ECS that hates dealing with the underlying resources. Despite Fargate being very compelling, it’s priced too damn high (even without the mistake).

I’d be willing to pay a premium but this is just not cost effective.


You hate starting an AMI and having a simple script to connect to an ECS cluster?


No, that’s the easy part. I strongly dislike that auto scaling groups do not communicate with ECS (service scaling) and that updating your nodes to run a new ECS optimized ami is tedious and error prone.

Want to scale up/down when you have high or low load? Cool, configure it at the ECS service level AND at the auto scaling group level. Will the auto scaling group take instances away before they’ve been drained of connections, or at random? Who knows? The systems don’t talk to one another to coordinate: fail.

What about just removing a node from the cluster when it fails? Decrease service count min temporarily -> drain connections at the ELB -> remove from target group -> kill instance. This is a 5-10 minute process to do correctly that should be one click. Same problem: stuff doesn’t talk.

Want to update to a new ECS optimized AMI to get updates and patches, while ensuring a minimum of core services keep running as you scale down after you’ve scaled up? Good luck!


Also, last time I checked IAM roles sucked on ECS. Not every container in the cluster should have the same role. I wonder if fargate fixes this.


I used the per-hour pricing in my numbers because I assumed the per-second was wrong, yeah.


The issue for me is we don't have funding, so we're just paying out of savings until we make money.

The cheapest seems to be more than $40, but based on the description it's only allocated $5 to $10 worth of computing. I'm using Linode or Digital Ocean pricing for that, but even with Amazon's inflated EC2 pricing it's probably $25 max.

So I can't justify paying $40 per month when I can get the same horsepower for $10 or $20. I can set up an open source system for monitoring and containers restart automatically.

For people that have $200,000 burning a hole in their pocket, it may be a different story.


Focus on making a product that makes money and afterwards build with containers and these complex-to-maintain technologies.

Start simple and take care of your scalability problems when you have them, not before.


This.


If you are trying to pay bottom dollar for hosting, AWS is not the solution.


You're completely right. You can get an amazing server from a very good hosting provider at a fraction of the AWS price. Really. For $200/month you can get a good server, at a good host, that costs literally $5000/month or more on AWS. Azure and Google are no different.

What you don't get is all their regions, easy scalability with EBS, managed services, etc. But all of that has a very high price, and depending on who you are and what you're trying to do, these cloud providers are probably the worst option.


Curious if this also holds for spot nodes...

For build and test tasks... sure scalability would be needed.

What would you suggest? Scaleway seems to have poor APIs for automation, and Atom CPUs, so it's not useful.


Main thing was security... DO offers a firewall now, though.


A T2 will give even better real-world performance than an M3, at 2x vCPU and 4GB of RAM at ~$34/mo. Unless you're compute bound the bursting performance of T2s is perfect for web services. Add an instance reservation, with no upfront cost, and you're looking at ~$20/mo for a t2.medium. Using Fargate will reduce your instance management overhead, but not worth it at over 2x the price, at least for me.

I'd rather have two T2's than one M4 for most web services, or 8 T2's over 4 M4's etc. for both better performance, reliability and price. T2's are the best bang for your buck by far, as long as your service scales horizontally.


No way t2's are a good choice for your average webservice. You don't get the full CPU with t2's. With a t2.medium if you're using over 20% of the cpu (40% of 1 vcpu or 20% of both), you're burning CPU credits. So unless you have a webapp that uses 8gb of memory yet stays somewhere under 20% cpu utilization (maybe with some peaks), you'll eventually get throttled.

t2's are for really bursty workloads, which only makes sense for a webservice if you aren't handling much traffic in general and you have maybe 2 instances total.
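The 20% baseline figure above is easy to verify: a t2.medium earns 24 CPU credits per hour, where one credit is one vCPU running at 100% for one minute. A quick sketch of the arithmetic:

```python
# t2.medium credit arithmetic: 24 credits earned per hour,
# one credit = one vCPU-minute at 100% utilization.
CREDITS_PER_HOUR = 24
VCPUS = 2

# Sustainable baseline utilization across both vCPUs before the
# credit balance starts to drain:
baseline = CREDITS_PER_HOUR / 60 / VCPUS
print(f"{baseline:.0%}")  # 20%
```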


> So unless you have a webapp that uses 8gb of memory yet stays somewhere under 20% cpu utilization (maybe with some peaks), you'll eventually get throttled.

FWIW this is, unfortunately, a pretty common description of many low traffic Rails apps.


Just for a data point;

We have been serving around 10 million web requests / day per t2.medium. Not lightweight static files or small json responses, actual dynamic pages with database load. Our servers mostly sit around 20% though.

Might not be that much compared to larger web services, but we have a localised audience, so nighttime CPU credits get collected and used during peak times. So it fits our workload. They are great value when the credit system works in your setting.

Have a grafana dashboard constantly monitoring credits and have alerts in place though. But haven’t had a sudden issue that we needed to manually remedy.


> Have a grafana dashboard constantly monitoring credits and have alerts in place though.

This, otherwise your infrastructure will come grinding to a halt at the worst time. It's a potential DoS vector in that way too, if you're relying on charging up credits overnight to handle traffic during the day.


Can help avoid that with the even-newer t2.unlimited [1] instances, it seems.

[1] https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...


Counter data point: my Erlang nodes on t2 often report lockups of more than 400ms. This is not acceptable when queries are handled in 10ms on a typical day. Erlang has built-in monitoring for when the underlying system scheduler makes an error.


Most web apps are IO bound, not CPU bound, and a throttled T2 has IO to spare versus its CPU usage.


t2's are great until something happens and you really need the full CPU, then you get throttled and suddenly your service goes down because it can't keep up with the load.

If you use t2's for anything important, keep an eye on the CPU credit balance.


I work on T2, and boy do I have a fix for you!: https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...


Just applying an SELinux policy on CentOS 7 will kill the CPU credits on a micro instance. Running updates is a risky business.


But if you get throttled in the first place that means you're going to have some kind of performance degradation since your server was using more than the baseline level. Web apps being IO bound isn't relevant here, because the only requirement for issues to arise is for your server to have consistent 20%+ cpu usage.


Well, they're relevant in that a heavily IO bound app is probably unlikely to use much CPU -- it's too busy waiting on IO to use 90% of CPU, and maybe does stay mostly under 20%. Obviously this depends on a lot of details beyond "IO-bound", but it is not implausible, and I think does accurately describe many rails apps.


t2 instances work great for many web services with daily load fluctuations if you view them as basically a billing mechanism. AWS originally offered a reserved-instance model where you didn't pay for instances that weren't running, but now that you pay for RI instance-hours regardless of usage, scaling down doesn't save you any money, so you either buy RIs to cover your peak usage (and waste money off-peak) or pay much higher on-demand prices some of the time. With t2's (which were introduced around the same time as the RI change), you can just run the right number of instances to keep your CPU Credit Balance above water (stockpiling credits off-peak) and buy RIs for them all, making them extremely cheap. And you can still auto-scale on the credit balance to avoid throttling in case of unexpected load.


I work on T2, and we just released a change that makes your CPUCreditBalance recover immediately instead of through the old complex 24h expiration model. There is now much less need to stockpile or manage your CPU credits: https://forums.aws.amazon.com/ann.jspa?annID=5196

With T2 Standard auto-scaling based on the CPUCreditBalance can be a really bad idea, because the rate at which an account can launch T2s with an initial credit balance is limited. If your application has a bug that causes a health check to fail, the ASG can quickly burn through your launch credits by cycling the instances, and then it's possible for your health checks to keep failing because the later instances start with a CPUCreditBalance of zero.

We just released T2 Unlimited in part to solve that. In this new version all instances start with a zero balance, but none are ever throttled and you can do a sustained burst immediately at launch: https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyo...


I'm a bit confused by the wording on the T2 unlimited post:

> T2 Unlimited instances have the ability to borrow an entire day’s worth of future credits, allowing them to perform additional bursting.

What does this mean? Specifically, what is meant by "borrow"? Do they have to be paid back? Does the next day then have fewer credits?


Instead of ever being throttled your instance accumulates a "borrowed" CPUSurplusCreditBalance. If your CPU Utilization goes back below the threshold, the earned credits will pay down the surplus balance. If you terminate an instance with a nonzero CPUSurplusCreditBalance you'll be charged for the extra usage.

We removed the direct relationship between credits and 24h cycles, so your current usage no longer affects tomorrow's balance: https://forums.aws.amazon.com/ann.jspa?annID=5196


Ahhh, I understand now, thanks.


You're completely correct. I was just going price-to-price.


Just wanted to share a perspective:

I think it's misguided to call it expensive, or to compare a more abstracted service like Fargate against something more granular like EC2.

If I need a service that lets me prototype, build a product, be first to market, etc. splitting hairs over compute costs seems moot. Not to say it isn't interesting to see the difference in pricing or how AWS quantified how to set pricing of the service.

FWIW, if you watched the Keynote stream, the theme of it was literally "Everything is Everything" and a bunch of catch-phrases basically meaning they've got a tool for every developer in any situation.

One other note: from my experience, it's often easier to migrate from fully-managed to more self-managed services than the other way around. By owning more of the operations, you make more decisions about how things operate, and those decisions turn into the pain points of any migration project.


But does this lock you in to Amazon?

Trying to run a DCOS/marathon or K8s cluster is not trivial. Last time I looked, every service out there basically spun up a Docker machine with some auto-magic cert generation.

Surely there are other services out there which will just run containers, allowing you to deploy from a k8s or marathon template? What are the other options?


> But does this lock you in to Amazon?

Sorta. You can always do more yourself, usually for a cost.

Most AWS services IME offer a fair amount of portability. My high-level point is: I just want tools to build things, not to obsess over the tools I use to build them.

That said, I don't have specific enough domain knowledge to answer your questions or suggest alternatives.


Sadly even giants like Rackspace are moving to being managers of AWS services. It’s hard to beat Amazon’s pricing at scale, even if this is a touch overpriced.


Got a cite for rackspace reselling aws resources?




Looking through the pricing page I'm not sure what sort of workload this would make sense for. Just looking at the examples from the pricing page I think I'm getting sticker shock.

- https://aws.amazon.com/fargate/pricing/

Example 1:

> For example, your service uses 1 ECS Task, running for 10 minutes (600 seconds) every day for a month (30 days), where each ECS Task uses 1 vCPU and 2GB memory.

> Total vCPU charges = 1 x 1 x 0.00084333 x 600 x 30 = $15.18

> Total memory charges = 1 x 2 x 0.00021167 x 600 x 30 = $7.62

> Monthly Fargate compute charges = $15.18 + $7.62 = $22.80

So the total cost for 5 hours of running time is $22.80? Am I even reading this correctly? If so, what would this be cost effective for?


I think they mislabeled the pricing. If you look at the per-hour pricing ($0.0506/CPU-hour and $0.0127/GB-hour), that translates to $0.00084333 and $0.00021167 per minute, which is a pretty reasonable price. This also makes sense in light of their recent announcement of per-minute EC2 billing.
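That conversion is easy to check: dividing the stated hourly rates by 60 reproduces the "per-second" numbers on the pricing page exactly, which is what suggests they were actually per-minute rates.

```python
# The hourly rates from the pricing page:
CPU_PER_HOUR = 0.0506
MEM_PER_HOUR = 0.0127

# Dividing by 60 gives the rates that were labeled "per second":
print(round(CPU_PER_HOUR / 60, 8))  # 0.00084333
print(round(MEM_PER_HOUR / 60, 8))  # 0.00021167
```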


You are correct, we mislabelled the pricing on launch, it is corrected now. The correct values are:

    $0.0506 per CPU per hour
    $0.0127 per GB of memory per hour


Hopefully this is the correct math for their example 1:

(1 * 2 * 0.00021167 * (600/60) * 30) + (1 * 1 * 0.00084333 * (600/60) * 30) = 0.380001

Because that's much better than the original:

(1 * 2 * 0.00021167 * (600) * 30) + (1 * 1 * 0.00084333 * (600) * 30) = 22.80006


Ah yes, that makes much more sense. Hopefully they will update the pricing page with the correct values soon :)


That is correct


Fargate seems like it's an in-between of Lambda and ECS. Lambda because it's pay-per-second on-demand functions being run (or in the case of Fargate, containers) and ECS because Fargate is ECS without having to worry about having the EC2 instances configured. I'm not sure where this falls in, but maybe developers were complaining about Lambda and wanted to just run containers instead of individual functions?


Lambda has some limitations, such as cold starts, a 5 min max execution time, etc., because it is designed for a much more granular operational model. Fargate is designed to run long-running containers that could stay up for days or weeks and always stay warm to respond to requests, so there is no cold start.


The way I think about it is temporal and spatial control, and giving up control over them so that some common entity can optimize and drive down your costs. With Fargate, you're giving up spatial control so you can just pay for the task resources you asked for. With Lambda, you're additionally giving up temporal control so you can pay for resources only when your lambda is actually servicing a request.

When I think about the offerings this way, I can start to decide when I want to use them because now I can ask myself "Do I need strict temporal/spatial control over my application?" and "Do I think I can optimize temporal/spatial costs better than Lambda/Fargate?".


I assume as much; my contention is that that's not gonna really be worth it even to the people who think they want it. Not at this price.


It's Lambda without the 5 minute limit.


Plus the ability to run custom containers, which may be valuable for some workloads.


Yes. If you're using Elastic Beanstalk, or Cloudformation with autoscaling, Fargate seems to be an incredible waste of money. Maybe if you have an extremely small workload that doesn't need a lot of resources running, I could see it, but at that point you'd be better off with Lambda instead?


Can you elaborate on what the bin packing problem is?


Kubernetes requires machines big enough to run all your containers. Those machines are the bins. Your containers are the packages. Fitting your containers in such that there is no criticality overlap (in AWS, that all instances of service X are spread across machines in different AZs) and that there is room for immediate scaling/emergency fault recovery (headroom on the machines running your containers) gets expensive. You're buying big and running little, and that comes with costs.

Meanwhile, in AWS, you already have pre-sized blobs of RAM and compute. They're called EC2 instances. And then AWS pays the cost of the extra inventory, not you. (To forestall the usual, "overhead" of a Linux OS is like fifty megs these days, it's not something I'd worry about--most of the folks I know who have gone down the container-fleet road have bins that are consistently around 20% empty, and that does add up.)

You may be the one percent of companies for whom immediate rollout, rather than 200-second rollout, is important, and for those companies a solution like Kubernetes or Mesos can make a lot of sense. Most aren't, and I think that they would be better served, in most cases, with a CloudFormation template, an autoscaling group with a cloud-init script to launch one container (if not chef-zero or whatever, that's my go-to but I'm also a devops guy by trade), and a Route 53 record.

You're basically paying overhead for the privilege of `kubectl` that, personally, I don't think is really that useful in a cloud environment. (I think it makes a lot of sense on-prem, where you've already bought the hardware and the alternatives are something like vSphere or the ongoing tire fire that is OpenStack.)


I know you're answering the question of bin-packing, but after two years of experience with it, I can say that for me, bin-packing is one of the smallest benefits (though it sells very well with management), though perhaps a baseline requirement these days. The real benefits, in my experience, stem from the declarative nature of cluster management, and the automation of doing the sensible thing to enact changes to that declarative desired state.


Sure. CloudFormation exists for that, though; both its difficulty and its complexity are way overstated, and it also lets you manage AWS resources on top of that.

And it doesn't cost anything to use.


Eh, there are a lot of terrible things I'd rather put myself through than writing another CloudFormation template for any sort of complex infrastructure. It could have been made easier and more readable if my company had allowed the use of something like Monsanto's generator [1], but creating ASTs in JSON is not my idea of a good user experience.

[1] https://github.com/MonsantoCo/cloudformation-template-genera...


I maintain auster[1] and am a contributor to cfer[2] for exactly that purpose. ;) CloudFormation really isn't a rough time anymore, IMO.

[1] - https://github.com/eropple/auster

[2] - https://github.com/seanedwards/cfer


If you know those tools exist, maybe. I just put together a new project using cloudformation (technically serverless, but it turned into 90 percent cloudformation syntax anyways), and it was pretty rough.


Maybe it's just me, but as a programmer the first thing I ever asked when looking at the wall of CloudFormation JSON was "so how do we make this not suck?".

Our job is not just to automate servers, it's to automate processes, including stupid developer-facing ones.


True, but as a _programmer_, working on a _new to me platform or package_, I am _very_ reluctant to add an extra third-party abstraction layer which requires its own evaluation of quality and stability and some learning curve. It's gotta be pretty clear to me that it really is "what everyone else is doing", or I've gotta get more experience with the underlying thing to be able to judge for myself better.

I've definitely been burned many times by adding an extra tool or layer meant to make things easier, that ends up not, for all manner of reasons. I think we all have.


Worth noting that "nearly 31,000 AWS CloudFormation stacks were created for Prime Day" [1], so Amazon uses CloudFormation heavily internally. Not a guarantee that it's what 'everyone else is doing', but it's a good indicator of quality/stability and that it will remain a core service within the AWS ecosystem for some time.

[1] https://aws.amazon.com/blogs/aws/prime-day-2017-powered-by-a...


I think they're talking about a CF template generator being the third-party software (I could be wrong).


You're not wrong. But in my case, that had been basically forbidden as an option, essentially because it "wasn't supported by Amazon," and because there's just additional risk to non-standard approaches. AWS certifications cover CloudFormation, so you can hire for that with low risk pretty easily. Other nonstandard utilities, not so much.


Cloudformation templates can be written in YAML now, which is a lot less sucky than writing JSON by hand.


If your only experience with CloudFormation is hand-written JSON, it's worth another look.

We used to use troposphere, a Python library for generating CloudFormation templates, but have since switched back to vanilla CloudFormation templates now that they added support for YAML. We're finding it's much nicer to read and write in plain YAML. We're also now using Sceptre for some advanced use cases (templatizing the templates, and fancier deployment automation).


> If your only experience with CloudFormation is hand-written JSON, it's worth another look.

Strongly agree.

YAML and sensible formatting conventions really do transform the usability of CloudFormation.


And so does terraform which is pretty awesome!


Terraform requires significant infrastructure to get the same state management and implicit on-device access that CloudFormation's metadata service does. A common pattern in systems I oversee or consult on is to use CloudFormation's metadata service (which is not the EC2 metadata service, to be clear) to feed Ansible facts or chef-zero attributes in order to have bootstrapped systems that do not rely upon having a Tower or Chef Server in my environment.

The Terraform domain spec is not sufficiently expressive (just look at the circumlocutions you need to not create something in one environment versus another). It's way too hard to build large modules, but the lack of decent scoping makes assembling many small modules difficult too. Worse, the domain spec also requires HCL, which is awful, or JSON, which is a return to the same problem that cfer solves for CloudFormation. One of my first attempts at a nontrivial open-source project was Terraframe[1], a Ruby DSL for Terraform; I abandoned it out of frustration when it became evident that Terraform's JSON parsing was untested, broken, and unusable in practice. Out of that frustration grew my own early CloudFormation prototypes, which my friend Sean did better with cfer.

If you're looking for an alternative to CloudFormation, I generally recommend BOSH[2], as it solves problems without introducing new ones. Saying the same for Terraform is a stretch.

[1] - https://github.com/eropple/terraframe

[2] - https://github.com/cloudfoundry/bosh


Cloudformation is not without its problems, even still. I feel you overstate Terraform's issues, though there is tons of valid criticism to go around. I would say Terraform really shines in areas CF still does not.

We use Terraform for our foundation and a clever CloudFormation custom resource hack to export values out for use in CloudFormation stacks where appropriate (with Serverless services, etc). Works great for us; Terraform has seen significant (surprising, even, if you haven't looked at it in 6+ months) development over the past year.


Immutability, in other words.


It's multi-machine scheduling, basically. Given N resources and M consumers, how can I fit all M consumers most efficiently, while using the minimum N?

The bin metaphor is that you imagine one or several bins on the floor, and a bunch of things to place in them. Bin packing is just playing Tetris to make sure all your things are packed into as few bins as possible, because bins cost money.

https://en.wikipedia.org/wiki/Bin_packing_problem
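As an illustration, here's a toy first-fit-decreasing heuristic, one common bin-packing approximation (the item sizes and capacity are made up; think container memory requirements in GB packed onto 16GB nodes):

```python
# First-fit-decreasing: sort items largest-first, then place each item
# into the first bin with room, opening a new bin when none fits.
def first_fit_decreasing(items, capacity):
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# 8 hypothetical containers onto 16GB nodes:
packed = first_fit_decreasing([10, 7, 5, 4, 3, 3, 2, 2], capacity=16)
print(len(packed))  # 3 nodes needed
```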


https://en.wikipedia.org/wiki/Bin_packing_problem

If you have a bunch of jobs and you need to run them efficiently on a bunch of compute, you need to be careful not to oversubscribe the hardware, especially wrt memory. There's an isomorphism between running differently sized jobs concurrently on a compute resources, and the bin packing problem. It's a scheduling problem.


Running load efficiently on a given resource. Most VMs running a single app are under-utilized so it's more efficient to pack apps into containers and run them across a smaller pool of servers so that they all get the necessary resources without waste.

Kubernetes does really well with this, although ease of deployment using config files and the abstraction from underlying vms/servers is probably more useful for most companies.


Kubernetes emphatically does not do better at resource utilization than not using Kubernetes. You should figure on between ten and twenty percent of wastage per k8s node, plus the costs of your management servers, in a safely provisioned environment.

You can argue about the configuration-based deployment being worth it--I disagree, because, frankly, Chef Zero is just not that complicated--but it's more expensive in every use case I have seen in the wild (barring ones where instances were unwisely provisioned in the first place).


Based on what evidence? We can put hundreds of customer apps into a few servers and have them deployed and updated easily. We could try to manage this ourselves but it's much less efficient while costing much more effort. GKE also costs nothing for a master and there is no overhead.

K8S/docker also makes it easy to avoid all the systemd/init issues and just use a standard interface with declarative configs and fast deployments that are automatically and dynamically managed while nodes come and go. We have preemptible instances with fast local storage and cheap pricing that maintain 100s of apps. K8S also easily manages any clustered software, regardless of native capabilities, along with easy networking.

Why would I use chef for that - if it can even do all that in the first place?


> GKE also costs nothing for a master and there is no overhead.

Just a historical perspective: GCE used to charge a flat fee for K8s masters after 6 nodes. After the announcement of the Azure K8s service, with no master-fee, GCE has dropped the fee as well :)


Yes, the pricing seems off by 3 orders of magnitude! $2,734 for a month of t2.micro-like capacity! Unbelievable!

https://aws.amazon.com/fargate/pricing/


Check your math. 1vCPU at $0.0506 per hour + 1 GB RAM at $0.0127 per hour gives $.0633 per hour. At 750 hours per month, that's $47.48 per month. A T2.micro is $8.70 per month, but not even close to a whole vCPU, so it's not a direct comparison.

Edit: I think they have a mistake on the pricing page: the per-second rate looks more like a per-minute rate. Doing the calculation with the per-second and again with the per-hour stated prices gives a 60x difference in monthly cost.

Edit2: Yep, they've now fixed the per-second price; it was originally 60x the correct price.


Maybe it's precisely only billing for cpu and memory consumed, so if your workload has a small footprint and is mostly waiting around a lot for other services to respond it would be really cheap?


nah, I asked, it's for the amount reserved, not the amount used. https://twitter.com/nathankpeck/status/935930795864211461


Can someone do the math and compare it to Cloud Foundry / OpenShift solutions? That AWS offering seems to be a step into this part of the market.



