This allows things like per second billing for container runs, serverless containers (there's no container running 24/7, only when there's traffic), etc.
To my mind this completely negates any value proposition of the container. The only thing missing, at face value, is something as straightforward as the Dockerfile for building base images. I imagine that shouldn't be hard using things like guestfish and the rest of the libguestfs tools.
Lambda works great as a deployment and execution model. This allows anything to run on Lambda, not just specially prepared runtimes.
I prefer containers because you build once and run anywhere, as opposed to building for each deployment target.
When your container is not running (say, 99% of time), other customers' containers are running. No need to ever boot the kernel, etc.
One might say that a unikernel has advantages over it, but it also has a higher barrier to entry.
How similar is AWS Fargate to what you're describing?
With Fargate-Lambda crossover I wouldn't be running anything 24/7, and it would be a lot less resource intensive than one Lambda-Container per request as well.
Google's App Engine got this right when it first launched, but to make it work they had to demand that apps be written for their sandbox (like AWS Lambda), which is why the model isn't as general purpose. Firecracker would allow regular containers to be used this way, making a Firecracker service the first to allow general purpose servers to be started and stopped (all the way to zero) based on incoming traffic.
(and, as thesandlord notes below, running arbitrary containers is the idea behind g.co/serverlesscontainers)
Disclaimer: I work on GCP.
It would be great if they announced that they were gonna remove EKS master costs altogether. Technically Firecracker should make it possible for them to run that infrastructure more efficiently :)
- If we're talking about a business that provides a service to local companies, there are quite a few hours during the week where everyone is either asleep or enjoying their weekend. Not every company has millions of users spread across every time zone; some companies provide a niche service to a small number of high paying users.
- Lots of developers have small hobby projects that are inactive for most of the day/week.
Scale to zero can be convenient, but it's not usually a make-or-break thing.
What I was getting at is that the "scale to zero" feature with Knative is rather worthless if you spend more money just running the Kubernetes master on EKS alone than you would spend running a $5 or $10 per month DigitalOcean instance.
Lambda scales all the way to zero, and it's free when it's at zero. You just pay for what you use. Actually, you pay for what you use minus $7, since every account always gets the free tier for Lambda.
5M GB-seconds (2 GB * 86e3 s/day * 28 days)
2400M function invocations @ 1k qps (86e3 * 1e3 * 28 / 1e6)
Does the math check out?
I know which model allows me to "scale to zero" faster. If I hand you $10, will you give me a nice used Honda?
Using your numbers and current Lambda prices:
5M GB-seconds * $0.00001667/GB-sec = $83.35
(5,000,000 * 0.00001667 = 83.35)
2400M function invocations * $0.2/million = $480
(2400 * 0.2 = 480)
total = $480 + $83.35 = $563.35/month, which is nowhere close to $17k/mo. I have no idea how you got that number.
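If anyone wants to sanity-check it, here's the same arithmetic in a few lines of Python (workload numbers from the comment above, prices as quoted; adjust the constants if pricing changes):

    # Rough Lambda cost check: 2 GB functions at 1k qps, 28-day month.
    seconds_per_month = 86_400 * 28              # ~2.4M seconds
    gb_seconds = 2 * seconds_per_month           # ~4.8M GB-seconds
    invocations = 1_000 * seconds_per_month      # ~2.4B requests

    compute_cost = gb_seconds * 0.00001667       # $ per GB-second
    request_cost = invocations / 1e6 * 0.20      # $ per million requests

    print(round(compute_cost + request_cost, 2)) # ~564, in line with the ~$563 above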
I also find it highly unlikely that most businesses are servicing 2.4B requests per month on their API. In my opinion, you either have to be an absolutely enormous monster of a business or have a really unusual business model to be at that level of utilization and not be huge.
Whatever your business, you're unlikely to be spending $10/mo in resources to service nearly 30 billion requests per year. You're probably going to need more than $10/mo just to store the access logs for your API, let alone useful customer data!
In reality, a lot of entire businesses would be much lower in utilization than that. The downsides of a single DigitalOcean droplet are many: it's a single point of failure, and you'll never achieve high availability if you apply updates regularly and reboot, unless you have multiple $10/mo droplets and a load balancer. You'll also have high latency to customers outside of your region of the world, unless you run an HA cluster of multiple droplets in each regional datacenter that you care about. Call it three droplets per region and say you want to run them in five regions: that's $150/mo in droplets alone, plus a $10/mo load balancer per region, so $200/mo. Did I mention that you or your engineers are going to be responsible for doing the maintenance and replication across regions? Surely the number of engineering hours devoted to this upkeep will be worth more than $350/mo (roughly the gap between this $200/mo setup and the ~$563/mo Lambda bill above). Huh, I guess we just justified Lambda's costs.
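The droplet math in numbers, for anyone tallying along (prices here are the ones assumed in this comment, not a current price sheet):

    # Multi-region HA on DigitalOcean, per the assumptions above.
    regions = 5
    droplets_per_region = 3
    droplet_price = 10   # $/mo, assumed
    lb_price = 10        # $/mo per regional load balancer, assumed

    total = regions * droplets_per_region * droplet_price + regions * lb_price
    print(total)         # 200 ($/mo), before any engineering time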
And again, I never claimed this was some kind of snake oil magical solution.
I don't know why you think I'm opposed to the DigitalOcean solution and some sort of salesman for Lambda. I use DigitalOcean heavily for my personal stuff, and I almost never use Lambda, even though I like the idea. You've created a strawman, and you're trying to expertly knock it down... except for the whole math thing as noted above.
If we really dig into this, neither the DigitalOcean solution nor the Lambda solution discussed accounts for the cost of running your stateful infrastructure, whether it's a traditional RDBMS, some NoSQL system, Kafka, or just a giant object store like S3.
My singular point in this entire thread was that scale-to-zero is worthless if the solution that enables scale-to-zero costs more than not scaling to zero. If DigitalOcean is the solution that costs less than scaling to zero, then obviously that is more valuable than the solution that scales to zero.
The economics are laid before us like a golden fleece.
and a high-level design document about how it works https://github.com/firecracker-microvm/firecracker/blob/mast...
This is awesome! Really excited to try this out!
It should hopefully eliminate the cost disparity between using Fargate vs running your own instances. Should also mean much faster scale-out since your containers don't need to wait on an entire VM to boot!
Will be interesting to see what kind of collaboration they get on the project. This is a big test of AWS stewardship of an open source project. It seems to be competing directly with Kata Containers, so it will be interesting to see which solution is deemed technically superior.
- it seems to boot faster (how?)
- it does not provide a pluggable container runtime (yet)
- a single tool/binary does both the VMM and the API server, in a single language.
Can anyone else chime in?
They do, if you read the FAQs: https://firecracker-microvm.github.io/#faq
Kata Containers is an OCI-compliant container runtime that executes containers within QEMU based virtual machines
So this is exactly what runv's lkvm backend is doing (except kvmtool isn't patched anymore). And Intel Clear Containers do not exist anymore (many broken links on Clear Linux's website persist, though), since they moved to Kata as well:
> Firecracker has been battle-tested and is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate
[Not affiliated with Intel in any way---just a long-time proponent of the clear containers approach.]
Based on the responses I have seen from non-Amazon employees with experience in this space, it looks like their approach is solid.
It should also be noted that one of the main architects of Firecracker was formerly the project lead for QEMU.
I think what is impressive about Firecracker is that they have chosen to reuse a lot of the right things (Linux/KVM/Rust) while also taking a new approach and rethinking important assumptions (no BIOS, no pass-thru, no legacy support, minimal device support).
In my opinion the Firecracker FAQs give sufficient mention to parallel projects and tools they have built on like Kata Containers, QEMU and crosvm. The developers certainly seem open to collaboration with those communities.
AWS doesn't have much of a track record in terms of leading open source projects, so some skepticism is understandable, but I think what we have seen so far is a very good start.
In fact we are considering integrating a more secure language than C in QEMU, even though we're just at the beginning and it could be C++ or Rust depending on whom you are talking to. :) It's possible that this announcement could tilt the balance in favor of Rust, and it would be great if QEMU and Firecracker could share some crates.
I am excited that everyone seems very excited.
[Not affiliated with either side]
People (mostly AWS folks -- dig a little deeper into who is writing many of the serverless blog posts out there) keep pushing the "serverless is containers" line, but that's just a tactical response. Add a layer of abstraction and it's very clear why AWS is betting so hard on serverless.

Originally, AWS commoditized the old datacenters by providing the same network/CPU substrate, but at a higher cost because you outsourced the management of those resources to AWS. And AWS slowly dripped out new and convenient services for your application to consume, allowing you to outsource even more of your application needs to this one vendor. And while the services offered by AWS were just a little bit different, they were functionally similar. And that's how you locked yourself into using AWS instead of CoreColoNETBiz or whatever datacenter you were using before. I remember one of the first major outages of us-east-1, which caught most of the internet with its pants down (interestingly, the answer was just to give more money to AWS for multi-region redundancy). AWS had a pretty good thing going: outsourcing the management of all those resources to AWS is expensive!

But that's when containers came along and people at AWS started to take notice. With containers, people could de-couple their applications from Dynamo and Elastic Beanstalk and VPC and all those specialized services that cost so much time/money. Instead, you could just cram all that shit into containers, without needing to set up IAM roles or pore over Dynamo documentation or dump so much time into getting VPC set up just right. And that's the whole point of containerization: easily build your services in a homogeneous environment with exactly the software you want to use and eliminate that technical debt of vendor lock-in and the enormous cost center of specialized vendor knowledge (e.g. Dynamo, IAM, VPC, etc.). Treat the cloud -- any cloud -- like a bunch of agnostic resources. Docker commoditized the commoditizers.
And serverless is how AWS plans to get you to re-couple your application tightly to their specialized web of knowledge and services. They get to say that you're still using containers, but they need to gloss over the fact that you're locked into the AWS version of containers. You cannot "export" your specialized AWS-only knowledge of Fargate or Lambda or API Gateway or ECS to Google Cloud or Azure or some dirt cheap OVH bare metal. You're tightly re-coupled to AWS, having bought into their "de-commoditization" strategy. Which I need to stress is totally fine, if you're okay with that. It just needs to be made clear what you are trading off.
No-one that I worked with saw containerization as a threat. And why would they? At the VM level you can already paper over differences between cloud providers and I don't think that anyone at any of the large cloud providers lies awake at night worried about this.
I also don't understand why serverless would couple you to a particular cloud provider. All the big cloud providers provide serverless features and it never takes long to see feature parity.
What ties you to a cloud provider (or any company) is when you use features unique to that provider. And presumably you're using those features because the value they add outweighs the perceived costs of lock-in or the cost of implementing it yourself.
Again, this comes down to cost-benefit calculations. If some companies find that proprietary feature X from cloud Y provides a bigger (perceived) return on investment than not using feature X, then they are likely to use it. If company X later shafts them, they have to swallow more costs to migrate away but hopefully (for them) they took this possibility into consideration when they made their original decision.
QEMU is exciting technology and has paved the way for all kinds of interesting layers. So creating a slimmed-down improvement that really makes it faster and provides a new Lambda-ish execution context is great.
I'm sure Amazon cares about that. I'm sure people doing millions of lambda calls a day care about that.
But, if I'm an entrepreneur thinking about building something entirely new, is there something I'm missing about this that would make me want to consider it?
Lambda and Firebase Functions are exciting partially because they break services into easy to deploy chunks. And, perhaps more importantly, easy things to reason about.
But that's not the big deal: the integration with storage, events, and everything else in AWS (or Firebase) is what really makes it shine. It's all about the integration.
When I read this documentation, I'm left wondering whether I want to write something that uses the REST API to manage thousands of microVMs. That seems like extra work that Amazon should do, not me.
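For context, "managing microVMs over the REST API" looks roughly like this per VM -- a sketch in Python against Firecracker's API socket, based on the endpoints shown in its getting-started docs (the socket path and image file names are just placeholders):

    import json, socket, http.client

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Firecracker's API server listens on a Unix domain socket, not TCP.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    def put(conn, endpoint, body):
        conn.request("PUT", endpoint, json.dumps(body),
                     {"Content-Type": "application/json"})
        resp = conn.getresponse()
        resp.read()  # drain so the connection can be reused
        return resp.status

    conn = UnixHTTPConnection("/tmp/firecracker.socket")        # placeholder path
    put(conn, "/boot-source", {"kernel_image_path": "vmlinux",  # uncompressed ELF kernel
                               "boot_args": "console=ttyS0 reboot=k panic=1"})
    put(conn, "/drives/rootfs", {"drive_id": "rootfs",
                                 "path_on_host": "rootfs.ext4",
                                 "is_root_device": True,
                                 "is_read_only": False})
    put(conn, "/actions", {"action_type": "InstanceStart"})     # boot the microVM

So it's a handful of calls per microVM, but you still own the orchestration, networking, and image management around it.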
Am I missing something important here? Surely Amazon will integrate this solution somewhat soon and connect it to all the fun pieces of AWS, but the fact that they didn't consider or mention it makes me think it is something I should not consider now.
Reminds me of rkt + kvm stage 1 https://github.com/rkt/rkt/blob/master/Documentation/running...
Too bad it didn't take off.
A compare/contrast with Kata Containers would also be interesting. Their architectures look similar. (Kata Containers  being another solution for running containers in KVM-isolated VMs, that has working integrations with Kubernetes and containerd already. Not affiliated, but I'm tinkering with it in a current project, though I'm also now keen to get `firecracker` working as well.)
Obviously, if nothing else, qemu vs crosvm is a big difference, and probably significant since my understanding is that Google chose to also eschew using qemu for Google Cloud.
I love QEMU, it's an amazing project, but it does a ton and it's very oriented towards running disk images and full operating systems. We wanted to explore something really focused on serverless. So far, I'm really happy with the results and I hope others find it interesting too.
I'm not sure how you can draw any of those conclusions anyway, unless you know a lot of seemingly private details about how Kata Containers is implemented.
Starting a Lambda inside a VPC involves attaching a high security network adapter individually to each running process, which is likely what takes so long. I assume AWS is working on that, though, they've claimed some speedups unofficially.
If your security model allows, try running your Lambdas off-VPC.
Our normal cold starts are in the 1-2 second range, and the app initialization comes after. Too high for an API facing users :/
The obvious solution would be to just merge microservices into the same lambda, but then we'd rather switch to EKS or smth, and actually be able to utilize microservice architecture fully.
To give a little more context, we have a bunch of microservices exposing GraphQL endpoints/schemas. We then have a Gateway which stitches these together, and exposes a public GraphQL schema. Because of the flexibility (by design) of GraphQL, we can easily end up invoking multiple parallel calls to several microservices, when the schema gets transformed in the Gateway.
This works really well, and gives a lot of flexibility in designing our APIs, especially utilizing microservices to the full extent. It also works really well when the lambdas are already warm, but when we then get one cold start, amongst them all, suddenly we go from responses in ms, to responses in seconds, which I don't think is acceptable.
We've been shaving off things here and there, but we are at the mercy of cold starts more or less. So our current plan is to migrate to an EKS setup; we just need to get a fully automated deployment story going, to replace our current CI/CD setup, which heavily uses the serverless framework.
There are various ways to do it, but I feel that it's a very suboptimal solution, and it still won't guarantee no cold starts happen.
I've personally come to the conclusion, that lambda is very nice for anything non-latency sensitive. We are still using it to great effect for e.g. processing incoming IoT data samples, which can vary quite a lot, but only happens in the backend, and nobody will care if it's 1-2 seconds delayed.
Edit: wanted to add that, from what I've gathered from people testing online, bundle size didn't really matter, but perhaps someone else has some information that points to the contrary?
Script isolates make a lot of sense with current hardware limitations, but full processes at the edge are coming sooner or later.
From the "Disadvantages" section of your first link:
"If you can’t recompile your processes, you can’t run them in an Isolate. This might mean Isolate-based Serverless is only for newer, more modern, applications in the immediate future. It also might mean legacy applications get only their most latency-sensitive components moved into an Isolate initially. The community may also find new and better ways to transpile existing applications into WebAssembly, rendering the issue moot."
So basically, a gVisor alternative?
"Machine-level virtualization, such as KVM and Xen, exposes virtualized hardware to a guest kernel via a Virtual Machine Monitor (VMM). This virtualized hardware is generally enlightened (paravirtualized) and additional mechanisms can be used to improve the visibility between the guest and host (e.g. balloon drivers, paravirtualized spinlocks). Running containers in distinct virtual machines can provide great isolation, compatibility and performance (though nested virtualization may bring challenges in this area), but for containers it often requires additional proxies and agents, and may require a larger resource footprint and slower start-up times."
You can see here the security model: https://github.com/firecracker-microvm/firecracker/blob/mast...
The Firecracker process itself is limited in the system calls it can make, but KVM allows the guest Linux kernel to expose a full set of system calls to end-user applications.
> What operating systems are supported by Firecracker?
> Firecracker supports Linux host and guest operating systems with kernel versions 4.14 and above. The long-term support plan is still under discussion. A leading option is to support Firecracker for the last two Linux stable branch releases.
However, it seems they boot in a slightly unconventional way. They take an elf64 binary and execute it. This works for Linux and likely some other operating systems that can produce elf64 binaries. Windows supports legacy x86 boot and UEFI, but likely not elf64 "direct boot".
So if you could get Windows into an elf64 binary and have it run without a GPU, you could have it boot. So, likely not. But the reason isn't due to KVM.
Firecracker can likely run other operating systems, such as IncludeOS. You can't run those in containers.