Hey HN, since our pivot earlier this year we have been working with the alpha users of our product shuttle (YC S20) on creating the best backend development experience possible. Interacting with hundreds of developers, we have become convinced that the time has come to rethink the virtualization layer that we are all so used to - containers. Don’t get us wrong, containers have improved development in a lot of ways, but over time they have created a lot of problems too.
For the way most people use them in deployments of web apps, containers are too heavy. The heavier your containers are, the more difficult everything else becomes. They take longer to build, they need more resources to run, they are more expensive to store, etc. This has significant repercussions for the developers that have to deal with containers, IOW most backend developers.
Our view is that by restricting the scope of virtualization to something more specific to web app backends, we can build a tool that is much better for the job than containers. With WASM and WASI becoming stable, this is more possible than ever.
We’re very excited to share our ideas with you and would love to get your thoughts on this!
I think people on twitter, and some here as well, forget that Docker adds layers to restore functionality that was lost because of deliberate decisions from OS manufacturers.
Windows, Linux, Mac, BeOS, Solaris, whatever else; they have different binary formats solely because they didn't want to allow binaries written for other systems to run. (There's more to it than this, but not much more.) Docker just brings back what was taken away, by abstracting those decisions out of the execution path and using Linux instead. And were those things taken away for a good reason? Not at all: incompatibility solely for incompatibility's sake.
Why can't we just revisit that decision to introduce incompatibility on top of identical hardware? I feel like that is the correct way forward, rather than WASI. WASI will just replace Docker as a runtime for these things, and it will incur a performance penalty well beyond what Docker itself experiences, because it lives on top of the OS rather than using virtualization or isolation to go around the OS, in a manner of speaking.
Everyone these days seems to want necessarily worse performance at every opportunity. It is amazing what even a single core on my laptop can do in one second, but it takes a handful of seconds to launch any graphical application on my computer... apparently things are still too fast for the people stewarding this stuff.
Long term, I would love to see more software ported to/rewritten for Wasm, and I think greenfield projects should seriously consider it.
But the reality is that containers and VMs work well enough that I think it's going to be a long, long time before the huge backlog of working software we have today gets pushed aside.
It won't be pushed aside. It just won't be adopted for the next project someone works on. When enough people do this, great changes can happen quickly.
Is it projected what the eventual performance loss will be long term? Honestly if it's at least 80% of native and lets me use the same binaries across all architectures and in the browser, I'd probably consider it worth it.
You're probably right when it comes to scale though. Portability is more important for apps than services.
That said, it would be cool to be able to develop a Rust service locally on my x86 machine then deploy a working/signed artifact directly to ARM servers.
Rust can cross-compile, i.e. you can build on an x86 machine and target aarch64.
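For example, here's a sketch of the usual setup for targeting 64-bit ARM Linux from an x86 host (the linker name assumes a Debian-style cross toolchain is installed; adjust for your distro):

```toml
# .cargo/config.toml — point cargo at a cross-linker for the ARM target
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
```

Then `rustup target add aarch64-unknown-linux-gnu` followed by `cargo build --release --target aarch64-unknown-linux-gnu` produces an ARM binary on the x86 machine.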
> Is it projected what the eventual performance loss will be long term?
I don't know, but my understanding is that the Wasm IR is optimized for fast JIT compilation and isn't ideal for AOT workloads.
And compiling on demand on a target machine isn't great either because that means you're burning CPU cycles on compilation. Of course that can be worked around by turning wasm into machine code and distributing that to target machines. But then it's better to compile straight to machine code.
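For what it's worth, some runtimes already support that split. A sketch using wasmtime's CLI (file names are placeholders, and this assumes a wasmtime new enough to have the `compile` subcommand):

```shell
# AOT-compile the module once, on a build machine...
wasmtime compile app.wasm -o app.cwasm
# ...then ship the precompiled artifact to the target machines and run it
wasmtime run --allow-precompiled app.cwasm
```

Note the `.cwasm` output is machine-specific, which is exactly the trade-off described above: you get native code, but you give up the portable artifact.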
This is really cool. In a sense, this is also the approach we took with Darklang. We are running the runtime layer (http servers, DB drivers, OS runtime, etc) so all that needs to be deployed is the core of your app, as opposed to your app plus your language runtime plus your OS etc etc.
I'm curious what shuttle does once your app is deployed? Is a container cloned with the app injected, then connected to a load balancer and left running? In Darklang we just keep the AST in the DB and fetch it each request, but at some point we'll want to do better.
We've got containers running for each instance of the runtime a user needs. And for now we just keep them running forever as long as the project is up. But that's very inefficient, and we're definitely looking for a better way to do it.
> Whereas all your service logic, database and endpoint code build into lightweight WASM modules that are dynamically loaded in-place by this global persistent process
This sounds like J2EE application servers for WASM instead of Java byte code.
That's exactly what it is, except there is also a browser implementation that needs no plugin. Which also means that WASM/WASI is not a total replacement for containers: just as there are JVM images, there will be WASI images. The size argument given in the article isn't very convincing when images like Alpine exist.
The problem addressed in the post is more about getting around Rust's slow build times than phasing out containers. There's a lot of buzzwords (WASM, Rust, Cloud, etc.) but at the end of the day nothing that isn't easy to do with a modern "boring" stack like .NET.
The model they describe is pretty much exactly how .NET Azure Functions works. (which is however currently switching to a slightly different hosting model)
> The argument of size given in the article isn't very convincing, when images like Alpine exist
For a lot of uses you can go even smaller than Alpine. Static executables for example don't require standard Linux tools to operate. I expect WASI would be similar. Just drop the runtime and your Wasm files into a scratch container.
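A minimal sketch of that idea (assuming you've built or downloaded a statically linked `wasmtime` binary; file names are placeholders):

```dockerfile
# Nothing but the runtime and the module: no shell, no libc, no distro.
FROM scratch
COPY wasmtime /wasmtime
COPY app.wasm /app.wasm
ENTRYPOINT ["/wasmtime", "run", "/app.wasm"]
```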
True. A static executable doesn't even need Linux in the first place, that can be done with a unikernel. It's a tradeoff about familiarity, size, performance, documentation and technical level.
Super exciting news -- shuttle has been a really nice breath of fresh air after dealing with containers and such. I write a lot of rust-based backends and it was a pain that even for a simple service like fly.io I still had to manually write docker files... shuttle made that a LOT easier. Love to see them moving forward!
Same. Used shuttle for https://endler.dev/2022/zerocal/ lately and was super happy with the experience. I no longer have to worry about hosting and can focus on the product instead.
The "why not containers" is quite thin here, considering the incumbent position would be just to run your WASM things in containers:
> The heavier your containers are, the more difficult everything else becomes. They take longer to build, they need more resources to run, they are more expensive to store, etc.
> At shuttle we're convinced that a lot of the pains experienced by software engineers in the post-Docker world can be traced back to that very simple statement: containers are often too heavy for the job.
Why not use small lightweight containers? You can just have a layer with the WASM runtime.
Because a web backend has a particular deployment pattern that you can exploit and bake into your system, one that changes less often than the meat of your web app itself. You could either build out that pattern using container orchestration, or using programs running inside container orchestration that in turn orchestrate the more lightweight wasm binaries. I think you would find the latter to be faster, more responsive and more stable. It is basically a trade-off between writing that logic at, say, the k8s operator / custom resource layer, or placing that logic inside containers running on k8s that each run wasm binaries.
For the more fixed configuration viewed as an alternative to using K8s... I think most people using containers don't need or want K8s, and most people using K8s wouldn't see this as replacing it. And besides this the service configurations would seem orthogonal to the runtime.
As far as faster, more responsive and stable goes - WASM currently is slower and less stable, and the container world has good options for improved sandboxing/isolation (like gVisor, Firecracker, etc.).
In the example, is the .await in get_article run in a tokio executor inside wasm, or did you write a way to poll the future across the wasm/host boundary? I'm curious if you could do something like tokio::spawn or tokio::join from within the wasm code.
Hey, Damien here, author of the article! Real happy you like it!
We initially thought about running tokio in wasm (they've added support for WASI recently as well IIRC), but decided against it because we found it just moved the compile-time problem from one compilation target to another.
So in the end we decided to go with the second option: when the code in the example `.await`s, control can go back to the runtime. But inside wasm this is not tokio, it's our own shim to the executor running on the outside.
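To make the shape of that concrete, here's a toy, stdlib-only sketch of the idea (none of this is shuttle's actual code): the "guest" future returns `Pending` to hand control back, and a "host"-side executor keeps polling it to completion, the way a host runtime would drive a wasm module's exported poll entry point.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Stand-in for guest code: yields once (as if waiting on a host call,
// e.g. a DB query), then completes with a value.
struct GuestFuture {
    polled_once: bool,
}

impl Future for GuestFuture {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled_once {
            Poll::Ready(42)
        } else {
            self.polled_once = true;
            Poll::Pending // control returns to the host executor here
        }
    }
}

// Minimal no-op waker so the host can build a Context.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Host-side "executor": polls the guest until it is done. A real host
// would park between polls and wake on I/O instead of busy-looping.
fn host_block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` lives on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => { /* host regains control at every yield */ }
        }
    }
}

fn main() {
    let result = host_block_on(GuestFuture { polled_once: false });
    println!("guest returned {result}"); // prints "guest returned 42"
}
```

In the real system the boundary between `GuestFuture` and `host_block_on` is the wasm/host boundary, and the "shim" inside the guest is what turns `.await` points into those `Pending` returns.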
This is the same line of thinking as Deno. Turns out the browser sandbox model is a solid foundation to build systems in which processes share resources in a secure way