WebAssembly: The New Kubernetes? (wingolog.org)
328 points by signa11 on Jan 21, 2022 | 356 comments


WASM being the “new k8s” is only one small fragment of where it is going to go. I think there is a good chance it’s going to become the de facto deployment target for a lot of software, so in addition to browser and server:

- Plugins for desktop/mobile apps, wasm provides a nicely sandboxed environment for them.

- Gaming, the combination of WASM and WebGPU will be the perfect platform for cross platform game development. I could see Steam, for example, creating their own runtime.

- Embedded electronics, simplifying the development and deployment of IoT devices.

When it comes to wasm on the server, it’s often talked about as targeting the “edge”. I believe this is the next area of massive growth; however, the part that I don’t think is fully solved yet is the database side. If your server app is making multiple database calls from the edge to a central db, there is a good chance it will be slower than a traditional (single location) deployment.

What’s needed is an edge-first database; we need at least read replicas to be at the edge with your app. Fly.io have something like this with their Postgres product. The alternative is to use an eventually consistent db such as CouchDB, but that causes other problems. I’m interested to see if Cloudflare do anything in this area as it’s the part of their stack that’s missing.


> Gaming, the combination of WASM and WebGPU will be the perfect platform for cross platform game development.

That storm already happened. It was called Unity. And since then others have joined, like Unreal.

WASM + WebGPU doesn't bring anything to the table here other than being a decade late, slower, and with fewer features. Why would anything outside of a web app touch any of it? Why would Steam bother? You still have to build so, so much platform-specific stuff to make it work (input, audio, windowing, etc...), why would you do all that and then just pay a massive performance overhead to avoid compiling a whopping ~5 times?

WebGPU could potentially see use as a middleware for things that want to use the GPU but aren't concerned all that much about performance or overhead, things like image filter effects. But there's also no shortage of existing solutions for them, either, like bgfx (as well as domain-specific offerings like Halide, of course).


You left out the part where running WebGL via Unity was an absolute nightmare, and was fudged in as a stopgap when NPAPI was deprecated and there was no more Flash web player target. Barring that, before you even got the build up on the web you got to sit through 20 minutes of emscripten and wasm magic compiling, hoping and praying you'd considered any bridging methods to .NET from the frontend, which don't appear until they're useless emscripten errors in the Unity console. Oh, you'd like audio/video comms in your product? Well, hope you're ready to build the non-supported WebGL-compatible version of Photon/Agora's voice subsystems.

Don’t get me wrong, I’m a card-carrying Unity man and just backed off of XR development recently in favor of more stable webdev work… but WebGL as a solution to high-performance browser gaming was an absolute shitshow from start to finish and I’m not even sure WebGPU lands in time for anything useful to be done with it to deprecate WebGL “support”. I assume that the lightweight 3D solutions based on three.js, etc. are going to drag the rest of the game engine industry kicking and screaming into whatever best solution arises, and this wasm-as-k8s sounds like a neat component of that.


I meant Unity already solved the "make creating a cross platform game trivial" thing.

Games on the web face far greater issues than WASM or WebGPU can solve (like a complete lack of an asset delivery pipeline), unless you're only after casual flash-style gaming anyway. But mobile largely replaced that market.


Installing a native unity game requires more trust than running a game in a web browser. Web browser games would also work on platforms like Chromebooks, and don't require Google's or Apple's store approval, with their 30% cut.


And where are you going to get your game assets from? Are you going to try and stream those over the network? Good luck with that. Or are you going to show a long loading screen as you try and fight with local storage limits to prefetch all your assets?

To give you a scope of the problem of game assets, "small" indie games like Stardew Valley or Terraria are close to 500MB. How on earth are you going to deploy that on the web?

And if you are assuming a reliable, super-fast internet connection to stream game assets over (along with paying for the CDN to host it), well cloud gaming already does this better and already exists in a meaningful way. Why struggle to fight with the incredibly pathetic limits of a web browser when you can just stream a video?


>"small" indie games like Stardew Valley or Terraria are close to 500MB. How on earth are you going to deploy that on the web?

With lazy loading? Or just an installation progress bar. What's the problem? The user has to click on the permissions dialog, allowing extended space usage, and that's it.
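For what it's worth, the browser APIs for the "extended space usage" part already exist; a rough sketch (TypeScript, with made-up asset paths) of asking for durable storage and pre-caching an asset bundle:

    // Ask the browser for durable storage so cached assets aren't evicted silently.
    async function prepareAssetCache(): Promise<void> {
      const persisted = await navigator.storage.persist(); // may prompt, may be auto-granted
      const { usage, quota } = await navigator.storage.estimate();
      console.log(`persistent: ${persisted}, using ${usage} of ${quota} bytes`);

      // Pre-fetch the bundle once; later loads hit the local cache instead of the network.
      const cache = await caches.open("game-assets-v1");
      await cache.addAll([
        "/assets/textures.bin",        // hypothetical asset paths, for illustration only
        "/assets/audio.bin",
        "/assets/levels/level1.bin",
      ]);
    }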

>reliable, super-fast internet connection

It wouldn't need a connection more reliable than what Steam requires for proper functioning.

>along with paying for the CDN to host it

I assure you the costs for a minimally successful game are minuscule when compared to the 30% cut Steam takes.

What you lose is the exposure that a release on a store like Steam gives you. That's not an insignificant problem, and is much more problematic than all the technical stuff you've mentioned.

>Why struggle to fight with the incredibly pathetic limits of a web browser when you can just stream a video?

Answered in a different comment.


> With lazy loading?

And now you're back to 2006 era performance on a good day. 72 Mbit/s would be considered a pretty good internet connection. It's also the speed of a PS3's blu-ray drive.

People have been complaining about the 80-100MB/s on the outgoing generation of consoles for a long time now. Regressing by ~10x isn't a compelling argument.

> Or just an installation progress bar. What's the problem?

Without background downloads or updates? Every time someone wants to play your game, they're forced to stare at a progress bar that can easily take upwards of an hour depending on the game ("smaller" 3d games like subnautica push 10GB)? And you somehow don't see the problem?

> I assure you the costs for a minimally successful game are minuscule when compared to the 30% cut Steam takes.

And I assure you that you don't have to deploy on steam, either. You can both be a native game and get all the benefits of it and not pay anyone an app store fee! You don't have to suffer the web's treadmill of being a decade+ behind to have that.


>Regressing by ~10x isn't a compelling argument.

If it's loading faster than the user "consumes" assets, it doesn't matter how fast it is. It's no different from native games that let you start playing before the whole thing finishes downloading.

>And you somehow don't see the problem?

Don't stupidly update the entire thing every time? Send deltas? Don't update as often? As for not having background updates, it's a minor issue anyway (I, for example, don't start Steam unless I'm going to play), and there are Progressive Web Apps and service workers to solve this issue.
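To illustrate the service-worker angle, a minimal stale-while-revalidate sketch (TypeScript; treat it as a shape, not a complete update system): cached assets are served immediately while a fresh copy downloads in the background.

    // sw.ts - assumes this file is registered as the service worker for the game's origin.
    declare const self: ServiceWorkerGlobalScope;

    self.addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith((async () => {
        const cache = await caches.open("game-assets-v1");
        const cached = await cache.match(event.request);
        // Kick off a background refresh; the player keeps using the cached copy meanwhile.
        const refresh = fetch(event.request).then((response) => {
          cache.put(event.request, response.clone());
          return response;
        });
        return cached ?? refresh;
      })());
    });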

>You don't have to suffer the web's treadmill of being a decade+ behind to have that.

Whoever needs the latest features can of course use a native platform. For developers who don't need the latest features or especially heavy graphics, WASM+WebGPU will be a viable option. That's all.

>You can both be a native game and get all the benefits of it and not pay anyone an app store fee!

And the user has to risk trusting me executing native code, which would definitely lower conversion rate.


You are missing the tiny detail that browsers are free to clear the cache any time they feel like it. You as the game developer have no way to influence this; it is completely out of your hands, as a security measure to keep sites from bombing users' computers.


> With lazy loading?

AAA titles are already constrained by hard drive bandwidth, which is measured in Gbps on newer systems. They can’t keep all assets in memory. You’re going to have to have a top tier internet connection to get the same performance.

On top of that a browser thread can’t rely on having much more than half a gig of RAM assigned to it.


Are you seriously considering AAA titles in a browser? Is that the benchmark WASM+WebGPU has to pass or otherwise it's worthless?

AAA titles are famously heavy in both GPU and CPU load, and nobody is proposing to run them in a virtual machine, especially a 32-bit one (memory64 is still at the proposal stage).


They could start by matching Infinity Blade visuals, should be easy, it is a remarkable GL ES 2.0 game from 2010.


> To give you a scope of the problem of game assets, "small" indie games like Stardew Valley or Terraria are close to 500MB. How on earth are you going to deploy that on the web?

Someone watching YouTube videos is sucking down hundreds of megabytes if not gigabytes per hour: the higher-resolution the video the more bandwidth required. Most video game assets require less bandwidth (megabytes per minute of consumption time), and can be cached locally.

When designing games for the web, creators will have clear incentive to apply more aggressive compression to everything they send over the wire and figure out how to make do with smaller assets, to cut bandwidth. Games designed to be downloaded have a relaxed size budget and can afford plenty of inessential bloat.


> Someone watching YouTube videos is sucking down hundreds of megabytes if not gigabytes per hour: the higher-resolution the video the more bandwidth required. Most video game assets require less bandwidth (megabytes per minute of consumption time), and can be cached locally.

This is completely incorrect. Games regularly saturate and are limited by the 80-100MB/s that can be streamed from a PS4 or Xbone hard drive. This is in fact the biggest differentiating feature of next gen consoles - the switch to NVME and even streaming directly from disk to the GPU, bypassing the CPU bottleneck.

Meanwhile YouTube video, and indeed most streaming video, is well under 10MB/s. Netflix is even content to only recommend 25mbit/s for 4k HDR content.

> Games designed to be downloaded have a relaxed size budget and can afford plenty of inessential bloat.

This is such an arrogant and asinine statement. Games already exclusively ship highly compressed assets. They aren't just pissing away storage space for the lulz, and if they only tried a little harder could somehow cut sizes to 1/1000th of what they are today. Especially not to satisfy the needs of a platform that provides nothing but problems and headaches.

You can't simultaneously claim that WASM and WebGPU will enable a new era of high-quality web-based games while entirely ignoring all the actual problems with shipping games on the web, of which WASM and WebGPU literally solve none. WebGL2 is already competitive on feature set and capabilities with mobile, and yet casual web games are still largely dead - a regression from the Flash era, even.


>> YouTube videos is sucking down hundreds of megabytes if not gigabytes per hour

> This is completely incorrect. [...] YouTube video, and indeed most streaming video, is well under 10MB/s.

10 MB/s is the same as 36 GB/hour. So we are in agreement.

But what game uses more than 36 GB of assets for 1 hour of play? I can’t imagine that is at all common. What kind of assets are they shipping that use that much data?


You don't understand what the phrase "well under" means, do you? It means less than. Videos use less than 10MB/s. Netflix estimates 1-7GB per hour. Games need to load that much data in under a minute. Games don't stream data over an hour, they need to load a world instantly and can then stream in things you don't see. It's those bursts that matter, as that's what players are stuck waiting on to complete before they can get back to playing.

Think of it as if video keyframes were 1GB. Video streaming wouldn't exist in such a world as nobody is going to watch a loading spinner for 20 minutes. No, it'd all be a world of pre-fetched, downloaded content. Which unsurprisingly is exactly how game stores & consoles work.

Here's some coverage of the storage architecture of new consoles and why it's such a big upgrade: https://www.anandtech.com/show/15848/storage-matters-xbox-ps...

Trying to shove lazy asset loading over the internet into that picture just doesn't work.


> Netflix estimates 1-7GB per hour.

Yes, and my estimate was “hundreds of megabytes if not gigabytes per hour”. So apparently Netflix also agrees with me.

> Games need to load that much data in under a minute.

This is based on design decisions from the programmers, not some kind of iron law of nature. Obviously you can’t just drop a game designed for one set of constraints into an entirely different context without change and expect it to work precisely the same way.


> This is based on design decisions from the programmers, not some kind of iron law of nature.

No, it's based on that's what textures and model quality expectations are. Reducing that size is directly reducing image quality.

You can make a web based game that looks straight out of 2002, sure. But you can't make one that looks contemporary, and WASM+WebGPU aren't changing that.

And of course no game developer is going to bend over backwards and make their game look like shit so it can run in a web browser for no reason other than to satisfy the ridiculous beliefs of web evangelists.


>To give you a scope of the problem of game assets, "small" indie games like Stardew Valley or Terraria are close to 500MB. How on earth are you going to deploy that on the web?

A friend of mine just got 350 Mbps optical fiber to his house. I was wondering what would come along to eat up that kind of bandwidth, you just answered the question. Someone playing a game on the web could also be seeding the thing on bit-torrent at the same time, thus cutting down the hosting fees for the developer.


Back in the early 2000's you could deliver a fun immersive flash based game in under a megabyte. Sites like miniclip had dozens of games that could easily absorb several hours of fun.


1. Those types of games are still possible with HTML5.

2. The reason we don't see them anymore is because it's far easier to monetize them on mobile.


I absolutely agree that this is 100% possible with HTML5, and the only reason we're not seeing it happen on a massive scale is because of a cultural blind spot - as evidenced by most of the comments in this thread. The technologies we have now, like Phaser, are vastly superior for building games for the web than Flash ever was (I was a full-time game developer for 10 years, and a full-time HTML5 game developer now.) And I believe it's actually easier to monetize these using Stripe than through an App Store. Your game lives on a URL, which is just a click away, and paying for it is just one more click - this is about 10 seconds of work for a user. And you control 100% of your business from production to distribution. There's a huge pile of money lying on the table here for anyone who can see this.

An interesting article on the topic (not written by me): https://www.fortressofdoors.com/tag/instant-games/


Where is the Phaser designer that can match Flash tooling for the same purpose?


And streaming games does that even better over WebRTC in the browser, and we have that today! With a good internet connection (like the 5G that is rolling out) the experience is fantastic. I bought my kids iPad Minis for Christmas; they both have 5G and we have 1 Gb internet at home, and GeForce Now and Xbox Cloud Gaming have been absolutely great for them. A PS5 controller and the iPad Mini can be the best gaming device available today IMO.


Streaming games is no different from distribution using a large platform/store with accompanying commissions -- you can't do it independently as a small company. Also more demanding towards one's internet connection and has higher running costs, since you have to pay for both traffic and a server side GPU, so not viable for a certain percentage of lower budget/cheaper/free games.


Almost no one can do game development as a small company without a large platform of some kind. The economies of scale are just too much for small indie dev style game studios to overcome. Speaking as someone that was an indie dev and has been around the game industry for 20 years. I highly doubt that is going to change anytime soon. Webassembly certainly won't fix that issue. I'm not sure what that comment has to do with the subject. With Webassembly you won't have to pay for server side gpu but you will have to pay for discovery, hosting/bandwidth, payment, likely some form of advertising, and some level of support. There is no free lunch and large platforms will nearly always win for software distribution.


I'm not denying any of that, and said something similar in another comment. But Minecraft and Escape from Tarkov came into existence without dependence on a store or a platform. Same for RuneScape, albeit in a different time. WASM+WebGPU will make the appearance of such things more possible in 2022+. It's just a technical capability. A web implementation (of an online game) could come in addition to the ones available on Steam and Android/Apple, etc.


.. for money.


3D on the Web is only good for online shops and trivial first generation playstation games.

3D API for 2010 hardware or last generation raytracing hardware, what do you prefer?

Unity cannot do anything against castrated 3D APIs.

No wonder streaming is where everyone is going instead.


> 3D on the Web is only good for online shops and trivial first generation playstation games.

That is absolutely not going to continue being the case.

> No wonder streaming is where everyone is going instead.

"Pixel streaming" is great if you don't have latency requirements or your client has a poor GPU, but otherwise native 3D is superior.


Until WebGPU provides parity with native 3D, it is going to be the case no matter what.

Heck, 10 years of Web 3D and still there are no proper debugging tools that match native ones.


WASM is going to provide more than enough impetus for tooling.

The ecosystem will explode, and the tooling will come with it.


The future is made of dreamers after all.


To my knowledge (correct me if I'm wrong please), the performance issues from WASM only exist in the context of moving things back into the DOM. I was always of the understanding that WASM itself is super performant and if it's possible to run a WASM application without going through the DOM you'd get native performance, making it a valid choice for things like gaming.

Most devs would obviously still use Unreal Engine etc., but these engines will be able to give you improved performance for your games by utilizing WASM.


WASM gets you faster performance than JavaScript, sure, but it's not native, either. Sandboxing & byte code aren't free to have. See the extremely long history of existing byte code VMs here - WASM isn't really doing anything new.


Modern hypervisors have astonishing performance, and I don’t see why Chrome and Safari wouldn’t manage to eventually provide a similarly efficient implementation.


Modern hypervisors don't run bytecode and are using the CPU to do security enforcement, not software emulating it like WASM is forced to do (since it's not using a hardware protection domain, aka a process)


WASM was explicitly designed to be easy to compile into native code. No production-ready WASM runtime is interpreting bytecode in hot code paths.


Easy to convert to native code is very different from easy to convert to optimal native code. And WASM having performance as a design goal doesn't mean they achieved it, nor does it mean performance won when a compromise had to be made against other design goals like portability or security.

In fact I'm quite certain that whenever portability or security ran up against performance in WASM's design, that performance always lost. Those other two goals were definitely a higher priority for WASM's designers.

Stable, portable byte codes are always inherently lossy. Information that could be (and often is) useful to optimizers is lost in that conversion. Then the conversion from byte code to running code also is on the hot path - it's on the hot path of starting execution. This is the same problem existing byte code solutions face, like CLR & JVM. WASM didn't manage to magic a solution where nobody else had. It's the same thing that's been done to death, just with "web" slapped on the front of it & bundled in a browser. That part of "being in a browser" makes it interesting for web devs, sure, but in the broader context of "all of computing" it's... just not? It's what we already have had (and been using!) for decades.


For some server applications using WASM the biggest performance benefit is not really how fast it is running but how fast it is to start a new WASM instance compared to launching a new OS process.

However, I think it would be possible to make systems in which similar instances, because they are written in a type-safe, memory-safe language in the first place, could bypass WASM and both start up fast and run fast optimised code.

I have done some research on portable bytecodes, and have some novel ideas for getting more performance, but I agree that the inherent problem remains. One thing that helps make optimised C/C++ fast is that optimisers assume that undefined behaviour doesn't get triggered. But in a system for portable code, you can't just assume - you'd have to prove that it cannot exist to be able to do the same optimisations, and that is far from straightforward. And when it triggers, it has to work the same way, or it won't be portable (bug-compatible) between hardware.
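As a rough illustration of that startup-cost point (Node.js-flavoured TypeScript in an ES module; hello.wasm and its run export are hypothetical): the expensive compile happens once, and each per-request instance is cheap and isolated.

    import { readFile } from "node:fs/promises";

    // Compile once at startup (the expensive step)...
    const bytes = await readFile("hello.wasm");           // hypothetical module
    const module = await WebAssembly.compile(bytes);

    // ...then stamp out a fresh, isolated instance per request.
    async function handleRequest(input: number): Promise<number> {
      const instance = await WebAssembly.instantiate(module, {});
      const run = instance.exports.run as (x: number) => number; // hypothetical export
      return run(input);
    }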


WASM is more secure than a hypervisor, since it prevents code injection.


It is trivial to modify function pointers (implemented as indirect calls) and "read-only" data in wasm.


Yes, WASM in combination with an unsafe language is less secure than memory safe languages, but more secure than a hypervisor. You can also use WASM with a memory safe language.


WASM is less secure than a regular process boundary, and far less secure than a hypervisor. I'm not sure where you're getting this from? Things like Spectre mitigations in CPUs are not doing anything for the likes of WASM, after all, they only target process & ring boundaries. It's why you have to enable COOP and COEP to use threads with WASM - because the security for WASM's threads comes from the CPU enforcing process boundaries, not from WASM's sandbox.


If we are concerned with leakage of user data available in your own web program, then unsafe WASM code is more secure than unsafe code in a hypervisor because it prevents RCE, which results in full control of the hypervisor environment, which also enables transient attacks against the host environment, if there are vulnerabilities in your hypervisor (previously hypervisors had to be patched for Spectre). If we are talking about host platform protection against arbitrary malicious code, information leakage from neighboring processes/tabs, WASM is no different from a process boundary, at least in chrome, because that's how it's implemented. It's an implementation detail independent of WASM. Could as well be implemented with hardware virtualization, but it would make interfacing with browser APIs more expensive, probably to the point of impracticality.


Are you trying to say that code compiled with CFI is less vulnerable than code that wasn't? In which case sure, but that's entirely orthogonal to WASM. Code inside WASM isn't otherwise inherently protected. Code reuse attacks via indirect calls are still possible, as are of course side channel attacks and data leaks.

And since WASM uses linear memory and lacks things like ASLR, the code in the sandbox cannot achieve the same level of protection it could if it was in a hypervisor.


No need for RCE when data corruption suffices to change code paths.


It doesn't matter if it's rust or not. You can still modify read only data and still redirect indirect calls. You can still modify the stack and the heap. Rust changes nothing.

Also, not sure where the comparison to a hypervisor is coming into play. As @kllrnohj mentioned hypervisors are relying on actual hardware.


I'm not a rust expert, but I thought rust without unsafes should prevent such things at the compile time. C# and Java will be compiled into WASM differently, I believe no less securely than JVM or .NET. Having information leaks is still better than arbitrary code execution.


No safe language in the world prevents anything if the attacker can change the binary contents; at that point it is no longer the code produced by the compiler.

That is why we have signed binary execution to start with; WASM doesn't support any of this.


AFAIK you don't really get native performance, although it's hard to find good data on this (https://nickb.dev/blog/wasm-and-native-node-module-performan... was one I could find on the fly). WASM is still virtual machine based and not compiled to a native executable so this is to be expected I guess, it's supposed to be faster than JS and it achieves that goal. Maybe it can be an alternative to Unity, which would be good enough for a lot of games, but I don't know how that compares.


> not compiled to a native executable

except it is, WASM is designed to be AOT compiled to native code.

Sure, there is a bit of a difference: WASM needs additional bounds checks, and WASM also doesn't have all the info a higher-level compiler has.

Now, Unity is so much more than just a tool to make cross-platform easier. So comparing it to WASM is kinda pointless IMHO. Though it probably is a great choice for any extension mechanism, like mods or even in-game scripting.


> except it is, WASM is designed to be AOT compiled to native code.

Interesting, I didn't know that. How does WASM handle different architectures? Do you build different binaries for x86/arm? Or does it do it the Apple way with the giant bundle that contains all binaries?

> Now, Unity is so much more than just a tool to make cross-platform easier. So comparing it to WASM is kinda pointless IMHO. Though it probably is a great choice for any extension mechanism, like mods or even in-game scripting.

Agreed, that was badly worded. Like you said, one could compare a module written in Unity and compiled to whatever Unity compiles to/is implemented in these days with a module in WASM.

Edit: Or does the AOT mean that the format that is shipped is some non-native format but the client compiles everything before the first run?


> How does WASM handle different architectures?

The same way JavaScript or C#/Java do. It's a bytecode format (typically the first stage in today's JavaScript engines is to turn the source into a bytecode), which the different engines then JIT for the OS and architecture they are running on.

For more details on a couple different engine implementations, you may find the below of interest.

- https://v8.dev/docs/wasm-compilation-pipeline

- https://hacks.mozilla.org/2020/10/a-new-backend-for-cranelif...
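As a concrete (if simplified) illustration: the same portable .wasm artifact is served to every client, and the engine compiles it for the local CPU at load time. The module URL and export name here are made up (TypeScript, in a module script):

    // One artifact for all architectures; the engine does the x86/ARM-specific work.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("/module.wasm"),   // hypothetical URL
      {}                       // imports, if the module needs any
    );
    // Export names are module-specific; "add" is just an assumption for this sketch.
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3));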


"designed for AOT" means it ships platform independent byte code, which is easy to turn into assembly/machine code before running it.

AOT is a counterpart to JIT:

- AOT ahead of time (but still on the system, e.g. just before running)

- JIT just in time (while interpreting code you notice a hot section and turn it into native code)

So what most WASM runtimes do is parse the WASM bytecode, run certain checks for well-formedness, and emit assembly (with the necessary bounds checks, etc., so the code can't escape the WASM sandbox).

This doesn't mean you can't interpret it, or interpret + JIT it; it's just that AOT seemed like a better choice than JIT or pure interpretation for WASM, as just interpreting it isn't fast enough compared to JITed JavaScript or asm.js to make it worth adding to the web.


Epic deprecated HTML5 support back in the 4.23 release of Unreal Engine, so developers haven't been able to export to HTML5, unlike Unity, which has to this day maintained the ability to target the browser, and in fact they are in the process of building out support for WebGPU. Check out this job ad:

https://careers.unity.com/position/software-developer-unity-...

Our startup is working on enabling Unreal to support WebGPU from 4.24 onwards, and we've already upgraded support so developers can use WebGL 2.0 instead of the WebGL 1.0 that was all that was available before Epic made it a community-supported platform extension.


Unity is not a free standard with multiple implementations. Unity doesn't support a plurality of programming languages, while that is the goal of garbage-collected WASM. Unity is higher level in the stack, and could be executed on top of WASM and WebGPU. The work of making GC'd WASM (which could interface with browser APIs without overhead) and WebGPU fast is ongoing.


There are plenty of other middleware options, Unity is not the only game in town.

What all of them have, is being able to take advantage of 2022 hardware in 2022, with 2022 3D APIs.


Not all games need "to take advantage of 2022 hardware in 2022, with 2022 3D APIs". A good many would look fine with a middling level of graphical fidelity or "latestness" of employed GPU features. Using the web platform you also free yourself of gatekeeping and cut-taking middlemen. So there is a niche for both WASM and WebGPU.


Sure, I also like to revisit my Playstation and Amiga 500 games.


So in your world there are only games employing the latest features, and "retro" games from the '80s and '90s, and nothing else? Ok.

As far as I know WebGPU doesn't lack any significant features besides raytracing (they are prioritizing the MVP, and raytracing could come later). Some of the newest features like mesh shaders are not yet supported by the native engines either, because there isn't a sufficient hardware base. Intel, for example, will support mesh shaders only in the yet-to-be-released Alchemist GPU.


What can I say, that is the quality of most WebGL games anyway.

10 years later there still isn't something at the same level of Infinity Blade.


I think one big difference with WebGL and WebGPU is that WebGL was never really a target that you'd use for native applications. I've never heard of WebGL running outside of a browser (you'd use OpenGL ES for that), whereas I think native WebGPU has the potential to end up being the OpenGL replacement people have been asking for for years.

It definitely won't kill Unity or Unreal, but I have hopes that it will make developing without a ready-made engine a bit more attractive at least!


I don't see any benefit in using a castrated 3D API outside the browser.

Want Vulkan/DX12/Metal without boilerplate?

Just use Ogre or Godot, among endless other options.

Even Qt is better than what WebGPU is capable of.


>I don't see any benefit in using a castrated 3D API outside the browser.

Castrated how? A single digit performance loss compared to the most extremely complex graphics APIs which aren't properly cross-platform?

>Want Vulkan/DX12/Metal without boilerplate?

>Just use Ogre or Godot, among endless other options.

"I want a graphics API!"

- "Use this game engine or this old and outdated graphics library instead"

>Even Qt is better than what WebGPU is capable of.

Ah, is it? Do you have any example of implementing forward+ rendering in pure Qt? No? What about GPU compute? Not that either?


> "I want a graphics API!"

> - "Use this game engine or this old and outdated graphics library instead"

That is the actual answer for when people ask that question, most of the time. Almost nobody is asking for a raw GPU API, which is what Vulkan, DX12, and WebGPU are. Rather they want a graphics API. Something that provides utilities and helpers. Something that can actually load textures, models, and animations. Or has a particle effect system.

But if all you want is a GPU API, as I mentioned far higher up in the thread there's no shortage of those today either (like bgfx, or even Angle). 'native' WebGPU isn't addressing some unserved market here.

Although if you actually do want a GPU API, there's also a good chance you actively don't want to use a middleware like WebGPU since it limits what you can do and gets in the way...


bgfx is definitely a competitor in that sense. However, it doesn't have the same backing, is not standardized, and would have a hard time building momentum due to that.

I'm mostly thinking of indie game devs who decide to write their games from scratch (which still happens, though it's becoming rare), or up-and-coming game engines such as Bevy. They might not have a team who can spend the time optimizing Vulkan (while also making sure MoltenVK runs it fine, if they want to target MacOS), and might prefer something more modern than OpenGL.

>WebGPU since it limits what you can do and gets in the way

Does it do that more than OpenGL ever did though?


Castrated by the security constraints of being a MVP 1.0 API to fit into the security sandbox of a Web browser.

By its nature it cannot be more than what browsers do.

In case you missed the train, Qt now makes direct use of Metal, Vulkan, and DirectX 12.

I guess you were too busy keeping up with WGSL changes to notice that.


>Castrated by the security constraints of being a MVP 1.0 API to fit into the security sandbox of a Web browser.

What security constraints are you thinking of that are making it castrated? In what meaningful ways is it castrated?

> In case you missed the train, Qt now makes direct use of Metal, Vulkan, and DirectX 12.

So with Qt, can I write a complete renderer with advanced techniques, and have it run on all 3 platforms without changing a single file? If not, how is it relevant to even mention?


Qt3D is the ultimate goal for that, yes.

Here is a little secret: thanks to extension spaghetti and driver bugs, there is seldom "without changing a single file" unless it is some toy app, rather than something used across all possible consumer hardware.

Plus you are focusing too much on Qt while being defensive about WebGPU on native deployments; there is plenty of other middleware I can use as examples.


Qt3D examples use GLSL shaders. This example has 3 shader variants for different platforms https://code.qt.io/cgit/qt/qt3d.git/tree/examples/qt3d/advan...

What are you talking about? Are you saying that https://doc.qt.io/qt-6/qsgmaterialshader.html is less "castrated" than WGSL? Doesn't look like it. I don't even see the spec; it only says "Vulkan-style GLSL".

>Here is a little secret

Google and Apple definitely have more resources to spend standardizing WGSL (WebGPU has a Conformance Test Suite) than whoever owns Qt this time of the year.


Enjoy trying to make WebGPU having any commercial meaning outside the browser, done here.


> while being defensive for WebGPU on native deployments

Because I still haven't heard a single argument against it from you that isn't extremely vague and handwavey.

For example, your skipping over my questions regarding more details on what you think makes WebGPU bad/castrated, or what is making Qt the better choice over WebGPU? (ignoring the fact that Qt doesn't seem to have a common shader language)

I'm open to hear the reasons, but it seems like your goal is to be contrarian.


There's a multi-billion market for hyper casual games, and double so if we include slightly less casual games.

And since you need to be Epic or a multi-million-funded team with big scale to do an AAA game, but everybody can do a casual (and even more so a hyper-casual) game, I'd say the latter is smarter than sneering at simpler games...

You might not care for those, but billions do. Heck, Wordle could make 10 times the money an average FAANG engineer made in their whole career if it chose to...


Sure, and everyone on that market moved to mobile, because it actually provides the proper tooling.


The only part here that I'm a bit sceptical about is the embedded electronics. Maybe I'm too into bare metal and RTOS instead of Yocto or other Linux-based development, but I don't see the advantages of using WASM here. On the other hand, I agree completely with the rest of the other deployment targets.


I wouldn't mind better isolation interfaces between my entertainment system and my engine performance options. Of course, this doesn't require WASM... also, the hardware horsepower (so to speak) of the computers you interact with is anemic to begin with, let alone with the 2-5x impact of WASM on top of it.


Write once run anywhere sounds like Java of the 1990s ;).


It's very similar. If Java had had a good sandbox model and not shipped with a large runtime available to all applications, from the beginning, there would be no reason for WASM to exist.

WASM is like Java bytecode without access to the Java standard library (like, no `java.lang.String` or even `java.lang.Object`... just the primitives).

But it has a mechanism to call "host" or "imported" functions and to use linear memory (an array of bytes which can be shared with other modules and the host), which is how it actually does anything useful.

I can imagine a version of Java that works like this as well, actually, I wonder if there's ever been anything like that?
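A small sketch of that import/linear-memory mechanism from the host side (TypeScript; the module, its exports, and the import names are hypothetical):

    // The host decides exactly which functions the module can call...
    const imports = {
      env: {
        log_number: (x: number) => console.log("guest says:", x),
      },
    };

    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("/plugin.wasm"),   // hypothetical module compiled from C, Rust, etc.
      imports
    );

    // ...and linear memory is just one flat ArrayBuffer the host can read or share.
    const memory = instance.exports.memory as WebAssembly.Memory;
    const view = new Uint8Array(memory.buffer);
    console.log("first bytes of guest memory:", view.slice(0, 4));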


Once the GC proposal lands, letting the host and application interact through object graphs, won't that bring back most of the same perennial security problems that Java suffered from?


Java's perennial security issue was that the sandboxing mechanism was implemented in Java, in the same VM it was trying to sandbox... then they added reflection. They couldn't plug all of the holes that could let you get a reference to the security management objects and go to town on them with reflection.


> Once the GC proposal lands

"Once".

That GC proposal isn't anywhere close to landing. It's making progress, but it's the "Zeno's paradox" kind of progress where the feature seems further away the more they work on it. I wouldn't be surprised at all if it still wasn't supported in browsers in 10 years.


The WASM security story is sales talk, as anyone who understands pentesting is aware.


I'd be interested in hearing more about this if you can point to some resources.


You can start here,

"USENIX Security '20 - Everything Old is New Again: Binary Security of WebAssembly"

https://www.youtube.com/watch?v=glL__xjviro

Then there is the issue that since WebAssembly doesn't prevent memory corruption, even though RCE attacks or sandbox escapes are not possible, producing memory corruption makes it possible to eventually trigger alternative code paths that wouldn't be taken in normal circumstances.

A contrived use case would be to validate a user with higher credentials than they are supposed to have, in a WASM-based security module.

Then, since the external calls from the module are configured by the runtime, one can do man-in-the-middle attacks on the functions being called from the module, and thus access data that wasn't supposed to be made available in normal cases.
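To make the point about runtime-configured imports concrete: the embedder supplies every imported function, so it can interpose on them transparently and the module has no way to tell. All names below are hypothetical (TypeScript):

    // Hypothetical capability the module expects to import.
    function lookupSecret(id: number): number {
      return id * 42; // stand-in for a real secret store
    }

    // A compromised or malicious embedder can wrap it transparently.
    function wrappedLookupSecret(id: number): number {
      const value = lookupSecret(id);
      console.log("leaked:", id, value); // stand-in for exfiltration
      return value;
    }

    async function instantiatePlugin(bytes: BufferSource): Promise<WebAssembly.Instance> {
      const imports = { env: { lookup_secret: wrappedLookupSecret } };
      const { instance } = await WebAssembly.instantiate(bytes, imports);
      return instance;
    }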

And I bet that when the hacker community starts having fun with WebAssembly, as much as they have had with other bytecode formats, more issues will be found.


Thanks!


It does, but so do interpreted languages and the many compiled languages with different compilation targets.

It's less of a problem in 2021 than it was in 1995.


Sure, its the JVM, but open from the start, native in major browsers, and at a time when a popular choice for desktop development is “wrap a browser in a layer providing access to OS services”.


With the caveat that Electron is basically Windows 98 Active Desktop, and if we forget about Mozilla boycotts against CodeAlchemy and PNaCL.

Ironically, those boycotts are now meaningless given Chrome market share, including Electron.


It's very similar for sure, but it's worth noting how this is different. WASM is much more accessible as it doesn't require additional downloads or tooling. It also supports a wider range of languages. I can take my Qt desktop application written in C++, run it through emscripten, and have it running in Firefox or Chrome in just a few minutes. Instead of having to build and release binaries for Linux, Windows, Mac, and Android, I can make a single WASM build, host it on my website, and run it anywhere. It's honestly incredible.


Just like the Common Language Runtime.


Which always worked quite fine on the server side.

For desktop use, it just never had a good GUI API.


Agree 100% on gaming, which is why my team and I at Wonder Interactive are creating a suite of optimization tools and a hosting platform for Unreal Engine developers to target the browser using WebAssembly, with WebGPU and WebXR support on the way for UE4 and UE5. We’ve already got some demos online that are really impressive despite only being WebGL 2 (OpenGL ES 3.0) for the time being. With WebGPU, it’ll enable desktop quality on par with what you’d expect from Vulkan, Metal, or Direct3D 12, except in Chrome, Safari, Edge, and Firefox.

The future of gaming and the metaverse is cross-platform and free of walled gardens, with developers empowered to distribute directly to their end users online without a middleman required. The reinvention and subsequent golden age of high fidelity 3D browser games will be upon us sooner than most realize.

Links for anyone interested:

https://www.theimmersiveweb.com/

Our Discord:

https://discord.gg/zUSZ3T8


> The future of gaming and the metaverse is cross-platform and free of walled gardens

Given that you mentioned Unreal Engine, why the web and not native?

I mean, this sounds like an awful lot of work to port something to the web, when it already runs in every platform.


Perhaps because of the web's deployment model? Quickly getting from "interested in" to "running the game" might be a game-changer. (apologies for the unintentional pun)


> free of walled gardens

How will it reduce walled gardens?


Well, the hardest part of cross-platform has always been designing for different input/output devices (which usually fails; better to stick to a single platform and do it well).

Another question: how, as a player, do you run a game like this offline, or run multiplayer when the game company's servers have shut down?


Do we have a bright outlook for WebGPU? Based on historic trends, similar projects seldom come to fruition, and not even cross-platform graphics frameworks managed to stick (Vulkan, Metal, DirectX, OpenGL - basically none of them run on all 3 major platforms, let alone if we add mobile platforms to the list).


I don’t think it’s a matter of trends, we have to look at the particulars of each API.

OpenGL/WebGL aren’t well enough aligned with modern hardware to serve as a cross-platform graphics layer.

That’s where the newer APIs come in. Vulkan, Metal and DirectX 12 all follow similar principles that would work from a technical standpoint, but we live in an age where platform vendors would rather cripple their platform than make it easy to port software. Apple is the worst offender here, because while Microsoft isn’t happy about it and disables it in “metro” apps, at least they don’t prevent GPU vendors from shipping Vulkan support for normal “desktop” apps.

But I’m optimistic about WebGPU: It’s designed to be efficiently implemented on top of the vendor APIs, so platform vendors can’t block it. And it’s low-level enough that it should allow implementing modern games on top of it.

(To clarify: I’m optimistic about WebGPU as an API for desktop applications. For browser apps, it seems like all vendors are on board already.)


> It’s designed to be efficiently implemented on top of the vendor APIs, so platform vendors can’t block it.

Their choice of inventing their own shading language instead of using the ONE standard that the industry finally could agree on (SPIRV) makes me very pessimistic.

Apple is purposefully crippling stuff to retain their app dominance, yes, but Apple IS a major part of WebGPU. So it never had a chance and was doomed to fail from the very beginning.

Google even had a presentation titled "regaining developer trust" on this topic, but WebGPU continued to push WGSL nevertheless.


As the draft says [1]:

> Trivially convertable to SPIR-V

> Constructs are defined as normative references to their [SPIR-V] counterparts

> All features in WGSL are directly translatable to [SPIR-V]

> Features and semantics are exactly the ones of [SPIR-V]

> Each item in this spec must provide the mapping to [SPIR-V] for the construct

So WGSL is essentially just an alternative representation for SPIR-V.

But I don't see how this compromises the viability of webgpu.

[1] https://www.w3.org/TR/WGSL


That was the original sales pitch, which has been abandoned in favour of "Let's make a Rust-like language for shaders!".

It's a textbook case of: Embrace, Extend, Extinguish

Google tried to save it, but that proposal was rejected: https://docs.google.com/presentation/d/1ybmnmmzsZZTA0Y9awTDU...

The proposal to support both was also not accepted because it would "weaken the position of WGSL".

"But think of the children/open web!"

Apple knows how to push the right buttons on the Mozilla folks so that they tag along: Rust, wEB-StAnDARds, not invented here.

- SPIR-V was done by an evil technological group, not an open and inclusive standards body. We would tie our standard to whatever SPIR-V says, that's not open web. The open web can only flourish if everything is text based! cough WASM cough Rust is such a modern language! -

It's similar to what happened to WebSQL, except this time it's much worse as it also ruins any hope that WASM might be able to replace all the other technological cruft. Playing the old guessing game of how the transpiler will mangle your SPIRV is simply not something that people want to go back to.


I'm a bit confused.

The draft was just updated two days ago.

Are you saying that the direct mapping requirements I quoted above have been abandoned?


> Are you saying that the direct mapping requirements I quoted above have been abandoned?

Yes.

See these quotes:

Apple: "Having an isolated focus on SPIR-V is not a good perspective, we have MSL etc. to consider too"

...

Apple: "Extreme A is no interaction with SPIRV?, extreme B is very tightly coupled to SPIRV, we should find a middle point"

...

Google: "We take all targets into consideration. We can allow developers do runtime probing and maybe branch according to that. Optimization occurs in existence of uncertainty. Dealing with fuzziness is a fact of life and we should give the tools to developers to determine in runtime."

From https://docs.google.com/document/d/15Pi1fYzr5F-aP92mosOLtRL4...

See also:

https://github.com/gpuweb/gpuweb/pull/599

https://github.com/gpuweb/gpuweb/issues/582

https://github.com/gpuweb/gpuweb/issues/566

http://kvark.github.io/spirv/2021/05/01/spirv-horrors.html

https://github.com/gpuweb/gpuweb/issues/847#issuecomment-642...

https://news.ycombinator.com/item?id=24858172

The process has been very implementation-driven, with Naga, the Rust tooling that does the whole conversion shenanigans, growing and changing its IR and semantics steadily. It's ironic that a group that is usually very averse to having a single implementation of things is now pointing to Naga and arguing that we don't need an easy conversion step because we have all the compiler infrastructure built already.

The group has slowly redefined "bijective" to mean "easily compilable", which would be pretty hilarious, if it weren't so sad.


I agree Apple is the bad guy here and even the public discussions were a clown show, who knows how much worse it was behind closed doors. But I’d rather have an inelegant standard than none at all, and I think the problems with WGSL can be worked around.


Without much experience in the topic: why was the "de facto standard" SPIR-V not chosen instead? If they are so similar, is there any advantage to a new format?


Here’s a comment summarizing the official rationale: https://github.com/gpuweb/gpuweb/issues/847#issuecomment-642...

TBH, I don’t find those arguments convincing, but then, WebGPU is largely a political compromise, not a technical one. In a politics-free zone, they’d have standardized a “WebVulkan”, not made an entirely new thing. That would have been on the table, but Apple vetoed it. The same for a “Web-SPIR-V”.

However, I also don’t think it’s that big of a deal in the grand scheme of things. I’m glad we have a viable standard at all.


SPIR-V is an intermediate representation and WGSL compiles to it. WGSL is (or will eventually be) a W3C specification.

I fail to see where your point about trust fits in to that.


Apple + Mozilla did a bait and switch on developers in a textbook case of: Embrace, Extend, Extinguish

The original Google proposal was to simply use SPIR-V. A compromise was then settled on for the intermediary language to be isomorphic to SPIR-V, removing some of its warts and edges and making verification easier.

However, Apple with the help of the Mozilla folk successfully managed to twist the semantics into something that is no longer easy to translate from and to.

Game developers don't want to work with WGSL; the whole point of the new WASM and WASI ecosystem is to allow devs to bring their own tools.

This is especially bad for something like a shader language, where you want to squeeze every bit of performance out of the hardware, often with hand rolled instructions. Having the browser run a mangled version of the SPIRV that you or your toolchain/engine e.t.c. produced is simply not acceptable.


Microsoft is definitely the core issue here: they are trying to unify Xbox and PC gaming, and are NOT supporting Vulkan on Xbox.


WebGL 2.0 today runs on not just desktop browser but also through the web on Xbox, which has a Chromium-based Edge which can run browser games and will get turbocharged with near native performance with the arrival of WebGPU.

So you now have a platform target with one codebase that developers can target PC/Mac/Linux, mobile/tablet, and one of the biggest console platforms. I'd say that's pretty powerful.


We have always been in such an age, including console vendors.

Apple support for OpenGL is a side effect of NeXT's acquisition, had Copland's efforts succeeded, they would have kept using QuickDraw 3D.


vulkan runs everywhere except on apple shit, and that is on nobody but apple. they are actively working against open standards. the world is better off if you ignore them and their users.



No Vulkan for you on PlayStation, Xbox.


WebGPU is to WebGL as Vulkan, Metal, DX12 are to OpenGL, DX11 and below. In that context I would expect WebGPU to be just as successful as WebGL has been.


Basically mostly 3D renderings for online shops, while most indie devs rather focused on modern hardware on mobile platforms.

Ah, and Google Maps.


I hope so. And for that matter, I hope that WebGPU makes it out of the browser as well. It's a modern API, based upon Metal, that has an execution model much closer to actual GPUs, much like Vulkan/DX12, while still handling some of the hairy scheduling/dependency issues for you, as well as lacking the incredible verbosity of the modern APIs.
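For anyone who hasn't looked at the API yet, the flavour is roughly this (a minimal, incomplete TypeScript sketch; no pipeline, no error handling, buffer sizes chosen arbitrarily):

    // Acquire the device: the explicit adapter/device split mirrors Vulkan/Metal/DX12.
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();

    // Resources are created up front, then work is recorded and submitted in batches.
    const buffer = device.createBuffer({
      size: 1024,
      usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.STORAGE,
    });
    device.queue.writeBuffer(buffer, 0, new Float32Array(256));

    const encoder = device.createCommandEncoder();
    // ...record compute/render passes here...
    device.queue.submit([encoder.finish()]);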


As much as I hate it, I think economics will make WebGPU a thing the same way it did electron.


It could happen... I still remember how exciting VRML was in the 90's. Of course, I still have more of a wait and see approach to anything actually working out.


There are a lot of security reasons to get away from compiled native code as well as convenience reasons. Something like Spectre/Meltdown could have been mitigated completely with a JIT upgrade.

Then there's the huge advantage of encouraging more competition in the CPU market. Something like RISC-V would have a much better chance if you didn't have to recompile everything for it.

I'm surprised native binary is still the standard. Back in the 1990s when I saw Java and then .NET I thought it would be dead by now outside of specialized high performance libraries for things like codecs and cryptography.


> - Plugins for desktop/mobile apps, wasm provides a nicely sandboxed environment for them.

> - Gaming, the combination of WASM and WebGPU will be the perfect platform for cross platform game development. I could see Steam, for example, creating their own runtime.

> - Embedded electronics, simplifying the development and deployment of IoT devices.

We will finally be able to write code once and run it everywhere! I bet at least 3 billion devices will run WASM. It's a breakthrough, how come nobody came up with such great idea yet?


> "how come nobody came up with such great idea yet?"

Cross-platform Virtual Machines have been around for a very long time, enabling exactly the kind of thing you're talking about. So in-fact, lots of people thought about this, and built various versions of this. The browser doesn't bring anything new to the table here: it's yet another high-level abstraction layer which inevitably will incur performance penalties in a setting where performance means everything. People want to play complex, immersive games at 60 FPS with amazing graphics and textures, sounds, single/multiplayer settings, and more.

The browser was never built for modern triple-A gaming in mind, which means the APIs available for game creators will inevitably be less flexible and lag behind the more platform-native APIs we've had for 30+ years. This isn't a precedent by the way: recently developed APIs such as WebAudio & WebWorkers are just awful and don't come anywhere close to what a desktop application can do on a modern OS, and those are often considered "cutting edge". Synchronization APIs and locking primitives are only now being introduced to browsers - https://developer.mozilla.org/en-US/docs/Web/API/Web_Locks_A... - something games could easily make use of outside of the browser for the past 40+ years.

Lightweight gaming can easily take place within the browser, and perhaps this new technology can make that easier to build and use. But to build a proper heavy-weight game - you'll need much more than a browser can offer.


I think unfortunately the post you are replying to was another sarcastic joke about the JVM and probably didn't deserve your thoughtful reply.

Completely agree that the web, despite being perfect for lightweight and casual gaming, is not (yet) the platform for AAA games.


Isn't multithreading support, which is kinda essential for high-performance applications, spotty under WASM? I've read that threads in WASM are a hack built on top of Web Workers + SharedArrayBuffer, which is kinda dodgy at best, and most importantly, not even supported under Safari.


Safari does support wasm threads now! It's pretty recent:

https://twitter.com/RReverser/status/1471623675623481356

Yes, wasm threading support on the web uses workers + SAB. There are some limitations (the usual issues with blocking on the main thread on the web), but it works well in production for many things.
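For context, a rough sketch of what wasm threading on the web currently needs: the page must be cross-origin isolated via COOP/COEP response headers, and the shared memory is a SharedArrayBuffer under the hood (the worker script name here is made up; TypeScript):

    // The page must be served with:
    //   Cross-Origin-Opener-Policy: same-origin
    //   Cross-Origin-Embedder-Policy: require-corp
    if (!self.crossOriginIsolated) {
      throw new Error("COOP/COEP headers missing; shared wasm memory unavailable");
    }

    // A shared linear memory, backed by a SharedArrayBuffer.
    const memory = new WebAssembly.Memory({ initial: 16, maximum: 256, shared: true });

    // Each "wasm thread" is a Web Worker instantiating the same module with the same memory.
    const worker = new Worker("wasm-worker.js");   // hypothetical worker script
    worker.postMessage({ memory });                // shared, not copied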


I hope to create a wire compatible Firebase Realtime server for edge use https://observablehq.com/@tomlarkworthy/rtdb-protocol


> edge first database

SQLite was successfully ported to wasm; maybe something can grow out of that?


We are singing from the same song sheet.

SQLite in the browser and at the edge is going to explode in popularity. A lesser-known feature of SQLite is “sessions”, which would allow you to build an eventually consistent system on top of SQLite.

https://www.sqlite.org/sessionintro.html

I can very much see SaaS-type apps where each customer/workplace has a SQLite db effectively deployed to the edge.


This is the biggest question actually.

Something like cockroachdb's serverless might fit too: https://www.cockroachlabs.com/blog/how-we-built-cockroachdb-...


> What’s needed is a edge first database, we need at least read replicas to be at the edge with your app... I'm interested to see if CloudFlare do anything in this area as it’s the part of their stack that’s missing.

Cloudflare already has a free caching layer, a freemium eventually-consistent KV store that optimises for frequently-read items, and a strongly-consistent datastore. Besides, they partner with other database vendors too: https://blog.cloudflare.com/partnership-announcement-db/


> strongly-consistent datastore

I think you are referring to the upcoming R2 object store? I'm not aware of anything I would think of as a "strongly-consistent datastore", which would be closer to a traditional database with querying. That's what I think is missing.

Quite right, they have partnered with other db providers; however, I think there is a need for something native to their platform that ensures good proximity to Workers and thus low latency.


I think they're referring to Cloudflare Durable Objects: https://developers.cloudflare.com/workers/learning/using-dur...


The future in the datacenter is unikernel virtual machines scheduled by kubernetes.


Is WASM going to replace:

Lua, JS and LLVM IR?


No, no, and no. The only one remotely close in scope is LLVM-IR, and even then the scope is significantly different. LLVM-IR is a general-purpose IR for native compilation. Wasm is deliberately small in scope and designed for sandboxing.

Lua and JS are not even binary languages.


There is a proposal from Mozilla/Facebook for a binary JS format.

https://github.com/tc39/proposal-binary-ast


WASM is a compiler target. It was designed for fast compilation into native instructions with deterministic effects.

LLVM-IR has been considered, but it has never been suitable as a portable compiler target. Code is targeted at a specific arch even before it is translated into LLVM-IR. Undefined behaviour in C is undefined also in LLVM-IR, and a portable compiler target cannot have any. It also changes too much between compiler versions (which is a reason why SPIR moved away from it).

BTW, it seems that in the long term, the use of LLVM-IR in compilers is going to be replaced with MLIR (Multi-Level IR).


Isn't it an intermediate language? Why would it replace them? It's not like a lot of people code directly in assembly when C is available...


Instead of, as an example, coding a plugin in Lua, you code it in whatever language you want (one that supports wasm as a compile target) and compile it to wasm. In that sense it would be replacing Lua.
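
The host side of that can be tiny. A sketch - the host.log import and the apply_filter export are just whatever contract you define for your plugins, nothing standard:

    // Load a plugin that was compiled from Rust/C/Go/whatever to wasm.
    const bytes = await fetch('/plugins/sepia.wasm').then(r => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes, {
      // Only the imports listed here are reachable from inside the plugin.
      host: { log: (code: number) => console.log('plugin said', code) },
    });

    // Call whatever entry point your plugin contract defines.
    (instance.exports.apply_filter as Function)(320, 240);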


But Lua is typically used as an easy scripting language for non-programmers...


No. I am not a clairvoyant, but I think there will be possibilities to compile to WASM. LLVM IR is a special case, but transcompiling is not unimaginable.


> What’s needed is a edge first database, we need at least read replicas to be at the edge with your app.

Cloudflare Durable Objects is exactly this.


I don't think he understands what K8s is. WASM is the new Docker, but you still need something to manage/orchestrate/schedule all those WASM instances. What if your WASM process goes down, who will restart it? What if you need 10 extra instances to handle a traffic burst? How do you isolate associated WASM instances from the rest of the environment, and manage their access to/from the outside world? How do you provide them with configuration or credentials?

You still need something like K8s, even if you're not using Docker underneath. Indeed, you can already use other runtimes under K8s, so what he is proposing is nothing new anyway.


Yeah, I'm not seeing it, either. Kubernetes' scope is right in the name: control loops. K8s is basically a super powerful IFTTT engine: it looks at state, looks at spec, and figures out how to jigger the system until state matches spec. It has nothing a priori to do with containers, it just happened to be written in service of that space. You could use k8s to run your teakettle.

It's like saying QEMU is the new Systemd; it doesn't even make sense, it's a category error.


You mean like the author says, at the end of the article:

> I compare WebAssembly to K8s, but really it's more like processes and private namespaces. So one answer to the question as initially posed is that no, WebAssembly is not the next Kubernetes; that next thing is waiting to be built, though I know of a few organizations that have started already.

> One thing does seem clear to me though: WebAssembly will be at the bottom of the new thing,

Edit: formatting


But to the point that K8s has "nothing a priori to do with containers" made by a sibling comment to yours, there's no reason for WASM to necessitate "the next Kubernetes". WASM is just a different workload type that could be orchestrated w/ Kubernetes. I haven't looked too much into it, but I assume that's what projects like Krustlet are working on: https://krustlet.dev/

I think that's kind of essential to understanding the role Kubernetes fulfills: it's an automation platform.


Yes, you're right. I missed that. A bit of a shame to leave it to the end though.


Whenever anyone does that they might as well simply state "I wrote this entire article and then realized I was wrong". Writing is a thinking exercise.



Wasm is a compile target. You can use tools like wasmtime to handle isolation and linking in wasi (the system interface).

In that sense, wasmtime is an alternative to runc for wasm/wasi.

In no way is any of this a replacement for a high level tool like docker.


> I don't think he understands what K8s is. WASM is the new Docker

Or the new Java virtual machine, as they are literally the same thing. Kubernetes has nothing to do with it.


> Or the new Java virtual machine, as they are literally the same thing. Kubernetes has nothing to do with it.

Before containers became a thing, everyone was happily deploying their Java applications on Java application containers (Tomcat, WebLogic, those things, remember?)... so I wouldn't say it has nothing to do with Kubernetes... just wait a bit and you may well see the new generation of WebLogic built on WASM.


You are right, and wrong. Yes we will need a kubernetes-like orchestrator to manage WASM instances. But what the author means in general is that WASM will change the game entirely, such that the industry's recent affair with Kubernetes will become irrelevant. And I agree.


I don’t see how. Everything K8s does will still be needed. Switching from containers to lightweight VMs is not that big a step. Distribution becomes easier, but what really changes apart from that?


That is my impression too, and I've honestly forced myself to mentally take a few steps back and consider that maybe I got something wrong / misinterpreted.


So be it!

Until XyZ is the new WebAssembly and @!# is the new XyZ!

As long as my Django backend works and browsers support my HTML/CSS/JS frontend, I am fine.

I have the same LAMP (Linux, Apache, MariaDB, Python) SAAS project running for decades now, paying all my bills, letting me live a free life.

As long as nobody touches that, you can do all you want. Invent new fancy stacks every other year, make courses for it, migrations to it, a religion of it - I don't mind.


Having been wrangling with a monster Nuxt.js front-end that takes 30 minutes to compile for what is essentially a few pages of an eshop, I long for it to be reimplemented in Django.

SPAs are good for some things, but those things are much fewer than what we use them for.


I really hope this use of SPAs for CRUD apps like eshops is going to end soon; it's completely unhinged. There's a reason why Shopify went with Rails.


Agreed -- but going back to Rails isn't the answer. Remix.run is moving things forward in the right way, ie PHP/Rails -> SPA -> NextJS -> Remix.


Wow, that sounds like such a simple process! It's definitely the "right way"!

Never look back at working tech folks. That's today's lesson. If it works, don't just fix it -- remake it, rebrand it, and overcomplicate it!


If you spent 3 minutes reading https://remix.run you might realize that mean-spirited sarcasm and a lazy, ignorant trope were not your best response.


It sure does sound like the latest "next best thing"!

I'll stick to Rails though thanks :)


Same here, it is kind of funny to see Java and .NET being re-invented by haters all the time.

First they do containers to replicate application servers, and now WASM to replicate bytecode server side.


The JVM is actually good, but it is a bit too Java-centric. A VM without GC and with a more economical memory model is needed, and it looks like wasm is that.


And were these application servers (like JavaEE) that bad? Separately deployed EJB modules can also be considered microservices, and everything was much easier to manage.


Not at all, I am still working with modern versions of them.

In fact I consider Kubernetes a much worse experience, regardless of how much hate WebSphere and co. might get.


I'm a Django developer at heart and have been for 15 years. The dream I have, which I hope WASM will deliver, is to package any app (Python, Ruby, Rust, Go, anything!) into a single binary file that we can deploy anywhere, on any architecture (x86, ARM). No more packaging nightmare, no more containers, just a single file that runs anywhere. Maybe it's wishful thinking but I hope it happens.


It's PyInstaller: https://pyinstaller.readthedocs.io/en/stable/

Or containers.


PyInstaller is hackish though; I had to add tricks to setup.py by trial and error until I found out why some config file or sub-module of a sub-package wasn't included when everything else was. It's also slow to launch, because it extracts the code into a temporary directory before importing python.dll, which then imports all the rest.

Containers are fine for a server after installing the runtime, which is already one too many steps. There isn't a self-contained app format that can run without installing something else before starting it.


Containers are not architecture-agnostic. In theory they are not even kernel version agnostic, though Docker seems to work around that.


"Containers" are basically Linux-only, which means they can free-ride off Linux's really stable userland API.


Isn’t this basically JVM?

Jython is a thing.


The parallels between WASM and the JVM are well known. But I like to think of WASM as building upon the knowledge gained from the JVM.

With the JVM you had to reimplement your language to target it, or design a new one. WASM was planned from the start to be a target for existing compiler frameworks such as LLVM and GCC.

With Jython you could not use extensions that were designed for CPython. With WASM you can. Take a look at Pyodide: they have ported most of the scientific Python stack to WASM, which is not possible with Jython.

https://github.com/pyodide/pyodide
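
For the curious, using it is about this much ceremony (a sketch from memory - loadPyodide is the global that pyodide.js provides, but check their docs for the current loader details):

    // Load the CPython-on-wasm runtime in the browser.
    const pyodide = await loadPyodide();
    await pyodide.loadPackage('numpy');      // fetches the prebuilt wasm build of numpy

    const mean = pyodide.runPython(`
        import numpy as np
        float(np.arange(10).mean())
    `);
    console.log(mean);                       // 4.5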


Just like IBM TIMI, TenDRA, the Amsterdam Compiler Kit, CLI.

That is the thing with WebAssembly marketing: not being aware of the history of bytecode formats.


And yet WASM still has warts that make it impossible to simply transpile existing binary code onto it.

Java, by the way, also isn't stopping you from storing all your data in one big ByteBuffer, which is how WASM handles memory.
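
For anyone who hasn't poked at it: from the host's point of view a module's whole address space really is one resizable buffer (quick sketch; the offset is made up):

    const memory = new WebAssembly.Memory({ initial: 1 });   // 1 page = 64 KiB
    const view = new DataView(memory.buffer);

    // "Heap", "stack", globals spilled to memory - it's all just offsets into
    // this one buffer, which is what enables both pointer-style bugs and
    // zero-copy access from the host.
    view.setUint32(0x100, 0xdeadbeef, true);                 // little-endian write at a made-up offset
    console.log(view.getUint32(0x100, true).toString(16));   // "deadbeef"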


Sort of - and the JVM has been a thing on webservers since WebSphere and Tomcat. But it never quite achieved the ease of use. I'm not aware of a service you can just chuck a JAR at and have your web service spin up on your (sub)domain?

Jython has been a casualty of the 2 => 3 transition.


I think your LAMP (Linux, Apache, MariaDB, Python) is a fancy new LAMP (Linux, Apache, MySQL, PHP).


If WebAssembly is the new k8s (in terms of complexity), I am going to do as much as with k8s with it: nothing.


Off topic: I assume you have your own SaaS app. If yes, can you share more details about it?


Here's a tweet from Solomon Hykes noting that, if WASM had existed back in 2008, he wouldn't have created Docker! https://twitter.com/solomonstre/status/1111004913222324225?s...

My biggest concern with K8s is how the entire ecosystem is a costly affair. Can I run a service with multiple nodes for $10-50 per month?

This is why I'd love WASM to win. Right now your browser can run 100s of tabs, with inter-tab communication and all the security sophistication, on a single machine. For me, this symbolises a future where WASM can bring cheaper computing to the cloud as well. I don't want to pay a fortune in AWS/Azure bills just because k8s is a costly ecosystem.


When I’m reading about Kubernetes, it makes me tremble. Like you need three servers with 16 GB RAM each just for some management stuff.

I have one server with 1 GB RAM. Can I start with that? And expand to more servers later if the need arises.


Yes you can; that's the point, really. You can just start off on your own machine with something like Minikube [1] and deploy to GCP on a bunch of much larger nodes to see how it behaves. The overhead of Kubernetes itself isn't much; it's just a bunch of API daemons.

For small scale development, plain docker/podman is better in my opinion.

[1]: https://minikube.sigs.k8s.io/docs/start/


I'd suggest trying out https://k3s.io/ then. Runs on pretty much anything and supports multiple nodes that can be added later. They have ripped out a lot of things you might not need (cloud drivers, etc). Despite that, it was easy to install things like storage drivers or Cilium for networking when I wanted that.

I used k3sup to setup my cluster, but that was just for convenience.


Yes, you can start with that. There is a k8s distro called k3s that runs fine even on 1GB Raspberry Pi 3.

I have been running it on those for a while, but also on other SBCs. You can grow the single master by attaching one or more workers to it. The "pain" is mainly that you can't expand the single-master to a multi-master setup, so once you want to go beyond a single master, you have to re-install it all. On the other hand, with k8s you have all deployments, etc. in yaml files and you simply redeploy these on the new cluster.

My remaining pain point is persistent storage - it's simple to run complex setups with stateless containers, or ones that can retrieve their data at initialization (e.g. a secondary name server that populates itself from the master at startup), but for classic databases I currently still have a manual step to import the DB dump once the service gets (re)started/migrated.

Sources:

- https://k3s.io/
- https://www.jeffgeerling.com/blog/2020/installing-k3s-kubern...


Where on earth are you getting that notion?

K8s operates just fine on 1GB nodes. It's very low overhead.

You can operate it as a single-node cluster, but that's generally not recommended: not for performance reasons, but because you lose redundancy.

The reason larger nodes are recommended is that k8s is for fleets of applications. If you are only talking about a single service, then a k8s cluster is overkill.

K8s is the cheapest way to run a fleet of microservices.


Seconding K3s being a good option, supports most distros and is easy to set up in minutes: https://k3s.io/

If you'd like to use a management UI instead of just the CLI, Portainer is also a wonderfully lightweight option: https://www.portainer.io/ (and also supports Docker Swarm, if you'd like even more lightweight container orchestration)

That said, in my eyes the biggest problem with Kubernetes is that some distros by default reserve resources, which is fine from a stability perspective, but doesn't really work when you actually want to overcommit.

For example, with Docker Swarm, it's exceedingly easy to say: "Okay, this PostgreSQL instance can use anywhere from 0 to 256 MB of RAM, but no more than that", and with 1 GB of RAM I could easily run 8 of those instances if the average usage was half or less of this maximum limit - a risk that I'd sometimes like to take, when I know that the limits in place are more along the lines of controls so that the whole OS doesn't run into OOM errors.

Of course, if you mess around with the YAML, I guess that's doable on some level as well.

As a sidenote: most of the software that I run (Java, .NET, Ruby, Python, Node, Go, PHP) is memory-constrained most of the time, so when looking for server specs, I usually prioritized memory the highest, with CPU, storage and network speed all usually being a secondary concern. Sadly, in the current day and age, I'm not sure how far you could actually get with 1 GB, or why you'd even use container orchestration at that point. The smallest VPSes that I have come with 2 GB of RAM and even those are for smaller pages/proxies and such, whereas the majority have at least 4 GB (since GitLab and Nexus won't launch and work with much less), my homelab servers have 16 GB, and my workstation has 24 GB at this point, because that's all that I can afford.

Wirth's law be damned, I doubt that even OSes will work well with that little in the future, especially because of desktop software being infected with Electron and server software having the JVM eat as much memory as you'll give it (though that also happens with MySQL or MariaDB). I don't care about "Unused RAM is wasted RAM" as much as others do; I just want to run more stuff on my servers and to have software that doesn't NEED the memory not take it.


I believe you can get away with much less, but this is something that concerns me as well. We deal with customers that want Kubernetes, but the management plane uses more servers/VMs than is required to run their code.

There are mini-Kubernetes implementations, but I don't know if they're any good for production.

There seems to be a limit to how small you can go with Kubernetes in my mind, and most people never need to cross the line where they're large enough that Kubernetes makes sense. It's honestly a niche product, but right now it's being viewed as a catch all solution.


I think k3d would be a good choice for you as others have mentioned. It's a lightweight wrapper on top of k3s - https://k3d.io/v5.2.2/


Yes with 1GB you can run k3s.


More and more it appears that k8s is designed to extract maximum value from the tenants of the cloud companies.

You need to connect your cluster to the public Internet? No problem, that's $x for each load balancer (whether it's needed or not).

Oh, the whole thing feels like a black box and is difficult to debug or observe? No problem, that's $x for Datadog.

GCP is the worst when it comes to setting up k8s with preemptible nodes. Thought you could get away with preemptible nodes? Not so fast: all the nodes get restarted together at the end of the 24-hour period, so your multi-node cluster will have zero availability for 5 minutes every day. Or you jump through hoops killing your own nodes periodically to keep them from all restarting at the same time.


Those aren't features of Kubernetes.

If you like you can install a free ingress such as NginX and route traffic to it.

If you like you can just have logs on your physical nodes and go look one by one, just as you would have to before things like k8s came along. Datadog is a value add. It's not essential.

GCP preemptible VMs have nothing to do with k8s - they are literally designed to be short-term (up to 24 hours) VMs to do things with. Yes, GKE can use them, but not as persistent resources. That's not what they're for.

Here's what they're for:

> Preemptible VMs are Compute Engine VM instances that last a maximum of 24 hours, and provide no availability guarantees.

I.e. don't try and "get away with" them. There are plenty of options for cheap K8s.


Yes, you can set up your own Kubernetes if you want, using just EC2, but nobody wants to do it because it's easier to just pay.

I think k8s actually saves you money, because it allows you to use your resources better.


Yep - you can also just use a VPS or even some Raspberry Pis in your home.


The ingress can be free with Nginx but not the inbound firewall rule that passes the traffic to Nginx. And 'coincidentally' it costs the same $ as using their load balancer which automatically has this rule applied.


You're talking about a cloud provider's fees, not kubernetes.


It sounds like it's just setting up the load balancer for you behind the scenes? Kubernetes isn't a load balancing engine, to my knowledge. It just sets up containers and stuff.


If there is anything nefarious in the design and rollout of k8s it's just to make it uncool to run something small.

I've worked with it for a bunch of years now and run a cluster at home, but I do hate that it scales down so poorly. It's so much work to run a bare minimum, if not for SME purposes then for local development. It's just starting to get acceptable.


My only real gripe with GCP is that every project is set up to use their premium networking tier by default.


> if WASM existed back in 2008, he wouldn't have created Docker!

Doubt it, as the JVM existed back then.

>This is why I'd love WASM to win. Right now your browser can run 100s of tabs, with inter-tab communications and all the security sophistications on a single machine.

Now just add K8s on top to orchestrate it :) Somebody will just develop a new WASM-based container ...


Is the JVM made to be isolated like WASM is?


You just use the Java standard library to do everything, and the Java runtime handles keeping them namespaced. At some point (right around the dot-com crash) everyone switched away from Java (because Python was free) and that resulted in using POSIX to do everything. No more namespacing. Then people wanted to scale, and, voila! Docker.


Ah, but WASM is Java for the haters, so it sells much better at the bytecode shop.


I wonder why the Java haters chose WASM over Kotlin


Kotlin for WASM is in the works[1], so once that arrives they won't have to choose between them.

[1] https://www.youtube.com/watch?v=-pqz9sKXatw


"I wonder why the C++ haters chose ARM over Go" is a comparable sentence. I hope the error is clear.


I don't think so. The parent said:

>[...] WASM is Java for the haters, so it sells much better at the bytecode shop.

Which means WASM was advertised as a new bytecode VM to replace Java.

What I'm saying is that, for these supposed Java haters, why did they choose to completely reinvent the bytecode VM when they could have just chosen Kotlin and not have to reinvent the wheel? It would fix most if not all of their concerns about Java.


Docker was not the only container based system back then and we did have other projects for running arbitrary code in the browser. Were they as good as WASM? No but the browser wasn't as good either. Point is it's not just WASM that makes him feel that way today. It's pretty damn hard to roll back your mind 12 years and then apply 12 years of innovation from the side and make accurate predictions of what you would have done differently.

Just like "if we had internet in the 19th century", it implies so much more.


Ironically, the only WASM that is as good as Flash tooling in 2011 is Flash itself recompiled to WASM.


But why use a browser in the first place? It's not like your OS is incapable of running 100s of processes (with inter-process communication and all the security sophistication), without the additional weird abstraction of tabs and the browser's quirks...


Right... Chromium uses user namespaces as part of its own security model but I'm not sure how that extends to v8 or WASM runtimes. Where's the line between implementing a security model in a userspace VM and just providing an abstracting layer on top of OS features?


IDK either. It's like we have meta-platforms, or platform adapters. Platforms inside platforms.


Why can't you run multiple Docker containers with inter-container communications and all the security sophistications on a single machine? You can do exactly that. I suspect you have different expectations from Docker and WASM, such as expecting to run heavy things in Docker and light things in WASM.

Anyway, to make WASM useful as a server platform it will end up re-implementing the entire Linux kernel API.


Doesn't Docker re-implement about 30% of the Linux kernel API?


> My biggest concern with K8s is how the entire ecosystem is a costly affair. Can I run a service with multiple nodes for 10-50$ per month?

If your service can run on few nodes, do you really need a scheduler like K8s? Why not go serverless like Fargate or CloudRun?


I wonder if it makes sense to build up a WASM environment on a union filesystem (layers) as it mostly seems to in Docker.


This is hyperbole. Wasm is just code; it has nothing for data, networking, instrumentation, or observability. How do you deploy your stack to wasm? You compile all the services to it, then what? Write more code to wire them together?


Obviously the "Oranges: the new apples?" title is off-base; k8s is fundamentally a control loop engine, Wasm is a runtime.

What I want to know is, is there anything out there like kubernetes but stripped down to just the control loop engine part? Like a glorified IFTTT but without all the overhead and baked-in assumptions of k8s. Some kind of daemon/library where I can write an arbitrary controller which reads a spec, queries some state, and has actions to push the system toward the desired state.

The actual I/O is just writing an API and could basically just be a CRUD. What I'm looking for is the state management engine; that is tricky.
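
The core of what I mean fits in a few lines; the hard parts (watches, backoff, conflict handling) are everything around it. A toy sketch:

    type Spec = { replicas: number };

    async function reconcile(
      getSpec: () => Promise<Spec>,          // what the user asked for
      getState: () => Promise<Spec>,         // what the world currently looks like
      scaleTo: (n: number) => Promise<void>,
    ) {
      while (true) {
        const desired = await getSpec();
        const observed = await getState();
        if (observed.replicas !== desired.replicas) {
          await scaleTo(desired.replicas);   // push the world toward the spec
        }
        await new Promise(r => setTimeout(r, 5_000));  // real controllers watch instead of polling
      }
    }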


That's what Terraform pretty much is. All it does is manage a graph of dependencies/resources and then relies on a provider, which implements the CRUD.

Not super familiar with Hashicorp's other product, Nomad, but I'd think it has a similar approach, though more of a continuous lifecycle check?


> k8s is fundamentally a control loop engine, Wasm is a runtime

Thank you for this comment. I was scratching my head wondering what developments I had missed on the wasm side of things that would replace k8s.

So essentially wasm could replace or augment the CRI in k8s eventually. That sounds great if it can be properly integrated with the rest of the platform like Ingresses, PSPs, etc.


There are already people working on WASM as a CRI runtime in k8s: https://cloudblogs.microsoft.com/opensource/2020/04/07/annou...


If it's process control specifically then systemd or a lightweight process manager like s6, supervisord, foreman, etc. are a good option. You can do things bare metal or just run containers as processes. systemd services + nspawn is pretty close to k8s container control for a single host


For the infra side of things you have hashicorp terraform, which will manage state of your infra and can also manage the state of your workloads.

For the workload side of things you have hashicorp nomad. Which is basically a stripped down version of k8s.


I don't know if it's Terraform or our implementation but I've not been particularly impressed by Terraform's error state recovery functionality.

If something takes too long to pass a health check you often end up in neither State A nor State B. There is a lot of surface area for undefined behavior.


Not really. Nomad is closer, and Mesos (the old one) is closer still.

I wrote my own as a Rust library, and I'm in the process of further breaking it apart. Might get a prototype finished this weekend and if so, I'll share it here.


Mesos, but it seemed like Kubernetes ate its lunch since most use cases are handled with a single controller.


nit: Kubernetes is the container orchestration, not the container technology itself. So the comparison is apples vs. oranges.

You could replace the containers that are being scheduled by Kubernetes with WebAssembly. Others already linked to Krustlet which is effectively this.


Yeah, exactly. This comparison is a stretch, it compares different things and tries to shoehorn them together. I'm surprised that your comment is this low in the thread.


I can't help but have flashbacks about the JVM, managed code and all the discussions the Forerunners in traditional IT used to have.

It's like we are doomed to reinvent certain kinds of technology (with circumstantial advantages due to historical context). Runtimes, process orchestration, databases, the works, everything old is "new" again.


It's exactly like that :)

Hang around long enough and you eventually see all the old culprits re-branded and sold as the solution to the world's problems.

At some point, one might get wise and look at the points of stability in the industry and try to understand why. Nobody is going to sell you on stable, though. There's less money to be had if you are satisfied already.


C++: the new jquery?

What a nonsense title.

K8s is about far more than simply running containers. WebAssembly only, at best, solves the problem of running a distributable.

This article focuses on one of the smallest aspects of k8s.


I'll ask the same questions I had over on lobsters since I didn't get satisfactory answers there:

* 64bit - Can I run a database that addresses more than 4 gig of memory (eg: keep all my indices in memory so perf doesn't fall off a cliff)? I think the answer is no. This effectively rules out most 'enterprise' software and most things involving 'big data' (eg: all ai/ml projects), not to mention I haven't deployed 32bit to prod in something like 10 years.

* threads - shared array buffers && web workers aren't threads - this isn't the biggest deal in the world since all interpreted languages are effectively single process/single thread but there is quite a lot of software that would like real threads.

* security - You can overwrite constants in what traditionally would be read-only. You can overwrite portions of the stack. You can overwrite the heap. You can overwrite function pointers. You can overwrite and redirect indirect calls. Taking an existing linux binary and converting it to wasm severely downgrades the security of it.

* real sockets/TLS - https://github.com/WebAssembly/WASI/pull/312 && https://github.com/bytecodealliance/wasmtime/issues/71 ? yes, I understand some people have shimmed in various layers here but most applications would currently require modification to use said shims - what's the best of breed to not require application mods?


I'm not sure we're thinking about this the right way. What we want (and have) is:

A) a way to package any bit of functionality in a standardized way

B) run it in any environment.

Docker delivers both, although docker images are large (as they include their own OS) and (contrary to what the article claims) somewhat less efficient than running on bare metal.

Executable files provide A, but are specific to an operating system. They are however much smaller and make better use of the host's resources, as they don't require virtualization.

WASM delivers A for a subset of languages and B in so far as it runs in a browser and specific standalone runtimes.

These are certainly some attractive qualities, but it leaves me wondering: does it make sense to introduce yet another standard into the mix or wouldn't it be better to write a "Kubernetes for executables" rather than having the additional abstraction layer of WASM?


Docker can't "run in any environment": it relies on Linux kernel features, which do not exist on other operating systems.

There are workarounds for invoking Docker containers from some other operating systems (e.g. Docker Desktop for macOS and Windows), but those work by installing Linux into a virtual machine/hypervisor, and using that to run the containers. If that's what counts as cross-platform these days, then the term is meaningless (e.g. there are Gameboy emulators for some OSes, so Gameboy ROMs "run in any environment").

Considering that Docker containers themselves often bundle an entire Linux OS, that results in multiple layers of virtualised Linux operating systems. At which point, why not remove a layer and just ship a VM image?


WASM also can't "run in any environment" since it relies on imports, and different environments will have different things available to import. The only way around that is by only defining very minimal behaviour... like it does on the web. But then you can't do anything useful with it by itself.


I think we say 'run in any environment' when what we mean is 'recreate the environment on any machine'. And this is often more of a continuum than a black and white answer. I can get a lot of things done without low level access to the GPU, or printer, but I can't do everything.


Docker supports Windows-native containers.


So they've even managed to break their own weak definition of cross-platform/run-anywhere, by forking into two incompatible versions? The more I learn about Docker, the more I'm astounded by the sheer scale of anti-patterns being built into the bizarro-world they're constructing.


Yes it is a joke, but something that is now unavoidable as a reality of DevOps workflows.


Wait until you hear about ARM.


> as they include their own OS

That is not required.


I want a simple life where I program and run programs. I don't want to fight layer after layer of abstraction, some of which will inevitably impair my programs (i.e. by not allowing me to use SIMD or access the graphics coprocessor) and multiply tenfold my expenditure in cloud services.

If you think I'm exaggerating, try to configure GPU passthrough to a virtual machine. It is doable, on your server in your garage, after you spend a considerable number of hours of your life. In your cloud deployment? Only if you use Google or AWS, and even then, only for a price.

Please, call the Spanish Inquisition, this guy needs to be prosecuted before he gathers a coven and gets away with this perversion.


I mean everything is what you make of it.

The fact that wasm is sandboxed allows some genuinely new things. Imagine a JSON file starting with a line that gives a base64-encoded WASM parser for JSON. (Or a hash on a public database if you're worried.) Switch to YAML or TOML by changing that line up top, or add comments to your JSON, or get fed up and ship SQLite databases instead. Oh wait I have security concerns, I need to encrypt this file at rest, better change the code that consumes it—why? The registry has a NaCl container, just prefix the encrypted version with the wasm hash to prompt for a password and decrypt it.
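
Spelled out, the reading side of that idea is short. This is a sketch of a made-up convention (first line is base64 wasm, the rest is the payload); fileText, and the alloc/parse/memory exports, are all assumptions about how the embedded module was built:

    // Hypothetical self-describing file: line 1 = base64 wasm parser, rest = payload.
    const newline = fileText.indexOf('\n');
    const wasmBytes = Uint8Array.from(atob(fileText.slice(0, newline)), c => c.charCodeAt(0));
    const payload = new TextEncoder().encode(fileText.slice(newline + 1));

    const { instance } = await WebAssembly.instantiate(wasmBytes, {});
    const memory = instance.exports.memory as WebAssembly.Memory;   // assumed export

    // Core wasm exports only trade in numbers, so copy the payload into the
    // module's linear memory and hand it a pointer (interface types would hide this).
    const ptr = (instance.exports.alloc as Function)(payload.length);   // assumed export
    new Uint8Array(memory.buffer, ptr, payload.length).set(payload);
    (instance.exports.parse as Function)(ptr, payload.length);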

It’s a technique that used to be popular and then died out as networks proliferated—you could no longer be sure that your machine code did the right parse on someone else's architecture.

Your container crashes and we coredump to an S3 bucket. You know what’s better than downloading gigabytes of coredump? If S3 can just let you spin up a container or run a WASM program on the storage node itself. You send the programs to the data so you don't need to pull the data to yourself.

This post describes something like Erlang actors, but if each can be a pure function encoded in WASM... That is also a neat dream. But the next step is the MonadArrow instance, programs sending programs to each other. Then we’re “thinking with portals.”


> It’s a technique that used to be popular and then died out as networks proliferated—you could no longer be sure that your machine code did the right parse on someone else's architecture.

This will be the case with WASM extensions. You won't be sure what's available in someone else's WASM environment.

The only reason it's not the case with WASM on the web today is because it's so simple, and any differences are dealt with in the JavaScript layer.

> Your container crashes and we coredump to an S3 bucket. You know what’s better than downloading gigabytes of coredump? If S3 can just let you spin up a container or run a WASM program on the storage node itself.

S3 literally has this feature, and it doesn't even have to be WASM. It's called "S3 Object Lambda". It's quite new, so you're forgiven for not knowing about it.

Downloading from S3 is quick and free if it's within the same region, so you don't actually need this anyway.


> This post describes something like Erlang actors, but if each can be a pure function encoded in WASM...

Not just a dream. https://github.com/fluencelabs/fluence


I'm not aware of any surcharge for using a GPU in a VM on AWS or GCP, could you link documentation of this?

Of course AWS and GCP charge extra for units with GPUs in them, but I can't seriously expect you think they should stick a GPU in every VM instance for free?


VMs allowed cheaper computing by using resources more efficiently, but that came at the cost of abstracting away the components.

You can always buy bare metal and have your simple life.


I'm pretty sure in a cloud service you are not paying for GPU passthrough, you are paying for the GPU itself!


Even assembly code is an abstraction.


> I want a simple life where I program and run programs...

I feel like this is an oversimplification.

The company I work for has tens of thousands of concurrent users, plus dozens (hundreds?) of business partners all of whom are integrating with our platform.

We are not serving static HTML pages like StackOverflow, we have dynamic content and our customers want up-to-date analytics.

So.. how am I supposed to "program and run programs" in this environment?

You act like we invent complexity for no reason, but we are dealing with scale and "I want a simple life where I program and run programs" is like saying: "I want an easier job." Cool. They are out there. They probably don't pay as well.

Meanwhile I'm going to try and fix the mess someone made 8 years ago in the monolith so the business paying my bills can keep making money. It ain't pretty, but it's a living.


> You act like we invent complexity for no reason, but we are dealing with scale

1. Yes, we do invent complexity for no reason

2. The absolute vast majority of us don't deal with "scale" and most of the things we do can be done with a single beefy server (even for "tens of thousands of concurrent users")

3. Saying that all SO does is serve static HTML pages is hugely disingenuous at best


> The absolute vast majority of us don't deal with "scale" and most of the things we do can be done with a single beefy server

Most simply fail to understand exactly what modern servers are capable of. We continue to read article after article about "scaling out", but you're right, the vast majority will never have the need to scale beyond just a few servers.


I remember seeing that Stack Overflow runs on half a rack of servers, plus a redundant copy. https://nickcraver.com/blog/2016/03/29/stack-overflow-the-ha...

I bet most people are running sites smaller than Stack Overflow (Alexa rank #65 [did you know Alexa is getting permanently shut down in a few months?!]).


> The company I work for has tens of thousands of concurrent users, plus dozens (hundreds?) of business partners all of whom are integrating with our platform.

> We are not serving static HTML pages like StackOverflow, we have dynamic content and our customers want up-to-date analytics.

Yeah that can all still be done with a simple monolithic CRUD app on just a few servers


This would be the last step predicted by this 2014 (!) talk: https://www.destroyallsoftware.com/talks/the-birth-and-death...

At the time it seemed unrealistic; asm.js was only just invented


It's an excellent talk, but IIRC the point was that hypothetical Metal language was still a subset of JavaScript. From the technology point of view WebAssembly has basically nothing to do with JavaScript, except that both are part of the web platform and commonly implemented by JavaScript engines. It's just a low-level VM which happens to be included in web browsers.


Wasm was definitely inspired by asm.js (though I think asm.js is still faster). The MVP for wasm was basically "do what asm.js does."


The Nuke was COVID, so we need another 5 years or so to release our emotional energy from those C infrastructure.

At least I hope the nuke was COVID.....


Yeah, that talk proved extremely prescient.


Interesting that this subject came up today. A few days ago I was looking back at Capsicum [1], a project meant to bring a capabilities framework to UNIX that I knew about from years back. The impetus behind it being a short review of containerization primitives I was doing. Sadly it doesn't seem Capsicum went anywhere in Linux-land beyond kernel patches. But one of the subsequent projects that pages on Capsicum often point to is the WebAssembly System Interface [2]. POSIX APIs, capabilities and portability sounds strangely apt for microservices, but at this point I'm still not too sure how to think about it. I'd appreciate it if anyone had good WASM/WASI resources for stupid devs like me. In any case I don't see why k8s or Terraform couldn't orchestrate WASI-defined containers. If anything it's a dig at containerd/docker/podman.

[1]: https://www.cl.cam.ac.uk/research/security/capsicum/

[2]: https://wasi.dev


I don’t understand how WebAssembly compares with Kubernetes. It seems like WebAssembly is a nice replacement for JVM / CLR but it does not provide standards for service orchestration.


I'd say that article compares WebAssembly to Docker/moby/other container runtimes, not k8s.


Even so, the scope is different. I don’t use WebAssembly to define networks. Limiting privilege within the WebAssembly runtime is weaker than limiting it at the OS namespace.


It depends on the specific WASM runtime implementation, but usually you have to be explicit about allowing each syscall.


Checkout krustlet https://github.com/krustlet/krustlet

Kubernetes orchestrating wasm runtimes


That's not a great comparison. The author probably should have compared Docker with Webassembly.

Kubernetes isn't really about running many containers sharing physical hardware. You can do that with Docker as well. What Kubernetes offers that Docker does not is self-healing and resilience when managing many containers spread across many underlying nodes. There is no monolithic "Kubernetes" running. Rather, Kubernetes is designed as a set of microservices loosely coupled and maintaining desired states. The resiliency comes from that design.

Because it is architected as microservices, Kubernetes is also extensible. Operators, for example, which manage a single distributed system (such as etcd, or postgres, or mongodb) can only be created because Kubernetes was architected as a set of loosely-coupled microservices.

WebAssembly is the container runtime without the orchestration capabilities. However, Kubernetes is capable of being modified such that it can schedule, orchestrate, and manage WebAssembly containers: https://cloudblogs.microsoft.com/opensource/2020/04/07/annou...


I predict an amusing PoC in a few years, implementing k8s in w9y. It will spark discussions about microfrontends.


It's difficult to compare kubernetes and WASM.

Also WASM doesn't look very mature to me, and chrome and apple can decide to stop supporting it if they want.

I'm also curious how well WASM works on mobile. I wish WASM could replace mobile apps somehow.


Apple and Google have too much power via the app stores to ever allow pure web apps to become significantly viable competitors to native apps.


I just want to make clear here that in reality it is Apple and Apple alone which is holding back web apps from competing with their ecosystem.


Maybe the best pivot ever (since the iPhone was made for web apps [1]).

Can Apple ever switch back? Since their valuation depends on their income from the app store, they are forced to continue their policy.

However, is this a valid long-term strategy? Right now, most apps are by requirement single-purpose tools. Is Apple prepared for the time when there are user-generated apps? Much of what web3.0 can be is inhibited if Apple puts a tax on every transaction and prevents apps that change their source code.

Since the US mobile phone market is dominated by Apple, does this give China a huge advantage for developing the next generation of software?

[1] https://9to5mac.com/2011/10/21/jobs-original-vision-for-the-...


And how would they do that?


I think this article is very much about wasm running on the server rather than in a web browser.

In fairness there's been some very exciting development on that front, mainly pushed by Fastly with their own Compute@Edge offering and the open-sourcing of a set of tools like Cranelift.

I think their use case of cloud functions at the edge is absolutely perfect for WebAssembly on the server, and I'm actually impressed.

I agree the technology needs to mature a bit more. We need broader language support, and then we'll have a very powerful and exciting tool to add to our tool-belt.


Wasm is not only for browsers. There is wasmer, for example.

The author is saying that you will be able to replace docker (I think comparing to kubernetes is wrong) with wasm, because it can run code sandboxed.

Another example is a plugin ecosystem using wasm. Your Go program would be able to load plugins written in C, Lua, or any other language that compiles to wasm.


It seems like a more accurate comparison would be container images and WebAssembly. Kubernetes is an orchestrator that tells containers where to run, how many, etc. For WebAssembly to replace Kubernetes, enterprises would need some other orchestration engine - or conceivably, Kubernetes could run WebAssembly packages in addition to container images.


> Kubernetes could run webassembly packages in addition to container images.

https://github.com/krustlet/krustlet

https://cloudblogs.microsoft.com/opensource/2020/04/07/annou...

“Despite the excitement about Wasm and WASI, it should be clear that containers are the major workload in K8s, and for good reason. No one will be replacing the vast majority of their workloads with WebAssembly. Our usage so far makes this clear. Do not be confused by having more tools at your disposal.”


There is a Node image you can use to run WebAssembly. Assuming everything you do is WebAssembly, using that as a base image would cut down the number of downloads that would have to happen for new applications.


How do you orchestrate your Node runtimes? In containers?


Yup, that's what I was suggesting. If all the containers have the same baseline node image then the only difference will be the app code.


Genuine question: outside of the web browser, what's the advantages of WebAssembly over the JVM or a similar VM?


I'd say the ability to compile other code to it by using it as an LLVM backend. Also, designs like V8 isolates seem to be much lighter weight to spin up than the designs Amazon Lambdas use. Cloudflare has some blog posts on this. I've been fiddling with it recently, mostly at the level of its text assembly format (WAT). The still-in-progress WASI spec gives OS access, like file systems, for server use.

In short, if the JVM works for you in an enterprise type role, I’d see little reason to switch now to WebAssembly. But it’s future path as a possible mainstream compile target and functions-as-a-service type platform are interesting.

Currently performance gains in the browser vs JS appear minimal- mostly I’m guessing because V8 et al are so damn good at runtime optimization.

The fact that WebAssembly seems to have fallen off of the hype train means that it may have a shot at being the future of serverless.

edit: I'd add that its origins in the browser leave it with a "download and stream code to the compiler" design that strengthens its viability in serverless scenarios.
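
That "stream into the compiler" property is a single call on the web side (minimal sketch; the import object and the handler export are placeholders for whatever the module actually expects):

    // Compilation starts while bytes are still arriving over the network,
    // which is a big part of why cold starts can be so cheap.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('/fn/resize-image.wasm'),
      { env: { now: () => Date.now() } },
    );
    (instance.exports.handler as Function)();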


https://news.microsoft.com/2001/10/22/massive-industry-and-d...

> More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET.

But hey, WebAssembly invented the language agnostic backend.


Some WASM interpreters can even run on MCUs.

https://github.com/bytecodealliance/wasm-micro-runtime


Excuse my lack of expertise, but I don't like the web as a platform. Isn't it basically JavaScript (and now WASM) + CSS on the client side and a browser as the VM?

On all platforms web is a second class citizen and the feature set is the common denominator which means it's always lacking something. It doesn't have access to all the hardware, it runs in a VM (which I assume means it is slower) and the dev languages are meh.

I prefer something along the lines of what Fuchsia promises. That is, ephemeral software that gets downloaded lazily and runs natively as a first-class citizen of the underlying OS.


WASM is a new portable runtime with very few capabilities exposed, so it has a much better security model than the web's JavaScript engine.

You write your code in any language and compile it to WASM.

You inject capabilities such as file system access, network access, etc.

There is no UI layer, so CSS has nothing to do with WASM.

Although WASM was invented and developed primarily in the browser, it is now being deployed standalone by companies like Fastly that let you customize their CDN with WASM running at the edge of the cloud via point of presence (POP) data centers.
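
On the server side that capability injection looks roughly like this with Node's (still experimental) node:wasi module - a sketch assuming a recent Node and an app.wasm built as a WASI command; only the preopened directory exists as far as the guest is concerned:

    import { readFile } from 'node:fs/promises';
    import { WASI } from 'node:wasi';

    // The guest sees "/data" and nothing else: no network, no other paths.
    const wasi = new WASI({
      version: 'preview1',
      args: ['app'],
      preopens: { '/data': '/srv/tenant-42' },
    });

    const module = await WebAssembly.compile(await readFile('./app.wasm'));
    const instance = await WebAssembly.instantiate(module, wasi.getImportObject());
    wasi.start(instance);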


The CPU already offers a non-portable runtime with very few capabilities exposed. Then we exposed the capabilities and didn't provide a way to unexpose them. That was the mistake.


Agreed that there are many ways to skin this cat. Operating Systems, Hypervisors, containers, Unikernels, etc.

WASM is a very low level building block that you could build something like Docker out of, but it isn't a high level virtualization product.


If WASI gets GPU support, WASM can also replace HTML + CSS and make the browser essentially a thin compatibility layer for nearly native apps.


I agree! Let's come to terms with the ubiquity lesson of JavaScript and make a proper VM that, for now, interoperates with ubiquitous JavaScript, but also doesn't shut out the evolution of other languages or software concerns at the language level. Let's go down a layer and have a ubiquitous VM. I think the possibilities for this are huge. Before you say "ok but the JVM", "history repeats" - I don't think that argument is fair. The JVM was built at a very different time and has its own niche and its own bloat. Again, we're transitioning from JavaScript first. Great things will follow.


My guess here is that WebAssembly will become more popular for new developments, but I don't see it supplanting containers/Kubernetes across the board.

One of the reasons that containers have been successful is that, in many cases, you can lift and shift an existing application into a containerized environment with no or minimal changes. I don't think that's the case with web assembly at the moment.

I kind of liken it to Serverless, which is great where it fits your application architecture, but hasn't really become ubiquitous the way some predicted it would.


> I kind of liken it to Serverless, which is great where it fits your application architecture, but hasn't really become ubiquitous the way some predicted it would.

This. I would really like to hear the opinion today of all those 2016-2017 gurus that were telling us "serverless is the only future".


Serverless could have been the future. But it’s too expensive and not worth it.


Our backend is 100% lambdas and it's significantly cheaper than running our services on eks.

Can you elaborate on where the cost is? Definitely not cloud spend..

Engineering time? Serverless Stack takes almost all the pain away (vs serverless framework)


> Our backend is 100% lambdas and it's significantly cheaper than running our services on eks.

Could you elaborate? We've got both, but we're not seeing a lot of differential in price.

It could simply be we don't have enough traffic that would drive this correctly.


I can make a function that returns the current time in cowsay ascii art, and claim a "100% lambda serverless webscale backend".

A bit more detail on the scale / general usecase or features would be handy to know.


30 services (so far), maybe 400 lambdas, very little traffic, all gql apis. Pg backends, db per service that requires it, maybe 20 aws accounts (per squad). 50 devs (mix of perm and contract maybe 80/20), established multi million revenue business, not funded externally.


> It's even better in some ways because WebAssembly provides "allowlist" security -- it has no capabilities to start with, requiring that the "host" that runs the WebAssembly explicitly delegate some of its own capabilities to the guest WebAssembly module. Compare to processes which by default start with every capability and then have to be restricted.

In other words, WebAssembly is another step in the saga of systems that try to fill the hole left by the absence of capability based security in the Operating System.

If it gets me a way to get back into the business of developing programs for a desktop, and avoiding worry about details of web servers, tls, caching, and endless browser specific bugs, I'm all for it, especially if I can write a program that uses a PowerBox to get a capability to a local file. At that point, I could just load the file from a disk, or via the web server/browser pipe, and it will work the same.

Eventually, someone will figure out that a capability based OS for the desktop could launch a trillion dollar market for new applications without the need for a vendor specific web store or lock-in, and we'll be off to the races again, like in the 1980s shareware boom.


> K8s itself is an evolution on a previous architecture, OpenStack. OpenStack had each container be a full virtual machine, with a whole kernel and operating system and everything.

Linux containers (LXC) are designed for running applications whereas Virtual Machines (VMs) are designed for running Operating Systems. So, they are nothing alike and any hint to "evolution" is just more complexity layered on by people who have nothing better to do than make things seem complicated.

Not every application needs this type of complexity. Sometimes an idea or application can just run on a regular old VM until such time as it doesn't, and then I put it on App Engine, where it's technically K8s and Docker, except I don't have to think about it.

The amount of complexity K8s and Docker bring is not worth the time and effort required to learn those things well if you're doing your own deployments. I'm thinking back to Docker literally uninstalling itself SEVERAL times on my laptop, leaving itself in a broken state. Sure, I can learn these technologies a little, but I am certain problems will crop up over time that whittle away at the usefulness of choosing them over older, more well-known ways of getting shit done.


I think Linux containers (LXC) are not only for applications, so that's why they are often called "system containers". They are also useful when one wants to 'just' emulate a whole OS without actually emulating a whole machine (VM).

For applications, I think Docker is the way to go.


The trade-off of resource isolation matters, e.g. OS -> Container -> Namespace (wasm).


also layer 0: hardware isolation (e.g. Yubikey)


The promise of WebAssembly is that we can stop writing shitty Javascript and write shitty web apps in slightly less shitty languages, without having to transpile it to shitty Javascript. Ideally this will lead to less overall shittiness.

I don't really think we'll ever actually get there, but that's the hope. And even if we do, we're stuck with the DOM APIs until the heat death of the universe.


Does anyone have any examples of a sort of self hosted way to do functions as a service with web assembly? I know you can do web assembly FaaS with Cloudflare Workers (https://github.com/cloudflare/workers-rs among others) but am unaware of a way to self host.


That would indeed be super useful to extend self hosted automation tools like Huginn. From what I understand, wasm/wasi modules don't have full networking or filesystem access support yet. So you probably have to rely on specific runtimes to provide hooks for doing that, and regular code won't work out of the box.

1. https://github.com/suborbital/reactr

2. https://wasmedge.org/


It's still early days, but as far as I understand it, one of Suborbital's explicit goals is to commoditize wasm-backed FaaS platforms: https://suborbital.dev


This is amusing to me because CGI is basically self-hosted functions-as-a-service.


So now (or in the near future) you can run WASM functions on a WASM FaaS platform running in a container running on K8S running in a VM on a cloud platform, and achieve what you could do in 1993 with Apache CGI and Perl 4.x!

Progress!


The part about the startup performance of WASM being good - yeah, nah. It's another bytecode language, much like LLVM IR, so it needs at least one compile pass, like Java/.NET does. I wouldn't be surprised if Python/JS beat it in this metric in interpreted mode. It's a far cry from mmap-ing a binary blob, and jumping to the entry point.


It's similar in that it adds even more overhead to just running your software to make your life just a tiny bit easier. You don't need to think about files and networking if you can't do any IO, after all!

I don't see how WebAssembly would be any different from just running a JAR file or similar in terms of ease and usability. It's just a new method of writing bytecode that can do less. Bytecode interpreters already struggle to make efficient use of things like SIMD and this is just another layer on top of that.

What's the point of using low-level languages if you're going to compile them to semi-bytecode? You can already set up virtual machines with a predetermined state of any kind (see Firecracker, for example); there's no need for a WASM overlay.

WASM is a cool technology, don't get me wrong, but in my opinion this is a stupid way to use it.


No, the old Java Application Servers rebranded for Java haters.


So instead of Enterprise Java Beans (EJB) we'll get Enterprise Webasm Workers (EWW)? Might have to work on that acronym, though.



A Java application server could run only Java, had really no control over the application (e.g. it couldn't kill it if it used too much memory or throttle its CPU), and provided hard-to-use APIs.

The idea was great, but it was premature and not well thought out.


Nope, it could run any language that targeted JVM bytecode, and HP-UX Vault already provided container-like support for Java Application Servers.


> any language that targeted JVM bytecode

Any language similar to Java. Small subset of languages.


So IBM TIMI then, or dozens of other examples that I can pull out of the hat since 1961.

It is a big crufty old hat full of legacy papers and old scanned computer history.


If I recall correctly, Saša Jurić mentioned in an interview that he envisions the BEAM being used as an OS in the future, where you can run your database, web server and other services under one VM.

While I don't personally have much use for it (at the moment), I'm curious what this would look like with Lumen [0], where you can run your whole web stack with a single binary, internally supervised by the BEAM.

Lumen's development has been slow but I remember seeing a tweet from a team member stating that development is slowly ramping back up (not a criticism in any way, just providing some context).

[0] https://github.com/lumen/lumen


This is topical for me because I'm working on a distributed Webassembly runtime that aims to combine the best parts of Kubernetes and Erlang.

The big benefits:

* Much easier deployment. All you need is a small .wasm file, usually hosted in a registry similar to Docker's. No need for messing with cumbersome Dockerfiles. No need for tooling that audits Docker images for vulnerabilities. No huge image downloads for each service.

* Language interop. Progress is slow, but interface types [1] will soon provide a standardized way for type-safe abstract interfaces between Webassembly instances. This makes cross-language interop seamless and very efficient.

* Much smaller applications. You can move expensive dependencies like http servers/clients, database clients, etc into the runtime and let the environment provide those. This leads to way fewer dependencies, so you don't need to constantly update your apps and aren't harassed by Dependabot PRs. More importantly: smaller attack surface and less opportunity for compromises.

* Security. Webassembly is sandboxed and has a much smaller attack surface than sandboxing a whole operating-system syscall layer (aka Docker et al). Wasm also naturally lends itself to capability-based security with handles that are passed around, so you can even scope permissions inside your application code (hint: reference types). There's a small sketch of the capability idea at the end of this comment.

* Easier local development. It's much easier and quicker to download and start 100 small .wasm based services than it is with (often large) Docker images / Kubernetes setups.

* Quasi-instant startup (after the wasm is compiled, or with an interpreter)

* Less resource usage. It's relatively easy to freeze and persist Wasm instances to disk when they aren't used. Or do things like seamlessly migrate them between nodes. Or snapshot and resume them locally for debugging.

* In-browser usage: you can even just run the same code in your browser anytime you want. (The runtime I'm working on can provide a mostly complete dev environment just in the browser, which works mostly identically to server deployments, excluding obvious deficiencies like no direct network or storage access.)

---

Of course things aren't just rosy. Some downsides/risk factors:

* There are quite a few proposals that are crawling through the standardization process. (GC, more control over the stack, tail calls, interface types, WASI, multi threading, module linking, ...). Several of those would be really important especially for interpreted languages.

* Language support. Rust, C and C++ work. This is not so much the case for other languages. As mentioned above, the spec leaves a lot wanting for running interpreters, and few of the popular languages can be AOT compiled. There is a SpiderMonkey WASM build that makes running JS feasible, but it's of course much slower than a regular JIT-enabled V8.

* Tooling and ubiquity. WASM is basically the next iteration of serverless. And there are many areas where serverless is frowned upon. I feel like this is mostly a tooling problem though that can be solved by a good ecosystem and a powerful runtime.

---

All in all I do believe WASM has the potential to be a very prominent (and hopefully the preferred) way to deploy a lot of applications, eventually.

[1] https://github.com/WebAssembly/interface-types/blob/main/pro...
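
For the capability point above, here's a minimal sketch of what that can look like today with an off-the-shelf embedder such as wasmtime (not my runtime's API; the module name and function names are made up). The guest only gets the host functions you explicitly link in: a single logging function here, and no filesystem or sockets unless you deliberately wire those in as well.

    use wasmtime::{Engine, Linker, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        let module = Module::from_file(&engine, "service.wasm")?; // hypothetical guest

        // The only capability we grant is a logging function.
        let mut linker: Linker<()> = Linker::new(&engine);
        linker.func_wrap("host", "log", |level: i32, code: i32| {
            println!("guest log: level={level} code={code}");
        })?;

        let mut store = Store::new(&engine, ());
        let instance = linker.instantiate(&mut store, &module)?;
        let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
        run.call(&mut store, ())?;
        Ok(())
    }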


Is the "syscall layer" for this distributed runtime also designed to be distributed? I don't think WASI and POSIX-like abstractions lend themselves to be scheduled across a cluster. For instance, there's no way to mark a newly launched thread as not requiring shared memory. I see a lot of benefit in designing a syscall interface from the ground up for a distributed context.


How will you create the small .wasm file - probably with some kind of source code and build script, right? Is that not equivalent to a Dockerfile?

How will the runtime know which dependencies to provide, and how won't it be similar to Docker layers?


The thing I'm not understanding is why WASM runtime is that fundamentally different from an OS? Isn't the OS already supposed to securely isolate the different programs? Shouldn't we try to fix that instead?


This was comprehensive, thanks! Lightweight deployment and better security (than configuring seccomp) are both interesting features. Could you please link to the product/project you're building (of course if it is open)?



Not public yet.

I'll post a SHOW HN once it's ready for an initial open source release.


Shipping self-contained modules: this whole thing sounds like reinventing what the JVM is.


Nobody has mentioned Deno. They're also building an architecture similar to WASI, but with a focus on JavaScript applications. The future is super-isolated processes on a giant server instead of the container craziness we have today!


Isn't that all a "container" is anyway? There is Docker's image ecosystem (which I think is a pretty big success) for the content that will be in the initial bootstrap of the isolated process, but beyond that, containers are just very isolated processes, no?


Yes, a container is just a different kind of process. For that matter, so is a VM. All the supposed differences are just implementation details, like the difference between buses and trains.


I am building a compute layer for Observablehq which enables services to be brought up using nothing other than a web browser. It's a bit too soon to call it a K8s replacement, but the motivation was the complexity and lagginess of bringing up services on cloud or K8s.

The web + on-demand infrastructure is the distributed replacement for K8s.

https://webcode.run/

WEBCode is about eliminating environments and the difference between backend and frontend, to radically reduce the number of tools necessary and to enable new development workflows like live debugging production traffic with the vanilla Chrome debugger.


Many misconceptions in the article. It is a good example of an apples to oranges comparison.

Kubernetes is first and foremost an orchestrator.

Docker is an environment. You can run Redis, proxies and all kinds of infrastructure in Docker containers. Or your own binaries.

The article is making a valid point about distributing binaries. That's all; nothing new here. Whether WASM will conquer the world, we'll see. The potential is there. But WASM in the browser was already overhyped. Hype seems necessary in our times but is clearly not enough.


WASM is like K8S in that 90% of projects don't need it.


It makes sense to compare WASM to Docker or other container runtimes, but not to Kubernetes.


Trust me, no. One is something that can be used in production by a small team of developers; the other is Kubernetes and needs to be taken out back and shot, mainly for the huge gulf of support that exists between a mock VM and Amazon support. There's a tonne of people who just want something in the middle, to build their infrastructure and not be _forced_ to rely on the cloud nonsense.


Seems this has some potential to simplify the server-side stack. Hopefully it won't just be yet another abstraction layer piled on top, increasing complexity? I can well imagine running WASM components managed by a WASM runtime, running in a container managed by K8s, running on a VM managed by a cloud provider's orchestration system.


A project that's kind of an implementation of this idea: https://github.com/lunatic-solutions/lunatic.

I find it interesting at least, but I haven't had the time yet to play around with it.


Kubernetes is my personal Afghanistan.


I'm very excited to see more people are jumping into this vision.

I commented about this a few times here in Hacker News [1] [2]. I believe WebAssembly/Wasmer will become the new Docker (fun fact: the name Wasmer comes from Wasm + Docker) and there is a greater opportunity for it to become the base for the future Kubernetes for the Edge.

Excited for what's to come!

[1] https://news.ycombinator.com/item?id=27158187

[2] https://news.ycombinator.com/item?id=26271806


Yes, but way better, even if it's not apples to apples.

- Only big companies need Kubernetes (as Kelsey Hightower said, only use it if you're building a platform and solving a million-dollar problem).

Meanwhile, desktop apps that run in the browser (games, graphic design, etc.) can benefit from wasm. Also, if you want your service to run users' code, it's better to use wasm so they can choose whatever language they want that can target wasm.


Today? No.

Too much JavaScript glue code to write.

Tomorrow? Hell yeah! It's perfect: the desktop is no longer the lowest common denominator when building cross-platform solutions!


The WASM/container comparison doesn't even make sense, let alone WASM/container-orchestrator.


Multiplexing between several WASM modules is exactly how services are organized on https://fluence.network

Each peer in a p2p network can host multiple services and expose their API to the network. Each service, then, is a collection of WASM modules that interact with each other in an FFI manner (a rough sketch of that kind of composition is below).
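
This is not Fluence's actual API, just a generic sketch of module-to-module composition under assumed tooling (the wasmtime embedder, with illustrative inline WAT): one module exports `add`, the other imports and calls it.

    use wasmtime::{Engine, Linker, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        let mut store = Store::new(&engine, ());
        let mut linker: Linker<()> = Linker::new(&engine);

        // A "library" module exporting `add`.
        let math = Module::new(&engine, r#"
            (module
              (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))
        "#)?;
        linker.module(&mut store, "math", &math)?;

        // An "application" module that imports math.add and exposes `run`.
        let app = Module::new(&engine, r#"
            (module
              (import "math" "add" (func $add (param i32 i32) (result i32)))
              (func (export "run") (result i32)
                i32.const 2
                i32.const 3
                call $add))
        "#)?;
        let instance = linker.instantiate(&mut store, &app)?;
        let run = instance.get_typed_func::<(), i32>(&mut store, "run")?;
        println!("2 + 3 = {}", run.call(&mut store, ())?);
        Ok(())
    }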


Sounds a lot more like WebAssembly is the new JVM. WORA, security, portability, edge, etc.


The new Docker, rather. We would still need something like Kubernetes to orchestrate.


Time to investigate WASM security holes for a next-generation security firm.


> But, in WebAssembly you get a few more things. One is fast start.

This is one of the key things I think is differentiating versus the container world. The idea of per-request instantiation is possibly within reach. This is a possible return to a very old, very simple architectural pattern: cgi-bin! Running a process per request, which we left behind because creating processes was expensive. Our new architectures (discussed in this write-up) revolve around long-running applications, which are forced on us for performance reasons. But WebAssembly seems well suited to very fast instantiation of new contexts. It re-opens the possibility that we can return to smaller, more discrete chunks of work, safer little units of execution. We could go back to an Apache HTTPD-style prefork world, the very old [1]! (There's a tiny sketch of this compile-once, instantiate-per-request pattern at the end of this comment.)

This is part of the gradient we've been walking down, it feels like: VMs -> containers -> lighter-weight containers. Returning to finer-grained processes.

For anyone interested in a deep dive into some of the super neat systems programming going into making WebAssembly starts/instantiation fast, there's a bunch of active work to reengineer module instantiation, using memfd [2][3]. Really fun to see this getting hashed out.

> *So one answer to the question as initially posed is that no, WebAssembly is not the next Kubernetes; that next thing is waiting to be built, though I know of a few organizations that have started already.*

And as others have recommended, it's worth noting Krustlet [4], which allows running WebAssembly instances via Kubernetes. This could definitely make sense for a lot of our workloads. I think a lot of people also expect, though, that the cloud infrastructure itself will get rewritten or recompiled into WebAssembly. Whereas today we tend to wire systems together over protocols, via HTTP or gRPC, we'll be able to more directly stitch together pluggably modular systems, where a generic control process to govern (I dunno, pick an example) load balancers, say, has a bunch of specific WebAssembly modules plugged in with the specific actuators/configuration for your setup. There's real potential here to fulfill much of the idea of microkernels: decomposing big applications into looser assemblages of generic code, drivers, and configuration.

[1] https://httpd.apache.org/docs/2.4/misc/perf-tuning.html#comp...

[2] https://github.com/bytecodealliance/wasmtime/pull/3697

[3] https://github.com/bytecodealliance/wasmtime/pull/3691

[4] https://krustlet.dev/
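
And the promised sketch of the per-request pattern, assuming the `wasmtime` and `anyhow` crates and a made-up handler.wasm with a `handle(i32) -> i32` export and no imports. Compilation happens once; each "request" gets a throwaway Store/Instance, which is the cgi-bin shape without the process fork. Any timings you'd see are illustrative only.

    use std::time::Instant;
    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        let module = Module::from_file(&engine, "handler.wasm")?; // compiled once

        for request in 0..5 {
            let start = Instant::now();
            // Fresh Store and Instance per request: isolated state, cheap to create.
            let mut store = Store::new(&engine, ());
            let instance = Instance::new(&mut store, &module, &[])?;
            let handle = instance.get_typed_func::<i32, i32>(&mut store, "handle")?;
            let out = handle.call(&mut store, request)?;
            eprintln!("request {request} -> {out} in {:?}", start.elapsed());
        }
        Ok(())
    }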


Anything that runs 1's and 0's will do!


"a software virtualization substrate"


I have an irrational hatred for Kubernetes based solely on people writing "K8s". I understand that this is my problem and not theirs. I am, in a word, triggered.


I have a general dislike for Kubernetes because most folks don't need its complexity.


It’s like saying that beans are the new cars.


Overhyped, over engineered, and overly used?


Not "overly used" yet.


"not" yet then


Web3 isn’t going to be about crypto or the metaverse. It will be all about WASM.


This sounds like the generic comparison websites that compare just about anything. Kubernetes is for _managing_ containers.

Btw. Betteridge's law of headlines holds.



