Announcing WCGI: WebAssembly and CGI (wasmer.io)
148 points by syrusakbary on April 6, 2023 | 115 comments

Next thing you know each wasm assembly will need a package format to ship assets with and have the app server provide common resources to all assemblies, e.g. db connection pools, some notion of security, etc.

Replace Wasmer with a JVM-based app server and WASM assemblies with JVM bytecode. The big difference is the source language doesn't matter, as long as it can be compiled to WASM bytecode.

We're heading in circles in a lot of ways

Indeed. The JVM did a lot of things right, however they missed three that are now solved with Wasm:

* Completely tied to an ecosystem, and incompatible with another (you could not run C programs in the JVM)

* Proprietary (vs based on an open standard)

* They couldn't run in the browser seamlessly

I'm still flabbergasted that all these people, in the year 2023, think a hypertext markup document viewer with a terrible UX and bizarre design restrictions that takes 4GB of RAM to run and re-implements the features of an entire operating system is the end-all be-all of technology. If it doesn't run in a web browser it's worthless.

I can't even come up with a metaphor for it. We're choosing to be stuck with shitty antiquated technology because it's easier than making something better. It's depressing. Like a world that never got past the horse and buggy. Large engines powered by steam would require additional investment in refining of steel and making giant cast or forged parts... easier to just stick with the horse.

The browser "won" on seamless distribution. The browser is all the things you said and worse. It conflates the developer API (which could be as complex as the collective human comprehension will allow) with the execution environment (which must be as simple as possible and understandable to mostly anyone). I have a dream that the browser will turn inside out and start losing APIs that will be reimplemented on WebAssembly. But WebAssembly is moving too slowly e.g. with regards to tail calls and parallelism. So it's probably just a dream from now[1].

[1] https://www.youtube.com/watch?v=zlQEQQSqZ9g -> it's an old dream that others have had before.

Well, that's partly why WASM exists.

It can be run outside a browser, and is much more performant than using the whole browser. The future is very likely going to consolidate around WASM and WebGPU, regardless of what hardware you're targeting. If you want more performant specs, it will be driven through a public and consensus driven way... there are far too many economies of scale to standardization for it not to be.

The days of building an ecosystem around a closed, proprietary language/protocol/spec are over. The browser was just the first to bridge the gap... now we move on to WASM, and maybe 10 years from now something newer



Yeah. You’re right. THIS will be the time that open wins. When most people’s primary computing is running an OS developed by one of two companies that have a strong incentive to maintain walled gardens.

Largest /s imaginable.

Your comment is pretty far into LARPing territory.

Heard of Kubernetes, Docker/OCI, and CNCF? A crapload of computing now is running within Linux (open) in containers (open). Sure, end users use macOS and Windows as the base OS, but a lot of programs they interact with now are running in a browser or Electron (all open standards-driven things). WASM’s future is as a much more performant and lightweight alternative to containers, to the point where it could be run and used by anyone. Containers require a shitload of configuration to be able to run them, but a wasm module can be packaged up in a native assembly that requires zero runtime setup for the end user. Just install and go.

You could say the same thing of Go apps, but they certainly haven't taken over computing. They got a lot of attention from certain corners, but I think everyone acknowledges now that it's just another compiled language. That doesn't solve all your problems, it just solves one. Maybe WASM solves two problems by not needing to be cross-compiled. But there's still a thousand problems left, many of which are addressed by containerization, but not addressed by WASM.

The innovation of "docker containers" as a total solution is more akin to a POSIX standard than the Go language. Containers only became popular because of Docker, not because of containers themselves. Docker is a solution to a dozen different problems. "A container" is really just a chroot in a unique namespace. Again, that's basically just one problem solved. The functional combination of all the features of container orchestration software and interoperability is where the actual value of containers lies. Not in the container, but in the 12 different problems that are now all solved in a universal way by all the different solutions that deal with containers in the same way.

If WASM can evolve an entire ecosystem and standard around all the problems needed to be solved to run software easier, then sure, it could be revolutionary. We'll see if that happens in a way that is easy to use. My bet is it won't happen.

The browser/web is already open. People are writing cross platform apps in browser containers. Open is already the standard. WASM simply closes the performance gap between “widely accepted open standards” and native.

You seem confused… fewer and fewer people are writing software in OS locked apis/contexts. Back in the day it was much closer to 100%

Apple's close ecosystem nonsense and blind worship of vendor-lock-in by the tech community is partly to blame for it.

Unfortunately, the world is scheduled to end (or reboot if you prefer) at 03:14:07 UTC on January 19th, 2038. [0]

So, that means that the remake of an OS that we call a browser is likely to be the pinnacle of computing for our lifetimes.

Hopefully after the Great Reboot of 2038, the next generation will learn from our computing mistakes, but since we've never learned from those that came before us, it's highly unlikely. But at least they will have to start from scratch, so there's a chance!

[0] - https://en.wikipedia.org/wiki/Year_2038_problem

> Like a world that never got past the horse and buggy

At lunch one day a colleague explained that the size of the space shuttle could be linked to the width of two horses walking on a Roman road :-)

One of my favorite lunch breaks of all time.

Give us the recap


I'd be very leery of trusting the specifics too much. E.g. the reference to "Roman war chariots", which are not a thing the Romans ever used.

Snopes entry: https://www.snopes.com/fact-check/railroad-gauge-chariots/

Nice, so dude was actually just reciting an email FWD he studied to sound interesting.

God, I had forgotten about those email FWDs. The old people I know totally abandoned email.

That’s a bit cynical and dickish. Maybe he just read it, found it enjoyable, and remembered it? God.

Funny that most people seem to have forgotten that browsers used to ship with Java support. Not saying that was a good thing, but 20 years ago you could run JVM apps in the browser without issues. Also, there are dozens of language runtimes for the JVM, e.g. Ruby, Python, Golang, Javascript, Scheme, .... And regarding proprietary software, Wasmer is a for-profit startup that seems to offer open-source tooling with the hope the ecosystem will standardize on it and give them opportunities to monetize that. So not that much unlike Java I'd say.

> but 20 years ago you could run JVM apps in the browser without issues

That was definitely not my experience at that time using a linux desktop.

I’m sure that compared to doing anything else on a Linux desktop it was just fine. You made your bed.

It was not. It was demonstrably worse. It had its own janky UI that barely worked, it failed to understand the filesystem most of the time, and it failed to integrate cleanly with any other devices on the system.

Everything _else_ was able to do this just fine, in particular its main competition, Adobe Flash. That was actually _almost_ a first-class experience and I didn't need to track down and install "icedtea" to make it all work. It just worked out of the box.

The JVM promised "write once, run anywhere." You can blame me for that failure, if you like, but it was then, and always was, a flat lie. You simply choose to ignore this.

Funny that you forgot about all the security issues of JVM applets that Wasm finally solved

And now you can run a wasm jvm: https://leaningtech.com/cheerpj/ (there are others, too)

Like this one?

  #include <stdio.h>
  #include <stdlib.h>

  int main() {
    char *s = "world";

    /* writing into a string literal: undefined behavior in C */
    s[0] = 'o';
    s[1] = 'w';
    s[2] = 'n';
    s[3] = 'e';
    s[4] = 'd';

    printf("Hello, %s\n", s);

    return 0;
  }

Just compiled and ran it on my system within Wasm, my computer is still fine. Your point?

The point of Wasm is - programs can and will malfunction (and some will be malicious), so we have to protect the environment that runs it.

I believe a plugin needed to be installed first. Fuzzy memory says the JRE installer set it up for you.

You can't run c in wasm.

You can compile c to wasm and run that.

In the same way one could compile c to java byte code, write a wrapper program to allocate the "heap", disable the gc and execute it in close to the same way it executes in a wasm runtime.

It's not an open standard, but the JVM spec is publicly available and intended for multiple implementations.


I would also mention that, especially in the past, the JVM sandboxing was not great. Which is why Applets were such a problem.

Not just in the past, arbitrary program sandboxing is not possible even today.

Exactly, yet everyone pretends wasm is the exception for some reason...

Wasm is an exception. It’s designed from the ground up to have an extremely simplistic model of execution that prevents common exploits like buffer overflows and such. The WASM runtime only exposes “safe” things and at an incredibly low level. Everything has to be built on top of these abstractions. The JVM is not architected the same way, the VM has a bunch of APIs exposed to it and you have to attempt to constrain those down to make a program secure. WASM starts from a secure by default position and you grant it capabilities as needed to accomplish the task. If all you need is to take in a string, do some computation, and output a string, then you can run a WASM module with zero disk or network access whatsoever, no ability to integrate with processes, environment variables, or anything else on the system. Trying to run a program on the JVM or CLR but prevent it from reaching out of itself by blacklisting certain functions or something is an exercise in futility.

WASM was designed before spectre happened, so no it's not the exception. In-process sandboxing cannot be securely designed anymore. You can endlessly chase security exploits with increasingly expensive and convoluted workarounds, but that's about it.

> It’s designed from the ground up to have an extremely simplistic model of execution that prevents common exploits like buffer overflows and such.

So is the JVM and CLR and dozens of prior runtimes for that matter.

WASM for guests is mostly a security regression as all the design focus was on protecting the host. Things in the runtime got thrown under the security bus (such as no ASLR)

> The JVM is not architected the same way, the VM has a bunch of APIs exposed to it and you have to attempt to constrain those down to make a program secure.

You're confusing concepts here. The JVM bytecode runtime has very few APIs and WASM has no inherent requirement on being minimal or capability-based. WASI, which is what's being used here, has neither of those design attributes, for example. It's just regular POSIX apis. No permission system + massive API surface, yet still WASM all the same

I don't know about the limitations of sandboxing but I am curious. How does one break out of the JVM's or wasm's sandbox?

The approach is the same in both cases - find an exposed API surface with a privileged implementation, find some way to confuse it, exploit.

In a few cases there were JVM exploits that weren't based on that approach, almost always involving reflection or JIT compiler bugs.

The people saying WASM is easier to sandbox than the JVM are sort of half right and half wrong. The hard part of sandboxing is exposing safe APIs to the sandboxed code. WASM solves that by simply not exposing any APIs at all. This essentially punts the sandbox construction to the user and will allow WASM vendors to claim a good security track record, which they will get by not doing very much.

On the other hand, the OpenJDK guys are retreating from providing any sandbox at all and are taking it out of new Java versions. So you'll end up with mandatory exposed APIs that don't even try to be safe.

Neither approach is really all that great if what you want is the ability to run a useful set of normal-ish programs in a safe way. GraalVM has its own sandboxing features which look like a decent compromise, and you can still use process or VM sandboxing on top.

I meant decent sandboxing on the JVM is not possible, that’s the advantage of WASM.

There was a time when JVM ran in the browsers.

Yup, via Java Applets. But they were a pain to use, that's why I intentionally wrote "seamlessly" :P

As far as I could tell, the big pain point was performance -- they were just too slow on median machines during the late 90s/early 00s window they had mindshare.

I actually enjoyed using them to get around a few browser limitations around the mid-to-late 00s, and they seemed feature/performance competitive with Flash unless what you were doing fit the media authoring model closely. But by then people were skeptical about Java and if you had to do anything to get it installed they wouldn't, and the direction was native web.

The Java Kernel project sorted that out.

They were better than Flash (which needed a weirder runtime, and couldn’t interact easily with the DOM, and didn’t have variables until version 3). This is hugely improved over JavaScript at the time (pre-XHR) and had less lock-in than ActiveX.

GWT allowed running it without applets, and was also much more efficient than any current stack (since they don't support the Closure compiler).

I spent years working with GWT, it's not nearly as simple as you describe. It's not running "Java" in the browser, it's compiling down (a subset of) Java to Javascript.

That said, GWT was ahead of its time in a lot of ways, but it had warts galore.

I recall around 2010 or so, when a company I worked for was creating a new, rather ambitious, web application. I had to argue against using Java Applets in favor of standard web technology for several components. Thankfully I won that battle.

That web application is still in use, in production, today.

> you could not run C programs in the JVM

Not entirely true, but of course there has never been any official support. http://nestedvm.ibex.org/

Memory usage too, right? A C++/Rust wasm won't consume a hundredth of the memory a JVM application typically uses. I like getting the job done on a 512MB RAM VPS. Java the language might be cool; the bloat and forced GC can be spared.

Java can be fast and memory efficient. The problem I see with many Java programs is that they're built on top of layers upon layers of frameworks. Web server frameworks on top of multithreading frameworks with database ORM frameworks and microservice frameworks all calling in and out of each other. Stack traces that end up accumulating 30 or 40 calls before actual product code even begins.

Tuning the garbage collector often helps, modern Java has excellent GC options that will make most software just run better. That's often not an option when you're stuck with JRE 8 because of the curse of legacy code, but modern JREs have made significant progress in both memory management and general performance optimizations.

Port the same abstractions to any language and you'll get very similar performance issues. I've seen plenty of NodeJS applications crash because they grew out of the 2GiB memory limit I set up, and those node processes weren't doing anything that I deemed worth more than half a gigabyte of RAM either.

The JVM is somewhat aggressive in claiming RAM as memory usage grows, but that often leads to a performance increase in the comparisons I've done. Setting parameters (-Xmx / -Xms) can often reduce the amount of memory used significantly at the cost of slower application startups.

JVM ran in relatively little memory back in the day. I remember running applets on ~16 megabyte systems. Compare that to today where you need a half gig of RAM to even launch a browser.

Right, the internet cafe computers I used to hang out at in the early 2000s had around 128-256 MB of RAM, and Java/Flash used to run just fine. Although Java Applets usually seemed to take longer to download, and used to wait a lot of time to start the JVM after that.

Just like a JVM for embedded development won't.

It is incredible how with so much Java hate, the WASM folks are doing their best to replicate everything we had in 2005.

Java still doesn't have value types AFAIK. Meanwhile WASM started out as a no-GC, contiguous-memory VM.

Comparing WASM and JVM is like comparing a truck and a bus because they are relatively the same size, move at the same speed etc.

I mean sure you can load people in trucks and cargo into buses - and both have been done with JVM (eg. people built C compilers) and WASM (people are building GCed runtimes on top of it despite lack of GC support from platform).

Almost 30 years later nobody sane is running C on JVM and there were many attempts posted here over the years.

Nobody sane should be running C, regardless of the target platform.

You'd have to delete your operating system and most of your libraries.

Thankfully not all OS are written in C, and many are making efforts to fix that mistake created by UNIX release into the industry.

Similarly, many managed compiled languages don't depend on C.

Finally, cybersecurity legislation will speed those efforts.

Outside of the kernel it's a lot more C++ than C

Sure, this fits the way software evolves on the circle of dumb. It goes something like this: What a great idea!... (a little later) Hmmm. That's a problem... (later) We need something to run apps in the browser. I wonder if WASM would work... (now) What a great idea!...

Software development always goes in cycles. "Apps" were great now maybe not so much so...

In the late 80s/early 90s, the CEO of ETA Systems (a supercomputer company) had a vision that by 2000, the world would be split between supercomputers and workstations. I have seen some evidence that people are considering that once again... The Circle of Dumb is always with us in software land.

As long as it's refining and making at least some progress then it's more of a helix than a circle

I see it as more of a conical spiral, like water going down a drain.

Honestly, I just think Java was ahead of its time. I hated sites with applets because they felt slow to startup and run. That problem is long gone with modern computers.

We are not replicating Oracle’s lawyers, sales agreements and non-disclosure policies.

The hate against Oracle is beautiful, when WASM only exists due to politics against PNaCl, and without Microsoft, Google and Apple there is no WASM standard anyway.

All angel companies that do no evil.

2005? Maybe 1996. That's when I ran (and wrote) my first applet.

> Next thing you know each wasm assembly will need a package format to ship assets with and have the app server provide common resources to all assemblies, e.g. db connection pools, some notion of security, etc.

You mean in a OCI image, a bit like this? - https://docs.docker.com/desktop/wasm/

and then deployed using Kubernetes? - a bit like this - https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-po...

Multi-language support is just one of the selling points; you didn't mention the other two: sandboxed security with explicit capabilities, and high performance. E.g., I have a WebAssembly module running on the server (Fermyon) that needs explicit capabilities defined to even make a network call to a Twilio endpoint or read a local file. That means you can run a random WebAssembly module that you don't trust with confidence, just like you can typically open a random website in the browser without concern. By contrast, you can't say that when running a random Java class that you don't trust.

I was just going to say, this road leads to OSGi...

The delivery of assets sounds like a job for Bindle which is already integrated to Spin, which looks to be more similar to python's WSGI than this



Bindle is one of the worst specifications of a container format I've ever seen, I would personally recommend anyone to stay away from it.

No wonder they are ditching it in favor of OCI...

Don't forget, a mail server will be included at some point since that always seems inevitable.

PHP was compiled into WASM, so now you can run PHP apps "as WASM". How is this different from just running PHP, without WASM? Apparently it's faster, but also they make this claim:

"Picture running Wordpress and not having to worry about attackers breaking into your system"

uh, so, you sprinkled some WASM magic on some code and suddenly several decades worth of security research is obsolete? .....yeah, I'm gonna call bullshit. Compiling code or "running in a sandbox" does not stop attackers from breaking into your system. Might slow them down for a few months while they develop some new attacks.

Compiling a service to wasm to protect it "from itself" (ie from untrusted data) has trade-offs.

On the one hand, you lose ASLR and other security features designed for native code. On the other hand, your program becomes immune to stack smashing, so arbitrary code execution becomes a lot harder for an attacker (at least that's my understanding).

But arbitrary code execution hardly features on the OWASP Top 10 for 2021:

  - Broken Access Control
  - Cryptographic Failures
  - Injection
  - Insecure Design
  - Security Misconfiguration
  - Vulnerable and Outdated Components
  - Identification and Authentication Failures 
  - Software and Data Integrity Failures
  - Security Logging and Monitoring Failures
  - Server-Side Request Forgery

If a modern hacker wants to exploit a web app, their first thought isn't shellcode.

The fact that the address space starts at 0x0, combined with no ASLR, seems like a pretty major regression in security. Nearly all SIGSEGV crashes just became memory corruption bugs, for example. That's huge.

And of course WASM won't protect from just regular ol' logic bugs like SQL injection or similar.

Compiling things to WASM to make them "more secure" seems bonkers backwards. WASM's priorities are to protect the host, not the guest.

Does it become harder or easier?

  #include <stdio.h>
  #include <string.h>

  int bad(char* src) {
    int admin = 0;
    char buf[2];

    /* overflows buf; on a native stack this can clobber admin */
    strcpy(buf, src);

    printf("%d\n", admin);

    return admin;
  }

  int main() {
    if (bad("ADMIN!!!")) {
      printf("we're admin!\n");
    } else {
      printf("we are not admin\n");
    }
    return 0;
  }


If you’re worried about securing server side resources by hardening the client side, you’re doing something wrong.

At any rate, the example you give is arguably a better case than what we have right now with JS — anyone interested in subverting client side access policies can poke around trivially in the JS globals, they don’t even need to decompile anything, they have a DOM inspector and a REPL to make exploits that much easier to pull off. If security through obscurity is your thing, compiling to WASM could only help.

Again, though, client side shenanigans shouldn’t be of any concern anyway.

Edit: I thought I was replying to the other comment you made within the context of client side execution, but I can touch on that too.

Programming errors (and resulting exploits) aren’t something unique to WASM, and there’s nothing stopping you from using something like Go or Haskell or Rust as a source language, if you want something more safe than C.

The WASM sandbox is a lot stricter than running native code, or many other VMs (Java/dotnet). The difference seems to be in the design process: most normal VMs are built around the idea of quickly developing and deploying an application that can do everything a native application can do, without having to deal with platform differences. WASM was built around optimizing specific parts of web pages and was explicitly designed NOT to have a huge surface area. By default, you set it up to be little more than "memory goes in, memory comes out" without having to worry about file system restrictions or sockets like you have to with alternative VMs.

I'm not a fan of the way these features are being tacked back onto the runtime, but the approach of having to opt into them makes the security boundary a lot more transparent. By disabling the file system (i.e. baking a fake one into memory) you can prevent whole classes of attacks. You won't get surprised by your logging library making network calls (log4j) if your sandbox never even enabled networking in the first place.

I'd much rather see easy and safe alternatives (like using containerization APIs) but there's still no easy way to tell your computer "run this program with a maximum amount of memory, CPU time, no network access and no file system outside this directory" that doesn't come with tons of caveats for preventing escape. For PHP, setting up a secure systemd configuration (more secure than the one that comes with your package manager) can be done, but it's still far from easy.
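For comparison, the systemd hardening alluded to above can be sketched as a drop-in unit fragment (an illustrative sketch; the directive values and the service path are made up, and the exact set you need depends on the service):

```ini
# e.g. /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
MemoryMax=512M              # hard memory cap
CPUQuota=50%                # at most half a core of CPU time
PrivateNetwork=yes          # no network access at all
ProtectSystem=strict        # whole filesystem read-only...
ReadWritePaths=/srv/myapp   # ...except this one directory
PrivateTmp=yes              # isolated /tmp
NoNewPrivileges=yes         # no setuid privilege escalation
```

Every one of those knobs exists, but getting the combination right without breaking the app is exactly the "far from easy" part the comment describes.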

> Compiling code or "running in a sandbox" does not stop attackers from breaking into your system.

Correct - PHP is a scripting language with managed memory, quite different from C. Probably not all, but most WordPress vulnerabilities have to do with issues like sql injection, bad configuration defaults, etc, all in business logic. All of these will continue to exist when compiled to Wasm.

Assuming you actually have access to a database in the first place, while wasmer may have some vendor locked in solutions for it, generally with Wasm you won't get beyond SQLite for the near term.

edit: this was completely wrong, TFA is talking about server side WASM

I think the point is that a simple-enough application can be delivered as WASM and run entirely in the user's browser, so there wouldn't be any server-side system to break into? So one could ship e.g. wordpress + db + content in one bundle, and the user would be none the wiser. A wild claim, and probably self-defeating for anyone who needs to protect their content.

Otherwise, the WASM-dust at best moves the security boundary to a different service.

It’s already been done!


(By my colleague Adam!)

I think this experiment becomes interesting thinking about times you really want WordPress to be headless (like demoing a plug-in or theme), but less for serving websites to people. Potentially also useful as a testing development environment.

I was surprised at how fast it is, since it basically becomes an SPA.

I don't think that's what they mean, Wordpress is a server-side app with a database backend, I don't think you'd want to ship your entire Wordpress database to the browser.

No, CGI is an interface for backend web servers, this is not about client code.

> Consider the challenge of running PHP programs on servers. We have two primary options:

> 1. Wrap the PHP interpreter with a layer that instruments each HTTP call

> 2. Use the existing php-cgi program and simply compile it to Wasm

> Option 2 is not only faster, but it also enables any web application on Wasmer more efficiently.

I’m confused. This seems to be suggesting that php-cgi, which has to initialise the PHP environment every time, would be faster than the likes of php-fpm, which, well, I understand and presume it has significantly less overhead per request, though I’ve never benchmarked it.

I have PHP 5.6 installed on my VPS for one old site, and it takes around 27ms to start¹ (compared to under 30μs for just plain `echo`, as a closer indicator of actual process spawn overhead). PHP 8.2 might be faster, but it’s still going to be much slower than `echo`.

By simply compiling php-cgi to WASM, it will surely be doing all that initialisation for every request. Because CGI starts everything from scratch for each request, it’s inherently less efficient. In theory you could coordinate a time to snapshot the process/VM/whatever, forking from that point, but that’s not CGI any more.

All up, what they’re claiming is so completely contrary to what I would expect (and without any explanation or justification whatsoever), and kinda follows the “dust off something old to laugh at it again” trope, that I’m honestly having to check that it’s not the first of April any more (the article is dated the 6th).

So as I say, I’m confused. Option 2 seems very clearly slower and much less efficient, by the very nature of CGI. No one targets CGI (it’s been basically dead for… I dunno, close to twenty years?), because CGI is considerably worse than the alternatives. Can someone enlighten me? Have I missed or misunderstood something?


¹ Measured by running this in zsh and reading the “total” figure (across sixteen runs, I got between 2.671 and 3.032 seconds):

  time ( for i in {0..100}; do php56 <<<'<?="."?>'; done )

The comparative echo test uses `echo -n .` and takes one thousandth as long.

To me this seems a little closer to the architecture of AWS Lambda than OG CGI, though that is not a perfect analogy either since this is in a WASM runtime within their server process, rather than a separate process. But the programming interface is a handler function you provide with an interface that looks like this in Rust:

`fn handler(request: Request) -> Response `

My understanding is the main function is called only once, and registers that handler. So `main` is where you'd initialize the majority of the environment, and no that is not truly CGI; definitely no process is being created for each request, but it may be the case that this is more like FastCGI where you have a pool of single-threaded runtimes all setup that way that can handle requests.

This still seems inefficient compared to a threaded or event polling process that can handle multiple requests concurrently without having to marshall data back and forth, but I'd think it can get closer to that than FastCGI or Lambda do.

You’re misunderstanding it. This is just recompiling CGI-speaking binaries to WASM, meaning that it’s effectively spawning a new process for each request. Being WASM it’s not a new native process but just a new instance of the WASM module, but in practice process-spawning is not the slow part, but what happens inside main, which is being run for every new request.

I've always loved the simplicity and flexibility of CGI.

To check my understanding: since CGI just takes a raw request over stdin and returns a response over stdout, would a WCGI wasm module be compatible with WAGI[1] and vice-versa?

[1] https://github.com/deislabs/wagi

To me it has always felt like an underspecified hack, but maybe I am talking out of ignorance (I did read the RFC, though). I think it's a strange idea to get a bunch of arguments from the environment and/or stdin, parse the whole of it, and then try to programmatically output it all by printing to stdout. No wonder people have come up with template languages that are HTML supersets and that work with a preprocessor.

I don't use CGI but when I do I like the simplicity of Haserl (Basic template language + any interpreter, lua by default) : https://haserl.sourceforge.net/

That's right, both WCGI and WAGI are currently compatible!

Things might evolve a bit differently in the mid-term, but let's see what the future holds :)

Matt Butcher who used to work at Deis, which developed WAGI, and Microsoft after it was acquired, now has a new startup, Fermyon, which has something called Spin which I think uses a new protocol that's different from CGI. https://www.fermyon.com/blog/introducing-spin

That would make sense, anyhoo. CGI is plain text which I don't think is optimal for this stuff.

Hey Ben, why so many comments spamming Fermyon on this thread? Do you have any relation with them?

The amount (and evolution) of acronyms in the WASM space is kinda overwhelming so I might be out to lunch…

At the top of the article it says “…compiling them to WASI”, but is that a semantically/technically correct statement? My understanding would be that it should say something like “compiling them to WASI-compliant WASM”. Or can you actually “compile to WASI”?

WASI is just WASM outside the browser; it kind of implies what you're saying. It's still WASM, just adhering to a specific interface.

Like when you say "HTTP API" you don't necessarily need to change it to "TCP HTTP API" as it's somewhat implied (although maybe a shitty example, as HTTP is starting to appear over more things than just TCP as of late)

WASI is also WASM inside the browser :), since there's a JS/WASM implementation of the interface: https://www.npmjs.com/package/@wasmer/wasi

This is genuinely exciting - the prospect of running Wordpress without the usual security concerns is a game-changer. WCGI seems like it could really disrupt the server-side development landscape. Can't wait to see what other applications will benefit from this technology!

Full disclosure, I work at the company behind WCGI, but I truly believe this is a groundbreaking development that will have a significant impact on the industry.

Full disclosure, I have only minimal understanding of web assembly, other than using C functions inside a web browser. I run wordpress in a read only docker container, what better security could WCGI bring?

Here are the main differences with the Docker strategy:

* If you want it to be usable, you will need to ship it with some mechanism that allows running CGI over http (kind of Apache or Nginx), so your container would be bigger than the Wasmer package

* Regarding security: Docker containers need to rely on hardware virtualization to run securely (via KVM or similar), in addition to virtualization at the syscall layer (which depends on the crun layer that you use)

Because of that, Docker containers have the downsides of running only on one chipset/OS, being bigger, and being slower to start up (even if you use the state of the art for running them, aka Firecracker, you still get 250ms vs < 1ms with Wasmer)

> you will need to ship it with some mechanism that allows running CGI over http (kind of Apache or Nginx)

Is wasmer stable and secure enough to be exposed to abuse of the entire Internet?

> your container would be bigger than the Wasmer package

The first Google hit for "docker php nginx" is https://hub.docker.com/r/trafex/php-nginx - they claim their Docker image is 40 MB compressed, whereas Wasmer for amd64 (latest from https://github.com/wasmerio/wasmer/releases) is an 80 MB tar.gz (unpacks to a 300 MB tar). Even with larger images, like the `wordpress` image (200 MB), the size difference is negligible.

> Because of that, Docker containers will have the downside of: being able to run only in one chipset/OS

You probably don’t need to care about architectures other than amd64 and arm64. Both are supported by the trafex/php-nginx and wordpress Docker images.

> (even if you use state of the art for running them, aka Firecracker, you still get 250ms vs < 1ms with Wasmer)

Starting a fresh VM for every request doesn’t make sense, so this difference wouldn’t matter in real life.

> whereas Wasmer for amd64 is a 80 MB tar.gz

Wasmer ships everything by default, including 3 compilers (LLVM is the big one!), which adds most of the size. However, the Wasmer runtime in headless mode weighs only about 2 megabytes.

What's more, if you include only one compiler (just Singlepass) instead of 3, it would be on the order of 5-10 MB.

Stay tuned, because if you are on macOS/iOS you will see even smaller binary sizes!

I'm going to ask a very ignorant question:

Is it possible to spin up a 'container' (or whatever you're calling the VM) of a site for each individual user? So if you have a high security requirement on the data, you spin up an individual container of said site that only serves that user and is destroyed on exit, so that whatever the user does cannot affect others?


Another aspect to consider is portability. WCGI, built on open standards like WebAssembly & CGI, allows for easier adoption of security improvements/updates across different platforms & environments. Definitely worth exploring alongside read-only Docker containers!

I don't understand. Why not just compile to machine code and use plain old CGI?

Others have some good reasons to also consider. Also, launching new sandboxes in wasm is supposed to be extremely extremely extremely cheap.

Whereas launching a cgi-bin executable (even a very small libcgi-based one) has a significant cost, requiring a lot of kernel work and context switching.

With WCGI making new "processes" is nearly free & you don't have to context switch.
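The gap being described can be felt even without WASM in the picture, by comparing a fresh OS process per request with a call into a process that is already running. A rough, machine-dependent sketch (a Python subprocess stands in for a cgi-bin binary, a no-op function for an already-instantiated handler; absolute numbers will vary widely):

```python
# Illustrative timing sketch: per-request process spawn vs. an
# in-process handler call. The subprocess is a stand-in for a
# cgi-bin executable; the no-op function stands in for invoking a
# handler in an already-running runtime.
import subprocess
import sys
import time

def time_process_spawn(n=10):
    """Average seconds to spawn one fresh process per 'request'."""
    start = time.perf_counter()
    for _ in range(n):
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

def time_in_process_call(n=10):
    """Average seconds for an in-process no-op handler call."""
    def handler():
        pass
    start = time.perf_counter()
    for _ in range(n):
        handler()
    return (time.perf_counter() - start) / n

if __name__ == "__main__":
    print(f"process spawn:   {time_process_spawn() * 1e3:.2f} ms/request")
    print(f"in-process call: {time_in_process_call() * 1e6:.3f} us/request")
```

On most machines the spawn path lands in the millisecond range while the in-process call is sub-microsecond, which is the orders-of-magnitude gap the comment is pointing at.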

A lot of the excitement around wasm in general is that it could potentially enable a communicating-processes model of computing that would be inefficient today. Even current "function as a service" paradigms tend to retain processes, have warm/cold start distinctions. With wasm there is a potential to have requests spawn not just their sandbox, but to create whole graphs of lightweight sandbox/processes. Sometimes you might hear this described as a "Nano-functions" architecture.

I think the closest thing to "nanofunctions" today are V8 isolates.

As well as sandboxing there’s the potential for better startup performance. Wasmtime have described how they can achieve microsecond startup times using virtual memory tricks to reset and reuse a module isolated between requests. https://bytecodealliance.org/articles/wasmtime-10-performanc...

This is faster than forking a process because there are fewer operating system resources to manage.

CGI starts a new process rather than forking an existing one, which makes it unsuitable for languages such as Python or JS that have slow initialisation times (milliseconds). Wizer is able to snapshot a WebAssembly module to avoid that work, so in combination with the fast startup, that brings initialisation down to microseconds.

Now runtimes are still somewhat slower on WebAssembly than native, and much slower for JITed languages since the JIT cannot run in WebAssembly. But there are many cases where startup time dominates and this will be faster overall for cases where you need per request isolation.

I believe the main selling point is portability and flexibility. Anything written in a language that can be compiled to wasm can now be turned into a web service.

Platform independence: WebAssembly allows you to compile code once and run it on any platform supporting it, saving time and effort when deploying applications across various servers compared to dealing with platform-specific binaries.

Who is deploying the same backend to multiple architectures?

Though many developers may not prioritize deploying the same backend across multiple architectures, the platform independence offered by WebAssembly and WCGI can still be advantageous for migration, development efficiency, and keeping up with changing tech trends (serverless, edge computing).

Sandboxing, I believe is the answer. Portability, too, I suppose. Maybe a long-lasting archive format for older binaries...

Note WordPress has an official WebAssembly build for the browser and Node.js: https://developer.wordpress.org/playground https://github.com/WordPress/wordpress-playground

Disclosure: I'm the creator

This project is different in that it builds PHP to WASI.

Hello Java Servlets.

This looks good, I'd been thinking about putting my little Python program that prints a random line from a textfile onto my Apache server for the internet to enjoy, this ought to enable it nicely. Where would be best to look for examples?
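For what it's worth, that idea fits classic CGI in a few lines. A hedged sketch (the filename `quotes.txt` is made up; under Apache, mod_cgi would run this per request and relay stdout to the client):

```python
#!/usr/bin/env python3
# Toy CGI script: print one random line from a text file.
# "quotes.txt" is a placeholder filename, not from the thread.
import random
import sys

def random_line(lines):
    """Pick one line from a non-empty list of lines, newline stripped."""
    return random.choice(lines).rstrip("\n")

if __name__ == "__main__":
    with open("quotes.txt") as f:
        line = random_line(f.readlines())
    # A CGI response: headers, a blank line, then the body.
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write(line + "\n")
```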

The idea of exposing Python via a normal cgi script is terrifying to me

FastCGI would be better than regular CGI. Spawning and cleaning up processes is expensive.

A bit late; there is already Spin from Fermyon


Fixed. Thanks!
