I'm curious whether there are perf differences between a 2D canvas and a WebGL canvas. This project uses just the 2D canvas API, but IIRC passing frames to be rendered by WebGL is faster. Perhaps I'm wrong in this context.
I also don't see threading in here. Makes sense for a demo, but to use this performantly you'd have to throw it all in a web worker so it doesn't block the main thread. This is one point of contention with WASM, because it's not straightforward to render to a canvas/WebGL context on the main thread from a worker thread. OffscreenCanvas is one workaround, but it isn't supported by Firefox or Safari.
Also, Canvas is getting some GPU acceleration in some browsers.
In any case, given how browsers blacklist drivers and GPU models, even WebGL isn't guaranteed to be accelerated.
With these constraints the Web will never be as fast as native for graphics programming.
They took Flash away from us, but forgot to ensure the same capabilities were kept.
My bet is that webgl probably would be, but I'd also be curious to hear from someone with more info.
Seems to basically come down to whether canvas or WebGL does a more efficient version of some kind of memory copy onto the GPU side.
I tried this a few years ago with a Game Boy emulator I had ported from Go to WebAssembly, running the emulator in a web worker.
Getting keyboard input in, in a performant way, was a real struggle using postMessage, though I'll admit I'm not the best at web programming, so someone more skilled might have done it better.
How is keyboard input different from any other sort of data round trip to/from a web worker?
The emulator ran in the worker, and the inputs were handled on the main thread.
The output (i.e. display) was pushed from the worker -> main thread
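For the frame push itself, one way to avoid copying the pixel data on every postMessage is to transfer the buffer instead of cloning it. A sketch (MessageChannel stands in here for the worker/main-thread boundary):

```javascript
// Sketch: transferring a frame buffer instead of copying it.
// MessageChannel stands in for the worker <-> main thread boundary.
const { port1, port2 } = new MessageChannel();

const frame = new ArrayBuffer(320 * 200); // one palettized frame
port1.postMessage(frame, [frame]);        // transfer, don't clone

// After the transfer the sender's copy is detached (zero-length),
// so no pixel data was duplicated in flight.
console.log(frame.byteLength); // 0
port1.close(); port2.close();
```

The trade-off is that the worker has to hand the buffer back (or allocate a fresh one) for the next frame, so transferable buffers are usually paired with a small pool of recycled frames.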
I think if I were to do it from scratch I'd do something similar to the Doom approach here and use requestAnimationFrame etc.
If you're interested, I wrote about it a few years ago but haven't touched it since: https://djharper.dev/post/2018/09/21/i-ported-my-gameboy-col...
It's the same as when you build a Python UI all in one thread: you can't receive user input and also run a long task at the same time.
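One way to sidestep postMessage latency for input (assuming the page is cross-origin isolated, so SharedArrayBuffer is available) is a shared bitmask that the main thread writes on key events and the worker polls once per emulated frame. A sketch; the key constants are hypothetical:

```javascript
// Sketch: sharing key state between main thread and worker without
// postMessage, via an atomic bitmask in a SharedArrayBuffer.
// Key constants are illustrative, not from any real emulator.
const KEY_UP = 1, KEY_DOWN = 2, KEY_LEFT = 4, KEY_RIGHT = 8;

const input = new Int32Array(new SharedArrayBuffer(4));

// Main thread: set/clear bits on keydown/keyup events.
function setKey(mask, pressed) {
  if (pressed) Atomics.or(input, 0, mask);
  else Atomics.and(input, 0, ~mask);
}

// Worker: read the whole input state once per emulated frame.
function readKeys() {
  return Atomics.load(input, 0);
}

setKey(KEY_UP, true);
setKey(KEY_LEFT, true);
setKey(KEY_UP, false);
console.log(readKeys()); // 4 (only KEY_LEFT still held)
```

Because the worker reads the latest state directly, there's no message queue to drain and no per-keystroke round trip.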
It is the easiest way to get into WASM.
Threading requires sending custom headers by the way.
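Specifically, WASM threads are built on SharedArrayBuffer, which browsers only expose on cross-origin isolated pages; the server has to send this header pair:

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```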
> Doom has a global variable screens which is a byte array of SCREENWIDTH*SCREENHEIGHT, i.e. 320x200 with the current screen contents.
It would seem to me that the right approach would be to hoist out Doom's main loop so you just have a renderFrame() function, then put something on the main browser thread to "blit" the image into the canvas itself.
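A sketch of that blit, assuming renderFrame() has been hoisted out and that the screen buffer and a 256-entry RGB palette are readable from JS (the names here are illustrative, not Doom's actual exports): the per-frame work is just expanding palette indices to RGBA plus one putImageData.

```javascript
// Sketch: expand Doom's palettized 320x200 screen buffer into RGBA and
// blit it into a canvas. `screen` and `palette` are assumed to come from
// the engine's linear memory; names are hypothetical.
const W = 320, H = 200;

function expandToRGBA(screen, palette, out) {
  // screen: Uint8Array of W*H palette indices
  // palette: Uint8Array of 256 RGB triples (768 bytes)
  // out: Uint8ClampedArray of W*H*4 RGBA bytes
  for (let i = 0; i < W * H; i++) {
    const p = screen[i] * 3;
    out[i * 4 + 0] = palette[p + 0];
    out[i * 4 + 1] = palette[p + 1];
    out[i * 4 + 2] = palette[p + 2];
    out[i * 4 + 3] = 255; // opaque
  }
}

// On the main thread, per animation frame (browser-only part):
// const rgba = new Uint8ClampedArray(W * H * 4);
// const img = new ImageData(rgba, W, H);
// function loop() {
//   renderFrame();                      // hoisted Doom tick
//   expandToRGBA(screens, palette, rgba);
//   ctx.putImageData(img, 0, 0);
//   requestAnimationFrame(loop);
// }
```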
I agree and disagree. It seems like no one is questioning why we need to use legacy web browsers in between all the code we're executing locally.
It's like a new iteration of old tech like lisp machines, which started out as specific purpose only to grow into complete environments (afaik).
In this regard, we haven't come far, it's just the syntax that has changed.
Also, you have FreeDoom.
DOM access can be achieved with helper libraries which call out into JS. And since any sort of DOM manipulation is extremely slow anyway there's not much of a performance difference even with the overhead of calling out from WASM into JS (which actually is quite fast nowadays).
The good one!
UTF-8, NOT null-terminated, Pascal-like.
i.e. what Rust has.
REPEATING the mistakes of C (and considering the security angle, in a browser!) must be a big no.
So, given this is a fact, the best course of action is to choose the most safe alternative.
And for everyone else? Well, an array of bytes, and let the hosts/callers, who are the only ones who know their own stuff, deal with it.
string ≡ (list char)
Basically, this is "the most safe alternative"??!
A safe one. Your sample is that (except I think it's better if it's a UTF-8 string, but this one works for me too). What would be worrisome is if it were made to be like C's.
Rust (or C++) strings are not Pascal strings. In Pascal strings, the "string buffer" also contains the length information, and historically it was all bytes with a length byte at the start, which was why your strings started at index 1 and were limited to 255 bytes.
It's possible to modernise this style of strings to be less crummy (that is essentially what sds does), but C++/Rust string are a third take where the length (and capacity) are stored separately from the string buffer, and that buffer is always on the other side of a pointer (ignoring SSO, which Rust sadly doesn't have due to the original interface definition).
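For illustration, this is roughly what the host side looks like when reading each style of string out of WASM linear memory; all names here are hypothetical, and `mem` stands in for a WebAssembly.Memory buffer:

```javascript
// Sketch: reading strings out of WASM linear memory on the JS side.
const decoder = new TextDecoder("utf-8");

// (ptr, len) style: the length travels with the call, no scanning,
// and embedded NUL bytes are fine.
function readString(memory, ptr, len) {
  return decoder.decode(new Uint8Array(memory, ptr, len));
}

// C style: scan for the terminator; O(n) just to find the end, and an
// out-of-bounds read waiting to happen if the NUL is missing.
function readCString(memory, ptr) {
  const bytes = new Uint8Array(memory);
  let end = ptr;
  while (bytes[end] !== 0) end++;
  return decoder.decode(bytes.subarray(ptr, end));
}

// Demo with a fake 64-byte "linear memory".
const mem = new ArrayBuffer(64);
new Uint8Array(mem).set(new TextEncoder().encode("doom\0"), 8);
console.log(readString(mem, 8, 4)); // "doom"
console.log(readCString(mem, 8));   // "doom"
```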
So you get all the fun of corrupting linear memory, C-style.
Agreed, but as a consumer of WASM modules, that isn't your choice to make.
> WASM's job is to prevent code inside the sandbox from escaping the sandbox, not to prevent memory corruption inside the sandbox.
That makes it no better than a typical OS process, except it happens to be randomly downloaded onto my computer.
On the other hand, I have to admit I would not have foreseen some of the more recent use cases for WebAssembly,
which can be reminiscent of the "docker daemon running as root" issue.
To be fair, I don't know how many clock cycles creating, destroying or modifying a DOM node costs on average, but most likely "a lot" compared to the overhead of a WASM-to-JS call, because a lot more machinery is involved.
But there's no reason you must write this yourself. Others have done the hard work for you and written libraries.
edit: from the linked article, in case it isn't clear, an HN comment by eich:
"Sure, in userland many languages compile to assembly. Hmm, where have I heard that word lately?"
 - https://news.ycombinator.com/item?id=9554914
The lack of a DOM API is something I sorely miss as well. It's currently possible (and not that hard, you can just interact with JS), but comes with such performance overhead that you lose the entire benefit of WASM.
But as you have pointed out, the missing layer on top i.e. the 'thing we can practically use' is a big gaping hole and it's a little bit diabolical.
The fact that JS has gotten so much faster, combined with the lack of both higher-level abstractions and, notably, a really good 'bridge' to JS, means WASM has lagged in terms of material applicability.
I know it's an existing proposal for WASM, but it feels so massively out of scope. If the issue is having to include runtimes in the WASM binary it might be more useful to think about how we could serve runtimes in a more efficient way.
(At least, that's what it used to be; I haven't been involved in WebAssembly for a long time.)
Now if you want to talk about details of how we implement runtimes that do have observable GC details, like weak callbacks, Java's zoo of reference types, etc, then let's do that, because Wasm GC will eventually need to have low-level mechanisms to support those.
But if we're talking about a Wasm engine GC's ability to allocate, trace, move (or not!) little blocks of memory around, then I don't see any fundamental stumbling blocks to making that mechanism efficient and universal.
So if a future WASM GC doesn't offer APIs for such capabilities, it is useless from those runtimes' point of view.
Java has an API for executing the GC on demand, and VM engineers I have talked to over the years think it's a knob that apps shouldn't have.
Wasm already supports multiple return values, so you don't need to box value types on any boundary--they can be flattened wherever they occur.
Pinning memory has to do with interfacing native code that could potentially do unsafe things. That doesn't fit into wasm's model, and would only be necessary for interacting with platform APIs, which are being designed not to need that. Same for "marshalling to native code".
I don't understand what you mean by GC regions. Realtime Java had GC regions and a complex system for trying to allow threads to run without touching the heap. It really didn't go well. I think if regions are useful for a GC, the engine should do inference of them, because adding regions to the type system infects everything.
GC performance depends more on the program than the language.
But regardless, the hardest parts of getting to advanced GCs, such as concurrent and parallel algorithms, are usually the very deep assumptions of single-threadedness and uninterruptibility that are debt in the runtime. It usually doesn't help that most runtimes are written in C/C++ and suffer that environment's complete uncooperativeness in finding and manipulating roots.
To the point of seeming hostility. It's been how many years, and LLVM still fights against supporting stack maps?
Looks like a lot of the work on the Doom port (https://github.com/diekmann/wasm-fizzbuzz/tree/main/doom) is about getting common functions from the C standard library to work in WASM. Surely this seems like a good opportunity for a new Free Software initiative - something optimized, properly licensed/credited and easy for everybody to use?
Screenshot from Firefox vs Chrome https://i.imgur.com/Af8nTim.png
The "click the blocking button and turn it off" model is much better. It still trains you to turn off blocking when something is broken. However, crucially, that's only when it's broken. When it's not broken, you just use the site, instead of habitually clicking through the permission prompt that's just harvesting data, not actually needed to function.
And yes, malicious sites can of course display themselves as falsely broken until you grant the permissions. But this makes them more annoying to use, granting a UX edge to the honest sites which don't request unnecessary permissions. In other words, the incentives of sites and users are more aligned.
>>> all it does is train you to click through the prompt.
modular blocking prompts were broken, optional prompts are fine
note on controls: 'ctrl' is a bad choice, because ctrl + up/down on a Mac map to window-management shortcuts, making the game unplayable.
Quite a mess, IMHO. (Not that I'm blaming the author).
It reminded me of the "how to draw an owl" meme:
1. Draw 2 circles
2. Draw the rest of the f**ing owl
I assume a proper Emscripten comparison would also need to strip networking & audio output.