WebAssembly’s post-MVP future (hacks.mozilla.org)
512 points by steveklabnik 4 months ago | 204 comments



On a semi-related point, TinyGo (a subset of Go for embedded devices) recently added a WebAssembly output target:

https://github.com/aykevl/tinygo

Unlike mainline Go's current WebAssembly output (~2MB minimum file size), the Wasm generated by TinyGo is practical in size, e.g. ~1kB for the toy examples.

This is all leading edge dev stuff too, so updates and improvements are happening pretty frequently. :)


Your comment just made me realize that, while using CPython => WebAssembly for something other than Electron would be overkill, this would be kinda neat for MicroPython.



Yep. If you want to add it, that would be kinda neat. :)


Here's a newly created proposals repo to keep track: https://github.com/WebAssembly/proposals.

> Skill: 64-bit addressing

As a WASM backend implementer in an environment w/ only 32-bit addresses, I hope people don't move to 64-bit addresses too soon. Are we really reaching the limits here already?

> Skill: Portability [...] A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.

Yes please, but dev'd outside of WebAssembly. This is needed for interoperability between languages. Even just strings would be nice. A stdlib interface to rule them all, so to speak. If it were kept modular, avoided a lot of bikeshedding, and had a full test suite, it could be of great benefit even outside of WASM.


Agreed on the stdlib concept. I'm sure people are working on it; the benefit would be massive. Just the amount of mobile data transfer that could be saved by 1MB of software pre-distributed to all browsers would be huge.

I wonder if anyone is trying to do it based on the JavaScript integrity hashes. In theory, if you took the top 100 JavaScript libraries and established a blessed build pipeline producing reference builds that ship with the browser, then any resource loaded with an integrity hash matching one of the browser's pre-installed versions could just be loaded from disk instead. That would let people who can tolerate less frequent library releases use all that functionality without the network-related page load costs, and it would degrade gracefully, since CDNs could still host the file for browsers that don't have it.
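For reference, the integrity-hash building block already exists today as Subresource Integrity; here's a minimal sketch of the loading side (URL and digest are hypothetical):

    // Subresource Integrity today: the browser refuses to execute the script
    // if its hash doesn't match. A pre-bundling scheme could additionally
    // satisfy a matching hash from a local copy instead of the network.
    const s = document.createElement('script');
    s.src = 'https://cdn.example.com/lib.min.js';
    s.integrity = 'sha384-...';   // hypothetical digest of the reference build
    s.crossOrigin = 'anonymous';  // SRI requires CORS for cross-origin loads
    document.head.appendChild(s);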


You can probably build a poor man's version with Subresource Integrity and the DecentralEyes plugin.

https://decentraleyes.org/


> Just the amount of mobile data transfer that could be saved by 1MB of software pre-distributed to all browsers would be huge.

That would be great. I’m normally a fan of multiple competing implementations, but this might be an area where a single common library implementation, developed in partnership by all browser vendors, would be beneficial. All the code would be in wasm so it would be portable by definition. And with everybody using exactly the same implementation there shouldn’t be any sneaky incompatibilities. With a single master repo it could be versioned and updated cleanly.

Just as long as it’s kept small (1MB sounds like a good goal) and there isn’t too much churn -- those are the real challenges.


Yeah agreed, it is extra complicated since it requires coordination between browsers and also enough websites to make it worth it. If it were released today it could take years to register in, e.g., global internet statistics. I guess in theory shared CDNs and caching should be almost as good as well, if major websites could agree on dependency sharing.


64-bit addresses cannot come soon enough. In certain domains the entire data set you want to work with is many times the 4GB limit, and JS has no such restriction, so it's a PITA that WASM does.


It is hard to imagine a tool less suited to working with multi-gigabyte data sets than JavaScript.


Yeah I'm really confused why working with 4GB+ datasets in the browser is a current need. Why does everything end up in web browsers these days? Is it only because developers aren't familiar with anything else?


You might similarly ask why app stores on iOS and Android are such a big deal, and why Steam is even a thing.

It's because deployment is a harder problem than development (at least, it's harder for developers, because deployment is a social problem). The web is a pretty poor development target (though it keeps getting better), but it's a damn terrific deployment medium, and that's a winning tradeoff.


Non-browser programming platforms have terrible portability stories when your app needs many features from the platform, like accelerated graphics, networking, security, etc.


Do people actually use JS for data sets larger than 4GB in the browser?


Browsers don't support it without config flags, so rarely. But there certainly is demand in visualization, games, medical imaging, "data science", etc.

(Also these apps are not necessarily written in JS, it's a compile target too)


Having 64-bit addresses lets you map files without worrying about address space limitations, even if the resident size is < 4GB.


I wouldn't be surprised, with Jupyter notebooks and all that.


Wouldn't that data be held in the language's kernel? When I load a 4GB file into memory in Jupyter, the memory is allocated in the Python kernel, not the browser. Although I guess your point would be valid for iodide.io, which seems to run everything in the browser.


Good point.


Email services offer 50 GB of data storage for free. It's not so crazy that you'd want to load a substantial portion of that data into memory in an "offline mode" or while doing full text search across the database. Yes, there are probably ways to fit it into 4 GB, but I'm writing this post on a computer with 64 GB of RAM - why not write the next feature instead?


You and I are in the minority though. The Web is already using too much RAM.

I'm still seeing laptops sold with 4GB, which would be used up really quickly.



I've been following wasmjit quite closely and they are implementing a PWSIX interface for WebAssembly.


I'm a bit concerned by the addition of garbage collection to WASM. Is there a way to implement it without favouring some types of languages?

It seems to me that GC tuning and memory models are very language-specific. A purely functional language like Haskell or Clojure can make different assumptions about the memory regions, but it also generates a lot more young-generation objects compared to OOP languages.


Well, not implementing GC isn't really a viable option long-term. You have to have some bridge between Web Assembly and objects defined in WebIDL, and, for better or for worse (I think: for better), GC'd objects with WebIDL interfaces are the COM objects of the Web.

If DOM objects can hold on to Web Assembly objects--as for example happens in the case of event listeners implemented in wasm--and Web Assembly objects can hold on to DOM objects, then the only sensible solution to memory management is a GC that can trace both kinds of objects. All other solutions eventually lead to uncontrollable memory leaks (see IE6).
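To make the leak pattern concrete, here is a minimal JS-side sketch (the glue names are hypothetical) of the cross-world cycle that arises today, when wasm can only reference DOM nodes through a JS-side table:

    // Hypothetical glue: wasm refers to DOM nodes by index into a JS table.
    const handles = [];
    function register(node) { handles.push(node); return handles.length - 1; }

    const btn = document.querySelector('#save');
    const h = register(btn);  // the wasm side now "holds" btn via index h
    btn.addEventListener('click', () => instance.exports.onClick(h));
    // btn -> click closure -> wasm instance, and wasm -> btn via handles[h]:
    // neither the JS GC nor manual wasm code can reclaim this cycle alone,
    // which is why a GC that can trace both worlds is proposed.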


Why not implement the GCs themselves using low-level WASM code? That would enable a language-agnostic approach (though it would prohibit collecting garbage over multiple cooperating languages at the same time, which is a less urgent problem imho).


Yes, the target audience for WebAssembly seems to be performance-conscious application authors who use non-GC'd languages, and unpredictable GC is a headline reason for choosing WASM in those circles. So giving authors control over GC would be a fine thing to deliver; it would let real-time app authors still use GC in a controlled fashion.

It could probably be done in a way that allowed multiple GCs to coexist and cooperate.


> (though it would prohibit collecting garbage over multiple cooperating languages at the same time, which is a less urgent problem imho).

That's the whole reason they're adding a GC, how is it not an urgent problem?


You've already got 2 co-operating languages - JS and whatever is compiled to WASM.


True, but once you're using WASM, you probably don't need to hold references to JS objects that point back to the objects created in WASM.

We've been interfacing C with e.g. Python without needing a garbage collector that crosses language boundaries. So why would we need one now?


> We've been interfacing C with e.g. Python without needing a garbage collector that crosses language boundaries. So why would we need one now?

Because C doesn't need garbage collection, it uses 'manual' memory management. This scenario is exactly analogous to the WASM-JS world that exists now (WASM is manual, and JS is GC).

The goal here is to have a GC that's usable in WASM, and WASM is going to be interoperating a lot with JS for many applications, so it needs to cope with that. A better analogy in this case would be IronPython which is Python built on the .NET framework rather than C. In that case, the Python GC is the .NET GC, so that objects can be passed back and forth 'between worlds'.


Because that's not safe. It's only as safe as the Python binding is, which is not good enough.


Define "safe". Are you talking about security or correctness, or both, and please elaborate.


The point of WASM is the ability to safely run any untrusted code in a performant way, that includes seamless interfacing with other modules.

https://webassembly.org/docs/security/


I'm sure that if you can make WASM secure without the GC part, then you can create a safe interface. Yes, it may leak memory (remember, so can a GCed program that forgets to unreference objects), but that's just something you have to deal with when programming at a low level; and languages built on top of WASM can make that a non-issue, actually.

Let's keep WASM simple. It's probably the best thing one could do in view of security.


Can you elaborate on how two WASM modules each with its own GC could safely interface without unnecessary overhead? Why would we want to have this overhead in the first place? It may not be that bad with 2 modules, but you might easily have 20 or (way) more.


I've got multiple specialized GCs within one module, and they're doing fine.

Trying to shoehorn everything into one model isn't something that should have 'asm' in its name.


OK, but could you share the specifics? I'd like to understand. Emphasis on combination of safety (primary) and performance (secondary) - remember that any number of completely unrelated, untrusted, maybe buggy and potentially even malicious modules might be combined. They're being very careful about not forcing languages into one model, BTW.


The kinds of memory guarantees and controls a GC wants for good performance seem to be the kind of things you wouldn't want to give to some random third-party.


I believe the GC integration is far more important for Javascript (or other host language) interoperability than for WASM-targeting languages. Many of them will probably be able to take advantage of it, but anything more special-purpose should still be possible to implement in a custom language runtime.


At one time people thought that you couldn't possibly have a high performance runtime that supported both object-oriented and functional patterns vis-à-vis garbage collection, yet the CLR managed to pull it off with some clever use of ML (pun intended)[1].

[1] https://youtu.be/ZTbyKsw7uIU?t=684


The CLR in .Net seems to support a very broad spectrum of languages, including functional, statically typed, dynamically typed, etc.


I've not used the CLR, but the one thing I noted in Gilad Bracha's recent post on generics ( https://gbracha.blogspot.com/2018/10/reified-generics-search... ) was his claim that the CLR makes implementing and using dynamic languages difficult:

> In systems designed to support multiple programming languages, reification brings a different problem. All languages must deal with the complexity of reification; worse they must conform to the expectations of the reified generic type system of the "master language" (C# or Java, for example).

> Consider .Net, the poster child of generic reification. Originally, .Net was intended to be a multi-language system, but dynamic language support there has suffered, in no small part due to reification. Visual Basic was a huge success until .Net came along and made it conform to C#. And what Iron Ruby/Python programmer ever enjoyed being forced to feed type arguments (whatever those might be) into a collection they are creating?

Not saying CLR's approach is bad; just that it doesn't seem like an example of one-size-fits-all as is sometimes claimed (and the same goes for the JVM, etc. too)


This particular snippet that you quote talks about dynamic languages that want to interop with the rest of the CLR language ecosystem. Because CLR provides a strong static type system that has reified generics, it means that dynamic languages have to devise ways to interop with that system. For example, if you're in IronPython, and you want to create a list that is strongly typed from C# perspective, you have to do something like this:

    # IronPython: building a CLR List[str] so strongly typed C# code can use it
    from System.Collections.Generic import List
    lst = List[str]()
    lst.Add("123")       # must be a str; List[str] rejects other element types
    obj.DoSomething(lst)
However, this is only an interop problem. A dynamic language targeting the CLR that does not care to interop with C# does not have to do anything special. And none of this has anything to do with the GC - that operates on a level far lower than anything to do with generics.

(I would also claim that the quoted snippet vastly overstates the problem, while also misrepresenting it - it has all to do with static/dynamic type system mismatch, and practically nothing with reification. In practice, if you want to interop between C# and Python, you'd just use "dynamic" in C#, and access native Python collections directly - that's exactly the scenario it's intended for. The only time you'd need to muck around with generics from the Python end is when you're calling into a library written with C# - but that's just FFI, no different in principle than having to specify types when you're using ctypes to invoke into C.)


Reification seems pretty unrelated, as Wasm doesn't have an equivalent concept.


Is GC that wide of a problem that tunable parameters could not be provided to support different use cases?


Yes.


I guess I was hoping for a more informative answer. Are there at least categories of GCs, such that a VM could provide the n main GC types and most languages would be able to choose the appropriate ones? Or is GC very specific to each language?


GCs have a lot of implementation specific quirks and properties, so for example a real-time app would be better served by shipping its own tested GC implementation than giving a "short pauses please" hint to the browser API.


Given the dual needs, it might make sense to have two allocation schemes: one that uses the built-in GC and allows sharing/interop outside of WASM (JS, DOM, etc.), and one that is entirely manual (or home-rolled GC) and does not (or makes you set a very specific set of flags which alerts the user). Without that, I'm not sure there's a way to handle memory allocation and deallocation sanely that prevents most memory leaks.


Aren’t they adding GC to Node.js/JS or some part of it (I read somewhere), and/or multithreading/parallel processing? Not really my area, but it seems the market is demanding that functionality if it keeps coming up?


JavaScript has always had garbage collection; it's a dynamic scripting language with no support for manual memory management.


I'm reminded a bit of Java's early days when Sun was able to create the illusion that it would take over the world. WebAssembly has had a lot of early success and its limits are hazy, but it seems like they must be out there somewhere? To speculate:

On the server side, it seems like the Docker format has already won. Maybe CPU architecture portability doesn't matter there and x86 is good enough? Although serverless and edge computing often use JavaScript.

On mobile this seems to depend on Apple and/or Google deciding to support WebAssembly for native apps, which seems unlikely.

Developers need laptops to run their development environment in. I'm wondering if this might just be Linux tools running in a container on your laptop. Chromebooks seem to be moving in that direction.

The revival of something like Sandstorm might be interesting, but that's pretty speculative.


WebASM really only makes sense in the context of embedding untrusted code, which outside of the browser is a concern that largely doesn't exist.

Otherwise you are severely crippling yourself for no real gain. You're going to lag the hardware features by years if not decades, you're not going to get arch-specific optimizations as readily, and you're going to have huge startup costs. You're also stuck with sandbox restrictions that you've self-imposed, making it harder to do things that would have otherwise been trivial or needs to be re-invented.

You could maybe squint and claim you need a portable IR, but given there's only really 2 ISAs in widespread use (x86 & ARM) does that _really_ matter? Similarly you could claim cross-OS support, but again, as there's only _really_ 2 OSes for each target (Windows/Linux for server, Android/iOS for mobile), does that _really_ matter? Critically, does it matter to such a degree that you'd rather severely cripple your own app to get it instead of just compiling it 2, 3, or 4 times?


I agree that WASM is only really interesting for embedding untrusted code, but am not sure that the browser is the only place where you need that. A good example is Cloudflare's Workers, which they talked about at the last SF Rust Meetup.

Once you get a _good_ solution to embedding untrusted code, you might find it turns out to be pretty useful.


Do you really need embedded code for things like Cloudflare's Workers, though? It doesn't seem like it; it seems like each worker could just be its own process using normal process isolation mechanisms. Switching to embedded untrusted code for things like that would actually make a lot of stuff harder: scheduling, resource monitoring, etc. would all need to be re-invented from scratch. Is it really useful to re-invent the kernel in userspace to use a bunch of embedded webasm contexts instead of just a bunch of processes?


Hi. Cloudflare Workers tech lead here.

No, process isolation does not work for us, because it does not scale well enough.

The point of Cloudflare Workers is to distribute your code across Cloudflare's entire edge network -- 154 locations and growing -- so that you can respond to requests at the closest location to your end user. We want to put every customer's code in every location, rather than forcing you to choose a handful. Meanwhile, obviously, not every one of our locations is a mega-datacenter.

This means we need a way to support extremely large numbers of tenants per machine, with relatively low traffic per tenant (because each tenant's traffic is spread out over the world). We need this to be efficient.

This means, we need low memory overhead (to support many tenants per machine), very fast cold-start time (since cold starts are a lot more common in a decentralized scenario), and low overhead to switch between workers (because we're mixing traffic from all our customers everywhere).

Using embedded V8 isolates within a single process gives us 10x-1000x better performance on all these metrics than if we gave a whole process to each customer.

Relatedly, here's a talk I gave at Heavybit about the neat things you can do when you have fine-grained server compute: https://www.youtube.com/watch?v=YZSvJNBZsxg


V8's developer guide doesn't recommend running untrusted code in a shared process with "sensitive" data. How do you know if your different customers' data is sensitive?

https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...


This is a relatively new recommendation on V8's part specifically in response to Spectre-type vulnerabilities.

We've spent a lot of time thinking about and building mitigations for speculative side channel attacks. For example, early on in the project -- before anyone even knew about Spectre -- we made the decision that `Date.now()` would not advance during code execution, only when waiting for I/O. So, a tight loop that calls `Date.now()` repeatedly will keep getting the same value returned. We did this to mitigate timing side channels -- again, even though we didn't know about Spectre yet at the time.
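In other words (an illustrative sketch of the described semantics, not Workers source code), the virtual clock only advances at I/O boundaries:

    const t0 = Date.now();
    for (let i = 0; i < 1e7; i++) {}   // pure computation, no I/O
    const t1 = Date.now();             // in Workers: t1 === t0
    fetch('https://example.com/').then(() => {
      const t2 = Date.now();           // after I/O: the clock has advanced
    });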

Chrome has indeed stated that they believe process isolation is the only mitigation that will work for them. However, this statement is rather specific to the browser environment. The DOM API is gigantic, and it contains many different sources of non-determinism, including several explicit timers as well as concurrent operations (e.g. layout, rendering, etc.). Side channel attacks are necessarily dependent on non-determinism; a fully-deterministic environment essentially by definition has no covert side channels. But, there's no way Chrome can get there.

The Cloudflare Workers environment is very different. The only kinds of I/O available to a worker are HTTP in, HTTP out, `Date.now()`, and `crypto.getRandomValues()`. Everything else is perfectly deterministic.

So, for us, the problem is much narrower. We need to make sure those four inputs cannot effectively be leveraged into a side channel attack. This is still by no means trivial, but unlike in the browser, it's feasible. `getRandomValues()` is not useful to an attacker because it is completely non-deterministic. `Date.now()` we've already locked down as mentioned above. HTTP in/out can potentially be leveraged to provide external timers -- but the network is extremely noisy. A practical attack would require a lot of time in order to average out the noise -- enough time that we can do a bunch of higher-level things to detect and disrupt possible attacks. It helps that Workers are stateless, so we can reset a worker at any time and move it around, which makes attacks harder.

NetSpectre demonstrated that even physical network separation does not necessarily protect you against Spectre attacks. There's simply no such thing as a system that's perfectly secure against Spectre, process isolation or not. All we can do -- aside from going full BSG and giving up on networks altogether -- is make attacks harder to the point of infeasibility. Luckily, we have lots of tools in our toolbox for making Spectre attacks infeasible in the case of Cloudflare Workers.


1. Thanks for your multiple well-informed, articulate comments.

2. "BSG"?


I think your parent is referring to Battlestar Galactica; it had no computer networks, and therefore was not able to be effectively attacked by the Cylons.


Yep. :)

The Battlestar Galactica was the only ship in the fleet that hadn't networked its computers, because the captain was paranoid. When the Cylons (AI) attacked, they instantly hacked all the other ships, but the Galactica stayed under human control and got away.

It's fiction, but I'm honestly really impressed with this bit of writing. It's both an entirely plausible and almost realistic strategy, and it gives the writers an excuse for the crew members to interact rather than let the computer do everything. Whereas on Star Trek one wonders why they bother with a bridge crew -- the captain might as well be punching everything into a computer directly rather than inefficiently giving orders to humans.


Do you see the complexity of V8's multi-tiered JIT compiler as a security risk? Maybe a simple JS interpreter plus a simple AOT WebAssembly-to-x64 compiler would be safer.


Complexity is a risk, but we also care deeply about performance and startup time in much the same ways Chrome does. So we need that JIT.

I'm comforted by the fact that if there's a breakout in V8, Chrome is a much juicier target than we are -- and has an incredibly strong security team that will jump on the issue. I think we would be overall less secure using a lesser-known JS implementation.


Actually, I think Cloudflare is a juicier target, due to a combination of two things:

1. No process isolation

2. Data from lots of users on any given node

Am I missing something?


While I don't mean to deny the risk, which we take very seriously, I do think there's a number of reasons Chrome remains a much more interesting target:

* While process isolation makes Chrome somewhat less interesting than it used to be, keep in mind that process isolation is being rolled out as a defense to arbitrary-read attacks (like Spectre), not arbitrary-code-execution attacks. Executed code can potentially talk to the rest of Chrome and perform actions that a web site is not intended to be allowed to. (At least in the past, this included the ability to read all the user's cookies, though it's possible they've improved on that, I don't know.)

* Cloudflare has potentially multiple tenants in a node, but the attacker has no control over who they are. There's no particularly good way to target a specific end user nor a specific web site.

* V8 vulnerabilities are likely to be memory safety issues, which are difficult to leverage into an attack without an understanding of the process's memory space, which is hard to get without having access to the source code or a compiled binary. The attacker would be flying blind and would almost certainly just segfault or be caught by our layer 2 sandbox (which blocks almost all syscalls), which incidentally would immediately alert us to their activity. (I don't particularly like this as a defense, since I like open source -- but frankly, it is a pretty big barrier.)

* The attacker would be burning their zero-day by uploading it to Cloudflare. We block eval() and the like, such that the only way to run code on our edge is by uploading it through our deployment APIs, which keep a copy.

* Last I checked, Chrome still does not use site isolation on mobile, because the overhead is too high.


That is how IBM i, IBM z, and Unisys ClearPath work, and how a large majority of Android and Windows apps run, along with some iOS ones (mostly watchOS actually), Garmin devices, SIM cards, Blu-ray, Ricoh copiers, ...

If anything, the industry is increasingly adopting bytecode as a distribution format, kind of returning to the early '60s/'70s designs with microcoded CPUs, just with another approach to the final execution.


For the vast majority of code that runs on either desktop or mobile, the latest and greatest hardware features don't matter, and neither do arch-specific optimizations. Startup costs are trivially solved by an OS-level precompilation and caching mechanism. And we already have sandboxes for apps, even on the desktops (both macOS and Windows store apps). Besides, why can't WebAssembly have a different, less limiting sandbox outside of the web?

The real benefit is having a portable bytecode that 1) can accommodate any language out there without constraining their object and memory model, and 2) is guaranteed to be compiled to native code by an optimizing compiler when deployed. Add some basic APIs, and we could have a situation that has heretofore been enjoyed pretty much only by x86 - the ability to compile code, and run it 30 years later as is.


In what language can you just "compile 4 times" to target Android, iOS, Windows, Linux, macOS and any future platform?


Literally all of them? Since you scope-creeped by adding macOS to the list, even though it doesn't exist in the server or mobile space, you'll need to increase that to 5 compiles instead of 4, though.

But C, C++, Rust, Go, D, etc... all support compilation to those platforms. This is a really common thing to do.


JavaScript.


In addition, WASM lacks bindings to the browser environment (or any API for that matter), as its original pitch was improving performance-critical code paths in browser apps; but like you said, it won't be able to reach native ARM ISA speeds anyway. At most, in a couple years, it could hope to do what JavaScript has been doing for over a decade. But the browser rendering model and WebGL will stay the same. And then you still have an uphill battle, hoping Apple and Google will open their mobile platform APIs to WASM apps and agree to a common standard, which I don't see happening. I can't understand for the life of me what people are projecting onto WASM. A new bytecode format for code delivered over the network doesn't solve any problem we're having that isn't solved much better by a native platform API/ABI.


I think nobody is saying WASM is better than a "native platform API/ABI" at solving all problems. It's simply a (potentially) viable alternative for scenarios that are way more common than everyone thought at first. And in some situations, it's the only solution (e.g. you won't get native APIs on the Web).


One awesome thing about WebAssembly is that it makes a lot of sense as an extension language. Instead of building around the JVM or .NET to get access to languages on those platforms and instead of standardizing on one embedded language (Lua, Python or JavaScript), I can use anything that supports WebAssembly and double dip with that same ecosystem working in the browser. And that list of supported languages will only continue to grow since we finally have an alternative to compiling to JavaScript.

Now, I don't think it'll "take over" the way C has, but I also wouldn't be surprised to see it edging out other VMs in many contexts, like the Lua or Python VMs, which may end up porting to WebAssembly instead of trying to justify another dependency for your app.

I hope we'll finally get to the point where I can choose what language to use for a specific task based on the merits of the language instead of making the larger argument that it's worth getting it to work in our existing app.


I work with some people who are working on Iodide [1] which is a Jupyter style notebook thing that runs entirely in the browser, they've already ported Python to WebAssembly [2][3] so you can use it as a language.

[1] https://iodide.io/

[2] https://github.com/iodide-project/pyodide

[3] https://iodide.io/iodide-examples/python.html


Nice.

I've been watching efforts to port Lua to WebAssembly (the official VM looks simple enough, but LuaJIT won't be easy) because I want to have an app that shares code with the browser, and it would be awesome to play with WebAssembly as the plugin system of choice on the backend.

I'm also super excited about Nebulet[1], a micro-kernel experiment that runs WebAssembly in Ring 0, which is pretty neat.

I'm really excited to see where WebAssembly can go. I know many of these projects will peter out, but it's exciting nonetheless.

[1] https://github.com/nebulet/nebulet


> I'm really excited to see where WebAssembly can go. I know many of these projects will peter out, but it's exciting nonetheless.

Absolutely. On the Lua front someone made a PoC to add Lua as a language for Iodide [1] using Fengari [2] which (while I haven't used it yet) definitely looks interesting. Hadn't heard of Nebulet before but will definitely be following it now. Cheers!

[1] https://groups.google.com/d/msg/iodide-dev/ahc4fg8_JLg/llkHS...

[2] https://fengari.io/


That's actually why I'm pretty bullish on WASM. It could become everything Java/JVM has ever wanted to be.

Almost inevitably JS will have the best interop, but with the great trend toward javascriptization of everything, WASM could find its way into all sorts of environments (like the native apps you mentioned).

People are already looking into WASM VMs for blockchains https://www.parity.io/wasm-smart-contract-development/


> On the server side, it seems like docker format has already won. Maybe cpu architecture portability doesn't matter there and x86 is good enough?

Docker has won to a degree, though it still requires complex orchestration technologies like Kubernetes built on top of it, which are rapidly developing and not a particularly stable target.

> Although serverless and edge computing often use JavaScript.

Yes, this is where I see WASM really holding its own. Serverless and edge environments want to be able to have code running in a persistent process that is completely managed by the environment. Right now, they frequently offer just JS, or a couple of language runtime environments like Python as well; for serverless environments, you can generally load native modules if you want to, but it's awkward, while edge environments will only run JS, though now they're starting to offer WASM.

It's in places where you can run multiple tenants within a single process, without paying IPC and process isolation overheads, that the overhead of WASM is more likely to be worth it.

> On mobile this seems to depend on Apple and/or Google deciding to support WebAssembly for native apps, which seems unlikely.

I don't think it would be too unlikely for Google to support WASM for native apps. For one, it would allow them to do architecture independent apps without ART, on Fuchsia, for example. Right now, I think Dart + Flutter compiles to ARM code, but if they want to target a wider range of devices then platform independence could be important.


> Developers need laptops to run their development environment in.

Not strictly true. I spent a bit of time using Cloud9 and found it to be quite a useable experience. It's certainly very handy to be able to access a dev environment from a web browser wherever you happen to be.

https://aws.amazon.com/cloud9/


c9 helped when I was back in college. The desktops there were locked down with preinstalled tools. c9 was better than what the college had, and I didn't have to worry about saving my work. Once I was home I could sync the changes.


> Developers need laptops to run their development environment in.

I hear a lot of developers going on and on about laptops these days but I can't figure out why. Is working out of a coffee shop the norm now or something?


I can't carry my desktop everywhere and I don't always want to be at home.


Personally, I'd rather not take my work everywhere, but even then there's RDP. If you happen to spend a lot of time developing in locations without decent internet, then I guess a development laptop is a good idea.


We just get to work at the office on a docking station, at other company sites, on customer premises, in the home office, and while traveling to customers.


Docker is hard to use. There is still space for a simpler cross-platform solution.


There's a learning curve, sure. I don't know if it qualifies as "hard" though - once you've figured it out it doesn't exactly get _more_ complicated.

Well, unless you add Kubernetes on top of that... then yeah, it gets... complicated.


The biggest weakness of WASM IMO is that it doesn't have its own standard library. It relies on the HTML DOM. This is an awkward, overcomplicated API that tends to differ between browsers and, worst of all, changes all the time. Because of this, WASM will likely remain a second-class citizen in the browser. It will also keep suffering from the problem of code constantly breaking when the DOM changes.


Wasm apps can just target canvas if they don't want to deal with the DOM. That's what they've been doing, since DOM access has been reliant on using JS.


> On the server side, it seems like docker format has already won.

No.


I’m not impressed by wasm yet. One of the reasons JS is so good is because it can manipulate data without doing a full page reload, and wasm will do that, but another important reason is how productive modern JS is.

I’m not a UXer, never was, and I moved into management long before programs even got pretty, so I’ve never even had to pick it up.

Yet I can make a pretty web application with Vue with minimal effort. Add stuff like GraphQL, and wasm looks ridiculously aged before it’s even born.

Of course that kind of defeatist attitude is silly, and I’m looking forward to seeing where wasm goes in the coming years, because if people had just given up, I wouldn’t have been able to make a pretty app in Vue either.


Wasm isn't really for pretty apps you can quickly make in Vue. It's more for those heavy duty apps, games or libraries that most people run natively because of performance and because they're written in languages like C++, Rust, or even Fortran (scientific computing libraries).

Also, JS isn't the best tool for all programs, and programmers would like more of a choice when it comes to things that are more than just your standard web app.


> Wasm isn't really for pretty apps you can quickly make in Vue.

Why not? Developing pretty apps in Vue (or anything in JS-land) is a horrible experience; why shouldn't we use WASM to port some better environment into there?


There are programmers who haven’t worked with anything but the HTML+JS+CSS ecosystem, especially on the front-end. Lots of sunk costs there.

Presenting WASM as a threat will lead to it facing stiff resistance from entrenched interests (amazing that people in their 20s and 30s are entrenched interests, but that’s modern front-end development).

A more gradual, evolutionary, low-hype approach seems prudent, as the platform matures and until/if someone makes the “killer app”.


Yeah, look at what happened to Dart in the browser, which is a shame.


I don’t think it’s horrible; could you point me to something better?

We used to work with first WPF, then Web Forms, and later MVC with Razor, and we very quickly bought Telerik because the standard UI components in .NET are horrible.

Aside from that I’ve worked with Java; I can’t recall what we used before JavaFX, but both were terrible.

In the JS/CSS environment I can literally make an interface that doesn’t look like ass by simply using the standard setup in a front-end framework, with very few lines of code.

I guess a thing like Blazor.net might make wasm competitive, but unlike JS, that requires Microsoft not to drop it in a year, because it doesn’t have the open-source movement behind it that JS does.

I understand why people dislike the package hell of modern JS, but so far, it’s proven itself to be just as trustworthy as any curated library.

On the other hand you’re not tied down in JS. When Microsoft made Entity Framework they wanted people to use it, even though it’s fucking awful. In JS you can very easily replace a component if something better comes along. That’s a great strength.

GraphQL has been really instrumental for us, for instance. We operate apps in low-connectivity environments, so being able to only transfer what is needed is really great.

GraphQL isn’t JS-only, but it takes very little time and effort to set up an Apollo, GraphQL and Vue app, and it’s got very few issues. The .NET integration of GraphQL is still a 3rd-party library that doesn’t really work that well and isn’t easy to use.

That’s the thing with JS, it’s just really productive in the real world.

Blazor.net looks really productive, but only as long as you don’t need it to do something Microsoft hasn’t thought of, and it’s the only WASM integration I’ve seen that is even remotely competitive with JS.


I don't really see a need to replace the DOM. It has some problems, but nearly everything around has at least as many problems, just different ones. Indeed, that's the reason I want to see WASM interact well with it.

The largest problems of web programming are the lack of componentization and JavaScript; we may be able to solve that last one soon!

The thing with JS is that it is a no-batteries, no-novelty, no-shortcut, not very productive language that manages to be both old-school and novel in all the bad ways. You can't just carelessly gather code and use it, because there are no safety boundaries; most code that is not at the frontend (yeah, like Apollo) has no consideration for security, so you can't simply expose it; the entire ecosystem is a house of cards; and the language itself is a pile of WTFs just waiting for you to discover another one.

Besides, the web stack is pretty much controlled by Google nowadays. Yes, they manage development resources better than Microsoft, but I would prefer working on a stable and free environment.


If there's some better environment to be found in non-JS-land, why is Electron and its ilk so popular?


"better environment" - worse is better. Its explained in the comment above you about entrenched interests. Whether js and html are worse in the browser is one thing but I think you could argue strongly that they are worse on the desktop. As a user, the experience for me is pretty bad.


Is Electron really worse though? I use VS and VSC professionally and VSC is the best IDE I’ve ever used.

VS is great too, but its performance is really, really terrible when your documents and settings directory is on one network drive and your codebase is on another. Not exactly VS’s fault, but it’s an IDE for enterprise that can’t function with the most basic enterprise drive setup?

VSC, the electron app on the other hand doesn’t give two shits.

People usually tell me VSC is the one-off unicorn. But isn’t Discord better than TeamSpeak and Ventrilo? Isn’t Slack excellent?

I don’t think sunk cost is a negative when people are building better tools with it.


Electron is popular among developers who mainly know JavaScript. It's popular when someone already has a web app or has to build one anyway. It's popular among people who think it's better for their app to look the same on every platform than to integrate well. It seems to be pretty unpopular among people who have experience writing native apps.


Because all the better environments are not sufficiently portable.

That said, even then, I'll take Qt over Electron any day of the week.


If they're not even portable between established systems, how are we going to port them to a comparatively exotic wasm+DOM target?



Wasm+canvas is an easier target but even if it gets us to the same difficulty as normal desktop targets, portability between normal targets is already a problem.

My reaction to using the canvas instead of the DOM is very, very, very negative by the way. Webpages with fake text that is actually pixels rendered to a canvas are currently mostly restricted to dystopian notions of either the end result of the ad-blocker arms race or else extreme anti-copying measures.


Basically it is Flash all over again, just now it is built-in in the browser.


I think the world MUST start to distinguish between (web) apps and web pages. Slack does not need to be written in JS/HTML. Skype does not need to be written in JS/HTML. They do those in Electron just to avoid having to maintain X different code bases.


Because many don't know any better.


Meh, the issue with Electron is performance. The actual dev environment is decent, like how trivial it is to bring your own abstractions like React. Being able to inspect element and use Chrome's debugger. Being able to use whatever text editor you want. Using a compile-to-JS language. etc.

I'm certainly not using Xcode + Cocoa + the MVC abstraction for iOS/macOS apps because it's the superior way to build software. It has its own warts, like the effort it takes to swap out MVC with some other pattern, or how it comes with no async abstractions -- you'll just be writing callback code like JS developers had to before the Promise.

It's tempting to unmask "ugh, JS developers" as a bunch of idiots, but I think you'd only be kidding yourself if you think your pet environment is some global optimum.


Every time I start doing anything moderately complicated in JS, I rage at how much more difficult it is compared to slamming together a UI in .NET.


Really? Because WPF, Web Forms or Razor all seem far inferior to me compared to Vue or React.

Hell, when we were still building interfaces in .NET we paid a subscription to use Telerik on the frontend, because the standard .NET stuff is absolutely terrible.


Where is Blend for Vue or React?

I never used Telerik, and we only bought ComponentOne once for a Windows Forms project, started before WPF even existed.


WinForms or WPF? I can throw together a (terrible) HTML page faster than I can remember how MVVM is meant to work.


I'm a lover of WinForms, since I cut my teeth on VB6 and MFC... it's surprising how decent you can make things look with a little polishing.

It's nuts that visual layout editors for the web have, if anything, regressed from the WebForms editor in Visual Studio 2008.


MVVM is meant to work the same way as Angular bindings work.

In any case you aren't obliged to use MVVM in WPF, it is just a best practice for easier unit testing and composability. Then again Web Components are only now starting to be a thing.


I do native across multiple platforms and Web, having been coding since the mid-80s, so the experience is not constrained to a single pet.


Because JS runs on the browser, so anything else is less portable.


> another important reason is how productive modern JS is

Some people will argue on this, but I'm actually in agreement.

That said, it is important to realize WHY the JS productivity is so important: We're iterating a LOT. Web is so young, web interfaces are so young, the problems we are trying to solve on web are ever changing, and the devices we use to interface with web are changing almost as fast. We have to iterate again and again to be decent at the problem we're trying to solve only to toss it aside in favor of the new problem.

Some of this is wasteful, some of this is just part of being part of a big technological change. (I'm sure TV, radio, fax, printers, copiers, etc all had similar iterations, albeit at not quite the same speeds). But regardless of the "if we should" aspect, it's important to note that the current JS needs are both (1) currently real - trying to adopt some philosophically "pure" tenet in the defiance of these needs will fail and (2) not necessarily permanent - We can extrapolate into the near future from this, but I'm hard pressed to say much about more than 2 years in the future.

> Add stuff like graphql

Funny, I thought about graphql when the article mentioned HTTP caches, since graphQL _can't_ make use of that (most graphQL implementations use non-GET for everything, so you have to rely on server-side or client-app caching, since you can't rely on the network or browser) and took it as a hit on graphQL, not wasm.
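For anyone following along, the contrast (an illustrative sketch) is between the usual POST, which HTTP caches treat as opaque, and GraphQL over GET, which some servers support precisely to regain cacheability:

    // Typical GraphQL: the query hides in a POST body, so browser and CDN
    // caches can't key on it.
    const query = '{ me { name } }';
    fetch('/graphql', {
      method: 'POST',
      headers: {'content-type': 'application/json'},
      body: JSON.stringify({query}),
    });
    // GET variant (where the server allows it): the full URL is the cache key.
    fetch('/graphql?query=' + encodeURIComponent(query));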

That said, I'm interested to see where wasm goes - short term, we're in agreement that anyone expecting it to be the "Js-killer" is missing the mark, but long term I don't expect it to be dead-on-arrival, just used for not-what-was-expected.


I was iterating a lot with Smalltalk, Caml Light, Prolog, Oberon, Delphi and C++ Builder during the 90's.


I'm not really sure wasm is meant to compete with JS like that though?

To me the main benefits of wasm are that you can:

* make web pages with languages besides JS

* do lower level things not possible with JS (low latency real time games?)

If you already know JS well and want to make a normal website I don't think you need WASM necessarily, although it will enable those who don't know JS or don't want to use JS to make them too.


Once they add GC, threads, polymorphic inline cache, and some sort of stdlib, it will be possible to have popular scripting languages run in the browser without the current problems of download size for the runtime and performance.

So, it may not be completely intentional, but WASM does have roadmap items that may end up competing with JS. WASM starts to look more like a general purpose VM over time.


On the other side I see huge potential for WASM because all the things I worked with in C/C++/Rust that required a client-side install are now available to me on the browser.

Stuff like UnrealEngine running in browser, low-latency audio processing, etc. Lots of potentially cool solutions where you need strong control over allocations and memory placement.


> I’m not impressed by wasm yet. One of the reasons JS is so good is because it can manipulate data without doing a full page reload, and wasm will do that, but another important reason is how productive modern JS is.

You don't need a jet engine to make paper airplanes. Gluing jet engines to paper airplanes would indeed be counterproductive.

> and wasm looks so ridiculously old aged before it’s even born.

Wasm is about achieving near native level performance. For libraries and such. Stuff that your pretty high level js frameworks might use.


Okay, I've only done the over-breakfast skim of this, and I hate to be that guy who comments without having fully read TFA, but here I go anyway...

I don't see any explicit mention of what I understand to be the killer feature: the ability to directly access the browser's WebAPIs, from Accelerometer to Document to MimeType to XPathExpression. Start with modifying the DOM, and go from there. Of course fast WASM/JS interop lowers the cost of the obvious workaround. And yes, there's discussion in the article of getting WASM to play nice with JS GC, which I understand is a prerequisite. But there should be explicit discussion of what the predicted path is for making WebAPIs available in WASM, what the sub-goals are, and how to measure progress.

And the first paragraph of the article is a joke, claiming that people thought the 2017 version was the final version. No, we're waiting for a relevant version (this sentence is an exaggeration, but it's more true than what TFA said).


It seems to be tracked under/after GC, since that would be a prerequisite.

See https://github.com/WebAssembly/gc/blob/master/proposals/gc/O.... Things like DOM access are specifically mentioned.


You can access Web APIs and the DOM from wasm today (from your comment you seem to be unaware of that?). I just wrote a tiny web app in Rust with wasm-bindgen, and it’s pretty good already. With wasm-bindgen, interfacing is pretty manual at the moment -- you need to declare the APIs you need in Rust by hand -- but an app without dependencies that modifies the DOM generates a wasm file that’s only a few KB. My app pulls in quite a few libs and ended up with a ~700KB wasm build -- a bit larger than satisfactory, but definitely in the tolerable range.


I know wasm-bindgen exists, and I presume it works. But it's black magic.

Near as I can tell, it serializes everything through the linear memory; this would have a huge impact on the performance of both ends, would it not? (Particularly if you wanted to do DOM manip w/ it.) I believe this is what the article means here,

> You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.

> There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.
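Roughly, the manual version of what wasm-bindgen automates looks like this on the JS side (a sketch; `alloc` and `take_string` are hypothetical module exports):

    // Encode the JS string, copy it into the module's linear memory, then
    // hand WebAssembly a (pointer, length) pair -- the only kinds of values
    // it understands today. `instance` is the WebAssembly.Instance.
    const bytes = new TextEncoder().encode('hello');
    const ptr = instance.exports.alloc(bytes.length);           // hypothetical
    new Uint8Array(instance.exports.memory.buffer, ptr, bytes.length).set(bytes);
    instance.exports.take_string(ptr, bytes.length);            // hypothetical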


I only spent about an hour on the rust + wasm app for fun, so I treated wasm-bindgen and wasm-pack as black boxes and didn't really try to understand the inner workings. You're probably right about serialization, but I'm not sure it has to copy huge buffers around.

Look, the supported types in wasm-bindgen are somewhat restricted. The full list is here [1]. Other than "atomic" types like numbers and pointers (I assume the opaque types are treated as pointers too), the interesting ones are str/String (basically &[u8] under the hood) and number slices. It's apparently not great if you have to copy these around, but since these are well-aligned, why can't you just pass the starting address and length? Again, I don't know enough about wasm or its current state to say if this is doable. Maybe it doesn't like "someone else's memory" at the moment.

(By the way, wasm-bindgen allows you to access pub fields of structs from JS, but they have to be Copy. That is to say, the opaque types are not completely opaque, but I'm not sure if the pub field access is achieved through implicit getter methods or serialized in the first place.)

Anyway, the frictions might be a big problem if you're doing React-style DOM re-renders all the time, but in my app, I'm only running the computation heavy and memory sensitive tasks in wasm, and communications are few and far between, making it a non-issue.

[1] https://rustwasm.github.io/wasm-bindgen/reference/types.html


What is the meaningful difference between WASM/JS interop and WASM/Web-API interop? Implicit importing of them all? There are host bindings coming [0]. If Web APIs are expressed in terms of JS and you want interop with them, you use what they are expressed in. Or are you saying Web APIs should be available in WASM terms (which is harder due to lack of structural, array, string, null, etc types)?

0 - https://github.com/WebAssembly/host-bindings/blob/master/pro...


The difference is being able to serve some WASM script that interacts with your page without having to send an entire glibc equivalent.

In practice, that means having a standard library in browsers that you can call in a standard idiom.


Dead code elimination in your compiler should be taking care of this, no?


I’d assume you would get a performance (and therefore battery) benefit by not maintaining a js layer, though a js layer is a perfectly acceptable temporary solution.


Careful. Javascript was a perfectly acceptable temporary solution too. Temporary solutions have a way of becoming permanent.


It would kind of be like defining a set of syscalls that anything targeting WASM could use. Right now, JS interop is sorta like being given a DLL that requires an entire JS runtime to support it, whereas direct Web-API interop would mean there is no JS runtime requirement IIRC.


Performance


One might expect that never leaving the WASM context and interoperating with browser APIs, even if defined in JS terms, would not incur JS penalties. Granted it may affect JS argument validation. But this requires representation of those types. For example, if the DOM API allows searching for a selector which is a string, how do you build that string in WASM and how do you pass it? You'd use host refs anyways I would guess, essentially creating a new host "string" which is meaningfully equivalent to creating a JS string.


Native support for WebAPIs is blocked on garbage collection - WASM code will need to claim a reference to garbage-collected DOM/JS objects, and will need to be able to hand off GCed objects and callbacks to JS APIs.


Yes, we know that, and I even said that in my comment, which I'm sure you read.

But many things that were discussed in TFA are blocked on other things that were discussed, and yet still got hundreds of words of article space.


GC is listed under a section "Small modules interoperating with JavaScript", which talks about general access to JavaScript code. Web APIs are a subset of that use case.


(Hoisting to top level from a comment farther down.)

The section "Small modules interoperating with JavaScript", talks about general access to JavaScript code. Web APIs are a subset of that usecase.


Wow. Never did I expect it to generate this much (potential) innovation back when asm.js was announced. Pretty excited to be following along, even though I personally probably won't be writing anything for WASM any time soon - though this excellent series on Mozilla Hacks probably also played a role.


The parallels with the Destroy All Software talk "The Birth and Death of Javascript" [0] are crazy. Seeing the section where they address the possibility of Node modules and system access from WASM is like seeing a flying car advertisement in real life.

[0]: https://www.destroyallsoftware.com/talks/the-birth-and-death...


Re: like seeing a flying car advertisement

Ditto the feeling. I don't "get" WASM from a typical in-house CRUD development perspective. Game makers, maps, movie editors; sure, they need the speed. Some say it's gonna revolutionize most browser-based application dev, but I can't get specific examples that are relevant to us. And even for those domains listed, relying on the inconsistent and buggy DOM found across browser variations is a problem it probably won't solve. DOM will still suck as a general purpose UI engine.

WASM just makes DOM bugs run faster.


The DOM isn't particularly "buggy", relative to, say, Win32 or Cocoa. It may or may not be a bad API, but implementations are pretty solid.

(I have to confess I've never understood the objections to the DOM. I have literally never had an instance in which I had to use the raw Win32 API that didn't turn into a miserable experience.)


With Win32 and Cocoa you pretty much have one vendor with roughly 3 or so major releases. But with browsers you have roughly 8 vendors above 1% market-share with 3 or so (notable) major releases each. Therefore, you have to target roughly 8x more variations of the UI engine.

Look at how hard Wine has to work just to be sufficiently compatible.

I believe we need to either simplify the front-end standards (pushing as much as possible to the server), or fork browsers into specialities: games, media, CRUD, documents, etc. What we have now sucks bigly. Try something different, please!


Interoperability, and the standards process, is how we get specs that are sensible. Whenever I have to program using Win32, Cocoa, etc. I inevitably spend a ton of time reverse engineering how the proprietary APIs work. For DOM APIs, things generally work how they are supposed to work, because they were actually designed in the first place (well, the more recent APIs were).

Wine isn't comparable, because the Web APIs are designed by an open vendor-neutral standards committee and have multiple interoperable open-source implementations.

Your proposals break Web compatibility and so are non-starters. Coming up with fixes for problems in the DOM and CSS is the easy part. Figuring out how to deploy them is what's difficult.


Re: "the standards process, is how we get specs that are sensible." - They are not sensible: different vendors interpret the grey areas differently. A sufficiently thorough written standard would be longer and harder-to-read than the code itself to implement such.

Re: "Your proposals break Web compatibility" -- Web compatibility is whatever we make it. A one-size-fits-all UI/render standard has proven a mess. What's the harm in at least trying domain-specific standards? We all have theories, but only the real world can really test such theories.


Between WASM and modern graphics APIs, we might be able to actually kill DOM altogether. Something like this:

http://blog.qt.io/blog/2018/05/22/qt-for-webassembly/


Let's not do that until we have a way to make non-DOM-based web applications accessible to screen readers and other assistive technologies.


Meta-data can be embedded to describe and categorize content. But accessibility is usually not a goal for many "office productivity" applications (per my domain-specific standards suggestion). Usage of DOM alone does not guarantee accessibility either.

As far as Qt goes, while it may be a good starting point, I don't think direct C++ calls are practical. Some intermediate markup or declarative language should be formed around it: a declarative wrapper around Qt.


> accessibility is usually not a goal for many "office productivity" applications

I think I might be misunderstanding you. Are you saying accessibility is usually not a goal for the kind of applications that people need to be able to use to do their jobs?


I believe there's a reasonable limit to how adaptable workplace software has to be to those with poor vision, etc.


> Some intermediate markup or declarative language should be formed around it: a declarative wrapper around Qt.

QML is exactly that.


What is the state of expanding GPU programming APIs in the browser? It seems that one of the biggest factors holding back the development of media applications for photo/video editing and games is the lack of a modern graphics API and first-class GPGPU support. A while ago there was momentum towards WebGPU and WebVulkan, but these projects seem to have quieted down. We did get WebGL 2.0, but there is still tons of room for improvement and much to be gained!


The Chrome team has also announced an intent to implement WebGPU on most platforms:

https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...

In addition to rich applications for 3D modelling, photo editing, etc., adoption of WebXR as a distribution platform for VR/AR content could drive development. MagicLeap's Asset Builder Web Tool, for example, is built on top of NextJS and ThreeJS.


AFAIK the current WebGL-successor work happens here, and there's very recent activity:

https://github.com/gpuweb

No idea how long until we can play around with it in browsers though.


The threading description is highly misleading: the current proposal being implemented is based on Web Workers, which are not just native OS threads with a small overhead.


That's true for the "threading MVP", but there is also discussion about adding "pure wasm threads" as a follow-up which avoids worker overhead/limitations.


Is there already a performance advantage with the current WASM implementations for number crunching code compared to writing plain Javascript?

My experience is that modern JS engines are already pretty good at optimizing that kind of code, so I'm wondering if there are still significant speedups to be had by using WASM, given that it's still pretty new and hasn't had much time to get optimized further yet.


It depends on whether you measure against asm.js or handwritten JavaScript, and on which browser.

For my emulator stuff (mostly bit twiddling on integers: https://floooh.github.io/tiny8bit/), I saw a very slight improvement of WASM vs asm.js on browsers that have special handling for asm.js (in the meantime I have dropped asm.js, and only compile to WASM, since all browsers support it).

On iOS Safari (which I guess doesn't have special asm.js handling) WASM is a whopping 3x..5x faster than running the same code compiled to asm.js.

The question now is how much slower 'idiomatic, handwritten Javascript' is on iOS Safari versus the same code written in C compiled to asm.js, but I think another 3x..5x slower is realistic.


I have done some benchmarking comparing JS, WASM, and native C++ addon performance in Node.js: https://github.com/zandaqo/iswasmfast

From the look of it, in Node.js, WASM suffers less data-exchange overhead while being decently close to the performance of the native addons.


> Is there already a performance advantage with the current WASM implementations for number crunching code compared to writing plain Javascript?

I may be out of date here, but IME neither JavaScript nor WASM is any good for actual "number crunching" (i.e. statistical or scientific code). For that, you want at least a compiler/language that uses the latest SIMD instructions and lets you choose 32- or 64-bit floats. Better still, you want one that automatically parallelizes loops (you may need to align your data), gives you cheap GPU access, and above all is simple enough that you can figure out why the compiler isn't performing the optimizations you expect.

Modern JS engines are pretty good at speeding up the weirdness that is JS, so maybe WASM will only have small benefits, but using either for numerical work would be making your life unnecessarily hard.


WASM does let you choose between 32 and 64 bit floats. SIMD and threading are mentioned in the article as things that are being worked on now.

GPU access is about browser APIs rather than being something tied to WASM. We have WebGL now, and it seems like there's more coming, hopefully (https://github.com/gpuweb/gpuweb).

>and above all is simple enough that you can figure out why the compiler isn't performing the optimizations you expect.

Reading WASM seems easier to me than reading real assembly, so double-checking a WASM compiler's output seems much easier than double-checking a native compiler's output. Of course it's on the compilers to support WASM well, but hopefully that simplicity aids them too.


"The tail calls proposal is also underway", but the repo hasn't had a commit in 5 months.

https://github.com/WebAssembly/tail-call/blob/master/proposa...


You can deploy WebAssembly to 154 data centers as a part of Cloudflare Workers if you'd like to play with it: https://blog.cloudflare.com/webassembly-on-cloudflare-worker...


The visualizations in this article are excellent!


I believe Lin Clark is the artist; she has a whole series called "Code Cartoons" [0] where she explains various (usually web-related) technical systems, like WebRender, in a similarly excellent style.

[0] https://code-cartoons.com/


>People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.

They do? When people point out that WASM can't do stuff they are simply pointing out that WASM would need to be extended. That is a pretty simple concept and a very important observation because historically that is the phase where everything falls apart.

Creating an intermediate language that all the higher level languages can compile to is not a new idea. It has been done over and over again with little practical result. Some current scepticism is in order.


From what I have seen it looks like Mozilla and Firefox are really leading in WASM. All these detailed posts explaining it in easy to understand terms are amazing as well.


WASM's GC story will be tough to outline; it's going through its own MVP cycle, omitting finalizers & weak references.
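
For reference, here's roughly what's being punted on, in terms of the equivalent JS features (a sketch; the wasm GC MVP would have no analogue of either):

    // A weak reference doesn't keep its target alive:
    const ref = new WeakRef({ big: "object" });
    console.log(ref.deref()); // the object, or undefined once it's collected

    // A finalizer runs some time after its target has been collected:
    const registry = new FinalizationRegistry((held: string) => {
      console.log(`cleaned up: ${held}`);
    });
    registry.register({ big: "object" }, "resource #1");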


I wonder if eventually with some of the stuff like GC in place, you could write a javascript JIT on top of wasm. I'm only half kidding. It would be pretty neat if wasm became a kind of universal IR that different (AOT and JIT) compilers could target.


JIT is pretty hard to do with wasm. There's a strict separation between code and data in wasm, for security reasons. You can't jump into the heap. So the best you could do is to generate your new JITted code as a separate wasm module.
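
A rough sketch of that workaround (the 8 bytes below are just the wasm magic number and version, i.e. a valid empty module, standing in for real generated code):

    // Sketch: each round of codegen becomes its own wasm module, since
    // you can't patch executable code into the linear-memory heap.
    async function jitCompile(generatedBytes: Uint8Array) {
      const { instance } = await WebAssembly.instantiate(generatedBytes);
      return instance.exports;
    }

    // "\0asm" magic + version 1: a valid (empty) module as a stand-in.
    const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
    jitCompile(bytes).then((exports) => console.log(exports));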


That's exactly what I was thinking when I tried a more theoretical approach for my web emulator. I wrote a little about my approach in the wiki [1]. Definitely one of my worst hacks :-). Unfortunately the demo doesn't work anymore; I need to find out why.

[1] https://github.com/s-macke/jor1k/wiki/Breaking-the-1-billion...


Ah interesting, thanks for clarifying. I'm way out of my depth here, but could this become possible if a future version of wasm provided the option to dynamically load code? After some googling, that's what this [1] seems to be talking about.

[1] https://webassembly.org/docs/future-features/#platform-indep...


I used to think that without the DOM, WebAssembly is useless. Then I saw how Figma made their UI super fast by rendering to a canvas, and now I hope WASM will be the reason DOM APIs go out of fashion.


How does this not end up as Flash reincarnated though?


What part of flash was bad?

- the (bad) sandboxing? WASM uses the same sandboxing model as JS, and thus provides the same security guarantees.

- the requirement of an external plugin? WASM comes preinstalled in all browsers.

- the proprietary aspect? Wasm is an open standard.

- the obfuscated nature? WASM can be decompiled very easily. Besides, the JS served by modern websites is already minified and obfuscated beyond recognition.

What am I missing here? I consider Flash to be a truly great piece of technology. It allowed easy creation of multimedia content. Its flaws were in its implementation, not in what it allowed. I want wasm to succeed as a reincarnated, better Flash.


Having to reimplement things such as text selection and the clipboard, and making it more or less impossible to write plugins which affect the pages being shown (adblock, Reddit Enhancement Suite, etc.), are a few huge reasons. I'd prefer to keep the web consistent for sites which don't actually need to be rendered on a canvas.


Adblock works on the network layer, for the most part, so that would continue to work. Furthermore, most content isn't going to migrate to canvas-based rendering, as that's much more complicated to set up.

The GP I was answering to was talking about Figma, a sort of content-creation app in the browser. Being able to write those tools and deploy them via the web is a net positive for the web, just like all the good ol' Flash-based games were an awesome thing for the web.

When Flash was a thing, most web content didn't move to Flash. Not only did Flash behave in a foreign way, breaking users' habits, it was also significantly harder to create Flash content than web content. Similarly, it's delusional to think most web content will move to wasm. The barrier to entry is enough to prevent that.


Accessibility is a big problem.


Multiple implementations, including more than one Open Source implementation.


Displaying is the easier part. Taking input would be trickier.


Also accessibility. But those are easy-to-solve problems worth solving, considering what garbage the DOM is.


AFAIK, Figma only uses WebGL/Canvas for non-UI elements.


Maybe a stupid question but why does the browser itself need to do compilation?

Why can’t WebAssembly developers compile ahead of time and just serve the binary?


This is answered in the article (pretty far down, I'll admit):

>But if all we have is the link, there are two problems here that we haven’t addressed.

>The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.

>Then should a web site have a different version of the code for every possible device? No.

>Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.


Two reasons:

Portability - you would need to know what you're compiling for, in the same way you need to compile a different version of your program for your PC and your phone. You could imagine a system where some pre-compiled versions exist for certain browser/host combinations to make life faster for the majority case, except:

Security - the primitives that WASM has available are limited and only affect stuff in the sandbox. The browser then compiles that to machine code and can know that the machine code will only affect stuff in the sandbox. This is exactly how JS works - unless there's a browser bug, JS can't write to an arbitrary place on your hard disk (for instance). If machine code were provided directly, the browser couldn't easily verify that it only did safe things (because machine code can do anything, unlike WASM).
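
That validation step is even exposed to scripts; a minimal sketch (the 8 bytes are just the wasm magic number and version, i.e. a valid empty module):

    // WebAssembly.validate() runs the same structural checks the engine
    // performs before it will compile a module down to machine code.
    const ok = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
    console.log(WebAssembly.validate(ok));                        // true
    console.log(WebAssembly.validate(new Uint8Array([1, 2, 3]))); // false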


AndrewDucker I think has the main point, but it's also easier to trust and sandbox a binary that you've built yourself.


Because they don't know what they're compiling for. The browser could be running on a wide variety of hardware.


Right, seems obvious now, thanks


Because you don't know the target architecture and configuration.


I'm looking into building a state-management system for React with Pyodide. Yes, it might add 3 MB to the download, but that should be acceptable. It will probably use some kind of asyncio work-alike which uses Web APIs to perform what asyncio normally does.


This is all right and all, but I have one question: why the hell does it need to be inside of a browser? To do what, exactly? I don't care about browsers, I care about applications. I don't want freaking Photoshop in my browser, because I want the browser to die. This should be a part of the OS, not a freaking browser. Give me a built-in runtime with sandboxing, a delivery method, and an AppStore. Give me the next-generation Java/Flash. I'm tired of the browser as an OS replacement. It is not one, and it shouldn't be.


Because it effectively solves the distribution problem in a way which includes non-technical people.

If you think your solution does that, meditate on what "non-technical" means.


Having the URL system also makes them so much better for sharing. Imagine being able to share a link that someone could click, and it would open up your shared image inside of Photoshop.


Android, iOS and UWP do that with deep links.


It doesn't need to be inside of a browser [0]; that's just going to be the most common client.

[0]https://webassembly.org/docs/non-web/



Dude, like, just get with the program, man.


Why is it such a bad thing for a browser to act as an OS?


I don't get this. What would you write in this language?


I don't think you will write in that language. You will compile to it from whatever language you prefer to write in.


It would be a shame if these improvements were restricted to WebAssembly only. For most modern high level languages, JS remains a much better compile target than WebAssembly for the foreseeable future.



