Yes, the way most web developers click their projects together these days is a bloated mess. But WebAssembly is not a solution to it.
Just use a leaner stack to output your html+css+js. Throwing Web Assembly into the mix is not going to do that for you. If anything, your stack will become even more complex.
Sure, the web is sometimes used that way currently, but few projects are using full-on Foo-to-JS compilers like GWT or ClojureScript that make the original source entirely illegible when viewed through the browser console.
Instead, projects are mostly using "light" transpilers built to have clean mappings to JS semantics, like TypeScript, where the original source can be recovered and traced by the browser as long as the transpiler generates a source-map.
WASM really won't work with the "source map" paradigm, and so a move to a web where most development uses languages that compile to WASM is a move to a more opaque, less learnable web. (And also one that browser extensions have far less ability to intercede in.)
This just means you won't debug things using a browser console, but using a debugger designed for the host language instead. And that would be a blessing: I'm far more proficient in GDB than I am in the Chrome/Firefox developer tools.
Why won't WASM really work with the "source map paradigm"? Is there something fundamentally different about bytecode that means it cannot be traced back to its original source, given tooling support?
But there's an upside, no? There's no way to reference an undefined variable in Go, so if the compiler is correct, the result will be too, and errors will be caught at compile time.
While you cannot reference an undefined variable in Go, you can still dereference nil pointers and invoke other operations that result in a panic (some of the standard library functions can panic).
It's more predictable when generated from a strongly typed language.
Most of the time, due to the strong typing, it is faster. However, there's a cost involved in compiling, invoking, and passing data in and out of a wasm module. If that overhead is greater than the raw performance gain, there's no discernible difference.
Don't forget about crossing language boundaries. Using JNI can be slower than pure Java code, and I think the same is true for wasm-js interop [1]. When wasm gets "native" access to the DOM and other browser features this may change, but until then, DOM manipulation is probably going to be faster in js.
JNI transitions (at least on OpenJDK and Android) are much more expensive than WebAssembly<->JavaScript transitions. For the latter, the same JIT is used on both sides and there is no need for a GC transition. JNI is also far from as fast as it could be; .NET's P/Invoke is much more efficient (and easier to use).
Mark Reinhold has already stated in a few talks, including at JavaOne, that Sun designed it that way on purpose: they wanted to keep everyone true to WORA without carrying native code around.
Which is also a reason why a replacement is being built as part of Project Panama, initially advocated by Charles Nutter because of the issues it caused the JRuby project.
WebAssembly is gimped because it needs to be compatible with existing JS runtimes. When they add threading support, you will no longer be able to run a WASM shim in JS, but at the same time WASM will become a lot more general-purpose.
Existing JS runtimes actually use shared-memory concurrency a ton, even ignoring the (currently disabled) SharedArrayBuffer. Code generation and garbage collection have at least some concurrent stages on all the major JS engines. JavaScriptCore even exposes this in its API.
Web Workers are multithreading; you just have to use slower, async communication methods. WASM will likely be the same way, given the security considerations.
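For what it's worth, the shared-memory primitives mentioned above are easy to sketch. This is a minimal single-threaded demo of the `SharedArrayBuffer` + `Atomics` API (the same buffer could be posted to a worker without copying; running it all on one thread here is just to keep the example self-contained and runnable):

```javascript
// Sketch: shared-memory concurrency via SharedArrayBuffer + Atomics.
// A SharedArrayBuffer can be handed to a Worker without copying; both
// sides then read/write the same memory through Atomics operations.
const shared = new SharedArrayBuffer(4); // 4 bytes = one Int32 slot
const counter = new Int32Array(shared);

Atomics.store(counter, 0, 41); // initialize the slot
Atomics.add(counter, 0, 1);    // atomic increment (safe across threads)

console.log(Atomics.load(counter, 0)); // 42
```

In a real page the buffer would be transferred to a Worker via postMessage and both threads would coordinate with `Atomics.wait`/`Atomics.notify`; the copying cost of structured-clone messaging is exactly what this avoids.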
But if catching up means taking on baggage and complexity, is that still a win? Technology is a means, not an end. Of course non-native should evolve, but taking a copy-cat approach seems like a poor plan.
Look at the sheer number of requests fetching JS from random websites that happen when loading a modern web page, and you'll see this won't be possible. Web page rendering depends on JS being pulled from many places, often not under the dev's control. So we'll end up with a mix of JS and WASM whether we like it or not.
Initially you'll use JavaScript as top-level glue and put all the submodules in WebAssembly. Then, once it's stable, you just get rid of all the JavaScript. You can even completely abandon DOM/HTML if you like and won't feel a speed penalty for it.
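As a concrete sketch of that glue pattern: JavaScript instantiates a tiny WASM module and calls across the boundary. The module bytes are hand-assembled here only to keep the example self-contained; a real submodule would be compiled from C, Rust, etc.

```javascript
// Minimal hand-assembled WebAssembly module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// JavaScript acts as the glue: compile, instantiate, and forward calls.
// Each such call crosses the JS<->WASM boundary, which is where the
// interop overhead discussed upthread lives.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module, {});
const add = instance.exports.add;

console.log(add(2, 3)); // 5
```

From the JS side the export looks like an ordinary function; the glue layer only shrinks once WASM gets direct access to the DOM and other browser APIs.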
JavaScript as glue and WebAssembly modules is more complicated, though, not less.
The JS ecosystem is a complicated mess, it's undeniable. But once you take the time to learn it, it's fine. In the majority of cases, WebAssembly is just going to give the lazy an excuse to use a language they already know rather than learn something new.
And as for abandoning the DOM... yeah, it's not a great model. But is a dozen incompatible, competing rendering frameworks (that each site downloads separately) going to be good for the web?
> self-satisfaction at learning yet another single-purpose language
is a weird way of describing learning the language native to the web platform. I don't see a lot of people screaming about having to learn Objective-C or Swift to make iOS apps.
> How is this significantly different from where we are now?
Err, because the DOM is a universal standard that comes built into every browser? <div style='background:red'/> looks the same everywhere, with no extra code required. If you're building a new renderer from scratch, you're going to have a lot more code users need to download.
Would it though? I'm guessing something like the OpenGL spec would be standardized across browsers, and `<div style='background:red'/>` would be replaced by something like `glClearColor(1.0, 0.0, 0.0, 1.0)`.
Sure. Clearing the screen is one thing. But if you've done any work in WebGL you'll know that building an entire UI library with it would be a huge, huge endeavour.
But that doesn't seem like something that should be standardized anyway. There is no one way to do web UI, and the current popular paradigm (DOM/CSS) isn't particularly good either. Going to a lower-level gives more control to web developers, and the browser could always cache popular frameworks.
Come to think of it, even with the DOM/CSS UI framework built into the browser, people still use libraries like jQuery, so library caching is something the browser needs to do anyway, WASM or not.
No, WebAssembly is a solution to it. JS just sucks. Actually, so do HTML and CSS. The entire web ecosystem is a closed platform. The reason the front-end stack is constantly bloated is that JS, HTML, and CSS are awful. If the people behind wasm were smart, they'd allow generic handles to a graphics context so that people could write/export other rendering frameworks. Strong typing is good; reactive programming is good. Using HTML and CSS strings is idiotic, and we couldn't build alternatives to this model for the past 30 years.
Please clarify what this means. Standards are open access and browser engines and JS engines are for the most part (Edge, Safari, Chromium, Firefox) open source, so "closed platform" seems, to me, inaccurate.
> If the people behind wasm were smart, they'd allow generic handles to a graphics context so that people can write/export other rendering frameworks.
Do you see how bloated web browsers are already? Do you really want apps to recreate all of this base functionality, or even to encourage such horrific inefficiency? Hopefully I'm missing something, because this seems like a terrible idea.
HTML (or any SGML-derivative, really) + CSS is the only application view format we have in common use that allows for clients of all shapes and sizes to reformat the page to fit them (with reflow rules, browser-default styles, and user-agent styles); and for alternative clients like screen-readers or "scraper"-like API clients to work with the view without the author having considered their needs ahead of time; and, best of all, for client-supplied optional third-party extensions to go into the view and mutate it to suit whatever custom purpose the client desires.
In other words, by mandating that the application view layer interacts with the client by delivering declarative (HTML) or imperative (JS) instructions to populate a semantic document (the DOM), rather than to populate a particular viewport; and by mandating that styling considerations are kept as separated as possible from that document; we can then operate on that document apart from the application's control, and render it the way we want to, or parse it into something entirely else.
A large part of the point of "the web" as a technology stack, is to provide exactly that capability. There's even a name for explicitly targeting such use-cases as a developer: HATEOAS (i.e. making your API endpoints act like web pages as well, in that they're also "documents" embedding hyperlinks to related documents, and form-input descriptions for making queries against the document—so that API clients can "navigate" your API just like browsers navigate your site. In extremis, your API becomes your site—since, if every endpoint is already responding with hyper-documents, your controllers can just have a logic for "Accept: text/html" that renders that hyper-document into hypertext, and attaches styling to it.)
If you don't care about any of that, you don't need the web. You just need a Hypercard viewer with an address bar. Or Lotus Notes.
HTML and the DOM are the current way, forced on us. What if I had a better model? I can't implement it myself. The DOM was meant to describe documents, not interactive webpages. HTML and CSS never had to compete with other ideas or technologies, because you physically can't compete with them.
The DOM is a relic from the 90s that refuses to die.
Implement it yourself, rendering to WebGL (with a canvas fallback if you really care). When you're done, show HN! We'll probably mock you, unless it's somehow awesome. The DOM is horrible, and knowing what we know now we could at least make it less weird, but some of the big annoyances really are its greatest strengths.
I am doing something similar. The problem is not just HTML/CSS; it's also JS. You need a strongly typed language with a scripting language on top. What I'm doing now is a Rust-based framework with an Erlang-like scripting language to define complex interactions. Taking inspiration from Flash, all animations and transitions can be keyframed with tweens. Taking inspiration from C/C++-to-Python interoperability, I want the same with Rust and Erlang. Now my entire backend can be written in Rust.
Unfortunately, I'm not open-sourcing it. Not until I get a proper graphics context and implement my own rendering engine. I think WebAssembly is going to revolutionize the web. I see browsers allowing you to cache common libraries/assets, and HTML/CSS being replaced by several competing UI paradigms. The one I favour is inspired by the functional reactive paradigm, but there are equally valid alternatives. I don't think there will be a single framework to rule them all, just competing schools of thought. Right now the web doesn't have any competing paradigms, just HTML, CSS, and JS.
Reasons the web sucks:
HTML/CSS are not suited to defining complex UIs with intricate layouts, complex transitions, and keyframe animation. This problem was better addressed 30 years ago. Look at the menu UIs in games from the N64/PS1 to PS2/GameCube era (I'm talking about the game menus, not the games themselves). Some of them had far more impressive UIs than the webpages of today. This type of UI is an order of magnitude harder with HTML/CSS than just doing it all via X11 and C (even with memory management concerns).
The first reason is that HTML isn't strongly typed. If you make something complicated and then make a small change, it messes everything up, and your compiler can't help you. HTML tags are somewhat inconsistent. React, Rails, etc. are all trying to address this problem, taking inspiration from Android, iOS, Qt, and GTK+ to make things somewhat component-based. However, you fundamentally can't achieve this: HTML is in a land of its own. When you create an HTML tag, sure, you can wrap it in a JS class and mimic some of the characteristics that desktop frameworks provide, but you won't get the whole thing. I've seen many frameworks reinventing HTML templates (like Handlebars). Templates can't solve all the issues.
CSS sucks. The first thing about CSS that sucks is the "cascading" part; it should just be SS. CSS was originally designed to limit bandwidth by cascading. That's not a concern today, and it messes everything up. The second thing that sucks, similar to why HTML sucks, is that CSS is again in a land of its own: it's hard to control CSS from JS (and for an interactive site you want this). I'm not even going to talk about how hard it is to position things properly via CSS when you're building layouts dynamically.
JS just sucks. I don't think I need to elaborate, I've mentioned enough points already.
Every single web framework out there is trying to address some aspect of the problems I mentioned, but my claim is that you can't solve them; you need to start over. Imagine if a CPU maker shipped crappy, inconsistent assembly. Every year new compilers and languages would come out trying to fix their predecessors. The most logical choice is instead to ditch that chip maker and make a better CPU with consistent assembly. You can't do that with the web: you can't ditch its assembly (HTML, JS, CSS).
It only took 30 years for the web to get WebSockets, WebGL, and now wasm. I think in another 20 years we'll come full circle and get the remaining amenities that modern operating systems provide.
The reason I chose Rust is because it's a low level language with strong typing. It's not quite Haskell, but good enough. Writing a web server and UI with Rust is very nice. It just connects to the backend processes with ease, and doesn't make things awkward the way npm does. I do a lot of low level numerical and mathematical simulations, all done in C/Rust with all coordination & distributed computing done in Rust. I want Rust to take over. The sequence should be: Erlang -> Rust -> C
I chose Erlang because years ago I saw how useful reactive programming is for complex interactions in an AAA game I was working on. In a game you have the player interacting with the world and with enemies; the enemies interact with the player and each other. The player can pause the game at any point, and then you need to bring up a menu screen with its own interactions. I've seen this workflow done in many different ways, and by far the reactive paradigm is the most sane. This was done in a bastardized version of Lua and Lisp.
I pretty much agree about JS and Rust, but I find myself disagreeing with some of your points regarding CSS/HTML.
>> Reason web sucks: HTML/CSS are not suited to define complex UIs with intricate layouts and complex transitions and key frame animation.
CSS does have a concept of keyframes now (I know this may not cover everything you like about flash keyframes, but I figured I'd throw it out there in case you were unaware):
https://www.w3schools.com/css/css3_animations.asp
>> This problem was better addressed 30 years ago. Look at the menu UIs in games from the n64/ps1 to ps2/gamecube era (I'm talking about the game menus, not the game themselves).
Those layouts had a known aspect ratio, 4:3, and a reasonable expectation of being viewed on a large screen. HTML and CSS are expected to work on a wider variety of screens that may change size and aspect ratio on the fly, possibly during an animation. If I only had to make HTML/CSS animations for a normal television screen with a known zoom level in a specific browser, I could make interfaces every bit as beautiful as your N64 games (well... with a competent designer, but my point is that I could write the layout and script the animations). Some of the pain points of HTML and CSS are historical accidents that can't be changed without breaking compatibility, but some of them are quite useful and necessary. Responsive design is a good thing; it just sucks to write an animation that works across a whole range of aspect ratios and zoom levels. It's a problem Sony and Nintendo never had to design around. The degree to which this constraint both cripples and strengthens the web is hard to properly convey. I love and hate it, but as time goes on I learn to hate it less and love it more.
>> It's hard to control css from JS (for interactive site you want this)
It's costly, as it touches the DOM, but it's not hard. Just change the class attribute (or even the inline style attribute -- there are cases where this can be argued for). Simple example: https://www.w3schools.com/js/js_htmldom_css.asp
> The first thing about css that sucks is the "cascading" part of css.
This tells me you have very little experience with CSS or even HTML. The cascade saves you from declaring "font-size: 16px" in every single element of your page. Think of it as a form of property inheritance.
Sorry, you can’t say “CSS sucks” when the only limitation it has is your lack of knowledge. If YOU can’t build complex layouts with it, it’s not the language’s fault. Plenty of people can. Open an “awards” site and see what people can do and you can’t.
It depends on what you compare it to; most native stacks have a simpler, more understandable way of handling styling and controlling layout.
You can cut a lot of things with an ax, but you shouldn't criticize someone for wanting a saw, even if you can build the skill set to do the same with an ax.
The fundamental problem is that if you do end up reimplementing a slightly better DOM yourself, you end up with (e.g.) 20 MB of browser and rendering code. This will have to be downloaded by the client. If it were a one-time thing (perhaps some kind of "install"), that wouldn't be a problem.
But browser caches generally aren't up to the task of making it a one-time thing because:
-They're usually pretty small.
-They'll evict pretty frequently.
-They're limited to same-origin, even if the content cached is exactly the same. (For security reasons, yes, but there's no way to opt out.)
If caches were fixed, I think we would see a much healthier web ecosystem. Incidentally, it would make the browser vendors themselves fairly obsolete, which is probably why it hasn't happened yet.
I think billybolton is right that HTML/CSS/JS/the DOM all kind of suck, and that they can't feasibly be replaced is problematic. The heroic modern struggle of the web ecosystem is an attempt to encapsulate and work around the suckiness.
> Incidentally, it would make the browser vendors themselves fairly obsolete, which is probably why it hasn't happened yet.
Mostly, but not entirely, true. It's been a while since we've seen a new browser come out, but the time is ripe to begin work on one and release it in a few years. Just fork WebKit to maintain backwards compatibility. I see browsers becoming just a sandboxed virtual machine with a key ring, with minimal functionality.
Apple has the strongest incentives to build what I am mentioning. Not sure if they will.
It took 100 years to go from the onset of the printing press to the book form and we are in some weird in between time frame where browsers aren't really enough to do what we want. Conceptually it is going to take something that is revolutionary, yet at the same time extremely familiar... like the book. I bet we would find similar strains in the intervening decades between the printing press and the book.
If that's your opinion, then why bother targeting web browsers with WASM at all? You've been able to create highly interactive apps and games using native development for ages. I simply see no need for a new bytecode format, especially when mobile is ARM-only already, and even less so in web browsers. WASM doesn't define a rendering or sound API, nor does it solve any other problem game developers are having on established platforms. And, like JavaScript, it needs HTML and CSS, or another graphics API such as WebGL or canvas.
It's not for games. We are dealing with big data; we need highly interactive UIs on the web that we can use to communicate and interact with this information. Something like this is where I see the future of the web:
http://globe.cid.harvard.edu
That is not real time, but it's very difficult to pull off on the web (harder than using C++ on desktop! Debugging WebGL from JS is a nightmare). I don't see any issue with an ARM processor; the Harvard globe project, if done in C++, could definitely run on a 2004-era computer. Most phones these days are much more powerful.