As people have been shifting towards running everything in the browser (just as people like me run everything in Emacs), we are effectively seeing a revival of ubiquitous proprietary software.
I don't think this is desirable.
Would you be free of Facebook? Would you be able to do anything useful with it at all?
No. The problem is that Facebook still has all of the data and that's where the freedom and control live today. The code is just an interface for it.
Online multiplayer game developers figured this out decades ago. The primary way to prevent people from cheating is not by preventing them from hacking or cracking the client application code. It's keeping the game state — the data — only on the server.
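To make that concrete, here's a toy sketch (TypeScript; everything in it is invented for illustration, not any particular game's code) of what "keep the state on the server" means in practice: the client can only request a move, and the authoritative state never leaves the server.

    // Toy server-side sketch: the client sends a move *request*; the server
    // validates it against state that only the server holds.
    interface Player { x: number; y: number; }
    const world = new Map<string, Player>(); // authoritative game state

    function handleMoveRequest(playerId: string, dx: number, dy: number): Player | null {
      const p = world.get(playerId);
      if (!p) return null;                                // unknown player: ignore
      if (Math.abs(dx) > 1 || Math.abs(dy) > 1) return p; // reject impossible moves (speed hacks)
      p.x += dx;
      p.y += dy;
      return p; // only the server-computed result is ever sent back to clients
    }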
Are you kidding?
The source code for the full toolset would reveal how they analyze their data; what they find relevant (even just from which kinds of tools exist and which don't); how access to that data is defined for various categories of employees; the differences between what they claim about those capabilities and the actual infrastructure; problems they've had and attempted to fix in changes to the code; how they set prices for selling access to advertisers; how their frameworks for responding to lawful requests for user data from government agencies work (hell, even inferences about which government agencies they respond to); whether or not they've made serious attempts to curb the fake news problem; a full picture of the ways in which they track people who don't have a Facebook account across the web; and probably a whole host of other enlightening details about their entire operation.
I'm sure there are people who can make quite accurate guesses about how all of that probably works without looking directly at the code. But if the Snowden leaks tell us anything, it's that a handful of security specialists being able to deduce something is completely different from a critical mass of programmers being unable to deny that a problem exists. Access to that code would certainly give developers of FLOSS privacy/security software a better idea of how to protect their users' privacy.
At least, good online multiplayer game developers.
The irony is that proprietary browser-based software happens to run on top of FOSS libraries and languages, to which most companies hardly contribute anything back.
True, though we have reached a point where one can be really comfortable without proprietary software. I certainly am. It's just that with an increasingly "appified" web, proprietary software is again making inroads on otherwise free systems.
Agreed. I'm extremely comfortable with my fully free system, and the only risk of running non-free software I face each day (which I've mitigated) is the constant barrage that others attempt to force upon me through my web browser---free browsers gladly download and execute non-free software by default.
I gave a talk about this and the issues of package management and code signing at LP2016: https://media.libreplanet.org/u/libreplanet/collection/resto...
Seriously, there are no shared libraries in compiled monolithic WebAssembly, are there? I think that means included GPL code will require exposure of the source.
Maybe if we're lucky some best practice will emerge that the source is often available as a sourcemap so companies don't have to worry about being sued for infringing the GPL. It sure would be nice to finally see an outcome of companies erring on the side of (legal) caution that's better for consumers.
1: I say "I think" because everybody seems to like arguing the specifics of this point.
In any case, most companies are busy moving away from anything GPL-related.
GCC just got kicked out of Android, and all the Linux alternatives for IoT are using MIT-like licenses.
Always cracks me up how many people think the GPL means "I get everything I want, exactly how I want it".
What did I say that made you think that? I said the GPL requires exposure of the source, and that maybe, if we're lucky, some best practices could emerge that amount to it being easily available.
> Always cracks me up how many people think the GPL means "I get everything I want, exactly how I want it".
Interesting, as it always cracks me up when people decide to interpret statements in contorted, narrow ways just so they align with a pet peeve and they have something to rail against.
That said, many companies do contribute back, and in the case of the web quite a few big libraries and frameworks have been released as open source by private companies.
Currently, the difference in performance between asm.js and WebAsm is about 5%. It doesn't look like performance gains or portability are the main reasons to select WebAsm over JS. IMHO (I'm a developer myself), the main reason is better obfuscation of the code.
> Machine code doesn't move the needle of obfuscation much at all
There are two very fundamental mistakes you are making when equating webasm to raw processor instructions. The first is that webasm, even in its binary format, is still organized as an abstract syntax tree, so its instructions are not flat; they are in a hierarchy.
Also, I don't know how you think there's a noticeable difference in readability between minified asm.js and textual wasm for the average person. It might be more difficult for a programmer to read the wasm, but I can't imagine that "disassembled" asm.js is particularly readable in the first place (and both are completely incomprehensible to a non-expert).
The average person sees no difference at all. Moreover, I tested the tank demo just now, and the asm.js version is about 6% faster at loading (1.99s vs 2.12s from cache on Firefox 57.0/Fedora 26).
https://caniuse.com/#feat=asmjs vs https://caniuse.com/#feat=wasm
If you are trying to point out that wasm is better supported on iOS, then look at the performance tests: https://arewefastyet.com/#machine=29&view=breakdown&suite=as... . In many tests, Safari is faster than other browsers at asm.js microbenchmarks.
Aside: I really think that the terms "open" and "closed" are imprecise and confusing, which is why I avoid them. With deterministic builds and free software I have a mapping from source to corresponding binary; so while the binary may not be directly editable (or "closed") it still is free software and I could edit the corresponding source to obtain a new binary. In my opinion the problem really is one of practical software freedom, which has never been achieved in a satisfying way for web applications.
Ok, technically you could but that would be insane. Even if you consider how much FOSS exists on servers, it's impossible to ever be certain what is running the logic of a site unless you can run it locally, which defeats the whole point of the web.
This looks like a strawman.
For a user of a service there is no difference between a service provider that uses free software and a service provider that uses proprietary software. Proprietary software primarily harms the user of that software --- and in this case it is the service provider, not the user with a browser.
There are different concerns with outsourcing computing and/or communication to services, but these are not the same concerns that apply to the use of proprietary software.
I haven't heard him argue that position. Free software obviously can be faulty and it can be unsafe to use.
There are many JS libraries and full web applications that are in fact software libre. It's just not a freedom the users can really exercise, due to the lack of application interfaces.
If I were in the mood for an argument, I'd argue that with all the ugly SOAP and Java services users had more potential freedom than today.
The vast majority of people do not understand or want that. It's mostly a nerd fetish (and I count myself in).
This openness is also a double-edged sword. People focus on the warm fuzzy side of it and forget the mess it tends to create for a platform, because no one can ever rely on some features being available for everyone.
You will need a never-ending amount of feature detection, polyfills, polyfills for your polyfills, transpiling, blah blah blah, because oh look, someone wanted to exercise their freedom to disable textboxes in their browser. And now they are demanding that you make your application gracefully fall back to handle the case where a textbox is not available. IT'S THEIR RIGHT!
Eventually someone will show up with a baked sweet potato and demand that your application must work on their potato because they have disabled all features but still want the functionality.
You can't realistically cherry pick underlying software pieces like that.
Rather than delivering buggy software that works under a million different combinations of features and settings, I will deliver a package of software that runs predictably well on one or more predictable platforms. That is "the atom". You either take it all, or not at all.
In my experience, average users tend to be perfectly happy and satisfied with that. They are almost always running with default options everywhere anyway. It's the GNU-enthusiast hacker types (and I count myself in) that tend to throw a tantrum about their custom preferences and philosophical and technical objections.
No more open web. And the crap will become inextricably intertwined with the content of value.
Right now, it's "for content." There is going to be enormous commercial pressure to make it "for everything".
We should start reconsidering the term "user agent".
This misses the point. Looking at a binary through a hex editor is also just text --- and the binary is also "just code", albeit at a level at which only few people are comfortable to work.
Obfuscated code (and generated code in general) is clearly not the preferred form, so it hardly even counts as source code.
But let's pretend your point were valid: would it remain valid with WebASM?
> But let's pretend your point were valid: would it remain valid with WebASM?
Yes (pretending GP's point were valid), because WASM has a text format, which is how "View Source" is meant to work for WASM running in your browser. The text version can be derived from any WASM file (i.e. you can do the equivalent of beautifying JS on any arbitrary WASM).
* Early computers (1960-1980s): Dumb Terminal - Remote Server
* Early PCs (1980s-1995): Local Processing - Remote Storage
* WWW (1995-2010s): Dumb Terminal - Remote Server
* JS/ASM/Etc (2010s - near future): Local Processing - Remote Storage
It's fully possible that we will switch again to the dumb terminal model. For instance, once the hassle of local code execution takes its toll, someone will have the bright idea of just putting the web browser itself in the cloud and having a remote control connection to that browser... and the cycle will begin again.
Early computers did not have the local processing power needed to run heavy jobs.
Early PCs did not have the storage.
WWW solved an entirely different problem, namely distribution and communication.
JS et al. solved the problem of responsiveness and interaction, i.e. latency.
I don't really see those as exhibiting cyclical traits; at best I see it as a correlation, i.e. side effects of the true problems being solved.
You mean "introduced", right? Back before the modern web, when we all expected software to run on our machines, there was no responsiveness and latency problems, because data didn't travel over the wire unless it absolutely had to.
That said, what I meant is that modern JS enabled moving into the web things that should have stayed local. It's literally two steps backwards (moving software into "the cloud") and one step forward (giving back some responsiveness through AJAX).
While I generally agree with the sentiment of "do more by coding less", and minimalist user interfaces, there are plenty of cases where you throw a whole lot of functionality out the window by outright banning JS.
> JS et al. solved the problem of responsiveness and interaction, i.e. latency.
Regarding latency and responsiveness I agree that they are issues that depend largely on your use cases and your skill at implementing solutions. In this regard I can agree that some pages simply don't need JS. It is also being misused for ads and tracking to a degree that is problematic and can in itself cause issues.
One way to approach the latency problem is with more aggressive colocation. For example, an apartment complex could have its own AWS or Google servers for local computation or video streaming.
That would be great, if only we could make it not belong to Amazon or Google. Consider the same idea phrased like this: apartment complexes have servers in their basements offering compute, and the services you use run on those servers. The model of today is that companies own services, control where the compute happens, and ship your data to them, taking ownership over it in the process. The alternative model I dream of is your data under your control, your choice of where the compute happens, and third-party code being shipped to that place.
Don't Chromebooks have ~4yr end-of-support lifecycles where they stop receiving upgrades?
I miss the internet I grew up with. The one before all the money arrived to ruin absolutely everything.
In that context, just attaching an interactive window to a remote running process (as long as it can stream faster than whatever is your minimum acceptable framerate at your minimum acceptable resolution) is literally indistinguishable from running locally. So I frankly do hope that comes to pass (as long as I also get to keep being able to build a desktop computer for working offline).
Native apps have won over the web-app ecosystem, plain and simple. WASM will not change things even a bit with respect to the root causes of people deciding to use native apps over web apps.
That includes the problematic features like tracking. Running software from random sources is dangerous in ways we are only beginning to understand.
How about having an application platform that would be just an application platform? No HTML, no CSS, no built in multimedia. Just a VM, a viewport, audio, and inputs (and local storage if the user allows it).
Flash on the web faltered rapidly after many years of effort by Mozilla, Apple, Opera, and associated individuals. They were looking to move the web 'forward', had a high-profile disagreement with the W3C, and so started their own standard-setting collaboration to specify HTML5 and associated JS APIs. The blogosphere eagerly awaited the results, which promised to formally bring multimedia and rich interactivity to HTML, without having to use a vendor plugin.
When Apple announced that Flash wouldn't be supported on the upcoming first iPhone, it was over. After a few years, when apps came to the iPhone, Adobe failed at marketing the fact that Flash assets could be compiled into iPhone apps using Adobe AIR.
With existing Flash assets effectively relegated to desktop-only, it was only a matter of time before Flash was pushed out of the standard browser stack. Although Microsoft and Google later shipped Adobe's plugin (with better process isolation) together with the browser or the OS, hooked into their respective auto-updaters, Flash was on its way out.
Looking at the Wikipedia page I see one reason (though I am not convinced it's the reason it didn't take off).
Using the browser platform to bootstrap such a thing isn't such a bad idea in comparison... sooner rather than later we need to deprecate the ugly parts, like WebRTC and WebAudio, and trim some fat here and there, but other platforms have their ugly parts as well (look at the mess that is Android).
Plus all the server side stuff. It's all a mess.
A market with only 4 competitors (Microsoft, Google, Mozilla, Apple) is not a market. It's an oligopoly.
Also rendering the whole page in a canvas implies forgoing the entire DOM, which leaves behind basics like links, forms, embedded videos, etc. It would also mean getting raw input and drawing+laying out everything yourself instead of letting the browser do it.
But the domain serving the ads might be the same one serving the rest of the content. And if that server is motivated enough to show you that ad, they will do all of the things you mentioned. I've been worried for some time that this will be the end-game of the ad-blocker wars.
Lots of fancy words for bloat. Oftentimes malicious bloat at that.
The main issue with the web currently is that tools designed for the purpose of displaying simple images and text, plus a little interactivity, are being stretched to realize complex applications.
Powerful on-demand applications on the web are a good thing, and it's a good thing that we're finally getting the tools to build them properly.
I can easily imagine that Adobe R&D already has a working WebAssembly prototype for Flash.
Let's see when the first examples pop up on Project Zero or at CCC.
So you’re saying they have mitigated every cross-origin-based exploit? I bet they really mean it this time.
Just today there were a few on another HN thread.
Actually, Haxe and OpenFL will probably beat them to the punch, since Adobe will have pretty much killed Flash by 2020.
No small feat!
I liked the simple text pages of the past, but I also appreciate that lots and lots of applications that would only ever see a Windows release (and maybe a buggy Mac release) will run on whatever platform I want as long as I have a modern browser.
The web existed long before it was monetized, and it worked great! Better than now, actually.
Gtk+ and WPF have very good support for people with disabilities.
The harm to disabled users remains.
HTML5 was there to obsolete quirky Flash/Silverlight/Java applets and to tell people not to confuse code and content; now the main "web ecosystem pushers" are doing everything to reincarnate Flash in a new embodiment as WASM.
I can't think of a single site that works like you are saying, yet webasm would only be a 2x-8x speedup over existing techniques.
I did actually, because most aren't around any more. Even so, I was talking about sites that purely use the HTML canvas to draw. There is nothing preventing websites from being built like that right now, so I don't see how webasm will make much of a difference.
The UI construct could be WPF/XAML, or even WinForms-like tech, or something new.
Scripting and CSS would be entirely unnecessary. A lot of businesses would love nothing better than to dump the Jedi-like skills of the scripting developers and trade that for mundane forms skills.
As with Flash, WebAssembly will be implemented as fully integrated apps where it makes business sense, which will probably be a limited number of cases, such as graphics or media delivery. Everywhere else, it will either not be implemented at all, or be used alongside JS in the existing web.
Anvil and tools like OutSystems are probably the only things that come close to what Blend is capable of.
Having a pixel perfect WYSIWYG GUI designer, with a components market, painless DB integration and deploying to the web at the press of a button will get lots of enterprise love.
I look forward to the age of embedded binary apps on the web. I think the web is the only real option we have to preserve software long-term, and the likelihood of software being preserved is enhanced by that software remaining executable. But even in my wildest fantasies where every program ever written maps to a URL, I doubt that use case will take up more than a fraction of online content. The web is just too big and too complex and too general to reduce to any single heuristic or use case.
People are still using COBOL and Perl and pushing code to production with Notepad++ and Filezilla. The real world doesn't optimize the way you're suggesting it would. The business world certainly doesn't.
We're just saying the developer and deployment optimization story would be vastly simplified without the JS/HTML/CSS stack.
Those of us who remember what development was like before "front-end" development as it is today know that story very well. The amount of undocumented, untested, and wonky code in the current web stack has always seemed ridiculous to us.
The problem with ActiveX, Flash, Silverlight, and ClickOnce was never about the developer story. It was about security and openness. WebAssembly solves both of those problems.
Open tools, pixel-perfect GUI design, compilers and debuggers, and strongly-typed languages would absolutely reduce the amount of HTML/JS/CSS in the world. It may not kill all of it, but it would put a sizable dent in it.
With WebAssembly you can bypass all of it, and do your UI framework in GL, and everything else with native libraries compiled into WebAssembly.
Seriously, DON'T do this. This breaks ctrl-f. This is unlikely to work well for people who need to enlarge text or enhance contrast due to vision impairment. This breaks screen readers. This will probably break most site archive navigators, so your content is lost to history (e.g. wayback machine). This will probably prevent Google from indexing your site, killing your page ranking and driving away potential customers. Even if you re-implement all the things you think it will break, you'll find that your users have things set up in ways you'd never considered...
Just, don't. Please don't do that.
It's a long topic, but basically boils down to companies being greedy and not giving a fuck about their users, and web developers being too busy chasing shiny to stop and care about actually providing value to users.
> you'll find that your users have things set up in ways you'd never considered...
Web companies don't care. They aim for the majority, which is users with popular browsers on default settings, without ad blockers and any plugins.
I really think it will bring Flash like web sites back, and browser vendors are the ones actually pushing it.
The wheels are already in motion, with everyone trying to port their favorite language or VM into WebAssembly.
For example, today it is Qt WebGL Streaming, tomorrow it might just run directly from WebAssembly.
It will be up to those frameworks to provide usability support, like WPF does for Windows.
"Add support for WebAssembly as target platform for Qt
Actually, it wouldn't surprise me if Google used some sort of AI to recognise the resulting images of text. What this would do is make it exponentially more difficult to start a search competitor to Google. Which might explain why Google is all-in on WebAssembly …
I've written multiplayer VR experiences that run cross-platform through the browser that only clock in at 500KB of JS. You can put a LOT of code in a MB.
Thus, webasm being a few times faster to download and parse won't change a whole lot, since few web developers seem to care much about the speed of their pages.
If webasm won't change much, then it isn't reasonable to predict a future of web pages that simply render to the canvas.
If it leads to an arms race of ad blockers vs. websites wanting to serve ads, then the longer the race goes on, the longer web-page loading times become. So eventually a website will lose, because its loading times will get too long and users, even non-adblocking ones, will go elsewhere.
Your site would also be unreadable by Google, meaning that you'll be heavily penalized in search ranking and no one will find your abomination of a website designed this way.
Google is an ad company. They will simply open an API for content providers to push content into their search engine. They don't have to maintain their crawlers, users can't block their ads, and both Google and the content providers get more revenue. It's a win-win situation.
Oh, I forgot the users: they obviously lose! But hey, at least you can write rich applications :^)
Ah, clickfarmers and adfraudsters will welcome that with open arms.
It's just a hassle and can't be done by pasting a line of code.
Ad-networks want the ads pulled from their servers so they can track views. If web-owners serve the ads themselves, then the ad-network must trust whatever the web owner says about the numbers of visits/clicks/etc.
For now the ad networks just don't care about ad blockers, because they are making money hand over fist anyway. If things ever get hairy for them, I suspect they'll switch to a reverse-proxy model. You point your domain to their servers as you do with Cloudflare, and they serve your content with ads injected in the right places, served under the same domain. This would be pretty easy for web owners and would completely nullify ad blockers in their current incarnation.
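A rough sketch of that reverse-proxy model (Node + TypeScript; the upstream host and the injected snippet are made up): the ad network fronts your domain, fetches your origin, and splices the ad into the HTML so it reaches the browser as first-party content.

    import http from "node:http";

    const ORIGIN = "http://origin.internal:8080"; // hypothetical upstream (the real site)

    http.createServer(async (req, res) => {
      // Fetch the real page from the site owner's origin server.
      const upstream = await fetch(ORIGIN + req.url);
      const type = upstream.headers.get("content-type") ?? "text/html";
      let body = await upstream.text();
      if (type.includes("text/html")) {
        // Inject the ad markup before the page leaves the proxy; to the browser
        // it is indistinguishable from the site's own content.
        body = body.replace("</body>", '<div class="sponsored">ad goes here</div></body>');
      }
      // (A real proxy would stream non-HTML resources instead of reading them as text.)
      res.writeHead(upstream.status, { "content-type": type });
      res.end(body);
    }).listen(80);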
It seems this is for real then?
JS has had a surprisingly long life actually given its original use cases, and I understand the objections to wasm, but I guess that something like this was inevitable given how big the browser is as a platform and how web apps have been steadily fattening on the back of a suboptimal language, which, if this goes ahead, can be confined to just UI again.
They are planning to add support for GC, threads, bigger memory, tail recursion, etc., so will it be running everything efficiently? Even on mobile, given its backers?
I hope better alternatives will quickly become available, or at least some package where you don't have to install a thing inside the thing you already downloaded and installed.
I wish I could just do `wasmcc -ofoo.wasm foo.c` and get a `foo.wasm` that I can include, without any implicit dependencies and such. Let me supply malloc if I call it. I'm sure there'd be libc-lite libraries in no time (if they aren't there already, I'm out of date).
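For what it's worth, the host side of that wish is already expressible today: if the module merely imports its allocator instead of bundling a libc, you can hand it one at instantiation time. A minimal sketch (TypeScript; `foo.wasm` and its import names are hypothetical):

    // Host-side sketch of "let me supply malloc": foo.wasm imports env.malloc
    // and env.memory, and the embedder provides the simplest allocator imaginable.
    async function loadFreestanding(): Promise<WebAssembly.Instance> {
      const memory = new WebAssembly.Memory({ initial: 16 }); // 16 pages = 1 MiB
      let brk = 1024;                                          // bump-allocator "heap" start
      const imports = {
        env: {
          memory,
          malloc: (size: number): number => {
            const ptr = brk;
            brk += (size + 7) & ~7; // 8-byte align; nothing is ever freed or bounds-checked
            return ptr;
          },
        },
      };
      const bytes = await fetch("foo.wasm").then(r => r.arrayBuffer());
      const { instance } = await WebAssembly.instantiate(bytes, imports);
      return instance;
    }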
It wouldn't be either a G++ or clang++ module, but something a lot more straightforward and simple. No intermediary, no multiple dependencies. Either a simple compiler binary, or if that's not possible (although I still wonder why developers never release both source code AND binaries), a single downloadable repo that I can build using cmake.
I have read tutorials about Binaryen and Emscripten; I did not have a fast computer nor a fast internet connection, and it was so painful and unclear I gave up.
I get that asm.js was great, but it was mostly a hack; wasm should be much cleaner. I don't understand the choice of emscripten.
From the user's perspective, isn't emcc a C++ compiler that directly compiles to wasm? There are some internal IRs along the way, but you shouldn't notice them, like you don't notice the GIMPLE IR when using gcc?
The emscripten sdk isn't small, that's true, but that's because LLVM+clang are not small, plus the musl libc and libc++ are not small either. But all those pieces are necessary in order to provide emcc which can compile C++ to wasm. So it can't be just a single repo, multiple components are needed here - in fact, regardless of emscripten, more will be needed soon since the LLVM wasm backend will depend on the lld linker.
What we can maybe do to make this unavoidable complexity seem simpler is to compile all the necessary things into wasm, and have a single repo containing those builds. So it would contain clang compiled to wasm, lld compiled to wasm, etc. Then the user would check out that one repo and have all the tools immediately usable, on any OS. The wasm builds would be a little slower than native ones, but perhaps the ease of use would justify that - what do you think?
In other words, the emscripten integration for those components is the easy work, the upstream LLVM/lld stuff is the hard part.
If half the effort wasted on C++ compilers and usage learning had been spent on something worthwhile...
I don't think C++ will fade away, as it's already a well-entrenched language, and it is getting several upgrades which are supported by big companies.
There is really nothing comparable to C++ in terms of language, to be really honest. There really is no language which is as readable and flexible as C, AND as extensible and high-level.
I tried to get interested in statically compiled languages like Rust, Go, and D, and honestly I can't get used to them. Safety is not really a good idea, because putting barriers between the compiler and the programmer will always result in pain. C++ is the ease of C with some syntactic sugar for more convenient use. I see nothing that can be as good as C in terms of down-to-earth syntax.
A good language is not a language that is well designed; a good language is a language everyone can use. C++ to me seems to be the least bad, except of course for the toolchain.
However, the end game is to be able to load a WebAssembly module just like you load a JS script. So, we are going to get there, but it will take some time.
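For context, a rough sketch of the gap (TypeScript; "add.wasm" and its export are made up): today you go through the JS API by hand, whereas the end game is importing the module about as directly as a script.

    // Today: fetch + instantiate by hand through the JS API.
    async function loadAdd(): Promise<(a: number, b: number) => number> {
      const bytes = await fetch("add.wasm").then(r => r.arrayBuffer());
      const { instance } = await WebAssembly.instantiate(bytes, {});
      return instance.exports.add as (a: number, b: number) => number;
    }

    // The end game, roughly: treat the module like any other import.
    // import { add } from "./add.wasm";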
I am a TypeScript user and in the article he states:
Alright, so I don't need to be a C/C++ dude to get some WebAssembly goodness (maybe some day).
But now, with this in my toolbelt, what happens? What does it mean? If I have a particle simulation in TypeScript on Canvas using shaders... Now I compile to WebAssembly, but then what? Can I still access the Canvas, or is the Canvas like a 4th dimension? If I can't use the Canvas, how do I render to the screen?
It is not hard to imagine a language with the same objective as TypeScript that is compiled to WebAssembly. Furthermore, if TypeScript were compiled to WebAssembly, you could use it wherever there was a platform for the WebAssembly format. So if somebody created a project to consume WASM binary files and execute them from within .NET assemblies, you could use TypeScript outside the browser and Node.
I apologise. I'm just… rather surprised at such an idea.
> It is not hard to imagine a language with the same objective as TypeScript that is compiled to WebAssembly.
I agree with this. Mentioning compilation of TypeScript to WebAssembly shows some misunderstanding of both of these technologies. WebAssembly was designed to run code already written in some low-level language like C/C++, because running already-written software in a browser is easier than rewriting it. Of course you can compile Rust into WebAssembly, but that was not the reason why WebAssembly (and its predecessors, asm.js and PNaCl) was conceived.
This doesn't look like traditional TypeScript to me. The explicit loads/stores give away the whole trick.
And it requires JS twice the size of the TypeScript to actually use the output generated by AssemblyScript. I personally don't see WASM as a valid target for AssemblyScript for another year or two, until either the GC or the host-bindings proposal lands.
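For readers who haven't seen it, roughly what that looks like (AssemblyScript's TypeScript-flavoured syntax; the function is invented for illustration): the explicit loads against linear memory are exactly the "trick" being referred to.

    // Hypothetical AssemblyScript: sum `count` 32-bit ints starting at `ptr`.
    // There is no GC-managed heap yet, so you address linear memory yourself
    // with builtins like load<T>() / store<T>() instead of using objects.
    export function sumInts(ptr: usize, count: usize): i32 {
      let total: i32 = 0;
      for (let i: usize = 0; i < count; i++) {
        total += load<i32>(ptr + (i << 2)); // read the i-th 32-bit int
      }
      return total;
    }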
Search for WebAssembly demos. There's a crappy tank game. WebGL is doing all the real work; the game part is simple. There's ZenGarden. 200MB download. Again, WebGL is doing all the real work.
WebAssembly is going to be a way for sites to bypass ad blockers. That's the real use case. Maybe the only use case. It's all about reducing user power and giving total control to the big sites.
The WASM binaries are around 10 megabytes for this. That's a lot, for sure, but this includes a whole windowing system, a raster paint engine, and a C++ reflection mechanism.
All of which the browser already has.
The toolchain (emscripten) is still pretty bloated and I hope they will streamline the process somewhat.
Um .. no. It will make code execute faster but it will be more work to write and maintain.
On the other hand, if you're writing very specialized algorithmic code which will be executed a very large amount of times, then WASM and fine tuning is in place. As cool a technology as WASM may be, a very small fraction of our code is like that.
My prediction is that WASM will find its place, mainly within some of the more popular libraries available on npm. If there is anything to gain from rewriting, say, parts of ReactJS in WASM then it will eventually be done. But it will be totally transparent to the user of said library.
Taking a look at the server-side landscape is also instructive. There is a plethora of languages which execute faster than JS, yet Node.js thrives. So it doesn't seem like people are eager to ditch JS. Haters gonna hate, of course.
- OCR (tesseract): http://tesseract.projectnaptha.com/
- Computer vision (opencv): https://docs.opencv.org/master/d5/d10/tutorial_js_root.html
- Physics engines, retro games/emulators, entire operating systems, and many many more: https://github.com/kripken/emscripten/wiki/Porting-Examples-...
I'm definitely aware of JS shortcomings, having had to deal with them for many years. I just found that particular quote to be a little snarky, and outdated.
Isn't this contrary to the whole idea of wasm?
EDIT: this is a completely serious question, I honestly don't understand why this is built into the existing JS engines instead of something separate.
At this point WebAssembly runs in a sort of VM with a security model added.
The fact that wasm and JS share the interpreter should not have a security effect (ignoring implementation bugs, which are not trivial).
If you build a website and have to parse something for 20-40 secs for a user, they should fire you on the spot for bad design. Start working for adult sites: if it loads longer than 2 secs, redesign.
Weird, I know.
Because any language can be compiled to this intermediate form (wasm), it would drastically simplify the distribution of complex applications.
Because any language can be compiled to that form, you can potentially reuse your company's existing code and run it inside a web browser.
Because it's already integrated into web browsers, the technology's deployment promises to be quick.
It's still missing a few key things (garbage collection, interaction with the DOM), but I see that as being the new standard for pretty much anything. I can see it being used everywhere, from server side to client side to mobile devices, and why not hardware support? (Is that possible?)
Will it be really good for the web in general? I guess it will depend on the motivation of those who would pursue the technology.
My guess is that this is going to be another niche, underutilised technology (like WebRTC or even SVG) that is efficient and leads to highly optimised products, but lacks mainstream usage.
The problem is the irrational assumption one must use a corporate sponsored web browser to do anything interesting with a computer. This program automatically runs code from "anonymous", commercial third parties, which today happens to be JS, using the user's computer resources. The user today does not get to choose the language, but more importantly, given the level of coercion and absence of alternatives, she does not get to choose whether or not to run the third party code.
> It is not tied to the web browser anymore.
So, the beachhead is secure, time to advance into the countryside, eh? ;)
Writing in C++ won’t make DOM access or async I/O go away, though :-)
However, it appears all the video editing effects render slower through WASM than through JS; what's the deal?