I honestly think that one of the key ingredients to the success of the web is that for just about any web page you can see, just about anyone with modest knowledge can trivially work out how it was done. It is only an accident of history that it worked out that way. But the effect has been to spread knowledge far and wide, and it has greatly contributed to the success of web technologies. Furthermore, I can easily look at the source code for my bank's web site or any web site I want to trust and get a good idea of how solid their implementation is. Yes, I can't inspect the back end, but a whole slice of front-end security issues are easy for me to check for myself. This kind of scrutiny will get a good bit harder if WebAssembly becomes widely adopted.
"The WebAssembly text format, which is designed to be read and written when using tools (e.g., assemblers, debuggers, profilers), is specified as a textual projection of a module's AST."
This is part of the MVP, even. I think the authors of WASM actually agree with your opinion here and are working toward a system that satisfies that concern.
BTW, taken from https://github.com/WebAssembly/design/blob/master/MVP.md
That's very reassuring - I hope it works out that way.
Now, having worked with asm.js recently: that's a whole different ball game. It is unreadable, at least without spending copious amounts of time on it and having a deep understanding of asm.js.
It is true that it will still be lower-level than JS. But it remains in a structured AST format which is easy to pretty-print. And many of the worst patterns you see a lot in asm.js just won't be there.
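To make those "worst patterns" concrete, here is a tiny hand-written module in the asm.js style (the module and its names are made up for illustration; real compiler output is far denser than this):

```javascript
// A toy module written in the asm.js style. Even this small sample shows
// why it reads poorly: type-coercion annotations (x|0 for int, +x for
// double) everywhere, and a flat typed-array "heap" instead of objects.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  var h32 = new stdlib.Int32Array(heap); // typed view of the heap (unused in this toy)
  function add(a, b) {
    a = a | 0;           // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;  // return type annotation: int
  }
  return { add: add };
}

// Because asm.js is a strict subset of JavaScript, the same module also
// runs as plain JS in engines that have never heard of "use asm".
var m = AsmModule({ Int32Array: Int32Array }, {}, new ArrayBuffer(0x10000));
console.log(m.add(2, 3)); // 5
```

That subset property is exactly what let asm.js spread without vendor consensus: engines that didn't optimize it still ran it correctly.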
On the one hand, I hope it gives a real build target for Rust and Go... on the other, I hope things don't get too obfuscated in practice.
In fact, pick any language today that compiles its output and you will find decompiling tools made for it, including the original compiled web stuff like Flash and Silverlight.
I'd much rather the web get faster for everyone than save a few people some time when deconstructing.
Many outside the HN web bubble are still plain HTML/CSS sites with almost everything done server side.
Java Applets were completely isolated from the surrounding web page, and vice-versa. I tried writing XEyes as a Java Applet back in the day, but the eyeballs could only follow the cursor while the cursor was directly on top of the applet's rectangle. WebAssembly operates on the single unified DOM and its event model.
Because it uses the DOM, the text and widgets are real. The text is selectable, looks like regular text, etc. Java Applets had their own GUI/widgeting system that looked bad and behaved differently.
Java only "behaved differently" if you wanted your applet to paint a square of the screen, which was more about the fact that browsers didn't support SVG either, so if you wanted an interactive diagram you had to isolate yourself from HTML.
If you're worried about obscurity, look at what is generated by asm.js. WebAssembly won't be any worse than that (and is, in fact, more or less a more efficient/correct equivalent to asm.js).
1. Code size can be significantly smaller, reducing download time (even with gzip; tests show a 30% win).
2. Parse times for a binary format can be much faster than for code as text, by a very large margin.
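On the size point: the WebAssembly design docs specify LEB128 varints for integer encoding, which is one reason the binary stays compact. A minimal unsigned-LEB128 decoder, as a sketch (the function name is mine):

```javascript
// Decode an unsigned LEB128 varint: each byte contributes its low 7 bits,
// and the high bit says whether more bytes follow. Small numbers take one
// byte; large ones only as many as they need.
function decodeULEB128(bytes, offset) {
  var result = 0, shift = 0, i = offset;
  while (true) {
    var b = bytes[i++];
    result |= (b & 0x7f) << shift;
    if ((b & 0x80) === 0) break; // high bit clear: last byte
    shift += 7;
  }
  return { value: result >>> 0, next: i };
}

// 624485 fits in three bytes instead of four:
console.log(decodeULEB128([0xE5, 0x8E, 0x26], 0).value); // 624485
```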
I think that you would see a much bigger performance increase if websites would not load resources from 10 different domains for serving a single web page (you have a much higher chance of getting some unforeseen delay from one of those hosts).
- isolated (outside of DOM)
- non standard (not everywhere)
- third party (proprietary)
- alien (uncanny valley of a different UI style)
- bloated (even compared to the web stack, Swing was an over-engineered, badly designed mess of a GUI API)
- insecure (full of exploits to this very day, tons of them closed just last year)
it does sound like a great improvement. Unless, of course, you weren't there and can't remember.
I’m also hoping it winds up being friendly enough to hand write WebAssembly for small chunks of numerical code. With asm.js, that’s currently kind of a pain.
The problems were never truly about the idea being bad: it was the implementation that was terrible, hindered even further by the Sun-Microsoft lawsuits, etc.
If everyone is on board with a technology and works to make it usable instead of 'feature complete', then it is very easy to make this work where Java applets failed. It would actually be easy to make Java applets work where Java applets failed, if you re-implemented the idea with the correct focus and buy-in.
It's not technology - it is politics.
GC is generally a good thing precisely because it prevents use-after-free errors and enables programming at a higher level without worrying about memory management. There is often a performance cost to this, it's true, but for large classes of programs it's a cost that's worth paying.
It is no different than imposing a programming pattern like RAII.
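As a toy illustration of the point (in JavaScript, since everything here targets the web): under GC, an object's lifetime is decided by reachability rather than a manual free() call, so there is no window in which "freed" memory can still be touched.

```javascript
// The closure below keeps `buf` alive for exactly as long as anyone can
// still call `read` -- the collector cannot reclaim it earlier, and the
// programmer cannot free it too early by mistake.
function makeReader(data) {
  var buf = data.slice();                // private copy of the data
  return function read(i) { return buf[i]; };
}

var read = makeReader([10, 20, 30]);
// The caller can drop its own reference to the array; the reader's copy
// stays valid until `read` itself becomes unreachable.
console.log(read(1)); // 20
```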
I also have to think about the new image file format posted some time ago: FLIF (http://flif.info/). With WebAssembly you could implement this in the browser by just embedding a lib. No need to wait for the vendors to support it.
I feel like these innovations are being turned into a kind of "social media spectacle" with relatively little discussion of what innovation really means in this context.
Python put up a warning against its own "restricted execution" a long time ago: https://docs.python.org/2/library/restricted.html
Probably because it is stupidly difficult to do robustly, especially when the language and standard libraries weren't originally designed to do that.
Lua is perhaps best positioned for this given how tightly you can lock it down (you can remove the 'require' function that loads other modules). But even this is not considered robustly secure against untrusted code.
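The same "remove what you don't trust" approach can be sketched in JavaScript (the function and names here are illustrative), and it demonstrates exactly why such ad-hoc sandboxes aren't robust:

```javascript
// Shadow a few dangerous globals inside the evaluated expression's scope,
// analogous to deleting `require` in Lua. NOT robust against a determined
// attacker: constructor-chain tricks like (function(){}).constructor can
// still reach the real Function and escape.
function naiveSandbox(src) {
  var blocked = ["require", "process", "globalThis", "Function"];
  var fn = new Function(blocked.join(","),
                        '"use strict"; return (' + src + ');');
  return fn(); // every blocked name resolves to undefined inside `src`
}

console.log(naiveSandbox("1 + 2"));          // 3
console.log(naiveSandbox("typeof require")); // "undefined"
```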
For those that just want to see the sandboxing part, I think most of the magic is in mueval but I could be wrong:
I disagree on one count, and on the other count I don't understand, so I'm requesting clarification.
First, the current fashion in programming languages is to allow an implementation to do anything (in Python, "import" takes no arguments, and neither does "require" in node.js).
I think life could, in fact, get easier.
Today applications do not regulate the code they load. It just has to work. Saying that any effort to regulate the source code of loaded extensions amounts to "shackling" is throwing FUD on the whole idea that an application should handle some (not all) of the responsibility for investigating the code it loads and not rely 100% on the underlying implementation.
With respect to your remarks about validation mechanisms, you'll have to explain the claim that this would "increase the implementation's own attack space", because I don't see how that is possible (unless we're talking about two different things: I'm talking about a combination of (A) static verification and (B) static transformations that instrument code with calls to a runtime handler for certain sensitive operations, such as accessing a global variable or accessing a member of an object).
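A rough sketch of what I mean by (B), using a JavaScript Proxy as a stand-in for source-level instrumentation (`guarded` and `policy` are illustrative names, not an existing API):

```javascript
// Route every member access through a runtime handler that enforces a
// policy. A source-to-source instrumenter would achieve the same
// interception by rewriting `obj.x` into a handler call instead.
function guarded(obj, policy) {
  return new Proxy(obj, {
    get: function (target, prop) {
      if (!policy(prop)) throw new Error("access denied: " + String(prop));
      return target[prop];
    }
  });
}

var account = guarded({ balance: 100, pin: 1234 },
                      function (p) { return p !== "pin"; });
console.log(account.balance); // 100
// Reading account.pin throws: access denied: pin
```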
No amount of vitriolic words or technical nitpicking will change the fact that there will be more work to be done, both by application developers enforcing policies and by extensions having to oblige them.
> the current fashion in programming languages is to allow an implementation to do anything... I think life could, in fact, get easier.
Easier for whom? Not for the lib developer, who will have to follow more rules ("import" mechanisms already have some, by the way), nor for the runtime developer, who will have to enforce them. Easier for the final user who executes runtime and lib? Maybe, but then you cannot expect any serious traction from developers (whose life you're making harder). Trade-offs and all that.
I disagree that it is more work in total; I don't think you're taking into account the work that will no longer have to be done, namely work spent searching for cross-module bugs introduced when a security proof for one module relies on a condition which cannot be expressed in the underlying language.
I agree that it is a new type of work though.
Edit: easier for the mathematician who wants to prove that system X has security characteristic Y
Tcl has a feature for this purpose called "safe interpreters". It is production-quality and works with command-level granularity, the command being the basic building block of the language: http://tcl-lang.org/man/tcl8.6/TclCmd/safe.htm. The security section has some considerations that are relevant if you want to make a "safe" interpreter for any Turing-complete language.
So in other words it wouldn't be a good idea to encourage people to post and run demoscene Tcl programs on 4chan. That's the kind of safety and security I'm talking about.
You would have to ask the Python, Ruby etc. people why they don't work on it more, but from the side of the web, there is plenty of effort. First, several languages like those have been ported to the web, by porting their VMs:
Instead of each language needing to devise a sandboxing mechanism, by porting them to the web, they can be run safely there. This is also much safer for the web: just one sandbox, instead of many.
WebAssembly will improve those ports, by reducing code size and download times and so forth.
I don't understand what you mean. Can you elaborate?
Today we live in a world where the source code that runs our advanced industrial society is owned and maintained in secret. And the business climate is highly competitive, so a lot of people cut corners in terms of security. This means that instead of actually proving that their systems are secure, system designers do something else.
If, instead, all of the source code of an advanced industrial society were maintained in an online library (like github) and anyone could submit a pull request to anything, then systems would have to have some way of protecting themselves from the introduction of exploits (either intentional or accidental).
As for native, I believe there's some work being done in some fields to make it easier for users to reliably compile from source, which would give them the opportunity to scan the source (using whatever tools make sense, even MD5 hashes or similar) before compilation. IIRC the Debian project was working on improving the reproducibility of builds; you may find more of what you're looking for by searching for that project.
I don't see why you would get offended, he was talking to the specialists, not the generalists. You have to understand there's no shame in not knowing something outside your domain. For example, if you've never done AI programming before, would you be offended if an experienced AI coder skipped over some details in order to provide a more friendly introduction? It's not meant as an insult, it's not a reflection on your capability to learn about it, but rather it could be based on a consideration that you may not have taken the time to learn about it yet.
As a corollary, this is the same point that people asking 'why aren't engineers paid better' are really asking about, but can't or won't see...
Let's put it like this, if I was the guy giving the talk, and I knew I'd get this reaction, I would do nothing differently. I'm not going to waste my time pandering to people who would take a tech talk personally. I don't care if managers get paid more, if they want to water down their words to ensure no offence to anyone, they're welcome to take that burden.
> I will explain shared memory multi-threading in a bit, especially for those unfamiliar with the concept.
Versus the condescending:
> I will explain shared memory multi-threading in a bit, especially for the front-end developers.
Tiny change, makes all the difference. Be inclusive; don't be condescending.
It annoys me no end when someone talks down to me, or maybe even worse, asks about my skill level in a particular area when the implication is that they will then know how to dumb down what they're saying to a level that even I may understand.
But after watching the first 15 minutes of the video, I'm not seeing any kind of judgmental or condescending tone. Was it something in the remaining half hour?
He did ask for a show of hands of front end vs. native developers, but you have to admit that front end developers who've only done front end may never have been exposed to the concept of threads with shared memory - his topic for the next few minutes.
So my suggestion is to give it another shot. Based on the first 15 minutes, it seems like a fairly interesting and insightful talk.
Of course, people can release their JS, Python, Java, et al. that compiles down to WebAssembly, but because you are not forced to, I do not think most will.
What "open source"? Open Source is dead in web services, because everybody can just hide the source behind the server. Even with SPAs, you could have 90% of the actual functionality on the server side, behind REST APIs and the like.
What's wrong with that argument is that although most users don't need it, it's an excellent way for those interested to explore and find out how things work, and essentially get into web development with nearly no effort. If this trend continues, then in the future, when browsers become nothing more than dumbed-down interactive TVs, I think there will be very few, if any, web developers who started out of their own interest and exploration instead of thinking of it solely as a profession, and that's a really bad thing. (For the corporations who want to take control, this could be viewed as a good thing: why would they want to encourage independent thought and exploration when they could have dogmatic obedience?)
Computing systems are becoming more closed and proprietary, and this will be a big step backwards in terms of the openness of the Web that led to its growth and freedom in the first place.
If that were the case, why do people get interested in programming in any non-interpreted language without thinking about it as a profession? I have trouble seeing this as a big step backwards in this regard, since there are plenty of non-interpreted languages that kids in high school have gotten interested in on their own, even when there weren't nearly as many good resources for learning them online.
> Computing systems are becoming more closed and proprietary, and this will be a big step backwards in terms of openness of the Web that led to its growth and freedom in the first place.
Web assembly was developed publicly from pretty much the very beginning on GitHub. We live in a world where the biggest closed-source players have open sourced implementations of their programming languages/platforms, compilers and all (e.g. OpenJDK, .NET Core, Roslyn, Swift). There are no popular languages around now where the leading implementations aren't open source. If this is what "becoming more closed and proprietary" looks like, then I can't say I'm opposed.
Thankfully for the openness of the Web, developer tools have been a source of intense competition among browsers. I see no reason why this will not continue to be true for introspection of Web Assembly.
We had to buy magazines and books to view any source.
Still lots of us got interested and a few guys managed to build the initial web infrastructure without a view source button.
I also doubt you are old enough to have used something like the Altair 8800 or have built your own computer from Elektor schematics.
I wonder if it wouldn't have been easier to just go with sandboxed, auto-updated native applications rather than taking a 15-year detour through HTML, DHTML, 'Ajax' Web 2.0, SPAs with 'compiled JS' and now this...
1. Much smaller representation, which will mean faster download times
2. More control-flow structure retained, which means less needs to be rebuilt by JIT-compilers.
3. As a corollary of #2, we get easier and more helpful 'decompiling' of code blobs
4. Another corollary of #2: easier analysis for safety properties
(We've seen similar with emscripten: As much as I love it as an experiment, it's highly problematic in production.)
And of course, as others have already pointed out, it's likely to ruin the vision of the web as a mosaic of just a few trivial technologies that can be investigated with modest knowledge. (Compare the comment by zmmmmm.)
I've had clients who have tried to "clone" my work, but eventually come back because they couldn't figure out the backend code or scale.
I would love something that is the equivalent of installing an app on a phone: that is, using a compiled app in a browser, where the source can't be viewed.
Even better if I can give a rich UX by having access to the system.
Yes, I believe in some very small set of cases this will be welcomed. Otherwise, I think for everyone else, like 95% of the web, HTML, JS, CSS, etc. will still be the way to go.
If they created their own software, they would not have to pay me any money. So without money, I have to go on welfare.
You get my point?
No, I don't want any of my clients to get access to my source code. I am not a giant corporation like Red Hat, Google, Oracle or another company that open-sources its software and then sells support or other services.
I'm a small company that, with more clients, is hoping to employ a small workforce and grow.
And the current ad-blockers almost never go through the source code of a script; they only look at the src attributes and the URLs.
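As a sketch of that URL-only filtering (pattern syntax simplified to substrings; real blockers use EasyList-style rules):

```javascript
// The blocker inspects only the request URL, never the script body, so it
// works the same whether the payload is readable JS or a wasm binary.
function makeBlocker(patterns) {
  return function shouldBlock(url) {
    return patterns.some(function (p) { return url.indexOf(p) !== -1; });
  };
}

var shouldBlock = makeBlocker(["/ads/", "tracker.example"]);
console.log(shouldBlock("https://cdn.example/ads/banner.js")); // true
console.log(shouldBlock("https://cdn.example/app.js"));        // false
```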
You want .NET on bare metal? Implement it on x64, ARM, MIPS and maybe PPC. You want .NET in the browser, implement it on wasm. You want Java in the browser, implement it on wasm. You want BEAM in the browser, implement it on wasm. You want Parrot in the browser, implement it on wasm. What you are not going to do is to get Google, Apple, Mozilla and Microsoft to all agree on any of those ILs.
The only possible benefit that this technology could have for the planet is to give Brendan Eich a reason to run yet another one of those video-game-inside-of-a-video-game demos at another one of those JS conferences that frontend devs go so nuts about.
Except this time, there will be yet another video game inside of that inner video game!
Excuse me, I haven't even BEGUN to scratch the surface of the awesomeness that is PNaCl in my own project!
Asm.js "cheated" by creating a path forward that did not require consensus. Mozilla used it to innovate without needing to argue with anyone else. For other vendors, asm.js would technically "work" without other vendors putting in any technical or political investment. But, the hook was that if they didn't get on board eventually, their browsers would start to look bad if they ran asm.js sites poorly.
WebAssembly is the outcome of Mozilla's trick. Now that all of the vendors have slowly become convinced over a period of years that there is a smooth, gradual path forward over the next several years that does not involve any technical or political chasms to cross, they are starting to work together to make it happen.
Technically it was a plugin for Pepper and the only difference between it and Flash was that it was open source and only available to Chrome.
Agreed. Web Assembly is the solution :)
Even if Mozilla had the will, they really don't have any way to contribute very significantly to the development effort, because it sort of requires a lot of very talented people getting paid large sums of money for long periods of time... which is obviously something that Google can afford to do.
I think everyone has always known that NPAPI was eventually going to have to be completely overhauled or even replaced. For anyone to constantly harp on the idea that "all that matters is the open web" when they come from the very same organization -- although going by a different name -- for which the original binary plugin architecture was named (Netscape Plugin API) is just patently absurd.
There is so much quality, freely available C/C++ code in this world that it is pretty mind boggling, and the purely technical achievement of allowing this code to be efficiently used by end users through the web platform is, again, a fairly mind boggling idea.
I'm not saying that this web assembly concept won't have any use cases whatsoever in the future, but I am saying that the proven PNaCl technology that exists right now is way, way too powerful for any web-based applications developer worth his or her soul to pass up very easily.
And implementing a modern JS engine isn't? Or developing Rust? Come on.
The objections that many other browser vendors have to integrating PNaCl into the Web platform have been detailed repeatedly and have nothing to do with "X browser vendor doesn't have the talent on staff to implement the technology".
Today using Ajax is better practice because it integrates better with the browser and it'll work on mobile.
From the Native Client wikipedia page:
> As of 13 May 2010, Google's open source browser, Chromium, was the only web browser to utilize the new browser plug-in model. Mozilla has announced that they are "not interested in or working on Pepper at this time." As of 2015, Pepper is supported by Chrome, Chromium and Opera.
Safari and IE/Edge automatically don't matter because they are exclusive to their respective [proprietary] host operating systems.
That leaves Firefox.
Does anyone really think there is legitimate competition between Chrome and Firefox anymore?
I mean don't get me wrong, Firefox was a godsend in the bad old days of M$IE domination...
But this is 2015, and Google is basically Skynet, while Firefox is not much more than a warm, fuzzy memory.
Seriously... does anyone really want to turn this into a battle between Google and Mozilla?
My original reply was to a part of the article that talked about the "evil" non-open nature of web-technologies that are not humanly readable, textual HTML/CSS/JS. Then someone replied that PNaCl is only supported by a single vendor. Then I said that was a factually incorrect assertion, with the response being that I was engaged in hair splitting.
So then, I suggested that we should examine the rest of the vendors. In the morals based context of this entire subthread, I think it is plainly obvious that Safari and IE/Edge don't have any place here, when it comes to the question of whether PNaCl is a "good" technology, based simply on the question of the proportion of vendors that happen to support it.
If you are instead talking in a purely pragmatic, capitalist sense... then of course all that matters is brute numbers. But then there is no real discussion to be had, because in the cold, hard business world, whoever wins is just whoever wins. A equals A. It's a mere tautology.
Your whole line of argument is confusing.
The one method of running NaCl programs on their own relied on thousands of lines of Python code spread across a nested hierarchy of frameworks just to generate command line arguments to pass to the main runner binary -- and this used completely different interfaces into NaCl from Chromium's implementation. And all of this, being tightly integrated into the Chromium codebase, was subject to change at any point with no guarantee of API stability.
Maybe things have changed since then, but at the point when (P)NaCl was receiving the most attention I found it entirely impenetrable. Though the code may have been open source, the knowledge needed to integrate it with a new system was entirely proprietary. And anyone doing so would have ended up with an implementation bound to the evolution of the Chromium internals.
I'm instead talking about implementing one's own LLVM-bitcode-compiling zero-install plugin architecture with safety guarantees derived from static analysis (i.e. what you get by implementing the whitepaper), providing a similar API to plugins to NaCl's PPAPI.
(Shout out to my boys, Sam Clegg, Brad Nelson and company!)
Given that neither the linked text nor their FAQ even mentions or discusses that, I'm even more worried.
What can you do today in Web Assembly that you can't already do in asm.js?
So the WebAssembly future will look like this: you will compile into WebAssembly multiple times, for different versions and implementations, and you will need to deliver the correct artifacts to the clients.
The reason the web still works is that everything is shipped to the client in source form, and the client then interprets and compiles everything according to its capabilities.
WebAssembly will probably never gain traction because of this reality.