Compiling to WebAssembly: It’s Happening (hacks.mozilla.org)
306 points by mnemonik on Dec 17, 2015 | 216 comments



WebAssembly always makes me a little sad. It feels like we are going back to Flash, only it won't be bad this time, I promise, no really.

I always feel like the most obvious use for it is to start writing truly hateful and abusive code.

I'm sure this is because I'm getting old.


What makes WebAssembly like Flash? Please elaborate.

You already can't just read someone's JavaScript code if they're using a transpiler or uglify or something like that, it looks like line noise and you basically have to go through a lot of work to reverse engineer it. WebAssembly is no worse.

The JavaScript environment on the web reminds me of the "walled gardens" that were Lisp machines back in the day, that force you to write code in Lisp, or if you wanted to write in C you would end up with a bit of a nightmare on your hands. The Lisp machines were beautiful and integrated and the source code was everywhere, but they weren't for everybody, and in the end it was the diversity of Windows, Unix, and Mac OS that replaced them.

WebAssembly is Unix, JavaScript is a Lisp machine.


> What makes WebAssembly like Flash? Please elaborate.

It's pretty straightforward. Both tend towards the Big Binary Blackbox Blob.

It's true that WebAssembly has advantages Flash, Java, and Silverlight didn't really have in terms of being freely reimplementable and (potentially) native to the browser. And it's probably a good thing that a browser can be a VM via a target-intended subset of JS. BBBB may be the right thing for some applications.

But to the extent that the browser becomes something developers see primarily as The VM That Lived (and there are clearly a lot of developers in this boat) yes, we're forgetting lessons we should have already learned from the difference between Flash/Java and the open web.

> You already can't just read someone's JavaScript code if they're using a transpiler or uglify or something like that, it looks like line noise and you basically have to go through a lot of work to reverse engineer it. WebAssembly is no worse.

I'm clearly in the minority, but I had qualms about uglify and other source manglers from the beginning for the same reasons people have qualms about Web Assembly: they break the benefits of view source.

I get that they can also be tools that help squeeze out performance, and I use them selectively for that reason, but as far as I can tell, most of the web development world uses this as an excuse to stop actually thinking about the issue -- and for that matter, to stop thinking about how they're putting together their web app ("we're using minification and gzip and doing an SPA using Ember/Angular/LatestDesktopLikeFramework because that's what professionals do now, why do we have performance problems?").

Similarly, I've seen a lot of people use compiling to JS as an end run around what are essentially aesthetic/subjective issues with JS as a language when they'd probably do just as well spending more time learning to use it (as far as I can tell in years of working with all of them, JS is in exactly the same league with Python and Ruby and Perl and other similar dynamic languages). That doesn't mean there are no beneficial cases for JS as a target (personally, I'm intrigued by Elm), but I think I'm justified in being afraid that people will use it as insulation from superficial problems.

> that force you to write code in Lisp

Lisp doesn't force you to write code in Lisp. That's one of the reasons why it's awesome -- and potentially horrible. Transpiling/compiling can be similarly awesome and potentially horrible for a lot of the same reasons.


I'm sorry, this is going to be a rant. Feel free to ignore if you don't care about my opinions on the subject.

The open web has nothing to do with "view source". It never did. "View source" just makes debugging easier, it's a technical solution to a technical problem. It's why we use JSON or XML instead of ASN.1 for our daily work.

The open web is a web where no company is the gatekeeper. It's a web where we have multiple browsers, competing JS engines, et cetera. No one company can hold the web hostage. This is exactly why Flash, Java, ActiveX, and Silverlight failed. Every one of those technologies was owned by a single company. Every one of those technologies failed because it's impossible for one company to be the gatekeeper for a technology that is supposed to run on billions of heterogeneous devices.

So the lessons of Flash are this: don't put all of your eggs in the Adobe basket.

Meanwhile, you're fighting against WebAssembly because you are ideologically against black box software. Getting rid of WebAssembly does not actually achieve that goal. It's like you're fighting against condoms because they encourage promiscuity. The promiscuity is already there, and condoms just make it a better experience for everyone involved.

----

Footnote: JavaScript is on par with Python/Ruby/Lisp in a lot of ways, but there are some important deficiencies with respect to typing and static analysis, deficiencies which cause actual bugs in real world problems and cost developers time and money to deal with. It's why we've been inventing TypeScript, Flow, Dart, CoffeeScript, et cetera. You say that other people's problems with JS are "superficial", but that's exactly how I see your problems with WebAssembly.

People are going to write code in C++ because they can hit performance targets on platforms where they ship native code, and WebAssembly means a lot to the folks working with Unity, Unreal, or thousands of other existing projects that can now target the open web. The new open web, with WebAssembly, is an open web with more diversity than ever before, rather than a JavaScript monoculture.


Yeah ... blackbox blob binaries (BBBB's?) don't strike me as the parent commenter's problem. It's not having access to a software's _original_ sources that you can build and deploy yourself that is the problem. Ideally, there would be some nuclear-powered version of View Source that gave you nice access to the original sources for the code loaded on a page - sort of a formalized "Fork Me on GitHub".

When I am being philosophical, smoking a cigar and sipping tequila after midnight, I begin to understand that the only software I've written that has a chance of outliving me is my open source. Everything else I've done has just been to serve up fleeting amusements. There is literally nothing commercially closed that I have contributed to that shouldn't be utterly scrapped.

If I died tonight, my positive impact on this planet for its people now and in the future is probably just constrained to my kids surviving me. Not an inconsequential thing, I love my kids like crazy and they're going on to do greater things than me. I just have the sense I've got more to offer than procreation and raising good people.

In my opinion, any hairy audacious moonshot goals humanity chooses to tackle should be open sourced in every way possible - otherwise the efforts cannot be fully genuine and transparent to generations that follow. Perhaps the increased open-sourcing of code and designs we see from industry today versus 20 years ago is the best we can hope for in competitive capitalist societies; maybe what we have is good and is the most we can expect.


I agree with you on some points, and disagree with you on others.

> The open web is a web where no company is the gatekeeper.

Absolutely!

> The open web has nothing to do with "view source". It never did. "View source" just makes debugging easier, it's a technical solution to a technical problem.

I think "view source" was essential to the web we have today. It's not just a technical solution to a technical problem - it's an accidental UI/defaults solution to a social problem we didn't realize we had.

(note that, in general, this applies to the web of maybe 15~20 years ago. Geocities, inline styles, table-based layout, the works)

The problem, namely, is how to educate the general internet-using populace on the means of content-creation, and encourage them to actually create. If different companies can put out their web sites or their technologies with slightly reduced barriers on innovation, sure, that's kinda nice I guess. But, the web reaches its true potential when the average nobody uses it not just to consume, but to create, produce, and publish.

There's a very small subset of the population that can just read an HTML spec (or at best, a tutorial), and actually churn out a working web page at all. Most people like to learn implicitly, by example and by imitating. With "view source" available, any web page you see is an example to learn from - or a code snippet to take, edit, and make your own.

---

Nowadays, we're up to our ears in HTML/CSS/JS classes, tutorials, etc., reasonably priced hosting, code snippets, the works. And yet, a smaller percentage of the internet population (and, I think, a smaller absolute number of people) actually creates their own website.

Sure, there's more technology to grapple with, and the bar has been raised on what constitutes a "good" website the author would be satisfied with, but I think the real failure is that it's pretty much impossible to read the source of any interesting, non-trivial website.


I learnt to write HTML files in the early 90's (the Geocities era, as you refer to it - forget tables, back then everything was frames). I learnt exactly how you surmised: by using view source on existing websites.

As a 13 year old kid who was mildly curious I was able to build a web site for my high school by hand in 'pure' html code. It looked laughably simple compared to the web pages of today but it was easy. I can and did teach many other people to make websites the same way.

Nowadays it seems to me like you don't have a hope in hell of hand-writing a website (maybe some people still do, but it seems like an exercise in frustration to me). Everything is all frameworks and scripting languages.

Earlier this year, 20 years after I wrote HTML pages for my school's web site, I had cause to write some HTML again. I work as an engineer (the non-software type) in an industrial plant. I needed to display some non-critical data to our plant operators on a screen in the control room. I thought 'hey, I'll put it on a web page'. It would save me from having to mess around with C, which is what our HMI system is written in. So I built something using 1990's-era HTML, mostly from memory (although I did move on to using tables instead of frames). It worked.

A few weeks later someone asked me to make it possible to view historical results, so I built a simple HTML form (also 1990's-era technology) to allow the user to input the date range he wanted to view results from. It worked, but it did no input validation, was finicky about the format you entered dates in, etc...

A few weeks later I decided I'd modernise it - modern websites have things like popup calendars which allow the user to "pick" a date instead of having to type dates into a text field. I found a framework written in JavaScript which provided those sorts of things and attempted to use it. It seemed to work, but a lot of subtle bugs kept popping up.

When you clicked on a date in the calendar, such as the 12th of January, the framework passed it to the form in US-style 01-12-2015 format. I live in Australia; we use 12-01-2015 as our date format. I could find no way to change how the framework functioned. I searched Google, and a few other people complained about the same issue; there seemed to be no solution. I thought I'd dig into the JavaScript file and try to edit the program to provide the date in the format I needed. The file was 'minified', impossible-to-read JavaScript. I gave up and modified my script to assume US-style date input instead.

Maybe I'm just too old to learn new tricks now, but I miss the older, simpler web sometimes.


In some ways, the recent HTML specs allow the dev to go back to that old-school handwriting.

For example, instead of bringing in a JS framework, you could now just replace the input type with "date", and a modern browser would show the date picker.


That doesn't work in Firefox or IE - unfortunately my work has standardised around IE


If you were using an open source framework, the source is available. The minified file is just to save space.


Well said.

It's even built into the spec of WASM to at least try to make the bytecode convert into human-readable code. https://github.com/WebAssembly/design/blob/master/HighLevelG...

But as you state, it's not integral to an "open web". IMO WASM is a long time coming and I think it will drastically help the "open web", which is suffering from some library bloat. Next we need an open and better image format.


> The open web has nothing to do with "view source". It never did. "View source" just makes debugging easier.

This is about as true as the idea that the only use of seeing source code is debugging (or building/deploying your own version of the app).

Which, of course, every developer here knows is not true.

Reading source code is useful for figuring out how to do something you didn't know how to do. It's useful for discovering idioms you didn't know, even if you already knew another way. It's very useful if you're trying to do bookmarklets/extensions/mashups or even more significantly engineered interop.

I don't know if there's a formal gatekeeper out there with a list of all the things that capital-O Officially make The Open Web(TM) great, so I guess there's no way to settle what's on it.

But I don't know how anyone could argue that these things weren't significant bullet points in a much longer list of reasons why the web has seen faster and broader adoption than any of the competing VMs and why it was/is often more useful and nice to work with despite more limited capabilities.

And I think it goes almost without saying that list is much bigger than the "no single gatekeeper" point.

> you're fighting against WebAssembly because you are ideologically against black box software

I am neither fighting against WebAssembly nor ideologically against any kind of black box software. My earlier comment makes it clear I think WebAssembly has a place (and, more generally, that transpiling does) as well as its hazards.

I am stepping in to point out that Flash and WebAssembly have significant things in common, some of which are tradeoffs against things that have made the Web an effective and proliferating medium.

> [JS] some important deficiencies with respect to typing and static analysis

See my aside about Elm. Given the fact that dynamic languages (including maligned ones like JS and even PHP) seem to be about as equally successful in terms of shipped software as statically manifest typed languages, whether or not even something like TypeScript is really going to give engineering teams an edge seems like far from a settled question to me. But I'm interested to see what happens where we're talking significantly augmented capabilities rather than the largely aesthetic differences that something like CoffeeScript represents.

(Speaking of which: "TypeScript, Flow, Dart, CoffeeScript"... at least one of these things is not like the others if you're invoking static analysis and typing as reasons for transpiling.)

> You say that other people's problems with JS are "superficial", but that's exactly how I see your problems with WebAssembly.

I can't tell exactly what you're saying here, but to a close approximation, it sounds like you might be suggesting that the readability of WebAssembly is going to be on the same order of difficulty that an experienced Python developer has reading idiomatic human-written JS (i.e., aesthetic). This is potentially possible, I guess, in a world where everyone writing compilers that target WebAssembly is conscientious at/above the level that the CoffeeScript developers were (CS output is really fairly readable, arguably more so than minified JS). But then again, that's not what the name suggests WebAssembly is for. In practice, most of the time it's probably going to read... a lot like assembly language dressed in C-like syntax.


> Given the fact that dynamic languages (including maligned ones like JS and even PHP) seem to be about as equally successful in terms of shipped software as statically manifest typed languages, whether or not even something like TypeScript is really going to give engineering teams an edge seems like far from a settled question to me

I'm always so frustrated when I read this, because I've worked on a few dynamic language codebases as well as seen a few open source ones on github. Very often I see the pattern of the project contributors losing control over the dynamic language project. They slip up, the tests become insufficient and now they refuse / are afraid to touch critical parts of the code. Soon development slows down to a crawl. Or rather, there are no more changes - only additions / grafts to the existing codebase.

When I first started using TypeScript, I feared that I'm going to lose the benefits of a dynamic language and that I'll have to write a lot more boilerplate. I had no idea how little of that was true. TypeScript really goes out of its way to model as many JS idioms as possible, including some really dynamic ones:

  * Structural types are formalized duck type checking. You don't 
    have to say your object implements an interface: if it has all 
    the right fields, and they're of the right type, it does. This is 
    similar to Go interfaces (but a bit more powerful in what it can express).
  * Union types plus type guards can model dynamic function arguments
  * Intersection types can model record (object) merging
The entire type system in TypeScript is basically a formalization of the typical duck typing patterns that are common in JavaScript. As a result, TypeScript can model almost any common JS idiom. The loss of expressivity was very minimal - maybe 1% of the most dynamic code only. And even there, you can tell the type system "I know what I'm doing" (cast to `any`), turn it off, do your dynamic stuff and then tell it the resulting types (cast to whatever types are the result before returning).
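A quick sketch of the three bullets above (all names here are invented for illustration, not from the comment):

```typescript
// 1. Structural typing: any object with a `name: string` field counts
//    as Named; no `implements` declaration is needed.
interface Named { name: string }
function greet(x: Named): string { return "hi " + x.name; }
const point = { name: "origin", x: 0, y: 0 }; // extra fields are fine

// 2. Union types plus a type guard model a function that accepts
//    either a string or an array, as dynamic JS code often does.
function describe(x: string | string[]): string {
  if (typeof x === "string") return "string of length " + x.length;
  return "array of " + x.length + " strings"; // narrowed to string[]
}

// 3. Intersection types model object (record) merging.
type Point = { x: number; y: number };
type Labeled = { label: string };
const merged: Point & Labeled = { x: 1, y: 2, label: "p" };
```

All three compile away entirely; the emitted JavaScript is just the plain duck-typed code you would have written by hand.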

Given the above, there's been absolutely no doubt in my mind whether TypeScript gave us an edge or not. It's been incomparably better than just plain JS.


Hey, in case you aren't aware, having WASM output a human-readable version is part of the spec.

"define a human-editable text format that is convertible to and from the binary format, supporting View Source functionality." https://github.com/WebAssembly/design/blob/master/HighLevelG...


> JS is in exactly the same league with Python and Ruby and Perl and other similar dynamic languages

There's a big difference from strongly typed dynamic languages that go out of their way to catch programming mistakes at runtime. With JS you get some "wtf" result when things go wrong, and it propagates a long way before manifesting (if it's caught at all). Combine that with JS's many weird semantics and special cases that are hard to keep in mind at once...

Recent favourite (minimized case after discovering unexpected piles of NaN's in some faraway place)

  > parseInt("0")
  0
  > parseInt("1")
  1
  > parseInt("2")
  2
  > ["0", "1", "2"].map(parseInt)
  [ 0, NaN, NaN ]


Can you explain that one? Does it have something to do with parseInt being able to take more than one parameter?


Array.map does not work like map in other languages.

They decided that map shall receive the array index as the second argument, and there's also a third argument that does something else. JS happily mashes these 3 arguments into parseInt's argument list of (stringValue, radix) without error. Hilarity ensues.

(This code also sins against "never call parseInt without the radix argument", see point about too many minefields to remember at once)


Seems to me the answer here would be to bind the radix argument of parseInt with a wrapper before applying map.
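Right - a sketch of that fix:

```javascript
// map calls its callback as fn(value, index, array), so bare parseInt
// receives each element's index as its radix argument:
//   parseInt("0", 0), parseInt("1", 1), parseInt("2", 2)
// A wrapper pins the radix and drops the extra arguments:
const ints = ["0", "1", "2"].map(s => parseInt(s, 10)); // [0, 1, 2]

// Number also works here, since it ignores the extra arguments:
const ints2 = ["0", "1", "2"].map(Number); // [0, 1, 2]
```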


> JS is in exactly the same league with Python and Ruby and Perl and other similar dynamic languages

I don't know about Perl, but Python and Ruby both give you a type error for [] + {} or {} + [] or 1 + "1". And both Python and Ruby give you an error when you do something like "".notavalue, whereas javascript just gives you undefined. They might look similar enough visually, but under the hood they are different in terms of strictness and types.
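For comparison, here is what JS actually does with those same expressions (a quick sketch you can run in Node):

```javascript
// JavaScript silently coerces where Python/Ruby would raise an error:
const a = [] + {};        // "[object Object]" - both sides become strings
const b = 1 + "1";        // "11" - the number is coerced to a string
const c = ({}).notavalue; // undefined - no error for a missing property

// ({} + []) at statement position in a REPL is even stranger: the {}
// parses as an empty block, leaving +[] which evaluates to 0.
```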


> I'm clearly in the minority, but I had qualms about uglify and other source manglers [...]: they break the benefits of view source

Not really: we have source maps for this.


“View source” is a tool for outside end-users who want to understand or reverse-engineer code on websites they don’t control. “Source maps” is a tool for inside developers trying to debug their own code. Whether or not the browser supports source maps is irrelevant if you don’t have access to the original code, but only the minified/mangled version with all single-character variable names.


>Whether or not the browser supports source maps is irrelevant if you don’t have access to the original code

Sourcemaps can embed the original source inline, though of course that is a decision made by the author and they can decide not to make sourcemaps that way. It is more likely that the end-user doesn't have access to the source maps anyway.


I would love to see examples of source maps working in the wild.


> they break the benefits of view source.

The benefits are not that great these days. At least not for people who don't support a web ad monetization model or have no interest in being on the VC boat to float model.


> It's pretty straightforward. Both tend towards the Big Binary Blackbox Blob.

A huge part of the project is to ensure decompilation to asm.js


> "walled gardens" that were Lisp machines back in the day, that force you to write code in Lisp,

Lisp machines came with Fortran and C compilers.


I never used it, but allegedly they not only came with a Fortran compiler, but the best Fortran IDE.


The idea that UNIX is somehow the "open" and "inclusive" variant of computing is a good example of "The victors write history."

It's really frustrating to see people parrot it. UNIX succeeded for a lot of reasons, but not because of these. In fact, at the time it was the more restrictive and closed option. It also put a lot of burdens on the developer in the name of time-to-market.

The entire "worse-is-better" mythology is essentially a story of how market velocity is a powerful force in the face of "doing it well."


> WebAssembly is Unix

The irony of this is that almost no UNIX offers a compelling, complete developer integration outside of things which produce slavishly detailed C-compat layers that introduce an ever growing number of undefined behaviors.

It also ignores that many Lisp machines actually shipped with compilers for competitive languages. LispMs just had a lot of work (for the time) done on optimization for Lisp environments. We take for granted the trivial execution overhead of interactive environments, but this was no small feat back in the day.


> WebAssembly is Unix, JavaScript is a Lisp machine.

Isn't this an argument against WebAssembly? We've already made the mistake once.


The main use for it (for me at least) is to write fast numerical code (physics modeling, audio processing, image processing, machine learning, 3D rendering, statistical analysis, etc.) in a deterministically fast way against an easy-to-reason-about programming model with an ability to manually allocate memory and structure data, instead of hoping that every browser’s heuristic-driven JIT will be able to optimize some high-level code the same way.

Most of the program logic can remain standard Javascript, but the little kernels of hot numerical code can be much more effectively optimized.

This gives Javascript/browsers the ability to handle problems which were previously only possible to tackle with native C programs.


The issue with this is that if you care about speed, you need to be using SIMD -- and wasm doesn't seem to want to support it, instead targeting a "minimum viable" standard from the year 2000.


SIMD.js is in progress for JS, so JS VMs are already working on it. It is on the roadmap for being added to WebAssembly, likely with a similar API.


Is there a technical restriction why JS JITs don't transparently apply SIMD to existing loops like normal compilers (=autovectorization)? Or is it in the cards to build it on top of SIMD.js?


Autovectorization is hard and fragile in the best of times -- tight stable loops with no (or few) branches and little/no memory access that may be aliased, etc.

For a dynamic language JIT this is pretty much infeasible (as I understand it). Every loop might have branches for guards/bailing out due to deopts, and at least in JS, TypedArrays are allowed to alias each other.
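The aliasing point is easy to demonstrate (a minimal sketch):

```javascript
// Two TypedArray views over one buffer alias each other, so a JIT
// cannot assume that writes through one view are invisible through
// the other -- exactly the no-alias guarantee autovectorizers need.
const buf = new ArrayBuffer(16);
const whole = new Uint8Array(buf);
const tail = new Uint8Array(buf, 4); // same memory, offset by 4 bytes

whole[4] = 99; // this write is visible as tail[0]
```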


Existing Javascript JITs also don’t output SIMD instructions as far as I know.

I expect SIMD support can be added to wasm at some future date.


I wasn't really referring to wasm vs JS (which is easy to outperform), but reaching equivalent speed to native, which is the ultimate goal, I'd hope. Otherwise you're just throwing performance away...

But, I was a little quick on the trigger. It's not obvious, but after drilling through the design docs for half an hour... I found a tentative discussion of how SIMD might be added later (based on some kind of extension mechanism).


Why wouldn't llvm be able to emit SIMD when it can vectorize? The idea that SIMD is the end-all of fast computing is pretty short-sighted IMO.


SIMD isn’t the “end all”, but upcoming Intel chips have instructions for handling 8 doubles or 16 floats per instruction, and if you’re trying to implement a video codec or large-scale physics simulation, an order of magnitude of speed difference can make or break your app.

To take an example where timeliness is crucial, think of the difference between, say, 10 frames per second vs. 60 frames per second in a first-person shooter game.


The important use case is to allow people to write web applications with their language of choice, instead of horrible javascript. And get near full performance.


I think this is important. Probably because I have the "worse is better" article from 1991 open in another tab, and if you s/Lisp/Javascript/g then many of the criticisms ring true. Some of what made Unix, Windows, and Mac OS healthy environments for development was the fact that all developers were equally well off; you didn't have to pay a tax for writing something in a language other than Lisp. You could keep your Fortran code, and mix it into a C program, run from a Bourne shell, running in a terminal emulator written in C and a WM written in Lisp. You can replace any part with a part written in a different language.

JavaScript will continue to be dominant, but we desperately need to be able to write things in the language of our choice.


Hmm? You paid a huge tax in desktop development for writing in a language other than the one the OS was written in. All of the platform documentation & examples were in its "native" language. You usually had to marshal data structures yourself to fit the data formats of the native language. You had to write shims (oftentimes in assembly!) that would bridge the calling conventions of your preferred language to those that the frameworks were written in.

There's a reason that C became the dominant language during the 80s and early 90s: it's because Win32, UNIX, and MacOS >=7 were all written in it. That's a large part of what Worse is Better was about. Richard Gabriel founded a company to write software for Lisp Machines, pivoted it to run Lisp on commodity hardware, found that all of his customers would rather just write in C, pivoted it again to do a C++ dev environment, and eventually went out of business.

The renaissance for other languages was really during the web era, when everything just spit out HTML and it didn't matter what the server was written in. Once customers started demanding rich interactivity on the client, there was a strong incentive to write everything in Javascript, and then a strong incentive to write the servers in Javascript too, and then a strong incentive to use Javascript for other things like native apps and IoT devices too.


Yes, you paid a tax. But my argument here is that the tax for writing in different languages on Win32, Unix, and Mac OS was far cheaper than the tax for writing in other languages on a Lisp machine. This is because translating low-level code to high-level code is not as straightforward as the reverse operation. For example, if I compile C to JavaScript, how do I even think about writing a function like memcpy()? You end up with a monster like asm.js where interacting with the DOM is a chore.
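This is roughly how asm.js-style output handles it: all of the compiled program's memory becomes one big typed array, and C pointers become integer indices. A simplified sketch (not actual compiler output):

```javascript
// The compiled program's entire address space is a single "heap".
const HEAP = new Uint8Array(65536);

// "Pointers" are just integer offsets into HEAP, so memcpy is
// nothing more than index arithmetic over the shared array.
function memcpy(dest, src, n) {
  for (let i = 0; i < n; i++) HEAP[dest + i] = HEAP[src + i];
  return dest;
}
```

Real asm.js output adds type coercions and aligned multi-byte views (Int32Array, Float64Array) over the same buffer, but the memory model is the same.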

Yes, the web brought a renaissance in different languages. But we already had a bunch of different languages lying around when the web became important. We were already using different languages to write desktop software. I didn't even learn C until the late 1990s, but I was happily writing software before then.

WebAssembly is going to make languages other than JavaScript palatable in the browser, and I can only see that as a good thing.


> Richard Gabriel founded a company to write software for Lisp Machines,

The company Gabriel founded (Lucid Inc.) was never supposed to write software for Lisp Machines. Its mission from day zero was to develop a Common Lisp implementation for stock hardware (SUNs, etc.). Symbolics, the Lisp Machine maker, refused to fund a portable Common Lisp implementation (which Gabriel and Benson, the latter then at Symbolics, proposed).

> found that all of his customers would rather just write in C, pivoted it again to do a C++ dev environment, and eventually went out of business

Actually the Lisp business of Lucid Inc. financed the C++ development.

It was a gamble. They easily could have continued to develop and sell Common Lisp (their main competitors from that time, Franz Inc. and LispWorks are still alive) and stay in a small/shrinking niche. But Lucid tried to diversify and to grow. The idea was to write a C++ environment with a similar and improved development experience as a Lisp system. They used the cash cow they had and eventually sunk the whole company when the C++ system flopped in the market. Their C++ system (Energize) was very expensive, technically complex, ...


I think that's like saying: "I prefer to use the hammer instead of the screwdriver" for something with a helical ridge.

You are better off re-inventing Java or Python, but in a container and streamed over the Internet from a URL.


I can't believe people still bash JavaScript.

I've used a lot of different languages including Java, C#, C++, AVR Assembly, Python and others but JavaScript is my favourite and I would not want to go back.

I think it's a shame that some people just didn't seriously try JavaScript. It's a very powerful, expressive language.

Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).

Also, JS is great for writing asynchronous logic.
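For example, a hand-rolled stub needs no framework at all (the `db` object here is invented for illustration):

```javascript
// A module under test depends on this (hypothetical) object:
const db = {
  getUser(id) { /* imagine a real database call here */ }
};

// In a test, swap the method out, exercise the code, then restore it.
const original = db.getUser;
db.getUser = (id) => ({ id, name: "stubbed" });

const user = db.getUser(42); // { id: 42, name: "stubbed" }

db.getUser = original; // restore after the test
```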


> I can't believe people still bash JavaScript.

I can't believe people actually like it. It might be understandable if you're comparing it to enterprisey Java, but I'm baffled that anyone could prefer ES5 to Python or Ruby. (I will acknowledge that ES6 puts it somewhere in the area of Python 2.5).

> It's an incredibly powerful, expressive language.

Not if you want super advanced features like a hashtable with non-string keys, or checking if two objects are equal.
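Both complaints are easy to reproduce (though, to be fair, ES6's Map addresses the first; `k1`/`k2` are invented names):

```javascript
// Plain objects coerce every key to a string, so two distinct object
// keys collide on "[object Object]":
const k1 = { id: 1 }, k2 = { id: 2 };
const table = {};
table[k1] = "a";
table[k2] = "b"; // clobbers the first entry: Object.keys(table).length is 1

// Equality is reference-only; there is no structural comparison:
const eq = ({ a: 1 }) === ({ a: 1 }); // false

// ES6's Map does keep distinct object keys:
const m = new Map([[k1, "a"], [k2, "b"]]); // m.size is 2
```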

It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).

As does any other dynamic language.

Also, JS is great for writing asynchronous logic.

As is any other language with first-class functions. And with others you don't have to do silly contortions to work around JavaScript's broken "this".

I think it's a shame that some people just didn't seriously try JavaScript.

Tried it, have written it professionally for many years, and as a result am very much looking forward to WebAssembly.


An interesting aspect which JS has and other languages don't is the "objects are hashes" notion. Combined with TypeScript / Elm / PureScript's ability to write and check the types of these (especially Elm before the refactor that removed the add/delete field features), this is very powerful. I often wish Haskell's built-in records were as powerful as Elm's / PureScript's, but wonder if that's doable in an efficient way without the JIT logic in engines like V8.

Also, there is something to the whole "modules are records/hashes" idea which is quite elegant, IMO. I'm not sure why we still put up with the idea that the module system needs to be a whole different language with different rules. But I'm not sure if there is a type system capable of modelling this very well.


Are there any dynamic languages in which objects are not hashes?


Ruby -- object data is all private, and the reader/writer/attr things just make the getters and setters for you.


Python is another example. You can assign dynamic properties, but there is no object literal syntax


> I can't believe people still bash JavaScript.

Can you believe you can satisfy every programmer out there with a single language? Of course not. Why did you have to use all the languages you listed? Because some made sense in a specific context, others didn't.

> Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing). Also, JS is great for writing asynchronous logic.

Testing with Python is also amazing. It doesn't matter how amazing it is if I hate writing Python code.

No, the reality is that, in 5-10 years, JavaScript skills won't matter, only a good knowledge of the DOM and Web APIs. In fact I'm pretty sure you'll see more openings for C++ developers on the front-end than JavaScript ones.


> In fact I'm pretty sure you'll see more openings for C++ developers on the front-end than JavaScript ones.

I would be happy to take a bet that this will not be the case.

The fact is that JS is much easier to learn than C++, has a broader ecosystem in the browser, is faster to write than C++ due to memory safety among other considerations, and is fast enough for app logic.

Think about it. C++ code has been supported for years on mobile. Yet Java/Dalvik is king on Android, and Objective-C is king on iOS. JS is faster than both (in the case of Objective-C, JS property access is faster than Obj-C virtual method dispatch due to ICs). So I see no reason why this will not be true on the Web as well. Web Assembly is very needed and important, but JS won't be going away.


If you're dissatisfied with objc_msgSend performance, you can write C or C++ seamlessly - or use that other language that's partially designed to address the dynamic dispatch nightmare.

If you're dissatisfied with JavaScript call performance, you have no choice.


> If you're dissatisfied with JavaScript call performance, you have no choice.

Other than the topic of this article?


Right - but this subthread is saying that JavaScript is enough and everyone should write JavaScript and not complain.


That's not what I said. What I said was that JavaScript would continue to be the most important client-side language on the Web. It's great that Web Assembly gives developers alternatives, and I hope it gets used to write games and to accelerate performance-critical parts of apps. I also hope that it allows people who don't want to code front-end logic in JS to deploy alternatives--I'm not even passing judgment on whether you should use JS (although I think choosing C++ over JS for high-level front-end app logic is a very poor decision). I'm just nearly certain most code will continue to be written in JS.


I have no idea what you mean by JavaScript being "easier" to learn than C++. My experience has been the exact opposite.

C++ is a giant beast, but most of it was relatively easy to learn after I internalized the general principles that guide the language's design. These principles were the first coherent account I ever found of how programs manipulating ephemeral resources should be written. [I am aware that Rust improves on C++, but it builds on, rather than replace, the general principles established by C++.]

On the other hand, in JavaScript I have never found anything even remotely close to methodological guidance for writing programs. JavaScript seems to make sense of anything as long as it is syntactically valid - a very low bar. As a result, I felt like I had to navigate a really huge space, hoping to eventually find a correct program somewhere.

JavaScript's ability to run in the browser is, as far as I can tell, its only advantage over C++.


> I have no idea what you mean by JavaScript being "easier" to learn than C++. My experience has been the exact opposite.

You are a tiny minority in this. I've taught C++ and JS and have never once seen JS be harder to pick up.

> On the other hand, in JavaScript I have never found anything even remotely close to methodological guidance for writing programs.

The #1 selling JS book is (was) literally called "JavaScript: The Good Parts". It teaches "methodological guidance" for writing JavaScript programs. Just as modern C++ books teach the "good parts" of C++.

> JavaScript seems to make sense of anything as long as it is syntactically valid - a very low bar.

That's (a) not true, with the static resolution semantics in modules; (b) to the extent that it is true is mostly a criticism of dynamic typing, which has a lot of advantages and is not a criticism of JavaScript.

> JavaScript's ability to run in the browser is, as far as I can tell, its only advantage over C++.

Memory safety? GC? A module system (as compared to #include)? Dynamic typing? This is silly.

(NB: I also think C++ has a lot of advantages over JS for certain domains. I'm a language pluralist.)


> The #1 selling JS book is (was) literally called "JavaScript: The Good Parts". It teaches "methodological guidance" for writing JavaScript programs. Just as modern C++ books teach the "good parts" of C++.

I'm talking about guidance from the language itself, not external parties. For instance, C++ templates don't provide such guidance - it's very difficult for C++ compilers to tell you where and how exactly you're using templates wrong. On the other hand, C++ destructors do provide such guidance - just put all your finalization logic in destructors and you're done.

> That's (a) not true, with the static resolution semantics in modules

Even with strict mode and statically resolved imports, JavaScript requires extremely defensive programming to get useful diagnostics when anything goes wrong beyond using identifiers not in scope.

> (b) to the extent that it is true is mostly a criticism of dynamic typing

Python (not to mention Scheme, which JavaScript is allegedly inspired by) is dynamically typed, but does a much better job of treating nonsensical code as an error.

> Memory safety? GC?

It's true that garbage collection makes memory management a non-problem in the vast majority of situations, but it doesn't even begin to address managing other resources. Unfortunately, a program whose only available resource is memory (and perhaps the standard streams) is nothing but a fancy calculator.

RAII is a comprehensive solution to a wide class of problems that includes memory management as a special case, so I think C++ wins this one.

> A module system (as compared to #include)?

Of course, you're completely right about this.

> Dynamic typing?

I can get exactly as much dynamic typing as I need in C++ programs, without affecting the parts that are meant to be statically typed.


I've tried JavaScript, and I decided I preferred type safety. This kind of thing:

> It lets you do stuff like redefine entire objects, properties or methods at runtime

Sounds horrifying to me, because, as in Ruby[1], library authors will decide that's a good idea. Typeclasses/protocols solve this problem perfectly, while maintaining type safety.

[1]: for some reason, this seems to be less of an issue in Python and Obj-C, even though it's totally doable?


If type safety is what you miss, why not use a transpiler like TypeScript?

http://www.typescriptlang.org/Tutorial


Then you're not writing JavaScript, which was OP's concern. TypeScript is fine (although Elm, Swift, and Haskell are more interesting, IMO).


TypeScript is very close to normal JavaScript. It's basically just JavaScript + type annotations which a compiler can check.

Compilation phase removes the annotations, after that point it is pure JavaScript.

How are Swift and Haskell relevant for client side web development?

Edit: Removed Elm


Are you aware of GHCJS, the Haskell-to-JS compiler? With the new and rather popular "build/dependency manager"[1] for Haskell, named Stack, you can now quite easily install this compiler.

And I think it is only a matter of time till someone writes a Swift-to-JS compiler (Apple might already have it on its radar).

[1]: I know it is not quite a "build/dependency manager", but I don't know what better to call it for the sake of this discussion.


Elm's only target is the browser/JS.


I stand corrected. I'd never heard of Elm before, but I assumed from context that it's a non-JavaScript-targeting language like Swift and Haskell.


Look at purescript. In order to install this madness you need to install no fewer than 5 package managers, but you might like it.


The situation has improved since cabal install was required.

  npm install -g pulp purescript

should be enough now.


> I can't believe people still bash JavaScript.

== vs ===

!==

hasOwnProperty

> Also, testing with JS is amazing - Especially unit testing on Node.js. It lets you do stuff like redefine entire objects, properties or methods at runtime (for stubbing).

Doable in Common Lisp for 21 years…

> Also, JS is great for writing asynchronous logic.

ITYM JavaScript has first-class functions. So does Lisp, so does Python, so does Go…


Now try taking a look at some real languages – languages with well-defined, simple, orthogonal semantics.

And, no, JavaScript is nowhere near any "powerful, expressive language". It is embarrassingly low level for a supposed scripting language and it does not provide any powerful productivity features whatsoever.

JS is also a nightmare for the implementers, it does not have a sane specification, therefore most of the tooling is not comprehensive.


It's not a bad language, but it's not good either. But how is it powerful? It's a very poor man's Scheme, and Scheme is not powerful either. If JavaScript has anything to offer, it's IMO simplicity, not power.

But I'll never understand who thought this asynchronous API was a good idea.

I just wanted to draw pictures in a canvas _in order_, because they should overlap. A common task, you would think. I ended up building a monadic builder for callback chains that creates a JavaScript string which is evaled. I felt like the language and the API were incredibly cumbersome, minimalistic and limited. It lacks a blocking API, monad support, DSL support, macros and lazy evaluation.

But maybe there is a simple solution to that which I'm not aware of.


> Scheme is not powerful either

In what sense is Scheme not powerful? It has TCO, syntax-rules, call/cc, etc.

Of course, all of those are things scheme has that JavaScript doesn't.


No types, a very minimal syntax, a standard library so slim that everything interesting is an implementation detail. And, as I said, JavaScript has less. JavaScript even lacks integer variables, while Scheme has the numeric tower. (Although that is not required by R4RS/R5RS, which is what most implementations care about.)


> No types, a very minimal syntax

I don't see how these make the language less powerful. Especially the syntax part, which plays a big part in Scheme's power, as an enabler for macros.

> a standard library so slim

I agree with this though.


Sorry to say, but the web isn't about writing apps in the language of your choice. If it's beyond plain, passive HTML, it's about running applications on foreign hosts/resources without a well specified license to do so. Scripting languages provide both auditable code and a small load as compared to binary object code (which is why we had them on servers and clients in the first place).

If we turn the web into an anything-goes bonanza using binary code without any keys, credentials, or permissions, WebAssembly (and Turing complete CSS as well) may well be the beginning of the end of the web as we know it, giving rise to a new, leaner and more restricted platform (for which some are already on the lookout, BTW).

[Edit] A small real world example: Client asks me to implement a third party plugin to allow them direct communications with their users via their website. A quick scan of the source code tells in a minute that the script isn't just doing that, but is also tracking user behavior and is phoning home related data. Now I can ask the client, if they really want to expose their users to this. With WebAssembly, there's no chance to do so.


Try looking at the source of Google.com and saying everything it's doing in under a minute.

You could look at the APIs it's using - why is it calling XMLHttpRequest or looking at the user's cookies? - but you can do the same with binaries, you just have to use a tool, e.g. 'nm -D <binary>' shows you the external functions the program calls.

I held the same position until circa 2008 (?), when JS minifying became truly widespread; nowadays, I think that battle is already lost.


So, just give up?

(Please mind that there has to be a strict relation between a minified code and a plain source code provided at some repository. This is not true, at least in terms of the resources needed to verify this and the probability of this being covered by average budgets, for binary codes. – Recommended reading: "Reflections on Trusting Trust" by Ken Thompson)

Edit: At best you're winning a year's worth of Moore's law in performance and are paying for it in terms of page load. On the down-side, binary code is just as bad an idea as e-voting: You're putting the interests of your users against literally trillions of dollars of interest in exploiting the system (yes, the leverage would be enormous) – and once the worst has come true, it's already too late to revert.


Probably solve it the way we do with the rest of shared libs and open computing:

Require the source. Download compiled code from trusted sources.

Download a NuGet package: DLLs. Download an apt package: binaries and .so's. This won't blow up the web.


Not your everyday real-life story. Client: "Integrate this (see attachment)". You, "No, they have first to hand over the source code in order to allow me auditing their software." Client: (gone).

As for the real-life example given above: The third party is billing the client only a few bucks, since the real business is in profile building and exchanging profiles. So, are they expected to hand out the source code for 5 bucks or so? Probably not. Who is to lose? Everyone visiting the website.


Why are you under the impression there won't be disassemblers and decompilers for WebAssembly?

It's not like a bunch of minified JS is going to be "quick" to go through.


Even if there were a suitable disassembler, this would just amount to an exponential increase in the cost of auditing any software. (We're not speaking of minutes here anymore, but rather of months or even years – who will be willing to pay for it?) BTW, with minified JS, you just have to rename variables (while anything adequate to system calls has to be in plain text somewhere by definition); with WebAssembly, this becomes an entirely different story. – No comparison.


This is basically the same case for traditional binaries. I'm no RE expert, but when I've done such work it consists of "renaming variables" including functions and looking for calls to imported functions. Intentionally obfuscated code is harder.

But nothing stops JS from loading a bunch of encrypted strings, self-modifying at runtime, using eval+substring (at various offsets) on loaded and renamed functions to make it hard to know if there are calls to other functions, let alone what they are.

It can still be done, and obfuscated JS is probably easier than obfuscated x86 but saying it makes an audit only take minutes means it's not really being obfuscated.


>But nothing stops JS from loading a bunch of encrypted strings, self-modifying at runtime, using eval+substring (at various offsets) on loaded and renamed functions to make it hard to know if there are calls to other functions, let alone what they are.

There is a solution to that. Control the platforms. You have like what, 4? major vendors of browsers. Convince them to make eval disabled by default and you warp the entire usable market. The percentage of people who would bother to go hunting the setting to turn it on would be minuscule.

Use the power of the default to affect the whole space.


I really don't get the down voting: While we've come globally to the conclusion that we require signed software, curated app stores and kill switches for traditional applications, because sh#t happens, we're going to distribute binary software in the browser without any such limitations? With the average user not even knowing that she is running some software from untrusted sources? (Yes, I know, it will be sandboxed, and there will never be a zero-day anymore in any browser ...)


We will eventually come to terms with the fact that Flash was 15 years ahead of everything else on multiple fronts, and we ought to have just chastised the terrible developers doing awful and horrendous things with pretty technically sound software.


HTML5 _still_ isn't able to deliver the quality interactive experiences (Games) without applying a lot more effort. Even "simple" things like cross-browser low-latency sound effects are still difficult.

Flash presents a single platform with a single vendor that can innovate as quickly as they like. The web platform is inevitably cumbersome and slow in comparison -- over a decade later they're still playing catch-up.

I'm looking forward to WebGL ads that eat battery life with excessive shaders.


As a strong supporter of open source, doing a comparative analysis of HTML5 and Flash, I have to admit that private enterprise was able to kick the ball forward so much faster here ...

I remember watching a vector animation version of a "tell-tale heart" in flash in 1998. On a 28.8 connection it played smoothly full-screen on a 120mhz pentium with 16mb of ram. I remember clicking the play button and having it just miraculously starting to play without any wait, streaming down and uncompressing in real time. I was floored by it. 12 years later I was invited to watch a spiderman animation demo using HTML5 ... there was a 30 second load time, the framerate was probably 1fps, the audio didn't work, the content didn't render properly ...

It's like how Microsoft was able to pull off nearly everything we could do today by shoe-horning their activeX technology into ie3. Just load a bunch of cab files and drop them into the page like OLE components and bam, you got just about everything. The interactivity could bootstrap - that is, not need any extra plugin and you could engage with the other content on the site using vbscript or javascript interfaces in a two-way manner. It was pretty nice.

Netscape retorted with their JVM integration but it just wasn't the same ...


If you want to go fast, you go alone. If you want to go far, you go together.


For all the problems Flash had, one problem it didn't have was a lack of mature tooling (compared to its alternatives at any given point in time). A lack of tooling makes making HTML 5 games very difficult, the abstraction layers that existed for Flash don't for HTML 5, at least not to the same level. Even if HTML 5 were capable of precisely the same level of performance, or better, as Flash today it would still be some time before the experience of a content creator reached parity with what already existed for Flash.

At no point has the debate between Flash and HTML 5 content ever included "Content creators will have an easier time with HTML 5". It's an ecosystem that has to develop over time, and HTML 5 the content platform has only approached feature parity with Flash in the past couple of years.


I'm glad you pointed this out. My roommate made his career as a Flash developer, and his primary skills were animation and graphic design. He spent hours doing that and got me to help out with the few numerical/mathematical tasks he needed filled in.

I was a dork and overreacted to his use of 100% global variables, etc. He gracefully tried to use Flash's OO to appease me and improve himself, but really he got quite far in life doing graphics and even basic database access in Flash.

I think the tooling might be the point where HTML5/JS (h5js?) becomes divergent. Think about it: making a Flash-like editor for HTML5 requires an investment in time, and different groups/companies will want to do it differently... Then, they'll start to add tiny features to push their implementation ahead, and poof~ towerofbabel.jpg.


I place very strong bets that Adobe will create a dominant tool here. It will be nearly identical to Flash but emit all the newfangled HTML5/asm.js stuff instead - in some nice tight drop-in way ... like

  <div id="container">
    <script src="//ad.be/UNIQID"></script>
  </div>

Where ad.be is a "cloud" service you pay $xx/month to "host" your "app" for you. Essentially you make the thing, it saves to the cloud, generates your uniqid, and you put it in a container. They keep the source file and can continually regenerate the js as their "engine" improves and browser tech moves forward. It's a future-proof plugin-less flash with a large existing user and customer base.

If that's the flow, Adobe might as well just start minting money.


> I'm looking forward to WebGL ads that eat battery life with excessive shaders.

As opposed to flash ads that eat battery life and bring the browser to its knees?


I think that was his point.


How would that have fixed Flash’s horrible CPU cost, chronic security vulnerabilities, awful touch support, or constant crashing, even in the hands of competent developers?


Like HTML5 is better at any of those? I'm pretty sure Flash crashed an order of magnitude less often than WebGL does.


Performance-wise flash wasn't that bad on Windows. Probably because Adobe put in a decade's worth more engineering into Windows than Mac. And of course they were caught off guard by the iPhone, like the rest of the industry.



If only Flash weren't proprietary…


Web assembly doesn't add new APIs or capabilities to the web platform. It only makes code run faster and makes porting C++ code easier. What about that is hateful or abusive?


That now we can’t read the source of webpages at all anymore?

The whole "anyone can look at it, learn from it" part is gone?

We’re steering towards more proprietary code.

Say, for example, if I want to run Google’s "Star Wars" easter egg in Firefox. With JS, I could grep through the de-uglified and de-obfuscated code quickly, and find that its useragent detection would work if I’d just append "AppleWebKit Chrome/45.0.0.0." to the UserAgent. With WebAssembly, I’d have to spend far more work.


You can't read the obfuscated code directly. You have to use tools to deobfuscate it. With Web Assembly it will be the same. There will be tools to help you read the code. You may proclaim that they won't work as well but I think it's premature to say that. Web Assembly is not machine code.


Web assembly is very generic byte code.

I fear I’d end up spending ages in IDA breaking a DRM scheme implemented in WebAssembly, just like I did with NaCl.

That’s not neat.


Sorry, my intent was not to imply that web assembly was hateful or abusive but that the code I might write using it falls into two categories

1. Performance-critical code
2. Sneaky stuff I really don't want the user to be able to read.

Category two seems like the sort of thing that I would absolutely want to be able to write binary code that executes without user interaction.

It seems like right now the focus is quite rightly on #1, but that #2 seems like it will inevitably become an issue.


> 2. Sneaky stuff I really don't want the user to be able to read.

The user is already going to have a hard time reading the unuglified JS. If they are looking for "phoning home" they have to search for WebSocket sends and ajax calls. Both of those have well-defined APIs that will be just as easy to spot in disassembled wasm as they are in unuglified JS. I bet they'll be even easier to spot.
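The point being that the escape hatches to the network are a small, fixed set of API names, so a crude audit can grep for them even in mangled code (a hypothetical sketch):

```javascript
// Even aggressively minified code must go through the platform's named
// entry points to reach the network, so an audit can grep for those names:
const NETWORK_APIS = ['XMLHttpRequest', 'fetch', 'WebSocket', 'sendBeacon'];

function suspiciousCalls(source) {
  return NETWORK_APIS.filter((api) => source.includes(api));
}

// Hypothetical minified tracker: identifiers are mangled, API names are not.
const minified = 'function t(a){var b=new XMLHttpRequest;b.open("POST","//x.io");b.send(a)}';
console.log(suspiciousCalls(minified));  // ['XMLHttpRequest']
```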


You're missing category 3: code that runs in more than a browser. If I have code written in Fortran that already solves my problem, why should I have to rewrite it? If I'm more comfortable in C#, why should I be forced to use JavaScript for writing front-end code? WebAssembly fixes the mistaken assumption that the web must be a monoculture, and allows people to use the best language for the job.


3. Any other code written in a language better than JS.


Many of the worst parts of flash had to do with (IMO) the single-source nature, that is to say Adobe. Flash is using all your RAM? Flash is slow? Flash is hard to develop good UI for and integrates poorly in browsers? Too bad, it's Adobe's runtime & plugin, or nothing.


WebAssembly is a way to finally get rid of that overgrown scripting abomination that people started to take for a real language. And arguably it is the only way to get rid of that thing, any other will simply create another insufferable abomination.


The big difference is that WebAssembly can only do what Web APIs allows it to do. You'll have to do the same AJAX calls, the same DOM manipulations as with Javascript. There is no plugin here.

> I always feel like the most obvious use for it is to start writing truly hateful and abusive code.

Sure, but it will also allow developers to have a better experience by letting them choose the language they want, rather than being forced to use JavaScript. Choice is good; that's something you cannot deny.


Do you like any other visions for the future of the web? Brendan Eich's startup hasn't shared much about their plans, http://www.cnet.com/news/mystery-startup-from-ex-mozilla-ceo...


>Web assembly always makes me a little sad. It feels like we are going back to flash only it won't be bad this time, I promise, no really.

Only if you miss the obvious and blatant technical and license differences.


This seems like the modern trend, and I like it. I'm going to compare this to the recent developments with OpenGL and Vulkan. With OpenGL, you ship textual source code for your shaders written in GLSL, and you have to hope that the compiler on your client's machine does the right thing! With Vulkan and SPIR-V, the compiler is taken out of the equation, and you can use whatever language you want to write shaders, validate them ahead of time, and ship the validated binary blobs to the client. Incidentally, I'm looking forward to WebGL 2. I really miss being able to use texture arrays, integers, and instancing.


Web browsers are turning into giant, poorly designed operating systems. My current operating system can already run binaries, this is reinventing the wheel in a massively over engineered way.


Your current (desktop) operating system allows these binaries to run with your full user privileges. Any software you download and run has complete access to all of the data from your user account, no matter which software created it. Software can interfere with other software (spyware), can affect anything on the account (malware), and can even harm the system itself (e.g., by consuming resources). Worst of all, because there's no isolation you can't get rid of bad software: once you run software from the internet on your PC, you are boned. There is no way you can get rid of something that doesn't want to be gotten rid of since it can literally rewrite other executables to be itself.

Current (desktop) operating systems were not designed for a world where you routinely run code from someone on the other side of the planet who you have no relationship to, and don't trust, so you can see cat pics or read a forum.

Browsers have a lot of problems, but their security model and ephemeral install model are inspired designs, which directly enable the safety of the modern internet.

Having to fall back to classic desktop apps for real speed or power is a terrible thing for end-user security. Either browsers need to get more powerful, or desktop OSes need to take on a browser-like security model.


The process model is a sandbox. Every process runs as if alone, with seemingly continuous processor time and memory addresses starting at zero. The ailments you described are all system calls, special access granted by the kernel.

So the process model is not fundamentally different than the browser model, but WebAssembly enjoys two advantages:

1. The browser security model sagely segmented privileges by origin rather than user.

2. Like bytecode, WebAssembly AST does not target a specific processor.


Totally agree. The process model is actually a better sandbox than, e.g., Firefox per-origin one (because it sandboxes CPU time and memory as well). But the shape of the sandbox is incorrect for the modern era.


It seems like the real solution is to have proper sandboxing in the OS's, though it would take much more coordinated effort to accomplish.

I see no reason why each domain couldn't have a chroot for example, the browser doesn't need to implement those things.


> Your current (desktop) operating system allows these binaries to run with your full user privileges.

Actually it can sandbox them already.


But doesn't, by default. And that's crucial. On the web I can just run any random program from anyone and there's a very strict limit on what it can do to me and to other software.


We should have had a standard like WebAssembly from the beginning. The lack of it is the reason for the outrageous explosion of features in web browsers. At first html made sense: it was ideal for quick transfer and rendering of documents. But today that's not enough. So we keep tacking on layers atop already fat abstractions. And all this fat is trying to support a moving target. At first it was about rendering text, but then it was animations, and then videos, and 2D games, and advertisements, and now 3D games and full fledged apps. Notice that operating systems don't play this game trying to build an abstraction for every possible use of a computer because it's unwinnable. And now, some 25 years since its inception, the web is learning the same lesson.

Web browsers have been like operating systems all along, because they execute programs, albeit with different performance and safety characteristics. That the two are converging upon the same solution to the problem of hosting apps should be reassuring, not concerning.


There was. It's Java. I remember when it was first released and it promised the ability to "write once, run anywhere" but delivered via the Web to run as web applets.

And before that in the 80s, there was UCSD Pascal. I know it was available for the Apple ][ (used it in high school) and the IBM PC (one of three operating systems available when IBM launched the IBM PC in August of 1981) and probably a few other platforms I'm blanking on. A defined VM and "executables" could run on any platform running UCSD Pascal.

And even before that, IBM pioneered VMs for their own hardware, which is probably what inspired UCSD Pascal in the first place.


Web applets are terrible because

  - The JVM takes forever to spin up  
  - The JVM tries to do too much with tons of class libraries  
  - The JVM is insecure  
  - The JVM is proprietary. While there are open source
    implementations, it is still tethered to Sun and now
    Oracle. They call the shots on the features and have
    sued both Google and Microsoft for implementing their
    own versions.
Similar arguments can be made against Flash.

We shouldn't expect WebAssembly to have the same pitfalls, since

  - WebAssembly does not take forever to spin up  
  - WebAssembly doesn't try to do too much. There is no huge
    standard library. For now it doesn't even include a GC.  
  - WebAssembly isn't insecure. Why would it be? I assume
    applet exploits are a product of the large standard 
    library (more attack vectors) and privilege escalation
    (certain exploits let you break out of its security
    settings to gain control). All of this seems like it's
    because web applets are monkeypatched on top of the 
    existing JVM.  
  - WebAssembly isn't proprietary.


And I don't understand why. Take the two most popular mobile platforms, iOS and Android: people there routinely download and install new applications and typically never interact with Facebook, Twitter, Gmail or Instagram via their browsers. Why should the situation be different on the desktop? I feel that the efforts should not be going into making the browser into an OS that can run general-purpose software, but rather getting a packaging system that is cross-platform and easy for users to use. My own preference would be something based off of Nix so that you can avoid many problems related to library versions and whatnot, but anything where a user could be pretty much guaranteed that if he clicks "install", he'll be able to use his application in the next couple of minutes.


> Why should the situation be different on the desktop? I feel that the efforts should not be going into making the browser into an OS that can run general-purpose software, but rather getting a packaging system that is cross-platform and easy for users to use.

Cross platform is a red herring. The iOS and Android app stores are not cross platform. Ease of use is also increasingly a red herring. The Windows and Mac app stores have been around for a long time and are quite easy to use. Yet they have not ushered in a shift away from Web apps on the desktop.

I think we should be looking at why Web apps have been successful on the desktop rather than pretending they have no advantages.


> I think we should be looking at why Web apps have been successful on the desktop rather than pretending they have no advantages.

I think there is a perception of "try before you buy" with web apps that is appealing. Even when apps/programs are free, you feel like you are giving something away by installing them.


It's like the old "How do you get an untested drug on the market? Pretend it's a dietary supplement". How do you get corporates and users to install a remote application runtime that lets them run programs from the internet? Pretend it's a document viewer.


I think you are right, this is a high technical debt solution for letting lay persons install software more easily. App stores already did a pretty good job at this anyway, and have the added benefit of curation.

The number of layers in our software stacks grows faster than Moore's law can handle.


> App stores already did a pretty good job at this anyway

Then why isn't the Mac App Store (to name an example) a runaway success?


I kind of answered this with a comment above that mentions "try before you buy". Of course I am just guessing.


Mobile app stores are junkyards with terrible discoverability and they have a captive audience.


> people there routinely download and install new applications and typically never interact with Facebook, Twitter, Gmail or Instagram via their browsers. Why should the situation be different on the desktop?

Er, because that would be really silly.

I like reading HN from time to time. I would never install an app, because I don't use it frequently enough. I definitely would never go through the pain of installing a HN app every time I wanted to read HN. I really doubt I'm alone or even abnormal in that regard.

That's the beauty of a browser: I can be reading HN in under a second when I want to, with no cluttering of my desktop just so I can read HN from time to time.


I don't imagine HN requires WebAssembly or JavaScript to work, so it isn't a good example. But your point is right.

I think maybe application sandboxing is an OS job, and the browser should do the caching and invoking of the operating system sandbox.


I'll go a step farther: I typically don't install apps on my phones. Why? Because most of them bundle spyware and ask for permissions they should never need. I uninstalled Facebook over a year ago, as it was the biggest battery user. I get annoyed at websites that don't work on my phone, and more so at sites that try to get me to install an app when there's no advantage to the standalone app.


> Why should the situation be different on the desktop?

Because users want it to be. Whenever there isn't a significant performance hit people will always choose the browser solution. The only reason people download apps is because of performance and data limitations. Take those away and people will use the web based version.


> people there routinely download and install new applications

You should look at the data on that.


In the recent press tour about Swift, Apple seems to be really gung-ho about people using Swift everywhere.

Since Swift is built on LLVM and there's direct LLVM support for WebAssembly, I wonder if Apple will get behind WebAssembly so they can get Swift in the browser.


If I'm not mistaken, WebKit is implementing experimental WebAssembly support. It appears to be the only implementation being worked on upstream out of all the major browser engines right now.


Is there (or are there any plans to add) a WebAssembly -> asm.js compiler, so that I can write some code by hand in WebAssembly and still get it to run fast in old browsers? Or are there features of WebAssembly that would be impossible to add in asm.js?

The reason I ask is that asm.js is really painful and cumbersome to write by hand and wasm seems substantially nicer, but I only have small bits of numerical hot loops which I want to use wasm/asm.js for, and I have no desire to bring a bunch of code written in C into my little project.
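To illustrate the pain: even a single hot loop, hand-written in asm.js, ends up looking something like the sketch below (the module and names are my own, just to show the general shape — all the `|0` and `+()` coercions are the type annotations asm.js requires):

```javascript
// A hand-written asm.js-style module computing a dot product over a shared heap.
function DotModule(stdlib, foreign, heap) {
  "use asm";
  var f64 = new stdlib.Float64Array(heap);
  function dot(aPtr, bPtr, n) {
    aPtr = aPtr | 0;
    bPtr = bPtr | 0;
    n = n | 0;
    var i = 0;
    var sum = 0.0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      // pointers are byte offsets; >> 3 converts them to Float64Array indices
      sum = sum + +f64[(aPtr + (i << 3)) >> 3] * +f64[(bPtr + (i << 3)) >> 3];
    }
    return +sum;
  }
  return { dot: dot };
}

// The caller sets up the heap and passes byte offsets instead of arrays:
var heap = new ArrayBuffer(0x10000);
var view = new Float64Array(heap);
view.set([1, 2, 3], 0);  // a at byte offset 0
view.set([4, 5, 6], 3);  // b at byte offset 24 (3 doubles in)
var mod = DotModule({ Float64Array: Float64Array }, {}, heap);
console.log(mod.dot(0, 24, 3)); // 1*4 + 2*5 + 3*6 = 32
```

Manual pointer arithmetic and coercions on every expression — not fun to maintain by hand.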


Yes there is. That's the wasm2asm project Alon mentioned in the post.


Whoops, I read right past that line. :-) Thanks, I’ll take a look.

For anyone interested, the code is in wasm2asm.h: https://github.com/WebAssembly/binaryen/blob/master/src/wasm...


I agree asm.js isn't fun to write by hand, but wasm is also primarily a compiler target. You might not necessarily find it easier to write (its text format isn't defined or even sketched out yet, so it's impossible to guess).

If you just want to compile a few small functions with hot loops, it might be easiest to write them in C, and use a new option in emscripten that makes it easy to get just the output from those functions (no libc, no runtime support, etc.), see

https://gist.github.com/kripken/910bfe8524bdaeb7df9a

and

https://gist.github.com/wycats/4845049dcf0f6571387a


Both of the proposed syntaxes I’ve seen for wasm text format (an s-expression syntax and a C-like syntax) seem pretty nice. The sexp format in particular seems like it would be a great target for simple bits of purpose-specific code generation (for code that doesn’t need a “compiler” per se).
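For reference, the s-expression sketches floating around the design repo look roughly like this (purely illustrative — as noted elsewhere in the thread, the text format was still in flux at the time, so the exact forms may differ):

```
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    (i32.add (get_local $a) (get_local $b)))
  (export "add" $add))
```

It's easy to see why this would be a pleasant target for simple code generators: emitting well-formed s-expressions is trivial compared to emitting valid asm.js.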


This sounds like an odd question, but I honestly need somebody to explain this to me...what is the motivation behind the modern trend to put everything on the web? Is there something you get by running your program from a browser that you don't get from downloading and running an elf or a text file, or is this entire trend based around appealing to users who don't actually know how to use their computers?


Some reasons:

1. Web code is portable. The Java dream of “write once, run anywhere” is alive on the web. All you need to run a web app is a device with a browser. No need for a specific CPU architecture or operating system.

2. Web code is accessible. Just click a link and BAM. No need to download anything, no need to install anything, no need to worry about where to put something you might not want later. The web is the lowest friction platform (for users/customers) yet created. Even better, it’s easy to access web data from anywhere in the world on any device: I can read my web email on my grandma’s iPad or on the library’s computer or on a 10-year-old backup laptop, without worrying about whether I’ll have the data I need.

3. Web code is mostly safe. Anything that runs in a webpage is theoretically sandboxed away from harming other webpages or user data stored locally, and web users have come to expect that clicking arbitrary links won’t harm their computers. Untrusted blobs of compiled C code are a completely different story.

4. Web products can very easily be kept up-to-date for all users. This is double-edged for customers, because often website feature changes make later versions more confusing or less effective than earlier versions (cf. most Google product changes from 2008–present). For developers though, it dramatically simplifies support, because every customer can be presumed to be running the latest software version.

The big question is, what do you mean by “everything”? I don’t think everything is being put on the web, or should be. For example, professional “content creation” software is not going to be put on the web anytime in the next few years, because the initial barrier to entry is small in comparison to the required time investment to learn and use the software, and because such applications need hardware access and fine-grained control over compute resources.


If you want someone to just try something, the willingness of users to click a hyperlink is _massively_ higher than the willingness of anyone to download an executable, or run a script from a command line, or run an installer. This has absolutely nothing to do with "knowing how to use a computer".

Personal anecdote: I like building programming environments- sandboxes for playing with unusual languages. My target audience is people who are interested in programming and generally people who are very computer literate.

I spent about 3 years working on a complete development toolchain for a fantasy game console- compilers, profilers, documentation, examples- the works. It was spread around, and has hundreds of stars on GitHub. Problem: you need a Java compiler on your system to install and work with the tools. Number of people who developed programs using these tools, aside from me: close to zero.

More recently, I built a browser-based IDE for another obscure game console. I made a complete toolchain, wrote docs, loads of examples, etc. This time, though, you could share your programs with a hyperlink, and there was no installation required. You could easily remix other people's programs from a public computer. The difference was huge. Dozens of other people wrote hundreds of programs using this system over a matter of months.

If you're making a project you want to share with other people, a web browser removes friction to a degree that cannot be overstated. Believe me, I _hate_ working with broken, incompatible, and terribly designed browser tech, but removing those barriers to entry is invaluable.


One way to describe a browser: it is a VM for running untrusted code. That's one thing that truly distinguishes browsers and the web from other platforms, and is a good characteristic to keep in mind.


You get a largely cross-platform, consistent UI that can easily talk to outside services without being configured. All of these things help make the application work consistently across the many devices that exist now (computers, smart phones, tablets, mp3 players, refrigerators[1], etc.) so you really can develop once and deploy everywhere. It also seriously helps the users that don't know how to use their refrigerator[2] so they can look up recipes and plan shopping trips.

[1] http://www.digitalafro.com/samsung-smart-fridge-serves-up-re... [2] https://productforums.google.com/forum/#!topic/calendar/Uhfp...


You want to reach as many people as you can, and nowadays people are pretty proficient with using a browser to go to a web page. If you can reduce the number of steps between you and your users, I'd say that's a good thing. Right now that can mean putting everything on the web, in the future, who knows.


I imagine the motivation is improving the platform that a whole industry relies upon.

Now, what you get with all this is software that runs on any system a capable browser is present. Like how you can open a video or audio on any system with a capable media player.


It's the current fad, but nothing new. Since the beginning of computing, the tide has swung back and forth between thin clients and fat clients. Currently fat clients are en vogue.


And here I was thinking (mobile) apps were the current fad.


Are they? The current trend is to have everything done server-side, thus making sure the users get minimum value possible out of software they use.


Exciting stuff. Just further solidifies in my mind that C and C++ are among the few languages that will run everywhere.


Does code written in WebAssembly have access to the DOM somehow? How will that work?


In the current proposal there is no way for WebAssembly to directly access the DOM (or any other "Web API" objects).

Basically you will need to drop back down into JavaScript and handle that there (basically how asm.js does it now).

There is a proposal to eventually integrate direct DOM access into WebAssembly, but that's for after they get it up and running.


If they wanted to do it right, then an HTTP request itself would serve a compiled HTML binary based on an AST. The binary version of HTML.


I would consider that the worst scenario. If each website was its own binary application it would undo the open source aspect of websites (even though that's only partly true today due to all the obscured javascript). But maybe that would actually be desirable for all companies which rely on advertising revenue and want to prevent any modification of their web content (such as removal of ads). It'll be interesting to see how the web develops in the future. WebAssembly definitely seems to give more control to the author.


Just because it's a binary AST doesn't mean it's closed.


If you are using the XHTML syntax, you could send it in a binary XML format such as Efficient XML Interchange (EXI). (Never seen anyone actually do this of course, since browsers don't support it; not sure if it is a good idea either - would it really be much faster than just say GZIP, sufficiently so to justify the extra complexity?)


In asm.js currently the way you'd access the DOM is through a JS glue layer that manages GC objects and makes DOM calls. The same applies here.
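The shape of that glue layer, very roughly (every name here is made up for illustration — this is not a real emscripten API): the compiled module can only call functions handed to it at instantiation, so all DOM access goes through a JS import table that maps integer handles to real objects.

```javascript
// Stand-in for a compiled asm.js/wasm module's entry point.
function instantiateModule(imports) {
  return {
    run: function () {
      // Compiled code can't hold DOM references; it passes handles to JS.
      imports.setText(0, "hello from compiled code");
    }
  };
}

// The glue layer owns the handle table. In a browser, nodes[0] would be a
// real DOM element; here it's a stub so the sketch is self-contained.
var nodes = { 0: { textContent: "" } };
var glue = {
  setText: function (handle, str) { nodes[handle].textContent = str; }
};

instantiateModule(glue).run();
console.log(nodes[0].textContent); // "hello from compiled code"
```

The glue also has to manage the lifetime of those handles, which is where the GC-integration proposals mentioned elsewhere in the thread come in.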


WebAssembly lets people write in C++, Ruby, Python, etc and for that code to work in the browser like Javascript does at the moment. Am I correct?


Mostly. The initial target is C++ and similar languages, and it will not have DOM access, but using suitable libraries (libc, SDL, etc., for example emscripten's) you can write a normal C++ program and have it run in the browser.

You can also run Ruby and Python in the browser by just compiling their C or C++ VMs. But that won't still work "like JavaScript" - their objects won't be native VM objects in the browser, it won't use the browser's GC, they won't be observable in the browser's debugger, etc.

So far all of that was already possible, and done, with asm.js.

In the future, it is a goal to work to do GC object integration, so that something like Ruby or Python could actually compile down to something with native VM objects, and that would also allow calling DOM APIs directly. (This will likely still require a compiled VM, though.)


It's worth pointing out that it would hardly be practical to expect users to download the code for the entire Ruby VM (and libraries) when they visit your web page.


Depends how you look at it.

If you're using a web page that is mostly just a CRUD app, sure, no-one will wait around for that, but if you're shipping a version of IPython that can run in the browser, I think people would be pretty happy about it as long as the blob can be cached and hopefully shared between sites since it's on a shared CDN.


I think you're right. While downloading a VM doesn't make sense for a small site, for many cases it could. There are also fairly compact VMs for some languages. And WebAssembly will make the VM download substantially smaller.


Caching is hard because we serve the same code from so many different servers. IPFS fixes that by referring to files by a hash of their content. Commonly used binaries will be cached on your browser, and every app will refer to them the same way.


Thanks for the explanation


Yes, but it is a very limited subset of the whole "web ecosystem".

In its first version, it will not be able to access any/most web APIs directly (so you won't be writing a web app in 100% C++ any time soon).

The goal is to allow "Computationally intensive" bits of code to be compiled, while leaving JS to act as the glue for it all.


Yes, that's correct.


What about opengl and hardware inputs for instance? What kind of standard libraries will be available ?


WebAssembly will (with a few small exceptions) use existing web APIs.


Does anybody know if there are plans for an API for garbage collection? The WASM spec as it currently exists seems to be only useful for non-GC languages, and it would be a shame if we ended up shipping a new GC implementation for every page that we load. Perhaps something that would allow compilers to tap into the native JS GC?


What would the GC scan? What do pointers look like? At the assembly layer, you have the flexibility to not have a C runtime (and therefore no C stack), and you might be doing fun things like having tagged pointers that the GC would have to know about.

In short, the WASM layer is IMO the wrong layer for GC. I think the closest-layered applicable solution is caching & pinning guarantees for common libraries, which may already be addressed by the same solutions for common JS libraries (use a common CDN & let the browser caching keep it pinned).


The GC would scan whatever the program told it to scan, by using its API, like the Boehm GC library: https://en.wikipedia.org/wiki/Boehm_garbage_collector

I think it only makes sense, seeing as interaction with the native JS VM will be inevitable for a long time.


Perhaps you should just use the Boehm GC library, then.


There are long term plans for a GC rooting API that will let you interact with the JS heap and the DOM, yes.


Could see languages/libraries with auto pointers and stuff getting more popular.


So does it just do what NaCl was already doing, or at least is the objective the same?

I'm more worried about more specific things like hardware access (GPU, mouse inputs, networking, windowing)

It seems wasm runs at native speeds and takes full advantage of optimization, but can it really be a one-size-fits-all solution? There must be some things wasm can't do. And so far, since JS does almost everything, I don't see the point of wasm if it can't do what other languages can.


The main difference is that NaCl used a plugin API (PPAPI). WebAssembly, like asm.js, can access JavaScript, and so it has indirect access to DOM APIs, with no new powers over the existing web.

The main point of wasm, from my perspective, is startup speed. wasm will allow much smaller downloads of large codebases, and much faster parsing (due to the binary format). For small programs this might not matter, but for big ones, it's a huge deal.


Hi Alon,

Thank you for sharing. I am a Computer Science Master's student and I would like to contribute to the development. The repo looks really full and I don't know where to start.


Which area were you interested to contribute to?

For Binaryen specifically, this bug could be a good starting point: https://github.com/WebAssembly/binaryen/issues/2

Other issues in the tracker there as well.

Bigger topics are to make progress on wasm2asm, and to start an implementation of the current binary format (link is in the design repo), which Binaryen needs to support.


OK. Thanks, I'll have a look.


So the current toolchain involves using emscripten to generate asm.js and then using binaryen to convert asm.js to WebAssembly. Unfortunately emscripten depends on a fork of LLVM (FastComp), with no plans for a proper LLVM asm.js backend.

Are there plans for a proper WebAssembly LLVM backend that does not depend on forking LLVM (like emscripten's does)?



WebAssembly's LLVM backend is currently under development in LLVM trunk by multiple people.


Is there any progress on making the compilers utilize the JS GC instead of including their own entire runtime?


This is largely blocked by the ecmascript committee's incredible slowness in introducing key features like Weak References, but it will happen eventually.


Does this mean anyone could write a Python-to-wasm converter and then run Python in the browser, something like an LLVM backend?


Someone could write a "PyWasm". You could then ship your Python code as a wasm blob, but you'd have to include all of the standard library parts which you use, which are probably significant. The library could be a separate resource, so clients can cache it separately.


It's more like CPython to wasm or http://pypyjs.org/ -> wasm


I'm very interested in compiling Go to WebAssembly. Based on what I read, it seems that so far you can primarily try it with C/C++ code.

If one were to build a Go -> WebAssembly compiler, what are good routes to take? I can see there's going to be multiple possibilities.


It will be a huge pain in the ass. Go's compiler toolchain is derived from the Plan 9 compilers, while the WebAssembly tooling is built around LLVM.


There is llgo[1] which is an implementation of go on top of the llvm toolchain.

But if I'm not mistaken, WebAssembly won't accept just any LLVM bitcode. It's similar to how emscripten will work with the bitcode that clang outputs, but not that of other LLVM-based compilers like Rust's or GHC.

[1]: http://llvm.org/svn/llvm-project/llgo/trunk/README.TXT


Cool, but why not just use, you know, Java bytecode? Existing toolchains, compilers, runtimes, and virtual machines could all be reused, I'm sure. Actually there are a hundred different great virtual machines that could be used... why yet another?


All the compiler backend/code generation is using LLVM, and the VMs are all pre-existing (JavaScript with some extensions), so there is a huge amount of existing code being used here.


I see WebAssembly this way:

WebAssembly is to JavaScript what WebGL is to Canvas


My experience is that the barrier to entry for JavaScript is not that it's a new language, but that you have to learn async thinking and are restricted to a single thread.

Does WebAssembly address either of those points?


Regarding threads, I don't think so. And even if they would address threads, then there's still the problem of a shared address space between threads to be solved, and the implementation of mutexes, etc.


I'm hoping for a day where I get two threads in my JS runtime. Now that would be nice...


I'm sorry, but I'm not following. What's the problem this is supposed to solve?


WebAssembly sounds interesting, you could use it to write little apps that embed into a page.

And call them "applets". Nobody's ever done that before, right? :trollface:


Finally! Maybe now someone can build a web app that enhances the reading and discovery of documents. Each browser could be a repository of text files, each with an address, so you can have words in the text pointing to another document's address.


I suspect too many web applications are going use WebAssembly to obscure their code and the way they work, thus making it impossible to learn by studying their code. As someone who learned programming mostly by looking at other people's code, I'm afraid the web will change in a way that would make it a lot harder to do so.


Being able to view source is a design goal of WebAssembly, according to [0].

If an organization doesn't want you reading their JS, there are already plenty of tools to make it nearly impossible as-is. Do you really learn anything from reading minified, obfuscated code? At some point you're just reverse engineering, which is obviously still possible with WebAssembly.

'Open source by default' is a problem to be solved at a cultural level, not a technical one.

[0] https://github.com/WebAssembly/design/blob/master/FAQ.md#wil...


Do you really read other people's minified/obfuscated js?


It was far less common to minify JS code when I started doing that, but yeah, even now I much rather have minified JS code than no code at all.


So decompile or disassemble WebAssembly?


Basically, you took it for granted that the web shipped source as its binary, unlike the rest of the world (native binaries are not the source).

You'll still be able to find and read the source of open source web projects, just like you can for open source non-web projects.

> 'Open source by default' is a problem to be solved at a cultural level, not a technical one.

Highly agree.


I'm not taking it for granted, I think it makes a better world. I'd rather my oven come with a book saying how it was assembled and how I should fix common failures it might have, and I'd rather a chair come with a little paper saying what kind of paints were used on it so I can repaint it when it gets old. I feel one of the things that made the web great is "View Source".


Agree. So previously "View Source" was enforced to be there due to technical limitations. In the future, it should continue to be there, but it will have to be as a result of cultural decisions.


There are plenty of open source projects to help you do that (re: github). Proprietary software's source code is not meant to be read unless the developers specifically want it to be. The web allowing it for JavaScript was a happy accident, that is basically gone now with the uglifiers.


Just because the source is viewable doesn't mean that you're allowed to do anything with it. Your fears are already true. Any IP you or someone else creates is automaticially under copyright unless you choose a proper opensource license but then the real source code is probably already available on github.


YES YES YES!!! OMG! THANK YOU <3 <3 <3 I've been waiting so long for this! Maybe my dream of running native Lua on the browser will come true? Will I already be able to run Lua's interpreter now? :D :D Gonna look deeper into this as soon as I have time, omg so excited <3


The Lua interpreter was one of the earliest test cases for the C++ to JS compiler.

https://kripken.github.io/lua.vm.js/lua.vm.js.html


Ohh!! I know this project! I thought it had nothing to do with WebAssembly, thank you for the heads up.



