Go binaries, being statically linked and embedding the whole Go runtime, tend to be quite big. I don't mind that on my server, where 5 or 10MB of disk is nothing. Asking every visitor to download 10MB before my webpage can run would be a problem, though.
Is this something we will have to deal with? Or will this be a lesser featured go implementation?
I've just tried it on a Go binary that was 2.4MB. After building it with `go build -ldflags=-s .`, it's 1.6MB. After packing it with upx, it's 480KB (and it still works, and still has panic stack traces).
When downloading a WASM file over HTTP, the server can gzip it transparently. It'd be interesting to compare the effects on the size with what upx does (which is more specialized than gzip).
Indeed, given that browsers already handle gunzipping resources, it's probably best to rely on that. My binary, stripped by the Go linker and gzipped, is 525KB (against 480KB using upx).
Also, can you use UPX on WebAssembly binaries? If so, will that trigger any widely used antivirus software?
Granted, UPX is specialized, so it might have a slight compression edge over gzip. But layering UPX over WebAssembly would require three passes over all the code before it's ready to start (1: the browser downloads and executes the UPX stub and compressed data. 2: UPX unpacks and reloads the uncompressed code. 3: the browser re-compiles and executes the unpacked code.) This would defeat browsers' attempts to make WebAssembly start up fast, like Firefox's streaming compiler, which starts compiling a module before it's even done downloading.
Flash bad, Flash compiled to wasm, good.
Essentially, from the developer's perspective, WebAssembly is like embedding native code as a scripting language in the browser.
Only primitive data types can be passed back and forth across the foreign function interface between the two.
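A sketch of what a numbers-only boundary implies. `hostCall` here is a hypothetical stand-in for a wasm import that can only receive and return numbers; compound data has to be flattened into primitives (or copied through linear memory) on each side:

```go
package main

import "fmt"

// hostCall stands in for a hypothetical wasm import: it can only
// receive and return numbers, never structs, slices, or strings.
func hostCall(x, y float64) float64 { return x + y }

type Point struct{ X, Y float64 }

// To "pass" a struct across such a boundary, the caller must flatten
// it into primitive fields and reassemble the result afterwards.
func addPoints(a, b Point) Point {
	return Point{
		X: hostCall(a.X, b.X),
		Y: hostCall(a.Y, b.Y),
	}
}

func main() {
	p := addPoints(Point{1, 2}, Point{3, 4})
	fmt.Println(p)
}
```

Anything richer than numbers (strings, slices, objects) needs an explicit marshalling layer on top of this, which is exactly the glue code wasm toolchains generate.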
About LLVM, there's a WIP prototype that doesn't quite work yet.
And as for GCC, gccgo lags behind Go's releases: GCC 7 only supports Go 1.8 (with caveats). It also had issues at some point (though that report is from 2014, so it may have been fixed since).
And, from a more pragmatic point of view, when talking about Go, the huge majority means Go compiled with the standard Go compiler. That's why you hear a lot of positive feedback about compile times, for instance.
It seems that running ReasonML/OCaml under Node.js with wasm (and in the browser) should be a great fit for OCaml modules?
With a huge performance cost and complicated toolchains.
Npm has issues, but it's still way better than the state of the art of dependency management in Go.
Left-pad was bad, but the problem was fixed on npm's side within a couple of hours. Go's equivalent isn't fixable: https://mobile.twitter.com/davecheney/status/855049071307792...
V8 can be embedded in C/C++/Go/etc applications. That allows you to use JS for scripting. This is quite different from compiling to Wasm, though.
> Currently WebAssembly has no threads, but they are on the roadmap. Most Go code can run fine on a single thread. The only drawback is that “sysmon” is not available, thus there is no preemption of goroutines.
How does this work in Go? Greatly simplified: at certain points in the code (notably function calls and channel operations), the compiler inserts a check that lets the runtime switch to another goroutine before execution continues into the call.
However, Go's concurrent, low-pause GC is not easy to match in efficiency. I'd be interested to hear about actual high-quality implementations of GCs on top of LLVM.
And then there's the problem of making it all work efficiently in WASM (which doesn't yet support advanced concepts needed by GCs such as memory barriers).
IMO, language designers and developers are better off looking at the GC solution top-down rather than bottom-up.
Every language designer would have to completely redesign their runtime system just to be able to run on this one backend (in addition to needing a new compiler target, linker, debugger, and so on). And that's hoping it's possible at all, as many languages have various ways of storing pointers to things in memory, including not storing one at all (i.e. pointer arithmetic).
Why would they do this? Why should they do this? You could simply make the existing runtime libraries run.
One answer could be that it would enable multiple languages to interface in a better way, and allow garbage to be collected across language boundaries.
Tech people seem to think that making code run twice as fast is a huge step forward. But in the grand scheme of things it's nothing.
And, as a reminder, there are often bugs in that VM. Meltdown was one of them, but there are tons of others.
Pure RISC CPUs were the outliers; coupling RISC ideas with microcode turned out to win out.
Even modern ARM CPUs use microcode.
So, I'm assuming you have something else in mind, and I'm wondering what it is.
Though I guess you could say it challenges stopgap measures like asm.js.
I wonder how it works. Does it use the regular call stack, or construct its own, and so on?
> Go’s garbage collection is fully supported. WebAssembly is planning to add its own garbage collection, but it is hard to imagine that it would yield a better performance than Go’s own GC which is specifically tailored to Go’s needs.
Presumably a runtime will be compiled into the wasm output, including Go's garbage collector.
Some work would be needed to keep the GC happy between interpreted and compiled functions, but it's no more complex than what JS JITs have to do. Go's GC at least already has the capability to move objects; that helps.
In reality, JITs are mostly used to infer the typing information that dynamic languages only discover at runtime. Go doesn't have this problem, and an AOT compiler can probably find stronger optimizations.