Unlike the current mainline Go toolchain's WebAssembly output (a minimum of roughly 2MB), the Wasm generated by TinyGo is practical in size, e.g. ~1KB for the toy examples.
This is all leading edge dev stuff too, so updates and improvements are happening pretty frequently. :)
> Skill: 64-bit addressing
As a WASM backend implementer in an environment w/ only 32-bit addresses, I hope people don't move to 64-bit addresses too soon. Are we really reaching the limits here already?
> Skill: Portability [...] A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.
Yes please, but developed outside of WebAssembly. This is needed for interoperability between languages. Even just strings would be nice. A stdlib interface to rule them all, so to speak. If it were kept modular, avoided a lot of bikeshedding, and had a full test suite, it could be of great benefit even outside of WASM.
That would be great. I’m normally a fan of multiple competing implementations, but this might be an area where a single common library implementation, developed in partnership by all browser vendors, would be beneficial. All the code would be in wasm so it would be portable by definition. And with everybody using exactly the same implementation there shouldn’t be any sneaky incompatibilities. With a single master repo it could be versioned and updated cleanly.
Just as long as it’s kept small (1MB sounds like a good goal) and there isn’t too much churn -- those are the real challenges.
It's because deployment is a harder problem than development (at least, it's harder for developers, because deployment is a social problem). The web is a pretty poor development target (though it keeps getting better), but it's a damn terrific deployment medium, and that's a winning tradeoff.
(Also these apps are not necessarily written in JS, it's a compile target too)
I'm still seeing laptops sold with 4GB which would be used really quickly.
It seems to me that GC tuning and memory models are very language-specific. A purely-functional language like Haskell or Clojure can make different assumptions about the memory regions, but also generates a lot more young-generation objects compared to OOP languages.
If DOM objects can hold on to Web Assembly objects--as for example happens in the case of event listeners implemented in wasm--and Web Assembly objects can hold on to DOM objects, then the only sensible solution to memory management is a GC that can trace both kinds of objects. All other solutions eventually lead to uncontrollable memory leaks (see IE6).
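The cycle problem is easy to demonstrate in miniature. Here's a toy Python sketch (the class names are made up; this models the situation rather than any real DOM/wasm binding): pure reference counting can never free a cross-referencing pair, while a tracing collector can.

```python
import gc

class DomNode:          # stands in for a DOM object
    pass

class WasmListener:     # stands in for a WebAssembly-side event listener
    pass

node = DomNode()
listener = WasmListener()
node.listener = listener    # DOM object holds on to the wasm object...
listener.target = node      # ...and the wasm object holds on to the DOM object

gc.collect()                # clear out unrelated garbage first
del node, listener          # now unreachable, but still a reference cycle
collected = gc.collect()    # a tracing collector finds and frees the cycle
```

CPython needs its tracing cycle collector for exactly this reason; two independent heaps with no shared tracer would leak such a pair forever, which is the IE6 failure mode.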
It could probably be done in a way that allowed multiple GC's to coexist and cooperate.
That's the whole reason they're adding a GC, how is it not an urgent problem?
We've been interfacing C with e.g. Python without needing a garbage collector that crosses language boundaries. So why would we need one now?
Because C doesn't need garbage collection, it uses 'manual' memory management. This scenario is exactly analogous to the WASM-JS world that exists now (WASM is manual, and JS is GC).
The goal here is to have a GC that's usable in WASM, and WASM is going to be interoperating a lot with JS for many applications, so it needs to cope with that. A better analogy in this case would be IronPython which is Python built on the .NET framework rather than C. In that case, the Python GC is the .NET GC, so that objects can be passed back and forth 'between worlds'.
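The C/Python case is worth making concrete. With ctypes (on a POSIX system), Python calls into C with no shared collector at all: Python's GC manages the Python objects, and the C side is purely manual, so nothing needs to trace across the boundary.

```python
import ctypes

# Load the already-linked C library (works on POSIX; on Windows
# you'd load msvcrt instead).
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# The bytes object is marshalled to a C string at the call boundary;
# no garbage collector crosses between the two worlds.
n = libc.strlen(b"hello")
```

This works precisely because C never holds a long-lived reference back into Python's heap; the moment both sides can retain references to each other's objects (as with JS and wasm), this simple model breaks down.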
Let's keep WASM simple. It's probably the best thing one could do in view of security.
Trying to shoehorn everything into one model isn't something that should have 'asm' in its name.
> In systems designed to support multiple programming languages, reification brings a different problem. All languages must deal with the complexity of reification; worse they must conform to the expectations of the reified generic type system of the "master language" (C# or Java, for example).
> Consider .Net, the poster child of generic reification. Originally, .Net was intended to be a multi-language system, but dynamic language support there has suffered, in no small part due to reification. Visual Basic was a huge success until .Net came along and made it conform to C#. And what Iron Ruby/Python programmer ever enjoyed being forced to feed type arguments (whatever those might be) into a collection they are creating?
Not saying CLR's approach is bad; just that it doesn't seem like an example of one-size-fits-all as is sometimes claimed (and the same goes for the JVM, etc. too)
    from System.Collections.Generic import List
    lst = List[str]()
(I would also claim that the quoted snippet vastly overstates the problem, while also misrepresenting it - it has everything to do with the static/dynamic type system mismatch, and practically nothing to do with reification. In practice, if you want to interop between C# and Python, you'd just use "dynamic" in C#, and access native Python collections directly - that's exactly the scenario it's intended for. The only time you'd need to muck around with generics from the Python end is when you're calling into a library written in C# - but that's just FFI, no different in principle than having to specify types when you're using ctypes to invoke into C.)
On mobile this seems to depend on Apple and/or Google deciding to support WebAssembly for native apps, which seems unlikely.
Developers need laptops to run their development environment in. I'm wondering if this might just be Linux tools running in a container on your laptop. Chromebooks seem to be moving in that direction.
The revival of something like Sandstorm might be interesting, but that's pretty speculative.
Otherwise you are severely crippling yourself for no real gain. You're going to lag hardware features by years if not decades, you're not going to get arch-specific optimizations as readily, and you're going to have huge startup costs. You're also stuck with sandbox restrictions that you've self-imposed, making it harder to do things that would otherwise have been trivial, or forcing them to be reinvented.
You could maybe squint and claim you need a portable IR, but given there's only really 2 ISAs in widespread use (x86 & ARM) does that _really_ matter? Similarly you could claim cross-OS support, but again as there's only _really_ 2 OS's for each target (Windows/Linux for server, Android/iOS for mobile) does that _really_ matter? Critically does it matter to such a degree that you'd rather severely cripple your own app to get it instead of just compiling it 2, 3, or 4 times?
Once you get a _good_ solution to embedding untrusted code, you might find it turns out to be pretty useful.
No, process isolation does not work for us, because it does not scale well enough.
The point of Cloudflare Workers is to distribute your code across Cloudflare's entire edge network -- 154 locations and growing -- so that you can respond to requests at the closest location to your end user. We want to put every customer's code in every location, rather than forcing you to choose a handful. Meanwhile, obviously, not every one of our locations is a mega-datacenter.
This means we need a way to support extremely large numbers of tenants per machine, with relatively low traffic per tenant (because each tenant's traffic is spread out over the world). We need this to be efficient.
This means we need low memory overhead (to support many tenants per machine), very fast cold-start time (since cold starts are a lot more common in a decentralized scenario), and low overhead to switch between workers (because we're mixing traffic from all our customers everywhere).
Using embedded V8 isolates within a single process gives us 10x-1000x better performance on all these metrics than if we gave a whole process to each customer.
Relatedly, here's a talk I gave at Heavybit about the neat things you can do when you have fine-grained server compute: https://www.youtube.com/watch?v=YZSvJNBZsxg
We've spent a lot of time thinking about and building mitigations for speculative side channel attacks. For example, early on in the project -- before anyone even knew about Spectre -- we made the decision that `Date.now()` would not advance during code execution, only when waiting for I/O. So, a tight loop that calls `Date.now()` repeatedly will keep getting the same value returned. We did this to mitigate timing side channels -- again, even though we didn't know about Spectre yet at the time.
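A toy model of that mitigation (hypothetical code, not Cloudflare's implementation): the visible clock is a value that only moves forward at I/O events, so a tight loop can't measure its own execution time.

```python
class FrozenClock:
    """Time only advances when I/O happens, never during computation."""
    def __init__(self):
        self._now = 0

    def now(self):
        return self._now        # same value no matter how long we've computed

    def on_io(self, new_time):
        self._now = new_time    # the clock jumps forward only at I/O events

clock = FrozenClock()
samples = [clock.now() for _ in range(100_000)]  # a "tight loop"
clock.on_io(42)                 # e.g. a response arrived from the network
after_io = clock.now()
```

Every read inside the loop returns the same timestamp, so the attacker's high-resolution timer disappears while the clock stays accurate enough for application logic.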
Chrome has indeed stated that they believe process isolation is the only mitigation that will work for them. However, this statement is rather specific to the browser environment. The DOM API is gigantic, and it contains many different sources of non-determinism, including several explicit timers as well as concurrent operations (e.g. layout, rendering, etc.). Side channel attacks are necessarily dependent on non-determinism; a fully-deterministic environment essentially by definition has no covert side channels. But, there's no way Chrome can get there.
The Cloudflare Workers environment is very different. The only kinds of I/O available to a worker are HTTP in, HTTP out, `Date.now()`, and `crypto.getRandomValues()`. Everything else is perfectly deterministic.
So, for us, the problem is much narrower. We need to make sure those four inputs cannot effectively be leveraged into a side channel attack. This is still by no means trivial, but unlike in the browser, it's feasible. `getRandomValues()` is not useful to an attacker because it is completely non-deterministic. `Date.now()` we've already locked down as mentioned above. HTTP in/out can potentially be leveraged to provide external timers -- but the network is extremely noisy. A practical attack would require a lot of time in order to average out the noise -- enough time that we can do a bunch of higher-level things to detect and disrupt possible attacks. It helps that Workers are stateless, so we can reset a worker at any time and move it around, which makes attacks harder.
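The "noisy network" point can be quantified with a quick simulation (the numbers here are made up for illustration): the error of an averaged measurement shrinks as 1/sqrt(n), so extracting a sub-millisecond signal from tens of milliseconds of jitter takes on the order of tens of thousands of probes, and that volume of probing is exactly what higher-level detection can catch.

```python
import random
import statistics

random.seed(0)                  # deterministic, for the sake of the example
secret_delay = 0.001            # hypothetical 1 ms timing difference to leak
network_noise = 0.05            # ~50 ms of network jitter, dwarfing the signal

def one_probe():
    return secret_delay + random.gauss(0, network_noise)

single = one_probe()            # one probe is useless: noise dominates
# Averaging n probes reduces the noise by sqrt(n): 10,000 probes bring
# ~50 ms of jitter down to ~0.5 ms of residual error.
mean_10k = statistics.fmean(one_probe() for _ in range(10_000))
```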
NetSpectre demonstrated that even physical network separation does not necessarily protect you against Spectre attacks. There's simply no such thing as a system that's perfectly secure against Spectre, process isolation or not. All we can do -- aside from going full BSG and giving up on networks altogether -- is make attacks harder to the point of infeasibility. Luckily, we have lots of tools in our toolbox for making Spectre attacks infeasible in the case of Cloudflare Workers.
The Battlestar Galactica was the only ship in the fleet that hadn't networked its computers, because the captain was paranoid. When the Cylons (AI) attacked, they instantly hacked all the other ships, but the Galactica stayed under human control and got away.
It's fiction, but I'm honestly really impressed with this bit of writing. It's both an entirely plausible and almost realistic strategy, and it gives the writers an excuse for the crew members to interact rather than let the computer do everything. Whereas on Star Trek one wonders why they bother with a bridge crew -- the captain might as well be punching everything into a computer directly rather than inefficiently giving orders to humans.
I'm comforted by the fact that if there's a breakout in V8, Chrome is a much juicier target than we are -- and has an incredibly strong security team that will jump on the issue. I think we would be overall less secure using a lesser-known JS implementation.
1. No process isolation
2. Data from lots of users on any given node
Am I missing something?
* While process isolation makes Chrome somewhat less interesting than it used to be, keep in mind that process isolation is being rolled out as a defense to arbitrary-read attacks (like spectre), not arbitrary-code-execution attacks. Executed code can potentially talk to the rest of Chrome and perform actions that a web site is not intended to be allowed to. (At least in the past, this included the ability to read all the user's cookies, though it's possible they've improved on that, I don't know.)
* Cloudflare has potentially multiple tenants in a node, but the attacker has no control over who they are. There's no particularly good way to target a specific end user nor a specific web site.
* V8 vulnerabilities are likely to be memory safety issues, which are difficult to leverage into an attack without an understanding of the process's memory space, which is hard to get without having access to the source code or a compiled binary. The attacker would be flying blind and would almost certainly just segfault or be caught by our layer 2 sandbox (which blocks almost all syscalls), which incidentally would immediately alert us to their activity. (I don't particularly like this as a defense, since I like open source -- but frankly, it is a pretty big barrier.)
* The attacker would be burning their zero-day by uploading it to Cloudflare. We block eval() and the like, such that the only way to run code on our edge is by uploading it through our deployment APIs, which keep a copy.
* Last I checked, Chrome still does not use site isolation on mobile, because the overhead is too high.
If anything, the industry is increasingly adopting bytecode as a distribution format, kind of returning to the early '60s/'70s designs with microcoded CPUs, just with another approach to the final execution.
The real benefit is having a portable bytecode that 1) can accommodate any language out there without constraining its object and memory model, and 2) is guaranteed to be compiled to native code by an optimizing compiler when deployed. Add some basic APIs, and we could have a situation that has heretofore been enjoyed pretty much only by x86 - the ability to compile code, and run it 30 years later as is.
But C, C++, Rust, Go, D, etc... all support compilation to those platforms. This is a really common thing to do.
Now, I don't think it'll "take over" the way C has, but I also wouldn't be surprised to see it edging out other VMs in many contexts, like the Lua or Python VMs, which may end up porting to WebAssembly instead of trying to justify another dependency for your app.
I hope we'll finally get to the point where I can choose what language to use for a specific task based on the merits of the language instead of making the larger argument that it's worth getting it to work in our existing app.
I've been watching efforts to port Lua to WebAssembly (the official VM looks simple enough, but luajit won't be easy) because I want to have an app that shares code with the browser, and it would be awesome to play with WebAssembly as the plugin system of choice on the backend.
I'm also super excited for nebulet, which is a micro-kernel experiment using WebAssembly in Ring 0, which is pretty neat.
I'm really excited to see where WebAssembly can go. I know many of these projects will peter out, but it's exciting nonetheless.
Absolutely. On the Lua front, someone made a PoC to add Lua as a language for Iodide using Fengari, which (while I haven't used it yet) definitely looks interesting. Hadn't heard of Nebulet before but will definitely be following it now. Cheers!
People are already looking into WASM VMs for blockchains https://www.parity.io/wasm-smart-contract-development/
Docker has won to a degree, though it still requires complex orchestration technologies like Kubernetes built on top of it, which are rapidly developing and not a particularly stable target.
Yes, this is where I see WASM really holding its own. Serverless and edge environments want to have code running in a persistent process that is completely managed by the environment. Right now, they frequently offer just JS, or a couple of language runtimes like Python as well; for serverless environments, you can generally load native modules if you want to, but it's awkward, while edge environments will only run JS, though now they're starting to offer WASM.
It's in places where you can run multiple tenants within a single process, without paying IPC and process-isolation overheads, that the overhead of WASM is most likely to be worth it.
> On mobile this seems to depend on Apple and/or Google deciding to support WebAssembly for native apps, which seems unlikely.
I don't think it would be too unlikely for Google to support WASM for native apps. For one, it would allow them to do architecture independent apps without ART, on Fuchsia, for example. Right now, I think Dart + Flutter compiles to ARM code, but if they want to target a wider range of devices then platform independence could be important.
Not strictly true. I spent a bit of time using Cloud 9 and found it to be quite a useable experience. It’s certainly very handy to be able to access a dev environment from web browser wherever you happen to be.
I hear a lot of developers going on and on about laptops these days but I can't figure out why. Is working out of a coffee shop the norm now or something?
Well, unless you add Kubernetes on top of that... then yeah, it gets... complicated.
I'm not a UXer, never was, and I moved into management long before programs even got pretty, so I've never even had to pick it up.
Yet I can make a pretty web application with Vue with minimal effort. Add stuff like GraphQL, and wasm looks ridiculously aged before it's even born.
Of course that kind of defeatist attitude is silly, and I’m looking forward to see where wasm goes in the coming years, because if people just gave up, I wouldn’t have been able to make a pretty app in vue either.
Also, JS isn't the best tool for all programs, and programmers would like more of a choice when it comes to things that are more than just your standard web app.
Why not? Developing pretty apps in Vue (or anything in JS-land) is a horrible experience, so why shouldn't we use WASM to port some better environment to the web?
Presenting WASM as a threat will lead to it facing stiff resistance from entrenched interests (amazing that people in their 20s and 30s are entrenched interests, but that’s modern front-end development).
A more gradual, evolutionary, low-hype approach seems prudent, as the platform matures and until/if someone makes the “killer app”.
We used to work first with WPF, then WebForms, and later MVC with Razor, and we very quickly bought Telerik because the standard UI components in .Net are horrible.
Aside from that, I've worked with Java; I can't recall what we used before FX, but both were terrible.
In the JS CSS environment I can literally make an interface that doesn’t look like ass by simply using the standard setup in a front end with very few lines of code.
I guess a thing like Blazor.net might make wasm competitive, but unlike JS, that requires Microsoft not to drop it in a year, because they don't have the open-source movement JS does.
I understand why people dislike the package hell of modern JS, but so far, it's proven itself to be just as trustworthy as any curated library.
On the other hand you’re not tied down in JS. When Microsoft made entity they wanted people to use it, even though it’s fucking awful. In JS you can very easily replace a component if something better comes along. That’s a great strength.
GraphQL has been really instrumental for us, for instance. We operate apps in low-connectivity environments, so being able to transfer only what is needed is really great.
GraphQL isn't JS-only, but it takes very little time and effort to set up an Apollo, GraphQL and Vue app, and it's got very few issues. The .Net integration of GraphQL is still a 3rd-party library that doesn't really work that well and isn't easy to use.
That’s the thing with JS, it’s just really productive in the real world.
Blazor.net looks really productive, but it’s only productive when you don’t need it to do something Microsoft hasn’t thought of, and it’s the only WASM integration that I’ve seen that is even remotely competitive with JS.
The thing with JS is that it is a no-batteries, no-novelty, no-shortcut, not very productive language that manages to be both old-school and novel in all the bad ways. You can't just carelessly gather code and use it, because there are no safety boundaries; most code that is not at the frontend (yeah, like Apollo) has no consideration for security, so you can't simply expose it; the entire ecosystem is a house of cards; and the language itself is a pile of WTFs just waiting for you to discover another one.
Besides, the web stack is pretty much controlled by Google nowadays. Yes, they manage development resources better than Microsoft, but I would prefer working on a stable and free environment.
VS is great too, but its performance is really, really terrible when your documents-and-settings directory is on one network drive and your codebase is on another. Not exactly VS's fault, but it's an IDE for the enterprise that can't function with the most basic enterprise drive setup?
VSC, the electron app on the other hand doesn’t give two shits.
People usually tell me VSC is the one-off unicorn. But isn't Discord better than TeamSpeak and Ventrilo? Isn't Slack excellent?
I don’t think sunk cost is a negative when people are building better tools with it.
That said, even then, I'll take Qt over Electron any day of the week.
My reaction to using the canvas instead of the DOM is very, very, very negative by the way. Webpages with fake text that is actually pixels rendered to a canvas are currently mostly restricted to dystopian notions of either the end result of the ad-blocker arms race or else extreme anti-copying measures.
I'm certainly not using Xcode + Cocoa + the MVC abstraction for iOS/macOS apps because it's the superior way to build software. It has its own warts, like the effort it takes to swap out MVC with some other pattern, or how it comes with no async abstractions -- you'll just be writing callback code like JS developers had to before the Promise.
It's tempting to unmask "ugh, JS developers" as a bunch of idiots, but I think you'd only be kidding yourself if you think your pet environment is some global optimum.
Hell when we were still building interfaces in .Net we paid a subscription to use Telerik on the frontend because the standard .net stuff is absolutely terrible.
I never used Telerik, and we only bought ComponentOne once for a Windows Forms project, started before WPF even existed.
It's nuts that visual layout editors for the web have, if anything, regressed from the WebForms editor in Visual Studio 2008.
In any case you aren't obliged to use MVVM in WPF, it is just a best practice for easier unit testing and composability. Then again Web Components are only now starting to be a thing.
Some people will argue on this, but I'm actually in agreement.
That said, it is important to realize WHY the JS productivity is so important: We're iterating a LOT. Web is so young, web interfaces are so young, the problems we are trying to solve on web are ever changing, and the devices we use to interface with web are changing almost as fast. We have to iterate again and again to be decent at the problem we're trying to solve only to toss it aside in favor of the new problem.
Some of this is wasteful, some of this is just part of being part of a big technological change. (I'm sure TV, radio, fax, printers, copiers, etc all had similar iterations, albeit at not quite the same speeds). But regardless of the "if we should" aspect, it's important to note that the current JS needs are both (1) currently real - trying to adopt some philosophically "pure" tenet in the defiance of these needs will fail and (2) not necessarily permanent - We can extrapolate into the near future from this, but I'm hard pressed to say much about more than 2 years in the future.
> Add stuff like graphql
Funny, I thought about GraphQL when the article mentioned HTTP caches, since GraphQL _can't_ make use of them (most GraphQL implementations use non-GET requests for everything, so you have to rely on server-side or client-app caching, since you can't rely on the network or browser), and took it as a hit on GraphQL, not wasm.
That said, I'm interested to see where wasm goes - short term, we're in agreement that anyone expecting it to be the "Js-killer" is missing the mark, but long term I don't expect it to be dead-on-arrival, just used for not-what-was-expected.
To me the main benefits of wasm are that you can
* make web pages with languages besides JS
* do lower level things not possible with JS (low latency real time games?)
If you already know JS well and want to make a normal website, I don't think you necessarily need WASM, although it will enable those who don't know JS, or don't want to use JS, to make websites too.
So, it may not be completely intentional, but WASM does have roadmap items that may end up competing with JS. WASM starts to look more like a general purpose VM over time.
Stuff like UnrealEngine running in browser, low-latency audio processing, etc. Lots of potentially cool solutions where you need strong control over allocations and memory placement.
You don't need a jet engine to make paper airplanes. Gluing jet engines to paper airplanes would indeed be counterproductive.
> and wasm looks so ridiculously old aged before it’s even born.
Wasm is about achieving near native level performance. For libraries and such. Stuff that your pretty high level js frameworks might use.
I don't see any explicit mention of what I understand to be the killer feature: the ability to directly access the browser's WebAPIs, from Accelerometer to Document to MimeType to XPathExpression. Start with modifying the DOM, and go from there. Of course fast WASM/JS interop lowers the cost of the obvious workaround. And yes, there's discussion in the article of getting WASM to play nice with JS GC, which I understand is a prerequisite. But there should be explicit discussion of what the predicted path is for making WebAPIs available in WASM, what the sub-goals are, and how to measure progress.
And the first paragraph of the article is a joke, claiming that people thought the 2017 version was the final version. No, we're waiting for a relevant version (this sentence is an exaggeration, but it's more true than what TFA said).
See https://github.com/WebAssembly/gc/blob/master/proposals/gc/O.... Things like DOM access are specifically mentioned.
Near as I can tell, it serializes everything through the linear memory; this would have a huge impact on the performance of both ends, would it not? (Particularly if you wanted to do DOM manipulation with it.) I believe this is what the article means here:
> You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.
> There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.
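The dance the quoted passage describes can be sketched with a toy model (a Python illustration, not actual wasm or wasm-bindgen machinery): linear memory is just a flat byte buffer, so an object is serialized to bytes, copied in, and referenced by a (pointer, length) pair of plain numbers.

```python
import json

memory = bytearray(64 * 1024)   # stand-in for wasm linear memory

def write_object(obj, ptr):
    """Serialize an object into 'linear memory'; return only numbers."""
    data = json.dumps(obj).encode("utf-8")
    memory[ptr:ptr + len(data)] = data
    return ptr, len(data)       # the callee receives just (pointer, length)

def read_object(ptr, length):
    """The other side decodes the bytes back into an object."""
    return json.loads(memory[ptr:ptr + length].decode("utf-8"))

ptr, length = write_object({"x": 1, "y": 2}, 1024)
roundtripped = read_object(ptr, length)
```

Every crossing pays for a serialize, a copy, and a parse, which is why chatty interfaces like DOM manipulation are where this overhead hurts most.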
Look, the supported types in wasm-bindgen are somewhat restricted. The full list is here. Other than "atomic" types like numbers and pointers (I assume the opaque types are treated as pointers too), the interesting ones are str/String (basically &[u8] under the hood) and number slices. It's apparently not great if you have to copy these around, but since they are well-aligned, why can't you just pass the starting address and length? Again, I don't know enough about wasm or its current state to say if this is doable. Maybe it doesn't like "someone else's memory" at the moment.
(By the way, wasm-bindgen allows you to access pub fields of structs from JS, but they have to be Copy. That is to say, the opaque types are not completely opaque, but I'm not sure if the pub field access is achieved through implicit getter methods or serialized in the first place.)
Anyway, the frictions might be a big problem if you're doing React-style DOM re-renders all the time, but in my app, I'm only running the computation heavy and memory sensitive tasks in wasm, and communications are few and far between, making it a non-issue.
0 - https://github.com/WebAssembly/host-bindings/blob/master/pro...
In practice, that means having a standard library on browsers that you can call in a standard idiom.
But many things that were discussed in TFA are blocked on other things that were discussed, and yet still got hundreds of words of article space.
Ditto the feeling. I don't "get" WASM from a typical in-house CRUD development perspective. Game makers, maps, movie editors: sure, they need the speed. But some say it's gonna revolutionize most browser-based application dev, and I can't get specific examples that are relevant to us. And even for those domains listed, relying on the inconsistent and buggy DOM found across browser variations is a problem it probably won't solve. DOM will still suck as a general-purpose UI engine.
WASM just makes DOM bugs run faster.
(I have to confess I've never understood the objections to the DOM. I have literally never had an instance in which I had to use the raw Win32 API that didn't turn into a miserable experience.)
Look at how hard Wine has to work to be sufficiently compatible.
I believe we need to either simplify the front-end standards (pushing as much as possible to the server), or fork browsers into specialities: games, media, CRUD, documents, etc. What we have now sucks bigly. Try something different, please!
Wine isn't comparable, because the Web APIs are designed by an open vendor-neutral standards committee and have multiple interoperable open-source implementations.
Your proposals break Web compatibility and so are non-starters. Coming up with fixes for problems in the DOM and CSS is the easy part. Figuring out how to deploy them is what's difficult.
Re: "Your proposals break Web compatibility" -- Web compatibility is whatever we make it. A one-size-fits-all UI/render standard has proven a mess. What's the harm in at least trying domain-specific standards? We all have theories, but only the real world can really test such theories.
As for Qt, while it may be a good starting point, I don't think direct C++ calls are practical. Some intermediate markup or declarative language should be formed around it: a declarative wrapper around Qt.
I think I might be misunderstanding you. Are you saying accessibility is usually not a goal for the kind of applications that people need to be able to use to do their jobs?
QML is exactly that.
In addition to rich applications for 3D modelling, photo editing, etc. Adoption of WebXR as a distribution platform for VR/AR content could drive development. MagicLeap's Asset Builder Web Tool, for example, is built on top of NextJS and ThreeJS.
No idea how long until we can play around with it in browsers though.
My experience is that modern JS engines are already pretty good at optimizing that kind of code, so I'm wondering if there are still significant speedups to be had by using WASM, given that it's still pretty new and didn't have much time to get optimized further yet.
For my emulator stuff (mostly bit twiddling on integers: https://floooh.github.io/tiny8bit/), I saw a very slight improvement of WASM vs asm.js on browsers that have special handling for asm.js (in the meantime I have dropped asm.js, and only compile to WASM, since all browsers support it).
On iOS Safari (which I guess doesn't have special asm.js handling) WASM is a whopping 3x..5x faster than running the same code compiled to asm.js.
From the look of it, in Node.js, WASM suffers less data-exchange overhead while being decently close to the performance of native addons.
Modern JS engines are pretty good at speeding up the weirdness that is JS, so maybe WASM will only have small benefits, but using either for numerical work would be making your life unnecessarily hard.
GPU access is about browser APIs rather than being something tied to WASM. We have webgl now and it seems like there's more coming hopefully (https://github.com/gpuweb/gpuweb).
>and above all is simple enough that you can figure out why the compiler isn't performing the optimizations you expect.
Reading WASM seems easier to me than reading real assembly, so double-checking a WASM compiler's output seems much easier than double-checking a native compiler's output. Of course it's on the compilers to support WASM well, but hopefully that simplicity aids them too.
They do? When people point out that WASM can't do stuff they are simply pointing out that WASM would need to be extended. That is a pretty simple concept and a very important observation because historically that is the phase where everything falls apart.
Creating an intermediate language that all the higher level languages can compile to is not a new idea. It has been done over and over again with little practical result. Some current scepticism is in order.
- the (bad) sandboxing? WASM uses the same sandboxing model as JS, and thus provides the same security guarantees.
- the requirement of an external plugin? WASM comes preinstalled in all browsers.
- the proprietary aspect? Wasm is an open standard.
- the obfuscated nature? WASM can be decompiled very easily. Besides, the JS served by modern websites is already minimized and obfuscated beyond recognition.
What am I missing here? I consider Flash to be a truly great piece of technology. It allowed easy creation of multimedia content. Its flaws were in its implementation, not in what it allowed. I want wasm to succeed as a reincarnated, better Flash.
The GP I was answering to was talking about Figma, a sort of content-creation app in the browser. Being able to write those tools and deploy them via the web is a net positive for the web. Just like all the good ol' Flash-based games were an awesome thing for the web.
When flash was a thing, most web content didn't move to flash. Not only did Flash behave in a foreign way, breaking the user's habit, it was significantly harder to create flash content than web content. Similarly, it's delusional to think most web content will move to wasm. The barrier of entry is enough to prevent that.
Why can’t WebAssembly developers compile ahead of time and just serve the binary?
>But if all we have is the link, there are two problems here that we haven’t addressed.
>The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.
>Then should a web site have a different version of the code for every possible device? No.
>Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.
Portability - you would need to know what you're compiling for, in the same way you need to compile a different version of your program for your PC and your phone. You could imagine a system where some pre-compiled versions exist for certain browser/host combinations to make life faster for the majority case, though, except:
Security - the primitives that WASM has available are limited and only affect stuff in the sandbox. The browser then compiles that to machine code and can know that the machine code will only affect stuff in the sandbox. This is exactly how JS works - unless there's a browser bug, JS can't write to an arbitrary place on your harddisk (for instance). If machine code was provided directly, then the browser couldn't easily verify that the machine code only did safe things (because machine code can do anything, unlike WASM).
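The key property can be shown with a miniature "VM" (a Python illustration of the principle, nothing like real wasm internals): when every memory access is forced through a bounds check, untrusted code can do whatever it likes inside its sandbox but can never reach outside it. Raw machine code offers no such choke point.

```python
SANDBOX_SIZE = 256
sandbox = bytearray(SANDBOX_SIZE)   # the only memory the guest may touch

def store(addr, value):
    """Every store the guest performs goes through this check."""
    if not 0 <= addr < SANDBOX_SIZE:
        raise MemoryError("trap: out-of-bounds store")
    sandbox[addr] = value & 0xFF

store(10, 99)            # fine: inside the sandbox
try:
    store(10_000, 1)     # an escape attempt...
    escaped = True
except MemoryError:
    escaped = False      # ...is trapped before it can touch host memory
```

Real wasm engines compile the checks away where they can prove safety, but the guarantee is the same: the validated bytecode simply has no instruction that addresses memory outside its linear memory.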
If you think your solution does that, meditate on what "non-technical" means.