Hacker News
Mozilla announces WebAssembly System Interface, what JVM should have been (theregister.co.uk)
259 points by PandawanFr on April 22, 2019 | 152 comments



No slight against WASM (which I've promoted), but early Java people would be well-advised to sit down somewhere they can sob in privacy, before reading this article:

> ...you can't run Java code in a browser without a plugin...

> ...WASM, being memory safe and tuned for validation, also has security advantages over Java applets...

> ...explained the difference between WebAssembly and Java thus: "WebAssembly has been designed to scale from tiny devices to large server farms or CDNs...

> ..."That's how important it is. WebAssembly on the server is the future of computing. A standardized system interface was the missing link...

> ...But a write-once, run anywhere binary represents a worthwhile effort...


>"WebAssembly has been designed to scale from tiny devices to large server farms or CDNs..."

What's ironic is that the "tiny devices" and even "high end professional desktop workstation and server devices" that Java was originally designed to run on when it was started in 1990 were MUCH tinier than the devices considered "tiny" today.

How many times faster is a typical smartphone today (Raspberry Pi 3: 2,451 MIPS; ARM Cortex A73: 71,120 MIPS) than a 1990 SparcStation 2 pizzabox (28.5 MIPS, $15,000-$27,000)?
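Back-of-the-envelope, using exactly the MIPS figures quoted above (a rough sketch; MIPS comparisons across 25+ years of architectures are dubious at best):

```typescript
// MIPS figures as quoted above, not fresh benchmarks.
const sparcStation2 = 28.5;   // 1990 SparcStation 2
const raspberryPi3 = 2_451;   // Raspberry Pi 3
const cortexA73 = 71_120;     // ARM Cortex A73 (typical modern phone core)

// How many times faster than the 1990 workstation?
const piSpeedup = raspberryPi3 / sparcStation2;
const phoneSpeedup = cortexA73 / sparcStation2;

console.log(`Pi 3: ~${Math.round(piSpeedup)}x, phone: ~${Math.round(phoneSpeedup)}x`);
```

So roughly 86x for the Pi 3 and about 2,500x for the phone, by these (crude) numbers.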

http://www.wikiwand.com/en/Instructions_per_second


Forget full computers: an official Java subset even runs on every GSM SIM card. It is incredible!

https://en.m.wikipedia.org/wiki/Java_Card


People keep bringing this up but Java Card is a really small subset of Java so you can't draw any conclusions. In particular it has no garbage collection, which is easily the heaviest part of Java.

> Many Java language features are not supported by Java Card (in particular types char, double, float and long; the transient qualifier; enums; arrays of more than one dimension; finalization; object cloning; threads). Further, some common features of Java are not provided at runtime by many actual smart cards (in particular type int, which is the default type of a Java expression; and garbage collection of objects).


To add on, here's a quick intro to the headache that goes into writing Java for GSM cards - https://youtu.be/31D94QOo2gY?t=607


But writing Java Card for ICCs is like working in another language and ecosystem entirely. The syntax is familiar, but that's about where the comparison ends.


I am not surprised. As someone who cut his programming teeth on Java and .NET, I find modern web development really out of touch with reality.

I really don't get why the community is hell bent on reinventing the wheel, especially when the main defense of npm is that you don't have to reinvent the wheel.

It's really disheartening to see that the best minds of the generation are busy spending time to reinvent the same old things instead of trying to improve upon the existing systems.


>I really don't get why the community is hell bent on reinventing the wheel, especially when the main defense of npm is that you don't have to reinvent the wheel. It's really disheartening to see that the best minds of the generation are busy spending time to reinvent the same old things instead of trying to improve upon the existing systems.

I think it might have to do with people in Silicon Valley considering 40 very old, while those in their late 20s and early 30s have barely learned this lesson yet.


It may surprise you that the people behind Web Assembly are largely senior compiler developers who have certainly been around the block. Dan Gohman has forgotten more about compilers and VMs than 99% of the commenters here will ever know.


Sorry, that was just replying to the parent comment on web development. I assume anyone who is doing compiler and VM development would be very senior and on a whole different level of expertise.


I often wonder why this is the case in software development and not nearly as much in other fields. Isn't the point of education that each generation doesn't have to learn past mistakes by repeating them? Do computer science curricula not include enough historical perspective?

I feel like other fields of engineering don't have such a dismissive approach to their own pasts. Show an electrical engineering class an old analog instrument and there's generally wonder and curiosity. Show a computer science class a slide with a 1 MB hard drive compared to a 1TB SD card and there's generally ridicule and laughter. "Look how stupid they've been", not "we've learned many valuable lessons since".


> Do computer science curricula not include enough historical perspective?

Anecdotally speaking, the average web developer does not have any formal computer science education. Most are self taught or attend a bootcamp or two at most. They have excellent vocational skills, but little knowledge of anything in computing outside of their narrow path of learning.


People who develop webassembly are not your average web developers


Oh man, this is so true. An acquaintance of mine who had recently graduated a bootcamp was crowing about how much he knows. I asked him for the big-O performance of adding an item to a linked list. He looked at me like I was speaking Martian.
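(For the record, the answer being fished for: inserting at the head of a linked list is O(1), since it only touches a constant number of pointers regardless of list length. A minimal TypeScript sketch:)

```typescript
// A singly linked list node.
interface ListNode<T> { value: T; next: ListNode<T> | null }

// O(1): prepending touches a constant number of pointers,
// no matter how long the list already is.
function prepend<T>(head: ListNode<T> | null, value: T): ListNode<T> {
  return { value, next: head };
}

let list: ListNode<number> | null = null;
for (const n of [3, 2, 1]) list = prepend(list, n);
// list is now 1 -> 2 -> 3
```

(Appending to the tail is also O(1) if you keep a tail pointer; it's O(n) if you have to walk the list first.)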


And a lot of times, these are the people writing confidently on the internet about web development. Problem with programming is most of the people in the community are amateurs and they communicate via the internet. And on the internet, no one knows you are dumb.


I feel like the world of computer programming is caught in a very fast iteration of the old adage, "Those who cannot remember the past are condemned to repeat it." Oddly enough, I got curious about that quote's origins, so I just went and looked it up. The full quote in context is:

"Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, infancy is perpetual. Those who cannot remember the past are condemned to repeat it." - George Santayana (1863 - 1952) [The Life of Reason (1905-1906) Vol. I, Reason in Common Sense]

This neatly dovetails with the thoughts that were floating in my head. This part, I believe, is the explanation for the current state of programming: "when experience is not retained, infancy is perpetual."


You realize that the people behind Web Assembly, for example Dan Gohman, are some of the foremost world experts in compilers who have worked for literally decades in the field?


I don't think that GP was talking about the people who actually made web assembly, but about those who hype it. I'd be surprised if the former were not full of awareness of and respect towards the technologies that came before.


Your appeal to authority notwithstanding, yes I do realize that. I was not referring to, nor condemning the authors of WebAssembly for their work. I was proposing an answer to the question above, about why it seems like a lot of software development is re-inventing the wheel.


I wonder the same thing. I suspect the age of the field plays a role. While theoretical computer science is about 200 years old, practical computers have only existed for about 70 years and have been widespread for about 40. Many professors teaching computer science today couldn't study computer science back in their day because it simply wasn't offered.

This has wide-reaching effects, one of the more obvious ones being that 20-year-old knowledge seems ancient to most.


In other fields of engineering, no one would ever think of calling themselves an engineer after a 6-month bootcamp.

Sadly, in several countries this is a thing, which also contributes to low expectations regarding the quality of delivered work.

My Informatics Engineering degree certainly did include historical perspective of previous languages, operating systems and hardware architectures.


The .NET ecosystem is fantastic right now. .NET Core is fast and efficient, asp.net is better, faster and more full-featured than ever, and MS is ahead of everyone with Blazor Components, which finally provide a real alternative to JS frontend frameworks and can run either server-side or as WASM modules.


I agree largely from a development perspective, but the client-side javascript frameworks are ultimately more performant, particularly on mobile, and are not as bad as they used to be. Neither server-side nor WASM are ideal for mobile devices. Downloading many megabytes of .NET dependencies for a WASM app is obscene compared to like 30 kilobytes for React.


It won't be megabytes; that's only in the experimental phase. Binaries can shrink dramatically once run through a linker and trimmed down to only the APIs actually used.

As far as performance goes, I agree it won't matter in most trivial interactions, but large datasets with computations already see an impact (for example, we sometimes have million-row data tables to show).


> Binaries can shrink dramatically once run through a linker and trimmed down to only used APIs.

It's worth pointing out that .NET Core has an assembly/executable generator that can package only the .NET libraries that are actually used.


Rust-based WASM front-end libraries will likely be small and also high performance. Unfortunately, all of the existing ones are highly experimental right now.


> 30 kilobytes for React

This is just not even close to being true.


The compressed size is 31 kb, and other frameworks are smaller.

https://gist.github.com/Restuta/cda69e50a853aa64912d


There's 3KB Preact with an almost identical API.


So what's the other 27KB+ for?


Fiber, support for async rendering, suspense, hooks. A different VDOM algorithm.


Better error handling and messaging...


With 5g that won't be an issue anymore...


Until they decide that 5g is good enough to allow them to ship the browser with every website. Net speeds have been increasing every year and yet the websites take longer to load.


In 2026 maybe...


IMO, server-side Blazor is a non-starter, basically just a proof-of-concept to get something up and running before the WASM version is usable, but definitely not something I would use in production.


Why not? Works fine in our tests, and a large part of .NET apps are internal apps that work great with this kind of framework.

I think the biggest issue is sending every interaction over the wire but there are ideas to separate the event handling or even split which components run server vs client-side. That would allow highly responsive inputs with no lag while the parent component can run all its logic on the server.


I mean, sure, if you have tens of internal users maybe (but then it could be argued that it could just as well be deployed as a desktop app -- I could buy the deployment method as an argument, mind you). But thousands? Call me sceptical. It just makes me think of `asp:UpdatePanel` all over again.


Are you referring to the session state and websockets connections? Both scale horizontally with minimal effort; I don't see the problem.

Asp:UpdatePanel was great for its time, and no different than doing an XHR request now and replacing the contents of a div with the response. You can still get basically the same effect with libraries like Turbolinks, which is what GitHub does.


> It's really disheartening to see that the best minds of the generation are busy spending time to reinvent the same old things instead of trying to improve upon the existing systems.

I hear that often in regard to programming languages. But I see a lot of value in remixing a lot of existing ideas in a new package. A wheel can only be so round and its interaction with the road is quite simple, but tech is different. Systems have widely different requirements. Old implementations have shortcomings and a hard time fixing them, especially due to backwards compatibility of complex systems. I cannot imagine how hard it is to remove null from an existing programming language, as C# is currently trying. At some point it's just easier to start from a clean slate.

Is all the effort worth it? I don't know. But I wouldn't want to imply they're just doing it because they're out of touch with reality.


Applying the "reinventing the wheel" meme to scenarios like this kind of misses that wheels actually have been reinvented numerous times resulting in major advancements in wheel technology.

Reinventing the wheel takes a lot of effort to be worth doing, because it takes a lot of effort to improve foundational technology, but it is sometimes worth doing if you are going to put an appropriate level of resources into it.


With proper versioning, I don't see why even major features like that would be a problem.

Null removal is completely compile-time anyway so it's complicated to compile but relatively simple to implement as a language upgrade at a certain version, and in this case it's also opt-in.


MS and Oracle claimed copyrightability of APIs. Think about that. It surely didn't help accepting .NET and Java as worthwhile universal tools. Too bad for them. Good thing WASM avoided this pitfall.


1. is true

2. Yeah, I read that and was super confused. That was literally one of the main points of the Java language itself, enforced by the JVM.

3. I think this is more a matter of expected level of abstraction, but I agree this is fairly weak.

4. This just seems like standard "the new thing is vastly superior to the old thing", with a valid touch of "the JVM is too heavily abstracted from how computers work"

5. Yeah, this was also weird.


I think what he means is that these were literally the same promises that were made about Java and the web back in the early 1990s.

Java was the solution that would provide a robust, symmetric (server+client), secure, highly capable, and portable platform for complex web applications.

Early Java folk weep because Java failed so badly on the client, and something else is stepping in to do what Java could not.


Thank you for clarifying that.

Also, one thing less well-known than the fact that Java applets once ran in the popular Web browsers (back when JavaScript was said to be just a glue language for invoking the Java applet) is that Sun already had a richer, Java-centric Web browser (HotJava), which used Java for content-type handlers. Imagine providing content as data of some type/format, and the browser would automatically download an appropriate UI for that type, on the fly, integrate it into its UI, and it would all be secure.

(I first saw Java when it was called Oak, and Sun had great people doing major things, Java only one of them. When Java applets first hit conventional Web browsers, most people thought they were for replacing animated GIFs, thanks to a demo program. I probably wrote some of the first Java desktop application code outside of Sun, partly to demonstrate that Java was a real applications development language. Well, the language was there, and in many ways a huge improvement over the C++ that most shrinkwrap and technical desktop application developers were moving to, though the library support took a while to catch up, and performance took longer.)


You mention it in your last sentence, but most of these tales forget to mention it at all: java was slow. I mean, slooooooooooooooow. And that reputation stuck for much longer than it was actually true. Java applets were an abomination and you wanted to repeatedly stab yourself in the head while using even a simple one. On a then-average p166 with 24mb of ram, your browser would go unresponsive while loading, jumpy mouse cursor, os starting to swap, and then some irregular annoying hangs while the GC did its thing. Not to mention how ugly awt and swing looked even back then.

Modern web is a joke regarding resource usage and complexity etc., but java was a shitshow in practice, except on beefy servers.


> On a then-average p166 with 24mb of ram, your browser would go unresponsive while loading, jumpy mouse cursor, os starting to swap, and then some irregular annoying hangs while the GC did its thing.

Reminds me of client-side JS frameworks. Or Electron apps.

> Not to mention how ugly awt and swing looked even back then.

Web UI is still as ugly as ever, of course. Peak UX usability was native UIs in the 1990s and early 2000s, after that it was nothing but steady decline.


> Reminds me of client-side JS frameworks. Or Electron apps.

Even big framework web apps on a Core 2 Duo running old Firefox are nowhere near as bad as early Java trying to run something as a web applet.

I remember staring at that crazy Java applet load spinner and hating Java, and this was already in the early 2000s (on the first white iBook running Mac OS 9). It would have been much worse 7 years earlier.

It was really just the JVM’s startup time and the clunky AWT-based UI that made Java lose on web clients. Flash easily dethroned Java there because it started almost instantaneously and could do fancy animations (recall dial-up speeds meant video wasn’t an option).


Thanks for teaching me something totally unrelated by accident today. I was under the impression that all of the post-clamshell iBooks were OSX-only and had to check everymac. Now I'm slightly wiser on a topic that no one gives a shit about.


"Now I'm slightly wiser on a topic that no one gives a shit about." - I'm taking this quote and integrating it into my life.


Who is downvoting this?

>Peak UX usability was native UIs in the 1990s and early 2000s

So much this. Give me Win2K and Office2K style applications - multiple tiled or overlapping windows, modeless views, regular menus and toolbars, context sensitive right click menus, etc. over this single pane with a hamburger button and search box crap any day of the week.


I’d argue that Mac OS (9/X) hit peak usability in the mid to late 2000s, when they were still trying to dethrone Windows.


Java has been quite successful on my client devices: the ones running in my pocket, on my TV, and on tablets.

Also on client devices running on my credit card, on factory management client screens, and in a couple of car infotainment systems.

And on client devices across many corporations still safe from Electron madness.


Java applets are how a depressingly large amount of CC processing happens.

(I think Java Card maybe? is responsible for CC handling in all the magic CC features in phones)


> I think what he means is that these were literally the same promises that were made about Java and the web back in the early 1990’s.

And as a whole (except for "run in the browser without a plugin", which was never promised, AFAIR), they were all true of Java, relatively speaking, compared to what was available before.


Sorry, which point were you referring to? (I recognize the article as bad, I'm just not sure which problem you're referring to :D )


I think the article author knows exactly what they're doing. "Write once, run anywhere" was literally Java's strapline.


I'm so often confused by these comments.

Are you saying that we could stick a JVM in the browser and ship Java applets around to be progressively/streaming loaded and executed in browser?

I don't know much about the JVM but it seems the goal of WASM and the JVM were quite different, no? And yes, WASI has overlap with JVM, but if WASM catches on and people like it, how does WASI make any less sense?

Are you advocating we have WASM and JVM but no WASI? Is that better? Not sure why anyone is crying here..


> Are you saying that we could stick a JVM in the browser

More or less. I mean, that used to be a thing you know.

> the goal of WASM and the JVM were quite different

WASM: A universal write-once, run anywhere bytecode for heterogeneous networks of systems

JVM: A universal write-once, run anywhere bytecode for heterogeneous networks of systems


> More or less. I mean, that used to be a thing you know.

I did not know Java applets could be stream-loaded (i.e., optimized for the web).

> WASM: A universal write-once, run anywhere bytecode for heterogeneous networks of systems

Not at all.. at least, not in my view. WASM is optimized for platform issues specific to the web. That is to say, how it behaves on load is the first priority.
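To make "how it behaves on load" concrete: a .wasm binary is laid out so an engine can validate (and even compile) it in a single forward pass, while the bytes are still arriving over the network (that's what `WebAssembly.instantiateStreaming` exploits in browsers). A tiny sketch using the 8-byte empty module:

```typescript
// The smallest valid .wasm binary: the "\0asm" magic bytes plus version 1.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

// Validation is a single linear pass over the bytes, which is what
// lets engines check and compile sections as they stream in.
console.log(WebAssembly.validate(emptyModule)); // true

// In a browser you would compile straight off the network, no buffering:
//   const { instance } = await WebAssembly.instantiateStreaming(fetch("m.wasm"));
```

Classfiles, by contrast, were designed for a VM that loads them lazily from a local classpath, not for progressive delivery over HTTP.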

I never got that impression from the JVM. If you say it is, then it seems to be a failure in marketing of the JVM or something.

Since you seem to feel WASM is a waste, what are your thoughts on the failure of the JVM? Ie, why don't I have Go and Rust compile targets for the JVM, with JVM browser code running my JVM targets and etc?

If the JVM truly does have all of these WASM-web-oriented features, then it is an impressive failure on the JVM's part. Quite curious.


> ...why don't I have Go and Rust compile targets for the JVM, with JVM browser code running my JVM targets and etc?

The JVM is not a true VM. It's heavily coupled to the Java object model and the Java GC.


And yet so many posts here cry of history repeating itself and seem to imply we should just be using the JVM.

I'm honestly puzzled. Am I the one missing something? Why is the JVM crowd seemingly upset here?


> Why is the JVM crowd seemingly upset here?

Nobody is upset, really. Java web client lost a very long time ago to Flash.

But imagine: if 25 years ago you poured your heart and soul into a project because it promised to be a particular kind of great (but later failed), it can sort of drive the knife deeper and throw extra salt in the wound when your project's ugly red-headed stepchild (i.e. JavaScript) grows up into something great and gives birth years later to a beautiful standard (asm.js, later WebAssembly) that becomes what your project thought it would be, and much more. Or something like that.

Java certainly has a well-established place in software, no doubt about that. But its original promises had it (combined with AWT/Swing, applets, and XML, ... umm yeah) becoming the ultimate "write once run everywhere" platform that could scale up to massive servers and down to tiny embedded chips, and every client platform in between. In retrospect, C# and .NET would have done well not to attempt to emulate Java's initial scope.


I'm not anti-WASM. Actually, I think it's pretty cool.

And I have no particular JVM love. I use it exceedingly rarely.

It's just that you're being kind of a frustrated, pedantic tool right now.


The only difference is that WASM will be available (I assume) in most PC Chrome browsers within 2-3 years of release; that is 800M+ users, assuming Chrome holds a stable market share. Along with Android, that is a potential 2-3B devices, assuming stable upgrade cycles and shipments.

That is more penetration than Java could ever get on Desktop and Mobile.

Although Java is actually doing OK in the embedded space, I am not entirely sure how WASM would work for that segment.


Maybe my sarcasm detector is failing, but:

Java is the most used programming language in the world (https://www.tiobe.com/tiobe-index/)

It not only powers all kinds of desktop things, it is foundational to Android, which currently powers ~85% of all mobile devices.

How can you get more penetration than "most used programming language" and "foundational to the largest mobile OS"? I mean there isn't even a category that fully encompasses the degree of dominance of Java. The next step, I assume, would be to rename "Programming" (all of it) to "Java".


I should have used the word JVM rather than Java. My apologies.


> Android [...] assuming stable upgrade cycle and shipment

Good one.


From TFA:

>> Mozilla this week announced a project called WASI (WebAssembly System Interface) to standardize how WebAssembly code interacts with operating systems.

This is just another implementation of holes that allow web sites access to the rest of your system. It really is supposed to be the operating system's job to manage resources and what can be accessed. The problem is that this functionality keeps getting re-implemented by others with different agendas.


> This is just another implementation of holes that allow web sites access to the rest of your system.

Well, unlike systems designed in the 90s, it's designed for the modern "everything-is-a-threat" mindset rather than the optimism of the 90s that everything would be safe.


What? No. WASI isn't meant to be implemented by the browser. It's meant for WASM runtimes outside of the browser, like Lucet and Wasmer.

People have been experimenting with "WASM outside the browser" for a while now, and WASI is just an API for making OS calls that a runtime can implement.


I thought the point of WASI was to give you an API that allows you to bring your WebAssembly apps out of the browser and run on the OS itself, not to allow deeper integration of the app in a browser to the underlying OS.

Edit: words


> WebAssembly has been designed to scale from tiny devices to large server farms or CDNs

Has it, though? What we have now is an MVP, and easily 90% of the features aren’t there yet.

When WASM gets garbage collection, will it still be able to run on tiny devices? We don’t know yet.


I would hope it's opt-in for those scenarios. If the host is too "tiny" to carry around a GC, then a whole host of languages are not viable anyway. Requiring a GC wouldn't do them any favors.


Hard to say without reading all the specs/proposals: https://webassembly.org/docs/future-features/

But yeah, even the specification itself is a WIP.


Regardless of any technical merit or reasoning, one big advantage is that Javascript/WASM is not controlled by any one large company or organization.

This provides substantial differentiation from .Net/Java/Flash/etc.

That said, this advantage risks diminishing if Google's influence continues to grow.


> Javascript/WASM is not controlled by any one large company or organization

It’s disproportionately affected by Google.


Given that Mozilla has done tons of WASM work, and that one of the earliest implementors was Rust, comparing WASM to Oracle's iron fist over the JVM doesn't seem to jive with reality...


> Oracle's iron fist over the JVM

Still doesn't compare to the Sun God and his silicon fist.

(With apologies to Steve Yegge.)


Jive != jibe[1] (but agreed about Moz's role in WASM)

1: https://www.merriam-webster.com/words-at-play/jive-jibe-gibe


I didn’t say Mozilla, I said Google.

I didn’t say iron fist, I said disproportionately affected.


And it isn't true, because Mozilla is very involved, and every browser engine will have to get on board.


First and foremost Chrome has to get onboard. Hardly anyone cares if Mozilla implements anything.

A few people will care if Safari does.

Edge is out of the picture and is being replaced by Chromium (aka Chrome).

On WebAssembly specifically. Here’s WebAssembly working group: https://www.w3.org/2000/09/dbwg/details?group=101196&order=o...

I count 18 people from Google, and only one from Mozilla. The second largest representation is from Microsoft (8 people), but their input is now Chrome/Chromium input.


Exactly, did we ever move beyond monoliths dominating a standard or did they simply evolve to calling something "open" while exerting disproportionate influence on the standards body?


It's kind of true -- it's designed with a recognition that computers are real things, and shouldn't be abstracted out in terms of a specific programming model. The JVM approach (somewhat by design) limits the ease of interaction with other systems.

The biggest drawback in WASM to me has always been the memory model. While it is memory safe, it's currently opaque to the host environment, which means that the general implementation is to over-allocate address space and essentially implement pointers as integers. That results in pointers that can't be trivially shared between environments.
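Concretely: a module's linear memory is one resizable byte buffer, and a WASM "pointer" is just an integer offset into it, meaningless as an address in the host's own address space. A small TypeScript sketch from the host (JS) side (the offset 1024 is arbitrary, for illustration):

```typescript
// One 64 KiB page of WASM linear memory, as the host sees it.
const memory = new WebAssembly.Memory({ initial: 1 });
const bytes = new Uint8Array(memory.buffer);

// Inside a module, a "pointer" is just an i32 index into this buffer.
const ptr = 1024; // arbitrary offset for illustration
bytes[ptr] = 42;

// The host can only interpret ptr relative to *this* memory; it is not
// a real address, so it cannot be handed to native code as a pointer.
console.log(bytes[ptr]); // 42
```

This is also why out-of-bounds "pointers" can trap instead of corrupting the host: every access is bounds-checked against the buffer.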

That said, I think that if I /were/ to be making an app that had plugins, or an OS from scratch, etc., I would define an interface via WASM. Screw loading unbound untrusted code into my process. My code is 100% bug free; I don't want other people to break it.

This is not remotely true :D


How do you feel about the app extension model, i.e. loading bound semi-trusted code into a sandboxed process and communicating via XPC?


App Extensions are a really good solution* to the same problem domain, with the additional constraint that you have to support arbitrary compiled code, with all the intentional or unintentional memory safety violations.

* I am biased here.


> I am biased here.

Haha, I'll defer to you then. But I'm still curious about this:

> you have to support arbitrary compiled code, with all the intentional or unintentional memory safety violations

But there are multiple barriers here, are there not? First, your code needs to be signed (and notarized, at some point), and then at runtime the app sandbox/entitlements/process separation ensures there's not much you can do even if you're running arbitrary code. Sure, the first "barrier" isn't necessarily all that restrictive or thorough, but the other one should ideally stop anything horrible?


It's not so much a problem of signed vs. unsigned code.

There's the "are you malware that's got through whatever rules exist in the target App Store", but there's also the "is the extension code buggy".

In my experience the latter is the real killer - there are all sorts of things an App Store review process can do to prevent malware, but they're all dependent on the code doing what static analysis claims it's doing - they can't protect against terrible code -- my experience writing software has taught me that no one writes 100% correct code, 100% of the time.

Historically GPU drivers have been a terrible source of bugs (it's why WebGL has such tight restrictions on features and shader syntax), not because of intentional malfeasance, but because you're processing arbitrary untrusted content in a trusted environment (my understanding/belief is that the Windows driver model has pushed this out of the kernel? please confirm/deny), so functionally forcing that into a restricted execution environment is a win.

Obviously the memory model of WASM still allows for things like use after free, out of bounds access, etc; but at least it's not unbound into the host address space.


I guess you're assuming that the platform/runtime that's doing the isolation is close to being correct, which we both know isn't quite true ;) The only benefit I see is that you're adding an additional level of abstraction (namely, you're executing a vetted selection of native code rather than arbitrary native code), which makes reaching the point where you can actually break things harder. Was this the point you were trying to make?


The idea is to reduce the ability of the untrusted code to go wrong and compromise the host.

Going wrong can mean “exploited by malware” through to “extension code trawls the host process address space to provide ‘features’”

The latter of these two used to happen all the time with “haxies” on OS X.

The follow-on security benefit of providing a semi-virtualised environment for third-party/untrusted code is that if the VM is exploited, you are able to fix and ship a fix for the VM. You can't fix the untrusted code.


In the future WASM can add more memory flexibility; for now you can only use one big byte array, but the standard can be extended to allow for many of them, and for dynamic or special host semantics. That was my impression from reading the standard: a lot of the restrictions are of the form "we still do not know the best way to standardize this". Like RISC-V's 128-bit support.


The quotes from the Web Assembly team at Mozilla and Solomon Hykes read like these people are suffering from some kind of late 1990s amnesia. Till Schneidereit is saying exactly the same thing Sun said when Java was announced. Acting like they are breaking new ground strikes me as unhelpful and it's not clear to me how it will help the cause. Maybe WebAssembly will enjoy more success than Java, I truly hope that it does.

IMHO, the success of these products has very little to do with how good they might be technically. It's all about sales and marketing and, eventually, politics. Perhaps that's where the motivation to re-write recent history is coming from.


> It's all about sales and marketing and, eventually, politics.

Not in this case. Java failed because it sucked from a technical point of view. It traded better 'OO purity' in exchange for worse security, portability and performance.

The end result is that users got something that was slower and buggier for no gain.


Where did you get this "Java failed" idea? Java is one of the most widely used languages in the whole world...


Java failed as a portable language-agnostic VM for user-facing apps.

You know, the use-case it was originally designed for. (I read the hype for Java 1.0, I was there.)

Nobody at the time could imagine that Java would eventually become the enterprise COBOL replacement.


I'm not so sure it failed. I use IntelliJ IDEA daily, people in my office use WebStorm for Javascript development. There are Swing applications out there that people use, admittedly the ones I've seen are mostly business software. And of course all Android applications are leveraging Java.

It's argued that only IDEs use Java, but Java + Swing strikes me as the most popular cross-platform language and toolkit currently in use.


that's only if you don't consider "web/browser" to be "cross-platform"


We develop on Windows, deploy across OS X, Windows, UNIX, mainframes and embedded devices, without re-compiling.

Looks pretty much living the dream of portable language-agnostic VM for user-facing apps to me.


Java didn't succeed in terms of running code in a web page but it has been wildly successful in regards to "write once, run everywhere". It is widely deployed and has been successful isolating code specific to a particular OS or platform.

While it's clear that JavaScript has been successful in the "run code in a web page" sense, its primary deployment target remains the web browser; even when it's marketed as a cross-platform solution, it's often targeting something like Electron.

WebAssembly has its work cut out for it trying to succeed in both of these spaces. I think the project might enhance its chances of success by remembering Java and its browser plugin rather than pretending they are the first.


I wonder what new standard we'll come up with 10-15 years from now that'll be titled "What WASM should have been".


Isn't that what the goal of software should be? Solving problems with older implementations?

Do people think WASM is not better than the JVM for streaming browser based usage? Did it learn nothing from the JVM? The sarcasm in these threads always confuses me.


Yes, because WASM is basically politics due to the refusal of other browser vendors to adopt PNaCl, and it doesn't provide any security against memory-bounds and null-pointer exploits inside the sandbox.


So where were JVM targets for all my favorite languages, or any browser support for that matter?

I'm not going to debate any points against the JVM, but all I know is that I can compile languages I want to all browsers today. Why can't I do that for the JVM?

There must be some massive, gory failure of the JVM and/or the companies involved with the JVM for it to be an equal of WASM and yet have zero traction for modern languages and browser targets.


Depends on the language, I guess; on the JVM I can compile Java, Kotlin, Scala, Clojure, Tcl, Python, Ruby, Common Lisp and any other language able to produce LLVM bitcode.

Also 86% of the mobile OS market runs on Java.

Additionally smartcards, a big portion of electricity meters, factory automation, smart copiers, M2M gateways, Bluray Players,... run on Java.

Alongside JavaScript and .NET, it rules software development across the corporations of the world.

Hardly a failure, only for naysayers.


If only there was something else but web development...


That would be a good thing for those working on WASM, their work would have had an impact.

The worst thing that can happen is that in 10-15 years from now nobody remembers the WASM fling, and all these man hours go to waste.


s/10\-15/20\-25/ Even then, WASM does a LOT of things right that Java absolutely failed at. Namely: security.


Kind of funny. WebAssembly is basically like a pre-installed Java browser plugin.


Without all the bugs and security holes.


Everything was intended to be secure, but no one writes perfectly secure code; the same goes for WASM. Java exposed a lot of system access (files, devices, I/O) that you just don't have in WASM. In that respect it's more secure, but it also does less. If you're going to include the browser APIs, you can argue they're just as insecure as Java, given their less-than-stellar history.


Name any technology that loads and executes remote content with a better track record than browsers and JavaScript. At least anything with even 1% of the capability.


I wish JS made it easier to sandbox itself with existing mechanics - like simply unreferencing a system API. I am looking forward to this.


Java is a language backed by the JVM; it tried to do two things:

Be _the_ language (Java)

Be _the_ platform (JVM)

Wasm is a compilation target, so it's better to compare it to the JVM than to Java. The thing that has me excited is that _every_ language will be write once, run everywhere, not just some new and unproven language (like Java was at the time).


The JVM in the browser was supposed to be safe, too.


The underlying problem is what the threat model was: in Java/JVM a significant amount of effort was put into addressing "what if the bytecode itself is malicious", while the core applet VM had quite a lot of power over the system - e.g. file I/O, GPU access, etc.


Actually only when one did not bother to spend any effort on Security Managers, classloader separation and JAAS.

All the tools were there.

WebAssembly is still not safe from internal memory corruption, due to lack of memory tagging and bounds checking.


> WebAssembly is still not safe from internal memory corruption, due to lack of memory tagging and bounds checking.

Unsafe applications won't ever be magically safe when compiling them to WebAssembly. Neither would this be the case when compiling to Java bytecode - if this was possible at all.

IMHO WebAssembly is a compilation target and therefore not the right layer to solve this. This is the responsibility of the language or the specific application. If you want to solve this in WASM, I predict you couldn't just compile all different languages to WASM anymore without significant changes to the codebases. If this would be feasible at all..

Rewriting those huge C/C++ codebases is simply not an option, new applications can be written in safe languages and then compiled to WASM.


Java bytecode requires bound checking and null pointer validation, as per the JVM specification.

CLR proves the contrary, by having C++ support, with the difference between safe Assemblies (where typical memory corruption opcodes are not allowed, compilation via /CLR) and unsafe Assemblies, where WASM like opcodes are allowed.

To load an unsafe Assembly, the host has to explicitly allow it.

Similar examples on IBM and Unisys language environments, e.g. on ClearPath, the admin must allow the execution of binaries tainted with unsafe code.
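The JVM-mandated checks mentioned above are directly observable in plain Java: an out-of-bounds array access and a null dereference each raise an exception at the point of access instead of corrupting memory. A minimal demonstration:

```java
// The JVM specification requires a bounds check on every array access and
// a null check on every dereference; violations raise exceptions instead
// of corrupting memory.
public class JvmChecks {
    static String probe() {
        int[] a = new int[4];
        try {
            int x = a[10];                     // bounds check fails at runtime
            return "unchecked: " + x;
        } catch (ArrayIndexOutOfBoundsException e) {
            // expected: fall through to the null check
        }
        String s = null;
        try {
            return "unchecked: " + s.length(); // null check fails at runtime
        } catch (NullPointerException e) {
            return "both accesses checked";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe()); // prints "both accesses checked"
    }
}
```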


WASM has bounds checking on the linear memory as well.


Care to point out where in the standard? Because I don't see it on the memory access opcodes.


> A linear memory is a contiguous, mutable array of raw bytes. Such a memory is created with an initial size but can be grown dynamically. A program can load and store values from/to a linear memory at any byte address (including unaligned). Integer loads and stores can specify a storage size which is smaller than the size of the respective value type. A trap occurs if an access is not within the bounds of the current memory size.
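A minimal sketch of the quoted rule in Java (not a real engine; `LinearMemory`, `load32` and `Trap` are invented names): a load traps whenever its end falls past the current memory size, and growing the memory moves that boundary.

```java
import java.util.Arrays;

// Sketch of the spec rule quoted above: every load is checked against the
// current memory size and traps when out of bounds. `Trap` and `load32`
// are illustrative names, not real engine APIs.
public class LinearMemory {
    static class Trap extends RuntimeException {
        Trap(String msg) { super(msg); }
    }

    private byte[] bytes;

    LinearMemory(int initialSize) { bytes = new byte[initialSize]; }

    void grow(int extraBytes) { bytes = Arrays.copyOf(bytes, bytes.length + extraBytes); }

    // i32.load: 4-byte little-endian read, bounds-checked against the
    // *current* memory size (the memory can grow dynamically).
    int load32(int addr) {
        if (addr < 0 || addr + 4 > bytes.length) {
            throw new Trap("out of bounds memory access");
        }
        return (bytes[addr] & 0xff)
             | (bytes[addr + 1] & 0xff) << 8
             | (bytes[addr + 2] & 0xff) << 16
             | (bytes[addr + 3] & 0xff) << 24;
    }

    public static void main(String[] args) {
        LinearMemory mem = new LinearMemory(8);
        System.out.println(mem.load32(4));   // in bounds: prints 0
        try {
            mem.load32(6);                   // 6 + 4 > 8: traps
        } catch (Trap t) {
            System.out.println("trap: " + t.getMessage());
        }
        mem.grow(8);                         // grow to 16 bytes
        System.out.println(mem.load32(6));   // now in bounds: prints 0
    }
}
```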


In general the groups doing standardization for browser APIs and runtimes don't seem to care much about whether web applications are compromised, only whether the browser or host platform are compromised. It's reasonable for the latter two to be priorities, but when we're talking about huge gmail-tier applications running unsafe C in a sandbox that have access to All Your Important Data, we're going to massively regret letting type safety and other features slide.


> ...we're going to massively regret letting type safety and other features slide.

PHP is memory safe, and yet is a larger source of data breaches and security bugs than C by (rough guess) an order of magnitude.

C is not the bogeyman you're looking for.


PHP may be memory safe but the shitload of poorly written C extensions enabled by default on all hosts that shipped with the distribution from 1995 to 2010 sure as hell weren't.


C extensions wasn't the part of PHP that broke the amateur Internet, SQL injections were.
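The classic illustration of that point, as a sketch (the table and query are made up for illustration; real code should use parameterized queries so input stays data):

```java
// Classic SQL injection: naive string concatenation lets user input rewrite
// the query. Table/column names here are made up; real code should use
// parameterized queries (PreparedStatement) instead.
public class SqlInjection {
    static String naiveQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // Benign input behaves as intended:
        System.out.println(naiveQuery("alice"));
        // SELECT * FROM users WHERE name = 'alice'

        // Malicious input turns the lookup into "match every row":
        System.out.println(naiveQuery("' OR '1'='1"));
        // SELECT * FROM users WHERE name = '' OR '1'='1'
    }
}
```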


Using the WASM GC part instead of linear memory will solve that, right? I know it's not a possibility for all programs, but overall, it'll help?


Not at all, because a large majority of applications being ported to WASM are written in C and C++, with all security caveats it entails.

Garbage-collected implementations of C and C++ do exist, and the only thing their GC fixes is use after free.


Web browsers also have all those things.


Yup, and work super aggressively to restrict it -- back when the JVM was supported, a large amount of the browser sandbox had to be extended to support the arbitrary behavior of the JVM, subsequently moving it out of process still required providing unrestricted access to OS components that you would rather they didn't.

At a very basic level it can be summed up with: the JVM approach to GLSL was to just throw it at the GPU, whereas the browsers worked on restricting WebGL to a super constrained subset (this was my fault, but is the correct thing to do :D )


By the time browsers were considering out-of-process plugins, the JVM was already thoroughly eclipsed by Flash. I think Flash's general brokenness (over half my browser crashes/hangs were caused by Flash) was the main motivation for out-of-process, not JVM's hugely broken sandbox.


Applets were eclipsed because of their monumentally expensive launch requirements - both time and memory.

The applets also made it hard to do a bunch of the simple games that were super popular - flash included a large amount of multimedia functions built in, and an editing environment that was geared towards interactive design.

The reason java took so long to move out of process was because the java<->browser bindings were unique to java, it did not use any of the normal plugin apis, and expected a lot of direct linkage to the host system.


The WebAssembly advocacy force will tell you that everything is great and was invented by WebAssembly, hand-waving over the endless bytecode distribution formats that have existed in computing since the 60's.


Is there a link to the WASI spec?


This seems to be the current documentation: https://github.com/CraneStation/wasmtime/blob/master/docs/WA...


Why do they call it webassembly if it doesn't even have goto.


This previous thread covers differences between the JVM and WebAssembly:

https://news.ycombinator.com/item?id=19502702

tl;dr

- JVM doesn't do enough to separate computation and I/O

- JVM doesn't run C code very well, or it requires research-level technology to do so (Graal). This applies to both computation and I/O -- it has a completely different I/O interface than C programs rely on.

- Photoshop / Word / Excel / etc. were never ported to the JVM. The browser actually has better equivalents of them.


That last one is really stupid.

The reason they weren't ported is because they were legacy codebases with some code going back to the original versions including lots of assembler. It was always going to be a monumental task to rebuild them from scratch.


That you don't have to rebuild such apps from scratch on WASM is exactly the point.

You can compile legacy codebases in C and C++ to WASM. It was designed for that. Doing that on the JVM requires "research-level" techniques because the bytecode has a completely different design.


Actually there was an Office-like suite written in Java, e.g. Corel Office for Java.

JVM separation between computation and IO can be managed via classloaders and JAAS.


OpenOffice/StarOffice were also partially Java-based for a while, if I remember correctly.


They could use whatever JVM one has installed, but the dependency became optional and can be turned off (Tools->Options->LibreOffice->Advanced on LibreOffice). I keep it turned off as I don't have Java installed anywhere and probably I lose some features in the process, but I don't recall having any problems in normal use. I'm definitely not an advanced LibreOffice user though.


I see nobody's mentioned that Kotlin can compile to WebAssembly (in addition to Java bytecode, native binaries and JS).

Kotlin is a beautiful language and hopefully WebAssembly will help it to gain traction outside the Android world.


What Kotlin is now has no bearing on what JVM was then or even what JVM is now


Isn't most of the work done already with LLVM intermediate representation? Stick to POSIX (and Qt if necessary) in the code, distribute the IR with a compiler which compiles and installs it, and job done.


There's no such thing as a single LLVM IR. It's actually a family of IRs where the actual representation is always dependent on the target architecture.


If you want to pull the chain on this diarrhea:

https://github.com/stevespringett/disable-webassembly


Please don't post name-calling dismissals of other people's work to HN. This is in the guidelines: https://news.ycombinator.com/newsguidelines.html.

The article you've linked to is much better than this and deserves a better representation than that.


Note that:

> To be fair, many of the threats above also apply to Javascript


Speaking as someone who has made their career with Java, "what the JVM ought to have been" is something marketed for server-side use from the beginning.

It was poorly suited for the browser from the earliest days, and even now the language and runtime have evolved to make it a round peg for that square hole no matter how WASM develops.

Yet because students and younger webdevs see it through the prism of trying to be a browser-side solution, it taints the brand for people who lack business application experience. Java is thought of as "insecure", even though the OVERWHELMING majority of security patches are for the browser applet plugin, which hasn't been widely used in twenty years and is now being retired. Instead of being "a thing that makes large-scale business server applications more performant, and more tenable for large teams to work on", it is often seen by newbies as "a failed React.js alternative" where it comes up short.

Actual Java developers stopped thinking of it that way decades ago, but the history unfortunately persists.


Java in the beginning was standalone on the client side too. The web browser plugin came post-1.0, and the pre-1.0 (Oak) target applications were more IoT, not web.



