With this and Ruffle, we will soon return to the days of Internet Explorer 6 but with 4k monitors!
All we need now is to run ActiveX. I think BottledWine can run Windows executables in the browser, how hard would it be to simulate the ActiveX bindings?
If you run Windows and are jonesing for some ActiveX action, and want to be really careful, you can run WSL, run LXDE on WSL, and run https://copy.sh/v86/?profile=windows98 on Firefox on LXDE[1].
That’s maybe enough layers of indirection to make ActiveX safe :-)
This reminds me of the time I was running Linux on my desktop PC back in high school. I'd just received a very suspicious .exe email attachment, and to demonstrate Linux's inherent safety (despite not running any antivirus software!) to a friend, I double-clicked it – and Wine, invoked through binfmt_misc, perfectly emulated the virus it contained, writing garbage to my home directory (fortunately mostly the emulated Windows directories in it!).
Context: a classic episode from the Flash animation series Homestar Runner (and my favorite) https://www.youtube.com/watch?v=q1F4W8-DqjE. Original release date January 2003.
Does anyone know a simple and portable Flash and Java player? A Firefox 3 in a folder, or something like that. Or maybe a small Linux in a virtual machine.
I had a book on DHTML in the late 90s long before ajax was a thing. It simply meant scripting the DOM. The MS version did have some fancy transition filters too.
They were. Microsoft Outlook Web Access was exactly that, it's why Microsoft invented XHR. Google Maps later got a lot of credit for kicking off "Ajax", but OWA laid the groundwork.
While we’re narrowing actuallys, it was Gmail that got the credit. And somewhat relevant to the article, I believe that was Google’s first major showcase of the Java-based GWT.
Kotlin is back on the menu! I love this language so much! I no longer use/need the JVM, so I stopped using it, but with WASM coming, I might pick this language up again.
The company I work for was a very early adopter of kotlin/js. During the pandemic we published an open source contact tracing web platform aimed at universities. https://github.com/studoverse/campus-qr
The stack is actually great to work with. Having every property in React typed is nice and sharing classes between front and backend ensures you can't accidentally send the wrong fields or types.
If only we could get a new version of Java for the browser. Something more of a scripting language that is not compiled but interpreted. Could probably knock out a proof of concept in a week, might call it Javascript or something...
In all seriousness, anything that spits out canvas for a simple textbox is completely misguided about what the web is.
>If only we could get a new version of Java for the browser. Something more of a scripting language that is not compiled but interpreted. Could probably knock out a proof of concept in a week, might call it Javascript or something...
I get the joke. It is sad, though, how Java didn't end up being the ubiquitous runtime for browsers... It started off so promising... Compiling source to bytecode is really fast in Java (and modern browsers generally do much more work compiling JavaScript). Even if the browser downloaded source code (and why not, even though bytecode is so much more compact and faster to parse?), you wouldn't even notice the compile-to-bytecode step.
> It's amazing how badly the Java ball got dropped.
It still boggles my mind that Sun Microsystems fully owned the tech for the only ubiquitous browser plugin for full fledged applications with > 95% deployment and somehow screwed it up and went out of business a few years later. Despite all their engineering prowess it seems like they just couldn't make Java work seamlessly and smoothly. I'd like to think it was just a hard engineering challenge, but as someone who went all in on Java WebStart for deployment at one point and then watched them take it from a highly attractive platform to actively ruining it within a few years, I have to attribute it more to incompetence.
I worked at Sun during that time period, and as much as they liked to pretend they were a software business, they were a hardware business whose primary feature was software, which is really, really not the same thing.
They never really figured out how to be in the software business if it wasn't in service of selling hardware. Such an amazing fail.
I recall those days. To be fair, it was asking a lot of the hardware at the time to run both a browser and Java itself. I also recall the JVM had painfully slow start times that weren't rectified for years. These were also largely the 1.x versions of the language to boot.
On top of it all, Java integration was really just an excuse to plop a grossly out-of-place app UI in the browser window. It would have been much better if we could have had deeper behind-the-scenes integration with the DOM layer.
HotJava was pretty much proof that you could have Java well integrated into the browser. If you can do the browser entirely in Java and run applets smoothly, then there is no reason why a "better" browser runtime/implementation couldn't be even smoother.
The problem was that the browsers were busy loading up on every feature they could (including JavaScript) while also providing Java runtimes that would load after the fact (and the Java load times were often much better than the browser load times, but since the Java runtime load was deferred, it was a much more painful experience).
You could have kept browsers lightweight by moving much of the stuff browsers were doing into Java, but the problem with that is that the browsers were in a race to differentiate from each other, and the whole point of Java was that the experience was uniform. So instead we got a rush of poorly considered browser capabilities that we were stuck with for an extremely long time.
I can't blame the browser makers... Sun never came up with a business proposition for them that encouraged focusing on a better Java experience.
I graduated college around that time. Microsoft held an event on campus that took two hours, explaining how great it is to work at Microsoft. A month later Sun came to campus and spent two hours telling us how awful Microsoft is and we shouldn't have anything to do with them. What they didn't do was tell us why we should want to work for Sun. On a larger scale, Sun did seem to lose focus and concentrate too much on getting Microsoft back for making a faster and more stable JVM. What they didn't do was come up with improvements that users wanted. At least not at the rate they should have.
Yeah, Sun stock really soared when they started going after Microsoft, but I kept pointing out that the valuation was from the distorted perception where they looked bigger because they were taking on a big foe. It was a sugar high that was obviously going to come crashing down if they weren't focused on delivering a better product.
* Build SPAs in Java with HTML templates and components
* Fast build times and batteries-included build framework
* Easy calls to Java web services: just invoke a method and Flavour handles marshalling and unmarshalling; you only see/use Java objects.
* Full-stack refactoring
* Built on TeaVM, so you get all the benefits of a mature, performant framework with support for threads and multiple JVM languages (bytecode-based transpilation).
Blazor has the full task API. Asynchronous tasks run just like Javascript promises.
Does TeaVM have true threading, or is it emulating threads via some higher-level concept? (I.e., if I have a multicore CPU, will TeaVM take advantage of multiple cores?)
Unless you're trying to do true CPU-intensive parallel computing inside the browser, where you need the power of real CPU-level concurrency, "real threads" in the browser offer no gain over the task API.
One of the nice things about SPAs is that they allow navigation without the latency needed to re-load all assets.
You can get there the old-fashioned way if all your assets are cached in the browser with long timeouts; but that implies that you're doing something like putting hashes in the asset URLs and hashing them as part of your build process. (This is why loading JavaScript and CSS from CDNs helps with performance.)
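A minimal sketch of that build step, assuming a single app.js as input (the naming scheme here is made up):

    import java.nio.file.*;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Hypothetical build step: name the asset after its content hash so the
    // URL only changes when the bytes do, allowing far-future cache headers.
    public class AssetHasher {
        public static void main(String[] args) throws Exception {
            Path asset = Path.of("app.js"); // assumed input file
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(asset));
            String hash = HexFormat.of().formatHex(digest, 0, 8); // short prefix is enough
            Path hashed = asset.resolveSibling("app." + hash + ".js");
            Files.copy(asset, hashed, StandardCopyOption.REPLACE_EXISTING);
            System.out.println("emit " + hashed + " and serve it with a long max-age");
        }
    }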
Otherwise, every time you navigate to a new page on a non-SPA, the browser still needs to send an If-Modified-Since request to the server.
As a developer, though, the BIG drawback of SPAs is that they require building much stricter APIs for everything. In a traditional server-side HTML page, you can quickly prototype something where the code that's putting together the HTML has access to privileged data that you can't expose through an API. (I.e., if you just need to prototype something, server-side HTML rendering code can talk directly to the database.)
> One of the nice things about SPAs is that they allow navigation without the latency needed to re-load all assets. [...] You can get there the old-fashioned way if [...] you're doing something like putting hashes in the asset URL and hashing them as part of your build process.
But that solves the problem, doesn't it? And it seems a lot simpler than implementing an SPA.
> And it seems a lot simpler than implementing an SPA
Depends on your design goals.
BTW, I'm not pro or against SPAs. They're just a tool, and like all tools, they have their advantages and disadvantages.
In general, the fact that you can update the page without a server-side round trip is a major advantage. Granted, you can also do that with server-side rendering; but then your rendering logic is defined in two places.
I think you're a little mistaken: WebAssembly runs entirely in the client and has no bindings to the server; as far as the server is concerned, such apps may as well be static HTML. This is Blazor WebAssembly. It is just as trivial to serve such an app from a Mac, Linux, or Windows server, or any other plain web server.
TeaVM seems to be the same, except it compiles Java to JavaScript, which means it has fast startup times.
Web browsers and their fundamental development foundations are best described as a set of mostly well-meaning compromises, and so even modern browsers have been updated to support all the other disagreeable things you have mentioned, even the ability to navigate the browser history within an SPA.
Ah, sure, for whatever WASM variant you're referring to. Whichever form of blazor I use pretty obviously makes calls back to the server to interact with client state on a per-event basis. If the backend has to restart, the clients all lose connection and die. I won't pretend to be an expert in the framework, or even C#. I had avoided both until recently, and this introduction hasn't made me particularly fond of either.
Server-side or WASM is an option when you create a Blazor project in Visual Studio.
You can write libraries and either approach can use the same library.
Server-side Blazor keeps a websocket open to send messages back and forth. If you can tolerate the latency, it allows your UI code to handle data that needs to remain secret on the server.
It's pretty fundamental to know the difference between what is server side and what is client side. I think you will have a lot of trouble understanding a framework without this.
Blazor WebAssembly requires you to make explicit Ajax-like calls to external web services in order to interact with a server and data. There is no server-side framework in WebAssembly for maintaining state; state is entirely client side.
Blazor Server basically uses WebSockets (with fallbacks) to communicate changes between client and server; this is far more efficient than using HTTP request calls. State here is server side, but it differs from all other frameworks in that state is transmitted over sockets, not HTTP requests.
I'm not convinced you have actually used any kind of Blazor, tbh.
At no point was I confused as to the server-side vs client-side nature of the software. I've been using what I assumed was the default of server-side blazor and was casually agreeing that, yes, a WASM compiled variant would obviously run fully in the browser.
>I'm not convinced you have actually used any kind of Blazor, tbh
It's just a question of time before the containerized crowd discover that the exact version of Chrome they test their website with can be compiled to the WASM+canvas platform, bundled with the website, and solve all browser issues forever...
Then we would have a horrible culture of polyfills and transpilers. We would have code that gets shipped unminified and with comments that weren't removed. Eventually, that language would get repurposed for other tasks such as server-side applications or even mobile applications (imagine shipping a separate runtime for an extra language in mobile apps, ugh) due to the growth of the web and the rise in the number of web developers.
That sounds like a horrible idea. I hope it never comes to fruition.
Amusingly, there were two different ways of running Java in the browser when it came out in the late 90s. One was Applet style where Java takes over a part of the canvas of the page and has to implement everything, like Flash or ActiveX. The other was headless and you had access to the DOM. Why no one built interesting applications with the latter is beyond me.
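For the curious, the headless flavor looked roughly like this, via the old LiveConnect JSObject bridge (a sketch from memory; the element id "out" is made up):

    import java.applet.Applet;
    import netscape.javascript.JSObject; // shipped with the browser Java plugin

    // Sketch of an applet that paints nothing itself and instead
    // scripts the surrounding page's DOM.
    public class DomApplet extends Applet {
        @Override
        public void start() {
            JSObject window = JSObject.getWindow(this);
            JSObject document = (JSObject) window.getMember("document");
            JSObject div = (JSObject) document.call("getElementById", new Object[] { "out" });
            div.setMember("innerHTML", "Hello from Java, no canvas takeover needed");
        }
    }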
>The other was headless and you had access to the DOM. Why no one built interesting applications with the latter is beyond me.
Frameworks, HTML5, or even DHTML/AJAX didn't exist at the time. If you were already paying the cost of invoking Java/Flash/ActiveX, why would you access the DOM and pay an extra rendering price for an interface suited to documents, not applications?
The idiomatic and library-supported way to make a Java web application was to build a richer interface in Java itself. Why reinvent the wheel to make an HTML interface, only to incur a performance penalty?
EDIT: Java applets failed because of Java's poor integration and (at the time) performance and sandbox security. That had little to do with the DOM interface. Using that would have made things worse.
On the flip side, is there a JVM-based WASM interpreter? JNI is great and all, but I love the thought of compiling native code to WASM to provide a single package for all platforms without extra redundant packages.
> But the point of JNI is to provide language bindings. WASM doesn't automatically fix the need for those.
True, I was imprecise. I'm fine with needing the language bindings / JNI. My main beef is with the platform-specific native binaries that JNI binds to, and the complications (size and platform targeting) that they introduce to the build and distribution process.
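A sketch of the distribution pain in question (the library name "mylib" and the /native resource layout are hypothetical):

    import java.io.InputStream;
    import java.nio.file.*;

    // The usual JNI distribution dance: one shared library per OS/arch
    // pair, all bundled into the jar and picked at runtime.
    public final class NativeLoader {
        public static void load() throws Exception {
            String os = System.getProperty("os.name").toLowerCase();
            String arch = System.getProperty("os.arch"); // e.g. amd64, aarch64
            String file = os.contains("win") ? "mylib-" + arch + ".dll"
                        : os.contains("mac") ? "libmylib-" + arch + ".dylib"
                        : "libmylib-" + arch + ".so";
            // Every artifact here has to be cross-compiled, tested and shipped
            // separately: exactly the redundancy complained about above.
            try (InputStream in = NativeLoader.class.getResourceAsStream("/native/" + file)) {
                Path tmp = Files.createTempFile("mylib", file.substring(file.lastIndexOf('.')));
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                System.load(tmp.toAbsolutePath().toString());
            }
        }
    }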
I think what's needed here is actually just a cross-platform binary format that can be linked and loaded without using the OS native linker, along with an integration with cross-compilers. Java apps mostly use JNI to either access operating system specific APIs, or to access high performance hardware the JVM doesn't directly expose (like simd), and WASM doesn't help you with either.
And then you also have the problem that WASM/WASI is basically a new OS which is incompatible with every other, so it requires ports.
On the other hand, if you had something like ELF, PE or Mach-O but designed for cross-platform distribution, with a nice and convenient cross-building toolchain and a portable dynamic linker, then it'd be a lot easier to compile and distribute once but use on every OS. Sort of like Mach-O fat binaries, but more fine-grained and not Apple-specific.
Graal also has some sort of capacity for that in the way it can execute LLVM bitcode. You can do it either inside a sandbox that looks like a virtualized Linux (syscalls are recompiled into upcalls into the JVM), or it can be allowed to access native code. There is a toolchain that you simply point your PATH at and it produces the necessary files. But, it's not well known, LLVM bitcode still requires JIT compilation before it can execute on the CPU, and the toolchain doesn't really cross-compile in the way I mean above. Plus bitcode isn't a stable format.
> On the other hand, if you had something like ELF, PE or Mach-O but designed for cross-platform distribution and which had a nice and convenient cross-building toolchain and a portable dynamic linker, then it'd be a lot easier to compile and distribute once but use on every OS.
I mean... the .NET VM and the JVM kind of achieve this, but I think I get what you mean: you would either have to build the environment these binaries run on for each OS, and recipients would have to install it, or you hope the OS vendors adopt your standard. I would love to see such a project though.
Bonus points if it can be similar to DMG files where you drag it to your "Applications" folder to install, delete it to uninstall.
I'm thinking of something a bit lower level than app packaging. But yeah think something like "jar for native code" but with the ability to directly dlopen() and dlsym() from it. So the file could contain native code that's OS or CPU specific and the dynamic linker (bundled with your app) would be able to successfully load the right one and get symbols from it. It'd just be a more convenient way to ship code that can fully utilize the CPU and OS, without excessive duplication (e.g. think about merging the text sections and symbol tables together but using something a bit like symbol versioning to keep the code for each OS separated).
Apple already went down that road with bitcode and they abandoned it, or so I thought.
The problem is that you can't solve the problem of developers not adopting new hardware features by abstracting the hardware to a lowest common denominator, which is what WASM does and what it will always do (because the web guys have no interest in letting people write ARM or Intel only web pages, let alone NEON only web pages). You can see this problem in the writeup where the use case is posited to be SIMD, but that's only one of many possible features CPU vendors could add. What about all the others? Now instead of waiting for Android devs to adopt new features, you have to wait for WASM to get it and users to update their OS and Android devs to then adopt the new features as well. This doesn't sound faster.
So I'd guess that a better investment would be in better developer toolchains and emulators.
There are other problems with that approach:
1. How can devs measure performance if it's the Play Store that compiles the app? Upload it, download it and measure that? You never know what you're gonna get and the app may even be recompiled behind your back without you even doing a release.
2. An example of a painful transition was 32->64 bits, said to be hard due to the need for doubled up testing. Well, WASM doesn't fix that and it's not obvious how it could. It has a 32 and 64 bit variant and the need for testing both versions is driven by the way C/C++ work, not the way native code is expressed.
3. Even just abstracting SIMD isn't all that easy. The Java guys have spent years designing a SIMD abstraction that covers up the differences between AVX and NEON. C++ doesn't have any such abstraction; you literally code against intrinsics for the specific instructions. So it would only work if you assume a really awesome auto-vectorization compiler module, but the JVM guys also spent years trying that and eventually gave up. The JVM can auto-vectorize some things, but exploiting the full power of SIMD units automatically is too hard.
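For reference, that abstraction is the (still incubating) jdk.incubator.vector API. A minimal sketch of a SIMD kernel written against it, not taken from the parent comment (compile with --add-modules jdk.incubator.vector):

    import jdk.incubator.vector.FloatVector;
    import jdk.incubator.vector.VectorSpecies;

    // The JDK's portable SIMD abstraction: the "species" picks the widest
    // vector shape the local CPU supports (AVX, NEON, ...) at runtime.
    public class Saxpy {
        static final VectorSpecies<Float> S = FloatVector.SPECIES_PREFERRED;

        static void saxpy(float a, float[] x, float[] y) {
            int i = 0;
            for (; i < S.loopBound(x.length); i += S.length()) {
                FloatVector vx = FloatVector.fromArray(S, x, i);
                FloatVector vy = FloatVector.fromArray(S, y, i);
                vx.mul(a).add(vy).intoArray(y, i); // y[i] = a*x[i] + y[i], one lane-width at a time
            }
            for (; i < x.length; i++) { // scalar tail
                y[i] = a * x[i] + y[i];
            }
        }
    }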
> Apple already went down that road with bitcode and they abandoned it, or so I thought.
Indeed, though it was mostly caused by relying on their own fork of LLVM bitcode to achieve the stability that upstream LLVM bitcode doesn't provide, and eventually getting fed up with wasting development resources keeping it up to date with upstream.
As for WASM/NDK as replacement for JNI, I don't think it is a good idea, mostly due to how bad the overall NDK experience happens to be, and this won't make it better anyway.
Regarding 3, .NET does much better in this regard, with System.Numerics and processor-specific intrinsics.
Usually people tend to forget the CLR was designed for C++ workloads as well, and it is reflected in its bytecodes.
As for WASM in general, I think its place is in the browser; anything else is just yet another take on the bytecode deployments we've had since the dawn of computing.
Maybe someone should write a P-Code to WASM compiler, to bring UCSD Pascal back into modern times, and make Pascal cool again.
An interpreter is slow; you need to run native. There is wasmer-java, which is a JNI binding to the Wasmer runtime.
Then Graal has WASM support, but you need Graal, not just any JVM (you can also run on a stock JVM, but only in interpreted mode with some Graal libs, and that mode is not supported).
And both of these are ok-ish for pure functions. If you need callbacks into your host (typically needed for plugins), you are screwed.
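For the pure-function case, wasmer-java usage looks roughly like this (following its README from memory; a sum.wasm exporting sum(i32, i32) is assumed):

    import org.wasmer.Instance;
    import java.nio.file.*;

    public class WasmerDemo {
        public static void main(String[] args) throws Exception {
            // Load a module that exports a pure function `sum(i32, i32) -> i32`.
            byte[] wasm = Files.readAllBytes(Path.of("sum.wasm"));
            Instance instance = new Instance(wasm);
            Object[] results = instance.exports.getFunction("sum").apply(5, 37);
            System.out.println(results[0]); // 42
            instance.close();
        }
    }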
For anyone looking for a free, similar alternative (no Swing), I can recommend Doppio. I used Doppio to provide an online version of the open source language interpreter I am building; you can try it here: https://www.timestored.com/jq/online/
The main pain is that it's really, really slow to load on Firefox.
I was hoping to see AOT, but they say that currently it is just an interpreter + JIT. This is beneficial for compatibility but counterproductive for speed. Nevertheless, an interesting project.
In the article they explain that the previous version (CheerpJ 2.0) used AOT, but the "CI setup was often felt as an unwelcome burden, and many of enterprise users found it hard to run the actual AOT compiler binaries on their controlled environments". So they decided to go with interpreter + JIT instead of AOT...
Considering the time limits on JIT compilation, I'd say almost certainly yes. AOT can analyze code basically forever (ask C++ compilers), so there's a whole class of time-intensive optimizations that a JIT simply cannot do.
I'd personally always bet on AOT performance beating JIT; convincing me otherwise is what would require evidence.
The tradeoffs are more subtle. A JIT compiler has tighter timing constraints, but more information about the runtime behavior of the code.
As a counter-point to your argument, consider a highly dynamic language such as JavaScript, in particular the '+' operator. An AOT compiler would have no choice but to dispatch to a generic, heavyweight implementation that can handle any possible combination of inputs. A JIT compiler, on the other hand, can observe the runtime values used and specialize using inline caches, potentially down to just a few instructions.
Probably depends on what you are optimizing for. If the metric is app startup time, then AOT is super beneficial and always beats JIT, no strings attached.
Why is this being downvoted? I find it pretty clear that there is a trade off between startup time and performance. You can also observe this sort of behavior by tweaking -XX:TieredStopAtLevel when launching Java. You get marginally faster startup times but at the cost of run-time performance. And anecdotally I've seen native images start up faster but also have marginally slower runtime performance than launching a JVM and letting C2 do its thing.
It's some evidence. Perhaps the HotSpot approach outperforms AOT on its own; it could be true.
Somewhat equivalent to HotSpot but in AOT would be GraalVM's profile guided optimization. If runtime information is what brings JIT its advantage, then PGO could bring the same advantage to AOT. I don't have any hard data though.
Why is there a time limit? Doesn't Java do multi-tiered JITting with background compilation? It should be able to take as much time as needed in a background thread while code is executing in a less optimized tier.
Java has two tiers, but even so it can't take an unbounded amount of time doing every kind of analysis, as a big selling point of JIT compilers is optimizations based on assumptions, with some low chance of needing a deoptimization and reoptimization later.
For what it's worth, there is a vendor that uses LLVM as a JIT compiler (Azul's Falcon), but that really is not the bottleneck in most Java applications.
Using anything other than JavaScript in the browser is absolute madness, and I'm a Python dude.
Learning the limitations, workarounds, and build processes for using anything other than JavaScript is going to take longer than just learning JavaScript. I get it for moving large C++ applications onto the web; I get it for occasional fiddly bits and pieces like maybe a codec. But new web front-end code in anything other than JavaScript? Nah.
For whatever it's worth, ClojureScript is something I can accept: JavaScript was made as an attempt at a Lisp with C-like syntax, so using a hosted language which is a Lisp only makes sense, especially since you can then share code with your backend.
And I am no Lisp enthusiast; I just find the abstraction very minimal there, plus there's good interop with the host language.
The thing with these kinds of ports is that they do some 'neat stuff', but ultimately they don't come close to the real thing; often, not close enough to be a material substitute.
The JVM is a very sophisticated and nuanced thing, 'duplicating it' and the standard libs is a very big deal.
These are cool projects though, and they have their uses.
"CheerpJ is based on an unmodified OpenJDK environment, guaranteeing the same behavior on the browser compared to a native JVM. It includes many emulation layers to ensure Filesystem, Networking, Printing, Clipboard and many other subsystems work seamlessly."
Did you actually manage to find anything that works in OpenJDK but not in CheerpJ 3.0?
WASM has memory, and any native code compiled to WASM will allocate in WASM memory. CheerpJ's runtime is written in C++, which, when compiled to WASM, knows how to allocate memory, so there's no reason why direct buffers won't work.
I did. As I said, those are likely 'virtual WASM direct buffers', which are not really direct (memory) buffers, and therefore not really what they are meant for, though it's impossible to tell.
Given how WASM works, I'm pretty sure (but not certain) that anything that uses nio to interface with anything else is not going to work, other than maybe things which are 'fully contained'. The 'test' would be to dynamically load a lib/dll and see if the references are valid, which I doubt; firstly, that'd imply that native dlls/libs are possible at all, which I also doubt.
These ports generally don't work 'as expected'; the whole endeavour is about identifying the parts that work differently, and if/how to work around them or live within the constraints.
So your complaint about a JVM environment in the browser is that it can't load DLLs from your Operating System?
Anyway, let's assume you're talking about a DLL or native lib it gets from the same origin. That also should work totally fine, because CheerpJ implements a virtual file system. I suspect even JNI will work fine. Anyway, doing stuff like that in a browser is way out of the norm, so it seems to me you're looking for a challenge and are not really interested in whether this can be used to run nearly every Java application out there, which it seems it can easily do.
No, it's not going to load native libs, and JNI almost certainly does not work as normal. nio, which is widespread in Java, depends on 'native' buffers (actually native) and is a core part of Java. This is just one of a few examples of where 'this isn't really Java'. 'Most' apps will not run out of the box in this WASM config.
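For reference, this is what direct buffers are for natively: memory outside the Java heap that the OS or JNI code can touch without copying (a minimal sketch; data.bin is a placeholder):

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.*;

    public class DirectDemo {
        public static void main(String[] args) throws Exception {
            // Allocated outside the Java heap, addressable by native code.
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
            try (FileChannel ch = FileChannel.open(Path.of("data.bin"), StandardOpenOption.READ)) {
                ch.read(buf); // the OS writes straight into the native allocation
            }
            System.out.println("read " + buf.position() + " bytes");
        }
    }

Whether emulating that allocation inside WASM linear memory counts as "really direct" is exactly what's in dispute here.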
Clearly several people have no idea what you’re talking about, so either you’re bad at explaining or everyone around you is an idiot. I’ll let you decide which option you think is more likely.
Exactly. These 'ports' are more intellectually exciting than they are pragmatically useful, but they definitely can be useful; it's a matter of having the right product parameters around what they are good for and what they are not.
This is really an exercise in engineering thinking vs. product or 'solution oriented' thinking. You can usually tell where people's heads are at by how they react to these kinds of things.
Should note that 'people doing stuff because' is a core part of organic development, most things would not exist without that ethos floating around.
Have you looked at the vendor’s website? This doesn’t strike me as a purely academic/intellectual "because we can" project at all.
And you still haven’t explained in any way why the lack of DirectBuffers or "IO" is a disqualifier for being able to run legacy applets in a modern browser without a JRE or other plugins.
Just the opposite: the company has been around for 10 years, and they are an intellectually oriented professional services provider. This is quite common. The VM they built may or may not be a direct demand from a customer; my bet is it was an idea that someone had, and they decided to build it to see if there was customer buy-in. There might be a niche need.
The problem with 'a customer can run an applet in a browser' as a solution is that it won't materially work for most wide, public-style deployments; there will be any number of snafus. And for more 'internal'-style IT deployments, there are just easier, more robust ways to deploy a Java app.
I have explained how direct buffers are a problem to anyone who understands what a VM and direct buffers are used for (they are inherently about bridging to native memory), which obviously is not going to be accessible in a WASM context.
Actually, WASM itself is a great analogy for what is going on here. It's 'perennially almost there' tech, a neat idea that has limited value in the real world; it's been around for so long and there isn't that much activity, or at least not commensurate with what it's supposed to be able to do. By the time WASM catches up, the performance of JS in V8 gets so much better that it becomes 'good enough', which obviates the need for WASM. And so on it goes.
Something may eventually come from JVM in WASM, but what we see is a very early experiment.
> By the time WASM catches up, the performance of JS in V8 gets so much better it becomes 'good enough' which obviates the need for WASM.
One main benefit of WASM arguably isn't performance of newly developed applications, but the possibility to run arbitrary old applications (or new applications built on massive stacks of historically grown libraries) much more efficiently than emscripten alone allows.
> There might be a niche need.
Of course it's a niche need, Java applets haven't been mainstream for many years now! But as with any technology, a long tail exists.
> I have explained how 'Direct Buffers' are a problem to anyone who understands what VM and Direct Buffers used for (they are inherently about bridging to native memory), which obviously is not going to be accessed in WASM context.
I've only ever used Java on the server – are "Direct Buffers" commonly used by Java applets not bridging to JNI (which obviously won't run in a pure Java emulation/compatibility layer)?
Might be worth mentioning that Java applets could call into native DLLs (which was a major security risk), while there's no way to do the same from WASM.
They could only call into the JVM runtime, not arbitrary DLLs. WASM can do the same, that's how it talks to the browser. The difference is mostly the kernel sandboxing that browsers do.
No, you could definitely call into native DLLs from a Java applet. Everything had to be code-signed, and at some point a permission popup was introduced, but it totally worked (that's how I "integrated" a Windows D3D game written in C++ into browsers ca. 2010, until Chrome removed Java plugin support).
Well I'd go with Swing and SB for the back end. Opens up the possibility of a monorepo though! Also that'd mean web and desktop code would be the same. That's awesome
Yes. I also just realized I can now use Hibernate to abstract all my db calls away in case I want to change db backend later. This is a life changing tech!
It strikes me, though, that all these "managed-memory language VM inside a WASM VM" projects keep having to build out garbage collection support (or run an existing GC inside WASM) for their runtimes. It's a bit of a stacking-turtles problem, and seems wildly inefficient. I keep wondering how far off GC support in WASM runtimes is.
In the meantime, while this is neat, I feel like it's a misuse of what WASM is most suited for: cross-compiling C/C++ etc. binaries to a "web" target. WASM as a container for multiple language runtimes seems wildly inefficient.
In the case of CheerpJ the virtual machine itself is pure C++ compiled to WASM, but the JIT-ted code and the objects are pure JS. Garbage collection is natively handled by the browser.
As WASM GC stabilizes we will consider supporting it as well.
Ah, this is an interesting approach. Does JITing over to JS introduce problems with concurrency support though, given the lack of threads in the JS runtime?
WASM garbage collection is coming pretty soon. Currently it's behind a feature flag in Chrome; I think that flag is due to be removed in a few months. I don't know when Firefox and Safari will roll out support, but I assume they won't be that far behind.
One of the things I'm following that will use garbage collection support is the Kotlin WASM compiler, which is currently available as an experimental option with Kotlin 1.8 and beyond. And of course, with wasmer and wasmtime it will also be possible to target edge computing with this eventually (as they roll out GC support in the coming months). Fair warning: this stuff is very cutting edge currently. Wait six months or so for it to become more usable. The upcoming Kotlin 2.0 will probably see some usable early-access versions.
There is already some experimental WASM support for Jetpack Compose Multiplatform in the works as well. Currently you'd mostly use the Kotlin/JS compiler for that on the web, but pretty soon you should be able to compile to WASM instead. This will work both with components that render to a canvas and with HTML-based web components. Advantages of Kotlin/Wasm over Kotlin/JS: faster compilation and loading speeds. And having used Kotlin/JS, faster compilation speeds are very much welcome.
If you are into multiplatform UI development, another interesting thing here is the iOS support in Compose. So with Compose you should be able to target Android, iOS (native), web (JS and WASM), and desktop (JVM). It's an interesting new framework with not a lot of alternatives beyond React Native and Flutter so far. And of course Kotlin is already popular on Android and for cross iOS/Android development. This stuff will take some time to mature, but there was a lot of buzz around it at KotlinConf a few weeks ago. And with WASM and WASM components, you'd be able to do cross-platform UI development and integrate the same components via WASM on each platform, rather than having to engineer bespoke components for each platform.
Last I checked (a year or so ago, but the bug report hasn't been updated to indicate the current release differs much), CheerpJ provides the full Java Accessibility API; however, per the bug report on this issue, the API calls are "not interpreted and converted by CheerpJ to corresponding browser functionalities". So from a practical standpoint it currently lacks support, but they have stated they may improve this in the future. I haven't had the time to attempt it, but based on the API they expose [when I looked, may have changed since] to talk to the runtime from the browser, it looked feasible but not fun to wire up at least basic screen reader / automation support.
From my experience with a Java application converted to running on its JVM, the performance is on par with running it previously hosted on the Oracle JVM via Internet Explorer. It is good, except I hate that I can't copy using ctrl-ins or shift-del, because it requires ctrl-c pressed after initiating the copy (presumably most people just press ctrl-c twice, so it's a minor inconvenience; or maybe it's a quirk of our installation).
CheerpJ is being sold as a commercial product, specifically for running the kind of line-of-business application that people need to be able to use to do their job. So it seems like a question worth asking.
Who would want a lazily ported Java desktop UI in the browser?
Web apps are bad enough as it is. But downloading massive jar files and executing them at a major performance penalty? No thanks. I will actively avoid websites that use heavyweight tech like this.
This lets you run an existing Java web applet that you already have, from way back when those were supported, in a modern browser with real sandboxing. Even if you don't have the original source. It's for legacy stuff, not new projects.
Their AOT solution was clearly too complex for some potential clients; that seems like the real reason they rewrote it all. Support for higher Java versions is mostly a neat side benefit (though Java 9 did still support applets, from my understanding, just deprecated).
A JVM in the browser wouldn't be so bad these days, since we have things like same-origin policies and sandboxing. Something like it could run headless only, have no AWT/Swing, and have an API to manipulate the DOM plus bindings to JS.
    public class JavaFiddle {
        public static void main(String[] args) {
            // Spawns a second thread; both lines print if threading works.
            new Thread(() -> System.out.println("Hello World from another thread!")).start();
            System.out.println("Hello World!");
        }
    }
Being able to execute Java threads does not imply that they are being mapped to actual OS-level threads (or any other concurrent construct).
Back in the days of J2ME, some embedded JVMs emulated threads at the VM level, for example (due to the OS they were running on itself not supporting any threads, which was true for e.g. Palm OS and many non-smartphone OSes). This was called "green threads", as far as I remember.
Good job, and kudos to you guys.
BUT, it's sad to see that Java (as a language, not the VM) still exists.
It's such a bad language compared to many other existing options.
Very small language initially, with some 1990s warts that we all understand, and new features are only added long after they've proved their worth elsewhere.
This means a lot for long-term hiring and maintenance.