
Sorry if this is too far off topic for this thread, but I'm curious if you've done any work on packaging JVM-based desktop apps, whether using JavaFX, Compose, or something else, using GraalVM Native Image. The idea of bringing Native Image's minimal startup time to desktop apps is really appealing to me.

Yes, there have been some experiments with that.

https://github.com/hydraulic-software/conveyor/discussions/6...

Gluon has a version of GraalVM that can compile JavaFX apps. They do indeed start impressively fast and use much less memory. It's still a road somewhat less travelled though. Someone also tried it with Compose but it didn't get further than a demo repo and a few comments on our Discord.

There are a few issues left to resolve:

1. General developer usability.

2. Native images aren't deterministic, which reduces the effectiveness of delta updates (see the sketch after this list).

3. Native images can quickly get larger than the JVM+bytecode equivalent, as bytecode is quite compact compared to machine code. So you trade off startup time against download time.
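
On point 2, here's a rough Node.js sketch of why determinism matters: a delta updater can only skip the chunks two builds share, and non-deterministic output shrinks that overlap. It uses naive fixed-size chunking for illustration (real tools use rolling hashes), and the file paths are hypothetical:

    const crypto = require("node:crypto");
    const fs = require("node:fs");

    // Hash fixed-size chunks of a build artifact.
    function chunkHashes(path, size = 4096) {
      const buf = fs.readFileSync(path);
      const hashes = new Set();
      for (let i = 0; i < buf.length; i += size) {
        hashes.add(crypto.createHash("sha256")
          .update(buf.subarray(i, i + size)).digest("hex"));
      }
      return hashes;
    }

    // Hypothetical paths to two successive builds.
    const oldChunks = chunkHashes("build-1/app.bin");
    const newChunks = chunkHashes("build-2/app.bin");
    const shared = [...newChunks].filter((h) => oldChunks.has(h)).length;
    console.log(`${shared}/${newChunks.size} chunks reusable by a delta update`);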


Is bytecode still more compact than native code when you factor in the ProGuard-like optimizations that Native Image does as you said in an earlier comment? Also, how does native code compare to bytecode once you compress it?

A small native image will be smaller than a jlinked JDK+JARs, but it doesn't take long for the curve to cross and the native image to become bigger. ProGuard doesn't fundamentally change that.

The native code produced by native image compresses very well indeed. UPX makes the binaries much smaller. But then you're hurting startup time, so it's not a good trade.

The best approach would be to heavily compress downloads, then keep the programs uncompressed on disk. Unfortunately most download/update systems don't support modern codecs, so you're very limited in how much you can reduce download times. Also, codecs like LZMA often decompress much more slowly, so on fast internet connections it can actually be better to use less compression rather than more. Really modern codecs like Brotli or zstd are much better, but browsers don't support them well for downloads.
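
To get a rough feel for the codec gap, here's a minimal Node.js sketch comparing gzip against Brotli (both built into Node's zlib module); point it at whatever binary you want to test:

    const zlib = require("node:zlib");
    const fs = require("node:fs");

    // Compare a legacy codec against a modern one on the same payload.
    const payload = fs.readFileSync(process.argv[2]); // e.g. a native binary

    const gz = zlib.gzipSync(payload, { level: 9 });
    const br = zlib.brotliCompressSync(payload, {
      params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
    });

    console.log("raw:    ", payload.length);
    console.log("gzip -9:", gz.length);
    console.log("brotli: ", br.length);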

None of this is especially hard to fix, but it's a quiet area of development. I think it'll need a bit of a paradigm shift to become a more popular way to do things in the desktop/CLI space.


Interesting observations on compression. As a young programmer, I used to compress executables, and maybe some DLLs as well, with UPX without a second thought. Later I understood that executable compressors prevented the OS's memory-mapped file I/O and demand paging from working as designed, and moved to only compressing the installer and update packages (another of my misadventures as a young programmer was doing my own updater with its own package file format).

I guess the ideal solution would be if the download server offered a few compression options negotiable at download time, via Content-Encoding or some other form of HTTP content negotiation, trading CPU time against bandwidth (the server would have to pre-compress, or at least cache the compressed versions, to scale), and then the download was stored as some kind of archive that could be mounted as a filesystem (this implies random access, and therefore no "solid" compression). Then delta updates would be done against that filesystem image. That way, you wouldn't have the "installing" step of uncompressing and copying files. Of course, that would require platform support that we don't have on Windows and macOS. At least I can dream about desktop Linux.
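
A minimal Node.js sketch of the negotiation half, assuming artifacts are pre-compressed at build time (the file names are made up):

    const http = require("node:http");
    const fs = require("node:fs");

    http.createServer((req, res) => {
      const accepts = req.headers["accept-encoding"] || "";
      // Artifacts are pre-compressed at build time; nothing is
      // compressed on the fly.
      const [file, encoding] =
        accepts.includes("br")   ? ["app.img.br", "br"] :
        accepts.includes("gzip") ? ["app.img.gz", "gzip"] :
                                   ["app.img", "identity"];
      res.writeHead(200, {
        "Content-Encoding": encoding,
        "Content-Type": "application/octet-stream",
      });
      fs.createReadStream(file).pipe(res);
    }).listen(8080);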


macOS actually has the best support for that. DMG files are mountable disk images and the contents can be compressed with LZMA or some Apple-specific codecs that are quite good. Opening them mounts them into the kernel and then there's random access. Even code signatures are checked JIT during page faults.

The main problems with DMGs are the poor UX and very slow mount/verification times. Users can start the app from the DMG and it will seem to work, but it will be unable to update. They forget to unmount the "drive" or don't know how. The format is also undocumented and a PITA to work with, as it's basically a full filesystem, which also has to be signed and notarized independently; that's super slow too, so it makes the whole build process a lot slower.

There's quite some low hanging fruit here that I might experiment with soon. I have a design in mind already.


I wonder if the bigger cause of stupid slowness in our current bad software is Electron, or misuse of declarative UI frameworks like React.

> the Windows guys were somehow allowed to reject the whole .NET concept

What I heard was that the higher-ups pushed .NET as a core part of the OS pretty hard during Longhorn, and it failed, leading to the Longhorn development reset around 2004 or 2005.

Disclosure: I was on the Windows team at Microsoft, but long after all of this happened (2017-2020), and I never learned about Longhorn history from the inside. I don't remember sources for what I said above, though I think Herb Sutter has talked about it.


That is basically correct. All the Longhorn .NET code I'd written in the past 2 years was junked. I never understood why the concerns that led to .NET being dropped for OS development weren't addressed before development started.

Yeah, I've always assumed there must have been some really traumatic experiences in the past for Windows and .NET to become so completely divorced. I mean, it'd have made sense to port or rewrite at least the pile of little utilities and subsystems that aren't performance sensitive. But I guess the backwards-compatibility story w.r.t. .NET generics killed off their interest in letting .NET become a part of the Windows API.

One of the few Windows utilities that was allowed to be based on .NET, in Vista and Windows 7, was Narrator, the screen reader shipped with the OS. But Narrator was rewritten in C++ for Windows 8. That was still before I got there, but of course I asked about the history, since I was on the Narrator team. I think the deciding factor for the rewrite was the sluggish startup of the .NET version. The need to run on Windows Phone 8.1 and Windows 10 Mobile was likely another factor, though I no longer recall whether my teammates specifically said so.

Sadly, back then the performance profile of .NET (Framework) was just not where it is today (even with C++/CLI there were warts, but unlike the sibling commenter, I only know what's publicly available).

I'm not so sure about that. Many people who write software have learned not to care about anything other than developing for the platform(s) where the users are, in the most expedient way possible. So unless there really is a mass user exodus from Windows, which I doubt will happen, we'll continue to develop for Windows.

False dichotomy, with what is likely extreme hyperbole on the JS side. Are there actual sites that ship 20 MB, or even 5 MB or more, of frameworks? One can fit a lot of useful functionality in 100 KB or less of JS, especially minified and gzipped.

Well, I'm working right now so let me check our daily "productivity" sites (with an adblocker installed):

  - Google Mail: Inbox is ~18MB (~6MB Compressed). Of that, 2.5MB is CSS (!) and the rest is mostly JS
  - Google Calendar: 30% lower, but more or less the same proportions
  - Confluence: Home is ~32MB (~5MB Comp.). There's easily 20MB of Javascript and at least 5MB of JSON. 
  - Jira: Home is ~35MB (~7MB compressed). I see more than 25MB of Javascript
  - Google Cloud Console: 30MB (~7MB Comp.). I see at least 16MB of JS
  - AWS Console: 18MB (~4MB Comp.). I think it's at least 12MB of JS
  - New Relic: 14MB (~3MB Comp.). 11MB of JS.
    This is funny because, despite being way more data-heavy than the rest, its weight is way lower.
  - Wiz: 23MB (~6MB Comp.) 10MB of JS and 10MB of CSS
  - Slack: 60MB (~13MB Compressed). Of that, 48MB of JS. No joke.

I sometimes wish I could spare the time just to tear into something like that Slack number and figure out what it is all doing in there.

JavaScript should generally be fairly efficient in terms of bytes per capability. Run a basic minifier on it and compress it, and you should be looking at something approaching optimal for what is being done. For instance, a variable reference can amortize down to less than one byte, unlike compiled code, where it ends up as 8 bytes (64 bits) at the drop of a hat. Imagine how much assembly "a.b=c.d(e)" can compile to, when the source is likely represented in less compressed space than a single 64-bit integer in a compiled language.

Yet it still seems like we need 3 megabytes of minified, compressed Javascript on the modern web just to clear our throats. It's kind of bizarre, really.
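
To put a number on that amortization claim, a quick Node.js sketch (the repeated statement is contrived on purpose, but it shows repeated references compressing to well under a byte each):

    const zlib = require("node:zlib");

    const src = "a.b=c.d(e);".repeat(1000); // ~11 KB of dense property/call code
    const gz = zlib.gzipSync(src, { level: 9 });

    console.log(src.length, "bytes raw");
    console.log(gz.length, "bytes gzipped");
    console.log((gz.length / 1000).toFixed(3), "bytes per statement");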


JS developers had this idea of "1 function = 1 library" for a really long time, and "NEVER REIMPLEMENT ANYTHING". So they will go and import a library instead of writing a 5-line function, because that's somehow more maintainable in their minds.

Then of course every library is allowed to pin its own dependencies, so you can have 15 different versions of the same thing, just so each can change its API at will.

I poked around some electron applications.

I've found .h files from OpenSSL, executables for other operating systems, and megabytes of large image files for some example webpage in the documentation of one project. They literally have no idea what's in there at all.


That's a good question. I just launched Slack and took a look. Basically: it's doing everything. There's no specialization whatsoever. It's like a desktop app you redownload on every "boot".

You talk about minification. The JS isn't minified much. Variable names are single letters, but property names and the like aren't renamed, and formatting isn't stripped. I guess the minifier can't touch property names because it doesn't know what might get turned into JSON.
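
A tiny illustration of that constraint (the property names here are invented): renaming a property is safe for local code, but it changes anything that crosses a serialization boundary.

    // Before mangling: the server expects this exact key.
    JSON.stringify({ description: "side effect handler" });
    // => {"description":"side effect handler"}

    // After a hypothetical property-mangling pass (description -> d):
    JSON.stringify({ d: "side effect handler" });
    // => {"d":"side effect handler"} -- anything parsing the old key breaks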

There's plenty of logging and span tracing strings as well. Lots of code like this:

            _n.meta = {
                name: "createThunk",
                key: "createThunkaddEphemeralMessageSideEffectHandler",
                description: "addEphemeralMessageSideEffect side effect handler"
            };
The JS is completely generic. In many places there are if statements that branch on all languages Slack was translated into. I see checks in there for whether localStorage exists, even though the browser told the server what version it is when the page was loaded. There are many checks and branches for experiments, whether the company is in trial mode, whether the code is executing in Electron, whether this is GovSlack. These combinations could have been compiled server side to a more minimal set of modules but perhaps it's too hard to do that with their JS setup.
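
For what that server-side specialization could look like, here's a hedged sketch using esbuild's define option to inline build-time flags so minification drops the dead branches. The flag names are invented, not Slack's:

    // build.js -- emit one minimal bundle per configuration.
    require("esbuild").buildSync({
      entryPoints: ["client.js"],
      bundle: true,
      minify: true,
      define: {
        "FLAGS.IS_GOVSLACK": "false", // branches on these become dead code
        "FLAGS.IN_ELECTRON": "false",
        "FLAGS.LOCALE": '"en-US"',
      },
      outfile: "client.web.en-US.js",
    });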

Everything appears to be compiled with a coroutines framework, which adds some bloat. I'm not sure why they aren't using native async/await, but maybe it's related to not being specialized based on execution environment.
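
For reference, the generator-based coroutine pattern looks roughly like this minimal runner (a sketch of the general technique, not Slack's actual framework); every async function becomes a generator plus wrapper code, which is where the bloat comes from:

    // Minimal coroutine runner over generators (the pre-async/await pattern).
    function run(gen) {
      return new Promise((resolve, reject) => {
        const it = gen();
        function step(result) {
          if (result.done) return resolve(result.value);
          Promise.resolve(result.value).then(
            (v) => step(,
            (e) => step(it.throw(e))
          );
        }
        try { step(; } catch (e) { reject(e); }
      });
    }

    // Usage: yield promises instead of awaiting them.
    run(function* () {
      const res = yield fetch("https://example.com/");
      return res.status;
    }).then(console.log);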

Shooting from the hip, the learnings I'd take from this are:

1. There's a ton of low hanging fruit. A language toolchain that was more static and had more insight into what was being done where could minify much more aggressively.

2. Frameworks that could compile and optimize with way more server-side constants would strip away a lot of stuff.

3. Encoding logs/span labels as message numbers plus interpolated strings would help a lot (a sketch follows this list). Of course the code has to be debuggable, but hopefully not on every single user's computer.

4. Demand loading of features could surely be more aggressive.
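
On point 3, a hedged sketch of the message-number idea; the IDs, endpoint, and strings are invented for illustration:

    // client.js -- ship a numeric ID plus the interpolated values.
    function log(id, ...args) {
      navigator.sendBeacon("/log", JSON.stringify({ id, args }));
    }
    log(42, "addEphemeralMessageSideEffect");

    // server.js -- the full strings live here, not in every user's bundle.
    const MESSAGES = {
      42: (name) => `${name} side effect handler registered`,
    };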

But Slack is very popular and successful without all that, so they're probably right not to over-focus on this stuff. Especially for corporate users on corporate networks, does anyone really care? Their competition is Teams, after all.


This is mind-blowing to me. I expect the majority of any application to be assets and content. And megabytes of CSS is something I can't imagine, not least for what it implies about the DOM structure of the site. Just, what!? Wow.

Holy crap, that's too much, and this is with adblock; without it, it would be even worse.

I just tried some websites:

    - https://web.whatsapp.com 11.12MB compressed / 26.17MB real.
    - https://www.arstechnica.com 8.82MB compressed / 16.92MB real.
    - https://www.reddit.com 2.33MB compressed / 5.22MB real.
    - https://www.trello.com (logged in) 2.50MB compressed / 10.40MB real.
    - https://www.notion.so (logged out) 5.20MB compressed / 11.65MB real.
    - https://www.notion.so (logged in) 19.21MB compressed / 34.97MB real.

Well, in TFA, if you re-read the section labeled "Detailed, Real-world Example": yes, that is exactly what the person was encountering. So no hyperbole at all, actually.

I agree with adding very little JavaScript, say the ~1kB https://instant.page/, to make pages snappier.
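
What a snippet in that size class does is roughly this (a simplified sketch; the real instant.page handles touch, data-saver mode, and more):

    // Preload a link's target the moment the user hovers over it.
    document.addEventListener("mouseover", (e) => {
      const a = e.target.closest && e.target.closest("a[href]");
      if (!a || a.dataset.prefetched) return;
      const link = document.createElement("link");
      link.rel = "prefetch";
      link.href = a.href;
      document.head.append(link);
      a.dataset.prefetched = "true";
    });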

I'm getting almost 2MB (5MB uncompressed) just for a google search.

This is how Flutter for web implements accessibility. It's been a while since I checked, but last time I tried using a Flutter app on the web with a screen reader, there were problems. I don't remember specifics.


Flutter on Web is still mainly not accessible when it comes to screen readers, despite what anyone wants to make you believe.

I last checked it a couple of weeks ago on my Android phone, and the result was so poor that I didn't check the other platforms.

You need to "opt in" to accessibility as a user: in the beginning, only one button is visible to screen readers, and you need to "double tap to activate" it to make the rest of the document readable by the screen reader.

Then, somehow they made a funky screen reader voice read stuff out loud, not my system's voice, with all the awkwardness and issues that come with it.

On lists, it only recognized the items that were on screen; if you had to "scroll", you were out of luck.

Now, I'm sure someone will come and tell me how I'm holding it wrong, but somehow every website, and even Flutter on mobile, gets accessibility mainly right by default; on Flutter web, the accessibility situation is very poor.


Do you think you'd write your 90s style widget set and desktop environment in Rust, or do you think Rust itself tends toward bloat (in non-embedded applications)? I know you use Rust in other projects, which is why I'm asking about that language specifically.


And would mental health issues have kept him from working on xz if that had been part of his day job?


I’ve been in similar burnout situations, and the difference between work and side projects did not matter to my mental health. The money was not an issue; it was headspace and fatigue.


The "users are mean" story is something that we can all do something about, and, honestly, I prefer stories with morals that most of us can actually put into practice.


Ah, well, I guess my thought is that we need to figure out what to do about the "take over from a burned out maintainer and then inject malware" attack.

It's not obvious to me what to do about it either. Which is why I'm concerned enough to talk about it.

That this attack was run by someone who had been participating in the project for possibly years before making the attack is not what I would have expected, and it makes the attack much harder to defend against. Before, I was thinking it was about awareness: "don't just turn over the thing to someone new who just showed up; they might be an attacker." But it's not that easy in this case.

I suppose "try to get users to be less mean, by doing our part by being less mean individually" is arguably one piece of strengthening defenses against this kind of attack, I guess, ok.


> I suppose "try to get users to be less mean, by doing our part by being less mean individually" is arguably one piece of strengthening defenses against this kind of attack, I guess, ok.

Yes, by denying cover traffic.


Here's my favorite example of a pop song (though arguably second-tier in actual popularity) from the early 90s that, in my opinion, had some sophistication in the music, if not the lyrics: "That's What Love is For" by Amy Grant: https://www.youtube.com/watch?v=lPOe8VKqgJM Observe the way it moves between keys, the chord progressions in the chorus, and the way the guitar solo leads into a key change. Contrast that with so many later pop songs that are just a repeating four-chord pattern all the way through. I mean, there have been four-chord songs for decades (there's that famous Axis of Awesome video), but the classic four-chord songs usually didn't repeat those four chords through the whole song.


Here's a more recent example of a popular song on the theme of love and marriage from the UK.

From a band that focuses on poetry and wordplay, and who now draw in collaborators from around the globe and routinely pack venues.

The music is minimal but, from song to song, constantly and inventively simplistic.

Official Video: https://www.youtube.com/watch?v=iKcbSOjIzjQ

Live @Nottz Arena: https://www.youtube.com/watch?v=8cGo4isi1Tw

Lyrics: https://www.google.com/search?q=mork+and+mindy+song+sleaford...

