
I think the premise is wrong when there still doesn’t exist a core set of frameworks that are ABI-stable on Linux. On competing platforms there are way more frameworks out of the box (CoreImage, CoreAudio, CoreML, SceneKit, AppKit, etc.) and they don’t break as often.

I know Linux has fun things like Snap and Flatpak, but that really solves the problem with a bit of infrastructure and package management instead of doing it in the frameworks and programming languages themselves (which are what you are asking people to write apps in).




The Linux enthusiast community actively fights against anything like this because they want everything to be modular and made to fit specific applications.

Linux does have a de facto set of standards. They're not quite as stable, because things change and old stuff gets deprecated, but it's better than it looks, and with Snaps you can at least partially solve the issue.

But people choose distros that don't have those standards, and then you lose half your potential users if you don't support all the niche configurations.


The so-called "Linux enthusiast community" is better described as the Linux corporate enterprise community. Understanding this makes your comment make a lot more sense. The "specific applications" are in fact merely the priorities of the giant corporations who fund the overwhelming majority of Linux development for the purpose of accumulating profit.


That's certainly not my experience. Practically every rant about systemd includes the idea that evil corporations are trying to force it on everyone rather than letting them use some random login daemon whose last commit was in 2008.

Corporate interests are generally biased in favor of avoiding hyper-specific modifications that mess with their own economies of development scaling, not seeking them out.


Rants are very different from actually influencing the software. The same people rarely care for the alternatives to systemd.


The corporate enterprise community is who seems to create most of the attempts at user-friendly, one-size-fits-all, stable standards, because... that's what sells, and not having to support lots of different OSes saves development budgets, as Windows has shown.

Many of the non-corporate hobbyists are fine with everything needing tweaking and maintenance; they chose Linux specifically because they want to tweak stuff.


Not sure what point you're trying to make but the "non corporate hobbyists" are ineffective to the point of irrelevance when it comes to Linux core development. Everything they do is downstream from the influence of giant corporations.


I had read somewhere that Win32 (via Wine or Proton) is the most stable target for Linux right now.


>Win32 (via Wine or Proton) is the most stable target for Linux right now

Tangential, Winamp 2.xx from the '90s runs and plays MP3s just fine on Windows 11 today. There are better apps for that today, but I still use it because nostalgia really whips the llama's ass.

Pretty wild that the same thing is not the norm in other OSs.

Even wilder is that I still have my installed copy of Unreal Tournament 99 from my childhood PC copied over to my current Win 11 machine, and guess what, it just works out of the box, 3D graphics, sound, everything. That's nearly 25 years of backwards compatibility at this point.

If that's not stable, I don't know what is.


Probably the most underrated feature of Windows, ever.


It really is mindblowing that Windows 11 is still capable of running 32-bit programs written for Windows 95, that's 28~29 years of backwards compatibility and environmental stability.

If we look back to programs written for Windows NT 3.1, released in 1993, and assume they run on Windows 11 (because why not?) then that's 30 years of backwards compatibility.

Did I say mindblowing? It's downright mythological what Microsoft achieves and continues to do.


There's no guarantee that all the older apps from the Windows 9x/XP days will work today, as some apps back then, especially games, made use of non-public/undocumented APIs or just straight-up hacked the OS with various hooks for the sake of performance optimizations. Those are the apps guaranteed not to work today even if you turn on compatibility mode.


Personally I've had little luck even running XP applications on Windows 7. More generally, going by the difficulties experienced by many companies and organizations in the transition from XP to 7, it's hardly an isolated problem. Perhaps Windows maintains the best backwards compatibility of any mainstream OS; however, I would hardly describe it as "mythological".


The most fascinating example of that is SimCity: they noticed it didn't run under Windows 95, because Windows 95 reused freed memory pages and SimCity did a lot of use-after-free, which would have been fatal under Win95. Microsoft developers, however, knew that people would blame Microsoft, not Maxis, so they added an extra routine in the memory manager which detected SimCity and then didn't reuse freed memory as aggressively.

I don't want to estimate how many such hacks they accumulated over time to keep things as compatible as they could.


NTVDM could have been ported to 64-bit Windows, but MSFT declined to do so. Leaked Windows source code shows it would have worked[0].

That would have given 16, 32, and 64 bit compatibility.

[0] https://github.com/leecher1337/ntvdmx64


I don't get it. Would NTVDM have been better than, say, using DOSBox?


Yes, because it's integrated far more closely with the system.


Yes, it would allow creating pipes between 16-bit NTVDM processes and native 64-bit processes.


Linux can do this; binaries from the 90s work today.

Something like xv (last release: 1994, although the binaries were built against Red Hat 5.2 from 1998) still works today, and the source still builds with one very minor patch, last time I tried it.

The problem running the binaries is:

    % ldd ./usr/X11R6/bin/xv
        linux-gate.so.1 (0xf7f82000)
        libX11.so.6 => /usr/lib32/libX11.so.6 (0xf7e22000)
        libjpeg.so.62 => not found
        libpng.so.2 => not found
        libz.so.1 => /usr/lib32/libz.so.1 (0xf7e08000)
        libm.so.6 => /usr/lib32/libm.so.6 (0xf7d0b000)
        libc.so.6 => /usr/lib32/libc.so.6 (0xf7a00000)
        libxcb.so.1 => /usr/lib32/libxcb.so.1 (0xf7cde000)
        /lib/ld-linux.so.2 => /usr/lib/ld-linux.so.2 (0xf7f84000)
        libXau.so.6 => /usr/lib32/libXau.so.6 (0xf7cd8000)
        libXdmcp.so.6 => /usr/lib32/libXdmcp.so.6 (0xf7cd1000)
And Windows has exactly the same problem, but the tradition is to ship these things with the application rather than just assume they're present on the system. And you can "fix" it by getting old versions, or even:

    % ln -s /usr/lib/libjpeg.so.8 libjpeg.so.62
    % ln -s /usr/lib/libpng16.so.16 libpng.so.2
You'll probably run into trouble with PNG and JPEG files, but e.g. loading/saving GIF and whatnot works fine. Note how libc and libX* work out of the box.

tl;dr: much of the "Windows compatibility" is just binaries shipping with all or most dependencies.

Try it yourself: http://www.trilon.com/xv/downloads.html


Much of the Windows compatibility is "just" stable API for Windows controls, GUI event handling loops, 3D graphics and sound (DirectX). Linux has stable API for files and sockets (POSIX), but that's all.


This is also the biggest difference between proper desktop operating systems since forever, and the fragmented Linux distributions.

Available API means the whole stack, everything needed to write applications end to end, regardless of their purpose, not CLI and daemons.


And I am saying you don't need to rely on any of that. You can just ship it yourself (statically link, or use LD_LIBRARY_PATH). That's what Windows applications that rely on GTK or Qt do as well, and it works fine; it works fine for Linux too. The basics (libc, libX, etc.) are stable, and the Linux kernel is stable.
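For example, a minimal launcher script along these lines (paths and names are illustrative, just a sketch of the LD_LIBRARY_PATH approach) is often all it takes:

    #!/bin/sh
    # Find the directory this script lives in, then prefer the bundled
    # libraries over whatever the distro happens to provide.
    HERE="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"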

And this is what Windows does too really, with MSVC and dotnet and whatnot redistributables. It's just that these things are typically included in the application if you need it.

It's really not that different aside from "Python vs. Ruby"-type differences, which are meaningful differences, but also aren't actually all that important.


You're not really getting the point; no wonder Linux desktop development is what it is, and Google needed to step in.


Stop spreading FUD; X and OpenGL have maintained stable ABIs. There is Wayland now, but even that comes with Xwayland to maintain compat. Sound is a bit more rocky, but there are compatibility shims for OSS and ALSA on newer audio architectures.


Stop claiming that I'm spreading FUD and show me at least one Linux app that was compiled to binary in 1996 where exactly that binary still runs under a modern Linux desktop environment and has a visual style similar to the rest of the built-in apps.

Got no counterexamples? Then it's not FUD at all, but the plain truth.


> It's downright mythological what Microsoft achieves and continues to do.

This seems like it was meant in a positive way, but if compatibility with your system requires "mythological" effort, I really don't think that should be seen as a good thing for your system.

It's also worth noting that backwards ABI compatibility only matters when people limit their software by not distributing the source. Early UNIX software can run fine on modern GNU by just compiling it.

API Compatibility Is All You Need ;)


> It's also worth noting that backwards ABI compatibility only matters when people limit their software by not distributing the source. Early UNIX software can run fine on modern GNU by just compiling it.

Have you ever tried building decades old programs from source? It's not as easy as you claim.

Here's source for grep from v6 unix. I'd be interested to know the smallest set of changes (or flags to gcc) needed to get it to compile under gcc on linux and work.

https://github.com/takahiro-itazuri/unix-v6/blob/0316b457acb...


Note that this is not a fair comparison since that code is almost 50 years old, predating ANSI C and before even the 8088 existed.

Still, there is only one actual error in gcc 13.2.1 which is the use of =| instead of the later standardized |=. I'm not sure if that was a common thing back then or if it was specific to their C compiler. Either way, I don't think gcc has a switch to make it work. Switching that around in the source gives linker errors since it seems back then the way to print to or flush a particular file descriptor was to set a libc variable and then call flush or printf. If you had the right libc to use with it that one tiny change might be all you need. But you would likely need to set up a cross compile to be able to use the right libc.

My understanding is that most of the compatibility issues on Linux are due to not having the right libraries rather than the kernel not supporting older system calls. It is just a lot of not-that-fun work to keep things working and no one is that interested (instead, some people just use decade-old versions of Linux :/ and the rest use package systems to recompile stuff). NetBSD had better practical binary compatibility for a long time, although I think some of it was removed fairly recently since there isn't much commercial NetBSD software (there was one Lisp binary from 1992ish IIRC that some people were still using, and I think that compat was kept).


Thanks for the details, I did give compiling it a try and saw some of the things you mention, but my knowledge of C, pre-standards C, and libc wasn't enough to fully make sense of them. I'll agree it looked better than I expected; I've seen much worse cases of single-decade old programs not compiling/working (one in Haskell and another involving C++ and Java, although they were much larger programs).

But I don't think it was unfair to use that as an example of early Unix software, or to point out that it was harder than "just compiling it". One defense against my argument could be that a version of 'grep' has been maintained through C versions and operating systems, with source code availability playing a part in that (although presumably GNU grep avoided using Unix source code).


My extremely limited experience with Java is similar; it seems to do much worse than C at compatibility over time. Of course you are right that it is a fair example of early Unix code, I just didn't read the comment you were actually replying to carefully enough :(. Pre-standard C has more issues, so I don't think that is fair vs Windows, but that is not what you were replying to. Source availability gives some additional options that you don't have with only binaries, but I think you are right that without it being open source the use is limited (and potentially negative, like how the BSDs were limited by the USL lawsuit in the early 90s). Useful open source software is likely to be maintained at least to the point of compiling, though it may take a while to break enough for someone to bother (sox is in this middle stage right now, with package systems applying a few security patches but no central updated repository that I know of).


Early UNIX software can run fine on modern GNU by just compiling it.

...providing you can find a suitable compiler that isn't obsessed with exploiting undefined behaviour.


> If that's not stable, I don't know what is.

What is? An immense amount of resources (developers) poured into developing live patches to make applications work on each newer version of Windows (or helping the application developers fix their applications). It's an interesting conceptual grey area - I don't consider it backward compatibility in a strict sense.

This is documented in the book "The old new thing" by Raymond Chen (it's possible also to read the blog, but the book gives an organic view).

It's fascinating how far-sighted Microsoft was; this approach, clearly very expensive, has been fundamental in making Windows the dominant O/S (for desktop computers).


It's because Microsoft understands and respects that computers and operating systems exist to let the user achieve things.

The user ultimately doesn't care if his computer is an x86 or an ARM or a RISC-V, or if it's running Windows or Mac or Linux or Android. What the user cares about is running Winamp to whip some llama's ass, or more likely opening Excel to get work done or fire up his favorite games to have fun.

Microsoft respects that, and so strives to make sure Windows is the stepping stone users can (and thusly will) use to get whatever it is they want to do done.

This is fundamentally different to MacOS, where Apple clearly dictates what users can and cannot do. This is fundamentally different to FOSS, where the goal is using FOSS and not what FOSS can be used for.

It's all simple and obvious in hindsight, but sometimes it's the easiest things that are also the hardest.


It's amazing how people don't want Linux to "Be like Windows"... but as far as I'm concerned windows is close to ideal, just with a few flaws and places where FOSS can do better...


This has very severe drawbacks, so it's not unambiguously desirable.

Windows APIs are probably a mess because of this (not to mention that only a company with extremely deep pockets can afford this approach). There is at least one extreme case where Windows had to keep a bug because a certain program relied on it and couldn't be made to work otherwise.


And yet Windows 11 keeps taking away user control, becoming more like Apple every update.


> I don't consider it backward compatibility in a strict sense.

Just in the sense that 100% of the people who use the phrase "backward compatibility" mean.


Sure, from a user perspective, but not from an operational perspective: in the cases of live binary patching, Microsoft had to call the application developer to be legally clear; in other cases, APIs behave differently based on the executable being run. There's a lot more to it than just keeping the API stable.


I get that my initial comment was a bit of a throwaway, but I can unpack it a bit. I think it’s a mistake to regard a working backward compatibility functionality as deficient because it requires maintenance and the cooperation of the parties involved. That’s just… engineering, right?


In a world where security flaws are so common, I'm not sure I want to run old software outside a virtual machine.

I also wish I could agree that Win32 is a stable target on Linux; it may run old software, but in my experience it is often quirky. It's usually a better use of my time to just boot Windows than to figure out how to get software to run under Wine.


This might be true.

Another ABI I see games target is Ubuntu... 14.04, 16.04, 18.04, etc.

Ubuntu seems to be stable enough for the corporate world.

I think things like Flatpak or Snap add bloat and behave in ways you don't want.


Actually, web standards are.


Yeah exactly!


Which is quite something considering I can't have foobar running for more than an hour before it crashes.


That core would be the GNOME or KDE frameworks, coupled with the FreeDesktop standards; at least that was the plan about 20 years ago.

However, as the site says, making distributions is what most folks keep doing, and naturally there isn't a single stack that can keep up with snowflake distributions.

In the end, Google took the Linux kernel, placed two core sets of frameworks on top, one in Java and the other in JavaScript, and naturally those are the winning Linux distributions for regular consumers.


None of that core is standardized across even a handful of distros.


Which is the entire point. Having working software beats having a standard. (Naturally having working software that adheres to a standard is even better, but at least they got their priorities right.)


That's great, but then don't complain about your community creating more distros. That's the thing they are good at, because they have been given great tools to do so.


Naturally, you missed the part where I mentioned it was the plan 20 years ago, not how it looks today.


tldr


Chrome/Chromium is quite standardized, across many distros.


Name one distro that ships with Chrome out of the box. I don't even think Ubuntu comes with a Chromium browser. That means devs can't write apps against it and just hand out exes like they do on Mac and Windows.


Chromium is the default browser on Raspberry Pi OS.


Play Protect Certified Android and ChromeOS are popular ones among consumers


Nice. Maybe more desktop distros should build on android as a core. That could solve a lot of problems.


I agree. With how many resources are invested into Android to offer an OS for consumers, desktop Linux distros using the freedesktop stack are missing out. It is not easy for distros to acknowledge the sunk cost.


You are talking about Android and Chrome OS right? I agree, those are the top two Linux distributions, and everything else is behind by an absolutely huge margin


Yep. Just browse their documentation; that is the kind of development experience GNOME and KDE were expected to provide 20 years ago, yet fell short of due to Linux distribution fragmentation, devs using other environments, or devs still stuck with plain window manager and xterm workflows.


If only GNOME and KDE were backed by one of the largest companies in the world, with tens of billions of dollars at its disposal.


If you mean Red-Hat in regards to GNOME, they have long moved away from Desktop Linux for consumers, as there is no money to be made there.

The Slashdot posts on the matter from those days are quite easy to find.

GNOME, like CDE in its day, is good enough for corporate users connecting into Linux servers.

GNOME today is also not the same as GNOME 20 years ago: it got rebooted multiple times with incompatible code, Glade was dropped and now people are expected to write their GUIs in code or as manual XML (yes, I am aware of the ongoing web-based replacement; what a broken idea for a native desktop), and there are plenty of other issues that make GNOME in 2023 even less attractive than 20 years ago.


NeXT under Steve Jobs had a maximum of 500 people working with him.

There have been much greater efforts than that in desktop Linux.

Yet NeXT delivered a mostly coherent programming platform, back in the 1990s.

One thing NeXT didn't do was introduce shitloads of useless complexity, fragmentation for the sake of it, and half-assed solutions on top of that.


NeXT had Steve Jobs. And I don't mean a man of his calibre, I mean a person saying yes and no to features. This is sorely lacking in the open-source world, where they have a total aversion to any kind of structure and oversight.


There is no standard at freedesktop. Everything is a moving target, just like KDE and Gnome.


Culprits are mostly the glibc devs with their manic abuse of version names (and very recently, a GENIUS who added a new ELF relocation type): this is a pain for game developers trying to provide binaries which span a reasonable set of distros in time. Basic game devs install one of the latest mainstream and massive distros, build there, and throw the binaries on Steam... but that recent distro had a glibc 2.36, and now their binaries have version names requiring at least glibc 2.36 (often; rarely not). They have to force the right version names with... the binutils gas symver directive (see the binutils manual) until all their version names are compatible with a reasonably "old" glibc (I would go for a reasonable 5 years... 7/8 years?):

https://sourceware.org/glibc/wiki/Glibc%20Timeline

Of course normal game devs have not the slightest idea of those issues, and even so, they won't do it because it is a pain, and that for 1% of their market. Not to mention they had better statically link libgcc (and libstdc++ if they use C++) to avoid the ABI issues of those abominations (the word is fair) which have been plaguing game binaries for TEN F.... YEARS! (It's getting better, as more and more game binaries default to the gcc/clang options -static-libgcc and, if C++, -static-libstdc++.)
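Concretely, that looks something like this (file names are illustrative; just a sketch of the flags and the check described above):

    # Statically link the GCC runtime and libstdc++ so the game does not
    # depend on whatever copies the player's distro ships.
    g++ -o mygame main.cpp -static-libgcc -static-libstdc++

    # Then list which glibc symbol versions the binary still requires;
    # anything newer than the oldest distro you target is a problem.
    objdump -T mygame | grep -o 'GLIBC_[0-9.]*' | sort -uV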

There is light at the end of the tunnel though, as the Godot engine provides build containers which are very careful about all that (Unity seems clean there too, dunno about UT5.x though).

But you have engines that are really not ready, for instance Electron-based games: you don't have the right version of the GTK+ toolkit installed on your enlightenment/Qt/raw X11/wayland/etc distro? Nah, won't run. And properly packaging a full Google Blink engine for binary distribution targeting a wide spectrum of elf/linux distros? Yeah... good luck with that.

elf/linux is hostile to anything binary-only, due to manic ABI breakage all over the board, all the time (acute in the SDK and core libs).

If it is already that hard for game binaries, good luck with apps. I can already hear their devs saying: "we don't care, just use and install Microsoft SuSE GNU/Linux, everything else is unsupported"... I think you get the picture of where all that is going.


I agree. I recall a talk from Linus Torvalds on how badly the glibc folks break things and why he won't ship a binary version of his scuba tool. If the binary breakages start with the darn C lib, you're gonna have recurring problems all the way up the stack on a regular basis I feel.


If the binary breakages start with the darn C lib, you're gonna have recurring problems all the way up the stack on a regular basis I feel.

In general glibc maintains an extremely stable ABI[1]. Forwards compatibility in both glibc and libstdc++ is something many companies depend on for their mission critical applications, and it's the entire reason Red Hat pays developers to maintain those projects.

[1]: https://abi-laboratory.pro/index.php?view=timeline&l=glibc


This has been the case for 10 years with games on Steam. The worst are the libstdc++ ABI issues still around, because many devs forget to statically link their C++ libs with -static-libstdc++. On Windows, ABI stability is really good, and devs are used to that.


?? On Windows you always have to ship the libc/libc++ along with your app, as part of the VS redistributables. That's pretty much the same as static linking; the symbols are just not in the same binary.


Not since the Universal C Runtime was created and made part of Windows 10's system components.


As far as I understand, Windows solves this libc++ versioning issue using the side-by-side cache. I guess this is a little bit like Flatpak.


The Windows standard C and C++ ABI has been stable since 2017, so the last three releases of the Visual C++ compiler and C runtime haven't changed the ABI.

However, Windows also has a lower-level system API/ABI, i.e. Win32, that has stayed stable all the way back to Win 95. WinSxS sometimes helps certain legacy apps too. This allows apps using different C libraries to work together. Win32 contains everything about the OS interface: creating windows, drawing things, basic text, allocating pages from the OS, querying users, getting info about files is all stable. Win32 sits at a lower level than the C library. There is also a better separation of functions into different DLLs: Win32 has multiple DLLs that have different jobs and are mostly orthogonal. Kernel32 contains the core, almost-system-call-level stuff. Shell32 contains interactions with the OS shell, i.e. creating windows, message boxes, triggering different UIs.

The surrounding libraries like DirectX / DirectPlay / DirectWrite that are also used by the games are also part of the system libs starting from 7. They are also stable.

In the Linux world there is no separation between system-level libraries and normal user libraries. Glibc is just another library that one depends on. It contains the entire lower-level interface to the kernel, all the network stuff, and the POSIX user-access stuff. It also contains a component that has no business being in a C library: the dynamic executable loader. Unlike Windows, on Unix systems the C library is at the lowest level, and glibc being the default libc for Linux while making itself the default provider of the dynamic executable loader makes writing a stable ABI almost impossible. Since other libraries depend on glibc and executables depend on glibc's dynamic loader, everything is affected by the domino effect.


I agree.

The ELF loader should be extracted from glibc... but there is a price to pay, and that's where those guys could do their manic ABI breaking again.

Some very low-level interfaces will have to be defined between the external ELF loader and the POSIX/C runtime. For the moment those interfaces are private; just look at how intimate they are about threading and TLS.


Windows solves this issue by not making the language runtime part of the OS libraries in a way that pollutes other libraries.

That means that you can have your application linked with library A version X, and load another library that is linked with library A version Y, and so long as certain practices are followed, everything works because you're not getting cross-contamination of symbols.

Meanwhile, on Linux the defaults are quite different, and I can't load an OpenGL ICD driver that depends on a GLIBC_2.38 symbol into an application that was loaded with glibc 2.37. Moreover, a lot of APIs will use malloc() and free() instead of a language-independent allocator, unless the allocation comes from the kernel. And no, you can't mix and match those.


This is what the "pressure-vessel" container from Collabora (used by Valve for Dota 2/CS2) is trying to solve... but now the issue is the container itself, as it does _NOT_ fully follow the ELF loading rules for all ELF binaries and ignores many configuration parameters of many software packages (data file locations, pertinent environment variables, etc.), basically presuming "ubuntu" to be there.

Basically, my Vulkan driver requires fstat from 2.33, but "pressure-vessel" (sniper version) has a glibc 2.31, so it partially parses my global ELF loading configuration and the configuration of some packages to import that driver and all its dependencies... including my glibc ELF loader... ooof! That level of convolution will have a very high price all over the board.

I often have arguments with one of the "pressure-vessel" devs because of some shortcuts they took which do break my distro (which is really basic and very vanilla, but not "ubuntu"). It took weeks to get the fixes in (I guess all of them are in... until the glibc devs manage to do something which will wreak havoc on "pressure-vessel"). Oh... that makes me think I need to warn them about that super new and recent ELF relocation type they will have to parse and detect in host drivers...

On my side, I am investigating an excruciatingly simple modern file format which should help a lot in fighting those issues; there will be technical trade-offs obviously (but it would run-ish on "old" kernels anyway with ELF "capsules" and an orthogonal runtime). Like JSON is to XML, but for ELF/PE.

It seems only the Linux ABI can be trusted... as long as Linus T. is able to hold the line.


>It seems only the linux ABI can be trusted...

The ABI is not stable. Google has to do extra work monitoring for ABI breakages to make sure that pushing out an update of an LTS branch of the kernel does not break people's drivers.

https://source.android.com/docs/core/architecture/kernel/sta...


The module ABI is not stable, but that's not the ABI typically referred to when someone says "Linux ABI". That would be the userland ABI, which is stable.


The Linux kernel's userland ABI is stable. Nothing else on Linux is.


What language independent allocator? I'm unaware of any memory interfaces in programming languages that aren't specific to that language.

What would you use instead of malloc and free?


The OS-provided memory allocators: HeapAlloc, its wrappers GlobalAlloc and LocalAlloc (remnants of the 16-bit era), VirtualAlloc (page granularity, comparable to mmap() with MAP_ANONYMOUS), and CoTaskMemAlloc, which can be shared across COM processes, plus their respective "free" functions. malloc() is explicitly called out in the documentation as "runtime dependent".

Similarly other OSes used to have memory allocation services that weren't linked in any way to a language runtime - VMS has several calls, all language independent, ranging from low-level equivalents of mmap() to malloc-alternative.

And yes, a big chunk of Windows portability is that various APIs use those language-independent calls internally, and so can developers in order to avoid creating issues; the documentation promotes those language-independent methods.

At no point with the Windows API do you get into a situation where you call free() on memory allocated by malloc() from a different libc.

In comparison, the available language-independent APIs on Linux, without using any special libraries, are to directly call mmap() and sbrk() through inline assembly (beware the glibc wrappers!).


malloc in (g)libc isn't any more language dependent than HeapAlloc or VirtualAlloc. Both have a stable ABI that can be used by any language.


HeapAlloc and VirtualAlloc do not bring the whole language runtime with them like glibc does, nor are they specified as part of a specific language runtime only (the Unix primitives for allocating memory are sbrk() and mmap(), not malloc()).

And with how applications are linked by default on Linux, whoever loads glibc first sets the stage for every other module in the program, which makes for "fun" errors when you get a library that happens to be compiled with a newer version (or older). And even if you force Windows-style linking (good luck, lots of infrastructure needed), you end up dealing with modules passing each other malloc()ed blocks and possibly trying to call free() from a different allocator on them.

Anyway, for all practical purposes, glibc has no stable ABI at all.


I suppose they meant a "libc independent" allocator, e.g. jemalloc/tcmalloc/mimalloc/etc. Although, using such allocators comes with complications of their own: https://lwn.net/Articles/761502/


I think there are very few C++ ABI versions on Windows, and it is easy to select the one you want as long as it is installed; they can be there side by side.


If they're going to put the game on Steam then they should be using the Steam Runtime which is available here[1]. Otherwise they're shooting themselves in the foot.

[1] https://github.com/ValveSoftware/steam-runtime


> If it is already that hard for game binaries, good luck with apps.

Video games are some of the most complex and most difficult pieces of software both to build and to get right. Modern games are coded so poorly that driver makers have to release patches for specific games.

It's the gamedev world that can't get its shit together. Valve settled on an Arch-based distro for the Deck. The specific distro doesn't matter since Steam already establishes its system requirements and comes with a runtime.

Beyond that, I really don't see the issues you're talking about. Generally, any issues you have are fixed by symlinking the correct .so files. This is a side effect of targeting a specific version of software instead of keeping to its more base features. That's on the dev, not the user or distro.

You act like Windows has never had ABI breakage or versioning problems. I'd like to see the specific issues you ran into; maybe there's an easy fix like a symlink.


> Video games are some of the most complex and most difficult pieces of software both to build and to get right.

Actually as far as binary distribution goes, video games are some of the easier to build software as they tend to need limited operating system integration besides creating a (possibly fullscreen) window, getting user input, driving a graphics card to display the result and output audio somewhere. Only completely headless programs like command-line tools or servers have it easier. Contrary to the popular meme, creating Linux binaries for games which will run on all (current) distros and will keep running in the future is not rocket science. Not completely trivial, but something that any competent developer should be able to manage.

Where things get really hard is desktop applications which users will expect to have a much tighter integration with the rest of the environment. Neither Qt nor GTK have a long-term stable ABI and shipping them with your program just means you have a million more unstable ABIs you depend on.


> But you have engines really not ready, for instance electron based games: you don't have the right version of the GTK+ toolkit installed on your enlightenment/Qt/raw X11/wayland/etc distro? Nah, won't run.

Hm, not sure what you mean here.

At least with the nw.js toolkit (from which Electron was forked IIRC) I've never gotten a report of a distro where it would refuse to run because of an incompatible GTK version.


Why would a distro have GTK in the first place?

More reasonably, binary GUI apps should not expect more than the window system; on elf/linux, that means Wayland with a legacy fallback to X11. The GFX toolkit is an application choice on top of the windowing system. Binary GUI apps have to distribute it... they have to distribute their own version, which must not conflict with the version installed on the user's system... if any...


> Culprits are mostly the glibc devs with their manic abuse of version names (and very recently, a GENIUS who added a new ELF relocation type): this is a pain for game developers trying to provide binaries which span a reasonable set of distros in time. Basic game devs install one of the latest mainstream and massive distros, build there, and throw the binaries on Steam... but that recent distro had a glibc 2.36, and now their binaries have version names requiring at least glibc 2.36 (often; rarely not). They have to force the right version names with... the binutils gas symver directive (see the binutils manual) until all their version names are compatible with a reasonably "old" glibc (I would go for a reasonable 5 years... 7/8 years?):

Or ... just build against the oldest glibc they want to support.
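For instance, a rough sketch of doing that in a container with an older userland (the image tag and file names are just illustrative examples):

    # Build inside an older distro image so the resulting binary only
    # references glibc symbol versions that already existed back then.
    docker run --rm -v "$PWD":/src -w /src ubuntu:16.04 \
        bash -c 'apt-get update && apt-get install -y g++ && g++ -o mygame main.cpp'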


Isn't the solution to this problem to just distribute the source code? Let package maintainers worry about creating binaries for their distros.


not when you actually want to sell the games for money.


Distributing source code is not an obstacle to selling games as for all but the most trivial games the meat is in the assets and scripts. You can open-source game (engines) while continuing to sell the rest - which is what e.g. allows you to run Doom on any device imaginable.


yes, but a game with non-free assets is still not going to be picked up by linux distributions to package, i think


If I could have that spread of frameworks that macOS offers on Linux, I’d be targeting Linux with my side projects yesterday. Having such a wide selection of tools to readily reach for with zero consideration about how long it’ll be supported, which fork is best, how well it meshes with a pile of other third party libraries, etc is amazing. It reduces friction massively and you just build stuff.

The KDE Qt ecosystem and its GNOME/GTK analogue are closest but still aren’t quite there.


> If I could have that spread of frameworks that macOS offers on Linux, I’d be targeting Linux with my side projects yesterday.

I doubt it. More likely, you would be complaining that you need to spend effort rewriting your code to use Linux frameworks for a tiny userbase.

That's why most cross-platform software ends up using OS frameworks only where necessary and ships its own version for most stuff. Coincidentally, that approach matches which stable ABIs are available on Linux.


Have you looked into GNUstep?

https://gnustep.github.io/


Yep, have been aware of it for a long time. It’s great in concept but as far as I’m aware a good deal behind current macOS — last I knew it was compatible with OS X 10.6 (released 2009) with no support for Swift or for any of the advancements in Objective-C made since then.

I’m also not sure how well GNUStep apps would fit into a modern GTK or Qt-based desktop, e.g. if they’d theme controls to match.



Stuff like Flatpak and Snap exists because the framework side is kinda built out. Isolation technology had matured, and newer desktops emphasized per-window security and needed APIs to portal in and out of each instance. The desktops needed a packaging/infrastructure solution to tie that together and make it presentable to the user.


Yet I still can't compile an app on some arbitrary release of some arbitrary distro and just run the darn exe on another and be 100% sure it will work.


On other platforms people don't even try to support anything but one "distro".

You could make an AppImage or snap that would work across pretty much any mainstream non-hobbyist-oriented distro, and Snaps/AppImages are pretty much the Linux equivalent of an EXE.
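The AppImage route is roughly this (a sketch based on the AppImage docs; file names are illustrative):

    # Minimal AppDir layout:
    MyApp.AppDir/
        AppRun              # entry point (script or binary)
        myapp.desktop       # desktop entry
        myapp.png           # icon
        usr/bin/myapp       # the actual binary
        usr/lib/            # bundled dependencies
    # Pack it into a single runnable file:
    appimagetool MyApp.AppDir MyApp-x86_64.AppImage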

Raspberry Pi OS just moved to NetworkManager and they have PipeWire, which was the last reason I had to deal with less common software stacks, so it seems like stuff is getting more standardized.


Snaps will not work on anything but Ubuntu.

AppImages are just a fancy self-mounting disk image. If you do it right and include all required contents then it will work ~anywhere but you can just as well do that in a .tar.{gz,xz,zstd}/whatever archive.


>Snaps will not work on anything but Ubuntu.

I installed VSCode as a snap on my Fedora install and it worked fine as far as I could tell. This was in Mar 2021, and I chose the snap because code.visualstudio.com/docs/setup/linux said that "Due to the manual signing process and the system we use to publish, the yum repo may lag behind and not get the latest version of VS Code immediately. . . . Updates are automatic and run in the background for the Snap package."

(I don't have an opinion on Snap as compared to Flatpak or AppImage or none of the 3.)


+1 for AppImages. And the beauty is you can make them portable by creating a folder with the same name as the AppImage and appending .home to the end, and boom: portable software. For example

    image-editor.AppImage
    image-editor.AppImage.home (folder, all settings are stored there)


Actually, I don't think I've tried AppImage. Does it do static linking or does it do something more similar to flatpak?


It's a self-mounting image so closer to flatpak than static linking but without the strict isolation.


On a large enough timescale I don't think you can reasonably expect this on any of the big 3 OSes. From a less macro perspective, I think tools like AppImage and Flatpak will fill that role.


On a long enough time scale we are all dead, and Linux doesn't exist. That doesn't mean that in the decades that computer software has to be useful to actual people that things have to suck this badly right now.


The "actual people" using Linux on the desktop are not running .so files off the internet. I get what you're saying, but you know it's facetious to pretend that packaging is simple on Mac and Windows too.


> The "actual people" using Linux on the desktop are not running .so files off the internet.

They do this all day every day when they use a little something called Steam or when they install Google Chrome.

> I get what you're saying, but you know it's facetious to pretend that packaging is simple on Mac and Windows too.

It is...


> They do this all day every day when they use a little something called Steam

Which, while (initially) officially only released for a specific Ubuntu version, ran on pretty much any up-to-date distro from day zero.


Packaging is fairly simple on Mac and Windows if you use the dev tools for the platform and are not doing things like installing services or drivers or changing OS configuration.

Your packaged app will almost always work too. There are not 50 distributions of these OSes.


exactly


The trouble is you're trying to distribute it as a binary yourself. There are two traditional ways of distributing software for Linux:

1) The system package manager. It will download a binary from the repository which is the right one for that system.

2) make && make install. This is mostly for software in development that hasn't made it into the package manager yet. It will compile from source and produce a binary for the target system.

All the problems are from people trying to do something other than this.


That's the problem. The Linux way of distributing apps is wrong for trying to compete with the other consumer platforms. The ChromeOS or electron style of doing things is another story though.


What exactly is wrong with it?

If you have a stable widely used application suitable for being installed by unsophisticated consumers, have the distributions put it in their package managers. This is hardly any different than consumer mobile platforms that require you to use an app store, except that it isn't actually required, just the thing you ought to do absent a good reason to do otherwise.

If you're distributing something sufficiently esoteric or experimental that the package managers won't touch it, your correspondingly sophisticated or adventurous users can compile it from source.

We don't need some malware-facilitating norm that encourages users to install opaque binaries from random websites.


A lot of you may be looking at it wrong.

Each distribution is a different operating system. You cannot package the "same" app for Windows and Mac, so why should you be able to package the same application for Debian and Arch, which, even though they run a lot of the same code, have different underlying layers and assumptions?


And on a lower level, `fgetwc()` is deliberately crashed in `libc` when applied to a stream created with `fopencookie()`. It should be incredible, but Linux `libc` is not Unicode-capable in 2023.


Which libc?

There's glibc, musl libc, etc. Also consider you may be using the functions incorrectly.


Seriously? “You're holding it wrong”? Read the comment at the crash site (line 584) yourself:

https://elixir.bootlin.com/glibc/glibc-2.38/source/libio/gen...


I wrote "consider", not "you are without a doubt at fault here". Learn how to read.


He did consider it; You are without a doubt at fault.


Wide chars were a mistake and are not required to support Unicode.


Ensuring an app looks and feels the same across various distributions seems quite challenging when it’s not only different flavours of the OS but also different desktop environments.

At the same time, the OS flavours don’t seem to offer a unified way for handling payments, subscriptions and in-app purchases which is a significant burden to implement from scratch by every app developer.


Why do distributions need to offer this when SDKs exist for services like Stripe and PayPal?


Because Linux distributions are used all over the world. Stripe and PayPal are good, but insufficient as a choice: one can't be expected to pick SDKs that work equally well in all regions, prefer local payment options, register with local tax authorities, etc.

Also, Stripe and PayPal don't offer tools to check if an app is running with a valid license/subscription, it's not pirated etc. (the equivalent of the App Store receipt signature).


The "feel" of the app is identical across distributions because the app controls everything inside its window and the look only varies slightly in terms of colorscheme and window decoration if that.

There is absolutely no legit purpose for in-app purchases on a desktop OS. It's not hard to pay for substantial software suites, and most of what would be an app on platforms that at one point had an anemic browser experience is simply a website on platforms where fast CPUs and 14-28" screens are normal.

Nobody needs a bunch of adware apps you can pay $3 to decrapify or games that are a slog if you don't buy fake potions for real money.

There are enough good free basic apps for virtually any use case, and the more complex use cases need up-front investment from users, not in-app purchases.

What use case do you imagine for this feature?


> Nobody needs a bunch of adware apps

You can't stop people from making crap apps, they exist even today.

> The "feel" of the app is identical across distributions because the app controls everything inside its window

To make an app feel at home, it needs to "blend" with the rest of the OS. To achieve this, one can use OS-components to build an interface. Just a simple example, GTK apps are so different from KDE apps in their look and feel - you can always tell if an app is native GNOME or KDE app. Now consider all the other desktop environments - it's just way too many to account for in one's code and testing.

Then comes the question of system integrations - how do you offer a unified photo picker experience, how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs - all these should be provided from the OS so they don't confuse the user and prevent abuse by naughty apps.

> no legit purpose for in app purchases on a desktop OS

I strongly disagree - what if you want to offer a "try before you buy", or you'd allow users to purchase additional content/credits for a service bundled with your app? What if you want to adapt your pricing for a specific event or holiday period...etc, the sheer scope of possibilities is not practical to include in one comment.


> how do you offer a unified photo picker experience, how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs

XDG Portals do that


TLDR: Release GTK/Qt apps with menubars and xdg-desktop-portal on Flathub and use Stripe to implement any and all desirable business models. You can do this today and your app won't look any more out of place than the browsers and office suites. IAP are the STDs of business models.

> You can't stop people from making crap apps, they exist even today.

The particular crapware that exists on Android is notably absent from built in software management GUIs so it looks like you CAN do this.

> To make an app feel at home, it needs to "blend" with the rest of the OS

68% of desktops are GTK-based and 26% are KDE, which themes GTK apps the same as KDE apps. Superficially, apps blend well. Looking deeper you will notice many similarities: many common shortcuts and the same common UI paradigms and idioms. Look a little closer and you'll note differences, especially GNOME with its client-side decorations which cram a toolbar plus window controls into a single line with extra spacing, and Qt apps with the more traditional menubars and certain shortcuts in common. Then there are incredibly common apps that are obviously not fully consistent with any of them: Firefox, Chrome, GIMP, LibreOffice, Thunderbird.

https://linux-hardware.org/?view=os_de&formfactor=notebook

What one ought to realize shortly is that there is no singular Linux desktop to blend into and it works fine as is.

If you make a GTK app with a traditional menubar you will look reasonably at home on virtually all desktops. At least as at home as most of the most popular apps listed above.

> Then comes the question of system integrations - how do you offer a unified photo picker experience

xdg-desktop-portal

>how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs

In native apps none of this is controlled at all. Don't install things you think might look at you through the camera and upload your nudes to the cloud. In flatpak the permission to do so is front loaded into the permissions required by the app. Don't install things that require camera and network permission if you think they might upload your nudes to the cloud. Flatseal provides a way to modify this permission after the fact if you want to install something and modify what permissions it gets.

> I strongly disagree - what if you want to offer a "try before you buy",

The most obvious thing to do is to time- or feature-limit usage and "unlock" your app by opening a URL, handling payment on your website with Stripe, and then having the user copy a code and/or open a link to communicate it to the app. This is relatively easy AND lets you keep 100% of the money, and no capricious app store rejection can keep your users from using your app. If for any reason you were ever to have a challenge distributing your work on the official source, both Flatpak and traditional package management have the concept of multiple sources. The customer adds your source, and your apps and official apps are displayed in the same integrated app store interface.

Flathub is supposed to introduce one-off paid apps/subscriptions. I'm not clear what the progress on that feature is. I do not believe there are any plans for in-app purchases, however. Probably because 99.9% of the use case is dominated by porn, shitty games, and adware. Asking why non-gross environments don't implement IAP is like going to the whorehouse and shouting "where all the gonorrhea at!"

It is better from the user perspective if payments/subscriptions are either managed on the website for the service or, better yet, in a singular store interface where customers can make an intelligent decision, rather than having the dev lowball them and then ride the sunk cost fallacy and FOMO to a fuckin' payday.


I'd rather IAP developers would stay away from Linux tbh.


> On competing platforms there are way more frameworks out of the box (CoreImage, CoreAudio, CoreML, SceneKit, AppKit, etc.) and they don't break as often.

I would not use macOS as an example of stability; each version breaks and deprecates a massive amount of API, and old apps will require changes.

Windows is the only OS with serious backward compatibility.


> there still doesn’t exist a core set of frameworks that are abi stable on Linux

Motif ?

WxWidgets ?

Openstep ?

GNOME and KDE do reinvent the wheel with every release, but they shouldn't be taken seriously.

There is a general problem in the SW world: they like to reinvent the wheel every now and then.


How many distros are Motif and wxWidgets installed on by default? How many distros come with the full GL and vulkan stack?


> How many distros come with the full GL and vulkan stack

All of the ones intended to run graphical applications. Unless you count extraneous stuff like GLUT, which no one should be using anyway.


People like to shit on tools like Electron, but there's a reason they're popular. If you need to reach a broad audience with a native tool, using heavy-handed web-based plumbing is a bigger win for Linux users than supporting only windows and macos where like 97% of desktop users are.


Hold on mate, isn't that what Java was supposed to solve? I remember before the days of Electron, when I was a wee lad in the 2000s, all cross-platform apps were Java.

Look at Ghidra, it's a Java app for Windows, Linux and Mac. The "holy trinity" of operating systems, covered with one language and framework.

So what happened? Did devs forget Java exists and feel like reinventing the wheel, but worse this time?


Java simply has a much higher barrier to entry. Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.

Electron isn't just cross platform, it is cross platform based on technologies (html, css and javascript) that also by a huge margin have the largest amount of developers available.


  > Not only in regards to figuring out the language and resources available but also the fact that creating a GUI still requires external dependencies.
What external dependencies does Java need that's not in the JDK itself? I have an app with Mac and Windows installers (and thus bundled JDKs); it also runs on Linux (via a fat jar). I tested it on Ubuntu, but for the life of me I couldn't figure out how to package it properly. It was more complicated than I cared to invest in at the time.

As for the barrier to entry, I feel the same way about the web. I find the JS and Web ecosystem to be utterly overwhelming, one reason I stuck with Java over something like Electron, and the installers/footprint of the app are smaller to boot.


For Linux, I'm using jpackage to package my Java software into a .deb (x64 architecture) file. For all the other Linux variants, I have a .tgz file that contains the jar file, libraries and icons of the application.

The problem I have with Linux is named at the end of the website: "Sharing your creation". It's pages and pages of documentation that is not relevant to the packaging of your application, where you can spend hours of work without finding what you want, or finding out that it doesn't work for you because, for example, it's not on GitHub. Fortunately, jpackage was able to fix it for the .deb format. Instead of working on more documentation, working on a better and easier-to-use packaging tool would help.
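For reference, a basic invocation along these lines (names and directory layout are illustrative, not my exact command) is enough for a simple .deb:

    # Produce a .deb from a directory of jars (layout is illustrative):
    jpackage --type deb \
        --name myapp \
        --input target/lib \
        --main-jar myapp.jar \
        --main-class com.example.Main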


The JRE itself is an external dependency that you need to bundle because it is not part of most Linux distributions. And even if there is a JRE installed, it is not guaranteed to be able to run your Java application.



> What external dependencies does Java need that's not in the JDK itself?

I mean that it doesn't come with Java itself, but you as a developer need to pick a UI framework and not all of them actually work all that well cross platform or will get you an actual modern interface.

Edit: I should also note that the threshold for entry I am talking about is for people just generally starting out. There simply are way more resources available for web related development than there are for java.

Also, when you start bundling your JDKs I am not sure you can talk about a smaller footprint anymore.


Well, Swing is still bundled with Java. Netbeans uses the "Flat Look and Feel" and looks ok to me. I find Swing a lot more work compared to FX.

JavaFX used to be bundled with Java, but was removed. Some JDK distributions bundle FX like it was before, and adding FX to a new project is simple and straight forward. Maven packages it nicely, and it includes the platform specific binary parts. If you can use Log4j, you can use Java FX. Onboarding onto FX is not a high bar.

I can not speak to SWT.

There's several examples of "modern" UIs in FX, I can't speak to any of them, I don't pay much attention to that space. It's imperfect compared to the web, but not impossible.


Java FX seems better than swing, but it's an external dependency now though, isn't it? I thought it got removed from the jdk a few years ago.


It was. Even before it was more "bundled" with the JDK than "part of Java".

But, to be honest, that's a real nit. It's a standalone dependency, it's 4 lines in a POM file, it doesn't drag the internet with it, and it only relies on the JDK. So, while it's a large subsystem, it's a "low impact" dependency in terms of side effects and complexity.


> it's a "low impact" dependency in terms of side affects and complexity.

I wish that were true in my experience. But we have struggled to support {macOS, Windows, Linux} x {x86_64, arm64} with JavaFX and one .jar for our application.

This is a 250-line diff, not a 4-line diff: https://github.com/ra4king/CircuitSim/pull/93/files. We have to manage .dlls and .sos by hand.

If you know a solution that is 4 lines, we would be very grateful. All we want is one .jar with JavaFX in it that supports many OSs and architectures.


Not sure what your requirements are.

My point about 4 line dependency is to point out that the barrier to entry into FX is low. What you are doing I would consider unconventional, as demonstrated by all of the hoops you're jumping through to achieve it. Packaging, yes, is still a bit arcane at this point.

My project, https://github.com/willhartung/planet, packages macOS and Windows installers, and can be run as a fat jar on a Linux machine (tested on Ubuntu). You can look in there to see my POM file and my build scripts. They're much simpler than what you're doing. I don't have a package for Linux; as I mentioned earlier, Linux packaging was just a bit too confusing to figure out for my tastes, so I punted. If there were crushing demand for it, I'd look into it deeper.

None of those artifacts are "cross platform". It's not a single artifact for all platforms; they are platform specific. I build the Mac one on my machine, and the Windows and Linux versions on VMs. Currently, the vision for Java distribution is to bundle the runtime with the application: use jlink and the module system to narrow down your JRE, and jpackage to combine them into an appropriate platform artifact. jpackage has to be run on the target OS. I do not have ARM versions of any of my code yet.

If you want to ship a cross platform jar, then it's probably worth your time to require a JDK with FX already installed. Azul does this, and I think there are others. Then FX, and its platform-specific binaries, are no longer your application's problem.

Also, there is a project, https://jdeploy.com, that offers tooling and infrastructure to distribute native FX bundles; it even offers automatic updates. It will install its own JDK in its own directory structure to run your applications. If you have multiple applications, it will share the JDKs among them. It's quite clever, and perhaps worth considering depending on your requirements. I chose not to do that, just to keep my projects as simple as practical for the end user and myself.

To be fair, getting to this point was not drag and drop. jpackage and jlink can be fiddly to get started with. Documentation can always be better.
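
For what it's worth, the basic shape of that flow is roughly this (module names, paths and the --type are placeholders; jpackage has to run on each target OS):

    # trim a runtime down to the modules the app actually uses
    jlink --add-modules java.base,java.desktop,java.logging \
        --strip-debug --no-header-files --no-man-pages \
        --output build/runtime

    # bundle the app plus that trimmed runtime into a platform installer
    # (--type would be dmg, msi or deb depending on the host OS)
    jpackage --type dmg --name MyApp \
        --input target/lib --main-jar myapp.jar \
        --runtime-image build/runtime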


> What you are doing I would consider unconventional

It wasn't before JavaFX was removed from the Oracle JRE. That is my point. JavaFX used to be a trivial dependency, but now it is quite painful in otherwise identical configurations, definitely not "low-impact."

> If you want to ship a cross platform jar

We do. Isn't that the point of Java, "write once run anywhere"?

This program is also used as a library in autograders. We do not want to distribute 5 versions of each autograder for 2-4 assignments. The autograder should be distributed as 1 jar. Undergrad TAs are creating that jar and may not have knowledge of complex CI pipelines etc.

> then it's probably worth your time to require a JDK with FX already installed.

That is not appropriate here. This is an educational tool, and students are enrolled in other courses that use Java frequently. We should be able to use the same JRE that students already have installed — it is unreasonable to require installing a different third-party JRE to run a digital logic simulator. It also adds another hurdle for freshmen/sophomores who may not have a natural ability for juggling different JRE installations. (Source: We tried requiring Azul and it was painful for everyone.)

> I do not have ARM versions of any of my code yet.

We have >900 students in this class, so it is necessary to support M1/M2; in fact, a large portion of our students have M1/M2 laptops. It sounds to me like you could just provide a fat jar in your case, actually. Supporting aarch64 is where we hit problems with our fat jar[1], since the aarch64 native libraries have the same names as the x86_64 libraries.

To summarize my point: yes, you can make the build/install process more convoluted and avoid this problem. But we have an installation flow that has been battle-tested by thousands of students for 13 years (download the circuit simulator .jar and run it) that we have no good reason to abandon. The combination of the arrival of M1/M2 and JavaFX getting yanked from the JRE has made supporting our existing (extremely reasonable) flow nothing close to "low-impact."

1: https://github.com/ra4king/CircuitSim/pull/93/files#diff-648...
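
For anyone curious, the per-platform dependency block that makes this balloon looks roughly like this (classifier names as published for the OpenJFX Maven artifacts; the version is just an example), repeated for every OS/arch combination, each of which ships natives under the same file names:

    <dependency>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-graphics</artifactId>
        <version>21</version>
        <classifier>mac-aarch64</classifier>
    </dependency>
    <dependency>
        <groupId>org.openjfx</groupId>
        <artifactId>javafx-graphics</artifactId>
        <version>21</version>
        <classifier>linux</classifier>
    </dependency>
    <!-- ...and likewise for win, mac and linux-aarch64 -->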


Makes sense. I worked a bit with Java years ago, but never with GUI stuff. Most of what I remember about it was drowning in boilerplate and being really good for coordinating a lot of developers around a big well-organized codebase. I probably couldn't write hello world from scratch without reference if I was being held at gunpoint.


> Also, when you start bundling your JDKs I am not sure you can talk about a smaller footprint anymore.

What, do you bundle Electron source and Electron build environment with your Electron app?

Why would you do the same and bundle Java source code + Java compilers in your Java app?

Why would you do the same and bundle <any language> source code + <any language> compilers in your <any language> app?

If you need to create a "just works without dependency b.s." experience in Java, you use the correct tooling for that, jlink.


> If you need to create a "just works without dependency b.s." experience in Java, you use the correct tooling for that, jlink.

At which point you are including a footprint similar to what Electron has by shipping Chrome. I mean, you must have realized I was talking about the inclusion of the JRE and whatever else is needed to make a Java application run on a system as a standalone application.

So I am honestly not sure what you are arguing besides semantics.


when i see a java application i think, hmm, this is likely going to be bloated (but not necessarily) but for sure it's going to run.

if i want to create a cross platform application where i don't even have to think about testing on multiple operating systems, then java is going to be a serious contender.

and if i have to choose between an app written in java or electron, i'd probably pick the one in java.

so yeah, i don't understand what happened here either.


Java is great for making huge well-organized codebases with a lot of developers, especially if you've got good tooling support or a rich ecosystem of existing code to work with. Outside of that... If it was a good development ecosystem for native gui-based apps targeted at end users, why wouldn't the preponderance of native user-facing apps be written in Java, anyway? Ask nearly any experienced mobile app developer if they're more productive in Java on Android or Swift on iOS-- it's not even close. Sure, some of that is the OS itself, but a whole lot of it isn't. On the desktop, the one time I tried to make something with Swing I wanted to Fling my computer out the window. Clunky.


It’s about branding. Swing and JavaFX look like other desktop apps (aka not cool to a lot of designers). And they have a high barrier to entry (ever tried Qt, AppKit or Win32?). Electron is easy, but it’s shoehorning a document-rendering process into software interfaces.


Yeah, the architecture of Electron is absurd, but it's important not to relegate UI flexibility to mere aesthetics. For most of my career I was a back-end web developer, but more recently I've done a lot of interface design after getting some formal education in it. The overwhelming majority of developers I've worked and interacted with conflate aesthetics and interface usability. Heck, even I did before I really started digging into it professionally. I think it's because applications whose experienced designers make good, usable interfaces will also likely hire visual designers to do aesthetic/branding work, and, especially in waterfall environments, developers get it all handed to them as a "design." And for many reasons I will (uncharacteristically) not rehash here, FOSS lacks both.

However, a good interface and a pretty interface are not the same thing-- both are communication mediums that communicate through the software interface, but visual/branding/identity designers communicate things to the user about the brand as a marketing device, and interface designers figure out how to communicate the software's features, status, output, etc. with the greatest efficiency and reduce unnecessary cognitive overhead. Branding and identity is a very specialized form of design that's usually done by design houses-- even huge companies with big teams of designers often contract this work out to specialists. They might go so far as to recommend certain animations for interaction, but you don't want them designing your interface. In small companies, the designer will probably have to implement their design to conform to the design document, but they're using tools like gestalt, alignment, color grouping and type to create information hierarchies, existing expectations for layout and functionality, etc. that tell the user what they need to know as effectively as possible, and how to act on that in the ways they need to.

A good example of the power of interface design is in many dark patterns. You can simply have a plain system-standard dialog box asking if a user consents to some creepy analytics that nobody really wants, but instead of "OK" and "Cancel" in their normal spots, put "Off" in bold letters where "OK" would normally be, and "Consent" in non-bold letters where "Cancel" would normally be, and I'll bet you at least 60% of users would choose "Consent" having only skimmed the familiar pattern. That experience isn't branded or styled in any way-- it solely uses juxtaposition, pattern expectations, and context to influence users' behavior.

When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does. This is very counterintuitive for most developers because of the curse of expertise. We have a working mental model of how software works on the back end and consider the interface a tool to expose that functionality to the user. To users, the interface is the software, and if your design is more informed by the way the software works under the hood than by the way a nontechnical user thinks about solving the problem they're trying to solve, it's going to be very frustrating for everyone who isn't a developer. Developers like to talk about marketing as the primary reason commercial software is king, and it's definitely a factor, but developers aren't magically immune to marketing, and you can't get much more compelling than "Free." To most users, the frustration of dealing with interfaces designed by and (inadvertently) for developers is worse than paying for software-- hence FOSS nearly exclusively being adopted by technical people.


There are many factors influencing adoption, including prior experience (I'm using it at work) and network effects (that's what my friends use). What native controls offer is seeing the whole OS as one thing. But with the advent of branding in software interfaces, people are expected to relearn what a control is for each piece of software (Spotify vs Music).

> When you've got an inflexible, counterintuitive UI kit that developers must fight with to get the results the interface designer carefully put together, you hurt the usability of that tool for end users a hell of a lot more than mediocre performance does.

I have not encountered a UI kit that does not expose the 2D context needed to create a custom UI. But designers always want to redo native controls instead of using them properly and creating custom ones only where necessary. I don't believe anyone can argue that Slack's UI can't be better.


Common practices do not equate to widespread approval. Most developers don't like Electron-- even the ones that build with it much of the time-- but Electron apps are everywhere. Designers' opinions are no more generalizable than developers' opinions.

As someone who's studied and professionally practiced interface design, I can assure you that there's nothing magical about system UI elements to the vast majority of users. Developers often focus on that because managing them is such an important part of developing interfaces, and developing with them is way easier... but in design, it's a small slice of the components that make a real difference. A lot about usability, as is the case with any other communication medium, is extremely nuanced, and native UI kits suck for creating that nuance. It's usually possible, but once again, especially now that HTML/CSS/JS isn't the accessibility catastrophe that it used to be, the extra effort to get polished results using native stuff just doesn't pay off.

As a long-time developer before I became a designer, I am intimately familiar with the sort of blind spots and misconceptions developers have about interface design. Having a working mental model of software in your head significantly shifts the way someone works with computers. Developers see interfaces as a way to expose application state, data and functionality to end users, but to nontechnical end users, the interface is the application. That is not a trivial distinction. Many things most end users prefer chafe most developers. Most things that developers prefer are absolutely unusable to most non-technical end users. And most importantly, most developers assume that their technical understanding makes them better at knowing how interfaces should be designed, when in my significant experience, it makes us worse at it. The curse of expertise obviously stymies documentation and education-- they're the two most obvious communication mediums in software. Most developers don't even consider that the interface is the most visible and consequential communication medium in any GUI application, and going based on your gut instinct about what that should be works about as well as going based on your gut instinct about making a tutorial for nontechnical users. It doesn't.


I'm not saying that native controls are better because they are native, or that Electron suffers from some defect that impairs usability. With equal time and effort, software built with native controls will be more usable. A random user will not be able to distinguish which is which, but I dare say that the native one would feel better if the only difference is what was used to build the interface.

When designing with native controls and using the common patterns of the OS, you considerably lessen the effort required for a user of that platform to learn the application. Most non-technical users only use one platform. Creating the same interface for two or more platforms impairs users on each platform. And I include the web as a platform.


The JRE itself is an external dependency that you need to bundle, because it is not part of most Linux distributions. And even if there is a JRE installed, it is not guaranteed to be able to run your Java application.

So yeah, if you redefine your problem to "run on systems with the right JRE" then Java makes things "easy" (your program will still stick out like an unpolished turd). But if you can just require stuff like that, then you can also require the right dependency versions for native programs.


Java is objectively terrible for writing good apps on modern personal computers. The one platform that did adopt it (Android) had to practically rework the entire bytecode format and VM, as well as the set of APIs for writing apps, to make it work.


Why is it terrible? Asking for real.


Because it’s not “sexy” anymore. Now “sexiness” lies with web crap, so Electron is a “great tool”, while Java is “terrible”.


Well, so I can only tell you as much as I know and understand. Some of this pulls in some outdated information too.

So, JVMs and languages that abstract the underlying machine are always going to have overhead. The original interpreted stack-based JVM model is really bad for performance because you can't do great optimizations on the code, because you can't have a great view of the operands that are being defined and then subsequently used; on top of that, you have to either JIT or interpret code, which also has overhead. This is why Android's original Dalvik VM started by converting the Sun bytecode format to a register-based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize for native code: that means code-caches and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate). Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience, even though it's platform agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.

I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.

Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was so theres that too.


> The original interpreted stack-based JVM model is really bad for performance

And we addressed that today by launching a copy of Chrome with every app?


Isn't that a tooling problem, not a language/VM problem?

If the apps were distributed as PWAs, or had a shared VM (kinda like flatpak?), wouldn't that solve your nit?


yes

I think it's hard to beat the tide that is the web as a content and app delivery system. The web is also getting all the billions in investment from every massive faang.


> So, JVMs and languages that abstract the underlying machine are always going to have overhead.

Well, so JavaScript and WebAssebly isn't that great either in the end?

> The original interpreted stack-based JVM model is really bad for performance because you can't do great optimizations on the code, because you can't have a great view of the operands that are being defined and then subsequently used; on top of that, you have to either JIT or interpret code, which also has overhead.

What a paragraph. But it's kinda false.

WebAssembly, you know, is also a stack-based virtual machine.

JavaScript might not be a stack-based virtual machine, but you're interpreting it every time you run it for the first time. How is that faster than bytecode? It isn't.

In fact, modern JavaScript is fast specifically because it copies the same workflow as the Java HotSpot JIT optimizer: detect code hot spots, compile them to native code, and run that instead of VM code.

> This is why Android's original Dalvik VM started by converting the Sun bytecode format to a register-based format. So, now you have a format you can do some optimizations on: great. But you still depend on a VM to generate and optimize for native code: that means code-caches and that means using excess memory to store the fast optimized code you want to run (which could have been evicted, so more overhead when you have to regenerate).

Nope, that is totally not the reason. Dalvik was done because it was believed that you needed something that starts faster, not something that runs faster.

Those are 2 different optimization targets.

It was pretty well known since the start of Dalvik that Dalvik had very poor throughput performance, from 10x to 2x worse than HotSpot.

The reason why we don't have Dalvik anymore on Android is that it also didn't start that much faster either.

That of course is not because register machines are worse either, but because nowhere near enough optimization work was done for register type VMs compared to stack type VMs in general.

> Next you have frameworks like the classic Swing in Java that were frankly implemented with priorities that did not include having a really great and responsive experience, even though it's platform agnostic as far as the way it draws widgets. These days we can take GPUs for granted to make this approach work, but a lot of the Java UI stuff came from another era.

Ok, but does your favorite, non-web GUI framework use the GPU, and use the GPU correctly at all?

Even on the web it's easy to "accidentally" put some extremely expensive CSS transformations and animations and waste a whole bunch of GPU power on little things.

> I am not really sure if I am right here, but to me all this means that to have made the Java system work well for modern PCs and mobile it would have required a ton of investment. As it turns out, a lot of that investment went into the web and android instead of polishing Sun and Oracle's uh... product.

You're mixing things here. "Sun products" were very expensive UNIX workstations and servers. Not things for your average Joe. Those very expensive Sun workstations and servers ran Java fine.

Java itself is a very weird "Commoditize Your Complement" ( https://gwern.net/complement ) attempt to commoditize this exact very expensive hardware that Sun was selling.

From Sun. Marketed at very high expense by Sun. A self-inflicted self-own. No wonder Sun no longer exists.

> Java's also kinda been sidelined because for years Oracle threatened to sue anyone that dared fork it as Google had, and Microsoft kinda spent a decade making C# and .NET more confusing than it already was so theres that too.

C# not having a nice GUI is another story: that of Windows-land never having had anything above the pure Graphics Device Interface that stayed stable for long.


Fantastic demonstration of Cunningham’s law, but I think you missed the point.


It's got more security holes than Swiss cheese.


You're living in the past. Applets and Flash lost against the HTML/JS/CSS stack and Oracle owned up to it. Applets are terminally deprecated now.

Edit: admittedly, one of the reasons for that was that the sandbox was indeed prone to security holes. Also, the developer ergonomics of the SecurityManager were unsatisfying for both JDK and app developers. Good riddance.


I'm living in the future. We have golang and rust now. Java is a pile of junk and should be relegated to the past.


Golang's only consistent advantage over Java is lower latency on compilation, startup, and GC. OpenJDK will eventually level the playing field with Project Valhalla. In terms of FFI and language features Java has already caught up. And faster startup can be achieved with CRaC.

Rust in turn is not competing with Java.


Applets are still a thing. We just call them webasm and canvas these days.


The crucial difference is that these technologies are embedded differently. Java Applets had access to dangerous APIs that had to be restricted by the SecurityManager. Also, the JVM was installed externally to the browser, turning it into an uncontrollable component which made the browser vulnerable in turn.

The newer technologies were designed from the beginning with a well-defined security boundary and are based on a language that was designed from the beginning to be embedded. Everything is implemented within the browser and can be updated together with it.


It's the JS kiddies. They got Node and then decided the whole world should be written in Javascript, lol.


If Java applets were anywhere near as popular as JavaScript web apps, I'm sure Java desktop would be as popular as Electron.

But Java applets aren't popular.


As someone who uses Linux as a daily driver, I can recognize these gargantuan apps a mile away and stay away from them. They are absolute hogs of system resources, and for something simple like Etcher there's no excuse.

Things like Electron are good for devs but bad for users. We have more computation power than ever and yet programs still run slow.


Oh, it gets better. Even the default Weather app shipping with Windows 11 is also an Electron pile of trash that uses ~520 MB of RAM. Just let that sink in: 500 MB of RAM just to show you the weather forecast for the day and week. That was the entire system RAM of my Windows XP gaming rig.

Same for the Widgets app, it's not only bad because it shows you news and ads when you open it, it's worse because it's also, you guessed it, an Electron app.

Some VP in Redmond must be off their meds.

I assume Microsoft just can't find devs to write C#, their own damn programming language for their own OS, with one of the dozens of frameworks they have for Windows GUIs, such that they need to resort to using Electron for what are just Windows-only apps.


The Weather app in Windows 11 is a UWP .NET Native wrapper around WebView2 controls. It's exceptionally silly that it's basically just a web browser with predefined tabs and that it uses so much RAM, but it's not Electron.


Oh my bad. I think I saw it call some Edge system components in task manager and I assumed it must be Electron.


Good lord, that's crazy, haha. You'd think with all of their different frameworks, one would have been more suitable than starting from scratch with a browser tab, jeez.

I recently upgraded to 10 because of Steam requiring it in a few weeks, and it's been an adventure. Lots of crashes and restarts that I didn't ask for. I really don't know who exactly modern Windows is for, because I'm a gamer and programmer and it's not been good for either of those tasks...

Windows 7 was solid and I almost never had issues out of it. It booted and got out of the way.


Worse for users than nothing? It shouldn't be a default, but if it's that or nothing-- as it often is when it comes down to limited resources-- I think it's better than nothing. If you're looking to make a useful tool for a broad audience that must run locally, you have to support Windows, because that's where 80% of the users are. You should support OSX, because that's where 15% of the users are. That's two codebases with dramatically diminishing returns. You need a damn good reason to justify adding ANOTHER codebase on there to scoop up the remaining handful of users on Linux.

Also, aside from startup time, I don't have any trouble with electron apps running slow on my machines. I think many developers are conceptually annoyed with the absurd, bloated architectural underpinnings rather than the experience that translates into when using them. Perception means a lot when judging performance, and I'll bet with most end users using, say, slack, the speed of their internet connection affects the speed of their work more than the speed of the application.


I'm really not swayed by "we must use this turd because the alternative is nothing". "Nothing", to me, is a technical challenge and a sign I should probably start writing the thing myself.

Yes, not everyone has the skill or time to do that, but it's also no reason to accept half-baked solutions that don't take the user's system resources into account. Compute may be cheap but it's still a resource we need to use wisely. Not everyone is running a system like the developer's Macbook Pros on 5GHz wifi hooked up to fiber.


If you only care about the best technical solution, and don't care about economics, then you almost, by definition, only care about what existing FOSS users are doing, and I don't find that scope limitation useful in any way. I love FOSS. I've been a regular contributor to FOSS for decades. But the impact user-facing FOSS apps have on the overwhelming majority of users is miniscule, as is the comparative number of regular FOSS users. Server apps? Apps developers use to make the apps everyone else uses? Absolutely. A music player? A chat app? Nope. Software that makes a visible impression on users is commercial. That's just reality. And companies that don't consider ROI on the products they create aren't companies very long.

The most popular as-is FOSS app for users is probably Firefox with a browser market share neck-and-neck with Opera and Samsung Internet, and everything less popular might as well not exist among probably 99% of users. Why? It's certainly not performance, I assure you. It's because it's poorly designed and users find it infuriating to use. Sure, you can find people complaining about their bloated slack client being slow on their machine. You think that's bad, find a professional photographer and ask them about the one time they tried to use Gimp.

I spend a lot of time talking about how FOSS could be a lot more usable to end users, and technical supremacy isn't it. If you showed your average end user an Electron app with an intuitive, professional design that gets the job done well enough, and then you showed them the blazing fast Linux-native version with a typically awkward homespun interface, I will eat my hat if they don't choose the Electron version. Sure, in a perfect world, all tools would be forged specifically for their intended purpose. In reality, you are in a minuscule percentage of people who would rather have nothing than something that doesn't perform optimally because of its bonkers architecture. But if you actually want to maximize the usability of any given tool, the only reason developers automatically go to performance is that to a hammer, everything looks like a nail.


Your postulate that it is electron or nothing is wrong from the very start.


You postulate that I said that, which is wrong.

> It shouldn't be a default, but if it's that or nothing-- as it often is when it comes down to limited resources-- I think it's better than nothing.

Sometimes it is. Sometimes it's not. It's certainly an option that's very efficient for dev resources, which is often the primary limiting factor. It's certainly the only real option if you've already got a team of web developers, which is very common.

The current state of commercial software supporting Linux with native apps is a pretty good indicator of how companies are viewing this equation. The amount of resources it takes to make a native Java app is vastly different from the amount it takes to make a native Electron app. If you don't understand how that would open up the possibility of supporting Linux in many cases, I'm not sure what to tell you.


Look, business is about maximizing profits and minimizing costs. Business should absolutely not be looked to as an example of an entity that makes sane or suitable tech decisions over the long term. Their goals are different from everyday users, and both are different from power users and programmers.

Why should normies suffer through worse software than those that know what they're doing? Web-based "native apps" are that worse thing.


Reread what I wrote. I'm not saying they're sane technical decisions, I'm definitely not saying that Electron is an objectively good architecture for user-facing apps, and I'm definitely not saying that profit is a good way to determine an ideal engineering strategy. But trying to discuss the viability of an option in real-world software creation without acknowledging that profits are often the driving factor in these decisions means you're not really discussing it.


Part of that is because profit should not take precedence when making a technical decision unless you are a business.

The other part is that if we accept only things that lead to easy profit, we'll avoid all sorts of things that take more initiative but become better products. Short-term stock price chasing is not a way to make tech decisions. It's a way to make profit decisions.

I seem to be on a website where nobody can picture doing anything without taking money from someone else.


> I seem to be on a website where nobody can picture doing anything without taking money from someone else.

And by the way, I'm not remotely capitalist, but I don't have the privilege of expecting the rest of the world to operate like I wish it would. In the US, unless you're independently wealthy, you've got to work to eat and feed your family, as I do, living together in a tiny apartment in a modestly priced city. Unless you're well-supported enough to spend a ton of time volunteering, which I am currently not, you've got to pull in money for what you do far more often than you give it away. Ignoring the realities of resource distribution in our society means you're more interested in patting yourself on the back for political and ethical purity than actually making a difference in people's lives. I've met a lot of people like that in political circles: their parents supported them and they were more interested in 'cred' and feeling cool than progress.

After years of hoping volunteer-driven FOSS would revolutionize our world, I now realize that the preponderance of developers are way more interested in coding as an intellectual exercise than solving real people's problems with their code. Think I'm wrong? Try putting in a PR, or heck, even a comment on an issue proposing some way to make something easier for non-technical users, and if anybody even responds, it will be to flame it into outer space while essentially saying "they should just RTFM," or reflexively bikeshed it into oblivion. But what about all of those amazing, well-loved, heavily used FOSS apps popular among non-technical users, you might ask? Blender? Signal? Tally up the ones that aren't grant-funded and managed by people tasked with making the biggest impact possible with the resources they've got. I'd be pretty surprised if any of them had the luxury of choosing their tech stack completely independent of the resources required to develop with it.


I'm betting you haven't actually had to make decisions like this in a role where you were responsible for getting the most out of a limited set of resources. It doesn't even have to be commercial: even in well-funded non-profits that I worked with, maximizing the impact of your work is more important than maximizing the quality. If we were talking about shaving 5% off labor costs to put the extra trim on the shareholders' Lamborghinis, that's one thing. If good-enough quality will make triple the impact per grant dollar spent, the correct answer is pretty clear. If technical purity means the project isn't feasible with the allotted budget, it's even clearer. It's about how much you can do with the amount you have. Nobody who gives you money to promote childhood STEM education is going to care how much more technically correct your solution is if you run out of money before you can ship, and they certainly aren't going to give their new cancer-research grant a haircut because you wanted to save on end-users' ram usage. Using the existing giant labor pool of web developers to make an electron application isn't just a little less resource intensive than making a Java application for nearly any purpose-- it's vastly less resource intensive. That's just plain old reality. I'm not saying it's good, or worth praising; it's just the way our world works outside of volunteer-driven FOSS.

When I got paid by a nonprofit to work on MIT-Licensed FOSS full-time for years, these questions were as important as anywhere else where resources aren't free. If I had my choice, I'd have chosen Elixir and Phoenix to do many of our web projects because BEAM had built-in tools to solve a lot of the odd architectural problems we had. But if you suddenly had to hire a few Elixir developers to do something that would work fine with Node.js, just not as elegantly, that grant isn't going to get any bigger just because I'm making sound technical decisions. At the end of the day, your software is valuable for what it does for people-- not what it is.

And really, how much does using Electron limit the actual utility of the tool itself among people with limited compute resources? I don't mean what's the difference in the memory footprint, I mean how often are people unable to solve their problem using an Electron app because their computer can't run it? You obviously don't need bleeding edge hardware to run an Electron application as hyperbolically suggested-- it's not much different from using, well, Chrome. Projects that work with really, really under-resourced populations, such as the houseless, don't make desktop applications anyway-- one that I can think of off the top of my head used SMS because it was so much more readily available than a computer you can install an app on.

Developers like to focus on things like memory and compute resources because they know how to address them. Hammers looking for nails. But if you really dig into users' biggest frustrations with software, as I have in my interface design work, performance is almost never mentioned. When it is, they're almost exclusively dealing with shitty internet access, not slow local application execution. If FOSS projects wanted to do more good in the world rather than having a hobby project to nerd out about, they'd actively solicit designers to figure out what real users are actually running into when trying to solve their problems, rather than just assuming the solution is more technical correctness. They could be one of the few volunteer-driven FOSS projects that isn't solely usable by people who already have a working mental model of software development.


I'd rather use Wine than Electron tbh.


I like to shit on electron because plain old tcl/tk is a better/leaner alternative. Apart from the myriad of other[0] alternatives that exist.

[0]: https://github.com/sudhakar3697/awesome-electron-alternative...


I think that anyone who recommends Electron to a young developer interested in learning native application GUI programming should be slapped. It's the wrong tool for a specific job. That's a pretty niche use case to judge overall worthiness against, though.


I suspect the biggest issue with Electron is that it leads to lots of devs packaging various V8 versions individually with their apps. On Windows, they have been trying to get devs to switch to something called WebView2, where the OS provides an Electron-compatible Chromium whose resources, unlike Electron's, are centrally managed by the OS.


> and they don’t break as often.

Meh, kinda.

It's no accident that proprietary software vendors only target RHEL, SUSE and Ubuntu.

RHEL is a perfect target for stable software and Ubuntu might be as well if you decide to only support LTS releases.


Targeting only a handful of frozen releases is not ABI stability. ABI stability is when any app can be compiled against any release and run on any future release (and ideally older releases too).


> ABI stability is when any app can be compiled against any release and run on any future release (and ideally older releases too).

Hang on, if that's the standard why is MacOS getting a pass? I'd believe that Windows meets that bar, but I see posts on a routine enough basis about Apple forcing rewrites, unless I really misunderstood something there.


Apple's actually quite good at this, but they do break things on purpose from time to time for reasons which they announce pretty publicly at WWDC when they do (32->64bit, deprecating GL, etc).

So, for example, a dev can target an app at iOS 8 and it still works fine on iOS 17. That's almost a decade of OS updates that didn't affect an app. Here's an example:

https://apps.apple.com/us/app/bebot-robot-synth/id300309944


Similarly, it’s possible to compile a Mac app that targets PowerPC, x86, and ARM supporting all of the versions of macOS implied by that spread of CPUs.

X Lossless Decoder[0] is one such app, supporting all three archs and Mac OS X 10.4 up through macOS 14.x (current) in a single binary. It'll work just as well on that circa-2000 400 MHz G3 iMac you picked up from a local yard sale as it will on a brand new M3 Pro MBP.

[0]: https://tmkk.undo.jp/xld/index_e.html


Such a platform is doomed to never take a step forward because "oh no, something changed and now I have to increment my dependencies".


The Apple ecosystem of platforms takes steps forward all the time, and they do a pretty good job of keeping binary compatibility across releases while they are at it. They partly do this by only shipping C, ObjC and Swift platform frameworks, though.



