"Rust does not have a stable ABI" (gnome.org)
163 points by caution 11 months ago | 215 comments



This whole ideology of "the user should get all their software from their Linux distribution", and its implicit consequence that there's no clear difference between system software (internal tooling) and application software installed by the user (Audacity and friends), should just die already.

I want my OS to just provide a decent interface over which I can install application packages myself, packages that I get from my own sources, just like on Windows. If those packages are statically linked, fine. I know most Linux users disagree, but I don't want the relationship between software vendor and user to be distorted by some distro maintainer, or to be limited to a package manager. I want to be able to store application installers in my filesystem.

I also want my distribution to hide its Python binary from me so I can install my own Python without breaking the OS.

Basically: stop assuming that I want to live under your wing. I just want you to give me a nice desktop environment, a terminal and a well documented way to install third party software.

I know distro developers don't owe me anything, and it's fine if they do something else, but this is the actual reason why Linux isn't used on the desktop.


First of all, you can do that already. No one stops you. So I guess what you actually want is someone to fix the problems that come with your wishes, but you don't agree with the way the distributions try to do so. You know what? They really don't owe you anything. Go ahead, build the system you want. But please don't come along whining when it ends up unmaintainable, unstable, or starts acting against you, like, you know, Windows does.

Second, linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure you can get your software from the vendor. But then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system, package maintainers do.


> But please don't come along whining when it ends up unmaintainable, unstable, or starts acting against you, like, you know, Windows does.

As a die-hard linux user, this line of thinking really plays against Linux. Software distribution hasn't caused problems in windows since XP. Likewise, all mac users seem pretty happy with .app bundles.

> Second, linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure you can get your software from the vendor. But then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system, package maintainers do.

Do you realise how insulting this comes across? As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code of my software until it builds - who cares if it crashes as soon as you do more than open it anyways. I'd rather not have my software packaged than have it like that. My subjective experience from the software I use is that stuff like AppImage made straight by the dev is generally much more stable and works much better than whatever chthonian hack a debian packager decided to apply.


>As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code of my software until it builds - who cares if it crashes as soon as you do more than open it anyways.

That is, at least in my experience, very false. They often do care about build systems more than the maintainers, as they're the ones who have to make it work for everyone.

I've worked with the Gentoo packagers, and the people I have interacted with have a lot of knowledge about how to distribute software. It's not for nothing that the Gentoo community has to push changes upstream when upstream assumptions don't hold up downstream.


> They often do care about build systems more than the maintainers, as they're the ones who have to make it work for everyone.

Every interaction I've had with packagers has been them trying to make the build system work for their single distro. None of them have ever tried to help make it work for "everyone" running linux, much less spend any time thinking about the other two platforms that have 90% of the actual users.

IME they also tend to be much more focused on making the build system work on their distro than making the actual software work. Not surprising given that that's what they know, but they tend to be perfectly happy to make changes which cause functional problems to get things to build.


Software distribution in Windows and Mac comes with a lot of problems; they are simply problems that you personally don't see as part of the class of "distribution problems". Please don't tell me you've never encountered post-install errors with Windows and Mac software. Linux packages have a higher bar to meet for integration quality, and also have a more difficult job to do because of the more fragmented nature of the platform.

> As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code of my software until it builds

I'm sorry this happened, but I can assure you that the vast majority of maintainers do not do anything like this.


>Please don't tell me you've never encountered post-install errors with Windows and Mac software.

Mac user here, can't ever remember encountering post-install errors with Mac software in 15+ years (bugs sure, like every software has, but nothing related to incomplete installation or whatever)...

>Linux packages have a higher bar to meet for integration quality

Isn't it actually a lower bar?

They don't have to follow the platform's look and feel and UI libs (there isn't a standard, a distro can package Gnome, GTK+ only, KDE, XFCE, even CDE and Athena stuff, plus all kinds of ad-hoc UIs). They don't have to work with the same shortcuts, use the same system for configuration, or even play well with the same window compositor...

When it comes to "integration quality" in Linux it's mostly "compiles with the distro's version of libs".


> Please don't tell me you've never encountered post-install errors with Windows and Mac software.

Sorry to say, but I have to raise my hand here.

The only post-installation issues I ever had on Mac and Windows were related to drivers, not applications.

Plus since like a decade or so, I rarely "install" software on Windows anyway, because all the tools I use come as a "portable" version, which is just a ZIP archive.

In an Apple-like fashion, these "just work" after unpacking to an arbitrary location (e.g. a USB-stick or SD-card)...

What am I doing wrong here? ¯\_(ツ)_/¯


One specific problem I always had in Linux but never had on other systems is installing a newer version of a software than the one that is present or supported by the official package manager.

Most of the time it's difficult, and sometimes impossible, which is very user-hostile; I, the user, should be deciding which version of the software I want to run, not the distribution.


> One specific problem I always had in Linux but never had on other systems is installing a newer version of a software than the one that is present or supported by the official package manager.

Because the newer version (of the lib, I presume) changes the ABI and borks all the packages that were compiled against the old version?

Trying to upgrade something like Mesa from Rawhide or whatever is just going to mess up your install like nobody's business but if you just need a couple minor libs (for a specific program) I've not had any problems.

For programs, I've had ones running for years after they got dropped from the distros with zero problems or that I built and installed from some random srpm I found on the interwebs.

In all honesty the only 'song & dance' I have to do on a semi-regular basis is rebuild my python modules when Python gets updated on a distro upgrade.


There are distributions that cater for the bleeding edge, there are distributions that focus on stability, there are plenty that fall somewhere between.

But with any distro, you can install whatever you like wherever you want. You, the user, make that call. You, the user, should have the knowledge to tidy up if it goes wrong.


What if I want a stable distribution, but need latest versions of some software?

Installing anything outside of distro repositories and Flatpak (& alternatives) is, in most cases, PITA.

Tracking down missing dependencies, figuring out which libraries are compatible, downloading source for those libraries, because the system provided are outdated, etc. You can literally spend a whole day just building a somewhat simple application.

I'm not a fan of binary blobs, as is customary on macOS/Windows, but at least I can actually use an application in a minute or so, after downloading it. There is no fussing around, it just simply works.


> There is no fussing around, it just simply works.

Because they have huge teams maintaining API and ABI stability.

On any Linux distribution worth its name you can also use the software within seconds after installation. The teams are just smaller and their approach is more centralised and principled, but I never had a problem using, e.g., emacs after installing it.

You need to fuss around with software from third parties. Simply because these third parties do not care enough about your Linux to make the installation work well. These same third parties also do not care much about how stuff is supposed to work under windows, they just drop their binaries and assume that the windows ABI will somehow keep working.


You should try out Arch Linux or one of its derivatives. I never have any of the issues you're talking about, we're almost always at the bleeding edge, and for packages that aren't, we have the AUR, which is straightforward to build into your system from upstream git repositories.


Arch looks tempting but my feeling is that a lot of software built by devs who run Arch needs initial help to run on Ubuntu/Debian. The other way around works better (software built to run on the latest Debian usually runs everywhere).

Maybe that observation is incorrect, please give me data points. Also this won't mean you have to run Debian (but maybe a VM with Debian in it).


If you need to run something that only runs within debian, you'll likely be able to do that in a straightforward way using LXC or docker containers. I wouldn't reach for a VM unless I needed the extra layer of security offered by the VM.


> Please don't tell me you've never encountered post-install errors with Windows and Mac software.

I don't remember the last time that happened. Not that I'd exchange my ArchLinux setup against either but you gotta be honest - installing stuff on windows and mac is a solved problem.

However, the number of times a Debian or Ubuntu update borked something... for that I don't have enough fingers nor toes - I had at least 3 installs of debian in the last ten years that I fucked up enough that I couldn't recover them - in comparison, the last time that happened to me on windows was in the '98 era. Not that windows doesn't suck - I still get the occasional BSOD even on win10, while Linux kernel panics are... uncommon for me (but that's just my experience).


> Please don't tell me you've never encountered post-install errors with Windows and Mac software

For most user-facing Mac software that comes with a GUI, there are no post-install steps. Installation simply consists of copying the application bundle into the /Applications folder.


What do you mean by post-install errors?


> As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code of my software until it builds

That's sad. But fortunately, not all packagers are like this. As a software dev too, I have a great experience with packagers from various distributions. When they have an issue (either build or runtime) they tell me about it and we discuss a patch together that I include in my next release.

> My subjective experience from the software I use is that stuff like AppImage made straight by the dev is generally much more stable and works much better than whatever chthonian hack a debian packager decided to apply.

Sure it works if you are in the most common case (Linux on x86_64), but it has some downsides:

* It only supports CPU architectures the dev can build for

* It only supports operating systems the dev can build for

* The dev needs to publish an update every time there is a vulnerability in a bundled dependency (assuming they are even watching for vulnerabilities)


> As a die-hard linux user, this line of thinking really plays against Linux. Software distribution hasn't caused problems in windows since XP. Likewise, all mac users seem pretty happy with .app bundles.

If you don't know what you are missing, you won't miss it.


Agreed. What happens in two weeks when a security vulnerability or data loss bug is identified and fixed?

Nothing, unless you have an application-specific upgrade daemon running. Sounds like a good design, right? :)


> What happens in two weeks when a security vulnerability or data loss bug is identified and fixed?

The automatic update process breaks software by making incompatible changes, thus preventing a security problem by just not allowing me to run the software in the first place?


Don't Linux distributions like Debian avoid this by backporting security fixes rather than upgrading to an incompatible version? https://www.debian.org/security/faq#oldversion


> Software distribution hasn't caused problems in windows since XP.

The windows way of installing software (download random .exe or .msi files from random websites on the internet) has forever been a problem for non-tech-savvy people and for the people helping them manage their computers.

Every modern OS comes with its own package manager nowadays for a good reason.


Windows has Chocolatey, which works quite okay. I usually use this with automatic update scripts to keep the software on my parents' windows machines up to date.


Maybe differentiating between types of software could help? Desktop apps or games don't seem to have the same requirements as system and development stuff. As a Mac user I'm indeed pretty happy to have both app bundles and the Homebrew subsystem depending on the type of software I want to install.


I don't really agree with the comment you're responding to, but your comment is needlessly incendiary and not particularly useful. All software critique has an assumed element of "if the developers wish to make this better for users like me", and so remarks like "[they] don't owe you anything" should be reserved for people who lean into specific developers with demands, not people earnestly offering suggestions to make the third-party software ecosystem better.


One of the big problems with the Linux Desktop evangelism community is that they've entangled their identity with their OS of choice, and thus interpret all criticism as a personal attack.

I still maintain that for all its many technical and architectural faults, the biggest thing holding back wider Linux Desktop adoption is the Linux Desktop community itself.


If you think that some linux distribution maintainer knows my software better than I do, then you are high as a kite.


That's not quite the right perspective.

You most likely do know your software better; after all, you wrote it. However, their concern is the integration of your software into a larger system, and they may well be more familiar with those issues than you are.

In all my years of (Debian) distribution maintenance, this was the most typical cause of disagreement between packagers and upstreams. It's very common for upstreams to make fairly arbitrary decisions which might make sense in the context of an individual project, but which are not appropriate when considered as part of the system as an integrated whole. Neither is wrong per se, but as an upstream, your focus is on your project and it's easy to miss the bigger-picture issues. Conversely, packagers focus upon the system as a whole, so their view is at a higher level.

In addition to this, don't just discount the knowledge and expertise of distribution maintainers. There are of course some who just blindly package up other people's work and aren't experts. However, there are others who are software authors in their own right, with decades of accumulated experience and understanding. The latter may well know more than you do, and might be able to provide some keen insights you could benefit from. There are an awful lot of developers who don't know how to make software releases properly, in a way that allows them to be consumed straightforwardly by others, with proper understanding of API and ABI issues, proper versioning, and proper use of build systems.


[flagged]


> Ok, smartass, where and how do you locate the fonts installed in a system?

Rabbit's nest but the commonly accepted community API for font discovery is fontconfig.
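
(If you want to poke at it from a shell, the command-line front end of that API looks roughly like this, assuming the fontconfig utilities are installed:)

    $ fc-list : family file | sort | head   # enumerate installed fonts via fontconfig
    $ fc-match sans-serif                   # resolve a generic family to a concrete font file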

> How do you know if the system is on battery?

Again, rabbit's nest but the commonly accepted community API for this used to be DeviceKit-Power. Then pm-utils. Now I think you're supposed to use UPower or poke through /sys/class/power_supply?

https://upower.freedesktop.org/docs/Device.html

I wrote the battery applet five years ago in gnome-shell and even I can't keep up.
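
For reference, the two current options boil down to something like the sketch below ("BAT0" is just whatever your battery happens to be enumerated as; paths differ per machine):

    $ upower -e                                                 # enumerate power devices
    $ upower -i /org/freedesktop/UPower/devices/battery_BAT0    # query one of them
    $ cat /sys/class/power_supply/BAT0/status                   # or read sysfs directly (Charging/Discharging/Full)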

> How do you make sure that the temporary directory your service needs is actually mounted and writable?

In theory /run and systemd-tmpfiles are new enough standards, but that hasn't stopped a misguided packager from misinterpreting a policy and patching my software to support /tmp, which was not a tmpfs on their distribution, causing users to blame me and send bugs my way.
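
(For the record, the tmpfiles.d route is a one-liner; "myservice" is a made-up name here:)

    $ cat /usr/lib/tmpfiles.d/myservice.conf
    # type  path            mode  user       group      age
    d       /run/myservice  0755  myservice  myservice  -
    $ sudo systemd-tmpfiles --create myservice.conf    # normally applied for you at boot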

For all the things it gets wrong, Docker removes an entire class of misconfiguration errors, which allows for more reproducibility and automation in configuring systems, and it does this partly by making the distribution irrelevant.

For Linux to be a world-class OS, it needs to step up its ABI and distribution game and figure out how to squash bickering over things like application menu standards.


> If you only ever develop desktop client software for Apple and Windows Systems where there is a whole fucking company to tell you these things via an API you are really not even in the same league as a developer that tries to ship software to different Unix like systems and at least somewhat adhere to standards and technical best practices that have been developed by a community.

Isn't this an admission that linux is worse? Less skilled developers can successfully create working applications for Windows/Mac because they have unified and documented APIs to access these operations. But on linux you have a heterogeneous system that uses standards developed 50 years ago and requires greater expertise to work with.

"My system is harder to use" is not a selling point.


> "My system is harder to use" is not a selling point.

Unless you're a Linux Desktop evangelist, then it is because it lets you feel smugly superior to people who don't like wasting their time with shit like this.


I've been running a dual boot system for half a year now (I had been running windows for more than a decade before that).

And the one thing I absolutely loved about Linux is that it does what I say. On Windows 10 there are things that are impossible to get rid of. That new browser that automatically adds itself to the desktop icons? Or the OneDrive icon in the explorer, which you can only remove through registry hacks (and which might come back at any time after an update).

Customising Windows is so amazingly hard and cumbersome, I was just blown away how easy and straightforward this was on Linux.

I once gave my mother an Ubuntu installation after I had been particularly annoyed by some malware she managed to catch on her Windows 7 machine. She has no idea how to use computers, but she really loved Ubuntu and told me it feels much clearer. She used it for nearly 4 years till the hardware died. To my surprise she had significantly fewer questions about how to do $x than before.

My feeling is that the Linux desktop can be great for people with very simple needs or very advanced needs. It is that middle ground where it currently sucks I guess.


> "My system is harder to use" is not a selling point.

It kind of is, when the reason is a corollary of fundamentally supporting letting you and your User community do whatever they want. I've used Slackware, Debian, Ubuntu and MacOS; and to be frank, at least with Linux, even if I do have some greater degree of exposure to dependency/ABI hell, I'm free to determine my own approach to meeting those challenges.

With Windows/MacOS, you have a 500 pound gorilla shoving sh*t down your throat because they decided it's better for them. I simply cobble together a workable solution for the use-cases my Users really need, keep it simple, well-defined and documented, and most importantly, kick to the curb any software or decisions that impact my freedom as a sysadmin to run the system I'm fine and dandy to maintain.

To hell with devs who don't respect rule number 1 of system architecture: the System one is helpless to influence or get to understand is the first thing to go when the chips are on the table. At least for me anyway.


> "My system is harder to use" is not a selling point.

No one is trying to sell you a system. If you want your software to work well on Red Hat, Debian, and BSD, then you either play by their respective standards or you let the maintainers do their job. Just stop whining that these systems are not windows and expecting someone to do that work for free, but exactly the way you want it done.

If you feel well inside Apple's walled garden, have fun there, but don't come complaining when some Unix tool or programming language does not work there.

And no, Red Hat or Debian is not harder to use than MacOS. It just supports way more nontrivial use cases out of the box. That comes with a certain complexity. If you don't need that, no one suggests you run it on your desktop.


> No one is trying to sell you a system. [...] If you don't need that, no one suggests you run it on your desktop.

I disagree. People have been recommending Linux to people with non-complex needs for a long time, especially since Ubuntu went mainstream. “No one suggests you run it on your desktop” is thus wrong.

> If you feel well inside Apple's walled garden, have fun there, but don't come complaining when some Unix tool or programming language does not work there.

Is this really an issue? As far as I’ve heard, macOS is a POSIX-compatible system with a BSD userland, and most Unix tools work fine. Checking out the Homebrew repos, it seems like all the Unix tools I’m used to are there. And even some Linux-specific systems like FUSE have been ported to macOS. As far as I can tell, macOS thus supports Unix tools and programming languages as well as most Linux distros.


"It's not harder to use it's just more complex" is a wonderful summation of Linux on the Desktop™


So your argument is that Linux is missing a massive number of standardized apis which other operating systems have had for years? That sounds like a massive gap that should be fixed and not something to just accept and shrug and have hacks to work around.


It's open source, go write them, then. I'm serious. That's the whole point of this article and thread ultimately.


The Linux ecosystem is managed by people, and convincing dozens (or hundreds) of people to support what I write (which is required for an API to be anything but an N+1 toy) is no small effort. I play enough politics in my day job and have little desire to do so in my free time.

As I see it, it'd take someone with a lot of political connections and political capital in the Linux ecosystem to be able to pull it off successfully. I have neither.


It's called an API. I'd say an operating system that doesn't have that, is not much of an operating system. Do you really think that a missing API is fixed by some distro maintainer packaging your app properly? No.


You are right on only one thing: 1 version globally is stupid. Global properties in general are stupid. Have a notion of public vs private dependencies and one gets the right amount of coherence, not too much or too little.

Otherwise, hell fucking no. This misconception is causing so many mistakes. Software is an ecosystem; I give 0 shits about individual libraries, programs, whatever, just that the end composition meets my criteria. The app bundle / flatpak / coarse-grained Docker vision of software is plain wrong, and will basically prevent future gains in productivity.

Nix and friends get it right: no single-version dumbness, everything is installed the same way, be it by the admin or by a regular user, and with proper notions of dependencies.

Socially, I understand where OP is coming from: the common view that distros are some crusty 90s holdover held hostage by a bunch of neckbeards who don't care about users not like themselves. But distros like NixOS put the user in full control. There's a feeling of de-alienation to using a NixOS machine that's really hard to convey to those who haven't yet tried it (and gotten over the initial learning curve).


Yeah, one version really is an antediluvian limitation. The ideal state would be something like this: I can have 100 different versions of a library, each stored on disk only once, and any application I install could use the specific version it wants as a shared library. Does NixOS provide this capability currently?


Yes. NixOS uses the Nix package manager (which you can install in any other distro too).

The Nix package manager installs each version of a package into its own directory, the trick being that each version of each package goes into a folder whose name is a hash of all the inputs of the build (files, build parameters, the specific build folders of this build's dependencies, etc.).

As a result, not only can you have different versions of a package: you can even have different builds of the same version (built with a different build configuration and/or different versions of its dependencies).

The concept is sound. The issue is that you need the ability to actually build the application (you need the source) and you probably have to adjust its build process to get it to work properly with Nix.
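
Concretely, a store populated that way looks something like this (the hashes and versions shown are purely illustrative):

    $ ls -d /nix/store/*-openssl-*
    /nix/store/<hash1>-openssl-1.1.1g    # several versions sit side by side...
    /nix/store/<hash2>-openssl-1.1.1k
    /nix/store/<hash3>-openssl-1.1.1k    # ...and even several builds of the same version
    $ ls -d /nix/store/*-python3-*       # same story for interpreters, compilers, anything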


Yes, just as the other comment says.

To me, Nix is "Noah's trampoline": no-holds-barred mud-wrestling with all the nasty details of Unix and software as it actually exists, all the way down, into something just ever so slightly saner, so we can sail over the floodwaters and land on the shore of an actually sane practice of computing.

And yes, I would basically want to describe it in those terms even if it weren't for your nick and vocabulary :).


I think having a separate package manager for every piece of software I use is just terrible. I also don't want to be forced to use application bundles.

You get isolation (re "hide its Python binary"), multiple package variants, a unified software management interface for all applications, etc -- from functional package managers like Nix or Guix.


And you only have to learn two new programming languages to use them!


Well, one, because you wouldn't use both of them.


To disagree with the least important part of what you said:

> but this is the actual reason why Linux isn't used in the desktop

I still believe the actual biggest issue with Linux on the desktop is graphics card drivers (and other aspects of the graphics stack like handling High DPI). Too many machines fail the basic test of 'can I install Linux, plug my screen in and have it behave sensibly'.


Just stick to Intel/AMD graphics and it's absolutely fine. Wifi/Bluetooth/LAN support is a bigger issue, tbh. Even audio can be an issue sometimes.


>absolutely fine

It's mostly fine. But high DPI, or worse, mixed DPI desktops are somewhere between unusable and sort of tolerable.

I dual boot my desktop, and I've given up getting my 4K main screen and old 19" off screen working together in Linux (and I've tried everything). I like moving IRC off to a smaller screen to glance at once in a while, so I never miss an internet argument, but mixing DPI monitors just doesn't work in Linux.


I do have to run a non-distribution-supported mainline kernel (with some DKMS config changes and some drivers installed from the distro's dev branch) for my relatively recent AMD GPU (RX 5500, released at the end of last year) not to freeze seconds after I start chromium, or at least twice a day if I avoid it. That's tolerable for a technical user, but definitely not "absolutely fine".


TBH unless you're on Intel you'll still probably have some trouble. That's still a problem though - you shouldn't have to (within reason) select your hardware carefully, ignoring a large part of the consumer market, to have Linux do basic things.


If you really want your system to work, you have to. This has been the case since the dawn of the computer age. Nowadays most desktop hardware is intended for Windows users. Therefore, hardware vendors make their stuff work with Windows systems and jump through Microsoft's hoops to get certified. Linux is just irrelevant for many hardware vendors. And without vendor collaboration, it is just plain difficult to write drivers that make full use of the hardware.


There are still weird AMD graphics bugs. I built a new Ryzen-based system with an RX 5500 XT, but I get weird GPU lockups all the time.


Graphics card drivers are a solved problem. Intel/AMD "just work". Nvidia works okay-ish if you don't mind closed source drivers and unless it's a card nvidia has dropped support for. In both cases, switching to Windows is not an improvement: all the bad parts of nvidia on Linux also apply to Nvidia on Windows.

HiDPI is mostly fine, with the exception of mixed DPI in xorg. Mixed DPI works in wayland. So I'll grant you that the combination of nvidia + mixed DPI doesn't work, because nvidia's drivers don't support wayland.

Otherwise I strongly disagree, and I'm genuinely confused as to why this myth persists. Wrangling drivers on windows is a huge pain. There's no one "update" button I can press to update all my drivers, let alone all my other software. I have to go through the device manager and manually right click and select "update driver", which is frankly nuts. And to update the graphics card driver I have to periodically go to nvidia's website and check manually? What year is it again? Why doesn't Windows do this for me?


Windows Update keeps drivers up to date except for ones installed manually.


It will not update all of them, just some. Clicking through the device manager on my work computer, it seems the driver for "Standard Enhanced PCI to USB Host Controller" was not up to date, and had not been updated by Windows Update. I never installed this driver manually.

If you're on a Windows machine right now, try it. Right click on a bunch of stuff in device manager and hit update driver. Clicking through on my machine and maybe 1/5 have an update.

The crazy thing is that it's a really painful process. It takes 15-30 seconds per device just to come back with "no updates" and you can't just select a whole bunch of them and do them all at once: you have to do it one at a time and go through the wizard for everything. And there are simply too many devices to do them all, so all of us are using plenty of outdated drivers. This is an absurd situation, it's 2020.

There are third party apps that do this automatically, but I don't know that I trust them.


This is likely a result of your system admins not including drivers in their WSUS or other patch management system.

WSUS is sort of a corporate Windows Update server where you can approve updates to groups of machines on whatever schedule you want.

A lot of WSUS admins skip drivers because they massively slow down the sync/approval process and take tons of disk space. Windows has tens of thousands of drivers.


Ubuntu Mate + Intel GFX + 2 4k monitors = Works perfectly™


Wayland or xorg?


xorg, but I'm open to wayland in the coming years.


I think the original reason for shared libraries, and the only true one, is that they're meant to save hard drive (and maybe memory) space. But the ratio between assets and code is now so big (media files, or data in data-intensive algorithms), with code representing almost nothing, that I don't think this optimisation is really worth it anymore.


There are also security implications. Update one shared lib with a security patch and all applications that use it are now using a patched version.


It's become increasingly common for desktop applications to aggressively and/or automatically update themselves the same way the OS does, often without even having to go through a "wizard" prompt of any kind. Off the top of my head Chrome, Firefox, VSCode, and Slack (probably my four most-used non-system applications) automatically update themselves whenever I restart them (and prompt me to do so when applicable).

Not to mention the fact that an increasing fraction of consumer-facing software now lives in the browser, where updating implicitly happens every time you go to use it.

Regardless of how you feel about these trends, they make out-of-date applications much less of a concern than they used to be.


This is a very often repeated argument in favor of giving control over shared libraries to the distribution, but it is mostly the distributions' marketing. In practice, most users don't really care about security. The very small portion of users who do care about security don't wait 2 weeks or more for the distribution to make fixes available; they fix or mitigate it themselves as soon as possible.

It is true that it is easier to fix one version of a library than 10 different versions. But if you need 10 different versions for different applications, you probably do not need to patch all 10 of them.


Most people don't care about security ... until they do.

Most people don't understand security and are not equipped with the necessary knowledge to correctly judge risk. As long as security is just something that gets in the way to get some job done, most people will just plow ahead, since not getting the job done right now has a high and easy to understand cost.


That's absolutely not true for desktop GUI apps because so many of them are written in Electron these days.

Even small desktop accessories carry their own copy of the Electron and Chromium libraries, usually around 150 MB. It would be a tremendous improvement to use a shared browser engine for this, but developers are resistant.


A shared browser engine, or any other shared library, would be ideal, and within a single project it is the preferable way. But in practice it is hard to synchronize the requirements of different applications, because they are different projects run by different people on different schedules. 100 times 150 MB is still only 15 GB, a minor part of a hard drive, and a very acceptable price for 100 functioning independent applications.


How many of those apps are running simultaneously? Often quite a few. The memory footprint of the engine is larger than the code size on disk. Users may be wasting gigabytes of RAM on trivial desktop tools.

The point of the web is to support multiple user agents rather than forcing the user to a specific browser. I don’t understand why that shouldn’t apply to web apps running on the desktop.


> The memory footprint of the engine is larger than the code size on disk. Users may be wasting gigabytes of RAM on trivial desktop tools.

That may be the case. Usually it is not a problem, there is mmap and swap and unneeded parts are paged out of physical RAM.

If it becomes a problem, one has to change tools or get more RAM.


Shared libraries are also important for apps that want to load plugins via the shared library mechanism.


I’m not a lawyer, but I recall seeing a comment that one reason companies currently prefer dynamically linked apps in bundles like Snap and Flatpak over static linking, is that dynamic linking permits more open-source libraries to be used without legal issues. I think it had to do with what legal precedents had been set for linking GPL/LGPL libraries with non-GPL binaries?


Not for the vast majority of libraries at any rate, but for libraries that most OSs consider part of their core platform (a concept eschewed by Linux Desktop) they are still a good idea.


Yeah, I'm happy to keep track of all my third party software that I install manually and I never forget about what applications I have installed so far and I always update my packages manually when there are any security flaws and I get to know about all the security flaws right away when they are discovered because I'm on all seclists. Even if I forget to update my third party apps/packages, my third party apps/packages remind me of the new updates and I never turn the update-notifications off.


> This whole ideology of "the user should get all their software from their Linux distribution"

This isn't true at all, and I struggle to think where you came up with this idea.

Linux distributions do distribute a hand-picked set of packages. That's essentially what a distribution does: distribute packages. Some are installed by default, others are made available. That's pretty much the full extent of it.

Yet, just because a distribution distributes packages, that doesn't mean you are expected not to use anything else. In fact, all the dominant Linux distributions even support custom repositories and not only allow everyone to put together their own repository but also offer a myriad of tools and services that let anyone build their very own packages.

Even Debian and Debian-derived distributions such as Ubuntu, which represent about half of the Linux install base, offer personal/private package archives (PPAs), which some software makers use to distribute their stuff directly to users.

So, exactly where did you get the idea that that so called ideology even exists?


You can live in this world right now.

1. Snap and/or Flatpak allow you to install GUI applications from most places nowadays. The internal tooling (system packages) is kept separate from the user-installed applications, which are effectively sandboxed in this way

2. Linuxbrew allows a Mac OS-like separation between your personal development tools and your OS's internal packages. Notably, this also allows you to install far newer tooling than your distribution would typically provide

3. Drop application binaries in ~/.local/bin if all else fails

If I weren't on a rolling release distribution I'd probably go that route. I hate being restricted to whatever my distro provides, I hate upgrading the entire world when new releases are made, and I hate third party repos and the hell they create reconciling everything together.

It's really the in-between state that's terrible. Either go full *BSD or Mac OS and separate the concepts or full Arch Linux (w/ AUR) and don't. All other ways of distributing software tend to be more server centric anyway


> Either go full *BSD or Mac OS and separate the concepts or full Arch Linux (w/ AUR) and don't.

Regarding the first alternative, I think it will be interesting to see how Fedora Silverblue [1] turns out. It’s basically going for an immutable base system coupled with Flatpak for apps the user installs.

Haven’t tried it myself, but for average desktop users, I think that sounds like a very good solution in the long run: a stable base system with up-to-date user-facing apps.

[1]: https://docs.fedoraproject.org/en-US/fedora-silverblue/


What do you make of Go's approach where the build outputs a single binary file [0] that, roughly speaking, 'just works'?

[0] https://stackoverflow.com/a/19286458/


Now the Go dev has to voluntarily update said binary whenever a security issue or bug is found in his code or any of its dependencies.

And you have to replace that whole binary everywhere it is installed.

Think about how many systems would still be vulnerable to heartbleed if every thing that used libopenssl had to be re-compiled and redistributed as a statically linked binary.

This proposed “statically linked” world might be worse than the current mess. See Docker.


> And you have to replace that whole binary everywhere it is installed.

Not if your package manager supports binary diffs [1], which for many platforms is already the case; other package managers would likely implement the feature if downloading large binaries became a problem. Right now I'd call it a premature optimization, as most people's bandwidth is dominated not by downloading updates but by streaming. But it is a solved problem: Fedora, for example, has supported package diffs for a decade.
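
(The underlying tooling is old and boring, e.g. bsdiff/bspatch; the filenames below are placeholders:)

    $ bsdiff app-1.0.bin app-1.1.bin app.patch       # small delta between two builds of the same binary
    $ bspatch app-1.0.bin app-1.1.rebuilt app.patch  # the client reconstructs the new build from the old one
    $ cmp app-1.1.bin app-1.1.rebuilt && echo identical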

Regarding the heartbleed, that is a valid and often repeated argument, but I think it is also overblown. One could e.g. have a static binary with embedded or bundled info about what library versions it was statically compiled with, and a way to automatically flag which binaries are thus vulnerable. Such a system would make it possible to push out new builds of high-priority targets like Firefox and Chromium without testing every other libssl-based app for compatibility first, thus cutting down on the response time for patching the most vulnerable apps. But the flagging would make it clear that the other apps need updating too. (Or in the case of automated builds, that the other apps need testing.)

I’m not saying that the whole OS should be statically linked, but for end-user apps I think it makes a lot of sense, and it’d remove a lot of friction where a library update can break a previously working app (that happens).

[1]: https://en.m.wikipedia.org/wiki/Delta_update


This is exactly what Rust does by default too, btw.


Thanks, I didn't know that.


Given the current state of the ecosystem, I think Go's approach is the sane thing to do.


So if you want to download precompiled static binaries and "install" them into your home directory or /usr/local/bin that's fine. You can do that, and it works, although now you're stuck in the same quagmire Windows users are stuck in.

My question though... why? Everything about that seems to me worse in every way.

Specifically regarding python, on my system right now, I have python 2.7, 3.8.5 and 3.9rc1 installed on my system and maintained by the package manager. When 3.9rc2 or 3.9.0 final is released it will update to that. It's configured to run 3.8.5 when I just run "python", although I can manually run other versions by running the command python2 or python3.9, and I could reconfigure the default to be one of the other versions if I wanted. The package manager has versions going back to 3.4. I guess I don't understand the desire to install python manually - what can you do with a manually installed python that you can't do with the one installed by the package manager?


> and a well documented way to install third party software

I have made a few packages for Debian ARM Linux when I needed them. I wouldn't call the documentation great, but it's not too bad either. Same with the infrastructure: the OS supports versioning and dependencies, can be configured for custom repositories and to verify signatures…

It’s not terribly hard to make them fully working, install/remove/start/stop systemd services, support upgrade/purge/reinstalls, matter of a few shell scripts to write. The command to install a custom package from a local file is this:

    sudo dpkg -i custom-package_0.11.deb
However, this all was for embedded or similar, where I had a very good idea about the target OS version and other environmental factors. Not sure how well the approach would work for a desktop Linux.
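
For anyone curious, the minimal skeleton was roughly this (package name, version, architecture and contents are of course just an example; maintainer scripts like postinst would sit next to the control file):

    $ mkdir -p custom-package/DEBIAN custom-package/usr/local/bin
    $ cp mytool custom-package/usr/local/bin/
    $ cat > custom-package/DEBIAN/control <<'EOF'
    Package: custom-package
    Version: 0.11
    Architecture: armhf
    Maintainer: Someone <someone@example.com>
    Description: Example package built by hand
    EOF
    $ dpkg-deb --build custom-package custom-package_0.11.deb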


Arch does this best. If you are displeased with the official repos, you can use the AUR.


Additionally, there's often already a package in the AUR for the software you want to install. Just gotta look at the votes and comments beforehand, and if you're feeling paranoid, the PKGBUILD itself.


>Just gotta look at the votes and comments beforehand, and if you're feeling paranoid, the PKGBUILD itself.

ALWAYS check the PKGBUILD when using the AUR!!:

https://www.bleepingcomputer.com/news/security/malware-found...


So do that? Distros provide packages but they don't make you install them. Download the applications from their website and stick them in /usr/local or your home directory. If the applications don't provide builds, that's their problem. We have AppImages for a single-file solution, but I've used plenty of application directories you can just unpack and use. Julia is one.


Put your stuff in /usr/local. Use GNU Stow for poor man's package management. Then you can have your programs coexist with the OS.
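
Sketch, assuming each program gets unpacked into its own directory under /usr/local/stow ("mytool" is a made-up name):

    $ ls /usr/local/stow/mytool                  # the program's own bin/ lib/ share/ tree
    $ cd /usr/local/stow && sudo stow mytool     # symlinks its contents into /usr/local
    $ cd /usr/local/stow && sudo stow -D mytool  # removes exactly those symlinks again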


Or use a BSD where System and packages/ports are two things.

/bin /etc = System

/usr/local/bin /usr/local/etc = Programs


…? You can have all of this today.

> this is the actual reason why Linux isn't used in the desktop.

Yeah, no.


Just curious, what do you think the actual reason is?


Hm, hard to find “the one reason”. I think the most important factors are:

* What people (not necessarily the users but corporate decision-makers) are used to (Windows)

* Enterprise manageability (achievable on Linux, but built-in on Windows)

* What is preinstalled (Windows)

* What some exotic business applications work with (Windows)

Almost purely soft factors, very hard to counter if virtually all the computers you can buy at your local PC shop come with Windows preinstalled.

My neighbors (70+) accidentally got hold of a laptop that didn’t come with Windows but Ubuntu and they’re perfectly fine with it. It still has Firefox after all. :-)


Preinstallation is as close to "the one reason" as you can get. Microsoft had a hard time getting Windows over with end users -- until Windows 3.0, when the average new low-end PC had a 386-class processor and VGA, and could reasonably run Windows. Microsoft pressured OEMs to bundle Windows, and turned Windows into something that could be reasonably expected to be installed on a new computer, and that attracted ISVs, which meant end users were more likely to use Windows to get work done.


> My neighbors (70+) accidentally got hold of a laptop that didn’t come with Windows but Ubuntu and they’re perfectly fine with it. It still has Firefox after all. :-)

What you think you just said: Linux is so easy to use anyone can do it and it works better than alternatives!

What you actually just said: Linux makes a great webkiosk!


And what else is it that the average PC user needs? Because I cannot think of a single thing.


That's because you have a disingenuous definition of "average PC user", a strawman, so that your argument works.

What you call the "average PC user" is really the average smartphone user these days. PC users are doing office work, running a business, development, gaming, content creation in more ways than I can enumerate, etc.

People who have a bulky desktop computer in their home just to browse the internet are vanishingly few.


> People who have a bulky desktop computer in their home just to browse the internet are vanishingly few.

1. a PC isn't necessarily a "bulky desktop computer", it can also be a slimline laptop!

2. "browse the internet" and "office work/run a business/etc" are not mutually exclusive. Google Docs/Office 365 etc. proves that. I'd argue the majority of business operations requires some level of internet interaction these days.

It seems to me like you've come up with a strawman definition to bat away another strawman definition. Almost everyone I know owns both a smartphone and a laptop, and the vast, vast majority of their time on that laptop is spent in a web browser.


What argument even? I never said or implied that Linux is better or whatever. That was you.

I merely stated that a lifelong Windows user of advanced age can in fact use Ubuntu because it is, on the surface, similar enough. Same goes for a Mac, of course.


Largely it's the network effect. In days of yore, unix dominated the minicomputer market and VMS played second fiddle. These days, Windows dominates via the network effect and macOS plays second fiddle (and Linux third).


The article is pretty interesting and I learned quite a few things, but it looks like the author knowingly avoids answering the very issue they themselves raise.

In my opinion, the most important thing distros do that is incompatible with how Rust currently works is handling security/bug updates.

The one libjpeg.so for everyone is meant to fix libjpeg flaws for everyone. And it has many security flaws. And it has many users. There is no denying the way this is done by distros is good.

Now, to pick the author's code as an example: one of its dependencies is a CSS parser, which is prone to flaws. (Maybe not /security/ flaws, but still.) The question is, how is the distro supposed to handle that?

I know Rust has tooling for that, but it seems to me that with the perfect-version-match crate build system, every dependency will happily break its API. So let's say the author no longer has time to develop the Rust librsvg, and the cssparser crate has a major flaw which is fixed only in a new API-rewrite branch; then what? Distros are supposed to fix that themselves? Sounds like much more work for them.
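
For what it's worth, Cargo does give a downstream a lever for carrying such a fix across the whole dependency graph, as long as someone maintains a semver-compatible patched branch somewhere (the URL and branch name below are made up):

    $ cat Cargo.toml
    # distro-side override: every crate in the graph now builds against the patched fork
    [patch.crates-io]
    cssparser = { git = "https://example.org/distro/cssparser", branch = "security-backport" }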


> There is no denying the way this is done by distros is good.

Let me tell you, the way it is done by distros (CentOS, Debian) is far from being good. You will get the fix a long time after the bug is published. And you only get it if your system is recent enough.


I appreciated the author's approach. They did address many of the ancillary concerns while "staying with the question" about whether the dominant Linux distro way of handling libraries is indeed still the best way. Sometimes teaching or blogging on a topic helps a person clarify their own ideas over time.


Yes, every crate using different versions of its dependencies involves a lot more work for distros, especially when a crate uses a -sys crate (e.g. libgit2-sys) and libgit2-sys does an API break. Now every crate in the repo that uses libgit2-sys needs its dependencies updated manually, which is a rather time-consuming process (especially if the bindings in libgit2-sys are only built against some random git version).
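
(Concretely, for each affected crate in the repo the packager ends up doing some variant of this, and hoping the result still compiles:)

    $ cargo update -p libgit2-sys   # bump just this dependency in Cargo.lock
                                    # (an API break also means editing the requirement in Cargo.toml first)
    $ cargo build && cargo test     # then deal with whatever the API break broke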


From a security point of view, you shouldn't be using unmaintained libraries anyway, no? And if librsvg is maintained, then all the distro has to do is package the latest version.


This is a bunch of nonsense. Rust prefers static linking because it is predictable. These supposedly "huge" binaries are laughably small on a modern >1TB hard drive. If you're building a tiny embedded system, by all means optimize your builds system-wide, you have total control! But for a desktop, is this really a concern?


If you add 9MB to each binary on the system by statically linking them, and your system runs 200 programs (including system services) on average, your system now uses about 2GB more memory (to be fair, probably not all the time but it does increase memory pressure needlessly). Shared libraries aren't just about storage space. They also provide page cache sharing (memory for a shared library is only mapped once and the mapping is shared by different programs).

A slight aside but they also provide the ability to apply security updates sanely to all programs using that library on the system (just update the library and restart the programs, as opposed to having to install a rebuilt version of every program that uses the library). Is this a game-changing feature? These days probably not, but it is (again) just needless waste.
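
(Both halves of that are easy to see from a shell; sshd and libcrypto here are just examples of a long-running process and a library it maps, and the exact lsof output format varies:)

    $ pmap $(pidof -s sshd) | grep libcrypto   # the same .so file is mapped, and its pages shared, by every user of it
    $ sudo lsof | grep 'DEL.*libcrypto'        # after updating the lib: which processes still run the deleted old copy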

And I say this as someone who is currently developing a shared library in Rust that will probably be included in a lot of distributions because I expect quite a fair amount of container runtimes will end up using it. (But I do also work for a Linux distribution.)


I used to think that way, but I have come to the conclusion that disk and memory issues should be handled by page/block-level deduplication.

Security updates are a plus, but introducing bugs through a shared library is a minus.

I would prefer we focus on robust application sandboxing instead.

At the server/datacenter level that's pretty much how it's handled anyway: you have isolated VMs/containers sitting on deduped storage.


Sandboxing doesn't actually solve the biggest security problems, because most systems are there to run applications. That means that the interesting things on those systems are inside of some application's sandbox. It doesn't matter if I can't get root if I can get all the actually important assets on the system by cracking the application that actually processes those data.

... and in my experience, most application developers, and especially the developers who are the most truculently hostile to working with the system versions of libraries and the most likely to demand to bundle in their own copies for "stability" purposes, are really horrible about paying attention to security issues in their third-party dependencies.

When I'm being a user and a sysadmin rather than a developer or a packager, and I have the choice of trusting a developer versus trusting a packager, then, all else being equal, I will tend to trust the packager. Not that I trust either one that much.


But the linker doesn't try to keep code in a dedupe friendly format. I don't think this would help on a pile of Go/rust binaries.

Maybe there would be better sharing if I extracted the ELF sections first.

I just ran an experiment on the 20 largest items in my Mac's /usr/local/bin, many of which are Go binaries

    for file in `find /usr/local/bin -perm +u+x | xargs ls -SL | head -20`; do split -a4 -b4k $file `basename $file`; done
    ls | xargs md5 > md5.txt
    cut -d' ' -f4 md5.txt | sort | uniq -c | cut -c1-4 | sort | uniq -c
This gives: (number of 4kb blocks, frequency of sharing)

    128021    1
     12105    2
        13    3
         1    4
         1 1137
so about 10% of blocks are shared... once. And 1 block is shared 1137 times (all zeroes).

To your second point, we still want fast security updates and more secure code even with sandboxing! EG, sandboxing would not help with heartbleed -- critical data leaked.


System-wide shared libs are the opposite, where we are trying to optimize space at the file level rather than the block level, and the linker in turn optimizes for that, along with all the other baggage of shared libs.

So if block-level dedupe were the norm, linkers would optimize for that instead. Coming from the .NET / Node world, you don't statically link but you typically don't have shared libs either; all the libs are in the same folder as the executable, and dedupe works just fine there because you just have a bunch of duplicated files in different directories. Same with containers and Vm's all duplicating a bunch of the same system files. You get isolation and reuse because its done at a lower abstraction.

As far as security, again my point was about a trade-off: yes, you could potentially get security updates faster, along with breaking changes faster. If the sandboxed apps had a good way to stay up to date, then they could update upstream libraries after those are integrated and tested with the app. If you have an app store of some sort, it can watch for security issues in the libs these apps use, notify the authors, and prompt them to upgrade or be taken down.

I know the app store argument probably isn't popular around here, but from my experience it is a far more secure solution, especially for the non-technical user. If something is really so widely used, it should be an API provided by the OS; otherwise it should be bundled with the app individually, and code sharing should be done in source repos / build systems. At least that's my opinion after all these years.


> Same with containers and Vm's all duplicating a bunch of the same system files. You get isolation and reuse because its done at a lower abstraction.

Deduplication of containers is incredibly coarse-grained -- it's done on a whole-layer basis, which means that an update to any distribution package in a base image (resulting in a new layer hash when you rebuild on top of it) results in zero sharing between containers based on different versions of the base image. I am actually trying to rectify this problem with the whole OCIv2 effort -- but I wouldn't argue that today's containers make this problem non-existent. To be honest I hesitate to call the current model "deduplication" at all (yes, it might technically be deduplicated but it's not doing it to the degree you think it is).

And note that the page cache sharing you get from containers is also on the file level, so it's strictly no better than shared libraries (since the container filesystems contain shared libraries and that's what's being shared through the page cache).


You can't deduplicate if your 200 applications are using 20 minor versions times 50 mixes of compile flags. You're lucky if any two share the same output.


If your apps are using 20 different minor revisions and 50 different compile flags, they probably can't share those libraries anyway? At least if some parts are the same, they will dedupe at the block level even if the files are different.


Indeed, you can't share in that case. The work of the distributor, as described in the article, is to make them share by unifying all those and requiring libraries to have stable ABI.


> Security updates are a plus, but introducing bugs through a shared library is a minus.

The minus you mention is a consequence of the "move fast and break things" webshit philosophy that's so pervasive these days.

Depending on undocumented behaviour, and not properly documenting behaviour.

Edit: I also forgot to mention the ignorance towards security updates caused by modern PL package version locking, etc.

It is a culture problem and technology can only do so much to fix that.


> memory issues should be handled by page / block level deduplication

I seem to recall ZFS deduplication needing tons of RAM. Wouldn't the memory deduplication you suggest similarly require even more memory?


Both Mac and Windows already do memory compression.

I don't know how ZFS works but the dedup I have seen runs in the background consuming minimal resources.

I would think that if you're mmapping a file that has been deduped, the OS could just map the already-deduped block and cache it in memory once, and not consume more memory if another file uses the same block and is mmapped too -- but I'm not sure if any OS actually does that.


9MB is a lot. For scale: I have a Rust application with assets (font + music), a GUI via the GPU, networking, unzipping, and a bit of cryptography; it's all in there. 400 dependencies in total (yes, I don't like it).

By these metrics it's by far the largest Rust application I've seen thus far. When fully optimized, it's 11MB in size.


I just used the metric from the article:

  > librsvg-2.so (version 2.40.21, C only) - 1408840 bytes
  > librsvg-2.so (version 2.49.3, Rust only) - 9899120 bytes
For your application, I imagine most of your application's code size is in your dependencies -- so if each crate was a separate shared library (for instance) then you'd end up reducing the amount of duplicated code if you had 50 applications that all needed to use RustCrypto.


Yes, it's a concern. Firstly, hard drive space isn't the only reason to make binaries small - you have RAM pressure, cache pressure, and bandwidth to save. Secondly and more importantly, waste adds up. If you replaced every binary on the system with a Rust equivalent - which, to listen to some advocates, is the eventual goal - you could end up with a base system that's many times larger.

In a larger sense, something that sets out to be a "systems programming language" needs to be exactly the sort of thing suitable for a tiny embedded system, even if it isn't running on one, because everything else builds on top of it. The attitude that "we have tons of power, why not waste it" just doesn't fly at the very lowest levels. You can write a desktop application in Python, and it's broadly fine - but try writing an OS kernel!


There are patches that let Rust run on ESP32 systems, so I think it's entirely suitable for tiny embedded things. What makes it bloaty is the statically linked standard library, but that's not an unsolvable crisis: you can dynamically link against glibc, and IIRC there is a crate for just core Rust. That'll get you reasonably sized Rust applications.

And for writing kernels the same applies; without the stdlib, it gets a lot smaller very fast. I've done it, so I think I can count myself on having some experience there. The biggest part of my kernel is a 128KB scratch space variable it uses during boot as temporary memory until it has read the memory mappings from the firmware and bootstrapped memory management on a basic level. The remainder of the kernel (minus initramfs) is then about 1MB large, the largest part after the 128KB scratch space using about 96KB.
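
For illustration, a minimal freestanding Rust program looks roughly like this (assuming a bare-metal target or an equivalent linker setup; `_start` is just the conventional entry-point name here, not anything specific to the kernel described above):

    // No standard library, no default `main`: only `core` is linked in,
    // which is why binaries built like this can stay very small.
    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // Without std there is no default panic runtime, so we supply one.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // The entry point the firmware/linker script jumps to.
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }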


Regarding embedded, you generally only have one program. So dynamic linking buys you precisely nothing in that space.


Depends on if embedded refers to microcontrollers or just single board computers in various devices, but it wouldn't buy you much either way, yes.


>Firstly, hard drive space isn't the only reason to make binaries small - you have RAM pressure, cache pressure, and bandwidth to save.

FWIW, this does not seem to be empirically true. The savings are possible in theory, but don't materialize in practice.

https://drewdevault.com/dynlib.html

The largest and most used libraries, like libc and libpthread, would be dynamically linked by Rust anyway.


I seem to keep running out of space on my relatively small SSD where my OS is installed, so, yeah, it totally is a concern.


If applications were portable, then just put them on a different disk. This isn't rocket science, we've been doing it since the 80s.

Problem is that portable applications are an alien concept to Linux.


> Problem is that portable applications are an alien concept to Linux.

Static builds, AppImage, Flatpak, containers, etc. would like to have a word with you.


Buy a bigger ssd if you can, they're cheap. The last drive I bought was a 1TB Intel SSD for like $90.

But the pain point is on laptops like MacBooks which have comically small storage space for their price point. I think the base models have a measly 256GB SSD and charge crazy amounts for upgrades.


Wouldn't it be possible to have a C-like language that is somewhat backward compatible with C, and have the nice security features of Rust?

I get that Rust is awesome, but I'm not certain you need to make an entire new language just to have the security stuff.

Of course it might be complicated to do, but in the end, aren't there linters or other validators that can give the same security results Rust has, but with C or even C++?


These are valid questions a lot of people new to Rust have, so:

1. Rust is "backward compatible" in the sense that Rust code can use C libraries and C code can use Rust libraries - both via the C FFI [1]. The security guarantees only apply to the Rust code.

2. We've tried static and dynamic analysis of C to find security bugs for decades; there has been a plethora of research and commercial tools in the space. None fix the problem like Rust does [2].

[1] https://michael-f-bryan.github.io/rust-ffi-guide/ [2] https://msrc-blog.microsoft.com/2019/07/18/we-need-a-safer-s...


Almost any language can call C functions and we don't call all languages backwards compatible with C when they can merely interoperate with it.

Objective-C and C++ are the only two languages which offer backwards compatibility. AFAIK it's complete in the case of the former and there are some limitations for the latter.

None fix the problem like Rust does, but it's worthwhile to examine why: typical companies and developers have an aversion to paying for tools and for anything which slows down development. That's why usually those tools and languages which are reasonably user-friendly are more successful. Ironically that's both an advantage and a problem for rust: it's nicer to use than some C tools, but still not user-friendly compared to alternatives like Go or Java and in some cases even C++.


There is Cyclone, Checked C, Deputy. Such "C-but-weird" languages have an "uncanny valley" problem:

• "C, but safer" on its own is not very enticing. With no other benefits, it's easy not to switch, and instead promise to try harder writing standard C safely, or settle for analysis tools or sandboxes.

• People who use C often have backwards compatibility constraints. Switching to another compiler and a dialect that isn't standard C is a tough sell. You can still find C programmers who think adopting C99 is too radical.

• Programming patterns commonly used in C (rich in pointer juggling, casts, and textual macros) are inherently risky, but if you try to replace them (e.g. with generics, iterators), it stops looking like C anyway.

So "safer C" is unattractive to people who are tied to C implementations or don't want to learn a new language.

But people who do want to learn a new language and use modern tooling, don't want all the unfixable legacy baggage of C.

Rust dodges these problems by not being a "weird C". It's a clean design, with enough features to be attractive on its own, and safety is just a cherry on top.


Cyclone was not fully realized though.

And there are languages that try to keep to C and add some minor safety improvements, eg my language C3 (subarrays/slices, contracts, all UB has runtime checks in debug builds and more)


Sounds like marketing to me. It will convince young, gullible CS students, but not software engineers.


I think maybe you're conflating what Rust's borrow checker does with the notion of "security." They're related in that the borrow checker does some stuff that makes it difficult to create certain bugs that can be security issues, but they're not the same.

But to answer the question, I suspect no, and if you did it would be basically re-engineering the borrow checker and forcing Rust semantics into C and C++.

I don't know if anyone has proven it, but my hunch is that borrow checking C is undecidable.


1. You could possibly get closer, but you'd lose a lot. Most of Rust's "nice security features" are wildly incompatible with existing C/C++ code and inherent language features.

2. No. If C/C++ could be made safe* Rust would not exist.

* everyone agrees on this point, including the richest and largest software companies on the planet


> Most of Rust's "nice security features" are wildly incompatible with existing C/C++ code and inherent language features

Such as?


Lifetimes and the borrow checker.


A linter can do that.


Nope.


Just because you lack the ability to imagine something like that it does not mean that it is impossible. Sadly this kind of sentiment seems to be way too common in tech.


This wouldn't be possible in a practical sense. One could imagine a static analyzer of some kind which infers all lifetimes, though that would likely be unworkably slow on real codebases, since you need to infer programmer intent. Additionally, C static analyzers intended to catch all memory errors/UB etc. have existed for decades, and no one uses them, because they don't work or have too many false positives. For example, did you know that until C++20, allocating an int with malloc was UB in C++? Would you want to work in a language where the linter marked every other line as UB?


It is impossible given the spec and syntax of those languages, which is why Rust exists, to enable these features...


C lacks the information given to the compiler to do this. It can't be added without adding that to the language. So no linter can do this alone, without language changes.


You can use _Pragma/#pragma; regardless, though, you can achieve a lot of what lifetimes provide without any annotation.


Pragmas are extensions to C, not part of the C language standard. You can add things to C, but it then ceases to be standard C and becomes GNU C or MS Visual C or early C++ or something else. But not ISO Standard C.

And yes, just about anything non-lexical lifetimes can do you can do with static analysis. But the set of things you can do with Rust includes things that need explicit lifetime annotations, which C doesn't have.


If you think we could get this right in C (48 years old) and C++ (35 years old), don't you think we would have done it by now? :-)


No, people tend to be quick to create a new language in addition to lacking imagination and thinking that it is impossible to do in C.


Too quick to create a new language?! Rust was started in 2006 and announced in 2010. Its improvements are based on research done before and after C/C++ existed.

If you're the only person smart enough to solve the $64B problem so everyone can start writing safe and backwards compatible code in C/C++, please, go right ahead.


Yes, writing a mini-Rust compiler in C is possible, and you could write in that mini language. But the problem is: at what cost?


This has nothing to do with what I am talking about.


You certainly do not even need to make a new language. Just making a new C implementation that aborts on buffer overflows for example would be enough.


This is an intractable problem because checking for buffer overflows requires buffer bounds, and C pointers lack buffer bounds. Rust solves this with "fat pointers" (pointers that know their size), but a fat pointer can't be the same size as a thin pointer, hence it would be backward incompatible.
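
A quick illustration of the size difference in question (the exact numbers assume a typical 64-bit target):

    use std::mem::size_of;

    fn main() {
        // Thin pointer: just an address (8 bytes on a typical 64-bit target).
        println!("thin:  {}", size_of::<*const u8>());
        // Fat pointer to a slice: address + element count (16 bytes).
        println!("slice: {}", size_of::<*const [u8]>());
        // Fat pointer to a trait object: address + vtable pointer (16 bytes).
        println!("dyn:   {}", size_of::<*const dyn std::fmt::Debug>());
    }

The bounds needed for an overflow check live in that second word, which is exactly what a plain C `T*` doesn't carry.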


Your C implementation lacks "buffer bounds". C by itself doesn't. There is nothing stopping a C implementation from tracking the size of the data that a pointer points to (and these implementations are not even hypothetical; they actually exist right now).

> hence it would be backward incompatible

Rust is backward incompatible with C. A C implementation with fat pointers would not be backward incompatible with C. Depending on how it was implemented it could be ABI-incompatible with older C implementations but this is not necessary either.


The problem is that fat-pointer-based implementations are massively slower. Rust aims to provide safety without massive performance penalties. If one wanted safety without performance, they would just use Java or similar.


> The problem fat pointer based implementations are massively slower

I am challenging that. I do not believe that a C implementation that uses fat pointers needs to be any slower than Rust.

> If one wanted safety without performance they would just use java or similar.

Or C with a safe implementation.


Yeah like, rust also uses fat pointers. They should be equivalent. The issue is pervasiveness not speed.


Not an expert, but if you want a stable Rust-to-non-Rust ABI, you can use the C ABI as the article mentions. If you want a stable Rust-to-Rust ABI for FFI, there's a crate for that: https://crates.io/crates/abi_stable

It seems somewhat unrealistic to expect a really new language to commit, across the board, to the same sort of ABI stability as a decades-old language such as C.
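
For the first route, a minimal sketch of what the C-ABI surface looks like on the Rust side (the names here are invented for illustration):

    // Data crossing the boundary must have a C-compatible layout.
    #[repr(C)]
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    // `extern "C"` fixes the calling convention and `#[no_mangle]` fixes the
    // symbol name, so any C-ABI consumer (C, another Rust compiler version,
    // Python ctypes, ...) can call this from a cdylib.
    #[no_mangle]
    pub extern "C" fn point_length(p: Point) -> f64 {
        (p.x * p.x + p.y * p.y).sqrt()
    }

As far as I understand, crates like abi_stable then build their Rust-to-Rust guarantees on top of exactly this kind of #[repr(C)] foundation.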


With everything slowly (or sometimes rapidly) moving into containers (Docker, systemd portable services, Flatpak, snaps), I think the concept of a system library will probably become irrelevant at some point not that far into the future.


I don't think we're anywhere near a future where "ls" will be run in a container.


I used to think that we're not anywhere near a future where most applications came bundled with an entire browser, yet here we are.


It's still a system library, just one stuffed into a container.


Containers (and containerish systems like flatpak) still have portable dependencies that are equivalent to "system" libraries in this context.


This would make it impossible to achieve the most basic level of security.


How does it matter whether a library is a "system" one or not?


The expectation is that with a "system" package, one can update that one package and (basically) everything on the system now uses that new version. Practical for security and important bugfixes.


> While C++ had the problem of "lots of template code in header files", Rust has the problem that monomorphization of generics creates a lot of compiled code. There are tricks to avoid this and they are all the decision of the library/crate author.

Is there any research on having compilers do some of these tricks automatically? A compiler should, at least in principle, be able to tell what aspects of a type parameter are used in a given piece of code. Such a compiler could plausibly produce code that is partially or fully type-erased automatically without losing efficiency. In some cases, I would believe that code size, runtime performance (due to improved cache behavior), and compile times would all improve.
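
Rustc has experimented with part of this under the name "polymorphization" (the unstable -Zpolymorphize flag), if I recall correctly. The hand-written version of the trick looks something like this (illustrative names, not any particular crate's API):

    use std::fmt::Display;

    // Monomorphized: the compiler emits a separate copy of this function's
    // machine code for every concrete T it is instantiated with.
    fn total_len_generic<T: Display>(items: &[T]) -> usize {
        items.iter().map(|item| item.to_string().len()).sum()
    }

    // Type-erased: a single copy of the machine code, dispatching through
    // the trait object's vtable at runtime instead.
    fn total_len_dyn(items: &[&dyn Display]) -> usize {
        items.iter().map(|item| item.to_string().len()).sum()
    }

    fn main() {
        // Two instantiations -> two copies of the generic function...
        println!("{}", total_len_generic(&[1, 2, 3]));
        println!("{}", total_len_generic(&["a", "bc"]));
        // ...versus one copy shared by every element type here.
        println!("{}", total_len_dyn(&[&1 as &dyn Display, &"bc" as &dyn Display]));
    }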


Implementing type erasure for Rust compiler is a research problem. In principle it ought to be possible, but I am not aware of any prior work.


In a way I'm happy Rust does not have a stable ABI. Swift does, but the stability is "whatever Apple's Swift emits". There's very little documentation, and what's there is out of date, so the only practical language that can interact with Swift is Swift. To interact from another language, one would have to parse Swift and make the semantics of all types and generics match exactly just to do the simplest things. (For example Array and String, two core types, are chock-full of generics and protocols.)

I'd hate to have the same happening for rust.


I didn't know that it was possible to export Rust enums with a C ABI like that, that's nifty!
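
I'm not sure exactly which construct in the article this refers to, but the basic fieldless case looks roughly like this (names invented for illustration):

    // A fieldless enum with #[repr(C)] gets the representation a C compiler
    // would pick for the equivalent `enum` on the platform.
    #[repr(C)]
    pub enum LoadStatus {
        Ok = 0,
        NotFound = 1,
        ParseError = 2,
    }

    #[no_mangle]
    pub extern "C" fn load_thing(path_is_valid: bool) -> LoadStatus {
        if path_is_valid { LoadStatus::Ok } else { LoadStatus::NotFound }
    }

Enums that carry data can also be given a defined representation (RFC 2195), which lays them out as a tagged union, if I remember correctly.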


It might simply be the time to invent a more modern ABI.

If for some reason modern languages insist on monomorphization, we should be able to design an ABI that suits that need.

Rough sketch:

A modern shared library is code that generates code (very much like a dynamic linker is actually code that links code).

The interface of the library would consist of:

a) a description language for the shape of data types (not types themselves, mind you)

b) a list of generators for functions

c) a list of functions applied to shapes, as requirements

The job of the modern dynamic linker would then be to apply all the generators to the necessary shapes, put the resulting code into memory, and link it. It might be useful to support this with some kind of caching mechanism.
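
To make the idea a bit more concrete, here is a deliberately toy sketch in Rust of the three pieces (a/b/c) above; a boxed closure stands in for generated machine code, and every name is made up for illustration:

    use std::collections::HashMap;

    // (a) A "shape": just the layout-relevant facts, not the type itself.
    #[derive(Debug)]
    struct Shape { size: usize, align: usize }

    // (b) A generator produces specialized code for a shape. A real linker
    // would emit machine code; a closure stands in for it in this toy.
    type Generated = Box<dyn Fn(usize) -> usize>;
    type Generator = fn(&Shape) -> Generated;

    // One generator: byte offset of the i-th element of an array of this
    // shape, using the usual "round size up to alignment" stride rule.
    fn array_offset(shape: &Shape) -> Generated {
        let stride = (shape.size + shape.align - 1) / shape.align * shape.align;
        Box::new(move |index| index * stride)
    }

    fn main() {
        // The library's interface: named generators (b).
        let mut lib: HashMap<&'static str, Generator> = HashMap::new();
        lib.insert("array_offset", array_offset);

        // (c) The application's requirements: functions applied to shapes.
        let wanted = [("array_offset", Shape { size: 12, align: 8 })];

        // The "modern dynamic linker": apply each generator to the required
        // shape at load time, then link (here: just call) the result.
        for (name, shape) in wanted {
            let code = lib[name](&shape);
            println!("{} for {:?}: element 3 at offset {}", name, shape, code(3));
        }
    }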


There's a lot of either/or false dichotomy being discussed in here. Distro packaging (or not) comes with various tradeoffs of course, some good or bad depending on perspective.

To get to the point, I quite prefer the package manager way of installing software to the "hunt down a single release" app installers of Windows/Mac. The only issue is that sometimes software is a bit out of date. That's what the Snap/Flatpak/AppImage projects are trying to solve.

As soon as one of those three get their user-hostile issues fixed, it will be a software paradise. :D


The optimized and stripped library in Rust was about 8 times the size of the C version. While 9MB is not a lot by itself, if a significant portion of libraries decide they want to switch to Rust, that would explode disk usage!

Though I think my problem with rust is that they make breaking changes in their compiler and spec every release. People regularly build on Rust unstable to get features not yet released. This all makes things complicated for a distro.

But the points about making breaking changes at the bottom of the article resonated with me. Stability is what has allowed the Linux ecosystem to grow so well: many interconnected parts all moving in unison. Not being able to fix a bad design decision because of this does suck. Still, having everything work is, sadly, more important than perfect design.


> Though I think my problem with rust is that they make breaking changes in their compiler and spec every release

That's not accurate.

Rust is backwards compatible. Most 1.0 code would still compile perfectly fine today. There have been some minor breaking changes for soundness reasons, if I remember correctly. There also has been one edition upgrade (2018 edition), but every new compiler still supports the old edition, and most new stuff actually works in the old edition as well.

There are experimental features, which are only available on the nightly compiler and have to be opted in to with a "#![feature(X)]" attribute. But those are very clearly labelled as experimental, unstable, and evolving, with no stability guarantees whatsoever.
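
For example (assuming `never_type` is still unstable, which it was last time I checked), opting into an experimental feature looks like this and only builds with `cargo +nightly`; stable rustc rejects the gate with a hard error:

    // Nightly-only: the explicit gate is required, which is how unstable
    // features are kept out of the stable ecosystem.
    #![feature(never_type)]

    // Using `!` as an ordinary type (here: an error that can never happen)
    // is what the `never_type` feature gates.
    fn parse(s: &str) -> Result<String, !> {
        Ok(s.to_owned())
    }

    fn main() {
        println!("{}", parse("hello").unwrap());
    }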

The criticism I would share is that while Rust is backwards compatible, it is obviously not forwards compatible. Rust has evolved dramatically since 1.0, and many developers jump on new features once available. So compiling actively maintained projects with an older compiler in distros like Debian is not fun.

But the rate of change has slowed down a lot over the past year or so.


> That's not accurate.

Fwiw, I recorded at least four different times a Rust release broke timely / differential dataflow, and have seen a few others in other folks' code. Afaict, none of them were soundness related, and were instead due to ergonomic additions and performance improvements.

They haven't been recent (e.g. were 1.17, 1.20, and 1.22, and the others earlier) and I agree that things are much more stable now.


Interesting, I never got hit by those I think.

The biggest issue I had was with the macro import and visibility changes in the 2018 edition.

That broke quite a lot and was rather painful.


Yeah, for example I have an open PR in differential dataflow to make some changes to avoid a planned break in the future.

https://github.com/TimelyDataflow/differential-dataflow/pull...

I think they are doing the right thing with the break (it adds something I've wanted) but at the same time it's another break.


I think the concerns are more towards the ecosystem and community, which are very curiosity-driven. I tried Rust about one year ago and nothing would compile without the nightly toolchain. After reading the Rust book there was this realization that I now have to go and read all the new language proposals, because everybody is using them already. This and the excessive use of metaprogramming by some users drove me away from the language.


> and nothing would compile without the nightly toolchain

That has changed a lot over the past 1-2 years. Many popular crates have matured a lot and work just fine on stable.

Not all, and Rust is still evolving, but things are much more stable now.


You might want to try again, one year is a very long time in rust land. We've seen some very high profile stabilizations quite recently.


> one year is a very long time in rust land

That's the problem and stabilization doesn't solve it. Only rejecting language changes will.


> nothing would compile without the nightly toolchain.

This problem is solved though. Almost all nightly users shifted to stable after async-await was stabilized. The last holdout (IIRC) was Rocket, which also compiles on stable now. I can't think of any popular libraries or frameworks that require nightly. I can't think of any popular feature that people would want to use nightly for.

I think you could learn Rust now, write your code and then not worry about any new features that are added, ever. Your code will work without breaking. Keep upgrading the compiler every 6 weeks and your code still won't break. That's a guarantee.


Rocket should _soon_ not need nightly, but the documentation for the currently available releases on GitHub states that it still does[1]. (The compiler is ready, but the code/Cargo.toml hasn't yet caught up.)

[1]https://rocket.rs/v0.4/guide/getting-started/#installing-rus...


> That's the problem and stabilization doesn't solve it. Only rejecting language changes will.

What are you using, then, COBOL?! The C standard was last bumped in 2018, C++ is more aggressive than that, Python has shipped some downright controversial changes in its last few minor versions, and Go is gearing up for 2.0... Languages evolve.


I am using C and Go and I don't remember ever having to deal with any kind of change in the language. Sure, there were changes, but with very little impact on the ecosystem.

Languages evolve, okay, but Rust felt more like it was still in the making.


> nothing compile without nightly

Have you ever compiled ripgrep?


> The optimized and stripped library in rust was about 8 times the size of the C version. While 9MB is not a lot by itself, if a significant portion of libraries decide they want to switch to rust that would explode disk usage!

The underlying issue is that even though the standard library is statically linked, Rust does not make it easy to compile "custom" builds of the standard library as part of your project - which leaves you with all sorts of excess code bloat in the final build. You can solve this, but it requires unofficial tools such as "xargo" and the like. Once this is addressed, Rust should become genuinely competitive with C/C++ (wrt. binary size).

See also https://news.ycombinator.com/item?id=23496107 for the details.


Good news: `-Z build-std` is available in nightly and various bleeding edge projects have switched to it

https://github.com/rust-lang/wg-cargo-std-aware
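
For reference, the invocation currently looks roughly like this (nightly-only; the flags are unstable and may change, and you have to pass an explicit --target):

    cargo +nightly build --release \
        -Z build-std=std,panic_abort \
        --target x86_64-unknown-linux-gnu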


> Though I think my problem with rust is that they make breaking changes in their compiler and spec every release.

Can you bring specific examples? The stable version is not only actually stable, but when big changes happen you can also opt into previous edition's behaviour with one config line. Breaking spec every release really doesn't sound right.

As for unstable - distros could find that complicated, but distros don't have to ship unstable things, so that doesn't sound like a big problem.


I guess they mean ABI-breaking changes, like enum layout optimizations, etc. (not “language feature” breaking changes).


By changes in the compiler I could kind of understand it like that... but changes in spec?


> problem with rust is that they make breaking changes in their compiler and spec every release.

this is demonstrably false


I really don’t see the problem with just statically linking everything.


Why are people even using Rust and Go, aside from employer say-so? They're not formally defined, there aren't multiple functioning implementations, it's just not a good idea.


Go has a supported alternative implementation, gccgo.


There are formally defined Rust subsets (see RustBelt).


Two issues with the post:

- An ABI is not a PL feature, but a platform feature, i.e., it is not that Rust does not have a stable ABI, but that e.g. Linux does not have a stable ABI for Rust (it has one for C, and you can use this ABI from Rust).

- You can export generic Rust APIs with a stable C ABI by using trait objects, and it is often very easy to do this. So the claim that Rust and C++ are in the same boat wrt generics / instantiations is not true.
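
A hedged sketch of that pattern, with invented names, in case it helps: rather than exporting a generic function (which the caller would have to monomorphize), you hide the generic machinery behind an opaque handle that owns a boxed trait object and export plain extern "C" functions over it.

    use std::io::Write;

    // Opaque to C callers: they only ever see `*mut Sink`.
    pub struct Sink {
        inner: Box<dyn Write>,
    }

    // Constructor: picks a concrete Write implementation on the Rust side.
    #[no_mangle]
    pub extern "C" fn sink_new_stdout() -> *mut Sink {
        Box::into_raw(Box::new(Sink { inner: Box::new(std::io::stdout()) }))
    }

    // The "generic" operation, exposed non-generically via dynamic dispatch.
    #[no_mangle]
    pub extern "C" fn sink_write(sink: *mut Sink, byte: u8) -> bool {
        let sink = unsafe { &mut *sink };
        sink.inner.write_all(&[byte]).is_ok()
    }

    // Destructor, so the allocation is freed by the same allocator.
    #[no_mangle]
    pub extern "C" fn sink_free(sink: *mut Sink) {
        if !sink.is_null() {
            drop(unsafe { Box::from_raw(sink) });
        }
    }

The C header then only sees an opaque struct pointer plus three functions, so the ABI surface is exactly the C one, while dynamic dispatch happens inside the library.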


> An ABI is not a PL feature, but a platform feature, I.e., it is not that Rust does not have a stable ABI, but that eg Linux does not have a stable ABI for Rust (it has one for C and you can use this ABI from Rust).

This is somewhat of a disingenuous thing to say, because it implies that the fault lies with Linux for not providing a stable ABI to Rust. An ABI comprises various conventions wrt calling convention, name mangling, data layout, etc.; these are provided variably by the operating system, the language specification, and the language compiler. And, as TFA mentions, the Rust compiler explicitly does not provide a stable ABI.


Windows does provide two cross language stable ABIs, CLR CLS and COM/UWP.

Mainframes also provide cross language stable ABIs, so called language environments.


I don’t agree.

The C ABI is specified in a spec that Linux adopts (e.g. the x86 psABI), and it is what allows all software using this ABI (from assembly to C to Rust) to interface with each other. Linux could write an ABI spec for Rust on its platform today and add a patch to the Rust compiler (or to a C++ compiler) to adhere to this ABI.

Nobody has done this, and from many POVs it is something that does not make much sense to do, but it's up to the platform to specify how binary software communicates. Linux only specifies this for C, and that's what Rust software currently uses and has to use on Linux.


In GNU/Linux it's the GNU part that supplies the C ABI. The GNU compiler collection provides it, and the GNU libc uses it to talk with the Linux kernel, usually through traps appropriate to the CPU. The LLVM toolchain (and other alternatives, like ICC) conform to this de facto standard. The GNU devs designed their ABI long before Linux came along; it is mostly inherited from even older OSes like BSD and SVr4, and was developed, refined, and adapted over decades by a common community of interests.

If you're going to criticise GNU/Linux for not providing a Rust ABI, make sure you're aiming at the GNU part. The Linux part doesn't care.


At the same time, the C ABI is not part of the C standard. It's a GNU thing (or a Microsoft thing on Windows, or an Apple thing on Mac, or...). While those vendors require the compiler to provide a stable ABI, there's nothing preventing C2x (or future versions) from breaking existing C ABIs. It's not really right to talk about "the" C ABI; rather there's the x86 GCC ABI and the x86_64 GCC ABI and the x86 MSVC ABI and the MIPS GCC ABI and the...

That's why compilers have target "triples" (now more than 3 items): <CPU architecture><subarchitecture>-<vendor>-<os/system>-<abi>. So you might have ARMv7m-st-none-eabi for some embedded STM32 bare-metal code and x86_64-pc-linux-gnu for Linux. All C, all different ABIs.


> You can export generic Rust APIs with a stable C ABI by using trait objects, and it is often very easy to do this

Do you have a guide for that? All the references I can find don't seem to behave any differently than they would in C++ - e.g. https://users.rust-lang.org/t/passing-a-trait-object-through... ; https://doc.rust-lang.org/nomicon/ffi.html ; ...


Off-topic, but it seems the quotes around the title don't render on the front page, yet they do render on this page? It flips the entire tone of the article.

Initially, on reading the title, I just eye-rolled, but clicking through and seeing that it was a response to that claim (hence the quote marks) made much more sense!!


I see quotes on both pages on Firefox 79 on Windows


The submitter might have edited the title.

HN fixes a number of things in titles on submission, but lets you undo these automatic changes manually.


I see it on both HN homepage and this page.


"Why do distros expect all the living organisms on your machine to share The World's Single Lungs Service, and The World's Single Stomach Service, and The World's Single Liver Service?"

This has been debated for years, and part of the answer is right above. Also, software is not a collection of biological organisms. And the local variables are not shared, so WTF anyway. The analogy makes no sense. Everybody is already neatly separated.

Proponents of all-static linking have yet to show non-toy / non-specialized systems where everything is actually static.

Let's avoid the strawman anyway; in this case, yes, some static linking can have its uses, especially for some small utility / metaprogramming / etc. packages, although it has and will always have drawbacks too, especially for higher-level feature support (e.g. a codec). You have to go into the specifics to understand which matter more depending on the context. Probably a mix is needed.

For a Linux distro, I suspect some people will go crazy if the fix for a security vuln in a small piece of code ends up downloading hundreds of MB, but maybe there are advantages so great that this is something we can live with. The net perf impact is extremely hard to predict and measure. You will duplicate tons of code, but arguably e.g. the cache overhead might not be extremely bad; we now have tons of memory, so maybe we can waste some, etc.

Note however that if a Linux distro is competing with other kinds of platforms, there is a risk of putting the Linux distro at a disadvantage if static vs. dynamic (maybe on a per-package basis) is chosen improperly, because other platforms make the distinction between platform and application, their platform typically provides a very large API, and they won't go the insane route of switching to static.

The lack of a proper dynamic linkage story for Rust is a problem that needs to be fixed to enable some kinds of usage. It's not something that can always be worked around (sometimes it can, and for some crates you really want static to begin with anyway).


> For a Linux distro, I suspect some people will go crazy if the fix for a security vuln of a small piece of code ends up downloading hundreds of MB [...].

I always wondered about this problem; you could distribute the .o/.a's the same way you currently distribute the .so's, and integrate the linker with the package manager. This theoretically seems to share most of the benefits of both static and dynamic linking: push complexity away from the kernel/dynamic loader, smaller updates = easier patching (compared to fully static binaries), etc. And it works for closed source.

OpenBSD does something similar already for libc and kernel (for boot-time address layout randomisation) and it works great.


Except that this shares all the same ABI issues as shared libraries. If they wouldn't link at runtime, they won't link at package-install time either.



