I want my OS to just provide a decent interface over which I can install application packages myself, packages that I get from my own sources, just like on Windows. if those packages are statically linked, fine. I know most Linux users disagree, but I don't want the relationship between software vendor and user to be distorted by some distro maintainer, or having to be limited to a package manager. I want to be able to store application installers in my filesystem.
I also want my distribution to hide its Python binary from me so I can install my own Python without breaking the OS.
Basically: stop assuming that I want to live under your wing. I just want you to give me a nice desktop environment, a terminal, and a well-documented way to install third-party software.
I know distro developers don't owe me anything, and it's fine if they do something else, but this is the actual reason why Linux isn't used on the desktop.
Second, linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure you can get your software from the vendor. But then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system, package maintainers do.
As a die-hard linux user, this line of thinking really plays against Linux. Software distribution hasn't caused problems in windows since XP. Likewise, all mac users seem pretty happy with .app bundles.
> Second, linux distribution maintainers are usually much better informed about the technical details of installing software in a stable manner than any software vendor. Sure you can get your software from the vendor. But then you should be willing to accept that it is often broken, inefficient, and insecure. Software vendors have no interest whatsoever in installing their software in a professional and sustainable way on your system, package maintainers do.
Do you realise how insulting this comes across? As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code from my software until it builds - who cares if it crashes as soon as you do more than open it anyway. I'd rather not have my software packaged than have it like that. My subjective experience from the software I use is that stuff like AppImage made straight by the dev is generally much more stable and works much better than whatever chthonic hack a debian packager decided to apply.
That is, at least in my experience, very false. They often do care about build systems more than the maintainers, as they're the ones who have to make it work for everyone.
I've worked with the Gentoo packagers and those people I have interacted with have a lot of knowledge about how to distribute software. It's not for nothing the Gentoo community has to push changes upstream when upstream assumptions don't hold up downstream.
Every interaction I've had with packagers has been them trying to make the build system work for their single distro. None of them have ever tried to help make it work for "everyone" running linux, much less spend any time thinking about the other two platforms that have 90% of the actual users.
IME they also tend to be much more focused on making the build system work on their distro than making the actual software work. Not surprising given that that's what they know, but they tend to be perfectly happy to make changes which cause functional problems to get things to build.
> As a software dev, my personal experience of "professional linux distro packagers" is people literally removing random lines of code of my software until it builds
I'm sorry this happened, but I can assure you that the vast majority of maintainers do not do anything like this.
Mac user here, can't ever remember encountering post-install errors with Mac software in 15+ years (bugs sure, like every software has, but nothing related to incomplete installation or whatever)...
>Linux packages have a higher bar to meet for integration quality
Isn't it actually a lower bar?
They don't have to follow the platform's look and feel and UI libs (there isn't a standard, a distro can package Gnome, GTK+ only, KDE, XFCE, even CDE and Athena stuff, plus all kinds of ad-hoc UIs). They don't have to work with the same shortcuts, use the same system for configuration, or even play well with the same window compositor...
When it comes to "integration quality" in Linux it's mostly "compiles with the distro's version of libs".
Sorry to say, but I have to raise my hand here.
The only post-installation issues I ever had on Mac and Windows were related to drivers, not applications.
Plus, for a decade or so now, I rarely "install" software on Windows anyway, because all the tools I use come as a "portable" version, which is just a ZIP archive.
In an Apple-like fashion, these "just work" after unpacking to an arbitrary location (e.g. a USB-stick or SD-card)...
What am I doing wrong here? ¯\_(ツ)_/¯
It's most of the time difficult, and sometimes impossible, which is very user hostile; I, the user, should be deciding which version of the software I want to run, not the distribution.
Because the newer version (of the lib, I presume) changes the ABI and borks all the packages that were compiled against the old version?
Trying to upgrade something like Mesa from Rawhide or whatever is just going to mess up your install like nobody's business but if you just need a couple minor libs (for a specific program) I've not had any problems.
For programs, I've had ones running for years after they got dropped from the distros with zero problems or that I built and installed from some random srpm I found on the interwebs.
In all honesty the only 'song & dance' I have to do on a semi-regular basis is rebuild my python modules when Python gets updated on a distro upgrade.
But with any distro, you can install whatever you like wherever you want. You, the user, make that call. You, the user, should have the knowledge to tidy up if it goes wrong.
Installing anything outside of distro repositories and Flatpak (& alternatives) is, in most cases, a PITA.
Tracking down missing dependencies, figuring out which libraries are compatible, downloading source for those libraries, because the system provided are outdated, etc. You can literally spend a whole day just building a somewhat simple application.
I'm not a fan of binary blobs, as is customary on macOS/Windows, but at least I can actually use an application in a minute or so after downloading it. There is no fussing around, it just simply works.
Because they have huge teams maintaining API and ABI stability.
On any Linux distribution worth its name you can also use the software within seconds after installation. The teams are just smaller and their approach is more centralised and principled, but I never had a problem using, e.g., emacs after installing it.
You need to fuss around with software from third parties. Simply because these third parties do not care enough about your Linux to make the installation work well. These same third parties also do not care much about how stuff is supposed to work on Windows; they just drop their binaries and assume that the Windows ABI will somehow keep working.
Maybe that observation is incorrect, please give me data points. Also this won't mean you have to run Debian (but maybe a VM with Debian in it).
I don't remember the last time that happened. Not that I'd exchange my ArchLinux setup against either but you gotta be honest - installing stuff on windows and mac is a solved problem.
However, the number of times a Debian or Ubuntu update borked something... for that I don't have enough fingers nor toes. I had at least 3 installs of Debian in the last ten years that I fucked up badly enough that I couldn't recover them - in comparison, the last time that happened to me on Windows was in the Windows 98 era. Not that Windows doesn't suck - I still get the occasional BSOD even on Win10, while Linux kernel panics are... uncommon for me, but that's just my experience.
For most user-facing Mac software that comes with a GUI, there are no post-install steps. Installation simply consists of copying the application bundle into the /Applications folder.
That's sad. But fortunately, not all packagers are like this. As a software dev too, I have a great experience with packagers from various distributions. When they have an issue (either build or runtime) they tell me about it and we discuss a patch together that I include in my next release.
> My subjective experience from the software I use is that stuff like AppImage made straight by the dev is generally much more stable and works much better than whatever chthonic hack a debian packager decided to apply.
Sure it works if you are in the most common case (Linux on x86_64), but it has some downsides:
* It only supports CPU architectures the dev can build for
* It only supports operating systems the dev can build for
* The dev needs to publish an update every time there is a vulnerability in a bundled dependency (assuming they are even watching for vulnerabilities)
If you don't know what you are missing, you won't miss it.
Nothing, unless you have an application-specific upgrade daemon running. Sounds like a good design, right? :)
The automatic update process breaks software by making incompatible changes, thus preventing a security problem by just not allowing me to run the software in the first place?
The Windows way of installing software (download random .exe or .msi files from random websites on the internet) has forever been a problem for non-tech-savvy people and for the people helping them manage their computers.
Every modern OS comes with its own package manager nowadays for a good reason.
I still maintain that for all its many technical and architectural faults, the biggest thing holding back wider Linux Desktop adoption is the Linux Desktop community itself.
You most likely do know your software better, after all, you wrote it. However, their concern is the integration of your software into the larger system, and they may well be more familiar with those issues than you are.
In all my years of (Debian) distribution maintenance, this was the most typical cause of disagreement between packagers and upstreams. It's very common for upstreams to make fairly arbitrary decisions which might make sense in the context of an individual project, but which are not appropriate when considered as part of the system as an integrated whole. Neither is wrong per se, but as an upstream, your focus is on your project and it's easy to miss the bigger picture issues. Conversely, packagers focus upon the system as a whole, so their view is at a higher level.
Additional to this, don't just discount the knowledge and expertise of distribution maintainers. There are of course some who just blindly package up other people's work and aren't experts. However, there are others who are software authors in their own right, with decades of accumulated experience and understanding. The latter may well know more than you do, and might be able to provide some keen insights you could benefit from. There are an awful lot of developers who don't know how to make software releases properly in a way that allows it to be consumed straightforwardly by others, with proper understanding of API and ABI issues, proper versioning, and proper use of build systems.
Rabbit's nest but the commonly accepted community API for font discovery is fontconfig.
> How do you know if the system is on battery?
Again, rabbit's nest but the commonly accepted community API for this used to be DeviceKit-Power. Then pm-utils. Now I think you're supposed to use UPower or poke through /sys/class/power_supply?
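For what it's worth, the sysfs route can be sketched in a few lines of shell. The /sys/class/power_supply layout is the kernel's, but the helper name and the directory parameter below are my own invention (the parameter just makes the function testable against a mock tree):

```shell
# on_battery DIR: succeed (return 0) unless some "Mains" supply under DIR
# reports online=1. On a real system, DIR is /sys/class/power_supply.
on_battery() {
  dir=$1
  for type_file in "$dir"/*/type; do
    [ -e "$type_file" ] || continue
    if [ "$(cat "$type_file")" = "Mains" ]; then
      # An AC supply reporting online=1 means we are plugged in.
      online_file="$(dirname "$type_file")/online"
      if [ -e "$online_file" ] && [ "$(cat "$online_file")" = "1" ]; then
        return 1
      fi
    fi
  done
  return 0  # no online AC supply found: assume we're on battery
}

on_battery /sys/class/power_supply && echo "on battery" || echo "on AC"
```

UPower wraps the same information behind D-Bus, but polling sysfs directly at least doesn't churn every five years.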
I wrote the battery applet five years ago in gnome-shell and even I can't keep up.
> How do you make sure that the temporary directory your service needs is actually mounted and writable?
In theory /run and systemd-tmpfiles are new enough standards, but that hasn't stopped a misguided packager from misinterpreting a policy and patching my software to support /tmp, which was not a tmpfs on their distribution, causing users to blame me and send bugs my way.
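For reference, the tmpfiles.d mechanism amounts to a one-line config; "myservice" and the user/group names here are made up:

```
# /etc/tmpfiles.d/myservice.conf -- processed by systemd-tmpfiles at boot.
# Type  Path            Mode  User   Group  Age  Argument
d       /run/myservice  0755  mysvc  mysvc  -    -
```

(If the service is a systemd unit anyway, `RuntimeDirectory=myservice` in the unit file achieves the same thing without a separate config file.)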
For all the things it gets wrong, Docker removes an entire class of misconfiguration errors that allows for more reproducibility and automation in configuring systems, and they do this partly by making the distribution irrelevant.
For Linux to be a world-class OS, it needs to step up its ABI and distribution game and figure out how to squash bickering over things like application menu standards.
Isn't this an admission that linux is worse? Less skilled developers can successfully create working applications for Windows/Mac because they have unified and documented APIs to access these operations. But on linux you have a heterogeneous system that uses standards developed 50 years ago and requires greater expertise to work with.
"My system is harder to use" is not a selling point.
Unless you're a Linux Desktop evangelist, then it is because it lets you feel smugly superior to people who don't like wasting their time with shit like this.
And the one thing I absolutely loved about Linux is that it does what I say. On Windows 10 there are things that are impossible to get rid of. That new browser that automatically adds itself to the desktop icons? Or the OneDrive icon in Explorer which you can only remove through registry hacks (and which might come back at any time after an update).
Customising Windows is so amazingly hard and cumbersome, I was just blown away how easy and straightforward this was on Linux.
I once gave my mother an Ubuntu installation after I had been particularly annoyed by malware she managed to catch on her Windows 7 machine. She has no idea how to use computers, but she really loved Ubuntu and told me it feels much clearer. She used it for nearly 4 years until the hardware died. To my surprise she had significantly fewer questions about how to do $x than before.
My feeling is that the Linux desktop can be great for people with very simple needs or very advanced needs. It is that middle ground where it currently sucks I guess.
It kind of is, when the reason is a corollary of fundamentally letting you and your user community do whatever you want. I've used Slackware, Debian, Ubuntu and MacOS; and to be frank, at least with Linux, even if I do have some greater degree of exposure to dependency/ABI hell, I'm free to determine my own approach to meeting those challenges.
With Windows/MacOS, you have a 500-pound gorilla shoving sh*t down your throat because they decided it's better for them. I simply cobble together a workable solution for the use cases my users really need, keep it simple, well-defined and documented, and most importantly, kick to the curb any software or decisions that impact my freedom as a sysadmin to run a system I'm fine and dandy to maintain.
To hell with devs who don't respect rule number 1 of system architecture: the system one is helpless to influence or understand is the first thing to go when the chips are down. At least for me anyway.
No one is trying to sell you a system. If you want your software to work well on Red Hat, Debian, and BSD then you either play by their respective standards or you let the maintainers do their job. Just stop whining that these systems are not Windows and that you need someone to do that work for free but exactly the way you want it done.
If you feel well inside Apple's walled garden, have fun there, but don't come complaining when some Unix tool or programming language does not work there.
And no, Red Hat or Debian is not harder to use than MacOS. It just supports way more nontrivial use cases out of the box. That comes with a certain complexity. If you don't need that, no one suggests you run it on your desktop.
I disagree. People have been recommending Linux to people with non-complex needs for a long time, especially since Ubuntu went mainstream. “No one suggests you run it on your desktop” is thus wrong.
> If you feel well inside Apples walled garden, have fun there, but don't come complaining when some Unix tool or programming language does not work there.
Is this really an issue? As far as I’ve heard, macOS is a POSIX-compatible system with a BSD userland, and most Unix tools work fine. Checking out the Homebrew repos, it seems like all the Unix tools I’m used to are there. And even some Linux-specific systems like FUSE have been ported to macOS. As far as I can tell, macOS thus supports Unix tools and programming languages as well as most Linux distros.
As I see it, it'd take someone with a lot of political connections and political capital in the Linux ecosystem to be able to pull it off successfully. I have neither.
Otherwise, hell fucking no. This misconception is causing so many mistakes. Software is an ecosystem; I give 0 shits about individual libraries, programs, whatever, just that the end composition meets my criteria. The app bundle / flatpak / coarse-grained Docker vision of software is plain wrong, and will basically prevent future gains in productivity.
Nix and friends get it right: no single version dumbness, everything is installed the same way, be it by the admin or by regular user, and with proper notions of dependencies.
Socially, I understand where OP is coming from: the common view that distros are some crusty 90s holdover held hostage by a bunch of neckbeards who don't care about users not like themselves. But distros like NixOS put the user in full control. There's a feeling of de-alienation using a NixOS machine that's really hard to convey to those who haven't yet tried it (and gotten over the initial learning curve).
The Nix package manager installs each version of a package to its own directory, the trick being that each version of each package goes to a folder whose name is a hash of all the inputs of the build (files, build parameters, specific build folders of this build's dependencies, etc.).
As a result, not only can you have different versions of a package: you can even have different builds of the same version (built with a different build configuration and/or versions of its dependencies).
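A toy sketch of the idea (this is NOT the real Nix store-path algorithm, and the package/input strings are made up): the output directory name is derived from a hash over the full set of build inputs, so builds that differ in any input land at different paths and coexist.

```shell
# store_path NAME INPUTS: derive a store-style path from a hash of the inputs.
store_path() {
  # $1 = package name-version, $2 = serialized build inputs
  printf '/nix/store/%s-%s\n' \
    "$(printf '%s' "$2" | sha256sum | cut -c1-32)" "$1"
}

store_path "mypkg-1.2" "cc=gcc-12;flags=-O2;dep=zlib-1.3"
store_path "mypkg-1.2" "cc=gcc-12;flags=-O3;dep=zlib-1.3"  # different flags, different path
```

The real thing hashes the full dependency closure recursively, which is what makes the scheme compose.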
The concept is sound. The issue is that you need the ability to actually build the application (you need the source) and you probably have to adjust its build process to get it to work properly with Nix.
To me, Nix is "Noah's trampoline", no-holds-barred mud-wrestling all the nasty details of Unix and software as it actually exists, all the way down, into something just ever so slightly saner so we can sail over the floodwaters and land on the shore of an actually sane practice of computing.
And yes, I would basically want to describe it in those terms even if it weren't for your nick and vocabulary :).
You get isolation (re "hide its Python binary"), multiple package variants, a unified software management interface for all applications, etc -- from functional package managers like Nix or Guix.
> but this is the actual reason why Linux isn't used in the desktop
I still believe the actual biggest issue with Linux on the desktop is graphics card drivers (and other aspects of the graphics stack like handling High DPI). Too many machines fail the basic test of 'can I install Linux, plug my screen in and have it behave sensibly'.
It's mostly fine. But high DPI, or worse, mixed DPI desktops are somewhere between unusable and sort of tolerable.
I dual boot my desktop, and I've given up getting my 4K main screen and old 19" off screen working together in Linux (and I've tried everything). I like moving IRC off to a smaller screen to glance at once in a while, so I never miss an internet argument, but mixing DPI monitors just doesn't work in Linux.
HiDPI is mostly fine, with the exception of mixing high dpi in xorg. Mixing DPI works in wayland. So I'll grant you that the combination of nvidia+mixing DPI doesn't work, because nvidia's drivers don't support wayland.
Otherwise I strongly disagree, and I'm genuinely confused as to why this myth persists. Wrangling drivers on Windows is a huge pain. There's no one "update" button I can press to update all my drivers, let alone all my other software. I have to go through the device manager and manually right click and select "update driver", which is frankly nuts. And to update the graphics card I have to periodically go to nvidia's website and check manually? What year is it again? Why doesn't Windows do this for me?
If you're on a Windows machine right now, try it. Right click on a bunch of stuff in device manager and hit update driver. Clicking through on my machine and maybe 1/5 have an update.
The crazy thing is that it's a really painful process. It takes 15-30 seconds per device just to come back with "no updates" and you can't just select a whole bunch of them and do them all at once: you have to do it one at a time and go through the wizard for everything. And there are simply too many devices to do them all, so all of us are using plenty of outdated drivers. This is an absurd situation, it's 2020.
There are third party apps that do this automatically, but I don't know that I trust them.
WSUS is sort of a corporate Windows Update server where you can approve updates to groups of machines on whatever schedule you want.
A lot of WSUS admins skip drivers because they massively slow down the sync/approval process and take tons of disk space. Windows has tens of thousands of drivers.
Not to mention the fact that an increasing fraction of consumer-facing software now lives in the browser, where updating implicitly happens every time you go to use it.
Regardless of how you feel about these trends, they make out-of-date applications much less of a concern than they used to be.
It is true that it is easier to fix one version of a library than 10 different versions. But if you need 10 different versions for different applications, you probably do not need to patch all 10 of them.
Most people don't understand security and are not equipped with the necessary knowledge to correctly judge risk. As long as security is just something that gets in the way to get some job done, most people will just plow ahead, since not getting the job done right now has a high and easy to understand cost.
Even small desktop accessories carry their own copy of the Electron and Chromium libraries, usually around 150 MB. It would be a tremendous improvement to use a shared browser engine for this, but developers are resistant.
The point of the web is to support multiple user agents rather than forcing the user to a specific browser. I don’t understand why that shouldn’t apply to web apps running on the desktop.
That may be the case. Usually it is not a problem, there is mmap and swap and unneeded parts are paged out of physical RAM.
If it becomes a problem, one has to change tools or get more RAM.
This isn't true at all, and I struggle to think where you came up with this idea.
Linux distributions do distribute a hand-picked set of packages. That's essentially what a distribution does: distribute packages. Some are installed by default, others are made available. That's pretty much the full extent of it.
Yet, just because a distribution distributes packages, that doesn't mean you are expected to not use anything else. In fact, all the dominant Linux distributions support custom repositories, allow everyone to put together their own repository, and offer a myriad of tools and services to let anyone build their very own packages.
Even Debian and Debian-derived distributions such as Ubuntu, which represent about half of the Linux install base, offer personal package archives (PPAs), which some software makers use to distribute their stuff directly to users.
So, exactly where did you get the idea that this so-called ideology even exists?
1. Snap and/or Flatpak allow you to install GUI applications from most places nowadays. The internal tooling (system packages) is kept separate from the user-installed applications, and they are effectively sandboxed in this way
2. Linuxbrew allows a Mac OS-like separation from your personal development tools and your OS's internal packages. Notably, this also allows you to install far newer tooling than your distribution would typically provide
3. Drop application binaries in ~/.local/bin if all else fails
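Option 3 is about as simple as it sounds; a minimal sketch (the "hello" binary is a stand-in for whatever third-party binary you downloaded):

```shell
# Per-user install into ~/.local/bin: no root, no package manager.
bindir="$HOME/.local/bin"
mkdir -p "$bindir"
case ":$PATH:" in
  *":$bindir:"*) : ;;                      # already on PATH (many distros add it)
  *) export PATH="$bindir:$PATH" ;;
esac
printf '#!/bin/sh\necho hello\n' > "$bindir/hello"  # stand-in for a real binary
chmod +x "$bindir/hello"
hello
```

The PATH line usually already lives in the distro's default ~/.profile, so in practice step 3 really is just "drop the binary in".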
If I weren't on a rolling release distribution I'd probably go that route. I hate being restricted by whatever my distro provides, I hate upgrading the entire world when new releases are made, and I hate third-party repos and the hell they create reconciling everything together.
It's really the in-between state that's terrible. Either go full *BSD or Mac OS and separate the concepts, or full Arch Linux (w/ AUR) and don't. All other ways of distributing software tend to be more server-centric anyway
Regarding the first alternative, I think it will be interesting to see how Fedora Silverblue turns out. It’s basically going for an immutable base system coupled with Flatpak for apps the user installs.
Haven’t tried it myself, but for average desktop users, I think that sounds like a very good solution in the long run: a stable base system with up-to-date user-facing apps.
And you have to replace that whole binary everywhere it is installed.
Think about how many systems would still be vulnerable to heartbleed if every thing that used libopenssl had to be re-compiled and redistributed as a statically linked binary.
This proposed “statically linked” world might be worse than the current mess. See Docker.
Not if your package manager supports binary diffs. Which for many platforms is already the case, and other package managers would likely implement the feature if downloading large binaries becomes a problem. Right now, I’d however call it a premature optimization as the bandwidth of most people is dominated not by downloading updates but by streaming. But it is a solved problem, since e.g. Fedora has supported package diffs for a decade.
Regarding the heartbleed, that is a valid and often repeated argument, but I think it is also overblown. One could e.g. have a static binary with embedded or bundled info about what library versions it was statically compiled with, and a way to automatically flag which binaries are thus vulnerable. Such a system would make it possible to push out new builds of high-priority targets like Firefox and Chromium without testing every other libssl-based app for compatibility first, thus cutting down on the response time for patching the most vulnerable apps. But the flagging would make it clear that the other apps need updating too. (Or in the case of automated builds, that the other apps need testing.)
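A sketch of what that flagging could look like. The "BUILT_WITH:" marker is hypothetical -- no such embedding standard exists -- and the function name is my own; the point is only that a scan over static binaries with embedded dependency metadata is trivial:

```shell
# flag_vulnerable DIR DEP: list binaries under DIR whose embedded metadata
# claims they were built with the vulnerable dependency string DEP.
flag_vulnerable() {
  for bin in "$1"/*; do
    [ -f "$bin" ] || continue
    # -a: search the binary as if it were text
    if grep -aq "BUILT_WITH:.*$2" "$bin" 2>/dev/null; then
      echo "needs rebuild: $bin"
    fi
  done
}
```

(Go actually ships something close to this today: `go version -m BINARY` dumps the module versions compiled into a Go binary.)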
I’m not saying that the whole OS should be statically linked, but for end-user apps I think it makes a lot of sense, and it’d remove a lot of friction where a library update can break a previously working app (that happens).
My question though... why? Everything about that seems to me worse in every way.
Specifically regarding python, on my system right now, I have python 2.7, 3.8.5 and 3.9rc1 installed on my system and maintained by the package manager. When 3.9rc2 or 3.9.0 final is released it will update to that. It's configured to run 3.8.5 when I just run "python", although I can manually run other versions by running the command python2 or python3.9, and I could reconfigure the default to be one of the other versions if I wanted. The package manager has versions going back to 3.4. I guess I don't understand the desire to install python manually - what can you do with a manually installed python that you can't do with the one installed by the package manager?
I have made a few packages for Debian ARM Linux when I needed them. I wouldn’t call the documentation great, but it’s not too bad either. Same with the infrastructure, the OS support versioning and dependencies, can be configured for custom repositories and to verify signatures…
It’s not terribly hard to make them fully working, install/remove/start/stop systemd services, support upgrade/purge/reinstalls, matter of a few shell scripts to write. The command to install a custom package from a local file is this:
sudo dpkg -i custom-package_0.11.deb
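And building such a package by hand really is just a directory tree plus a control file; a minimal sketch (package name, maintainer, and the "custom-tool" script are all illustrative):

```shell
# Minimal source layout for a hand-rolled .deb.
mkdir -p pkg/DEBIAN pkg/usr/local/bin

cat > pkg/DEBIAN/control <<'EOF'
Package: custom-package
Version: 0.11
Architecture: all
Maintainer: you <you@example.com>
Description: Example package built by hand
EOF

printf '#!/bin/sh\necho ok\n' > pkg/usr/local/bin/custom-tool
chmod 0755 pkg/usr/local/bin/custom-tool

# Then build and install:
#   dpkg-deb --build pkg custom-package_0.11.deb
#   sudo dpkg -i custom-package_0.11.deb
```

The maintainer scripts mentioned above (preinst/postinst/prerm/postrm) also just go into pkg/DEBIAN/ as executable shell scripts.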
ALWAYS check the PKGBUILD file when using the AUR!!:
/bin, /etc = system
/usr/local/bin, /usr/local/etc = programs you install yourself
> this is the actual reason why Linux isn't used in the desktop.
* What people (not necessarily the users but corporate decision-makers) are used to (Windows)
* Enterprise manageability (achievable on Linux, but built-in on Windows)
* What is preinstalled (Windows)
* What some exotic business applications work with (Windows)
Almost purely soft factors, very hard to counter if virtually all the computers you can buy at your local PC shop come with Windows preinstalled.
My neighbors (70+) accidentally got hold of a laptop that didn’t come with Windows but Ubuntu and they’re perfectly fine with it. It still has Firefox after all. :-)
What you think you just said: Linux is so easy to use anyone can do it and it works better than alternatives!
What you actually just said: Linux makes a great webkiosk!
What you call the "average PC user" is really the average smartphone user these days. PC users are doing office work, running a business, development, gaming, content creation in more ways than I can enumerate, etc.
People who have a bulky desktop computer in their home just to browse the internet are vanishingly few.
1. a PC isn't necessarily a "bulky desktop computer", it can also be a slimline laptop!
2. "browse the internet" and "office work/run a business/etc" are not mutually exclusive. Google Docs/Office 365 etc. proves that. I'd argue the majority of business operations requires some level of internet interaction these days.
It seems to me like you've come up with a strawman definition to bat away another strawman definition. Almost everyone I know owns both a smartphone and a laptop, and the vast, vast majority of their time on that laptop is spent in a web browser.
I merely stated that a lifelong Windows user of advanced age can in fact use Ubuntu because it is, on the surface, similar enough. Same goes for a Mac, of course.
In my opinion, the most important thing distros do that is incompatible with how rust currently works is handling security/bug updates.
The one libjpeg.so for everyone is meant to fix libjpeg flaws for everyone. And it has many security flaws. And it has many users. There is no denying the way this is done by distros is good.
Now, to pick the author's code, one of its dependencies is a CSS parser, which is prone to flaws. (Maybe not /security/ flaws, but still.) The question is, how is the distro supposed to handle that?
I know rust has tooling for that, but it seems to me that with the exact-version-match crate build system, every dependency will happily break API. So let's say the author no longer has time to develop rust librsvg, and the cssparser crate has a major flaw which is fixed only in a new API rewrite branch, then what? Distros are supposed to fix that themselves? Sounds like much more work for them.
Let me tell you, the way it is done by distros (Centos, Debian) is far from being good. You will get the fix a long time after the bug is published. And you only get it if your system is recent enough.
A slight aside but they also provide the ability to apply security updates sanely to all programs using that library on the system (just update the library and restart the programs, as opposed to having to install a rebuilt version of every program that uses the library). Is this a game-changing feature? These days probably not, but it is (again) just needless waste.
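On Linux, finding which processes still need that restart is a one-liner against /proc, which marks unlinked-but-still-mapped files with "(deleted)". A sketch (the pattern just looks for deleted .so mappings; read access to other processes' maps may require root):

```shell
# List PIDs of processes still mapping a shared object that has since been
# replaced on disk -- these are running old (possibly vulnerable) code.
stale_pids=$(grep -l '\.so.* (deleted)' /proc/[0-9]*/maps 2>/dev/null \
  | cut -d/ -f3 | sort -un) || true
echo "processes mapping deleted libraries: ${stale_pids:-none}"
```

Tools like Debian's needrestart and Fedora's `dnf needs-restarting` automate exactly this check after updates.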
And I say this as someone who is currently developing a shared library in Rust that will probably be included in a lot of distributions because I expect quite a fair amount of container runtimes will end up using it. (But I do also work for a Linux distribution.)
Security updates are a plus, but introducing bugs through a shared library is a minus.
I would prefer we focus on robust application sandboxing instead.
At the server/datacenter level that's pretty much how it's handled anyway: you have isolated VMs/containers sitting on deduped storage.
... and in my experience, most application developers, and especially the developers who are the most truculently hostile to working with the system versions of libraries and the most likely to demand to bundle in their own copies for "stability" purposes, are really horrible about paying attention to security issues in their third-party dependencies.
When I'm being a user and a sysadmin rather than a developer or a packager, and I have the choice of trusting a developer versus trusting a packager, then, all else being equal, I will tend to trust the packager. Not that I trust either one that much.
Maybe there would be better sharing if I extracted the ELF sections first.
I just ran an experiment on the 20 largest items in my Mac's /usr/local/bin, many of which are Go binaries
# split each of the 20 largest executables into 4 KB chunks
for file in `find /usr/local/bin -perm +u+x | xargs ls -SL | head -20`; do split -a4 -b4k "$file" `basename "$file"`; done
# hash every chunk
ls | xargs md5 > md5.txt
# count how many chunk hashes occur once, twice, etc.
cut -d' ' -f4 md5.txt | sort | uniq -c | cut -c1-4 | sort | uniq -c
To your second point, we still want fast security updates and more secure code even with sandboxing! E.g., sandboxing would not have helped with Heartbleed -- critical data would still have leaked.
So if block-level dedupe were the norm, linkers would optimize for that instead. Coming from the .NET/Node world, you don't statically link, but you typically don't have shared libs either: all the libs sit in the same folder as the executable, and dedupe works just fine there because you just have a bunch of duplicated files in different directories. Same with containers and VMs all duplicating a bunch of the same system files. You get isolation and reuse because it's done at a lower abstraction level.
As far as security goes, again, my point was that it's a trade-off: yes, you could potentially get security updates faster, but you get breaking changes faster too. If sandboxed apps had a good way to stay up to date, they could pick up upstream fixes after they've been integrated and tested with the app. If you have an app store of some sort, it can watch for security issues in the libs apps use, notify the authors, and prompt them to upgrade or be taken down.
I know the app store argument probably isn't popular around here, but in my experience it is a far more secure solution, especially for the non-technical user. If something is really so widely used, it should be an API provided by the OS; otherwise it should be bundled with the app individually, and code sharing should happen in source repos/build systems. At least, that's my opinion after all these years.
Deduplication of containers is incredibly coarse-grained -- it's done on a whole-layer basis, which means that an update to any distribution package in a base image (resulting in a new layer hash when you rebuild on top of it) results in zero sharing between containers based on different versions of the base image. I am actually trying to rectify this problem with the whole OCIv2 effort -- but I wouldn't argue that today's containers make this problem non-existent. To be honest, I hesitate to call the current model "deduplication" at all (yes, it might technically be deduplicated, but it's not doing it to the degree you think it is).
And note that the page cache sharing you get from containers is also on the file level, so it's strictly no better than shared libraries (since the container filesystems contain shared libraries and that's what's being shared through the page cache).
The minus you mention is a consequence of the "move fast and break things" webshit philosophy that's so pervasive these days.
Depending on undocumented behaviour and not properly documenting behaviour.
Edit: and I forgot to mention the disregard for security updates caused by modern PL package version locking, etc.
It is a culture problem and technology can only do so much to fix that.
I seem to recall ZFS deduplication needing tons of RAM. Wouldn't the memory deduplication you suggest similarly require even more memory?
I don't know how ZFS works but the dedup I have seen runs in the background consuming minimal resources.
I would think that if you're memory-mapping a file that has been deduped, the OS could just map the already-deduped block and cache it in memory once, not consuming more memory when another file using the same block is mmapped -- but I'm not sure any OS does that.
By these metrics it's by far the largest Rust application I've seen thus far. When fully optimized, it's 11 MB in size.
> librsvg-2.so (version 2.40.21, C only) - 1408840 bytes
> librsvg-2.so (version 2.49.3, Rust only) - 9899120 bytes
In a larger sense, something that sets out to be a "systems programming language" needs to be exactly the sort of thing suitable for a tiny embedded system, even if it isn't running on one, because everything else builds on top of it. The attitude that "we have tons of power, why not waste it" just doesn't fly at the very lowest levels. You can write a desktop application in Python, and it's broadly fine - but try writing an OS kernel!
And for writing kernels the same applies; without the stdlib, it gets a lot smaller very fast. I've done it, so I think I can count myself on having some experience there. The biggest part of my kernel is a 128KB scratch space variable it uses during boot as temporary memory until it has read the memory mappings from the firmware and bootstrapped memory management on a basic level. The remainder of the kernel (minus initramfs) is then about 1MB large, the largest part after the 128KB scratch space using about 96KB.
FWIW, this does not seem to be empirically true. The savings are possible in theory, but don't materialize in practice.
The most used largest libraries, like libc and libpthread, would be dynamically linked by Rust anyways.
Problem is that portable applications are an alien concept to Linux.
Static builds, AppImage, FlatPak, containers, etc would like to have a word with you.
But the pain point is on laptops like MacBooks which have comically small storage space for their price point. I think the base models have a measly 256GB SSD and charge crazy amounts for upgrades.
I get that Rust is awesome, but I'm not certain you need to make an entire new language just to have the security stuff.
Of course it might be complicated to do, but in the end, aren't there linters or other validators that can give the same security results Rust has, but with C or even C++?
1. Rust is "backward compatible" in the sense that Rust code can use C libraries and C code can use Rust libraries - both ways via the C FFI. Security guarantees only apply to the Rust code.
2. We've tried static and dynamic analysis of C to find security bugs for decades; there has been a plethora of research and commercial tools in this space. None fix the problem like Rust does.
Objective-C and C++ are the only two languages which offer backwards compatibility with C. AFAIK it's complete in the case of the former, and there are some limitations for the latter.
None fix the problem like Rust does, but it's worthwhile to examine why: typical companies and developers have an aversion to paying for tools and for anything which slows down development. That's why usually those tools and languages which are reasonably user-friendly are more successful. Ironically that's both an advantage and a problem for rust: it's nicer to use than some C tools, but still not user-friendly compared to alternatives like Go or Java and in some cases even C++.
• "C, but safer" on its own is not very enticing. With no other benefits, it's easy not to switch, and instead promise to try harder writing standard C safely, or settle for analysis tools or sandboxes.
• People who use C often have backwards compatibility constraints. Switching to another compiler and a dialect that isn't standard C is a tough sell. You can still find C programmers who think adopting C99 is too radical.
• Programming patterns commonly used in C (rich in pointer juggling, casts, and textual macros) are inherently risky, but if you try to replace them (e.g. with generics, iterators), it stops looking like C anyway.
So "safer C" is unattractive to people who are tied to C implementations or don't want to learn a new language.
But people who do want to learn a new language and use modern tooling, don't want all the unfixable legacy baggage of C.
Rust dodges these problems by not being a "weird C". It's a clean design, with enough features to be attractive on its own, and safety is just a cherry on top.
And there are languages that try to stay close to C and add some minor safety improvements, e.g. my language C3 (subarrays/slices, contracts, runtime checks for all UB in debug builds, and more).
But to answer the question, I suspect no, and if you did it would be basically re-engineering the borrow checker and forcing Rust semantics into C and C++.
I don't know if anyone has proven it, but my hunch is that borrow checking C is undecidable.
2. No. If C/C++ could be made safe* Rust would not exist.
* everyone agrees on this point, including the richest and largest software companies on the planet
And yes, just about anything non-lexical lifetimes can do you can do with static analysis. But the set of things you can do with Rust includes things that need explicit lifetime annotations, which C doesn't have.
If you're the only person smart enough to solve the $64B problem so everyone can start writing safe and backwards compatible code in C/C++, please, go right ahead.
> hence it would be backward incompatible
Rust is backward incompatible with C. A C implementation with fat pointers would not be backward incompatible with C. Depending on how it was implemented it could be ABI-incompatible with older C implementations but this is not necessary either.
I am challenging that. I do not believe that a C implementation that uses fat pointers needs to be any slower than Rust.
> If one wanted safety without performance they would just use java or similar.
Or C with a safe implementation.
It seems somewhat unrealistic to expect a really new language to commit across the board to the same sort of ABI stability as a decades-old language such as C.
Is there any research on having compilers do some of these tricks automatically? A compiler should, at least in principle, be able to tell what aspects of a type parameter are used in a given piece of code. Such a compiler could plausibly produce code that is partially or fully type-erased automatically without losing efficiency. In some cases, I would believe that code size, runtime performance (due to improved cache behavior), and compile times would all improve.
I'd hate to have the same happening for rust.
If for some reason modern languages insist on monomorphization, we should be able to design an ABI that suits that need.
A modern shared library is code that generates code (very much like a dynamic linker is actually code that links code).
The interface of the library would consist of:
a) a description language for the shape of data types (not types themselves, mind you)
b) a list of generators for functions
c) a list of function applied to shapes as a requirement
The job of the modern dynamic linker would then be to invoke all the functions to the necessary shapes, put the resulting code into memory and link it. It might be useful to support this with some kind of caching mechanism.
To get to the point, I quite prefer the package manager way of installing software to the "hunt down a single release" app installers of Windows/Mac. The only issue is that sometimes software is a bit out of date. That's what the Snap/Flatpak/AppImage projects are trying to solve.
As soon as one of those three get their user-hostile issues fixed, it will be a software paradise. :D
Though I think my problem with Rust is that they make breaking changes in their compiler and spec every release, and people regularly build on Rust nightly to get features not yet released. This all makes things complicated for a distro.
But the points about making breaking changes at the bottom resonated with me. Stability is what has allowed the Linux ecosystem to grow so well: many interconnected parts all moving in unison. Not being able to fix a bad design decision because of this does suck. Still, having everything work is, sadly, more important than perfect design.
That's not accurate.
Rust is backwards compatible. Most 1.0 code would still compile perfectly fine today. There have been some minor breaking changes for soundness reasons, if I remember correctly. There also has been one edition upgrade (2018 edition), but every new compiler still supports the old edition, and most new stuff actually works in the old edition as well.
There are experimental features, which are only available on the nightly compiler and have to be opted into with a "#![feature(x)]" attribute. But those are very clearly labelled as experimental, unstable, and evolving, with no stability guarantees whatsoever.
The criticism I would share is that while Rust is backwards compatible, it is obviously not forwards compatible. Rust has evolved dramatically since 1.0, and many developers jump on new features once available. So compiling actively maintained projects with an older compiler, as distros like Debian do, is not fun.
But the rate of change has slowed down a lot over the past year or so.
Fwiw, I recorded at least four different times a Rust release broke timely / differential dataflow, and have seen a few others in other folks' code. Afaict, none of them were soundness related, and were instead due to ergonomic additions and performance improvements.
They haven't been recent (e.g. were 1.17, 1.20, and 1.22, and the others earlier) and I agree that things are much more stable now.
The biggest issue I had was with the macro import and visibility changes in the 2018 edition.
That broke quite a lot and was rather painful.
I think they are doing the right thing with the break (it adds something I've wanted) but at the same time it's another break.
That has changed a lot over the past 1-2 years. Many popular crates have matured a lot and work just fine on stable.
Not all, and Rust is still evolving, but things are much more stable now.
That's the problem and stabilization doesn't solve it. Only rejecting language changes will.
This problem is solved though. Almost all nightly users shifted to stable after async-await was stabilized. The last holdout (IIRC) was Rocket, which also compiles on stable now. I can't think of any popular libraries or frameworks that require nightly. I can't think of any popular feature that people would want to use nightly for.
I think you could learn Rust now, write your code and then not worry about any new features that are added, ever. Your code will work without breaking. Keep upgrading the compiler every 6 weeks and your code still won't break. That's a guarantee.
What are you using, then, COBOL?! The C standard was last bumped in 2018, C++ is more aggressive than that, Python has shipped some downright controversial changes in its last minor versions, Go is gearing up for 2.0... Languages evolve.
Languages evolve, okay, but Rust felt more like it was still in the making.
Have you ever compiled ripgrep?
The underlying issue is that even though the standard library is statically linked, Rust does not make it easy to compile "custom" builds of the standard library as part of your project - which leaves you with all sorts of excess code bloat in the final build. You can solve this, but it requires unofficial tools such as "xargo" and the like. Once this is addressed, Rust should become genuinely competitive with C/C++ (wrt. binary size).
See also https://news.ycombinator.com/item?id=23496107 for the details.
Can you bring specific examples? The stable version is not only actually stable, but when big changes happen you can also opt into previous edition's behaviour with one config line. Breaking spec every release really doesn't sound right.
As for unstable - distros could find that complicated, but distros don't have to ship unstable things, so that doesn't sound like a big problem.
this is demonstrably false
- An ABI is not a PL feature but a platform feature; i.e., it is not that Rust lacks a stable ABI, but that e.g. Linux does not define a stable ABI for Rust (it defines one for C, and you can use that ABI from Rust).
- You can export generic Rust APIs with a stable C ABI by using trait objects, and it is often very easy to do this. So the claim that Rust and C++ are in the same boat wrt generics/instantiations is not true.
This is somewhat of a disingenuous thing to say, because it implies that the fault lies with linux for not providing a stable abi to rust. An ABI comprises various conventions wrt calling convention, name mangling, data layout, etc.; these are provided variably by the operating system, language specification, and language compiler. And, as TFA mentions, the rust compiler explicitly does not provide a stable ABI.
Mainframes also provide cross language stable ABIs, so called language environments.
The C ABI is specified in a spec that Linux adopts (e.g. the x86-psABI), and it is what allows all software using this ABI (from assembly to C to Rust) to e.g. interface with each other. Linux could write an ABI spec for Rust on its platform today and add a patch to the Rust compiler (or to a C++ compiler) to adhere to this ABI.
Nobody has done this, and from many POVs it does not make much sense to do, but it's up to the platform to specify how binary software communicates. Linux only specifies this for C, and that's what Rust software currently does and has to use on Linux.
If you're going to criticise GNU/Linux for not providing a Rust ABI, make sure you're aiming at the GNU part. The Linux part doesn't care.
That's why compilers have target "triples" (now more than 3 items): <CPU architecture><subarchitecture>-<vendor>-<os/system>-<abi>. So you might have ARMv7m-st-none-eabi for some embedded STM32 bare-metal code and x86_64-pc-linux-gnu for Linux. All C, all different ABIs.
Do you have a guide for that? All the references I can find don't seem to behave any differently than they would in C++ -- e.g. https://users.rust-lang.org/t/passing-a-trait-object-through... ; https://doc.rust-lang.org/nomicon/ffi.html ; ...
Initially on reading the title I just eye-rolled, but clicking through and seeing it was a response to that, and was actually the quote in quote marks made much more sense!!
HN fixes a number of things in titles on submission, but allows you to undo these automatic changes manually.
This has been debated for years, and part of the answer is right above. Also, software is not a collection of biological organisms. And the local variables are not shared, so WTF anyway. The analogy makes no sense. Everybody is already neatly separated.
Proponents of everything-static have yet to show non-toy, non-specialized systems where everything is actually static.
Let's avoid the strawman anyway. In this case, yes, some static linking can have its uses, especially for small utility/metaprogramming/etc. packages, although it has and will always have drawbacks too, especially for higher-level feature support (e.g. a codec). You have to go into the specifics to understand which matters more depending on the context. Probably a mix is needed.
For a Linux distro, I suspect some people will go crazy if the fix for a security vuln of a small piece of code ends up downloading hundreds of MB, but maybe there are advantages so great that this is something we can live with. The net perf impact is extremely hard to predict and measure. You will duplicate tons of code, but arguably e.g. the cache overhead might not be extremely bad, we now have tons of memory, so maybe we can waste some, etc.
Note, however, that if a Linux distro is competing with other kinds of platforms, there is a risk of putting the Linux distro at a disadvantage if the static-vs-dynamic choice (maybe on a package-per-package basis) is made improperly, because other platforms draw a distinction between platform and application: their platform typically provides a very large API, and they won't go the insane way and switch to static.
The lack of a proper dynamic linking story for Rust is a problem that needs to be fixed to enable some kinds of usage. It's not something that can always be worked around (sometimes it can, and for some crates you really want static to begin with anyway).
I always wondered about this problem; you could distribute the .o/.a's the same way you currently distribute the .so's, and integrate the linker with the package manager. This theoretically seems to share most of the benefits of both static and dynamic linking: push complexity away from the kernel/dynamic loader, smaller updates = easier patching (compared to fully static binaries), etc. And it works for closed source.
OpenBSD does something similar already for libc and kernel (for boot-time address layout randomisation) and it works great.