Not so bad compared to what? Yeah, compared to downloading a tar file from the website and running ./configure, make, etc. - right, it's probably quite a similar risk. But who does that?
Every decent Linux distro has a package manager that covers 99% of the software you want to install, and compared to an apt-get install, pacman -S, yum install and so on, running a script off some website is way more risky. My package manager verifies the checksum of every file it gets to make sure my mirror wasn't tampered with, and it works regardless of the state of the website of some random software. If I have to choose between software that's packaged for my package manager and software I have to install with a script, I'll always choose the package manager. And we didn't even start to talk about updates - as if that isn't a security concern.
The reason we should discourage people from installing scripts off the internet is that it would be much better if that software were just packaged properly.
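For comparison, doing even a fraction of what the package manager does automatically means something like this by hand (URL and checksum are placeholders here, just to illustrate):

    # instead of: curl -sSL https://example.com/install.sh | sh
    curl -sSLo install.sh https://example.com/install.sh
    less install.sh                  # read what it actually does
    sha256sum install.sh             # compare against a published checksum, if one exists
    sh install.sh

And that still doesn't get you signed repository metadata or automatic updates, which apt/pacman give you for free.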
> Every decent Linux distro has a package manager that covers 99% of the software you want to install
I wish this were true, but plenty of experience with Linux usage tells me that not having something packaged is a very common occurrence.
Though of course this can be improved: more people should actually help work on their favorite Linux distro, so more software gets packaged. And upstreams should try harder to collaborate with distros, which unfortunately they rarely do.
I wish more repositories took the NixOS approach of writing the build scripts (+ patches if necessary) on GitHub for easy visibility and a more familiar way to submit a new or upgraded package. Once approved the binary is built and distributed from the Nix binary cache.
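For anyone who hasn't seen it, the workflow is roughly this (the package attribute is a placeholder):

    git clone https://github.com/NixOS/nixpkgs.git
    cd nixpkgs
    # add or edit pkgs/<category>/<name>/default.nix, then test the build locally:
    nix-build -A <package-attribute>
    # open a PR against nixpkgs; once merged, Hydra builds it and it lands in the binary cache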
NixOS is awesome, but contributing to Nixpkgs is at least a little challenging. The tooling leaves a bit to be desired, and it’s really hard to keep all of the guidelines and rules in your head. It would be an excellent experience if there was extensive linting and the tools were easier to understand, imo.
Certainly isn’t an easy problem.
It’s still probably a lot better than contributing to say, Debian. OTOH, the PR backlog is always daunting.
Linux distros should make tasks that amount to little more than collecting links as easy as editing Wikipedia, and contributing shouldn't require learning git, learning their packaging syntax, giving out your real name, or creating an account.
I've seen StackOverflow questions for "how do I install X" with thousands, sometimes hundreds of thousands of views and no-one has tried to contribute the 10 or so lines of code in the accepted answer as a package. Something is clearly too hard or not approachable.
Right. I love Debian, but its packages are often very stale. That's why many end up using Ubuntu. And yes, I get that package review takes time, and that Debian is arguably more secure. But that's little consolation when you're dead in the water because what's packaged is too old.
I don't think this idea of "Debian packages are often very outdated" still applies nowadays. One can add "testing" or "backports" channels to /etc/apt/sources.list.d and get "upstream version" software. Even "stable" ships fresh enough software these days.
Failing that, you can always get the source, update it and send an NMU back to Debian. Let's not forget that this is how open source works :)
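For example, on Debian 10 something like this is all it takes to pull newer packages from backports (the release name depends on what you're running, and the package name is a placeholder):

    # /etc/apt/sources.list.d/backports.list
    deb http://deb.debian.org/debian buster-backports main

    sudo apt update
    sudo apt install -t buster-backports <package>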
> Ubuntu is nothing more than the Debian "unstable" branch with Canonical branding plus non-free packages.
This just isn't true. Ubuntu is typically quicker to update popular packages, such as desktop environments, kernel, etc. Debian unstable, even experimental, are often months behind on Gnome, for example.
If Ubuntu is the not stale alternative to Debian (stable), then I can't imagine how bad the situation is there. I often build software myself because Ubuntu is very often stale.
Ubuntu LTS (freezes every two years) or "regular" Ubuntu (freezes every six months)?
I'm wondering if the periodic freeze-the-universe model that many distros use reflects a world that doesn't really exist anymore where distros came on DVDs (or CDs, or floppies). Whatever version you had on the disc, that's the version you're going to use.
I just started playing with FreeBSD in a VM, which has a frozen base system and constantly-updated packages separate from it. This works better for software you don't think of as an "OS component" but the question then becomes where you draw the line.
Or maybe it's just a fundamental disconnect between consumer-facing "move fast and break things" and enterprise-level "never break anything even if it means you can't move at all" and there's no way to make software that works for both.
This isn’t how Ubuntu even works. Ubuntu doesn’t just “freeze” their operating system every two years. They are constantly delivering package, security, and hardware enablement fixes. Often the stuff that lands in Ubuntu non-LTS versions ends up in the LTS point releases. They keep a stable base of x.x versions but they definitely backport bug fixes to x.x.x versions of their software packages. For example, there was a bug in sudo like 3 weeks ago, and Ubuntu issued a fix within hours of the upstream project’s fix.
They back-port "high-impact" bugs such as security issues (your example), severe regressions and bugs causing loss of user data. They do not back-port other bug fixes or new features. The result is that you often find the version included in your Ubuntu release is stale. See https://wiki.ubuntu.com/StableReleaseUpdates#When
Maybe "version freeze" or "feature freeze" is a better term. The sudo bug was fixed in version 1.8.28, but Ubuntu LTS didn't upgrade to the new version. They're still on 1.8.21p2 from 2017, but with the bugfix and other Debian and Ubuntu patches applied, resulting in a bizarre package version of "1.8.21p2-3ubuntu1.1".
Which with something relatively small and stable like sudo, is one thing. For big projects on rapid release cycles, like GNOME, it's got a much bigger impact.
Though I do see that Ubuntu does keep updating Firefox and Chromium to the latest versions in LTS, because they're so big and change so fast that backporting fixes has become practically impossible. That looks like a very rare exception to the rule; your typical Python library won't be getting that treatment.
Well, in this case people should have been using Debian Testing instead of Stable.
But yeah, it's often the case that people don't understand what Debian Stable is and its trade-offs compared to Testing, and they end up unhappy with it or switching to Ubuntu (which is ~very~ similar to Debian Testing).
>there is security support for testing, but in general it cannot be expected to be of the same quality as for stable:
>Updates for testing-security usually get less testing than updates for stable-security.
>Updates for embargoed issues take longer because the testing security team does not have access to embargoed information.
>Testing is changing all the time which increases the likelihood of problems with the build infrastructure. Such problems can delay security updates in testing.
One can think of Debian testing as the "next-stable".
How does it work?
1. Upstream releases a new version; it goes to unstable.
2. The package is tested for some days in unstable and gets promoted to testing.
So saying that testing doesn't get security updates is somewhat incorrect, since you are grabbing recent software. But on the other hand, having too recent software also has its downsides ;)
I simplified a bit. Yes, Debian testing gets new updates, which means it gets security updates. Eventually. It can (and does) take days for critical security updates to migrate from unstable to testing after stable already has access to the patched version.
I'm sorry, was my message unclear? There were no assumptions.
I'm speaking from experience that when I was using Debian testing I would usually receive security updates days after they are available for Debian stable.
Obviously security updates for stable do not go through the normal release cycle.
I wasn't commenting on stable security updates, but on the lack of timely access to security updates on testing.
Agreed, but that's actually an (UX) problem that Debian should fix. "Testing" is an awful name for "stable enough for normal use". When I first installed Debian I made the same error of installing stable on desktop and then fighting with it to install packages from testing... Just renaming testing to "regular" would prevent lots of wasted time all around.
It's been this way forever, though - when I started woody was "stable" but obsolete the day it was released. Since "stable" and "testing" are aliases of branch names, changing them would break scripts all over the place. You move to Debian, you have to learn to speak the language.
Though the truly baffling bit of Debianese is "contrib" which means "this is free software but depends on non-free software." I can kinda see how it came to mean that, but it's very non-intuitive.
Testing is only ok for desktops if you are ok with reinstalling it every so often, like with other distros. It won't last longer than your hardware, and will get odd problems after an upgrade once in a while.
Stable basically means it won't change, and says nothing about freshness. Debian has recently adopted a policy of releasing on a time basis, so it's never very stale.
I'd go back to Debian and their stale packages if only they had scheduled releases like Ubuntu has. Imagine Debian 19.10, 20.04 and so on, with Long Term Support on the .04 releases every other year. What bliss.
Debian has been doing a stable release every two years for 14 years now. If you want the equivalent of a non-LTS Ubuntu release, you should use Debian Testing (despite the naming, it's pretty stable).
Yup, and Debian recently started committing to 5-year support cycles, so it's basically the same as Ubuntu LTS releases (can skip one, but not two releases), with the difference that Debian is released when it's ready, whereas Ubuntu is released on a schedule.
Personally, I don't trust Ubuntu LTS releases until they get their first point release, and even then I'm skeptical since they're a bit more loose with package versions on stable. I do trust Debian when it first releases because they're far more rigorous in their testing, though I usually wait a week or two before doing a release upgrade just in case.
I used to really like Debian testing, but I've since moved to OpenSUSE because they have a real rolling release (for my desktop) and a solid release based version (for servers). I like Debian, but testing gets a bit sketchy around release time (frozen, and then a ton of updates), and I honestly don't trust Sid aside from pulling in the odd package. I don't trust Ubuntu at all, since it has caused me far too many problems in the past.
Making Debian packages is a colossal pain in the ass, or at least it's poorly documented. I've tried to learn it twice and abandoned it for more user-friendly solutions to the problem.
Some people may say I'm stupid for not figuring it out, but the standard for usability of software has improved a lot since these systems were invented. The UX needs a serious overhaul.
I've been using Debian for 20 years and have never seen this advice. I currently use custom scripts for building local packages of things.
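In case it helps anyone, "custom scripts" here is nothing fancy; a bare-bones local .deb can be as little as this (all names are made up):

    mkdir -p mytool/DEBIAN mytool/usr/local/bin
    cp /path/to/mytool mytool/usr/local/bin/

    # mytool/DEBIAN/control
    Package: mytool
    Version: 1.0
    Architecture: amd64
    Maintainer: you <you@example.com>
    Description: locally packaged tool

    dpkg-deb --build mytool mytool_1.0_amd64.deb
    sudo apt install ./mytool_1.0_amd64.deb

It's nowhere near Debian policy compliant, but it keeps everything visible to the package manager.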
I think this comes down to Debian's weak documentation. The recommended intro guide from the wiki page [1] doesn't say this, and looking through the maintainer's guide I do see this [2] in there, but it is buried in pages and pages of other docs.
Really, someone needs to write a new basic intro that doesn't bury the lede. It should start with this then expand on what to do for various issues.
Or maybe start with one of those 99%-automated options, if they are even easier. As it is, you cannot find this information without someone telling you; searching for it or reading the docs won't turn it up.
And how is using a package built like this any better than curl | sh from upstream? You're still trusting the upstream tarball. Now in addition you are trusting the packager.
It seems like you're assuming that there's someone vetting these packages. For enterprise distros like Red Hat that's certainly true. Community package maintainers in, for example, the Debian project provide some safety as well. But there are plenty of package managers where that's just not the case. In the case of Homebrew, the package manager pulls down the program directly from upstream and installs it. It's exactly the same as downloading a tar file from the developer's website over HTTPS. Same with npm. Some package managers like Maven and NuGet will rehost artifacts but if the project owner is malicious or compromised then that won't help - so the risk profile is again basically the same as downloading a tar file from the website.
> In the case of Homebrew, the package manager pulls down the program directly from upstream and installs it.
Sure, in the most basic case it does this, but even then it does more than that. At minimum it also verifies that the download matches a known-good hash.
The important part (to me) is that Homebrew also ensures the package installed conforms to Homebrew’s standard. There are a lot of possible conventions (install in /usr, /usr/local, /opt, etc.) for something as simple as where the files are put, let alone how it’s built and how its dependencies are pulled in.
So to say there’s no value in installing from Homebrew over curl|sh is clearly misguided IMHO.
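To spell out that first point: the formula pins a sha256 for the exact tarball the maintainer reviewed, and brew refuses to proceed on a mismatch. The manual equivalent (placeholder names) would be something like:

    curl -LO https://example.org/foo-1.2.3.tar.gz
    shasum -a 256 foo-1.2.3.tar.gz
    # install only if the digest matches the one recorded in the formula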
I get why https://docs.brew.sh/Installation doesn’t discuss the versioning or security practices. It is interesting that homebrew doesn’t seem to interface with macOS’s signing and installation practices.
Reminds me that there is still no official package manager on macOS. So https://nodejs.org/en/download/ has you comparing check sums.
How exactly is the file going to get tampered with if you're using curl-to-sh? Everyone uses HTTPS nowadays. Validating the hash is not really doing anything significant.
As for ensuring that the package is well-behaved, could you elaborate on that? I'm not aware of Homebrew doing something like chrooting to /usr/local before running the install script. And the install script can do anything as your local user, same as curl-to-sh. Perhaps the Homebrew maintainers would catch something nefarious in the formula itself, but given that most formulas download code from the Internet and run it, that's not much help.
> Validating the hash is not really doing anything significant.
Yes it is, it’s ensuring that what the maintainer of the formula verified is still what’s being downloaded now. HTTPS does nothing to protect against someone modifying what’s behind the source URL, but the hash does (assuming the maintainer actually inspected the initial download, which many if not most do).
> but given that most formulas download code from the Internet and run it, that's not much help.
1) the previously erroneously dismissed hash helps there.
2) the sandbox which you linked to later also helps too.
I suppose it comes down to whether the Homebrew maintainer who accepts the formula PR containing the hash is actually examining the upstream code in detail. I think that is unlikely; there's too much code for Homebrew maintainers to be experts on everything contained in homebrew-core and follow every single patch.
So yes, the hash prevents the upstream project from switching out the code at any time, but if they wanted to add some malicious code all they have to do is file a homebrew-core PR and hide it in a legitimate change.
> I think that is unlikely; there's too much code for Homebrew maintainers to be experts on everything contained in homebrew-core and follow every single patch.
By that logic, everything in any package manager should be treated with distrust then. Debian, RHEL, Arch, etc etc.
Not saying I disagree with distrusting, just making the point that risk exists everywhere, at some point you have to decide what you’re comfortable with.
To some extent, sure, but I think that extent is greater with Homebrew. It's my understanding that package maintainers for Debian, RHEL, etc are typically experts in the packages that they maintain. They overlay their own patches to ensure compatibility and submit patches upstream. With Homebrew there's only a small number of committers who maintain the homebrew-core repository, accepting PRs from thousands of people in the community. It's just a different situation.
> It's my understanding that package maintainers for Debian, RHEL, etc are typically experts in the packages that they maintain.
That’s certainly true in some cases, but is definitely not the case in the majority. I think you are letting your personal biases color your judgement too much.
Many package managers provide a real audit trail, and this (IMHO) is very valuable. For example, in Python's pip (and apparently in npm too), filenames are never reused:
A developer's website may change at any time, and even go back and forth between good and bad versions. The pip package version won't. Put a version pin in place, and you can be sure you get a known-good package or a clear error.
Granted, you can record/verify the checksum of downloaded files as well, but many people don't. And crazy practices like 'curl | sh' make that impossible anyway.
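To make that concrete, pip even lets you pin the checksums, not just the versions (the version and digest below are placeholders):

    # requirements.txt
    requests==2.22.0 --hash=sha256:<digest copied from PyPI>

    pip install --require-hashes -r requirements.txt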
Compared to downloading a binary, and 10 other similar methods people use.
>Yeah, comparing to downloading a tar file from the website and running ./configure, make etc - right, it's probably quite a similar risk. But who does that?
Millions of people?
And even more just download binaries off of websites...
Isn't there even a comment from Linus (could have been someone else) saying that he does exactly that and if that doesn't work gives up on the software?
There's still an issue with the package managers requiring arbitrary shell commands to be run, often with sudo. From docker[1] there are steps like these:
Install packages to allow apt to use a repository over HTTPS...
Add Docker’s official GPG key...
Use the following command to set up the stable repository...
Once you've done that, then it's just apt-get install.
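From memory (check the linked docs for the exact commands), those steps boil down to something like:

    sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update && sudo apt-get install docker-ce

Note that the GPG key step is itself curl piped into a sudo'd command, which is exactly the pattern being criticized.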
> And we didn't even start to talk about updates - as if that isn't a security concern.
I think that's the worst part of it. Some software will nag you about updates, yarn and pipenv for example, but it's far more reliable to have one system that keeps everything up to date.
Well, if this were true - "Every decent Linux distro has a package manager that covers 99% of the software you want to install" - we wouldn't have to install things through sh.
4/5 of the examples the author gave have a package in Fedora. The one that doesn't (oh-my-zsh) is simply a git clone so it doesn't make much sense to package it.
For one of the examples provided, rustup.sh, there were not complete packages the last time I looked for them. There were some packages on Debian and Fedora but I ran into problems configuring the Rust plugin for VS-Code because it assumed that the rustup executable was present and it was not (at that time.) Going down this path, one then becomes dependent on using rustup to update the Rust installation. Now I need to run two commands to keep my system up to date. (pip? Make that 4 commands pip, pip3. CPAN? Yet another.)
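Concretely, "keeping the system up to date" now looks something like this instead of a single command (package names are placeholders):

    sudo apt update && sudo apt upgrade      # distro packages
    rustup update                            # Rust toolchains
    pip install --user --upgrade <package>   # and again for pip3, cpan, ...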
I have some confidence that at the least the Debian packages won't be changing rapidly, making it more likely that any problems will be discovered before they get to me. A script (or tarball) fetched and installed from the Internet can change literally from one second to the next. If a trusted site is compromised the next download could be tainted.
Not all "package managers" instill confidence. I've heard too many bad things about npm and the associated environment and won't have it on my systems, but I am not a web developer so the impact is the occasional utility I have to forgo.
Indeed. It must be the other 99% that aren’t in the repos because I have to install a lot of things that aren’t available in repos (or that haven’t been updated in stable in a long time).
The package manager is only an option if you have root rights. Otherwise you either download binaries or compile it yourself, from the source downloaded from their website.
Conda has an .exe installer on Windows and a .pkg installer on macOS. Both signed by Anaconda, Inc. for the OS. There are RPM and deb bootstrap repos for Linux. Then there’s also the .sh shar file installer.
It would be nice to have something like a fusion of conda and apt to work right out of the box on distros like Ubuntu. Like an "apt install --user". I don't understand why we need root rights to install programs that will not need root rights to run anyway. Seems like a huge and obvious oversight to me. I guess it's not a big pain point because in most cases people use their own machines and have root access. Not always the case though in companies or university labs for example.
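Conda at least gets partway there today: its .sh installer can drop everything into a user-owned prefix with no root at all (installer filename as distributed at the time; the package name is a placeholder):

    sh Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
    "$HOME/miniconda3/bin/conda" install -y <some-package>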
I have been wondering this for ages. Just last week I had to manually set up KeePass, VSCode, IntelliJ, Guitar (git ui) by manually unpacking tarballs (luckily already built) into ~/bin. Guitar luckily had an AppImage which worked beautifully as a single executable.
For that reason alone I'm a huge fan of AppImage over snaps and flatpaks.
A giant chunk of Linux users are software developers. Running code that has not yet been packaged or never will be is an extremely common occurrence for a developer.
> Yeah, comparing to downloading a tar file from the website and running ./configure, make etc - right, it's probably quite a similar risk. But who does that?
Many Linux users? I get most of the software I need through package managers, but somewhat frequently I need to build from source. Particularly if I want the most up-to-date version on Debian.
Git cloning a repo is marginally better in the sense that, well, it's open. Theoretically if it were doing something nefarious, someone would've noticed. Is it perfect? No, of course not. I still think it's better than running random curl'd scripts.
>Yeah, comparing to downloading a tar file from the website and running ./configure, make etc - right, it's probably quite a similar risk. But who does that?
Are you asking who compiles and runs software? A lot of people; for example, I do it every time a new emacs version comes out and isn't in the package manager yet.
> it's probably quite a similar risk. But who does that?
After downloading you can at least do some sanity checks - even if you don’t checksum it, then if you are familiar with it, does it look right? Is make doing anything weird? Whereas curl|sh doesn’t give you this opportunity.