I got to thinking about what it would take to do such a task, and mainly what I came up with was a whole damned lot of time. Assuming a person can read, understand, and remember 5000 lines of code per day (which honestly I think is far more than is realistic), it'd take about 8.5 years just to audit the Linux kernel. Add in all the other stuff and I figure something on the order of two decades. And in the end you'd be running two-decade-old software, or you'd have to start over.
At the end of the day, security comes from the personal and corporate economics of reputation, profit, and prison avoidance. Do your best to get your stuff from people who you judge to be trustworthy and rely on their own self interest to not be malicious and to do their best to protect their repositories. And rely on others to be good citizens and report the bad shit that happens to them so that things can be cleaned up.
Now I'm not saying to throw out security best practices, but people should be aware that the security of their systems is built on trust in human nature and self interest.
rm -rf /tmp/something
rm -rf /
People make mistakes. Mistakes contained within a program are one thing; mistakes in shell scripts tend to have bigger consequences.
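One classic way the second `rm` happens is an empty or unset variable in a cleanup path. A sketch of the failure mode and a guard against it (the variable name is invented for illustration; `set -u` and the `${var:?}` expansion are standard POSIX shell):

```shell
# With set -u, expanding an unset variable is fatal, and ${var:?msg}
# additionally rejects empty values. Either guard turns a would-be
# "rm -rf /something" into an abort before rm ever runs. The subshell
# here just lets us demonstrate the abort and continue.
(
    set -u
    tmpdir=""    # imagine this assignment silently failed upstream
    rm -rf "${tmpdir:?tmpdir is empty, refusing to rm}/something"
    echo "never reached"
) || echo "guard tripped; nothing was deleted"
```

Run under any POSIX shell, the expansion error kills the subshell before `rm` executes.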
As for UDP, I'd tell ya a UDP joke, but get it you might not.
Well then I guess you haven't seen this, in which the server detects whether the script is being downloaded to disk or piped to a shell, and adjusts the payload accordingly:
i.e. when you download to disk and look at it, it looks benign, but when you pipe it to a shell, you get owned.
Just say no to curl|sh, kids.
And the ancillary point is that piping directly to the shell is worse, because it lets an attacker exploit any false sense of security a user might have from having looked at a downloaded copy of the script (whether that user looked at it themselves, or someone else did and vouches for it).
The correct thing to do is distribute via cryptographically signed archives or packages, or via signed git tags.
Again, just say no to curl|sh, and also just say no to downloading and executing unverified shell scripts.
Really, what are you arguing for here? Security is a process that involves layers, and you seem to be advocating for tossing all the layers aside because layer A doesn't protect against attack method B.
> The correct thing to do is distribute via cryptographically signed archives or packages, or via signed git tags.
And now you have the problem of trusting the key instead of trusting the delivery mechanism. This approach works well for package managers, because the user's trust in the package manager implicitly extends to the keys it uses to verify everything it downloads. It doesn't work nearly as well for software distributed directly to end users: very few people even know how to verify a signature, let alone figure out whether they should trust the key it was signed with. After all, the fact that something is signed means nothing by itself; what matters is whether it's signed with a key that is trusted as belonging to the vendor (and is assumed to not have been compromised).
> Security is a process that involves layers, and you seem to be advocating for tossing all the layers aside because layer A doesn't protect against attack method B.
"Read the script before executing it" is a really, really crappy layer. It assumes that you understand shell scripting, that the script is simple enough that skimming it and understanding what it does is actually reasonable, and that the hypothetical attacker hasn't figured out how to disguise their modifications so they aren't immediately obvious to someone skimming the code. As previously stated, almost nobody actually does this. And once you've determined that, no, you're not going to read through every line of the shell script you're using to install the software, then `curl | sh` is perfectly fine as long as you trust the delivery mechanism (e.g. https). At that point the only thing you've lost is the ability to verify code signatures, but people don't generally sign their shell scripts anyway, and, as previously stated, even if they did, not very many people understand how to actually validate a signature and verify that they have a trusted key. GPG is really not user-friendly at all.
I guess what it really comes down to is, use a delivery mechanism that handles all this for you if you can (such as a package manager). If you can't, then the only real question is "do I trust this vendor and delivery mechanism enough to run something I just downloaded on my machine?"
I'm sure bad guys and state-level actor-types love it when people like you encourage people to continue promoting Worst Practices for security. Anyone who can MitM can trivially exploit this to take control of any machine that installs software this way, and they can do so surreptitiously by exploiting the technique that sends different content depending on whether it's piped to a shell or saved to disk. And guess what: when it's piped to shell, the evidence is gone as soon as the shell process exits. It's like the perfect crime, and thanks to people like you who expend significant effort convincing people to keep doing it, it continues to be a problem.
Oh, and you apparently haven't even considered what happens when a connection is interrupted and the shell receives a corrupted or incomplete script. This alone is reason enough to never do it.
The correct thing to do is to discourage Worst Practices and encourage Best Practices. The correct thing to do is to never tell people to pipe to shell. The correct thing to do is to provide instructions on how to verify the integrity of downloaded software. To do anything else is grossly negligent and irresponsible.
> the only real question is "do I trust this vendor and delivery mechanism enough to run something I just downloaded on my machine?"
It has been demonstrated that the delivery mechanism in question is completely insecure. Even TLS does not ensure that a state-level actor who can forge certs is not MitM'ing a connection. You think NSA can't do this? They probably can. Can they break a 4k GPG key? Unlikely.
So the real question here is why you're trying so hard to encourage people to continue with this grossly irresponsible practice.
> Oh, and you apparently haven't even considered what happens when a connection is interrupted and the shell receives a corrupted or incomplete script.
I addressed this in my very first comment. You might want to try actually reading it before accusing me of not understanding an issue that is honestly really trivial to fix (wrap the whole script in a shell function and then execute the function; early termination means syntax error without anything being executed).
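The wrap-everything-in-a-function pattern looks like this (a minimal sketch; the function body is a placeholder for the real install steps):

```shell
#!/bin/sh
# Nothing executes while the function body is being read. If the
# download is cut off anywhere before the last line, the shell sees
# an unterminated function definition (a syntax error) and runs
# nothing at all.
main() {
    echo "step 1: fetching files"
    echo "step 2: installing"
}
main "$@"    # the only line that actually runs anything
```

Truncate this script anywhere above the final line and the shell aborts with a syntax error instead of executing a partial install.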
> The correct thing to do is to discourage Worst Practices and encourage Best Practices. The correct thing to do is to never tell people to pipe to shell. The correct thing to do is to provide instructions on how to verify the integrity of downloaded software. To do anything else is grossly negligent and irresponsible.
Refusing to provide a convenient way to install your software doesn't make anyone safer, it just means you have fewer users. Anybody who is even remotely able to read shell scripts or to verify software integrity already knows how to read an install instruction like `curl | sh` and modify that to download the script first. And nobody who doesn't already know how to evaluate something like `curl | sh` and decide if they want to do that or download/verify the script first is going to bother using your product if you make the install instructions overly complicated.
> It has been demonstrated that the delivery mechanism in question is completely insecure. Even TLS does not ensure that a state-level actor who can forge certs is not MitM'ing a connection. You think NSA can't do this? They probably can. Can they break a 4k GPG key? Unlikely.
As stated in the original comment, even considering a state-level actor who can forge legitimate certificates is way out of scope for this discussion. Anybody who is at risk of being targeted by a state-level actor already has to take many many precautions that normal users don't take, and your website telling them to use `curl | sh` to install something is not going to be an attack vector, because either they understand how to download and verify the script first, or they've already been compromised by any number of easier ways than hoping the target decides to install the particular piece of software that the state-level actor has compromised.
> So the real question here is why you're trying so hard to encourage people to continue with this grossly irresponsible practice.
No, the real question is why are you behaving as though users are simultaneously capable of reading every line of every shell script, doing GPG verification of downloaded software, and figuring out how to even make sure the GPG keys are trustworthy, while at the same time so ignorant that they can't figure out how to go from `curl | sh` to "download, verify, and then run" without step-by-step instructions.
Avoiding `curl | sh` does not make anybody safer. All it does is make it harder for users to install your software, while not changing a single thing for users who are paranoid enough to want to verify the software before running it (since they can do that anyway). The only 2 things that you need to do to make `curl | sh` reasonable are always use https, and make your script resilient against unexpected EOF.
Edit: Another thing that hasn't even been considered this whole time, is if the threat model here is the website or delivery mechanism has been compromised, then the attacker can change the install instructions too. So even if you don't offer `curl | sh`, the attacker could modify your install instructions to use `curl | sh` and the victim wouldn't know that's not the "real" instructions. The alternative threat model is the vendor itself is untrustworthy (e.g. they're serving up a malicious script without having been compromised), but there's nothing you can do to guard against that because if you don't trust a vendor to not intentionally give you a malicious installer, then you shouldn't be running their software to begin with.
However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.
> However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.
That's not correct. A MitM attack is a risk even if the publisher is trustworthy.
This is not complicated--in theory, at least: only distribute cryptographically signed software. Anything else leaves you vulnerable to many attack vectors. And anyone who instructs users to pipe to a shell is irresponsible and encouraging Worst Practices.
> > However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.
> That's not correct. A MitM attack is a risk even if the publisher is trustworthy.
You're contradicting something that I didn't write.
There is a real difference between covering your laptop with a coat and locking the car doors, versus leaving it on a bench while you take a walk around the park.
Just as there is a difference between piping the internet into a root shell and installing a signed package that a whole ecosystem of users is validating and looking at.
And for an egregious example of the author's approach of being angry without validating anything:
https://gnu.moe/petpeeves.html <- an angry post complaining about English mistakes that is itself splattered with basic grammar errors
The following, on the home page, is hilarious:
I also maked a sitemap for those who want to dounloud
Also: <!DOCTYPE QTML>
`curl | sh` is insecure if you're using an http url, although as has already been said here, it's not really any more insecure than "download this script and run it", unless you're expecting all of your users to actually read through the whole script first. But if you're using an https url then you should be ok since the page cannot be hijacked or modified en route (unless the attacker actually has a trusted certificate for the domain, which is an attack that's way out of scope of this discussion).
The biggest risk with this approach, which the page doesn't even mention, is the danger of the connection being terminated before the whole script is downloaded, as the shell will still evaluate what was sent. But this can be handled in the script by making sure that an early EOF means nothing is actually run (e.g. you can wrap the whole script in a bash function, and then the last line executes the function).
So if you're using an https url, and the script is written to be resilient against early termination, then this is a perfectly reasonable way to install things.
1) If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.
2) You'd have a hard time proving the first point, or reviewing whether a script acted poorly, if you didn't have a local copy, which piping to a shell like this normally prevents you from having.
If you can't trust the site to give you a safe installer then you can't trust the rest of the sources it gives you either--you would need to audit the entire package. Virtually nobody is going to do that. Singling out the installer as uniquely dangerous is security theater.
I'm having trouble finding a proper source, but I'm pretty sure someone came up with a way to do it even if curl spoofs the user agent. I think it worked by looking for the characteristic timing pattern of bash emptying the pipe in small bursts as it executes the individual script lines.
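The signal that reported technique relies on is easy to see locally: a shell executes a piped script line by line as data arrives, so a server inserting delays and watching how quickly the client drains the socket can tell `| bash` apart from a plain download. A sketch of just the client-side half, with no server involved:

```shell
# bash runs each command as soon as the line arrives: "first" is
# printed immediately, then the pipeline stalls for two seconds
# before "second" appears, even though the producer is one pipeline.
{ printf 'echo first\n'; sleep 2; printf 'echo second\n'; } | bash
```

A server measuring when its output buffer empties sees that two-second gap only when a shell is on the other end; a plain `curl -o file` drains the socket as fast as the network allows.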
curl | less
Or copy the request "as curl" from the network tab of any modern browser.
Curlbash lets you change out offerings at will, and leaves no auditable source for the user.
If you execute unsigned code from a 3rd party without sandboxing, transport encryption doesn't help you much.
Similarly to what happened to e.g. https://transmissionbt.com/keydnap_qa/ or http://www.classicshell.net/forum/viewtopic.php?t=6441
So, why not just install using a package manager that takes some actual security precautions?
Most of the time, the answer is that software developers are either "moving too fast" (Docker is guilty of this, though at least they use the underlying package manager) or, more often, just too lazy to provide packages through authenticated channels.
You're trusting the server in this case, not the contents being provided by the server. There's no way to verify that the contents attributed to eridius on github are the same contents that eridius actually put up there.
Github in particular has been shown to have problems with spoofed users.
Let's go one step further. I have a `curl | sh` script for you to execute to install Docker. Just run:
curl -fsSL https://get.docker.com/ | sh
The vulnerabilities go on and on. Using only HTTPS, you have no way to prove that the code you got from get.docker.com is the code that the docker engineers intended for you to use.
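One mitigation short of full code signing is to pin a checksum published out-of-band (e.g. in release notes you already trust). The sketch below fakes the download with a local file so the flow is self-contained; in reality the `expected` hash would come from the vendor over a separate channel, not be computed locally:

```shell
# Stand-in for: curl -fsSL https://get.docker.com/ -o install.sh
printf 'echo hello from installer\n' > install.sh

# In reality this hash is published by the vendor out-of-band;
# computing it here just keeps the example runnable.
expected=$(sha256sum install.sh | cut -d' ' -f1)

# Refuse to run the script unless it matches the pinned hash
# (sha256sum -c wants "HASH  FILENAME", two spaces).
echo "$expected  install.sh" | sha256sum -c --quiet - && sh install.sh
```

If the served script changes after the hash was published, the `sha256sum -c` step fails and nothing runs.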
I'd argue that "curl http://.* | sh" is always bad, but so is every webpage offering software downloads that isn't https by default (and there are plenty of them).
And maybe it's not relevant, but I find it really off-putting how the author calls these developers idiots and retards constantly.
BUT. There is a difference -- code signing. HTTPS ensures that the data isn't compromised en route, trust in the vendor is what makes you OK with letting them run code on your computer, but neither of those things protect against a compromised payload. ie, if the vendor's server gets hacked and the script replaced, HTTPS doesn't help, and you get code that the vendor never intended for you to run. Code signing is what protects against this, cryptographically ensuring that the code you got is exactly the code the vendor wanted you to have, and is the last link in the chain that connects your machine to a trusted vendor.
Secondary fun fact: when you add a key to your package manager's GPG keyring, it's valid for any package from any repository. That repository could happily replace libc; or, more strangely, something signed with the key of some popular repository could be uploaded to, say, the main Debian archive, and anyone with that key added would happily install it with no errors.
Plenty of side cases in reality:
(1) HTTP (no S) MITM. At least the lazy devs admit this one.
(2) No key/signature checking at all. Sure, some semi-lazy devs will tell you to add their own repo, and maybe you don't check the key for that repo yourself, but there are others who do and they'll raise an alarm you might hear. With curl|bash you don't even get this kind of herd immunity.
(3) No dependency checking.
(4) No adherence to standards. If you've ever tried to get a package included in e.g. Fedora or Debian, you know that there are people who will go over them with a fine-tooth comb and will reject them if they do bad things (or do them in a bad way).
(5) Most install scripts don't handle interrupted downloads well unless the author has taken special care (thank you for this one eridius). If you're piping directly into the shell you have no idea whether that's the case, and if the dev's lazy enough to be doing things this way in the first place the odds are poor.
Packages and package repos can be deployed and used in many ways. Some ways provide pretty strong safeguards and guarantees; other ways are weaker. Curl|bash is weaker than any of them. There's just no excuse.
6) reproducibility: vendors could version the download links, but I've not seen that done, so you're always getting the latest version, which, depending on what you're doing, might not be what you want.
7) uninstall: maybe the vendor was nice enough to include an uninstall script? Sure, with deb/rpm a vendor can screw up the uninstall, but the framework is there for them to do it right; with curl | bash the vendor needs to realize that uninstallation is something they need to handle and implement a solution on their own.
8) am I really running bash? And the version you expect me to be running? Hopefully your distro isn't evil, but I have more than once found bash to be just a symlink to something less feature-rich. The same goes for older versions; you'd be surprised which features are "new".
I also find it a bit odd because the cranky old sysadmin who solves everything with unintelligible shell scripts is often made a mockery of in this world of saltstack, docker, cloud and other buzzwords - yet now we're just signing up for an even worse version of it?
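Point 8, at least, is something an install script can guard against explicitly. A sketch of a fail-fast interpreter check (the version threshold is just an example):

```shell
#!/bin/sh
# Abort early if we are not running under the bash this script was
# written for, rather than failing later on a bash-only feature.
if [ -z "$BASH_VERSION" ]; then
    echo "error: this installer requires bash" >&2
    exit 1
fi
case "$BASH_VERSION" in
    [4-9].*) echo "bash $BASH_VERSION looks fine" ;;
    *)       echo "error: bash >= 4 required, got $BASH_VERSION" >&2
             exit 1 ;;
esac
```

Run under dash or another minimal /bin/sh, this exits with a clear message instead of a confusing failure halfway through the install.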
Making packages is hard if you want to support many distros. Maybe we should adopt the Go model and just ship binaries.
1. Make sure you trust the vendor
2. Make sure you trust the delivery (TLS)
3. Think twice before you sudo
This level of security isn't desirable for everyone in all situations, but it's not just dogma and it's not "the same thing".
So again, while I agree with their general sentiment, being "baffled by how [oh my zsh] became so popular" just because it instructs users to curl pipe shows that they don't get the core issue at play here.
"Never, ever underestimate the power of convenience."
It's easier to mess up security (the early connection-reset problem). This type of install script can't be run offline, because it needs to fetch dependencies (so you can't download it once and run it on multiple systems, and you can never run it on an airgapped system). It accustoms users to pasting commands into shells without knowing what they do, which is irresponsible even without JS clipboard shenanigans.
The author of this page went a lot overboard with the rhetoric, but the simple truth is that there's no good reason to suggest an install method like this. Even taking the exact same script, and asking the user to download and run it is a better plan, because it gives an improved user experience (though still nowhere near ideal).
And yes, I've seen Sandstorm's defense of this practice. I use and very much respect that project, but I couldn't disagree more with the choice of installation method.
It is useful for bootstrapping a package manager. Haskell's Stack uses this, Rust uses this, Nix uses this, etc
- Select text
- Start up a shell
- Paste the text into the shell
- Click download link
- Start up a shell so you can access stdin/stdout/stderr
- Run the script in the shell
- Click download link
- Run the file from your download manager once it's downloaded
1. Copy the link
2. Type wget, quotation marks, paste the link into the shell
3. Run it
4. Delete the installer script
1. Copy the command
2. Paste into the shell
Of course, I'd prefer all my software come from the package manager - a separate installer should just be for software that's not quite ready for packaging yet.
I generally have a shell open, so it is easier for me to copy/paste into that session than open a file dialog, save a temporary file, (potentially) switch to a different directory, run the script, and delete it.
> GUI program
Creates a temporary file on my disk, and needs a correct file association for .sh, since the download manager won't save it with +x (on my machine it automatically opens in Emacs, which isn't what I want). I don't even know how to make Linux open a terminal automatically for .sh. I know on Windows cmd.exe would close the window immediately after running the script, so you can't see the output. All in all, more complicated than just copy-pasting into an active shell session.
Still, if you're the type of person who generally has a shell open, you can probably figure out how to pipe something from a URL into an interpreter on your own. Which is the better thing to teach to users that can't figure that out?
I haven't manually deleted a temporary file in years, I just have a cron job that clears out my downloads directory. Though this may be a case where my weird workflow changes things from typical, from what I've seen most people just ignore the temporary files.
Most of your objections to the GUI program workflow don't actually apply to GUI programs. I admit that I forgot about the executable bit. I use a mix of Linux and Windows, and basically zero Linux packages have GUI installers -- most prefer .deb/.rpm files, which would be ideal. So I haven't actually run into that before.
It seems like it's only a couple of clicks to fix the permissions in the Nautilus GUI through Firefox, though. Really I would just run the installer from a shell anyway, but that's part of the better user experience: I get to choose that, it's not forced on me.
Well, you could if the script was written well enough. Back in the usenet days shar archives were a thing. Basically a shell script that had a large uuencoded payload (which itself was probably a tar file), so that you could just run it and get binaries out.
Of course no one actually does that any more, since encoding the binary payload as text bloats the archive considerably. But there's no technical reason why it couldn't be done.
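A toy version of the shar idea, using base64 and tar in place of uuencode (same principle; all the file names here are invented):

```shell
# Build a self-extracting script from a tarball, shar-style: the
# script carries its own payload as an encoded here-document.
mkdir -p demo && printf 'hello\n' > demo/greeting.txt
tar czf payload.tar.gz demo

{
    printf '#!/bin/sh\n'
    printf 'base64 -d <<'\''PAYLOAD'\'' | tar xzf -\n'
    base64 payload.tar.gz
    printf 'PAYLOAD\n'
    printf 'echo extracted: demo/greeting.txt\n'
} > selfextract.sh

# Running the generated script anywhere recreates the demo/ tree
# from the embedded blob.
sh selfextract.sh
```

The generated `selfextract.sh` is plain text, so it can be read before running, which was part of the appeal of shar in the first place.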
I've seen a lot of random accusations about us in the Rust project, but this is the first time I've seen anyone accuse us of deliberately trying to spread malware. I guess I'm learning how it feels to be a politician :)
(Do be aware of pastejacking though, but this is not nearly an important enough issue for a wall of shame)
EDIT: We can all learn something from the readability of that page though. One zoom and it's better than 90% of the websites I've seen. Text -- it works.
Can't click links. Pain to select them on the phone.
No way to skim through since lines are the same and you can't quickly detect individual items in the list.
Doesn't work too well with Safari's Reading View — whole thing is a single paragraph.
It's in markdown already; why not spend 3 minutes and set up rendering to at least plain p, h and a tags?
Also, the language is too condescending to be taken seriously. Not that tone has anything to do with the validity of the argument, and this probably matters less in the software world. But if you want to be taken seriously, you either need to be Linus Torvalds or learn to disagree respectfully.
It would be a mistake to group all curl pipes. Does it require elevated privileges? Is it served over TLS? Does it do any signature verification? What the heck does the script actually do?
Different levels of security are required depending on trust. I trust Debian's repository, so I feel less need to audit packages. But a random startup promising ponies? I'd like to at least skim what I can, then throw it in a jail/vm/container to test.
How is curl piping beneficial compared to grabbing the script, giving it a quick read, then executing it? Convenience is all I can come up with, and convenience often seems to be at odds with security.
I do usually glance over a script/makefile/whatever before I run it, not so much to find security issues but to see if there's anything I'd like to tweak about it. For example, I always install homebrew in a nonstandard location, and that means changing a couple things about the installer script first.
RVM and Homebrew are two big projects that also use this method of installing. They are a breeze to set up. There's something to be said for just getting the job done and going home for the day.
It's not the wrong issue, because I believe the main goal here is to raise general awareness that piping to bash with untrusted data is a bad idea and we as developers should frown on it not promote it.
IT security can't just be the ones that come in and say "you've got issues here, here and here" without also providing solutions.
BTW, it's not just about security. It's also about correctness, consistency, repeatability, reversibility, auditability, etc. As a developer, I still build and install actual packages on my test systems not because there's a security issue but so I can be sure that an uninstall/reinstall will work exactly as they should and not pollute my system with untracked changes. I don't know what kind of developer wants to risk checking in changes that don't match what they tested, but not the kind I want to work with.
> And most of all, the people that are part of the project are also likely to be malicious because trying to infect someone is the only valid reason to recommend this method of installation.
One valid reason: large projects (e.g. rvm) have proven over time to not be malicious. It is far easier for a user to copy and paste one line than any other install method. This lowers the barrier to entry and reduces support requests for the maintainer.
Having an opinion is fine. Disagreeing is fine. Pretending like your answer is the only reasonable one: not likely to win over many people.
We're currently working on a commercial version of Shill targeting Linux. If Shill sounds like a product your company wishes it could find, we'd love to hear from you. Shoot me an email at email@example.com.
What exactly is the alternative the author would suggest? Git checkout? Couldn't you paste-jack that too? If the instructions come from a webpage, aren't they all basically paste-jackable? What is the specific issue with using this method?
In short, I'd prefer apt-get over curl-bash any day of the week, but most Windows users install loads of software (sometimes signed, never checked) since their OS offers nothing better, and also on Linux you never hear this debate when someone offers a package for download. People worry about what you might be piping to bash because they can see it and notice it could have easily been anything; a deb package (or equivalent) is much more opaque.
More details: http://lucb1e.com/!126
I'm trusting someone to serve me a piece of code to run either way. Brew, or the people that provide the https cert for the given endpoint, right?
Surely they want to make it easy to install their thing without a hitch, and that's why they provide https://get.docker.com for me to pipe into bash.
My point is it all boils down in who you trust. If you are downloading something unknown, sure, it's harder to go wrong with a package manager (if the package is available), but you're still trusting someone not to attack you or to leak the private keys.
Crucifying curling into bash has nothing to do with how safe you are. It's almost like saying "Never run anything you download from the internet, it's dangerous!"
1. An https URL is not secure unless your trust model involves trusting the server completely. Unless you're running this script in an isolated throwaway sandbox, this is a terrible idea.
2. Obviously auditing can't rule out well-hidden maliciousness or clever bugs, as we're up against the halting problem. But it is quite easy to do a quick sanity check on a downloaded script to see if it is going to do anything wacky or boneheaded.
3. Explicit packages are expected to have identities like versions and hashes. This allows us to talk about how something has been modified, whether a specific download instance has been tampered with, etc. A rando script has these things from the developer's perspective, but not from the users'.
4. These "easy install" scripts usually want to puke files into random places on your system. /usr/local, /opt, /home/crap, /who/knows, etc. This is a great way to create an unmanageable system. Standard practice for software outside the package manager is to let the user choose where things should be installed, eg ./configure --prefix=xxx. Lazy people choose /usr/local and can always blow that part of their system away, the more astute use /usr/local/pkg-1.0.0 (for stow), and some even have completely arbitrary paths (I personally use something like /x/local-x64/pkg-1.0.0 which gets synced across machines with unison).
5. Such scripts usually continue their reign of brokenness by instituting some sort of auto-updating. Now the user has little idea what version they were running (say they want to switch to a source install to investigate some bug), has less control over versions changing at an inconvenient time, and is further discouraged from bringing the package under their own management.
The vast majority of these installers provide files that would be fine as plain archives, but the distributors think they're being clever while forgetting about users' general requirements. I do understand that in this age of concentrated Metcalfe's law, it helps to appeal to the lazy people who don't really care if their system becomes an insecure unmaintainable mess. But really, you owe it to your users to provide a proper downloadable package that installs in a manageable way. (And the same goes for proper versioned source releases, as opposed to telling users to grab a random git checkout.)
The last time I saw a downloadable shar-esque installer was quite some time ago, and I've never seen software which installs a proper distribution's package using curl | sh.
Plain install scripts are still around, but are generally fixed when a package grows up. A large problem with curl | sh (especially which runs further curls) is that it makes a crappy approach masquerade as a polished solution.
If the script fails halfway? Good luck trying to undo whatever it did if you do not have access to `zfs rollback` or similar.
It is also less-than-fun to go through `zfs diff` and the downloaded script to make a package out of it that can be distributed and automated.
There's a million things that the script can do stupidly, and practically every single one has at least one assumption that is bad.
One trick I've learned is to edit the script before running it and prefix anything that looks dangerous with "echo" (because of course none of them ever support --dry-run). Then I can at least see what they are doing, what they are downloading, etc.
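A rough version of that trick can even be automated with sed, prefixing the obviously destructive commands so they print instead of run. The command list below is only an example, and this is no substitute for actually reading the script:

```shell
# A fake installer to operate on (contents invented for the demo).
cat > install.sh <<'EOF'
echo "installing..."
rm -rf /opt/oldversion
curl -o /usr/local/bin/tool https://example.com/tool
EOF

# Prefix risky commands with echo; & in the replacement is the whole
# matched text, so "rm -rf ..." becomes "echo rm -rf ...".
sed -E 's/^[[:space:]]*(rm|mv|dd|curl|wget) /echo &/' install.sh > dryrun.sh
sh dryrun.sh    # prints what would have been done, touches nothing
```

This is crude (it misses commands hidden behind variables, functions, or pipelines), but it gives a quick picture of what a script intends to do.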
curl|sh is the bane of my existence. Shame on you if that's your only means of installing.
You should redirect your anger away from curl and pipe and toward using install scripts vs package managers in general, because that's where your beef really is.
Even so, windows style binary installers are at least frameworks designed for installing stuff (many with years of bug fixes under their belt), while the curl|sh style installers are just ad-hoc one-offs written in a language that's known for being pretty hostile to defensive programming.
So yes, any installer could make those errors, but in my experience only random shell installers seem to do that. Saying they are the same is a false equivalency in my eyes.
The only installers I can think of that aren't auditable are binary installers. If you meant something else, I'm not understanding.
Even though the author made it clear something funny was about to happen, I just could not bring myself to execute it.
When I showed it to the guy next to me at work, he said he had already installed it on another box and didn't even look.
The part that I find interesting is how many people who read the script are going to decode the blob and verify that the blob is non-malicious.
Either check a pgp signature if there is one, or skim the script before executing. Also covers off that pesky dropped connection problem.
curl | sh is too handy to ever die, but it's possible to be smart about it!
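For the pgp route, the end-to-end mechanics look roughly like this. The key generation below is a throwaway stand-in for the vendor's side so the example is self-contained; a user would normally just import the vendor's published public key and run the `--verify` step (GnuPG >= 2.1 assumed, names invented):

```shell
# "Vendor" side, simulated locally with a throwaway keyring and key.
export GNUPGHOME=$(mktemp -d)
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Example Vendor <vendor@example.com>' default default 0
printf 'echo verified install\n' > install.sh
gpg --batch --quiet --pinentry-mode loopback --passphrase '' \
    --detach-sign --armor install.sh        # writes install.sh.asc

# User side: verify the detached signature, run only on success.
gpg --batch --verify install.sh.asc install.sh 2>/dev/null && sh install.sh
```

If even one byte of `install.sh` changes after signing, the `--verify` step fails and the script never runs, which is exactly the guarantee TLS alone cannot give you.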
Now would be a good time for the author to expand their vocabulary.
./configure && make && sudo make install
"Add this repo and key"
That, and most Linux distributions are significantly harder to deal with, both technically and administratively, than the Apple or Google app stores.
Fix these problems and curl pipe bash will die.
But here we have the usual sort of nonproductive advice you get from security people: shaming with no thought to underlying causes and God forbid we try to improve anything.