“Curl Bash piping” wall of shame (gnu.moe)
116 points by type0 on Oct 22, 2016 | 129 comments



I had this conversation with a person on reddit just the other day. After I pointed out that there is no difference between piping a script directly into bash and downloading a package and installing it, they told me that they can audit a script/package they download, and that they have personally audited every bit of code on their Linux box: the kernel, GNOME, and Metasploit.

I got to thinking what it would take to do such a task, and mainly what I came up with was a whole damned lot of time. Assuming a person can read, understand and remember 5,000 lines of code per day (which honestly I think is way more than is realistic), it'd take about 8.5 years to audit the Linux kernel alone: roughly 15 million lines at the time, or about 3,000 days of reading. Add in all the other stuff and I figure something on the order of two decades. And in the end you'd be running two-decade-old software, or you'd have to start over.

At the end of the day, security comes from the personal and corporate economics of reputation, profit, and prison avoidance. Do your best to get your stuff from people who you judge to be trustworthy and rely on their own self interest to not be malicious and to do their best to protect their repositories. And rely on others to be good citizens and report the bad shit that happens to them so that things can be cleaned up.

Now I'm not saying to throw out security best practices, but people should be aware that the quality of their systems is built on trust in human nature and self interest.


What if the download gets interrupted due to a network error and bash runs something other than what was intended? E.g. the script has:

  rm -rf /tmp/something
But when the pipe gets interrupted it executes:

  rm -rf /


The body of the shell script should be wrapped in a function which is invoked at the end. If you do not trust them to get this incredibly basic thing right, how can you possibly trust any of the other code they have written?
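
A minimal sketch of that pattern, for reference:

  #!/bin/sh
  # Nothing runs until the last line: a connection dropped mid-download
  # leaves an unterminated function definition, which the shell rejects
  # as a syntax error instead of executing half an installer.
  main() {
      rm -rf /tmp/something
      # ...rest of the install steps...
  }
  main "$@"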


sandstorm.io said: «If the connection dies mid-line, bash will execute the incomplete line – however, this can be mitigated by wrapping the whole script in a bash function executed on the last line, as we do.» (Source: https://sandstorm.io/news/2015-09-24-is-curl-bash-insecure-p...)


The same way thousands of people trust Steam. You would expect a company of Valve's size to get their Linux uninstaller right, but people found out the hard way that it was removing their entire home directories.

People make mistakes. Mistakes contained to programs are one thing. Mistakes in shell scripts tend to have bigger consequences.


Soo, maybe a silly question... but are downloads from the internet always "linear"? Meaning, do they go uninterrupted from start to end, and if the connection drops, does it stop in the middle of the downloaded file? Doesn't it get "pieces" and then place them together?


It depends on your download program, but that's irrelevant because of the pipe. It has to be linearized before going through the pipe.


Packets can arrive out of order, yeah. TCP includes ordering information, so lost packets can be retransmitted and in the end everything'll be in proper order.

As for UDP, I'd tell ya a UDP joke, but get it you might not.


It does get pieces, which are sequential. At least, curl does, if you don't tell it otherwise.


The https://github.com/silentbicycle/curlbash utility can check the hash of the script and refuse to run it if there's a mismatch.


There are better strategies than blindly relying on third party reputation. A primary approach is package management with hash databases, to ensure you are receiving the same code as others. Another is the use of multiple implementations whose results are compared, recently discussed @ https://news.ycombinator.com/item?id=12666923


You're still relying on the source to not be underhanded and malicious. Getting the same, reproducible build as everyone else only means you've got a lot of company. It doesn't mean there isn't malicious code or intentional "bugs."


You are right, though with hashing you've got effective post-release protection against a compromised source injecting malicious data at the very least, which these days is probably more likely than an outright malicious source for most major packages. So it's not without value. The second approach, more of an architectural workaround than a true solution, can still be used to insulate from major issues.


You can do that with curl + bash too. RVM does this: https://rvm.io/rvm/security


> there is no difference between piping a script directly into bash and downloading a package and installing it

Well then I guess you haven't seen this, in which the server detects whether the script is being downloaded to disk or piped to a shell, and adjusts the payload accordingly:

https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...

i.e. when you download to disk and look at it, it looks benign, but when you pipe it to a shell, you get owned.

Just say no to curl|sh, kids.


People who pipe it to a shell aren't going to look at it anyway. That's kind of the point. If they were, they'd download it first. There is literally no security gained by downloading it and running it instead of just `curl|sh`. If it's malicious, you just installed it either way. The only thing downloading does in this scenario is let you look at the script later on when you realize it might have been malicious, but that's assuming you kept it (I would wager that almost everybody would throw away the script after installing it if they weren't planning on reading it before installation), and assuming you know enough to understand its contents (not everybody knows bash scripting, and you can write some surprisingly complex scripts).


No, the point is that installing software by either downloading and executing shell scripts or piping shell scripts directly to the shell is a wildly bad idea.

And the ancillary point is that piping directly to the shell is worse, because it allows an attacker to exploit any false sense of security a user might have from having looked at the downloaded version of the script (whether that's that user specifically or someone else who did so and vouches for it).

The correct thing to do is distribute via cryptographically signed archives or packages, or via signed git tags.

Again, just say no to curl|sh, and also just say no to downloading and executing unverified shell scripts.

Really, what are you arguing for here? Security is a process that involves layers, and you seem to be advocating for tossing all the layers aside because layer A doesn't protect against attack method B.


It's not a wildly bad idea. You are seriously out of touch with how most people use computers if you think that. Very few people actually read every script they execute, and if it's a binary exe, they can't even do that. Whether or not you choose to execute software you downloaded depends entirely on your trust in the vendor and in the delivery process. If you trust the vendor to provide you with something you're willing to execute, then it really doesn't matter how you go about executing it either. Downloading it first and then executing it is no safer than piping it to bash. All it does is give you the option of reading it first, but as I said above, almost nobody actually does that.

> The correct thing to do is distribute via cryptographically signed archives or packages, or via signed git tags.

And now you have the problem of trusting the key instead of trusting the delivery mechanism. This approach works well for package managers because the user's trust in the package manager implicitly trusts the keys the package manager uses to verify all the software it downloads, but it doesn't work all that well for software distributed directly to end users, because very few people even know how to verify the signature, let alone figure out whether or not they should trust the key it was signed with. After all, the fact that it's signed doesn't mean anything, what's important is whether it's signed with a key that is trusted as belonging to the vendor (and is assumed to not have been compromised).

> Security is a process that involves layers, and you seem to be advocating for tossing all the layers aside because layer A doesn't protect against attack method B.

"Read the script before executing it" is a really really crappy layer. That assumes that you understand shell scripting, that the script is simple enough to actually be reasonable to skim through it and understand what it's doing, and that the hypothetical attacker hasn't figured out how to disguise their modifications to make it not immediately obvious to someone skimming the code. As previously stated, almost nobody actually does this. And once you've determined that, no, you're not going to read through every line of the shell script that you're using to install the software, then `curl | sh` is perfectly fine as long as you trust the delivery mechanism (e.g. https). At this point the only thing you've lost is the ability to verify code signatures, but people don't generally sign their shel scripts anyway, and also as previously stated, even if they did not very many people understand how to actually validate it and how to verify that they have a trusted key. GPG is really not user-friendly at all.

I guess what it really comes down to is, use a delivery mechanism that handles all this for you if you can (such as a package manager). If you can't, then the only real question is "do I trust this vendor and delivery mechanism enough to run something I just downloaded on my machine?"


You're just repeating the same old fallacy that comes around every time this issue is discussed: "Most people don't or won't or can't verify the files they download, so there's no reason to try."

I'm sure bad guys and state-level actor-types love it when people like you encourage people to continue promoting Worst Practices for security. Anyone who can MitM can trivially exploit this to take control of any machine that installs software this way, and they can do so surreptitiously by exploiting the technique that sends different content depending on whether it's piped to a shell or saved to disk. And guess what: when it's piped to shell, the evidence is gone as soon as the shell process exits. It's like the perfect crime, and thanks to people like you who expend significant effort convincing people to keep doing it, it continues to be a problem.

Oh, and you apparently haven't even considered what happens when a connection is interrupted and the shell receives a corrupted or incomplete script. This alone is reason enough to never do it.

The correct thing to do is to discourage Worst Practices and encourage Best Practices. The correct thing to do is to never tell people to pipe to shell. The correct thing to do is to provide instructions on how to verify the integrity of downloaded software. To do anything else is grossly negligent and irresponsible.

> the only real question is "do I trust this vendor and delivery mechanism enough to run something I just downloaded on my machine?"

It has been demonstrated that the delivery mechanism in question is completely insecure. Even TLS does not ensure that a state-level actor who can forge certs is not MitM'ing a connection. You think NSA can't do this? They probably can. Can they break a 4k GPG key? Unlikely.

So the real question here is why you're trying so hard to encourage people to continue with this grossly irresponsible practice.


You're arguing that `curl | sh` is insecure as compared to something that literally nobody does, which is to read every single line of every script they ever execute. Once you discard the ridiculous notion that everybody should be reading every line of the script before running it, then there's nothing wrong with `curl | sh`. Yeah, you can't go back and read the script later, but if you wanted to do that, then just don't use `curl | sh`! If you're not planning on going back to read the script later, you wouldn't be saving it anyway after you ran it, so I genuinely don't understand how this argument is supposed to be convincing.

> Oh, and you apparently haven't even considered what happens when a connection is interrupted and the shell receives a corrupted or incomplete script.

I addressed this in my very first comment. You might want to try actually reading it before accusing me of not understanding an issue that is honestly really trivial to fix (wrap the whole script in a shell function and then execute the function; early termination means syntax error without anything being executed).

> The correct thing to do is to discourage Worst Practices and encourage Best Practices. The correct thing to is to never tell people to pipe to shell. The correct thing to do is to provide instructions on how to verify the integrity of downloaded software. To do anything else is grossly negligent and irresponsible.

Refusing to provide a convenient way to install your software doesn't make anyone safer, it just means you have fewer users. Anybody who is even remotely able to read shell scripts or to verify software integrity already knows how to read an install instruction like `curl | sh` and modify that to download the script first. And nobody who doesn't already know how to evaluate something like `curl | sh` and decide if they want to do that or download/verify the script first is going to bother using your product if you make the install instructions overly complicated.

> It has been demonstrated that the delivery mechanism in question is completely insecure. Even TLS does not ensure that a state-level actor who can forge certs is not MitM'ing a connection. You think NSA can't do this? They probably can. Can they break a 4k GPG key? Unlikely.

As stated in the original comment, even considering a state-level actor who can forge legitimate certificates is way out of scope for this discussion. Anybody who is at risk of being targeted by a state-level actor already has to take many many precautions that normal users don't take, and your website telling them to use `curl | sh` to install something is not going to be an attack vector, because either they understand how to download and verify the script first, or they've already been compromised by any number of easier ways than hoping the target decides to install the particular piece of software that the state-level actor has compromised.

> So the real question here is why you're trying so hard to encourage people to continue with this grossly irresponsible practice.

No, the real question is why are you behaving as though users are simultaneously capable of reading every line of every shell script, doing GPG verification of downloaded software, and figuring out how to even make sure the GPG keys are trustworthy, while at the same time so ignorant that they can't figure out how to go from `curl | sh` to "download, verify, and then run" without step-by-step instructions.

Avoiding `curl | sh` does not make anybody safer. All it does is make it harder for users to install your software, while not changing a single thing for users who are paranoid enough to want to verify the software before running it (since they can do that anyway). The only 2 things that you need to do to make `curl | sh` reasonable are always use https, and make your script resilient against unexpected EOF.

Edit: Another thing that hasn't even been considered this whole time: if the threat model here is that the website or delivery mechanism has been compromised, then the attacker can change the install instructions too. So even if you don't offer `curl | sh`, the attacker could modify your install instructions to use `curl | sh`, and the victim wouldn't know those aren't the "real" instructions. The alternative threat model is that the vendor itself is untrustworthy (e.g. they're serving up a malicious script without having been compromised), but there's nothing you can do to guard against that, because if you don't trust a vendor not to intentionally give you a malicious installer, then you shouldn't be running their software to begin with.


People paranoid enough to download it and read it are more likely to run what they just read. Why would you download it again?

  curl >x
  cat x
  . x

However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.


What? I feel like you're not thinking about this very carefully. Someone might look at the downloaded script and verify that it is harmless. Then they might run it. Then later they might go to another system they want to install it on, and they might think, "Well, I looked at it a few minutes ago, and it was fine, so now I'll just pipe it to the shell like the web site says to do." Or they might recommend the software to someone else, and they might say, "Hey by the way, I verified that the install script is safe, so you can pipe it straight to a shell like the site says."

> However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.

That's not correct. A MitM attack is a risk even if the publisher is trustworthy.

This is not complicated--in theory, at least: only distribute cryptographically signed software. Anything else leaves you vulnerable to many attack vectors. And anyone who instructs users to pipe to a shell is irresponsible and encouraging Worst Practices.


I feel like you missed the point of what I wrote. Yes, there are things people could do where if the site owner is malicious they could get screwed. That is generally what happens when you install software from untrustworthy sources.

> > However, the point is that if you can't trust the publisher it doesn't make any difference what the installation procedure is.

> That's not correct. A MitM attack is a risk even if the publisher is trustworthy.

You're contradicting something that I didn't write.


Sorry for the formatting:

  curl >x
  less x
  . x


A downloaded installer can be signed, and you can verify signatures or at least checksums out-of-band.


That just asserts that it's the same series of bytes that was originally provided. You still have to trust the provider.


Right, we know that security is a gradient, do you?

There is a quantitative difference between hiding your laptop under a coat on a bench and locking it in the car when you take a walk around the park.

Just as there is a difference between piping the internet to a root shell and installing a signed package that has a whole ecosystem and many users validating and looking at it.


In fairness, ssl from a trusted host is pretty similar.


You're trusting the provider of the program, not the provider of the server. Curl/bash from github? You're trusting Github to provide the correct content; there have been times before where people have been able to spoof other users. Signatures on the package (or the code) would prevent an issue like this.


You always have to trust the provider. If you don't authenticate the message you also have to trust every carrier.


At least you avoid javascript pastejacking in this manner.


I'm really not a fan of this author's style. The content seems angry just for the sake of it. I'm no expert on it, but it's pretty unclear whether the complaint is even valid, as there are talented, non-"retarded" developers that disagree:

https://sandstorm.io/news/2015-09-24-is-curl-bash-insecure-p...

And for an egregious example of the author's approach of being angry without validating anything:

https://gnu.moe/petpeeves.html <- an angry post complaining about English mistakes that is itself splattered with basic grammar errors


> an angry post complaining about English mistakes that is itself splattered with basic grammar errors

The following on the home page [0] is hilarious:

  I also maked a sitemap for those who want to dounloud
  errything.
[0] https://gnu.moe/


OMG, that looks like a parody of a 1990's web site, except that it's not a parody.


How's it not a parody? It seems like a parody of a pseudo-Japanese Free Software enthusiast, likely by a 4channer, that just happens to have some real content.

Also: <!DOCTYPE QTML>


And you think they are talented and non-retarded because?


Because they're intelligence agencies.


There are ways of communicating beyond those designed to gain fake internet points on a west coast venture capital firm's website. The internet isn't your safe space. Anger is a gift.


This page is completely wrong.

`curl | sh` is insecure if you're using an http url, although as has already been said here, it's not really any more insecure than "download this script and run it", unless you're expecting all of your users to actually read through the whole script first. But if you're using an https url then you should be ok since the page cannot be hijacked or modified en route (unless the attacker actually has a trusted certificate for the domain, which is an attack that's way out of scope of this discussion).

The biggest risk with this approach, which the page doesn't even mention, is the danger of the connection being terminated before the whole script is downloaded, as the shell will still evaluate what was sent. But this can be handled in the script by making sure that an early EOF means nothing is actually run (e.g. you can wrap the whole script in a bash function, and then the last line executes the function).

So if you're using an https url, and the script is written to be resilient against early termination, then this is a perfectly reasonable way to install things.


There are a few subtle differences you haven't mentioned:

1) If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.

2) You'd have a hard time proving the first point, or reviewing to see if a script acted poorly, if you didn't have a local copy, which piping to a shell like this normally prevents you from doing.


> If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.

If you can't trust the site to give you a safe installer then you can't trust the rest of the sources it gives you either--you would need to audit the entire package. Virtually nobody is going to do that. Singling out the installer as uniquely dangerous is security theater.


Windows and OSX also do this and present pointless, non-defeatable "omg but this is from the scary internet, are you SURE you want to run it????" prompts.


To be fair, those have been very effective in keeping my non-techie family members from accidentally installing malware because they'll at least check with me when presented with a scary system dialog. And it hasn't impacted my own use other than the occasional extra clicks. In a locked-down corporate setting, I can see how that could be maddening, though.


That's for the benefit of people who don't really realize that installing software from the internet is dangerous (i.e. most users). This discussion is about safe methods once you have decided to install something from the internet, which is different.


> 1) If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.

I'm having trouble finding a proper source, but I'm pretty sure someone came up with a way to do it even if curl spoofs the user agent. I think it worked by looking for the characteristic timing pattern of bash emptying the pipe in small bursts as it executes the individual script lines.
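
You can watch the property in question with a throwaway one-liner (a sketch):

  # bash executes each complete line as soon as it arrives on the pipe;
  # "first" prints immediately, five seconds before "second" has even been
  # sent -- that stop-and-go read pattern is what the server can time.
  { echo 'echo first'; sleep 5; echo 'echo second'; } | bash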


I think this is what you are looking for: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...


> If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.

curl | less

Or copy the request "as curl" from the network tab of any modern browser.


There was a post on HN a while ago where somebody wrote a server side tool which can detect curl -> bash piping and deliver different content if that happens.


Sure, but how is that any less secure than other installation alternatives? As the site owner, I could swap out a valid package for a troll package at any moment to any subset of users I wanted to if they're not validating the contents of it...


You can't change the contents on a user's local storage that they've verified.

Curlbash lets you change out offerings at will, and leaves no auditable source for the user.


That is true, but that doesn't protect you from the server being hacked and the script replaced by a malicious one.

If you execute unsigned code from a 3rd party without sandboxing, transport encryption doesn't help you much.


The website could be compromised though, and you'd have no reliable checksum to compare against (unlike a proper repository).

Similarly to what happened to e.g. https://transmissionbt.com/keydnap_qa/ or http://www.classicshell.net/forum/viewtopic.php?t=6441


This is not a risk inherent to `curl | sh`, it's a risk that you run any time you download software from anywhere that doesn't have a signature using a key that you've already previously verified as trustworthy.


Which is why most package managers for installing software use signatures. .msi, .deb, .app, .rpm... all provide methods for signing code, and can refuse to install code that is unsigned or has an invalid signature.
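
For example, RPM can report whether a downloaded package carries a good signature from a key you've imported (a sketch; the file name is a placeholder):

  # --checksig verifies the package digests and GPG signature against
  # the keys previously imported into the local rpm database
  rpm --checksig example-1.0-1.x86_64.rpm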


`curl | sh` isn't used as an alternative to package managers, it's used as an alternative to "download this script and run it".


`curl | sh` is not an alternative; it is exactly "download this script and run it". The problem is "to what end". Most of the time, the script being run is installing software. Be it Docker, Prometheus, homebrew, or otherwise, the entire purpose of most of these scripts is to install software.

So, why not just install using a package manager that takes some actual security precautions?

The answer is, most of the time, that software developers are either "moving too fast" (docker is guilty of this, but at least they are using the underlying package manager) or more often are just too lazy to provide packages through authenticated channels.


Not sure why you are being downvoted, you are completely correct.


Perfect answer. Wrapping the entire script in a function is a clever idea to protect against connection drops as well.


> if you're using an https url then you should be ok

You're trusting the server in this case, not the contents being provided by the server. There's no way to verify that the contents attributed to eridius on github are the same contents that eridius actually put up there.

Github in particular has been shown to have problems with spoofed users.

Let's go one step further. I have a `curl | sh` script for you to execute to install Docker. Just run:

    curl -fsSL https://get.docker.com/ | sh
Oops. That was actually get.rekcod.com, with an RTL unicode character embedded. Or a unicode C which doesn't map to the ASCII 'c'. Or I am able to MITM and strip the TLS. Or...

The vulnerabilities go on and on. Using only HTTPS, you have no way to prove that the code you got from get.docker.com is the code that the docker engineers intended for you to use.
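
One cheap defense against that class of trick is to dump the pasted text byte by byte before running it (a sketch):

  # RTL overrides and unicode look-alikes show up here as multi-byte
  # escape sequences instead of masquerading as innocent ASCII
  printf '%s' 'curl -fsSL https://get.docker.com/ | sh' | od -c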


"curl | bash" isn't actually very different security wise from "download this piece of software from our webpage and install it". Every time you install software these days you put a certain amount of trust on the vendor. (This may change in a theoretical future of reproducible builds and binary transparency logs.)

I'd argue that "curl http://.* | sh" is always bad, but so is every webpage offering software downloads that isn't https by default (and there are plenty of them).


How is this more of a security risk than having the user do wget https://example.com/whatever.deb and then dpkg -i whatever.deb? Or adding their apt repo & public keys? Sure, the project maintainer could include malicious code with a curl bash, but they could do so in either of the ways I mentioned as well.

And maybe it's not relevant, but I find it really off-putting how the author calls these developers idiots and retards constantly.


It's definitely off-putting. And I'm totally fine with curl | bash. Composer (PHP package manager) does that and I've used it hundreds of times (production is containerized, but I do this on my personal machines too).

BUT. There is a difference -- code signing. HTTPS ensures that the data isn't compromised en route; trust in the vendor is what makes you OK with letting them run code on your computer; but neither of those things protects against a compromised payload. I.e., if the vendor's server gets hacked and the script replaced, HTTPS doesn't help, and you get code that the vendor never intended for you to run. Code signing is what protects against this, cryptographically ensuring that the code you got is exactly the code the vendor wanted you to have, and is the last link in the chain that connects your machine to a trusted vendor.
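
Concretely, that might look like this (a sketch with placeholder URLs, assuming the vendor publishes a detached signature and you've already imported and verified their key):

  curl -fsSLO https://example.com/install.sh
  curl -fsSLO https://example.com/install.sh.asc
  # only run the script if the signature checks out
  gpg --verify install.sh.asc install.sh && sh install.sh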


But it also requires you to have some way to actually validate that the signature is valid. Apple provides this service to registered Apple developers, so you can code-sign your independently-distributed app or installer and the certificate you sign with is generated and signed by Apple. But in nearly all cases of code signing I've seen outside the Apple developer ecosystem, it's GPG signatures, which relies on you being able to independently verify that the signing key is valid and belongs to the vendor and was not compromised. Which is to say, not very many people who download stuff outside of a package system are actually going to validate that sort of thing.


Most people happily install whatever GPG key the website tells them to add to apt. If someone can compromise the installer, they can probably compromise the page too.

Secondary fun fact: when you add a key to apt, it's valid for any package from any repository. That repository could happily replace libc, or, perhaps more strangely, a package signed with the key of some popular repository could be uploaded to, say, the main Debian archive, and anyone with that key added would happily install it with no errors.

Plenty of edge cases in reality.


I often see the claim that curl|bash is no worse than what a package manager does. That is simply untrue, for the following reasons.

(1) HTTP (no S) MITM. At least the lazy devs admit this one.

(2) No key/signature checking at all. Sure, some semi-lazy devs will tell you to add their own repo (see the sketch at the end of this comment), and maybe you don't check the key for that repo yourself, but there are others who do and they'll raise an alarm you might hear. With curl|bash you don't even get this kind of herd immunity.

(3) No dependency checking.

(4) No adherence to standards. If you've ever tried to get a package included in e.g. Fedora or Debian, you know that there are people who will go over them with a fine-tooth comb and will reject them if they do bad things (or do them in a bad way).

(5) Most install scripts don't handle interrupted downloads well unless the author has taken special care (thank you for this one eridius). If you're piping directly into the shell you have no idea whether that's the case, and if the dev's lazy enough to be doing things this way in the first place the odds are poor.

Packages and package repos can be deployed and used in many ways. Some ways provide pretty strong safeguards and guarantees; other ways are weaker. Curl|bash is weaker than any of them. There's just no excuse.
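
For contrast, the add-a-repo flow from point 2 at least pins a signing key, even if you took that key from the same website (a sketch with placeholder names, using the apt tooling of the day):

  wget -qO- https://example.com/repo/archive-key.asc | sudo apt-key add -
  echo 'deb https://example.com/repo stable main' | sudo tee /etc/apt/sources.list.d/example.list
  sudo apt-get update && sudo apt-get install example-package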


Indeed, I think most of the harms of curl | bash fall under operational rather than security. To add to your list:

6) reproducibility: vendors could version the download links, but I've not seen that done, so you're always getting the latest version, which depending on what you're doing might not be what you want.

7) uninstall: maybe the vendor was nice enough to include an uninstall script? Sure, with deb/rpm a vendor can screw up the uninstall, but the framework is there for them to do it right; with curl | bash the vendor needs to understand that's something they need to do and implement a solution on their own.

8) am I really running bash? And the version you expect me to be? Hopefully your distro isn't evil, but I have more than once found bash to be just a symlink to something less feature-rich. The same goes for older versions; you'd be surprised which features are "new". (A defensive preamble for this is sketched below.)

I also find it a bit odd because the cranky old sysadmin who solves everything with unintelligible shell scripts is often made a mockery of in this world of saltstack, docker, cloud and other buzzwords - yet now we're just signing up for an even worse version of it?
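
A defensive preamble for point 8 might look like this (a sketch):

  # bail out early unless we're running under a real, reasonably new bash
  if [ -z "$BASH_VERSION" ]; then
      echo "this installer requires bash" >&2
      exit 1
  fi
  if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
      echo "bash 4 or newer required, found $BASH_VERSION" >&2
      exit 1
  fi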


(1) you can easily see that and just not use it. (2) is incorrect, see https://rvm.io -- it verifies the script. (3) you can do within the script, though you'd have to handle different package managers. (4) is true for packages as well: you can put whatever you'd like in an RPM or deb, including e.g. a post-install hook that runs sudo rm -rf --no-preserve-root /.

Making packages is hard if you want to support many distros. Maybe we should adopt the Go model and just ship binaries.


Underneath the curl|bash method, which is what is being discussed I believe, there is a link to a "more secure installation": https://rvm.io/rvm/security . These instructions have you download and verify a signed installer.


This is a perfect example of dogma over logic. I'm not sure if the author just prefers we all use the Mac app store. Three rules for any installation process:

1. Make sure you trust the vendor

2. Make sure you trust the delivery (TLS)

3. Think twice before you sudo


In the world of signed payloads, the delivery (2) includes all the distribution infrastructure. And trusting the vendor (1) means verifying the signature of the payload with their public keys. The assumption here is that the building and signing infrastructure is more secure than the distribution infrastructure.

This level of security isn't desirable for everyone in all situations, but it's not just dogma and it's not "the same thing".


While this falls under #2, I think the point about using/trusting http vs. https is worth mentioning on its own.


I don't disagree with the author that it is a huge security risk. But this is yet another example of a software enthusiast who doesn't get the value of convenience. In fact, "convenience" is probably the first attribute of great software.

So again, while I agree with their general sentiment, being "baffled by how [oh my zsh] became so popular" just because it instructs users to curl pipe shows that they don't get the core issue at play here.


If I had to provide some sage piece of wisdom that summarized the learnings of my life, it would be this:

"Never, ever underestimate the power of convenience."


Convenience is a double-edged sword.

Witness the results of autorun on Windows removable media and Javascript in PDFs.


Piping curl into a scripting environment is problematic for a lot of reasons. For starters, it has a worse user experience than "Download and run this program". I know we love inventing worse user experiences than what's been done for decades, but it's a trend that needs to reverse.

It's easier to mess up security (the early reset thing). This type of install script can't be run offline, because it needs to fetch dependencies (so you can't download it once and run it on multiple systems, and you can never run it on an airgapped system). It accustoms users to pasting commands into shells without knowing what they do, which is irresponsible even without JS clipboard shenanigans.

The author of this page went a lot overboard with the rhetoric, but the simple truth is that there's no good reason to suggest an install method like this. Even taking the exact same script, and asking the user to download and run it is a better plan, because it gives an improved user experience (though still nowhere near ideal).

And yes, I've seen Sandstorm's defense of this practice. I use and very much respect that project, but I couldn't disagree more with the choice of installation method.


> there's no good reason to suggest an install method like this

It is useful for bootstrapping a package manager. Haskell's Stack uses this, Rust uses this, Nix uses this, etc


Why, though? Why is it better than asking users to download and run the exact same script?


Because it's easier for their use case to copy and paste (1 step) than do however many steps you want them to do. If you want to run something offline or run it on multiple machines there is generally a different method available for you to do so. That's not the use case curl | sh is trying to solve in this instance.


Copy and paste is 3 steps:

  - Select text
  - Start up a shell
  - Paste the text into the shell
Download and run is 3 steps for a shell script:

  - Click download link
  - Start up a shell so you can access stdin/stdout/stderr
  - Run the script in the shell
For a GUI program, it's 2 steps:

  - Click download link
  - Run the file from your download manager once it's downloaded
Why is piping the file directly into my shell easier than any of these? It's considerably worse than the GUI option, and slightly worse than the shell option because running programs is something I do from shell all the time.


90% of the command line software I'm installing isn't on my local machine but on either a virtual machine or a remote server. In those cases, "download and run" means

    1. Copy the link
    2. Type wget, quotation marks, paste the link into the shell
    3. Run it
    4. Delete the installer script
Instead of the curl|sh method of

    1. Copy the command
    2. Paste into the shell
If you're installing GUI software then obviously it should have a GUI installer (or the macOS method of just unzipping and drag and dropping the software into the Applications folder), but I've never seen any GUI software installed this way - only command line stuff targeted at developers or system admins who are already going to have a shell open.

Of course, I'd prefer all my software come from the package manager - a separate installer should just be for software that's not quite ready for packaging yet.


> Download and run

I generally have a shell open, so it is easier for me to copy/paste into that session than open a file dialog, save a temporary file, (potentially) switch to a different directory, run the script, and delete it.

> GUI program

Creates a temporary file on my disk, needs a correct file association for .sh since the download manager won't save it with +x (on my machine it automatically opens in Emacs, which isn't what I want), and I don't even know how to make Linux open a terminal automatically for .sh. I know on Windows cmd.exe would close the window immediately after running the script, so you can't see the output. All in all, more complicated than just copying and pasting into an active shell session.


The shell is an invariant for script-based installers (i.e. required no matter what). My workflow is also apparently somewhat different from yours, because I tend to open a new shell for each distinct task.

Still, if you're the type of person who generally has a shell open, you can probably figure out how to pipe something from a URL into an interpreter on your own. Which is the better thing to teach to users that can't figure that out?

I haven't manually deleted a temporary file in years, I just have a cron job that clears out my downloads directory. Though this may be a case where my weird workflow changes things from typical, from what I've seen most people just ignore the temporary files.

Most of your objections to the GUI program workflow don't actually apply to GUI programs. I admit that I forgot about the executable bit. I use a mix of Linux and Windows, and basically zero Linux packages have GUI installers -- most prefer .deb/.rpm files, which would be ideal. So I haven't actually run into that before.

It seems like it's only a couple of clicks to fix the permissions in the Nautilus GUI through Firefox, though. Really I would just run the installer from a shell anyway, but that's part of the better user experience: I get to choose that, it's not forced on me.


> This type of install script can't be run offline, because it needs to fetch dependencies (so you can't download it once and run it on multiple systems, and you can never run it on an airgapped system)

Well, you could if the script was written well enough. Back in the usenet days shar archives were a thing. Basically a shell script that had a large uuencoded payload (which itself was probably a tar file), so that you could just run it and get binaries out.

Of course no one actually does that any more, since encoding binary data as printable text bloats the archive considerably. But there's no technical reason why it couldn't be done.
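
A minimal sketch of the idea, with base64 standing in for uuencode:

  #!/bin/sh
  # self-extracting archive: everything below the __PAYLOAD__ marker
  # is a base64-encoded tarball appended to this very script
  start=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
  tail -n +"$start" "$0" | base64 -d | tar xz
  exit 0
  __PAYLOAD__
  ...base64-encoded tar.gz data...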


"And most of all, the people that are part of the project are also likely to be malicious because trying to infect someone is the only valid reason to recommend this method of installation."

I've seen a lot of random accusations about us in the Rust project, but this is the first time I've seen anyone accuse us of deliberately trying to spread malware. I guess I'm learning how it feels to be a politician :)


Amusing list. I would like to share it, but the unnecessary use of "retard" is a bit upsetting when there are many other appropriate terms (fools, dangerous idiots, whatever).


Railing against `curl https://foo/ | sh`? I guess security theater's not just for the TSA.

(Do be aware of pastejacking though, but this is not nearly an important enough issue for a wall of shame)

EDIT: We can all learn something from the readability of that page though. One zoom and it's better than 90% of the websites I've seen. Text -- it works.


> We can all learn something from the readability of that page though.

Monospaced font.

Can't click links. Pain to select them on the phone.

No way to skim through since lines are the same and you can't quickly detect individual items in the list.

Doesn't work too well with Safari's Reading View — whole thing is a single paragraph.

It's in markdown already; why not spend 3 minutes and set up rendering to at least plain p, h and a tags?


Monospaced typefaces have never been good for readability.


I beg to differ. There is so much botched typography on the internet and widespread use of Arial-like fonts that monospaced, while not ideal, is far from the worst of choices.


It boils down to: is there any type of attack that can compromise `curl | sh` without also compromising a download from the same site? I can only think of one - a plain download allows for an md5 check.

Also the language is too condescending to be taken seriously. Not to say that the tone has anything to do with the argument. And this probably matters less in the software world. But if you want yourself to be taken seriously, you either need to be Linus Torvalds or learn to disagree respectfully.


Security is not pass/fail. What threats are enabled by this method? Are there mitigations? Are there alternatives?

It would be a mistake to group all curl pipes. Does it require elevated privileges? Is it served over TLS? Does it do any signature verification? What the heck does the script actually do?

Different levels of security are required depending on trust. I trust Debian's repository, so I feel less need to audit packages. But a random startup promising ponies? I'd like to at least skim what I can, then throw it in a jail/vm/container to test.

How is curl piping beneficial compared to grabbing the script, giving it a quick read, then executing it? Convenience is all I can come up with, and convenience often seems to be at odds with security.


Even though I know it's fine, it still feels weird to me when a site asks me to pipe their script directly into a shell. Since I always have the option of breaking it into two steps, though, it doesn't bother me, and I don't think a "wall of shame" is called for.

I do usually glance over a script/makefile/whatever before I run it--not so much to find security issues but to see if there's anything I'd like to tweak about it. For example, I always install homebrew in a nonstandard location, and that means changing a couple things about the installer script first.


At least you have the option to download the script and glance at it before running it if you're worried. Compare that to "Save and Run" on an MSI in Windows, which puts the same level of trust in the product you are downloading, except it's much more opaque about what is actually happening.

RVM and Homebrew are two big projects that also do this method of installing. They are a breeze to setup. There's something to be said about just getting a job done and going home for the day.


This might be attacking the wrong issue. How about making it easier to create packages for major distributions? (this includes Windows and macOS)


You mean like `brew` for macOS, or the many other options like it?

It's not the wrong issue, because I believe the main goal here is to raise general awareness that piping to bash with untrusted data is a bad idea and we as developers should frown on it not promote it.


You do realize brew itself has an installation method like the post describes, right? :)


Yes, but what alternative are you suggesting for developers who want to distribute software to multiple platforms?

IT security can't just be the ones that come in and say "you've got issues here, here and here" without also providing solutions.


Bash scripts aren't magically multi-platform either. If you want something to work right on multiple platforms, you have to understand those platforms' proper distribution and packaging methods. Sure it's a pain, but so is writing the mother of all bash scripts to handle every condition on every platform. You won't get it right, just like all the people who tried to do the exact same thing back in the 80s didn't get it right. That approach failed so consistently and so badly that people created package managers to bring some sanity to the situation. They're still the best solution available, even if they're not perfect.

BTW, it's not just about security. It's also about correctness, consistency, repeatability, reversibility, auditability, etc. As a developer, I still build and install actual packages on my test systems not because there's a security issue but so I can be sure that an uninstall/reinstall will work exactly as they should and not pollute my system with untracked changes. I don't know what kind of developer wants to risk checking in changes that don't match what they tested, but not the kind I want to work with.


Aside from the fact that this is a plaintext markdown file, the tone of this page is an immediate turn off.


  And most of all, the people that are part of the project
  are also likely to be malicious because trying to infect
  someone is the only valid reason to recommend this method of 
  installation.
This lacks any empathy.

One valid reason: large projects (e.g. rvm) have proven over time to not be malicious. It is far easier for a user to copy and paste one line than any other install method. This lowers the barrier to entry and reduces support requests for the maintainer.

Having an opinion is fine. Disagreeing is fine. Pretending like your answer is the only reasonable one: not likely to win over many people.


This is among the many use cases we're hoping to solve with our secure shell scripting language Shill (http://shill.seas.harvard.edu/).

We're currently working on a commercial version of Shill targeting Linux. If Shill sounds like a product your company wishes it could find, we'd love to hear from you. Shoot me an email at sdmoore@fas.harvard.edu.


Unless you're installing from a package manager with signed packages everything is going to suck by comparison.

What exactly is the alternative the author would suggest? Git checkout? Couldn't you paste-jack that too? If the instructions come from a webpage, aren't they all basically paste-jackable? What is the specific issue with using this method?


I've seen so many arguments over this, I wrote a short article on it.

In short, I'd prefer apt-get over curl-bash any day of the week, but most Windows users install loads of software (sometimes signed, never checked) since their OS offers nothing better, and on Linux you never hear this debate when someone offers a package for download. People worry about what you might be piping to bash because they can see it and notice it could just as easily have been anything; a deb package (or equivalent) is much more opaque.

More details: http://lucb1e.com/!126


How is this different than say, brew install <whatever>?

I'm trusting someone to serve me a piece of code to run either way. Brew, or the people that provide the https cert for the given endpoint, right?


When a third party is involved there is a degree of accountability. A proper package repository would require everything to be logged and signed.


Well, there's also accountability if I'm installing, say, docker.

Surely they want to make it easy to install their thing without a hitch, and that's why they provide https://get.docker.com for me to pipe into bash.

My point is it all boils down in who you trust. If you are downloading something unknown, sure, it's harder to go wrong with a package manager (if the package is available), but you're still trusting someone not to attack you or to leak the private keys.

Crucifying curling into bash has nothing to do with how safe you are. It's almost like saying "Never run anything you download from the internet, it's dangerous!"


Many of the comments here are utterly missing the point. Likely, you've been spoiled by the cancerous javascript ecosystem and so don't realize what exactly you're giving up.

1. An https URL is not secure unless your trust model involves trusting the server completely. Unless you're running this script in an isolated throwaway sandbox, this is a terrible idea.

2. Obviously auditing can't rule out well-hidden maliciousness or clever bugs, as we're up against the halting problem. But it is quite easy to do a quick sanity check on a downloaded script to see if it is going to do anything wacky or boneheaded.

3. Explicit packages are expected to have identities like versions and hashes. This allows us to talk about how something has been modified, whether a specific download instance has been tampered with, etc. A rando script has these things from the developer's perspective, but not from the users'.

4. These "easy install" scripts usually want to puke files into random places on your system. /usr/local, /opt, /home/crap, /who/knows, etc. This is a great way to create an unmanageable system. Standard practice for software outside the package manager is to let the user choose where things should be installed, eg ./configure --prefix=xxx. Lazy people choose /usr/local and can always blow that part of their system away, the more astute use /usr/local/pkg-1.0.0 (for stow), and some even have completely arbitrary paths (I personally use something like /x/local-x64/pkg-1.0.0 which gets synced across machines with unison).

5. Such scripts usually continue their reign of brokenness by instituting some sort of auto-updating. Now the user has little idea what version they were running (say they want to upgrade to a source install to investigate some bug), is less able to not change versions at an inconvenient time, and is further discouraged from bringing the package under their management.

The vast majority of these installers provide files that would be fine as plain archives, but the distributors think they're being clever while forgetting about users' general requirements. I do understand that in this age of concentrated Metcalfe's law, it helps to appeal to the lazy people who don't really care if their system becomes an insecure unmaintainable mess. But really, you owe it to your users to provide a proper downloadable package that is installed in a manageable way. (And the same goes for proper versioned source releases, as opposed to telling users to grab a random git checkout.)
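
The per-package prefix from point 4, concretely (a sketch assuming GNU Stow's default layout under /usr/local/stow):

  ./configure --prefix=/usr/local/stow/pkg-1.0.0
  make && make install
  cd /usr/local/stow && stow pkg-1.0.0      # symlink the package into /usr/local
  cd /usr/local/stow && stow -D pkg-1.0.0   # cleanly remove it again later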


You seem to be arguing install scripts vs package managers, and we are all discussing something else.


Sure, but the two generally go together.

The last time I saw a downloadable shar-esque installer was quite some time ago, and I've never seen software which installs a proper distribution's package using curl | sh.

Plain install scripts are still around, but are generally fixed when a package grows up. A large problem with curl | sh (especially which runs further curls) is that it makes a crappy approach masquerade as a polished solution.


I do not even dislike curl|sh for security reasons. Package managers go to great lengths to provide a reproducible runtime within them. When was the last time you saw 'curl | env -i sh -C' as an instruction?

If the script fails halfway? Good luck trying to undo whatever it did if you do not have access to `zfs rollback` or similar.

It is also less-than-fun to go through `zfs diff` and the downloaded script to make a package out of it that can be distributed and automated.


Agreed, I think this is the biggest issue. Also you have to trust some random installer to put the binary...somewhere? Is it going to overwrite junk in /usr/local? Does it assume ~/.something is available? Does it require root and then try to stuff code into /etc? Does it work if the install directory has a space in it? What if there's a symlink somewhere in the path?

There's a million things that the script can do stupidly, and practically every single one has at least one assumption that is bad.

One trick I've learned is to edit the script before running it and prefix anything that looks dangerous with "echo" (because of course none of them ever support --dry-run). Then I can at least see what they are doing, what they are downloading, etc.
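
Mechanically, that's something like this (a rough sketch; the pattern list is whatever looks scary in the particular script):

  curl -fsSL https://example.com/install.sh -o install.sh
  # crude dry run: destructive commands now print instead of executing
  sed -E 's/^( *)(rm|mv|dd|mkfs) /\1echo \2 /' install.sh > dryrun.sh
  sh dryrun.sh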

curl|sh is the bane of my existence. Shame on you if that's your only means of installing.


None of this is an inherent problem with curl and piping however; any installer you could download and run has the same list of issues, and many of those aren't even auditable.

You should redirect your anger away from curl and pipe and toward using install scripts vs package managers in general, because that's where your beef really is.


I don't know about you but I don't end up running binary installers too much. Certainly not on linux.

Even so, windows style binary installers are at least frameworks designed for installing stuff (many with years of bug fixes under their belt), while the curl|sh style installers are just ad-hoc one-offs written in a language that's known for being pretty hostile to defensive programming.

So yes, any installer could make those errors, but in my experience only random shell installers seem to do that. Saying they are the same is a false equivalency in my eyes.


Who said anything about only binary installers? I specifically mentioned install scripts as being different from the subject of the discussion, and you seem to be conflating the two.


You said, "many of those aren't even auditable."

The only installers I can think of that aren't auditable are binary installers. If you meant something else, I'm not understanding.


I try to read scripts before I download and execute. The only time I thought WTF was this:

https://bootstrap.pypa.io/get-pip.py

Even though the author made it clear something funny was about to happen - I just could not execute it.

When I showed it to the guy next to me (at work) - he said he already installed it on another box and didn't even look.


Where is the WTF in that script? Just because the author chose a base-85 encoding over base64? (That's a little weird, as I suspect the savings are so minor as to not be worth it …)


It could be base3, base64, base85... that does not matter.

The part that I find interesting is how many people who read the script are going to decode the blob and verify that the blob is non-malicious.


I like this approach:

https://github.com/ellotheth/pipethis

Either check a pgp signature if there is one, or skim the script before executing. Also covers off that pesky dropped connection problem.

curl | sh is too handy to ever die, but it's possible to be smart about it!

Edit: typos


> because they are either run by retards or intelligence agencies.

Now would be a good time for the author to expand their vocabulary.


curl|sh isn't the least bit less secure than downloading a software distribution and just running it.


I'd be tempted to say that `curl | bash` is as insecure as

  ./configure && make && sudo make install
The logic in the article effectively implies that one shouldn't be installing any software without auditing every single line of code.


AppFS ( http://appfs.rkeene.org/ ) solves almost all of the problems with "curl | bash" and indeed package managers in general. I'm the author. Feedback desired.


(Archived version if it's down: http://archive.is/Y9z0w)


Doesn't that popular software that tracks you tell you to install by piping the output of curl? I think it's called Brew Ware.


"Download this executable"

"Add this repo and key"

...so on.


This stupid hack exists because Linux is so fragmented it's impossible to distribute software easily in any other way.

That and most Linux distributions are significantly harder to deal with both technically and administratively than Apple or Google app stores.

Fix these problems and curl pipe bash will die.

But here we have the usual sort of nonproductive advice you get from security people: shaming with no thought to underlying causes and God forbid we try to improve anything.



https://weakdh.org/logjam.html even uses it as one of the examples. (the site has since been fixed)



