Hacker News
Curl | sh (curlpipesh.tumblr.com)
33 points by enedil on May 8, 2016 | 26 comments

How's this any more dangerous than "download this .exe" or "add this apt repository" especially over https?

The canonical example is this:

Imagine an installer script that runs a build script and then cleans up after itself. Somewhere in the depths of it there is the command rm /home/gonzales/.build/foo, but the network connection cuts out just after /home/gonzales, so the last thing the interpreter sees is rm /home/gonzales, and well ... there you go.
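
The hazard can be simulated harmlessly: bash executes each complete command as the stream arrives, so a feed that stops mid-path runs the shortened prefix. Here the dangerous rm is replaced with an echo, and the missing trailing text stands in for the dropped connection:

```shell
# A stream that ends mid-line ("/.build/foo" never arrives) still gets
# executed by bash as the truncated command.
printf 'echo rm /home/gonzales' | bash
```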

Stranger things have happened.

It's a valid concern. The solution is to wrap the whole script in a bash function executed on the last line.

Which, it should be noted, is also typical. I'm sure you can find many examples that don't if you go looking for them, but most I've seen do this, especially in recent years.
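
The wrapper pattern looks roughly like this (a minimal sketch; the names and build steps are illustrative, not from any real installer). Because nothing executes until the final line calls the function, a download that cuts out mid-script leaves an unterminated function definition, which is a syntax error, rather than a half-executed cleanup:

```shell
#!/bin/sh
# Everything lives inside main(); the shell only runs it on the last
# line, so a truncated script fails to parse instead of partly running.
main() {
    workdir=$(mktemp -d)
    echo "building in $workdir"
    # ... download/build/install steps would go here ...
    rm -rf "$workdir"   # only runs if the whole script arrived intact
}
main "$@"
```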

You could prepare your script in a way that it only executes once everything is downloaded (build up a big string, then eval it, or something). If you did that and used https, is there anything else to worry about? I mean, I do prefer to view the script before executing it, but theoretically.
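
A hedged sketch of that "buffer first, then execute" idea (function and variable names are made up for illustration). One caveat worth adding: a plain eval of the buffered text would still run a truncated prefix, so checking the payload against a hash published out of band is what actually guards against a partial download:

```shell
#!/bin/sh
# Buffer the entire payload, verify it against an out-of-band hash, and
# only eval it on a match; a truncated or tampered payload is refused.
verify_and_eval() {
    payload=$1 expected=$2
    actual=$(printf '%s' "$payload" | sha256sum | cut -d ' ' -f 1)
    if [ "$actual" = "$expected" ]; then
        eval "$payload"
    else
        echo 'checksum mismatch; refusing to run' >&2
        return 1
    fi
}
```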

Apt repositories are typically PGP signed. Even in the case where you just obliviously add the repository's PGP key, you are protected against the file server being compromised. Also, if you find malicious things in a signed package, that takes away some deniability from the repository's principal.

Then there's https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...

> Even in the case where you just obliviously add the repository's PGP key, you are protected against the file server being compromised.

Unless you got the PGP key from the same file server, which is typical.

Good point, it's depressingly common.

It's not hard to DTRT though. Eg Docker install instructions have

  4. Add the new GPG key.

  $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

If the connection breaks mid-download so that "rm -rf /opt/whatever" becomes "rm -rf /opt", then it is more dangerous. This can be mitigated by wrapping everything in a function, but not everyone does that.

If you "add this apt repository" then the programme will be installed via apt & dpkg. Hence you know:

* It will not overwrite any files that are owned by another package.

* If/when you update it later and the package's config files have changed, apt can tell you whether you've changed the config files on your machine, offer you a diff, and ask what to do.

* It can be easily uninstalled later

* You can download the dpkg file(s) and store them locally, and give them to a friend later/install on your own server later, knowing that you are installing the same files as you installed on your desktop/testing system.

* If there is a network problem while downloading the deb, so what? Nothing bad happens. The software won't be installed, but there's no risk of your home directory being deleted.

* Installation is more atomic. Either the package will be installed properly or not at all.
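
A few of those guarantees map onto concrete commands (the package name foo is illustrative):

```shell
dpkg -S /usr/bin/foo            # which package owns this file?
apt-get download foo            # save the .deb locally for reuse elsewhere
sudo dpkg -i foo_1.0_amd64.deb  # install; fails rather than half-applying
sudo apt-get remove foo         # clean uninstall later
```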

You don't get most of those benefits against a poor or malign postinst script.

It really isn't. The only time you'd have an advantage is with a package from your distro, which you might trust more than the maintainer. For Windows users who download and run often unsigned binaries all the time, it's really no better.

Exe files and repository packages are usually signed by the developers, so in order to execute malicious code on your system, an attacker must obtain the developers' private keys. That's usually harder than attacking a website.

If an executable is not signed, I don't see any difference, assuming you are not going to inspect the script before you run it. You either trust a website or you don't.

I have checked md5sums published on the website or on Twitter or something before, but never a signature. From what I've heard, code signing has to be done with a key obtained from the OS vendor anyway, which in Linux's case (where you'd normally run curl|bash) is not even possible to begin with.

So I doubt signature checking is ever done in practice, at least on systems where curl|bash is an alternative.

Signature checking is widely used. In every Linux distribution I'm aware of, package managers use signatures to check downloaded packages (and maintainers use their keys to sign packages). GPG is another widely used standard for signing open source software.

Of course it's signed in your repositories, but if it's in the repositories you wouldn't be doing curl|bash anyway, you'd grab it from the repositories. The point is to distribute software that is not in there, which is not even packaged as a .deb/.rpm (or whatever format your distro uses).

You could still sign the .sh installer with GPG, but then you'd have to get the public key from somewhere (and get people to care about verifying the sig first). If you're communicating a key somewhere, you might as well just publish the hash sum and communicate that.
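
For what it's worth, verifying a signed installer would look roughly like this (the file names and URL are hypothetical, and the signer's key still has to reach the user over a trustworthy channel first):

```shell
curl -fsSLO https://example.com/installer.sh
curl -fsSLO https://example.com/installer.sh.asc
gpg --verify installer.sh.asc installer.sh && sh installer.sh
```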

Even if you do inspect it, you won't see everything.

>curl -L https://bit.ly/janus-bootstrap | bash

I don't know about "curl | bash", but using a URL shortener there doesn't seem like a good idea.
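
At a minimum you can resolve the shortener and inspect the final URL before piping anything; curl's --write-out variable %{url_effective} reports where the redirects actually ended up:

```shell
# HEAD requests only (-I), follow redirects (-L), discard headers, and
# print the final URL the shortener resolves to.
curl -sIL -o /dev/null -w '%{url_effective}\n' https://bit.ly/janus-bootstrap
```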

Because this isn't just "download this .exe", it's "download this .exe and run it".

Do you ever download an .exe which you will never run?

Thanks for posting a balanced perspective. I find the passive-aggressive finger-pointing at the OP blog really distasteful. It implies (but never says directly) 'look at those lamers, lol; and if you need an explanation of why this is bad, you are a lamer too'.

I think the answer is 'yes', but so is everything else, except maybe downloading the source, make, &c.?

For most users, it doesn't make a difference whether they install a shell script by first downloading it and then executing it. While checking PGP signatures etc. is nice, only a very small portion of users will actually do that or check the source of the script. I believe those users are smart enough to first download it manually and then check it. For the rest of the users, this is really convenient and fast: no need to store a junk script and copy/paste two commands.

Sure, a simple "apt-get|yum install PACKAGE" would have been better, but that adds more concerns:

1. If you let the package managers of public repos (Ubuntu, Fedora) package your script, there are going to be a lot of outdated versions.

2. If you want to host the repo yourself, you now have to package your software at least twice, for Fedora and Debian. The user also needs to add another repo, which makes `update` a lot slower (especially with a lot of repos, because it needs to connect to that many URLs).

IMO curl | sh installers are really cool! I also wrote one that installs my home environment.

