
Curl to shell isn’t so bad (2019) - maple3142
https://www.arp242.net/curl-to-sh.html
======
40four
This gained traction a few months ago. Previous discussion:

[https://news.ycombinator.com/item?id=21490151](https://news.ycombinator.com/item?id=21490151)

~~~
Carpetsmoker
Yeah, not sure this needs to be discussed again since it was just 80 days ago
(and I say that as the story's author).

------
OskarS
I mostly agree that curl to shell isn't terrible, but he misses the big
negative security implication: the shell scripts are unsigned.

It is almost always far, far easier to hack into somebody's website and
replace "good shell script" with "evil shell script" (or hack into
someone's GitHub account and make a commit evilifying a shell script) than it
is to both do that _and_ get hold of the developer's private key to sign the
script. The signing key is usually far more protected and less public than the
web server is.

So no, I'm probably not going to review the entire install script either way,
but I'm FAR more comfortable running an install script I found online that has
been signed by a developer I trust rather than one that hasn't. Package
managers and OS installers handle this for you and freak out if there's any
issues.

In other words, the question with package managers/installers is "Do I trust
this developer?". The question with curl-to-sh is "Do I trust this developer
AND do I trust that the website hasn't been compromised".
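
The sign-then-verify flow OskarS describes can be sketched in a few lines,
assuming `gpg` is installed; the key, the file names, and the script body are
all made up here, with one throwaway key playing both vendor and user:

```shell
export GNUPGHOME="$(mktemp -d)"              # throwaway keyring for the demo

# "Vendor" side: generate a key and produce a detached signature.
gpg --batch --passphrase '' --quick-generate-key 'Dev <dev@example.com>' default default never
echo 'echo installed' > install.sh           # stand-in install script
gpg --batch --passphrase '' --pinentry-mode loopback \
    --detach-sign --armor install.sh         # writes install.sh.asc

# "User" side: run the script only if the signature checks out.
gpg --verify install.sh.asc install.sh && sh install.sh
```

The point of the sketch: an attacker who replaces install.sh on the web
server also has to produce a matching install.sh.asc, which requires the
private key that never touches the web server.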

~~~
falcolas
> "Do I trust this developer AND do I trust that the website hasn't been
> compromised".

You also have to verify that the script is coming over TLS. Encryption isn't
mandatory, and there are no warnings if it's missing. Sure, it's an easy check
(barring redirects, which also occur), but how many people who are
copy/pasting curl|sh commands will think to do even that basic validation?
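
For what it's worth, curl itself can be told to refuse anything that isn't
HTTPS, including on redirect hops. A sketch, with example.com as a
placeholder:

```shell
# The hardened fetch pattern (example.com is a placeholder):
#   curl --proto '=https' --proto-redir '=https' --tlsv1.2 -fsSL \
#       https://example.com/install.sh -o install.sh
# --proto '=https' restricts the initial URL, --proto-redir '=https' restricts
# any redirect hops, and -f makes HTTP errors produce a nonzero exit code.

# The protocol check happens before any request is made, so a plain-http
# URL is rejected immediately:
curl --proto '=https' http://example.com/install.sh -o /dev/null 2>/dev/null \
    && echo fetched || echo refused
```

None of this is the default, though, which is falcolas's point: the person
copy/pasting the one-liner gets whatever flags the vendor chose.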

~~~
acdha
Your mistake is assuming that people only do that for curl|sh — in my
experience, the same percentage (high) will manually download and run a
program / shell script, add an APT/YUM repository, etc. Once someone decides
they need to run something the details aren’t very significant.

The problem isn’t curl but the challenge of trusting code. The default needs
to be either Apple-style sandboxing (note how you get prompted if a new
program tries to access your pictures, contacts, etc.) or something like
Docker where the default is private with exceptions for the resources you
enable.

~~~
falcolas
Package managers at least provide a path to trust - the PGP web of trust.
curl|sh does not provide any path to (cryptographically secured) trust.

~~~
acdha
That's only for people using the global trust mechanism in a mainstream repo.
If you think about it from a security perspective, how much difference does
installing some random project's GPG signing key make?

------
hyperpape
My reaction is that curl is just an aggravating factor.

The real problem is shell, or really any custom scripts in any language.
Acquiring or building software should happen via well-understood reusable
tools. Any time building executes bespoke code, you're reducing the
maintainability of a system.

Distro package managers are one good solution, but a properly developed
language package manager would also ideally not rely on custom scripts.

A contributing factor to the event-stream issue was that it was considered
acceptable to distribute code minimized by _who_knows_what, rather than by any
reproducible process.

~~~
oblio
> Distro package managers are one good solution

Distro package managers allow execution of custom scripts at every point of
the installation. If you're getting packages from outside the distro repos,
you're at risk. Sometimes you're at risk even when you get the packages from
the distro repos :-)

~~~
hyperpape
I thought about that, but like falcolas said, they are at least signed and
have some attention paid to them.

At one level, any build/install tool is executing extremely complicated code
on your machine, and could do lots of weird things. The advantage is that you
can learn what it does, and have trust in its authors. So well behaved custom
scripts created by a trusted party that has articulated standards about how
things should work are much better than using shell scripts from an arbitrary
project.

Still, I'd certainly prefer it if the distro didn't have custom scripts
either. It would simplify the task of knowing "this is how the distro does
things, and there are no exceptions".

~~~
oblio
You missed my main point: not all packages that are installed with the distro
package managers come from the distro. There are plenty of applications that
ship their software as .deb/.rpm/...

From a security point of view, those are the same as curl | bash, basically.

~~~
hyperpape
Ah yes. I did gloss over that. I certainly feel a little sketched out when I
see a project that says "add this package repo".

------
h2odragon
Download and look before running gives you an opportunity to spot something
that's obviously, totally wrong... Not that one always _uses_ that chance but
at least you had it.
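
That habit is a couple of lines; the download here is simulated with a local
file (a real fetch would be `curl -fsSL "$url" -o install.sh`):

```shell
# Instead of curl | sh, fetch to a file, look, then run.
echo 'echo installing' > install.sh   # stand-in for the downloaded script
${PAGER:-less} install.sh             # actually look at it first
sh install.sh                         # only then execute it
```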

Installer systems that chain these "download and run" scripts (Node-RED is the
most recent I've used) scare me despite all the valid points TFA makes. "Not
invented here / history must be reinvented" seems to be a philosophical point
for some of these systems.

~~~
UI_at_80x24
I agree 100%. All this does is reinforce the errors of the past and train end
users to be dumb. This is NO DIFFERENT than Windows users double-clicking on
every email attachment.

Every time somebody suggests this, the author needs to get slapped with a
trout. Copy/pasting code to be entered is BAD ENOUGH; nobody learns anything
that way AND it is not secure!

In a race to the bottom everyone loses.

~~~
t0astbread
There is a difference between downloading, say, the Docker install script from
docker.com and opening a random email attachment and that is that I trust
docker.com.

If I don't trust docker.com because I think it might deliver malware in the
install script, or a poorly written install script, then I shouldn't use
Docker, because why would anything different apply to their main software?

If I don't trust docker.com because I fear it might be hijacked, then I
shouldn't use Docker, because who guarantees that their source code or any
derived binaries aren't also hijacked?

~~~
UI_at_80x24
That trust comes from education and experience. Just like an experienced
computer user could launch an email attachment safely if they know what they
are doing.

My point is telling new users to blindly copy/paste/autorun ALL scripts just
to get $thing to work only hurts US (FOSS proponents) and the end user.

------
mbreese
From the article:

===

 _Partial content: the shell may execute half the script due to a network
error.

Easily fixable by running in a function:

    do_work() {
       :
    }

    do_work

All of the cited examples already do this._

===

This is the one argument that I think the author gets wrong. Just because the
projects he cited do this correctly doesn't mean that all `curl install.sh |
sh` scripts will do this correctly. And this is the main fear that I have. I'm
not all that concerned about people publishing malicious installation shell
scripts. These things are normally public, so it would be easy for them to get
caught. I am concerned about the scripts being either (a) altered in transit
or (b) corrupted in transit (which is really the same thing).
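
A small local demo of why the quoted wrapper matters, with hypothetical file
names: sh executes a script command by command, so a truncated copy without
the wrapper runs whatever complete lines happened to arrive.

```shell
# Without the wrapper, truncation executes the complete leading commands:
printf 'echo ran anyway\nrm -rf "$HOME/fake-demo/.some' > truncated-bad.sh
sh truncated-bad.sh || true    # prints "ran anyway", then dies on the
                               # unterminated quote -- the prefix already ran

# With the do_work wrapper, a truncated copy is just an unfinished function
# definition: the parser hits EOF inside it and nothing executes at all.
printf 'do_work() {\n  echo step 1\n  echo ste' > truncated-good.sh
sh truncated-good.sh || true   # syntax error only; no commands ran
```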

It seems like there could be some happy medium: an install script with a
published checksum or signature (a SHA1 hash and/or a full cryptographic
signature), plus a tool that downloads the script, verifies it, and then
calls a standard function: `my_install()` (or something).
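
A rough sketch of such a tool, where every name (`install.sh`,
`install.sh.sha256`, `my_install`) is hypothetical and the download is
simulated with local files (a real tool would fetch both with something
like `curl -fsSL "$url" -o install.sh`):

```shell
printf 'my_install() { echo installing; }\n' > install.sh  # stand-in download
sha256sum install.sh > install.sh.sha256                   # stand-in published checksum

# Verify first, then source the script and call the agreed-on entry point.
if sha256sum -c install.sh.sha256; then
    . ./install.sh
    my_install
else
    echo "checksum mismatch; refusing to run" >&2
fi
```

Note that a checksum only helps if it's published over a channel the attacker
can't also rewrite -- which is why the thread keeps coming back to signatures.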

~~~
t0astbread
Well, altered in transit and corrupted in transit are both covered by HTTPS.
(I think error detection is even covered by TCP, hence also HTTP?)

If a vendor does deliver software over HTTP, then you should verify the script
before executing it via a checksum delivered over some secure channel. (This
is also what apt, the Debian package manager, does by default afaik.)

~~~
mbreese
It's still possible for data to be silently corrupted before it hits the HTTPS
server [1]. After dealing with a similar issue that corrupted data silently,
but still had valid checksums, I tend to not trust implicit checks. (My
example wasn't an HTTPS specific issue, but it in theory could still happen
with HTTPS).

[1] [https://stackoverflow.com/questions/34610581/is-it-
necessary...](https://stackoverflow.com/questions/34610581/is-it-necessary-to-
verify-checksum-when-data-is-sent-over-https)

------
geocrasher
If you are curl|sh'ing while blindly following some random online tutorial, it
is bad.

If you are curl|sh'ing and know why you're curl|sh'ing and know _what_ you're
curl|sh'ing and understand the implications of curl|sh'ing, then I don't see
the problem.

Like most things, it's subjective.

~~~
falcolas
How can you know what you're "curl|sh'ing"? If you're piping it, it's
impossible to know what you're getting and executing. That's the problem.

~~~
geocrasher
And furthermore, it's a matter of trust. If I don't trust the source that I'm
running the software from, then curl|sh isn't the problem.

~~~
falcolas
Trust, and what you download from a web server, are two distinct issues. There
is no way to verify that the content you get from a web server is what the
developer wrote (particularly when you get into chains of 301 redirects to a
GitHub repo). Packages, at least, provide a signature from the developers (or
distro packagers) that you can verify independently.

------
severine
Classic: _Is curl|bash insecure? (Sept 25, 2015 | 163 comments)_

Link: [https://sandstorm.io/news/2015-09-24-is-curl-bash-
insecure-p...](https://sandstorm.io/news/2015-09-24-is-curl-bash-insecure-pgp-
verified-install)

HN discussion:
[https://news.ycombinator.com/item?id=10277470](https://news.ycombinator.com/item?id=10277470)

------
lyxsus
I'd like to be proven wrong, but isn't the idea that executing an unsigned
script (bash, js, …) on your computer is somehow inherently less secure than
running some binary nothing but a superstition?

------
specialist
I just ddg'd "sandbox shell script" and learned about chroot. First hit:

How can I “sandbox” a shell script?

[https://unix.stackexchange.com/questions/363363/how-can-i-
sa...](https://unix.stackexchange.com/questions/363363/how-can-i-sandbox-a-
shell-script#363450)

What would a proper sandbox look like? Every session runs within a Docker
image?
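
One common answer is roughly that, yes: execute the script in a throwaway
container with no network and only the current directory mounted read-only. A
sketch, guarded so it degrades gracefully when Docker (or a cached alpine:3
image) isn't available; the image name and paths are just examples:

```shell
echo 'echo hello from inside the sandbox' > install.sh   # stand-in script

if command -v docker >/dev/null 2>&1 && docker image inspect alpine:3 >/dev/null 2>&1
then
    # --network none cuts off exfiltration; :ro stops writes to the host dir.
    docker run --rm --network none -v "$PWD:/work:ro" -w /work \
        alpine:3 sh install.sh
else
    echo "docker (or the alpine:3 image) unavailable; skipping sandbox demo"
fi
```

This mostly shows you what the script _tries_ to do; it doesn't help if the
software ultimately needs to be installed on the host anyway.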

I don't know anything about this stuff. My intro to the sandboxing and
security notions was Java and its Applets. I guess I thought we'd eventually
do everything that way.

------
dwheeler
Um, no. Curl-to-shell is bad.

Check this out: [https://www.idontplaydarts.com/2016/04/detecting-curl-
pipe-b...](https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-
server-side/)

~~~
aequitas
As the author notes:

> You’re not running some random shell script from a random author, you’re
> running it from a software vendor who you already trust to run software.

~~~
falcolas
Without any form of validation that the shell script I'm running is actually
from the software vendor in question. That trust is even further stretched
when the request is 301 redirected to another host entirely.

Between custom domain names, TLDs, Let's Encrypt (I love Let's Encrypt, but
what it provides is encryption, not trust), and cheeky developers using
combinations of those, how can I really know that a script coming from
(fictional example) [https://install.dock.er](https://install.dock.er) is
actually the Docker company?

~~~
aequitas
The same arguments apply for downloading a binary.

If you want to pull this even further: when was the last time you verified the
signing keys of your OS distribution's repo without relying on the internet?

A lot of install methods that are not curl|sh amount to: here, copy this bash
line to add the GPG keys for our apt repo, then apt update and install. A lot
of people don't bother to check those keys.

~~~
falcolas
True, some people don't check those keys, but _it's possible to do_. There's
a well-trodden (and cryptographically secure) path for gaining trust in a key
that's distinct from downloading and unpacking a file.

This is (currently) not possible to do with scripts downloaded from a web
page. Especially when immediately piped into a shell.

~~~
aequitas
As with most issues in security, it's a balance between usability and
security. The real problem is that the majority of users will always try to
find the path of least resistance. No matter how well documented a secure
procedure is, if a user can find a one-liner they will use it instead.

I've seen this in other places as well. A vendor writes a comprehensive guide.
Some people condense it to a minimum and will even boast that they "made" an
easier way to do X or Z, while omitting all the caveats. Of course that
condensed version becomes the popular 'standard' people find. And they'll go
complain to the vendor when it doesn't work, without even reading the original
instructions.

~~~
falcolas
You're not wrong in your first paragraph. That doesn't make the use of
curl|sh _good_, however. It doesn't justify developers' laziness in eschewing
secure methods of distribution, or articles like this one that justify that
same laziness.

------
commandlinefan
> You can regenerate them from autoconf

Not that the autoconf input is readable itself, anyway...

------
komali2
> if there is a potential security problem and no one is exploiting it, then
> is it still a security problem?”

Yes, surely? Somebody is probably exploiting it and you just aren't aware,
because why wouldn't they have that in their toolbox?

~~~
Carpetsmoker
> Somebody is probably exploiting it and you just aren't aware

I tried finding an example, and asked last time this was on HN as well, and
thus far I've not found a single example.

Absolute security isn't realistic; it's often a trade-off between convenience,
cost, etc. I can probably break into your house by throwing a brick through
your window ... should we now all invest in more secure windows? Doesn't
strike me as a good trade-off for most people.

------
henvic
It's not that curl isn't so bad, but that the lack of notarization is really
bad. Distributing software safely is still a pain.

Linux distributions figured this out a long time ago with signed packages -
almost a requirement if you want to distribute your software on mirrors that
you don't control. This is why it's safe to download a package from a Debian
HTTP mirror and install it. The thing that, afaik, none of them do yet, and
which is really important too, is checking signatures against a revocation
policy.

I actually started writing about how I distributed a CLI for macOS, Linux, and
Windows a few years ago but never finished it.

If you are writing Go code you want to take a look at Equinox.

I used it to distribute a CLI in my previous job, and it worked like a charm.
One of the things I liked most was that I was finally able to push safe
updates because I authenticated every release with my private key, and only
allowed users to update if the key matched. If you have a primary and a
secondary key, and a strategy to revoke it, this means you have a quite safe
release process.

Related:

[https://developer.apple.com/library/archive/documentation/Se...](https://developer.apple.com/library/archive/documentation/Security/Conceptual/CodeSigningGuide/Introduction/Introduction.html)

[https://equinox.io](https://equinox.io)

[https://developer.apple.com/macos/distribution/](https://developer.apple.com/macos/distribution/)

[https://developer.apple.com/documentation/xcode/notarizing_m...](https://developer.apple.com/documentation/xcode/notarizing_macos_software_before_distribution?preferredLanguage=occ)

[https://blog.jessfraz.com/post/why-open-source-firmware-
is-i...](https://blog.jessfraz.com/post/why-open-source-firmware-is-important-
for-security/)

[https://docs.microsoft.com/en-
us/windows/win32/seccrypto/cry...](https://docs.microsoft.com/en-
us/windows/win32/seccrypto/cryptography-tools)

[https://docs.microsoft.com/en-
us/windows/win32/seccrypto/sig...](https://docs.microsoft.com/en-
us/windows/win32/seccrypto/signtool)

[https://www.mothersruin.com/software/SuspiciousPackage/](https://www.mothersruin.com/software/SuspiciousPackage/)

[https://successfulsoftware.net/2012/08/30/how-to-sign-
your-m...](https://successfulsoftware.net/2012/08/30/how-to-sign-your-mac-os-
x-app-for-gatekeeper/)

[https://arstechnica.com/gadgets/2012/02/developers-
gatekeepe...](https://arstechnica.com/gadgets/2012/02/developers-gatekeeper-a-
concern-but-still-gives-power-users-control/)

------
crdoconnor
Curl to shell is usually indicative of a package manager that could be doing a
better job.

~~~
meddlepal
If only there were one standard package manager... dealing with
brew/apt/[yum|dnf] and handling multiple yearly vendor releases (Ubuntu and
Fedora) gets to be annoying.

~~~
mbreese
And some random shell script installer is going to be better than a dedicated
package manager at dealing with the differences between N distributions and M
architectures?

Sure, some install scripts will be quite simple, but in that case, why do you
need to have the install script in the first place? You'd only need it if the
installation procedure was too complicated for an `INSTALL` or `README`
document.

~~~
dmitriid
> And some random shell script installer is going to be better than a
> dedicated package manager at dealing with the differences between N
> distributions and M architectures?

Surprisingly, yes. Especially for *nix systems where a lot of things (but not
package managers) are more-or-less unified. The author may not have the time
or desire to learn the half-a-dozen or so package managers and set up a build
system to create packages for all of those, and then do the work of getting
them into the standard repositories.

