
Curl to shell isn't so bad - stargrave
https://arp242.net/curl-to-sh.html
======
yoavm
Not so bad compared to what? Yeah, compared to downloading a tar file from
the website and running ./configure, make, etc. - right, it's probably quite a
similar risk. But who does that?

Every decent Linux distro has a package manager that covers 99% of the
software you want to install, and compared to an apt-get install, pacman -S,
yum install and so on - running a script off some website is way more
risky. My package manager verifies the checksum of every file it gets to make
sure my mirror wasn't tampered with, and it works regardless of the state of
the website of some random software. If I have to choose between software
that's packaged for my package manager and software I have to install with a
script - I'll always choose the package manager. And we didn't even start to
talk about updates - as if that isn't a security concern.

The reason we should discourage people from installing scripts off the internet
is that it would be much better if that software were just packaged
correctly.
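
A rough illustration of that point, assuming a Debian/Ubuntu system: the
hashes apt checks come from signed repository metadata, not from whatever
mirror happens to serve the file, and you can inspect them yourself:

    
    
        # the SHA256 field comes from the distro's signed package index,
        # not from the mirror that serves the .deb
        apt-cache show curl | grep -E '^(Filename|SHA256):'
    
        # apt refuses the download if the fetched file doesn't match
        sudo apt-get install --reinstall curl
    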

~~~
hannob
> Every decent Linux distro has a package manager that covers 99% of the
> software you want to install

I wish this were true, but plenty of experience with Linux usage tells me that
not having something packaged is a very common occurrence.

Though of course this can be improved: more people should help work on their
favorite Linux distro so that more software gets packaged. And upstreams
should try harder to collaborate with distros, which unfortunately they rarely
do.

~~~
mirimir
Right. I love Debian, but its packages are often very stale. That's why many
end up using Ubuntu. And yes, I get that package review takes time, and that
Debian is arguably more secure. But that's little consolation when you're dead
in the water because what's packaged is too old.

~~~
TeMPOraL
If Ubuntu is the non-stale alternative to Debian (stable), then I can't
imagine how bad the situation is there. I often build software myself because
_Ubuntu_ is very often stale.

~~~
bbanyc
Ubuntu LTS (freezes every two years) or "regular" Ubuntu (freezes every six
months)?

I'm wondering if the periodic freeze-the-universe model that many distros use
reflects a world that doesn't really exist anymore where distros came on DVDs
(or CDs, or floppies). Whatever version you had on the disc, that's the
version you're going to use.

I just started playing with FreeBSD in a VM, which has a frozen base system
and constantly-updated packages separate from it. This works better for
software you don't think of as an "OS component" but the question then becomes
where you draw the line.

Or maybe it's just a fundamental disconnect between consumer-facing "move fast
and break things" and enterprise-level "never break anything even if it means
you can't move at all" and there's no way to make software that works for
both.

~~~
farisjarrah
This isn’t how Ubuntu even works. Ubuntu doesn’t just “freeze” their operating
system every two years. They are constantly delivering package, security, and
hardware enablement fixes. Often the stuff that lands in Ubuntu non-LTS
versions ends up in the LTS point releases. They keep a stable base of x.x
versions but they definitely backport bug fixes to x.x.x versions of their
software packages. For example, there was a bug in sudo about 3 weeks ago, and
Ubuntu issued a fix within hours of the upstream project’s fix.

~~~
kingosticks
They back-port "high-impact" bugs such as security issues (your example),
severe regressions and bugs causing loss of user data. They do not back-port
other bug fixes or new features. The result is that you often find the version
included in your Ubuntu release is stale. See
[https://wiki.ubuntu.com/StableReleaseUpdates#When](https://wiki.ubuntu.com/StableReleaseUpdates#When)

------
LIV2
I disagree with some of this, e.g. pastejacking.

Plenty of software projects put more care and focus into their software than
into their website. If you're running a vulnerable version of WordPress or
whatever CMS, it'd be easy for someone to insert something malicious without
being noticed, whereas something that modified your code would show up in git,
code reviews, etc.

~~~
Carpetsmoker
Pastejacking should be mitigated if you use zsh, as it will never run pasted
commands automatically. From a quick test it seems that recent(?) versions of
bash also implement this feature and have it enabled by default. I don't
know about fish or other shells.
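
If it isn't on by default for you, this is readline's bracketed-paste setting,
so something like the following should enable it for bash (a sketch; older
bash/readline versions may not support it):

    
    
        # ~/.inputrc - pasted text is inserted literally and nothing runs
        # until you press Enter yourself
        set enable-bracketed-paste on
    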

~~~
Un1corn
Pastejacking is not the only possible attack by a compromised server; you can
also change the content of the script when the user downloads it through curl
or wget.

Oh My Zsh uses GitHub for their script, so I trust it more than if they hosted
it themselves, for example.

~~~
Carpetsmoker
If people have access to change the content of the script then they can also
change foo-1.2.3-src.tar.gz or foo-1.2.3-linux-amd64.gz. These are all general
problems with downloading _anything_ from the internet.

~~~
Un1corn
Right, but the attacker can make it look legit even to someone who looks at
the script. The attacker can change the content of the script based on the
user agent, or even by detecting when you pipe it to bash[0]

[0] [https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/](https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-bash-server-side/)

~~~
marcthe12
Well, there is a way to resolve that. Have a command between curl and sh that
only prints to stdout once stdin receives EOF. A double tac is an
example.
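
Something like this, as a sketch (the URL is a placeholder) - tac has to read
all of its input before it can reverse it, so two of them buffer the whole
script and restore the original order; sponge from moreutils does the same
thing more explicitly:

    
    
        curl -fsSL https://example.com/install.sh | tac | tac | sh
    
        # or, with moreutils installed:
        curl -fsSL https://example.com/install.sh | sponge | sh
    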

~~~
mike_hock
Or, you know, pipe to a file and verify the content, then execute.
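
Roughly this, with the URL as a placeholder:

    
    
        curl -fsSL https://example.com/install.sh -o install.sh
        less install.sh     # read it, or at least skim it
        sh install.sh
    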

------
avaloneon
I'm surprised that no one has yet mentioned that piping curl to bash can be
detected by the server (previous discussion at
[https://news.ycombinator.com/item?id=17636032](https://news.ycombinator.com/item?id=17636032)).
This allows an attacker to send different code if it's being piped to bash
instead of saved to disk.

IMHO, "curl to shell" is uniquely dangerous, since all the other installation
vectors mentioned don't support the bait-and-switch.

~~~
halter73
To me, this seems like only a slightly more advanced version of sending
malicious payloads only to curl user agents and not something uniquely
dangerous.

If I were already using curl to predownload and audit the script, I'd probably
just execute the script I had already downloaded, which would be safe. Most of the
people piping to bash directly do no auditing at all because they trust the
source. If you're going to put a malicious payload in a script, you don't have
to be that tricky about it.

Most people wouldn't know anything was up in any event until someone else
discovered the attack and started raising a fuss on social media. I don't
think serving the malicious script just to people who pipe it to bash (or
really just download it slowly for any reason) would stop _everyone_ from
finding out. It would just make the malicious script more notable when found.

~~~
avaloneon
Part of the confusion comes from the fact that there are several different
points to be discussed, and they're easy to mix up. For instance: software
trust in general, web server security vs. repository security, reproducibility,
etc.

In this case, even "curl is dangerous" has at least two variations. The first
is not knowing what the server is sending, the second is that the server can
change what it is sending. My complaint is with the latter.

For example, a file in a repository somewhere or uploaded to a compromised web
server is static. Everyone who downloads the file gets the same thing.

A file served by `curl | bash`, however, isn't. The server could send
different files at different times of day, or only send malicious payloads to
certain IPs (like known Tor exit nodes), or certain geographic locations, etc.,
which is something no repository I know of is even capable of.

Archives, packages, and installers downloaded from a server (instead of a
repository or FTP server or S3 bucket where the attacker controls the file but
not the server) share this weakness, so that alone doesn't make curl uniquely
dangerous.

Where `curl | bash` differs from installers, however, is that it's
_interactive_, so the server can alter its behavior on the fly. This is
dangerous because, with installers, the attacker must commit to sending either
a clean or infected payload before the installer can tell them whether it's
being run or not. In this way, even archives serve as a kind of poor zero-
knowledge proof of what the software is, since the attacker needs to commit to
a version before knowing what the user intends to do. There's normally also a
file left on disk as well.

With `curl | bash`, however, the server has the unique opportunity to get a
callback from the installer _before it has finished sending it_, which means
the server doesn't have to commit to sending malicious code blindly and hoping
it's not being saved by someone who intends to audit it. Also, `curl | bash`,
by default, leaves no trace, further frustrating auditing/reverse-engineering
attempts. (Adding insult to injury, there's no way to check the malicious
payload _before_ running it, since running it is what causes it to appear.
Even if run inside a VM, this can also be abused by an attacker to cover
their tracks in real time.)

In this way, `curl | bash` allows for obfuscation/anti-debugging techniques
that no other method I know of offers. Hence, my opinion that `curl | bash` is
"uniquely" dangerous.

Edit: Thinking about this more, this generalizes to any installer that
interacts with the network, since all the attacker needs is a way to detect
execution and some way to avoid leaving artifacts. In this way, curl is indeed
not quite "uniquely" dangerous, since it's tied with other network-based
installers. However, since the other popular installation methods don't have
the ability to obfuscate their initial payload like this, I think the point
still stands. (Obviously feel free to correct me if I overlooked something)

------
lawl
> Not knowing what the script is going to do.

Yep, this is why I hate piping curl to sh. Much prefer how e.g. Go does this:

Tells you to just run

    
    
        tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
    

It's not that I don't trust the installer script to not install malware. But I
don't trust the installer script to not crap all over my system.

~~~
lokedhs
Try using Qubes OS. It will allow you to run such scripts without having to
worry about your system being screwed up.

Note that Qubes has some drawbacks, the main one being that it doesn't
support GPUs, so not everybody is in a position to use it.

~~~
lawl
I really like Qubes and have been running it for a while. What I'd love is to
have something like Qubes but a bit more lightweight, using containers instead
of full-blown VMs - basically trading some isolation guarantees and security
for more usability.

~~~
lokedhs
I think Silverblue might be what you want. It's still in beta, but if you're
OK with that you might want to try it.

------
andreareina
My experience is that software that installs via curl|bash tends to ignore my
preferences as expressed via $PREFIX/DESTDIR, $XDG_{CACHE,CONFIG,DATA}_HOME,
etc. It'll install who-knows-where and probably leave dotfiles all over my
home directory.

Maybe curl|bash is _functionally_ equivalent to git clone && ./configure &&
make && make install, but my bet is on the one providing a standard install
flow to be a better guest on my system.
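
For comparison, a rough sketch of what the standard flow respects out of the
box (the prefix and staging paths are just examples):

    
    
        ./configure --prefix="$HOME/.local"    # install under my home, not /
        make
        make DESTDIR=/tmp/stage install        # stage it somewhere inspectable first
    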

~~~
manojlds
The curl|bash might actually clone a repo and build it. Your concern is a
different one.

~~~
marcosdumay
Yes, it could. But if that were the case, the site would say one can just clone
the repo and build it instead of sending everybody down the curl route.

~~~
ilaksh
Some of them do say that.

------
lokedhs
The points raised in the article are correct, and I'm much more concerned with
the willingness of people to run arbitrary software on their primary computers
in general than with the specific case of piping to sh. I think piping to sh
just emphasises how insecure the entire practice is, and arguing against that
is analogous to closing your eyes to protect yourself from an attacking tiger.

The only system I've worked with that helps you truly deal with this is Qubes
OS. Perhaps Fedora Silverblue will achieve this as well, once it comes out of
beta.

~~~
Carpetsmoker
This article started life as a more general article about security and trust,
but I decided to post this particular part as its own article.

There is a lot to be said about trust in software, but in general I agree that
mitigations against untrusted software could be improved. The problem is that
it's often hard to do without affecting usability and, more importantly, in
practical terms the status quo seems "secure enough" to a lot of people, even
though there are areas for improvement we (as an industry) should work on.

~~~
astrobe_
Minor correction: it's "status quo" and I think it should be italicized.

------
sjy
Has running a curl-to-bash command found during normal user-initiated web
browsing _ever_ resulted in a malware infection? Even anecdotal evidence would
be valuable at this point.

~~~
Arnt
Not as far as I know, but I have heard about people pasting the Wrong Thing
into a root shell.

In a way it's a casting error. A type safety violation. You paste text into a
privileged shell and coerce it to be sh, and when it goes wrong the sh input
is rich in < and > characters.

Friends of mine have mentioned at least a) people accidentally pasting much
more than the intended line into sh because they selected more than intended,
and b) sites that silently modify the cut buffer to add some "pasted from …
blah … like us on facebook" or somesuch, I forget the details. The person who
wrote the _page_ intended one line to be castable to sh; another person who
worked on the _site_ added the script that transformed the cut/paste without
realising that.

~~~
m0xte
Yes. I had a pretty bad one from a commercial software company. _Running any
script someone else wrote badly, intentionally or otherwise, is dangerous_. The
source is moot. Rather than provide distribution packages they had a shell
script that installed and updated their stuff. If you ran the update script it
would evaluate rm -rf ${SOFTWARE_ROOT}/

That environment variable was not set if the software hadn’t been installed
and it wouldn’t run unless it was a root shell.

Guess who ran the update script instead of the install script and hosed the
machine? I gave them a whole lifetime of bile over that.

The product turned out to be horrible as well.
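
For anyone writing such scripts, this particular failure mode is cheap to
guard against; a minimal sketch (SOFTWARE_ROOT is just the illustrative
variable name from above):

    
    
        #!/bin/sh
        set -eu  # -u makes expanding an unset variable a fatal error
    
        # ${VAR:?msg} aborts with a message if VAR is unset or empty,
        # instead of silently expanding to "" and turning this into `rm -rf /`
        rm -rf "${SOFTWARE_ROOT:?SOFTWARE_ROOT is not set}/"
    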

------
tannhaeuser
Yeah, I asked this question on SO - how to responsibly publish a script - but
got no response, only a sarcastic "Tumbleweed" badge. My concern was that the
script could easily be hosted elsewhere and we'd have multiple versions with
potentially malicious mods floating around. In the absence of alternatives,
curl-bashing isn't so bad after all, because it promotes a canonical download
location on a domain/site you control, even if I hated it initially as a
long-term Unix user.

------
eadmund
> There is no fundamental difference between curl .. | sh versus cloning a
> repo and building it from source.

Not true: when you clone a repo with signed commits, you have forensic
evidence that the repo signer provided the code you ran, while when you use
curl you have … just the code itself.

That's not a _lot_, but it's not _nothing_.

~~~
akerl_
How many repos are there that actually sign commits, and of those, how many
users actually validate that their local checkout’s commits are signed by the
key they expected?

The line you’ve quoted doesn’t say that there’s no fundamental difference
between curl | sh and cloning a repo with signed commits, and I think it’s a
stretch to think signed commits have enough usage among devs / users to make
them a viable option.

~~~
theamk
I don't think signing commits matters. What matters is that if the webserver
is compromised, very few people are likely to notice, and the evidence can be
gone at any time.

But if a GitHub repo is compromised, anyone who pulls the repo can notice
strange commits - and the evidence cannot disappear, as rebasing a public head
will bring even more scrutiny.

------
kijin
I hate install scripts, period. They feel so Windows-ish. Just distribute a
.deb, .rpm, .snap, homebrew package, npm package, or whatever is the most
appropriate for your software. All the scripting you need to do should be done
inside of the regular package installation process, and even that should be
kept to a minimum.

The only software that has any right to rely on an ad-hoc install script on a
Unix-like system is the package manager itself. It's awful enough that I have
to do apt update and npm update separately. Please don't add even more ways to
pollute my system.

~~~
Boulth
The problem with deb, rpm, etc. is that you need to add instructions for all
supported systems one by one. Check out this site for reference:
[https://www.sublimemerge.com/docs/linux_repositories](https://www.sublimemerge.com/docs/linux_repositories)
and compare with a curl URL|sh that can detect the target system and delegate
to the appropriate package manager. Much simpler.

The root cause of this, in my opinion, is that there is no universal packaging
format for Linux.

~~~
kijin
I understand the difficulty. But if you're going to write a shell script that
detects the target system and takes different actions, you might as well move
that logic to the packaging system. It's much more robust, especially when it
comes to dependency management and updating.

I wish there were a simple, modern, easily configurable tool that can take a
declarative description of a project and spit out a ready-to-serve repository
(just point nginx at it!) for most commonly used package formats. For Linux
daemons this should be easier than ever before, now that systemd has gobbled
up all the major distros.
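
Something close to this already exists in tools like fpm; a rough sketch of
the workflow I have in mind, with the package name, version, and paths all
hypothetical:

    
    
        # stage the files exactly as they should land on the target system
        mkdir -p staging/usr/bin && cp myapp staging/usr/bin/
    
        # build a .deb and an .rpm from the same staged tree
        fpm -s dir -t deb -n myapp -v 1.2.3 -C staging .
        fpm -s dir -t rpm -n myapp -v 1.2.3 -C staging .
    

Generating the repository metadata (e.g. with reprepro or createrepo) and
pointing nginx at it would then be a separate, mostly mechanical step.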

~~~
Boulth
> you might as well move that logic to the packaging system

Depends on who you mean by "you". A software vendor can definitely write a
script, but they have no power over distributions to "move that logic to the
packaging system".

This would have to be collaborative work between different distros, but from
my casual look it's just not happening, as everyone is happy with their own
package manager that's "obviously the best".

I do agree on systemd. I didn't like it before I moved to Linux; now I see a
lot of value in what it brings.

This may also be relevant: [http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html](http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html)

~~~
kijin
> This would have to be a collaborative work by different distros

I think there was a misunderstanding between us. I didn't mean anything so
complicated.

By "moving that logic to the packaging system", all I meant is that instead of
using a script to detect whether your app is being installed on Ubuntu or
Fedora or whatever, you should just build and publish separate packages for
each distro you wish to support. The logic for selecting the right package for
itself is already built into every packaging system, ready for anyone to use.

Hopefully the process of building a dozen packages with each release can be
easily automated once it is set up.

------
paxy
The average non-technical user is never going to open up the terminal and run
commands. The well-educated technical user is going to be wary of untrusted
sites and various forms of attacks (a category I'm assuming the author of this
post falls under).

IMO this is good advice for those that fall in the middle of these two
categories, i.e. _slightly_ technical people who run into problems and copy-
paste solutions from Stack Overflow hoping that something will work.

> you’re not running some random shell script from a random author

This is _exactly_ what is happening in the vast majority of these cases. These
users are going to be wary if linked to an executable or installer, but "hey,
just run this simple line of code" sounds like a very appealing solution.

~~~
hising
> copy-paste solutions from Stack Overflow

On the other hand, a solution on SO that hid an attack would not gain upvotes,
and so wouldn't surface as an option for the person seeking advice there.

~~~
fwip
Depends on how hidden it is.

------
jchw
Agreed. If I don’t trust the server, or don’t have a secure connection to it,
it is not likely wise to run any non-trivial code downloaded from it.

Verifying a hash that comes from the same server also doesn’t make that much
sense. Verifying a PGP signature would be a compelling reason to not pipe to
shell, and that’s really about it.

~~~
pacifika
Just because the connection is secure doesn’t mean it’s controlled by a
trusted entity

~~~
jchw
Then perhaps running any non-trivial code from it is blisteringly unwise.

------
esotericn
For the most part this is a problem with non-rolling-release distros.

There are very few instances in which I've had to even use an installer on
Arch. For many of those cases, the AUR provides a package that verifies the
hash of the downloaded file anyway.

I've constantly been frustrated when using Ubuntu because something basic like
having 'vim' not be months out of date requires a PPA.

The 'official' Rust installation method is a curl | sh. Or:

    
    
        $ pacman -Q rustup && rustup -V
        rustup 1.20.2-1
        rustup 1.20.2 (2019-10-16)

~~~
titanomachy
Rolling-release and fixed-version distros serve different purposes. A fixed-
version OS has a set of software packages at specific versions which have been
tested together, both by test suites and by the many users using the same
version set. Security patches and bugfixes get patched in, but the packaged
software doesn't undergo major changes. That's important if you're running a
critical production system.

Rolling-release systems are awesome for personal machines where you can handle
breaking updates or work around them. Usually I want the latest versions of
everything when I'm doing exploratory stuff.

That said, modern software deployment is definitely moving away from "pick an
LTS Linux distro and only change your application code", instead we mostly use
containers now. A lot of production systems are probably still using the older
technique though.

~~~
esotericn
I agree.

But no one should be running this curl | sh nonsense in prod anyway, right?
You at least want a defined version, so you'd save the artifact instead of
piping.

To me, the whole thing seems like a solution to a self-imposed problem. It
reminds me of the old "frankendebian" stuff in which people would be warned
against having a system half-stable, half-unstable.

------
slim
The problem is mainly that the script is executed without leaving a trace. If
you downloaded the script and then executed it, you would have something to
inspect in case something goes wrong.

It's too easy, and people with very little knowledge could develop a habit of
doing this without asking questions, leaving no trace for a senior to inspect
when a problem does happen.

~~~
M4v3R
It does leave a trace - the command executed is stored in your shell's history
file, so unless a malicious script deletes the history (and it could also
delete the downloaded files if you had checked it out from a repo), anyone can
immediately see what was executed.

~~~
raimue
The history file only contains commands typed into an interactive shell.
Commands executed from a shell script or piped into sh will not end up there.

------
forty
> There is no fundamental difference between curl .. | sh versus cloning a
> repo and building it from source

I would say it depends. If the commits are signed by a key you know, it's
probably better. Even if that's not the case, cloning over SSH when you know
the host key is also slightly better than downloading over HTTPS, where any
(compromised) trusted CA can MITM your connection :) (you can argue that those
two use cases are rare in practice, and I would agree with you ;))

~~~
BadBadJellyBean
I don't think SSH is more secure. You have to verify the server key to make it
secure. How do you do that? I have googled a bit and didn't find an obvious
page for the GitHub server keys. And even if I did, I would be back to HTTPS
MITM by a compromised CA.
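
For what it's worth, the mechanics of the check look something like this,
assuming you can get the expected fingerprint through some out-of-band channel
you trust:

    
    
        # print the fingerprint of the host key the server actually presents,
        # then compare it by eye with the fingerprint obtained out of band
        ssh-keyscan -t ed25519 github.com 2>/dev/null | ssh-keygen -lf -
    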

------
eavotan
> Not knowing what the script is going to do.

This is more like: not knowing what to do when it doesn't work. And that is
always the case until it works, which is just a local phenomenon; I can't
expect things that work for me to work for others. So why not write expressive
installation documentation with multiple steps instead of one-liners that
either work or don't? There is just no in-between.

Take the installation instruction of syncthing for example:

    
    
        curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
    
        echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
    

These two steps are hard to automate if you don't have an interactive shell.

The same goes for the saltstack bootstrap script. That script doesn't work
equally well on all platforms, which is not a reliable state of affairs. So in
the end I'll stick with the normal way of installing things, which is very
easy to automate.

------
Niksko
I ran into this recently at work. I wanted to write a script that you could
curl into bash to quickly set up some common tools.

Firstly, I made sure that the script told you what it would do before doing
it.

Secondly, my instructions are two lines. Curl to a file, then run it through
bash. A compromise, but if you mistrust the script, you can inspect it
yourself before running it.

------
e12e
> Either way, it’s not a problem with just pipe-to-shell, it’s a problem with
> any code you retrieve without TLS.

Well, yes. But the _typical_ alternative is a tarball and a GPG signature -
both via insecure transport, but verifiable (like with TLS and a CA).

Git will typically be via SSH or HTTPS - so to a certain degree over a secure
channel.
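
The verification step for the tarball case looks roughly like this; the
filenames and the source of the key are placeholders:

    
    
        # import the release signing key, obtained out of band
        # (keyserver, project website over TLS, a colleague, ...)
        gpg --import release-key.asc
    
        # check the detached signature against the tarball
        gpg --verify foo-1.2.3.tar.gz.asc foo-1.2.3.tar.gz
    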

------
mkup
If curl loses its connection to the source website while downloading the
script, the partially downloaded script will be executed, no matter what. This
is a major drawback of the curl-to-shell piping approach, and the original
article misses it entirely.

~~~
glic3rinu
A common solution is to wrap all code within a function. This way nothing gets
executed until the last line, the one that calls the function, is executed.

    
    
      function main () {
         # all code goes here
      }
      main

~~~
amalcon
Common, but not universal. If I pipe a response body into a shell, I don't get
to check whether they were careful or not.

------
saagarjha
A small benefit of downloading the installer is that this lets you run a
checksum on it.

~~~
thriqon
Yes, if you can acquire or verify the checksum via some other means, e.g. PGP
or phone.

To confirm that nothing changed in transit, I rely on TLS.
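
The checking itself is the easy part; a sketch, assuming the publisher posts a
SHA-256 value somewhere you trust (the URL and hash are placeholders):

    
    
        curl -fsSL https://example.com/install.sh -o install.sh
    
        # compare against the published value; sha256sum exits non-zero on a mismatch
        echo "<published-sha256>  install.sh" | sha256sum -c -
    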

------
bauerd
I remember someone curling a Heroku CLI install script and, upon inspection, it
would have tried to install a specific version of Ruby too, instead of just the
client. Since then I always glance through the script first.

------
daxterspeed
Is there a simple command you can use to read the contents of the script
(pipe) before it's sent to sh? Something like:

    
    
        curl ... | less-and-maybe-cancel | sh

~~~
OskarS
Yeah, "vipe" from moreutils (which is a package on most platforms) does this.
It inserts $EDITOR (usually vim) into the command pipe, and allows you to
review and/or edit the text before passing it on to the next thing in the
pipeline. Great little command line utility, for all sorts of things.

If you want to cancel, just erase the file. Or `:cq` in vim probably works as
well.
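
So the example from the question becomes (URL as a placeholder):

    
    
        curl -fsSL https://example.com/install.sh | vipe | sh
    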

------
ZoomZoomZoom
I think the normalization of this practice makes these scripts a prime target
for wrongdoers, and they're often an easier target.

------
tedunangst
Sometimes I just want to download software without installing it. This is
complicated by install scripts that obfuscate the real source or break it into
dozens of parts.

------
api
Curl to shell is a result of Linux's fragmentation. It's the only way to
provide a simple install process.

~~~
hinkley
If I were trying to distribute a package on Linux I’d be pretty intimidated by
the number of package managers. I might start with Debian, do an Alpine
package, then nope out.

If the app was a server instead of a CLI, I’d start with a Docker image. I
ended up giving up on installing Erlang on my little embedded system and went
with the Docker image instead.

~~~
api
Props for mentioning Alpine, the only sane distro.

------
tandav
I always install Docker using a simple command:

    
    
      curl -fsSL get.docker.com | sh
    

Instead of copy-pasting a dozen commands from the docs / SO.

~~~
heinrich5991
That looks vulnerable to MITM since it doesn't use HTTPS. Indeed, I just
looked it up; it's not in the HSTS preload list of any browser:
[https://www.ssllabs.com/ssltest/analyze.html?d=get.docker.co...](https://www.ssllabs.com/ssltest/analyze.html?d=get.docker.com&s=2600%3a9000%3a21c4%3a200%3a10%3aa463%3a3d00%3a93a1&latest).

~~~
tln
FWIW Docker doesn't publicize those instructions; they give:

    
    
        curl -fsSL https://get.docker.com -o get-docker.sh
        sh get-docker.sh
    

And, it's the only one of the examples listed in the article that doesn't pipe
to shell.

