Curl to shell isn't so bad (arp242.net)
266 points by stargrave 11 days ago | 193 comments





Not so bad compared to what? Yeah, compared to downloading a tar file from the website and running ./configure, make, etc. - right, it's probably quite a similar risk. But who does that?

Every decent Linux distro has a package manager that covers 99% of the software you want to install, and compared to an apt-get install, pacman -S, yum install and so on - running a script off some website is way more risky. My package manager verifies the checksum of every file it gets to make sure my mirror wasn't tampered with, and it works regardless of the state of the website of some random software. If I have to choose between software that's packaged for my package manager and software I have to install with a script - I'll always choose the package manager. And we didn't even start to talk about updates - as if that isn't a security concern.

The reason we should discourage people from installing scripts off the internet is that it would be much better if that software were just packaged correctly.


> Every decent Linux distro has a package manager that covers 99% of the software you want to install

I wish this were true, but plenty of experience with Linux usage tells me that not having something packaged is a very common occurrence.

Though of course this can be improved: more people should actually help work on their favorite Linux distro, so more software gets packaged. And upstreams should try harder to collaborate with distros, which unfortunately they rarely do.


I wish more repositories took the NixOS approach of writing the build scripts (+ patches if necessary) on GitHub for easy visibility and a more familiar way to submit a new or upgraded package. Once approved the binary is built and distributed from the Nix binary cache.

NixOS is awesome, but contributing to Nixpkgs is at least a little challenging. The tooling leaves a bit to be desired, and it’s really hard to keep all of the guidelines and rules in your head. It would be an excellent experience if there was extensive linting and the tools were easier to understand, imo.

Certainly isn’t an easy problem.

It’s still probably a lot better than contributing to say, Debian. OTOH, the PR backlog is always daunting.


Linux distros should make tasks that are just link collecting as easy as editing Wikipedia, and contributing shouldn't require learning git, their syntax, giving out your real name, or creating an account.

I've seen StackOverflow questions for "how do I install X" with thousands, sometimes hundreds of thousands of views and no-one has tried to contribute the 10 or so lines of code in the accepted answer as a package. Something is clearly too hard or not approachable.


Right. I love Debian, but its packages are often very stale. That's why many end up using Ubuntu. And yes, I get that package review takes time, and that Debian is arguably more secure. But that's little consolation when you're dead in the water because what's packaged is too old.

Ubuntu is nothing more than the Debian "unstable" branch with Canonical branding plus non-free packages.

Compare https://distrowatch.com/table.php?distribution=debian and https://distrowatch.com/table.php?distribution=ubuntu and you will realize that all the freshness of Ubuntu is built on top of what is available in Debian unstable.

I don't think this idea of "Debian packages are often very outdated" still applies nowadays. One can add the "testing" or "backports" channels in /etc/apt/sources.list.d and get "upstream version" software. Even "stable" ships fresh enough software these days.
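For example, on a stable system something like this pulls a newer package from backports (the suite and package names here are illustrative, not from any specific release):

    # add the backports suite matching your stable release
    echo "deb http://deb.debian.org/debian buster-backports main" | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    # backports are opt-in per package
    sudo apt -t buster-backports install somepackage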

In the last case you can always get the source, update it and send an NMU back to Debian. Let's not forget that that is how open source works :)


> Ubuntu is nothing more than the Debian "unstable" branch with Canonical branding plus non-free packages.

This just isn't true. Ubuntu is typically quicker to update popular packages, such as desktop environments, kernel, etc. Debian unstable, even experimental, are often months behind on Gnome, for example.



If Ubuntu is the non-stale alternative to Debian (stable), then I can't imagine how bad the situation is there. I often build software myself because Ubuntu is very often stale.

Ubuntu LTS (freezes every two years) or "regular" Ubuntu (freezes every six months)?

I'm wondering if the periodic freeze-the-universe model that many distros use reflects a world that doesn't really exist anymore where distros came on DVDs (or CDs, or floppies). Whatever version you had on the disc, that's the version you're going to use.

I just started playing with FreeBSD in a VM, which has a frozen base system and constantly-updated packages separate from it. This works better for software you don't think of as an "OS component" but the question then becomes where you draw the line.

Or maybe it's just a fundamental disconnect between consumer-facing "move fast and break things" and enterprise-level "never break anything even if it means you can't move at all" and there's no way to make software that works for both.


This isn’t how Ubuntu even works. Ubuntu doesn’t just “freeze” their operating system every two years. They are constantly delivering package, security, and hardware enablement fixes. Often the stuff that lands in Ubuntu non-LTS versions ends up in the LTS point releases. They keep a stable base of x.x versions but they definitely backport bug fixes to x.x.x versions of their software packages. For example there was a bug in sudo like 3 weeks ago, and Ubuntu immediately issued a fix within hours of the upstream project’s fix.

They back-port "high-impact" bugs such as security issues (your example), severe regressions and bugs causing loss of user data. They do not back-port other bug fixes or new features. The result is that you often find the version included in your Ubuntu release is stale. See https://wiki.ubuntu.com/StableReleaseUpdates#When

Maybe "version freeze" or "feature freeze" is a better term. The sudo bug was fixed in version 1.8.28, but Ubuntu LTS didn't upgrade to the new version. They're still on 1.8.21p2 from 2017, but with the bugfix and other Debian and Ubuntu patches applied, resulting in a bizarre package version of "1.8.21p2-3ubuntu1.1".

Which with something relatively small and stable like sudo, is one thing. For big projects on rapid release cycles, like GNOME, it's got a much bigger impact.

Though I do see that Ubuntu does keep updating Firefox and Chromium to the latest versions in LTS, because they're so big and change so fast that backporting fixes has become practically impossible. That looks like a very rare exception to the rule; your typical Python library won't be getting that treatment.


I've always preferred to use Debian stable just because it's stable. And because it's arguably got the latest security updates.

But for Tor, I always use the Tor Project repository. Or for Docker.

And then there's stuff that won't even build in Debian, because it's been developed specifically for Ubuntu.


My former boss had us use Debian stable in production for this reason. He did not like apt-get related surprises.

If you need a continuously updated Debian then Kali may fit the bill - it’s not just for security work.

Well, in this case people should have been using Debian Testing instead of Stable.

But yeah, it's often the case that people don't understand what Debian Stable is and its trade-offs compared to Testing, and end up unhappy with it or switching to Ubuntu (which is ~very~ similar to Debian Testing).


Unfortunately, Debian testing doesn't get security updates.

Good to know. From https://wiki.debian.org/Status/Testing, here is some more detail:

>there is security support for testing, but in general it cannot be expected to be of the same quality as for stable:

>Updates for testing-security usually get less testing than updates for stable-security.

>Updates for embargoed issues take longer because the testing security team does not have access to embargoed information.

>Testing is changing all the time which increases the likelihood of problems with the build infrastructure. Such problems can delay security updates in testing.


One can think of Debian testing as the "next-stable".

How does it work? 1. Upstream releases a new version; it goes to unstable. 2. The package is tested for some days in unstable and gets promoted to testing.

So saying that testing doesn't get security updates is somewhat incorrect, since you are grabbing recent software. But on the other hand, having too recent software also has its downsides ;)


I simplified a bit. Yes, Debian testing gets new updates, which means it gets security updates. Eventually. It can (and does) take days for critical security updates to migrate from unstable to testing after stable has access to the patched version.

https://www.debian.org/security/faq.en.html#testing

> there is a minimum two-day migration delay


> It can (and does) take days for critical security updates to migrate from unstable to testing after stable has access to the patched version.

Now you are making way too many assumptions with this phrase.

Do you really think it makes sense for critical security updates for stable to have to pass through the normal release cycle? :)


I'm sorry, was my message unclear? There were no assumptions.

I'm speaking from experience: when I was using Debian testing I would usually receive security updates days after they were available for Debian stable.

Obviously security updates for stable do not go through normal release cycle.

I wasn't commenting on stable security updates, but on the lack of timely access to security updates on testing.


To the best of my knowledge, this is not true. Have a link?

See this comment: https://news.ycombinator.com/item?id=21492080

Also, this: https://www.debian.org/security/faq.en.html#testing

> there is a minimum two-day migration delay


Agreed, but that's actually a (UX) problem that Debian should fix. "Testing" is an awful name for "stable enough for normal use". When I first installed Debian I made the same error of installing stable on the desktop and then fighting with it to install packages from testing... Just renaming testing to "regular" would prevent lots of wasted time all around.

It's been this way forever, though - when I started woody was "stable" but obsolete the day it was released. Since "stable" and "testing" are aliases of branch names, changing them would break scripts all over the place. You move to Debian, you have to learn to speak the language.

Though the truly baffling bit of Debianese is "contrib" which means "this is free software but depends on non-free software." I can kinda see how it came to mean that, but it's very non-intuitive.


Testing is only ok for desktops if you are ok with reinstalling it every so often, like with other distros. It won't last longer than your hardware, and will get odd problems after an upgrade once in a while.

Stable basically means it won't change, and says nothing about freshness. Debian has recently adopted a policy of releasing on a time basis, so it's never very stale.


I used testing for many years without reinstalling.

I'd go back to Debian and their stale packages if only they had scheduled releases like Ubuntu has. Imagine Debian 19.10, 20.04 and so on, with Long Term Support on the .04 releases every other year. What bliss.

Debian has been doing a stable release every two years for 14 years now. If you want the equivalent of a non-LTS Ubuntu release, you should use Debian Testing (despite the naming, it's pretty stable).

Yup, and Debian recently started committing to 5-year support cycles, so it's basically the same as Ubuntu LTS releases (can skip one, but not two releases), with the difference that Debian is released when it's ready, whereas Ubuntu is released on a schedule.

Personally, I don't trust Ubuntu LTS releases until they get their first point release, and even then I'm skeptical since they're a bit more loose with package versions on stable. I do trust Debian when it first releases because they're far more rigorous in their testing, though I usually wait a week or two before doing a release upgrade just in case.

I used to really like Debian testing, but I've since moved to OpenSUSE because they have a real rolling release (for my desktop) and a solid release based version (for servers). I like Debian, but testing gets a bit sketchy around release time (frozen, and then a ton of updates), and I honestly don't trust Sid aside from pulling in the odd package. I don't trust Ubuntu at all, since it has caused me far too many problems in the past.


Making Debian packages is a colossal pain in the ass, or at least it's poorly documented. I've tried to learn it twice and abandoned it for more user-friendly solutions to the problem.

Some people may say I'm stupid for not figuring it out, but the standard for usability of software has improved a lot since these systems were invented. The UX needs a serious overhaul.


I guess you are exaggerating quite a bit.

Creating a Debian package is actually pretty straightforward:

1. download upstream tarball.

2. execute dh_make -f <path-to-tarball>

3. debuild -us -uc -b

That is it! There is no secret.

dh_make does the heavy lifting of generating everything you need.

Your only job is to declare the dependencies (build and runtime ones) inside the "control" file, and maybe change the "rules" file (it is a Makefile).

There are also several helpers that go even further and automate 99% of the process.
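Roughly, the whole flow looks like this (package name and version are placeholders):

    wget https://example.org/foo-1.2.3.tar.gz
    tar xf foo-1.2.3.tar.gz && cd foo-1.2.3
    dh_make -f ../foo-1.2.3.tar.gz      # generates the debian/ skeleton
    # edit debian/control (dependencies) and debian/rules (a Makefile) if needed
    debuild -us -uc -b                  # builds an unsigned binary .deb one directory up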


> That is it! There is no secret.

I've been using Debian for 20 years and have never seen this advice. I currently use custom scripts for building local packages of things.

I think this comes down to Debian's weak documentation. The recommended intro guide from the wiki page [1] doesn't say this and looking through the maintainers guide I do see this [2] in there but it is buried in pages and pages of other docs.

Really, someone needs to write a new basic intro that doesn't bury the lede. It should start with this then expand on what to do for various issues.

Or maybe start with one of those 99% automation options, if they are even easier. As it is, you cannot find this information without someone telling you, since you cannot find it by searching for it or by reading the docs.

[1] https://wiki.debian.org/Packaging/Intro

[2] https://www.debian.org/doc/manuals/maint-guide/first.en.html...


You should expand this to blog post size and publish the hell out of it. Few people know this as the gp alludes to.

And how is using a package built like this any better than curl | sh from upstream? You're still trusting the upstream tarball. Now in addition you are trusting the packager.

It’s improved a lot but also consider how many people use FPM to generate multiple formats very easily.

It seems like you're assuming that there's someone vetting these packages. For enterprise distros like Red Hat that's certainly true. Community package maintainers in, for example, the Debian project provide some safety as well. But there are plenty of package managers where that's just not the case. In the case of Homebrew, the package manager pulls down the program directly from upstream and installs it. It's exactly the same as downloading a tar file from the developer's website over HTTPS. Same with npm. Some package managers like Maven and NuGet will rehost artifacts but if the project owner is malicious or compromised then that won't help - so the risk profile is again basically the same as downloading a tar file from the website.

> In the case of Homebrew, the package manager pulls down the program directly from upstream and installs it.

Sure, in the most basic case it does this, but even in the most basic case it does more than that. At minimum it also verifies the download matches a known good hash.

The important part (to me) is that Homebrew also ensures the package installed conforms to Homebrew’s standard. There are a lot of standards (install in /usr, /usr/local, /opt, etc.) on something as simple as where the files are put, let alone how it’s built and how its dependencies are pulled in.

So to say there’s no value in installing from Homebrew over curl|sh is clearly misguided IMHO.


Homebrew is only as secure as the results of that first curl command

     /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
I get why https://docs.brew.sh/Installation doesn’t discuss the versioning or security practices. It is interesting that homebrew doesn’t seem to interface with macOS’s signing and installation practices.

Reminds me that there is still no official package manager on macOS. So https://nodejs.org/en/download/ has you comparing checksums.


1. Using curl|sh isn’t the only way to install Homebrew

2. Most of my post wasn’t addressing security, but actual real usability gains by using Homebrew.


How exactly is the file going to get tampered with if you're using curl-to-sh? Everyone uses HTTPS nowadays. Validating the hash is not really doing anything significant.

As for ensuring that the package is well-behaved, could you elaborate on that? I'm not aware of Homebrew doing something like chrooting to /usr/local before running the install script. And the install script can do anything as your local user, same as curl-to-sh. Perhaps the Homebrew maintainers would catch something nefarious in the formula itself, but given that most formulas download code from the Internet and run it, that's not much help.


> Validating the hash is not really doing anything significant.

Yes it is, it’s ensuring that what the maintainer of the formula verified is still what’s being downloaded now. HTTPS does nothing to protect against someone modifying the source URL, but the hash does (assuming the maintainer actually inspected the initial download, which many if not most do).
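(That check is essentially the manual equivalent of the following, with a placeholder URL and digest, except the package manager does it for you on every install:

    curl -fsSLO https://example.org/tool-1.2.3.tar.gz
    echo "<expected sha256>  tool-1.2.3.tar.gz" | sha256sum -c -

)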

> but given that most formulas download code from the Internet and run it, that's not much help.

1) the previously erroneously dismissed hash helps there.

2) the sandbox which you linked to later also helps too.


I suppose it comes down to whether the Homebrew maintainer who accepts the formula PR containing the hash is actually examining the upstream code in detail. I think that is unlikely; there's too much code for Homebrew maintainers to be experts on everything contained in homebrew-core and follow every single patch.

So yes, the hash prevents the upstream project from switching out the code at any time, but if they wanted to add some malicious code all they have to do is file a homebrew-core PR and hide it in a legitimate change.


> I think that is unlikely; there's too much code for Homebrew maintainers to be experts on everything contained in homebrew-core and follow every single patch.

By that logic, everything in any package manager should be treated with distrust then. Debian, RHEL, Arch, etc etc.

Not saying I disagree with distrusting, just making the point that risk exists everywhere, at some point you have to decide what you’re comfortable with.


To some extent, sure, but I think that extent is greater with Homebrew. It's my understanding that package maintainers for Debian, RHEL, etc are typically experts in the packages that they maintain. They overlay their own patches to ensure compatibility and submit patches upstream. With Homebrew there's only a small number of committers who maintain the homebrew-core repository, accepting PRs from thousands of people in the community. It's just a different situation.

> It's my understanding that package maintainers for Debian, RHEL, etc are typically experts in the packages that they maintain.

That’s certainly true in some cases, but is definitely not the case in the majority. I think you are letting your personal biases color your judgement too much.


They do chroot. You can only install to specific directories

Ah, I found it. On MacOS they rely on sandbox-exec which uses the sandboxing mechanism provided by the kernel:

https://github.com/Homebrew/brew/blob/e2c76cce8e01fd80e0910d...

https://github.com/Homebrew/brew/blob/master/Library/Homebre...


Many package managers provide a real audit trail, and this (IMHO) is very valuable. For example, in python's PIP (and apparently in NPM too), the filenames are never reused:

https://github.com/pypa/packaging-problems/issues/74

A developer's website may change at any time, and even go back and forth between good and bad versions. The pip software version won't. Put a version pin, and you can be sure you get a good package or a clear error.

Granted, you can record/verify the checksum of downloaded files as well, but many people don't. And crazy practices like 'curl | sh' make that impossible anyway.
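For instance (the package name and hash are placeholders):

    pip install 'somepackage==1.2.3'
    # or stricter: pin hashes in requirements.txt so a swapped artifact fails loudly,
    #   somepackage==1.2.3 --hash=sha256:<expected digest>
    pip install --require-hashes -r requirements.txt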


npm removes malware when reported.

Which is repeating that sources matter: curl from a source like Github with a solid abuse process is a very different story than an unknown server.

s/when/if/

Fixed that for you.


I don't believe that changes the meaning.

(sorry, it was an obscure inverted reference to the saying "not if, but when")

There's still an issue with the package managers requiring arbitrary shell commands to be run, often with sudo. From docker[1] there are steps like these:

    Install packages to allow apt to use a repository over HTTPS...
    Add Docker’s official GPG key...
    Use the following command to set up the stable repository...
Once you've done that, then it's just apt-get install.
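Concretely, those steps boil down to roughly the following (from the linked docs; double-check there before copying):

    sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update && sudo apt-get install docker-ce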

> And we didn't even start to talk about updates - as if that isn't a security concern.

I think that's the worst part of it. Some software will nag you about updates, yarn and pipenv for example, but it's far more reliable to have one system that keeps everything up to date.

[1]: https://docs.docker.com/install/linux/docker-ce/ubuntu/



Well, if this were true - "Every decent Linux distro has a package manager that covers 99% of the software you want to install" - we wouldn't have to install it through sh.

4/5 of the examples the author gave have a package in Fedora. The one that doesn't (oh-my-zsh) is simply a git clone so it doesn't make much sense to package it.

Rustup doesn't seem to have a fedora package?

For one of the examples provided, rustup.sh, there were no complete packages the last time I looked for them. There were some packages on Debian and Fedora, but I ran into problems configuring the Rust plugin for VS Code because it assumed that the rustup executable was present, and it was not (at that time). Going down this path, one then becomes dependent on using rustup to update the Rust installation. Now I need to run two commands to keep my system up to date. (pip? Make that 4 commands: pip, pip3. CPAN? Yet another.)

I have some confidence that at the least the Debian packages won't be changing rapidly, making it more likely that any problems will be discovered before they get to me. A script (or tarball) fetched and installed from the Internet can change literally from one second to the next. If a trusted site is compromised the next download could be tainted.

Not all "package managers" instill confidence. I've heard too many bad things about npm and the associated environment and won't have it on my systems, but I am not a web developer so the impact is the occasional utility I have to forgo.


Topgrade (written in Rust) [1] allows you to queue different package managers, including those you mentioned.

[1] https://github.com/r-darwish/topgrade


Indeed. It must be the other 99% that aren’t in the repos because I have to install a lot of things that aren’t available in repos (or that haven’t been updated in stable in a long time).

>Not so bad compared to what?

Compared to downloading a binary, and to 10 other similar methods people use.

>Yeah, compared to downloading a tar file from the website and running ./configure, make, etc. - right, it's probably quite a similar risk. But who does that?

Millions of people?

And even more just download binaries off of websites...


Isn't there even a comment from Linus (could have been someone else) saying that he does exactly that and if that doesn't work gives up on the software?

But Linus is bad at everything except writing C code.

The package manager is only an option if you have root rights. Otherwise you either download binaries or compile it yourself, from the source downloaded from their website.

There are user-space package managers; e.g. Conda, Flatpak and Homebrew come to mind.

Ironically conda and Homebrew themselves need to be installed from a script off the internet

Conda has an .exe installer on Windows and a .pkg installer on macOS. Both signed by Anaconda, Inc. for the OS. There are RPM and deb bootstrap repos for Linux. Then there’s also the .sh shar file installer.

It would be nice to have something like a fusion of conda and apt to work right out of the box on distros like Ubuntu. Like an "apt install --user". I don't understand why we need root rights to install programs that will not need root rights to run anyway. Seems like a huge and obvious oversight to me. I guess it's not a big pain point because in most cases people use their own machines and have root access. Not always the case though in companies or university labs for example.

I have been wondering this for ages. Just last week I had to set up KeePass, VS Code, IntelliJ, and Guitar (a git UI) by manually unpacking tarballs (luckily already built) into ~/bin. Guitar luckily had an AppImage which worked beautifully as a single executable.

For that reason alone I'm a huge fan of AppImage above snaps and flatpaks.


A giant chunk of Linux users are software developers. Running code that has not yet been packaged or never will be is an extremely common occurrence for a developer.

> Yeah, compared to downloading a tar file from the website and running ./configure, make, etc. - right, it's probably quite a similar risk. But who does that?

Many Linux users? I get most of the software I need through package managers, but somewhat frequently I need to build it from source. Particularly if I want the most up-to-date version on Debian.

Git cloning a repo is marginally better in the sense that, well, it's open. Theoretically if it were doing something nefarious, someone would've noticed. Is it perfect? No, of course not. I still think it's better than running random curl'd scripts.


>Yeah, compared to downloading a tar file from the website and running ./configure, make, etc. - right, it's probably quite a similar risk. But who does that?

Are you asking who compiles and runs software? A lot of people; for example, I do every time a new Emacs version comes out and it isn't in the package manager yet.


> it's probably quite a similar risk. But who does that?

After downloading you can at least do some sanity checks - even if you don’t checksum it, if you are familiar with it, does it look right? Is make doing anything weird? Whereas curl|sh doesn’t give you this opportunity.


I disagree with some of this, i.e. pastejacking.

Plenty of software projects put more care and focus into their software than into their website. If you're running a vulnerable version of WordPress or whatever CMS, it'd be easy for someone to insert something malicious without being noticed, whereas something that modified your code would show up in git, code reviews, etc.


This is pretty much the only real answer to the article. However, if the software itself is distributed via the site then the same caveat applies since replacing the release itself is much more enticing. It comes down to either the software being published on Github where a hijacked release might be noticed, and/or files having signatures that you can somehow trust.

How is this any different from just downloading a binary from their website and running it? Which people have been doing for ages?

It's different because in a lot of cases there are more layers of security in place that haven't been discussed.

For instance, it was typical in the past to sign the packages/software and publish the public key either to the site or somewhere else. Private keys used to sign the software would never touch the website infrastructure and would live on, typically, much more secure build or sign-only infrastructure.

The public keys used to verify the software could also be delivered through a separate channel, signed by a trusted third party, etc. With Trust On First Use (TOFU) you'd trust the key when you first obtain it and be notified if the key ever changed unexpectedly.
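In practice that looks something like this (file names and key ID are placeholders; the key should come from somewhere other than the same web server):

    curl -fsSLO https://example.org/tool-1.2.3.tar.gz
    curl -fsSLO https://example.org/tool-1.2.3.tar.gz.asc
    gpg --recv-keys 0xDEADBEEFDEADBEEF                      # ideally verified out-of-band
    gpg --verify tool-1.2.3.tar.gz.asc tool-1.2.3.tar.gz    # fails loudly on a bad signature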

I agree with the general point of this article. This is the weakest part of the arguments IMHO.


Plenty of installation scripts ask for root.

... and this is a red flag which generally forces me to stop and check why

If I download the latest music editor and it wants root, I will be very suspicious.


Pastejacking should be mitigated if you use zsh, as it will never run pasted commands automatically. From a quick test it seems that recent(?) versions of bash also implement this feature and have it enabled by default. I don't know about fish or other shells.
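If you want to check or turn it on in bash, a sketch (support depends on your readline/bash version):

    bind -v | grep enable-bracketed-paste               # show the current readline setting
    echo 'set enable-bracketed-paste on' >> ~/.inputrc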

Pastejacking is not the only possible attack by a compromised server; you can also change the content of the script when the user downloads it through curl or wget.

Oh My Zsh uses GitHub for their script, so I trust it more than if they hosted it themselves, for example.


If people have access to change the content of the script then they can also change foo-1.2.3-src.tar.gz or foo-1.2.3-linux-amd64.gz. These are all general problems with downloading anything from the internet.

Right, but the attacker can make it look legit even to someone who looks at the script. The attacker can change the content of the script based on the user agent, or even by detecting when you pipe it to bash[0]

[0] https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...


Well, there is a way to resolve that: have a command in between curl and sh that only prints to stdout once stdin receives EOF. A double tac is an example.
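i.e. something like (the URL is just an example):

    curl -fsSL https://example.org/install.sh | tac | tac | sh

tac has to read all of its input before it can print anything, so sh only starts once the download has fully completed.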

Or, you know, pipe to a file and verify the content, then execute.
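Which is just (URL illustrative):

    curl -fsSL https://example.org/install.sh -o install.sh
    less install.sh     # actually read it
    sh install.sh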

The set of people who have access to the GitHub project (push privileges) might not be the same as the set of people who have access to said project's website.

Not running pasted commands automatically isn’t a security feature on its own, because the paste “brackets” of bracketed paste can be inside the clipboard. It’s kind of ridiculous that most terminal emulators don’t defend against that by default.

Are you talking about the shell or the terminal protecting against paste jacking? I'm aware of terminals now protecting against this.

Fish also does this.

> Not knowing what the script is going to do.

Yep, this is why i hate piping curl to sh. Much prefer how e.g. go does this:

Tells you to just run

    tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
It's not that I don't trust the installer script to not install malware. But I don't trust the installer script to not crap all over my system.

Try using Qubes OS. It will allow you to run such scripts without having to worry about your system being screwed up.

Note that Qubes has some drawbacks, the main one being that it doesn't support GPUs, so not everybody is in a position to use it.


I really like Qubes and have been running it for a while. What I'd love is to have something like qubes but a bit more lightweight. Using containers instead of full blown VMs. Basically trading some isolation guarantees and security for more usability.

I think Silverblue might be what you want. It's still in beta though, but if you're OK with that you might want to try it.

I'm surprised that no one has yet mentioned that piping curl to bash can be detected by the server (previous discussion at https://news.ycombinator.com/item?id=17636032). This allows an attacker to send different code if it's being piped to bash instead of saved to disk.

IMHO, "curl to shell" is uniquely dangerous, since all the other installation vectors mentioned don't support the bait-and-switch.


To me, this seems like only a slightly more advanced version of sending malicious payloads only to curl user agents and not something uniquely dangerous.

If I was already using curl to predownload and audit the script, I'd probably just execute the script I already downloaded which would be safe. Most of the people piping to bash directly do no auditing at all because they trust the source. If you're going to put a malicious payload in a script, you don't have to be that tricky about it.

Most people wouldn't know anything was up in any event until someone else discovered the attack and started raising a fuss on social media. I don't think serving the malicious script just to people who pipe it to bash (or really just download it slowly for any reason) would stop everyone from finding out. It would just make the malicious script more notable when found.


Part of the confusion comes from the fact that there are several different points to be discussed, and they're easy to mix up. For instance: software trust in general, web server security vs repository security, reproducibility, etc.

In this case, even "curl is dangerous" has at least two variations. The first is not knowing what the server is sending, the second is that the server can change what it is sending. My complaint is with the latter.

For example, a file in a repository somewhere or uploaded to a compromised web server is static. Everyone who downloads the file gets the same thing.

A file served by `curl | bash`, however, isn't. The server could send different files at different times of day, or only send malicious payloads to certain IPs (like known TOR exit nodes), or certain geographic locations, etc. which is something no repository I know of is even capable of.

Archives, packages, and installers downloaded from a server (instead of a repository or FTP server or S3 bucket where the attacker controls the file but not the server) share this weakness, so that alone doesn't make curl uniquely dangerous.

Where `curl | bash` differs from installers, however, is that it's interactive, so the server can alter its behavior on the fly. This is dangerous because, with installers, the attacker must commit to sending either a clean or infected payload before the installer can tell them if it's being run or not. In this way, even archives serve as a kind of a poor zero-knowledge proof of what the software is, since the attacker needs to commit to a version before knowing what the user intends to do. There's normally also a file left on disk as well.

With `curl | bash`, however, the server has the unique opportunity to get a callback from the installer before it has finished sending it, which means the server doesn't have to commit to sending malicious code blindly and hoping it's not being saved by someone who intends to audit it. Also, `curl | bash`, by default, leaves no trace, further frustrating auditing/reverse-engineering attempts. (Adding insult to injury, there's no way to check the malicious payload before running it, since running it is what causes it to appear. Even if run inside a VM, this can also be abused by an attacker to try to cover their tracks in real time)

In this way, `curl | bash` allows for obfuscation/anti-debugging techniques that no other method I know of offers. Hence, my opinion that `curl | bash` is "uniquely" dangerous.

Edit: Thinking about this more, this generalizes to any installer that interacts with the network, since all the attacker needs is a way to detect execution and some way to avoid leaving artifacts. In this way, curl is indeed not quite "uniquely" dangerous, since it's tied with other network-based installers. However, since the other popular installation methods don't have the ability to obfuscate their initial payload like this, I think the point still stands. (Obviously feel free to correct me if I overlooked something)


You might be interested to read the section titled "User-Agent based attacks" of the linked article.

You might be interested to read that the timing attack doesn't rely on user agent. It detects what curl is feeding the output into.

The author's rebuttal to user agent attack doesn't rely on how the server decides what content to serve, and so is naturally generalizable to the timing attack. It's unfortunate how that section is named, because it fools people who didn't read the article into thinking they had a novel counterpoint, when in fact the author already anticipated their exact argument.

Their argument is this:

> you’re already trusting the vendor and site, and you’re already going to run the software that install.sh downloads.

I don't see how this makes sense? People do check what they run, and especially for sudo-calling commands.


If the folks at rust-lang.org are malicious and willing to put in some extremely customized web-server logic to serve up evil code when they think it won't be noticed, why wouldn't they just sneak it into ./configure or some unnoticed corner of the compiler's source or the standard library or a precompiled binary?

To be clear, when I go to rust-lang.org, my goal is to download a large amount of extremely complex code that I never plan to audit myself and run it repeatedly on my computer, plus also trust it to download even more code that for the most part I plan to never read, and finally I'm going to trust it to take code and turn it into binaries which at least some of the time will run as root. In fact, it's very hard for me to imagine a scenario where an attacker is able to implement the timing attack in the grandparent post (which, to be clear, is very cool and clever and interesting), but is unable to pwn my computer in a huge number of ways that are both technically simpler and harder for me to detect.

The OP's point, as I understand it, isn't that it's impossible to pwn people via `curl | sh`, it's that in many cases, such an attack doesn't fit into a reasonable threat model.


I disagree. I think "webserver got hacked, and no one noticed" is a very realistic threat model. The webpage tells me to get a script from "sh.rustup.rs" -- what is the security behind this server? How can I be sure that it was not hacked? If the server was hacked, how long would it be before the hack it is detected?

I have full trust in the Rust team, but even kernel.org was hacked once! And the worst part: experienced users aren't likely to notice that the installer does something weird -- because it is fully opaque.

An alternative approach is a manual "git clone". This is way more secure, because the same endpoint and protocol is used by both new users and devs doing daily work.

Can someone compromise a dev account and backdoor the git repo? Sure. How long before this is detected? Not very long at all; I bet there are people who work on Rust and watch every incoming change.


An unmodified curl invocation does not require weird timing-based attacks; it sends an appropriate user-agent header the server can use (and which the article already addresses).

Different attack vector.

By detecting the usage of `curl | bash` you can serve a different script only when someone does it, so someone doing `curl -o /tmp/some_script.sh` to audit the script won't see the harmful code.

It opens you up to a literally undetectable attack.

Nonetheless, the point of the article's author does have some truth: there is always a degree of trust involved when you're installing binaries from a third party. By using curl|bash you're just increasing the required trust a bit.


> It opens you up to a literally undetectable attack.

This is the crux of it for me. This is why it is dangerous. The author appears to have overlooked this attack vector entirely.


Is it undetectable? `curl | tee file | bash` should detect IMO.

True, you can detect it without a way to stop the damage! Or easier and more thorough,

    curl | bash -x

Piping through tee doesn't trigger the server-side detection (it doesn't stop to read every few ms), and the -x flag isn't inherited, so it's gone as soon as subshells are invoked, which is pretty normal for an installation script.

This has all been mentioned in the linked comment thread


Actually the server-side detection in [0] isn't really affected by putting tee in the middle... and neither is -x, of course.

Good point about -x being vulnerable to an adversarial script; even a simple set +x would be enough!

Where's the link where this has been mentioned? I missed it.

0: https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...


Use a disposable virtual machine to isolate the damage while dumping the script; this way we can detect an attack without compromising ourselves.

My experience is that software that installs via curl|bash tends to ignore my preferences as expressed via $PREFIX/DESTDIR, $XDG_{CACHE,CONFIG,DATA}_HOME, etc. It'll install who-knows-where and probably leave dotfiles all over my home directory.

Maybe curl|bash is functionally equivalent to git clone && ./configure && make && make install, but my bet is on the one providing a standard install flow to be a better guest on my system.


The curl|bash might actually clone a repo and build it. Your concern is a different one.

Yes, it could. But if that were the case, the site would say one can just clone the repo and build it instead of sending everybody down the curl route.

Some of them do say that.

yep, and that would be bad! If clone / build steps are separate, I can at least verify the hash looks sane, or mirror it locally, etc..

In the curl|bash approach, all of this is lost.


The points raised in the article are correct, and I'm much more concerned with the willingness of people to run arbitrary software on their primary computers in general than with the specific case of piping to sh. I think piping to sh just emphasises how insecure the entire practice is, and arguing against that is analogous to closing your eyes to protect yourself from the attacking tiger.

The only system I've worked with that helps you truly deal with this is Qubes OS. Perhaps Fedora Silverblue will achieve this as well, once it comes out of beta.


This article started life as a more general article about security and trust, but I decided to post this particular part as its own article.

There is a lot to be said about trust in software, but in general I agree that mitigations against untrusted software could be improved. The problem with this is that it's often hard to do this without affecting usability and, more importantly, in practical terms for a lot of people the status quo seems to be "secure enough", even though there are areas for improvement we (as an industry) should work on.


Minor correction: it's "status quo" and I think it should be italicized.

Has running a curl-to-bash command found during normal user-initiated web browsing ever resulted in a malware infection? Even anecdotal evidence would be valuable at this point.

A close anecdote. I posted one on IRC about 15 years ago, before this was even a thing, which called home by calling curl to an endpoint I controlled. The sales pitch for the script was setting up vim properly, which it did do. 80% of the downloads executed instantly. I had 12 people run the script. No one read it first or downloaded it before running it, as the request count matched the callback count exactly and there wasn't more than a couple of seconds between the request and the callback.

After the fact I realised I should have included whether the account was root or not in the callback!

Alas all it needs is some trust and a sales pitch and someone will run it. At the time I didn’t think of the security consequences until after I had done it.


The element of your story that deals with how you distributed your solution has no relevance to the rest of it. You could have been using any type of distribution; it's irrelevant that you actually used curl. If people trust you and run your code, you can abuse that trust.

You could argue that not having high-quality independent third-party review on a controlled marketplace (like the Apple App Store) has security implications, because that would have checked for and vetted against abuse. But again this has nothing to do with curl.


Not intentionally malware.

But the OPAM install [0] deleted my $PATH, and it took me a while to figure out how to fix that one. They've since fixed what allowed the problem to happen [1]; well, it was a combination of that (normal shell problems) and a power outage that killed the installer just before completion (in which case the script may just think it's complete).

But I'm sure similar catastrophic side-effects can occur in other install scripts out there.

[0] https://opam.ocaml.org/doc/Install.html

[1] https://github.com/ocaml/opam/issues/2165


It's also already more than a decade ago, but a friend of mine wanted to format a USB stick. There was this tutorial online on how to do it. I'm not sure if he copy-pasted it or typed the command himself, but anybody who has used dd a lot knows what happened: he damaged his root filesystem irreversibly by pointing to the wrong disk. I think repairing damaged filesystems with the Norton suite or so was never a thing on Linux... ;)

Long story short, even when typing well-intended shell commands, you can damage your system. (Even on Windows or macOS!) Directly piping curl into bash shows a lot of trust. It's amazing how well-intended the web is, must be at least 99,999999%

That said, I cannot count how many times blindly following some tutorial or some shell-based installer made my carefully crafted *nix installation a bit worse. Nonetheless, most adware/spyware/malware I got through commercial download websites, I think.


Not as far as I know, but I have heard about people pasting the Wrong Thing into a root shell.

In a way it's a casting error. A type safety violation. You paste text into a privileged shell and coerce it to be sh, and when it goes wrong the sh input is rich in < and > characters.

Friends of mine have mentioned at least a) people accidentally pasting much more than the intended line into sh because they selected more than intended and b) sites that modify the cut buffer silently to add some "pasted from … blah … like us on facebook" or somesuch, I forget the details. The person who wrote the page intended one line to be castable to sh, another person who worked on the site added the script that transformed the cut/paste without realising that.


Yes. I had a pretty bad one from a commercial software company. Running any script someone else wrote badly, intentionally or otherwise, is dangerous. The source is moot. Rather than provide distribution packages they had a shell script that installed and updated their stuff. If you ran the update script it would evaluate rm -rf ${SOFTWARE_ROOT}/

That environment variable was not set if the software hadn’t been installed and it wouldn’t run unless it was a root shell.

Guess who ran the update script instead of the install script and hosed the machine? I gave them a whole lifetime of bile over that.

The product turned out to be horrible as well.


Once I worked at a place where out-of-hours support accidentally pasted an entire maintenance guide into PuTTY (right-click paste is not a good idea) on a prod Oracle server. It was fine until the lines in the doc which read:

Dbfile1 -> /path/to/dbfile1 ... Etc

Which needless to say hosed the entire box... over Christmas...

And this is why they don’t use putty anymore ;)


Funny thing, I've pasted wrong text into the terminal on multiple occasions—from having copypasted some paragraphs of text and immediately forgetting about that. Each time I expect my files to be botched, but the worst that's happened so far is some unwanted files appearing due to ‘>’ being there in the text.

I've had a server become unbootable after applying a curl|sh (and I actually wget'ed, skimmed the script, then executed), and it errored out. I'd just taken a backup so I didn't do much forensics before I restored.

I know, badware ≠ malware


Yeah I was asking this question on SO - How to responsibly publish a script - but got no response, a sarcastic "Tumbleweed" badge even. My concern was that the script could be easily hosted elsewhere and we'd have multiple versions with potential malicious mods flying around. In the absence of alternatives curl-bashing isn't so bad after all because it promotes a canonical download location from a domain/site you control, even if I hated it initially as a long-term Unix user.

> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source.

Not true: when you clone a repo with signed commits, you have forensic evidence that the repo signer provided the code you ran, while when you use curl you have … just the code itself.

That's not a lot, but it's not nothing.
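For example, a quick check after cloning (the repo URL is illustrative):

    git clone https://example.org/project.git && cd project
    git log --show-signature -3     # shows GPG verification status per commit
    git verify-commit HEAD          # non-zero exit if HEAD isn't validly signed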


How many repos are there that actually sign commits, and of those, how many users are doing validation that the signer of their local checkout’s commits is actually the key they expected?

The line you’ve quoted doesn’t say that there’s no fundamental difference between curl | sh and cloning a repo with signed commits, and I think it’s a stretch to think signed commits have enough usage among devs / users to make them a viable option.


I don't think signing commits matters. What matters is that if the webserver is compromised, very few people are likely to notice, and the evidence can be gone at any time.

But if a GitHub repo is compromised, anyone who pulls the repo can notice strange commits - and the evidence cannot disappear, as a public head rebase will bring even more scrutiny.


I hate install scripts, period. They feel so Windows-ish. Just distribute a .deb, .rpm, .snap, homebrew package, npm package, or whatever is the most appropriate for your software. All the scripting you need to do should be done inside of the regular package installation process, and even that should be kept to a minimum.

The only software that has any right to rely on an ad-hoc install script on a Unix-like system is the package manager itself. It's awful enough that I have to do apt update and npm update separately. Please don't add even more ways to pollute my system.


The problem with deb, rpm, etc. is that you need to add instructions for all supported systems one by one. Check out this site for reference: https://www.sublimemerge.com/docs/linux_repositories and compare with curl URL|sh, which can detect the target system and delegate to the appropriate one. Much simpler.

The root cause of this is that there is no universal packaging format for Linux, in my opinion.


> The root cause of this is no universal packaging format for Linux in my opinion.

Which some have attempted to solve, so now we also have Flatpak, AppImage, and Snap -- each with their own little issues.


I understand the difficulty. But if you're going to write a shell script that detects the target system and takes different actions, you might as well move that logic to the packaging system. It's much more robust, especially when it comes to dependency management and updating.

I wish there were a simple, modern, easily configurable tool that can take a declarative description of a project and spit out a ready-to-serve repository (just point nginx at it!) for most commonly used package formats. For Linux daemons this should be easier than ever before, now that systemd has gobbled up all the major distros.


> you might as well move that logic to the packaging system

Depends on who you mean by "you". A software vendor can definitely write a script, but they have no power over distributions to "move that logic to the packaging system".

This would have to be collaborative work by different distros, but from my casual look it's just not happening, as everyone is happy with their own package manager that's "obviously the best".

I do agree on systemd. I didn't like it before I moved to Linux; now I see a lot of value that it brings.

This may also be relevant: http://0pointer.net/blog/revisiting-how-we-put-together-linu...


> This would have to be a collaborative work by different distros

I think there was a misunderstanding between us. I didn't mean anything so complicated.

By "moving that logic to the packaging system", all I meant is that instead of using a script to detect whether your app is being installed on Ubuntu or Fedora or whatever, you should just build and publish separate packages for each distro you wish to support. The logic for selecting the right package for itself is already built into every packaging system, ready for anyone to use.

Hopefully the process of building a dozen packages with each release can be easily automated once it is set up.


If you want to support multiple platforms this is the way to go. I don't see a problem with it.

We have the freedom to use different package managers. It comes, like everything else, with its own drawbacks.


What’s the difference between a curl you blindly pipe into sh and a blind brew install/npm install command?

Based on experience, brew will place files in a place that I expect.

I can’t predict where a shell script will scribble or what other changes it will make to my system.


A brew recipe is a ruby program that can do anything on your computer. It will and does invoke any number of external processes, including: downloading and executing other recipes, running any number of compilers, arbitrary binary executables, and shell scripts, creating and linking executables, overriding system-provided binaries and libs, etc. etc. etc.

All these recipes and scripts are supposedly downloaded from GitHub. With supposedly anonymous user analytics: https://github.com/Homebrew/brew/blob/master/docs/Analytics....


isn't brew itself a curl'd script?

Installing brew is basically curl|ruby. But that's somewhat forgivable because you need to install a package manager before you can use a package manager.

The lack of information. Most install docs tell you about installation requirements. One-liners don't.

The average non-technical user is never going to open up the terminal and run commands. The well-educated technical user is going to be wary of untrusted sites and various forms of attacks (which I'm assuming the author of this post falls under).

IMO this is good advice for those that fall in the middle of these two categories, i.e. slightly technical people who run into problems and copy-paste solutions from Stack Overflow hoping that something will work.

> you’re not running some random shell script from a random author

This is exactly what is happening in the vast majority of these cases. These users are going to be wary if linked to an executable or installer, but "hey, just run this simple line of code" sounds like a very appealing solution.


> copy-paste solutions from Stack Overflow

On the other hand, a solution on SO that was a hidden attack would not gain upvotes and become the go-to answer for someone seeking advice there.


Depends on how hidden it is.

Agreed. If I don’t trust the server, or don’t have a secure connection to it, it is not likely wise to run any non trivial code downloaded from it.

Verifying a hash that comes from the same server also doesn’t make that much sense. Verifying a PGP signature would be a compelling reason to not pipe to shell, and that’s really about it.


Just because the connection is secure doesn’t mean it’s controlled by a trusted entity

Then perhaps running any non-trivial code from it is blisteringly unwise.

The problem is mainly that the script is executed without leaving a trace. If you downloaded the script and then executed it, you would have something to inspect in case something goes wrong.

It's too easy, and people with very scarce knowledge could develop a habit of doing this without asking questions, not even leaving any trace for a senior to inspect in case a problem happens.


If that were the case, the best option is surely `curl | tee $(mktemp) | sh`. In the case of downloading a script and then executing it, the script has the ability to modify its own contents.

It does leave a trace - the command executed is stored in your shell's history file, so unless a malicious script deletes the history (and it could also delete the downloaded binary if you checked it out from a repo), anyone can immediately see what was executed.

The history file only contains commands typed into an interactive shell. Commands executed from a shell script or piped into sh will not end up there.

For the most part this is a problem with non-rolling-release distros.

There are very few instances in which I've had to even use an installer on Arch. For many of those cases, the AUR provides a package that verifies the hash of the downloaded file anyway.

I've constantly been frustrated when using Ubuntu because something basic like having 'vim' not be months out of date requires a PPA.

The 'official' Rust installation method is a curl | sh. Or:

    $ pacman -Q rustup && rustup -V
    rustup 1.20.2-1
    rustup 1.20.2 (2019-10-16)

Rolling-release and fixed-version distros serve different purposes. A fixed-version OS has a set of software packages at specific versions which have been tested together, both by test suites and by the many users using the same version set. Security patches and bugfixes get patched in, but the packaged software doesn't undergo major changes. That's important if you're running a critical production system.

Rolling-release systems are awesome for personal machines where you can handle breaking updates or work around them. Usually I want the latest versions of everything when I'm doing exploratory stuff.

That said, modern software deployment is definitely moving away from "pick an LTS Linux distro and only change your application code"; instead, we mostly use containers now. A lot of production systems are probably still using the older technique, though.


I agree.

But no-one should be running this curl | sh nonsense in prod anyway, right? You at least want a defined version, so you'd save the artifact instead of piping.
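A minimal sketch of that, assuming a made-up release URL and a checksum file you keep under version control yourself (containing a line of the form "<sha256>  install-1.2.3.sh"):

    # pin an exact version; keep the script name and its checksum in your own repo
    curl -fsSL -o install-1.2.3.sh https://example.com/releases/1.2.3/install.sh
    sha256sum -c install-1.2.3.sh.sha256
    sh install-1.2.3.sh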

To me, the whole thing seems like a solution to a self-imposed problem. It reminds me of the old "frankendebian" stuff, in which people would be warned against having a system half-stable, half-unstable.


> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source

I would say it depends. If the commits are signed by a key you know, it's probably better. Even if that's not the case, cloning with SSH when you know the host key is also slightly better than downloading through HTTPS, where any (compromised) trusted CA can MITM your connection :) (you can argue that those two use cases are rare in practice, and I would agree with you ;))


I don't think SSH is more secure. You have to verify the server key to make it secure. How do you do that? I have googled a bit and didn't find an obvious page for the GitHub server keys. And even if I did, I would be fetching it over HTTPS, and thus back to MITM by a compromised CA.
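For what it's worth, GitHub does document its SSH key fingerprints in its help pages. One way to see what a server actually presents is something like:

    ssh-keyscan github.com > github.keys 2>/dev/null
    ssh-keygen -lf github.keys   # prints fingerprints to compare against the published ones

You'd still have to get the published fingerprints over a channel you trust, which circles back to the same bootstrapping problem.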

> Not knowing what the script is going to do.

This is more like: not knowing what to do when it doesn't work. And this is always the case until it works, which is just a local phenomenon; I can't expect things that work for me to work for others. So why not write expressive installation documentation with multiple steps instead of one-liners that either work or don't? There is just no in-between.

Take the installation instructions for Syncthing, for example:

    curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

    echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
These two steps are hard to automate if you don't have an interactive shell.

Same goes for the saltstack bootstrap script. It doesn't work equally well on all platforms, which is not a reliable state. So in the end I'll stick with the normal way to install things, which is very easy to automate.


I ran into this recently at work. I wanted to write a script that you could curl into bash to quickly set up some common tools.

Firstly, I made sure that the script told you what it would do before doing it.

Secondly, my instructions are two lines. Curl to a file, then run it through bash. A compromise, but if you mistrust the script, you can inspect it yourself before running it.
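Something along these lines (the URL is made up), so the download and the execution are separate, inspectable steps:

    curl -fsSL https://tools.example.com/setup.sh -o setup.sh
    less setup.sh     # optional: read it first if you don't trust it
    bash setup.sh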


> Either way, it’s not a problem with just pipe-to-shell, it’s a problem with any code you retrieve without TLS.

Well, yes. But the typical alternative is a tarball and a GPG signature - both possibly fetched over insecure transport, but verifiable (much like with TLS and a CA).

Git will typically be cloned via SSH or HTTPS, so to a certain degree over a secure channel.


If curl loses the connection to the source website while downloading the script, then the partially downloaded script will be executed, no matter what. This is a main drawback of the curl-to-shell approach, and the original article is missing it entirely.

A common solution is to wrap all the code in a function. That way nothing gets executed until the last line, the one that calls the function, is reached.

  main() {
     # all of the install logic goes here
  }
  # nothing above runs until this call, so a truncated download can't half-execute
  main

Common, but not universal. If I pipe a response body into a shell, I don't get to check whether they were careful or not.

No, it's addressed in the second-to-last bullet, "Partial content".

Yeah "it will happen anyway" misses the point that curl will notify you of the failed download before you run it, whereas piping it to sh will immediately run it

Is that really much of a problem? I can't remember the last time I had a download fail part of the way through, and those are usually much bigger than a bootstrapping script.

I mean it's unlikely, but imagine it did happen and something in the script like "rm -rf /some/path" gets truncated to "rm -rf /" and immediately run.

Even if your connection is TLS-secured, a MITM causing a connection reset after X bytes could still be a viable attack.


As was pointed out both in the article and the comments here, this is easily addressed by wrapping everything in a function (or subshell, for that matter).
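A minimal sketch of the subshell variant: the shell has to parse the whole ( ... ) group before running any of it, so a truncated download dies with a syntax error instead of half-executing:

    (
        set -e
        # all install steps go here
        echo "installing..."
    )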

A small benefit of downloading the installer is that this lets you run a checksum on it.

Yes, if you can acquire or verify the checksum via some other means, e.g. PGP or phone.

To confirm integrity in transit, I rely on TLS.
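For example, checking against a SHA-256 obtained via some other channel might look like this (the URL is made up and the hash is a placeholder):

    curl -fsSLO https://example.com/install.sh
    echo "<expected-sha256>  install.sh" | sha256sum -c -   # note: two spaces before the filename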


I remember someone curling a Heroku CLI install script; upon inspection, it would have tried to install a specific version of Ruby too, instead of just the client. Since then I always glance through the script first.

Is there a simple command you can use to read the contents of the script (pipe) before it's sent to sh? Something like:

    curl ... | less-and-maybe-cancel | sh

Yeah, "vipe" from moreutils (which is a package on most platforms) does this. It inserts $EDITOR (usually vim) into the command pipe, and allows you to review and/or edit the text before passing it on to the next thing in the pipeline. Great little command line utility, for all sorts of things.

If you want to cancel, just erase the file. Or `:cq` in vim probably works as well.


You can pipe the output of cURL to Vim like this:

  curl ... | vim -
Then, you can review the script, maybe tweak it a little, and you can send it to sh's stdin by running

  :w !sh
Or you can just quit Vim (:q! or ZQ) and nothing happens.

The "maybe" command sounds similar.

https://github.com/p-e-w/maybe


I think normalizing this practice makes these install scripts a primary target for attackers, and they are often an easier target.

Sometimes I just want to download software without installing it. This is complicated by install scripts that obfuscate the real source or break it into dozens of parts.

Curl to shell is a result of Linux's fragmentation. It's the only way to provide a simple install process.

If I were trying to distribute a package on Linux I’d be pretty intimidated by the number of package managers. I might start with Debian, do an Alpine package, then nope out.

If the app were a server instead of a CLI, I’d start with a Docker image. I ended up giving up on installing Erlang on my little embedded system and went with the Docker image instead.


Props for mentioning Alpine, the only sane distro.

Of course it's not. https://flatpak.org/

I always install Docker using a simple command:

  curl -fsSL get.docker.com | sh
instead of copy-pasting a dozen commands from docs / SO.

That looks vulnerable to MITM since it doesn't use HTTPS. Indeed, I just looked it up: it's not in the HSTS preload lists of any browser: https://www.ssllabs.com/ssltest/analyze.html?d=get.docker.co....

FWIW Docker doesn't publicize those instructions; they give:

    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
And, it's the only one of the examples listed in the article that doesn't pipe to shell.

[flagged]


[flagged]


The only two posts I've downvoted are your two "partial content" posts, because they only repeat a point that is addressed in the article without adding any form of rebuttal, or even acknowledging that it's mentioned there.

I have no idea what the point of posting that link is (shrug), so I downvoted that too.



