I think the best alternatives are app stores such as those found on iOS and Android, right?
However, that doesn't really fit open source. Is the way to install open source software to download the source and compile it from scratch? Well, that's also risky; you should check the source code first...? But that's undoable.
All in all, it comes down to "trust". Do you trust the website? If you don't care about security too much, then imho it makes sense to curl | sh.
tl;dr: (1) Reproducible builds, (2) Make sure everyone is getting the same thing (to detect targeted attacks) and (3) Cryptographic signing.
Package managers and appstores are the best we have right now, but they're missing (1) and (2). In the meantime, offering a pgp-signed installer file is a lot better than curl | sh.
^~ To the people scrolling by at 70 mph: READ THIS
"pgp-signed installer file" is protected by a key from a stranger.
Both are insecure.
Could you enumerate several points that show "pgp-signed installer file is __a lot better__ than curl | sh" ?
PGP can also be used in a trust-on-first-use manner. Get the public key once over an insecure channel, and if the attacker missed that single opportunity, you're safe until the key changes. With SSL, on the other hand, you're at risk every single time you make a connection, because any of hundreds of CAs has the power to sign that certificate, and as above, you have to assume the web server isn't compromised.
Another reason PGP is important is mirroring. Big F/OSS projects rely on volunteers to run mirrors. Even if those mirrors support SSL and the transport from the author to the mirror is encrypted, there's absolutely no guarantee that the mirrors themselves are not malicious. The mirrors could be backdooring their own files. The fact that you have an SSL connection to the mirror doesn't do anything to prevent this. But with PGP signatures, you're assured the files come from the software's developer and haven't been tampered with by the mirror.
So the difference is: SSL secures the connection between your browser and the web server. PGP ensures you're getting the file the software developer intended you to get. It's a semantic difference.
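A minimal sketch of the trust-on-first-use workflow described above. The key ID, URLs, and filenames below are placeholders, not a real project's values:

```shell
# One-time step (trust-on-first-use): import the developer's public key.
# If the attacker missed this single window, later downloads are protected.
gpg --recv-keys 0x0123456789ABCDEF

# Every subsequent download, possibly from an untrusted mirror:
curl -fsSLO https://mirror.example.org/tool-1.0.tar.gz
curl -fsSLO https://mirror.example.org/tool-1.0.tar.gz.asc

# Fails loudly if the mirror tampered with the tarball, SSL or not.
gpg --verify tool-1.0.tar.gz.asc tool-1.0.tar.gz
```

The verify step checks the file against the key already in your keyring, which is exactly why a malicious mirror can't help itself by serving its own signature alongside its own tarball.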
I'd also argue hard against `curl | sh` because of the psychological effect (assuming it exists) of teaching users that it's OK to pipe random things from the web into sh.
You need some other channel to communicate what keys should sign what files. You need some other channel to import the keys. Catch 22.
By "trust-on-first-use", do you mean something like certificate pinning?
Let's consider https://www.torproject.org
It uses GPG signatures for its packages.
If SSL is compromised as you say, then all I need to fool you is to give you files signed with my keys (unless you know that you should use the 0x416F061063FEE659 key (magic secure channel), you've already imported it (again, the magic channel), and tor never changes the key).
Where should I go to check that 0x416F061063FEE659 and pool.sks-keyservers.net are the correct values (google?), if we assume that the key and the key server (and the connection to it) are not compromised?
Ask yourself: when was the last time you tried to check that the instructions showing the key fingerprint and which key server to use were genuine?
Also, If you have paper walls I wouldn't try too hard to make the door impenetrable. It is a trade-off: if you are downloading code from stranger's repository then you won't get much from replacing `curl https://github.com/... | sh` with a gpg-signed (by the same stranger) download.
Security is like onion rings: there are layers but it is only as strong as its weakest link. We know that a real adversary will just hack your machine if necessary.
curl + bash: one slip and blam, there goes the machine. Hope you didn't run sudo in the last few minutes...
sudo rm -rf /boot
Sure, some are. But most software packages that a user is going to download aren't signed.
I'm a developer and I don't think I've ever checked the md5 checksum of a jarfile/gem/package I've downloaded. Nor have I ever been in an environment where that was ever mentioned. (Have mostly worked in small to medium businesses--I imagine that bigger orgs or the defense department might do this.)
Things from Google, Intel, MS, VMWare, Spotify, Github, Dropbox, and even f.lux are all signed. Of course YMMV but the trend has been positive.
(A little worrisome is that there are two running Broadcom bluetooth apps that have explicitly revoked signatures...I wonder what that's about.)
EV signed software is usually done off the internet. In our case we use a physical key to sign it offline and then upload.
Is it harder to serve your own fake exe installer than to man-in-the-middle the HTTPS connection that the curl examples use?
With signed MSIs and EXEs, you'd need to get your code signed, which is probably more difficult than the web layer.
The easiest alternative is to run "curl <blah>", download the script, take a peek at the source, and only THEN follow up with the sh.
I mean, you're probably going to see a bunch of "download and install" commands, so you don't get much reassurance if you're worried that one of THOSE might be compromised too. But that's a problem inherent to unsigned internet installer scripts. At least you've somewhat mitigated the possibility that it's dumping your private SSH key to a server somewhere or some other blatantly malicious command.
On the other hand, the script might also include appropriate SHA1 verification commands, or be setting up a repository in a package manager, in which case it's pretty reasonable to trust the script.
curl <blah> | docker run --rm -i ubuntu sh
I think cleaning this up and making it friendly enough is a solvable UX problem, but I haven't quite solved it yet.
The solution I'm proposing isn't to add Windex, but to cleanly wrap whatever curlpipesh is doing. Make it equivalent to opening a browser tab: interact with it safely, keep what you like, and throw everything away if you don't like the way things look.
Curlpipesh can be just like loading a site. We have the technology to do it; we just need to figure out the UX.
For example, if you're downloading a script that has a line like this:
rm -rf ~/.tmp/foo/bar
and the connection drops partway through that line, the shell can end up executing just `rm -rf ~/` and happily deleting your home directory.
It sent my password file to someone in China, started a scanner to look for other systems to infect, downloaded a .tar.gz file that contained a root kit to hide itself, unpacked the .tar.gz, and ran the install script contained therein to install the root kit.
Or rather, it tried to. I had ISDN at the time, and had noticed the modem lights heavily blinking even though I was not doing any internet activity. This confused me, and I pulled the plug on the modem. Turns out I pulled the plug while it was downloading the .tar.gz. It got most of the file, but not quite all of it. It lost the last file inside the archive--which happened to be the install script!
Without the install script, it could not install the root kit, and that made getting rid of the worm a heck of a lot easier.
You should at least download, verify the size and checksum (if available), take a peek at it, and only then run it.
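Spelled out, that ritual looks something like this (the URL is a placeholder; the published checksum would come from the project's own page):

```shell
# Save it to disk first -- never straight into sh.
curl -fsSL https://example.com/install.sh -o install.sh
ls -l install.sh        # sanity-check the size
sha256sum install.sh    # compare against a published checksum, if one exists
less install.sh         # actually read it
sh install.sh           # only now run it
```

It's four extra commands, but it removes both the "partial download executed anyway" failure mode and the "I never even saw what I ran" one.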
It is true that there remains the problem of potential truncation exactly at the end of a top-level line, but I contend that "it stops running here" is a much easier thing to reason about (and, strictly, could always happen if hit with a SIGKILL anyway) than "does the meaning of this line change if we cut it off in a weird place".
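A commonly suggested defense against that remaining truncation window is to put the entire script body inside a function and invoke it only on the last line; a partially downloaded script then defines an incomplete function and executes nothing (paths here are illustrative):

```shell
#!/bin/sh
# Everything dangerous lives inside the function body...
main() {
    rm -rf "$HOME/.tmp/foo/bar"   # cleanup can't run half-delivered
    echo "install complete"
    # ... real work ...
}

# ...and only this final line runs it. A truncated download stops here harmlessly.
main "$@"
```

If the stream is cut before the closing brace, sh reports an unexpected EOF and exits without executing anything, which is exactly the "it stops running here" behavior the parent is arguing for.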
Personally, I'm totally comfortable with the idea of someone owning my box. The files I care about are backed up offline, I don't have many secrets, and the only accounts that would affect my life in general have passphrases, pins and multiple-factor auth tied to them. The worst thing someone could maybe do is impersonate me and cause havoc using my accounts, but I really don't see anyone having cause to do that. Anyway, I don't sweat it.
If you trust your distro, just verify the PGP keys of packages you download before installing, which is mostly automatic for most modern OSes.
Are you trolling? This sounds like the same sort of argument from the "I don't have anything to hide. NSA/GCHQ can spy on me all they want" camp.
(part of why I want 2FA providers to verify an identity using two or more devices is so that if your workstation is compromised, it still couldn't complete the authentication without a secondary device, making mitm much more difficult; though obviously they could still hijack an existing connection on either device... a benefit would be that for example, an attacker couldn't use a hijacked connection to initiate a money transfer behind your back without having you confirm it on your mobile device as well)
If you are depending on PGP, and only install software from verified sources using secure connections, your private key is still secure. If you don't depend on PGP, you wouldn't care about the security of your private key. You could also do private key operations on an airgapped machine, which wouldn't depend on the security of your main workstation.
On windows there's Sandboxie, which does more or less what you want as well. It creates a sandbox whose filesystem and internals can be modified by sandboxed applications without affecting the state of the REAL filesystem. Very nice utility for mitigating bad programming - I often use it to run multiple copies of applications that crash or refuse to launch multiple instances.
App stores do have some advantages here, today. It's also true that they don't fit the open source model well. App stores define some root of trust for signing, but just like the CA model for HTTPS certificates, there are issues with that. App stores also bring a host of issues with user freedom. (Even the free ones can have this effect unintentionally: what percent of your coworkers know how to manage apt pinning? If one person on ubuntu 10.04 has a working, installed package and wants to hand that exact (known working) version of it to a friend on ubuntu 12.04, can they do it?)
PGP signed binaries are one major step in the right direction. More friendly tools and popularized workflows for this would help a lot of people avoid major moments of vulnerability.
PGP (or any other signing system) also leaves an interesting challenge, which is what to do in the case of key compromise. Revocation is hard. And say you want an audit log of all prior releases: what can you do?
We're all used to git having an immutable log of history by hash chaining, at this point. Is it time to start using systems like that to keep an auditable log of software releases, as well?
I've been working on something called "mdm"  that attempts to do an auditable, immutable chain of releases like this. It was originally designed to be a dependency manager for (arbitrary, language-agnostic, any binaries) software development needs, but I wonder if something like this is what the software distribution world needs to start evolving towards as well. Imagine if your end users could all verify that they have the same hash... and the same picture of historic releases from the same author, and the same picture of if the releaser's signing key changed, and so on.
I'd love to hear other thoughts on how hash chains and signatures can be made to intersect to tell the broadest story. The challenges are very real here.
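As a toy sketch of the hash-chaining idea (the log format here is invented, not mdm's actual one): each entry's hash covers the previous entry's hash plus the artifact's digest, so silently rewriting any historical release changes every hash after it.

```shell
# Append a release artifact to a hash-chained log.
# Line format: <chain-hash> <prev-chain-hash> <artifact-sha256> <filename>
append_release() {
    artifact=$1
    prev=$(tail -n 1 releases.log 2>/dev/null | cut -d' ' -f1)
    prev=${prev:-genesis}
    art=$(sha256sum "$artifact" | cut -d' ' -f1)
    chain=$(printf '%s %s %s' "$prev" "$art" "$artifact" | sha256sum | cut -d' ' -f1)
    echo "$chain $prev $art $artifact" >> releases.log
}
```

End users comparing notes then agree not just on the latest artifact's hash but on the entire release history, which is the same property git's hash chaining gives commits.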
This is a natural outgrowth of Rails itself which expects even the novice dev to do everything from the CLI. I also notice a certain amount of machismo surrounding this sort of thing, which means that the novices are that much more likely to blindly install things, in spite of the more senior devs blandly insisting that they should "read and understand the script before running it." This is ironic because of course they are novices and learning how to do a security audit on a random bash script is a non-trivial affair, to put it most charitably. It's also ironic because Rails is supposed to be all about easy and convention, not auditing everything for oneself.
The designers of OSs did not have in mind non-experts using the CLI with su turned on (or even off) all the time. In fact, user-friendly OSs are designed to prevent non-experts from doing any damage, anywhere. In all, it seems like this sort of thing is circumventing basic protections provided by experts for non-experts, and that is probably not something to encourage.
2. They give an easy way to automatically download and install a tool in one pasteable command without having to manage downloads and deal with the inevitable extra UI of installer executables.
3. Why does one need to understand what a script is doing any more than one needs to understand what a downloaded installer executable is doing? At least with the shell scripts it seems like more of an option.
So yeah the reason I bring up Java is I think there is strength in the notion of the "App Server" in the java world. It's like a partner for the OS that allows for somewhat of a security "air-lock". Even if using Docker or something to scale RoR / simulate the effect of an app server node, I just fundamentally think an OS has too much power.... a web app shouldn't have any access to it basically, except through known/monitored/regulated protocols and points of access. Basically, data can be handed to an "App server process" which in turn does some low-levelish system tasks, but the app itself can't do them.
Personally I think the app server approach also makes deployments/scaling easier and reduces the need for a sysadmin guru. RoR shops (and languages that follow a similar paradigm) really shouldn't be running any apps that handle sensitive info for example without a Linux guru and a senior Rails dev auditing app modules (or at least keeping up-to-date on existing security audits).
It's weird, those first few months of Java when you realize how limited you are by your EE or whatever type of container -- then it starts to feel natural/healthy. When architected properly, I don't think apps will ever suffer from this lack of freedom.
People will see no problem with curl | sh as long as they feel they can trust the source/site asking them to do so. They will be running the same risk as with downloading binaries or install packages.
I agree that, aside from that issue, the practice of many people is not materially worse than "curl | sh", but we should condemn that similarly.
Nate Lawson's comment in particular.
Sadly, it has become more popular since you posted that comment.
bash -c "$(curl -sfL git.io/wshare || echo "echo 'Installation failed'; exit 1")"
curl http://google.com/keylogger.sh > temp.sh && sh temp.sh
curl http://google.com/keylogger.sh > temp.sh
less temp.sh # actually read the fucking thing
Obviously, a better alternative is a package manager, pulling from a repository for which you have gpg keys installed (such as Debian provides). But that's not always an option.
Even checking out a git repository and building from source seems as bad as curl | sh to me.
Both introduce a point of failure if the remote host becomes unavailable and you need to deploy new machines.
The curl|sh method is vulnerable to changes by the maintainer or any attacker who gets access to the storage layer. git is also vulnerable, but tampering requires a collision attack on SHA-1.
Mismatched SHA-1s are only noticeable when you have one to compare against. A git clone is always fresh, so if the attacker rewrote history, you'd not be warned.
At the end of the day, though, you're always going to trust someone with something.
If I can hack the endpoint (or the server that handles the 301 redirect to github), game over.
I could verify an asymmetric cryptographic signature based on a public key I already possess. As long as the private key is not stolen, an attacker who can breach the server cannot backdoor the software I'm downloading.
Maybe it would be an improvement to create a small command-line tool that enforces HTTPS connections and asks for a hash before running a script?
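A sketch of such a tool (the name safecurl and its interface are invented here): insist on HTTPS, download to a temp file, compare against a caller-supplied SHA-256, and only then execute.

```shell
# usage: safecurl <https-url> <expected-sha256>
safecurl() {
    url=$1 expected=$2

    # Refuse anything that isn't HTTPS up front.
    case $url in
        https://*) ;;
        *) echo "refusing non-HTTPS URL: $url" >&2; return 1 ;;
    esac

    tmp=$(mktemp) || return 1
    curl -fsSL "$url" -o "$tmp" || { rm -f "$tmp"; return 1; }

    # Hard-fail on any digest mismatch before a single line runs.
    actual=$(sha256sum "$tmp" | cut -d' ' -f1)
    if [ "$actual" != "$expected" ]; then
        echo "hash mismatch: expected $expected, got $actual" >&2
        rm -f "$tmp"; return 1
    fi

    sh "$tmp"; rc=$?
    rm -f "$tmp"
    return $rc
}
```

Of course this only moves the trust problem: the expected hash still has to come from somewhere you trust, which is the bootstrap issue discussed upthread.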
apt-get install php-composer is much better than curl https://getcomposer.org/installer | php
That's undesirable. First, because you should be making the decision about whether to run the script after you look at it, and if it's already running while you read it then it's probably too late. Second, because you should not be echoing possibly-malicious characters to the terminal; view with less or an editor (tee works like cat here).
"Second, because you should not be echoing possibly-malicious characters to the terminal" - this is something I know nothing about, so I would be (and maybe others) highly appreciative of any links about this class of attacks that you could post. Is it possible to execute malicious code while writing to STDOUT, or is it because you can hide malicious code with escape characters? Also, I don't quite understand the "cat" comment as we're not doing any file concatenation here?
Gotcha. That makes more sense.
Regarding the second, I don't know whether there are presently any attacks in the wild for any commonly deployed setups (if anyone else does, I'd love to hear about them) but terminals are incredibly messy beasts once you start to throw nonprintable characters at them. Better not to expose that surface area.
Save it to disk, examine, then run.