Honest question: do you audit every line of code you ever download and execute?
Edit: Ironically, Docker itself has the potential to help solve the problem of running untrusted open source code. I think every open source project should include either a Dockerfile or a Vagrantfile, both to help users get up and running quickly and to let them safely run untrusted projects.
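For instance (a rough sketch only; the repo name and URL are placeholders, not a real project), the README could then boil down to:

# clone the project and try it inside a throwaway container
git clone https://github.com/example/whatever
cd whatever
docker build -t whatever .    # assumes the repo ships a Dockerfile
docker run --rm -it whatever  # container is discarded on exit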
On my Linux machines I'll generally only install software from trusted sources, say RPM repos or Ubuntu sources. In this case, yes, I would review the source of this file before running it.
For example, I would notice that it requires apt-get.
I think I get where you're going with this: the burden of reviewing the source before running it is so high that it almost seems like a waste of time. If that is your point, then I agree with you.
I don't know much about Docker, but a "curl | sh" peeks my interest, and then downloading additional binaries into /usr/local/bin makes me want a closer look. Obviously this is a case-by-case review, until it sits well with me. If I were going to run this in production, I'd want a really good idea of what it was doing and what to expect, so I'd probably take a closer look at the source if that wasn't clear from the documentation.
Completely non-judgmentally: it's "piques" my interest, so you can spell it right next time. (I have the opposite problem: I spell things right and say them wrong.)
This can go on pretty far - trust is ultimately an unsolvable problem.
There is a certain level of trust, though, that is easy to establish and easy to extend.
Trusting dotCloud is easier than trusting that everyone on the internet is a harmless pink bunny. And it happens that HTTPS and signing aren't exactly hard either ;-)
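As a sketch of the signing part (the URLs here are made up, and this assumes the project actually publishes a detached signature and that you've imported its public key):

# hypothetical URLs; assumes the project's signing key is in your keyring
curl -O https://example.com/install.sh
curl -O https://example.com/install.sh.asc
gpg --verify install.sh.asc install.sh && sh install.sh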
Because people who use command lines / open source software generally have better judgement about this sort of thing than the average user?
You either have to trust that Docker (a fairly well known project built by reputable people) isn't going to root your machine, or download the source yourself and audit it.
This is no worse than suggesting you "git clone whatever; cd whatever; make" (aside from the lack of SSL).
...and everybody else on my network, with that method. Doing that, I don't even get the chance to think, "Hey, wait a second, why was this only 50 bytes of shell script...?"
The reason you see outrage over this "method" is that it is born of laziness and far too reminiscent of more disturbing times in computer security.
The original poster didn't say his issue was with the lack of HTTPS, so I assumed he doesn't approve of this technique in general, but yes, I agree HTTPS should be used.
> Because people who use command lines / open source software generally have better judgement about this sort of thing
Why do we need an instruction on downloading the source to begin with? It really just promotes bad habits among those who know no better, i.e. new/inexperienced developers. The problem is when people see instructions like this in 20% of the guides they read in earnest, trusting that everything is OK if enough people say it. One hopes they stumble upon a discussion like this so they can consider the consequences, but that just isn't going to happen for everyone. True, one should exercise equal caution while cloning, gem-ing[1], etc. It would be great if authors would just link to the source and paste the relevant lines from the README where necessary.
I agree, they should use SSL (and shouldn't use a URL shortener; they don't here, but I've seen that before).
Ideally it would download a file from GitHub too; that way you can be sure it's coming straight from the publicly visible open source repo, and you can audit it if you want.
But I think the general outrage over this technique is overblown.
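For what it's worth, a sketch of the GitHub route (the repo path is illustrative): pin the raw URL to a specific commit so the script can't silently change between when you read it and when you run it:

# illustrative URL; pin to a commit SHA rather than a branch
curl -fsSL https://raw.githubusercontent.com/example/project/COMMIT_SHA/install.sh -o install.sh
sha256sum install.sh    # compare against a checksum published elsewhere, if any
less install.sh         # audit it
sh install.sh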
One alternative would be a graphical installer that asks for your root password. It would very likely also be served over unencrypted HTTP. This happens all the time, and HN never calls anyone out on it.
How is this different, other than a graphical installer being completely unauditable, whereas curl|sh is quite trivially auditable? Both run code as root.
Would there be any benefit in creating a VM on the fly, running the shell script in the VM, and reporting back on what the script modified? If all goes well, I reckon you could then safely run the script on the host machine.
Even if you can be bothered to semi-manually audit the changes a script applies to the VM and can afford the time and space overheads of such a "guess-and-check" approach, a malicious server could send you a different script the second time you requested it, or the script could in turn pull down other payloads differently the second time it executed. If you try to extract a diff of the changes applied to the VM and then reapply it to your host machine to ensure the behavior is the same, why not simply have an installer system which behaves in a more restricted way to begin with? The root of the problem is that shell scripts fetched from remote servers are far too flexible to be 'safe'.
Seriously, Docker is perfect for creating a sandbox with all dependencies to help new users get up and running quickly and safely. Every project should come with a Dockerfile and/or Vagrantfile.
... except when someone writes a script that guesses (or reliably detects, depending on the container technology) whether it's running in a VM/container and acts differently. Or if it only acts maliciously, say, one out of five times ("old school" viruses would often do that: destroy your floppies sometimes, but most of the time just spread).
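To illustrate (these are common heuristics only, and easy to get wrong in both directions), a script can make a decent guess that it's inside a Docker container:

# container-detection heuristics; neither check is foolproof
if [ -f /.dockerenv ] || grep -q docker /proc/1/cgroup 2>/dev/null; then
    echo "probably a container: behave nicely"
else
    echo "probably a real host: do something else entirely"
fi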
Sounds like a great application for Docker, come to think of it. I'm sure it's quite possible to spin up a new Docker container from a shell script and do exactly that.
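Something along these lines, perhaps (a sketch assuming Docker is already installed; image name and paths are illustrative):

# run the untrusted script in a throwaway container, then see what it touched
curl -fsSL http://get.docker.io -o /tmp/install.sh
docker run --name sandbox -v /tmp/install.sh:/install.sh ubuntu sh /install.sh
docker diff sandbox    # lists files the script added/changed/deleted
docker rm sandbox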
Hm, the only problem is installing Docker in the first place ...
I am not very good at the Unix command line, but maybe it could be done like this:
curl http://get.docker.io > /tmp/docker-install && sh /tmp/docker-install && rm /tmp/docker-install
That gives a quick and easy way to inspect the script by executing only the first command and reviewing the source before running it, i.e.,
1) copy just the first part
curl http://get.docker.io > /tmp/docker-install
2) then inspect it, e.g. cat /tmp/docker-install
3) run the install
sh /tmp/docker-install && rm /tmp/docker-install
or, if not reviewing, run the whole thing at once.
P.S. I know one shouldn't copy-paste directly from browser to console, and this method leaves a file in /tmp if the installation fails.
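A small variant that cleans up even when the install fails (same idea, just using mktemp and an unconditional rm; still assuming the author's http URL):

tmp=$(mktemp) && curl -fsS http://get.docker.io -o "$tmp" && sh "$tmp"; rm -f "$tmp"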
Exactly _what_ is your qualification for Debian being lumped in with hipsters? Some of us have used it as the most rock-solid STABLE Linux distro for servers and desktops for quite a long time.
As an aside, I miss some of the raw diversity that was present in the old Linux distros. Slackware was my drug of choice due to its steadfastly BSD flavor. I guess Slackware is still around, but I have no idea what its status is or whether Patrick ever moved it over to System V-ish conventions to be more like other Linux distros. I guess that distinction is a bit anachronistic anyway, given all the fancy changes to the way init is done nowadays.
No no no. Do NOT do this. Kids these days...