ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)
Perhaps busybox wget doesn't check the key? But if you're using busybox, that's a whole other can of worms.
Curl does provide a CA bundle (/usr/share/curl/ca-bundle.crt) and by default libcurl validates certs against it.
$ pacman -Qi ca-certificates | grep 'Required By'
Required By : ca-certificates-java curl glib-networking neon qca qt4
But the point halfasleep is making is important: Don't assume either wget or curl will validate your SSL connection because it may not have been set up by your OS/distribution.
I've started giving each application its own user and group, and do the git checkout, compile, and install as that user. (You don't need root for "make install" if you ran configure with the --prefix option.) Then I know it's not going to be able to write anywhere but its own directories, and won't be able to see my browsing activities or sensitive files, because UNIX permissions won't let it. For added security, once the software is built, move it to a location where only root can write, and chown -R root:root.
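For what it's worth, a rough sketch of that workflow (the user name, repo URL, and paths are just placeholders):

    # create a dedicated unprivileged user for the app
    sudo useradd -r -m appuser
    # fetch, build, and install entirely as that user
    sudo -u appuser -H sh -c '
      cd &&
      git clone git://example.com/someapp.git &&
      cd someapp &&
      ./configure --prefix="$HOME/someapp" &&
      make && make install'
    # once built, lock it down so only root can write to it
    sudo chown -R root:root ~appuser/someapp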
You can also use VM's for added security. With the new namespaces in 3.8 (the kernel for Ubuntu Raring Ringtail), it should (in theory) be safe to let untrusted software have root in a Linux container (LXC). (LXC is like chroot but you can virtualize stuff like the network, and since the guest uses the host's kernel memory allocator, you don't have to dedicate a block of memory to running the guest as you would with Xen or Virtualbox.)
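For concreteness, the LXC flow looks roughly like this (the ubuntu template and the container name are assumptions):

    # create a container from a distro template, start it detached
    sudo lxc-create -t ubuntu -n sandbox
    sudo lxc-start -n sandbox -d
    # attach a console and run the untrusted software inside
    sudo lxc-console -n sandbox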
Using Homebrew, one tweaks recipes in Ruby code until they work.
Why do we not have a common (and widely adopted!) way for software projects to tell other software how to install/uninstall them, and what other projects they depend on for which particular operations (e.g. configure, build, install, test, etc.), with reusable data rather than code?
Why is (some of) this information captured in bits of Ruby, useful for a particular OS X package management system, but not when you want to deploy a similar stack to an IaaS provider, where you'll use a different, incompatible system like Puppet or Chef to set up the same stuff?
I think we have made this whole situation more complicated and fractured than it needs to be.
dotCloud has just open-sourced Docker, an attempt at solving the deployment issue.
Linux already has more package managers than you can shake a stick at. To gain traction on Linux, Homebrew would have to offer useful features that other package managers don't have. Even then, people are more likely to copy the features into an existing Linux package manager.
I don't think people tend to shop around when it comes to package managers; you use whatever your distro provides. If it sucks, you find a better distro or submit patches, depending how involved you are.
But if you need to install tons of stuff the cracks start to show, because you have no conflict resolution or sophisticated versioning. Linux leans heavily on its package managers, so I just don't see what Homebrew has to offer.
The tools to read this are in most Linux distribution repositories already (usually under the name "zeroinstall-injector"), and do dependency resolution, conflict resolution (via a SAT solver), GPG signature checking, etc. And each package unpacks to its own directory, so they don't interfere with system packages.
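If I understand the tooling right, running something is then a one-liner (the feed URL is invented for illustration):

    # 0launch resolves dependencies, downloads, and runs in one step
    0launch http://example.com/interfaces/someapp.xml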
You can't do rm -rf /usr/homebrew/$SOMEAPP/ and know that it is all gone.
I'm bugged with programs I don't use anymore which start on system boot. Even after removing some programs, they just don't go away.
A sandboxed environment would be really good to use!
The only example I can think of where you might run into problems is if your desktop environment is doing some weird sessions. But that's usually fixed pretty easily in the system settings for whatever desktop environment you're running.
This is all pretty basic stuff; you shouldn't need a sandboxed environment to prevent applications from auto-launching unless you're going around installing malware (and if you are deliberately installing malware on your main bare-metal OS, then you're insane).
That's a problem systemd is trying very hard to solve.
SysV init may be old, crufty, and inelegant, but it's reasonably straightforward to parse and troubleshoot manually (and BSD-style rc inits are even more straightforward). Making the bootstrap process nondeterministic strikes me as tremendously unwise.
I will probably be proved wrong as systemd matures, but right now it's just not the case.
Yeah . . . I doubt it. I made the switch to FreeBSD as my primary OS of choice back in 2005 or 2006 without being 100% certain why I decided to try to live on BSD Unix pretty much full-time right then (there were reasons, but I think the timing of the migration was largely whim). I never even missed the Linux world for the next half decade or so. I then tried living with Debian again for a while (long story why), and I discovered that everything of substance that had gone on in the Linux world since then seemed almost tailor-made to annoy the shit out of me. It was a real shock that destroyed the fondness I still harbored for Debian.
I poked around at some other Linux distributions I hadn't tried, or hadn't used in years, and discovered they were even worse -- and systemd is sorta the apotheosis (no relation) of exactly the sort of nondeterministic, "the software knows better than the user" BS that I remembered with severe loathing from my distant past primarily using (and fixing, for a living) MS Windows. Between Red Hat developers like Drepper and Poettering, the agenda of Canonical and Ubuntu, and the GNU project's strange synthesis of superficially opposed concepts like stagnation and invidious undermining of anything related to the (so-called) Unix philosophy of system design, I ultimately came to the conclusion that outside of a professional capacity (that is, writing code and/or managing servers, and that only if I'm paid) I'm simply not interested in screwing around with Linux-based systems any longer.
Your tolerance may be higher than mine but, given your comment about what you dislike about systemd, I rather suspect you'll only grow more frustrated with the direction of Linux development community efforts over time. You might want to think about diversifying your OS experience in the near future (if you haven't already) so you have a place ready and waiting for you to go when the Linux world has finally pushed you to the point of just wanting to escape.
I used to be a huge ArchLinux fan but them switching away from their rc.conf model to systemd felt like a real kick in the teeth.
I wasn't particularly aware of systemd's emergence and the squabbling that went on at the time, but I got exposed to it a gen or 2 after it first hit Fedora, through a Fedora-centric project. At first I was nonplussed and sort of annoyed that I didn't know how things worked, and primarily relied on the compatibility bridge with the old `service` command, which worked well enough.
At one stage I ended up having to do some tweaking that, with rc init scripts, would have required a fair amount of hacks (in the pejorative sense), and since these wouldn't work in systemd I decided it was time to get to know it.
It only took me a couple of hours to get a good overall sense of the system and mindset, and the minor things I needed to get accomplished ended up being much cleaner in systemd once I "got it".
Once I had a good new mental model of it, I found that I liked working with systemd much more than the old guard. Ignoring the technical and performance advantages, I find the framework for discovering and resolving problems to be much more effective and easy to deal with once you get over the hump of the confusing bits of information overload and changed grammars.
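To give a flavour of what "cleaner" meant in practice, a minimal unit file (all names made up) replaces what used to be a page of init-script boilerplate:

    # /etc/systemd/system/someapp.service -- hypothetical service
    [Unit]
    Description=Some app
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/someapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then systemctl enable someapp.service, systemctl start someapp.service, and journalctl -u someapp.service when you need the logs.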
I've never been a huge fan of Lennart Poettering in the way that he seems to optimize for friction in certain communities, but I think systemd is a real step forward and we need people like him to drag us forward kicking and screaming at times.
I use a lot of different POSIX systems day to day, and it's gotten to the point that I now groan when I have to deal with one still using a traditional init approach.
I'm not saying systemd is without issues, but if you give it some time with an open mind I think you may find out it has a lot going for it (and not just the marketing bullet points - that's part of Lennart's problem).
Furthermore, given typical CPU/RAM overheads on modern hardware, disabling something that takes a fraction of a second to start and consumes almost no RAM can seem slightly pointless!
I'm not familiar with systemd, but upstart takes a similar approach to sysvinit - don't want something to start? Just move it out of /etc/init/ like you would have rm'd the symlink in /etc/rc?.d/ :)
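If I remember right, newer upstarts also have override files, so you don't even need to move the .conf (job name invented):

    # stop an upstart job from auto-starting without touching the packaged file
    echo manual | sudo tee /etc/init/somejob.override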
Unfortunately it seems to be dead.
In that example you can read the source beforehand; the chance of an evil change landing just as you load it in your terminal is quite limited, and MITM doesn't work that well against SSL either.
sudo true && curl -L https://www.opscode.com/chef/install.sh | sudo bash
Don't get me wrong, I'm no fan of the practice, especially sans TLS, but this seems like a poor example.
Even if you discard the security angle of wget | bash (which would be a foolish choice), there is the simple problem of repeatability.
If you are deploying 50 new servers and OpsCode releases a new version of Chef after 25 of the servers have performed the wget, the next 25 will get a different client that might be incompatible with the server.
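One mitigation sketch, using nothing Chef-specific: fetch the installer once, checksum it, and push the same bits to all 50 machines:

    # fetch once and record a checksum
    curl -fsSL -o chef-install.sh https://www.opscode.com/chef/install.sh
    sha256sum chef-install.sh > chef-install.sh.sha256
    # on each server, verify before running
    sha256sum -c chef-install.sh.sha256 && sudo bash chef-install.sh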
As I've mentioned here (https://news.ycombinator.com/item?id=5508680), people often freak out at curl commands, yet at the same time I've yet to see a viable proposal for an alternative.
It's not that much different, so I don't understand the huge problem. Most likely if tutorial writers added a second step, the user would just copypasta the second step also.
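For reference, the hypothetical two-step version looks something like this (URL is a placeholder):

    # download, read, then run -- the reading is the step everyone skips
    curl -fsSL -o install.sh https://example.com/install.sh
    less install.sh
    sh install.sh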
Thankfully I followed this procedure after a co-worker sent me this: `curl -L http://bit.ly/10hA8iC | bash` ... was able to turn my speakers down first ;)
One of my mates rickrolled me by post (aka snail mail) just days after I moved house. It took me 2 years to find out who was behind that.
It would be lovely if there was a One True Software Distribution Format, but it's not going to happen. wgetting a script straight into bash is a terrible way to not really support any users properly :)
wget --no-check-certificate https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | sh
I find it hard to believe anyone will be reading that line on a web page, then typing it out correctly in their own terminal instead of just saying, "ok that line looks fine copy paste"
1. Work on all major platforms.
2. Be easy for the developer to create.
3. Be easy for the user to execute, with as few steps as possible.
There are those who advocate that the developer should create a platform-specific package for every platform. While this fits their purist views in which only their own platform matters, this is not a good solution for the developer, who often has users from multiple platforms. Creating platform-specific packages places an unbelievable maintenance burden on the developer.
This is not to mention that platform-specific packages, too, have their own security flaws.
Even with the "exploit" in the article, it will be detected as soon as the user pastes the URL in his browser location bar. People who don't inspect what they run are screwed no matter what.
If I own Opscode and I'm smart, I plant something like this on that URL:
if request.user_agent.startswith(("curl/", "Wget/")):
    return evil_payload  # serve a different script to non-browser clients
We can do this dance all day where people point out specifics, or we can all just recognize it's a bad idea (a lot of people are saying it's a bad idea; might be worth considering it's a bad idea).
 Sample .zshrc to map edit-command-line to Ctrl-x e:
autoload -Uz edit-command-line
zle -N edit-command-line
bindkey '^Xe' edit-command-line
edit: forgot about my .zshrc
Ctrl-V (or, again, sometimes Ctrl-Shift-V in a terminal) pastes the clipboard buffer. Middle mouse button (or shift-Insert) pastes the selection buffer.
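If you have xclip installed, you can inspect the two buffers directly:

    xclip -o -selection primary      # what middle-click / Shift-Insert pastes
    xclip -o -selection clipboard    # what Ctrl-V pastes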
Is there still room for confusion?
I don't have a problem with X selection+paste if I'm just using Linux. But using it when connected to a remote machine via various remoting technologies (NX, Chrome Remote Desktop), and then mix in that the host machine is a Mac with its terrible command/control split, and the result is pretty confusing.
Vim and emacs also have their own shortcuts for accessing the clipboard buffer.
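(In vim, assuming clipboard support is compiled in, `"+p` pastes the clipboard buffer and `"*p` pastes the selection buffer; emacs has M-x clipboard-yank for the former.)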
But as far as I can tell, if text is selected for you that's not always true. I sometimes have difficulty with text boxes which insist on self-selecting as soon as I click them, and which I just can't seem to pull into the selection buffer.
Github doesn't seem to have this issue, but I hit it recently when trying to get a Google Maps permalink, for example.
Combined with xdg-open, I clone the repos locally just by clicking the 'Clone' link.
Userscript itself in:
<!-- Oh noes, you found it! -->
<span style="position: absolute; left: -100px; top: -100px">/dev/null; clear; echo -n "Hello ";whoami|tr -d '\n';echo -e '!\nThat was a bad idea. Don'"'"'t copy code from websites you don'"'"'t trust!<br>Here'"'"'s the first line of your /etc/passwd: ';head -n1 /etc/passwd<br>git clone </span>
I hate all of this stuff and it is greatly saddening that browser vendors are not protecting us from it. It's like the pop-up-on-click days of old and it must stop.
If I select some text and copy it, I am taking a very explicit action. I am giving the computer a very explicit instruction. There is no room for interpretation. It must not disobey me!
From the point of view of the browser, it very explicitly does what you told it to, without interpretation, obediently. The problem is that yours and the browser's opinions differ on what you intended to do.
Why, no, not the same.
I think it is ok for a webapp to be aware that I selected some text, and which text, so a further click on "boldface" would have a context.
But it is not ok for a webapp to interfere with the text selection itself.
For instance, well-intentioned Chrome always adds "http://" when I select the URL. It gets in my way very often and it is not what I intended: I selected google.com with my mouse very carefully; I do not want Chrome to be clever and add "http://". By the way, this issue comes from the excessively minimal UI: Chrome should not hide "http://" in the URL bar, full stop.
I do not use Evernote because it messes with my selections.
I have to use Trello at work, but this bully doesn't even let me select a card title.
And the list goes on...
It does if the card is open. Are you referring to it assuming you're trying to drag a card in the board view? I'd rather have that than sometimes it drags, sometimes it selects the text.
If ever there were a company that just needed to be nuked from orbit, it's Tynt.
 There may be other dangerous characters besides newlines, e.g. escape sequences. I'm not sure if it's possible to make an exhaustive list for something like Bash. Perhaps one has to guard against any paste?
I've tried various ways of input to my console (MINGW/WinXP) for multiline pastes, and the results are as follows:
1. Right-click multiline paste: unsafe (executes immediately)
2. Windows paste (alt-space, e, p): unsafe (executes immediately)
3. Insert or Shift-Insert: safe (pastes only the first line)
1. Yes, I know it is a kill ring in emacs.
You can trust "Person A", the owner of the repository, while not trusting "Person B", who wrote the "git clone (Person A's repository)" on their site.
I know what "git clone" does, and I do trust code from git.kernel.org.
It's like a master-criminal's subtle and sophisticated plan being foiled by a simpleton because it assumed that the victims would be able to read.
My big point is that this is one of the many ways that the ambitious goals of the browser makers and authors of web standards screw up the workflows of those trying to use the web for reading and "allied activities" like navigating, scrolling and cutting and pasting.
These ambitious goals include assisting app developers and assisting design professionals (design professionals on the internet intersecting very strongly with "professional persuaders" on the internet).
Note that I am not saying that these ambitious goals are worthless or nefarious, just that maintaining a smooth and reliable way to share static documents over the internet is important, too.
You could probably implement an online text reading system with basic markup and hyperlinks over a weekend, but the problem would be that nobody would use it because it would be seen as strictly inferior to the web.
If at some point we had said "the web is powerful enough now, let's stop" then inevitably somebody like Microsoft or Google would have developed some other system that incorporated everything that the web does + extra stuff. In fact that's basically what stuff like Flash/ActiveX was in the late 90s.
Then gradually people would have switched to this new thing and the web would have gone the way of gopher and usenet.
Of course, if we could have redesigned the web from scratch right now with all the benefit of hindsight we could have (in theory) designed a better system. But as with most open things they rely on evolution rather than intelligent design.
I can understand your gripe, but I simply don't see how it applies here.
And that couple of maintainers would probably still have more important things to fix than what's essentially a curious but not really problematic bug.
<span style="position: absolute; left: -100px; top: -100px">
You need to have the following in /etc/sudoers in order to be truly protected by not being logged in as root:
For most people who are not sysadmins, losing ~ is much worse than losing /. Go ahead and wipe my OS; I just care that my photos are safe.
It is a little more complicated than that, but the complications do not really affect very much. The last time I installed Debian, around 2005, the documentation encouraged me to give /home its own partition, in which case rm -rf / will not get it. But on OS X the default is to put everything in one big partition, and I kind of get the feeling that Linux has moved that way, too. And even if /home is on its own partition, there are many ways for the malefactor to get /home, e.g., rm -rf /home.
They are effectively the same: an unknown script executing "rm -rf ~" with an unprivileged user is going to cause as much grief as root running "rm -rf /".
I couldn't care less that the system is still up if all my data is gone.
Also, I believe “rm -r” traverses mount points.
Or, in other words, "examples by which making the web do something useful makes it less reliable and predictable than just letting it stagnate at the original goal it had in 1991 that people don't really care about".
If you look at the source the actual text of that paragraph is what gets copied, they just use some sneaky CSS to make it not visible. It's not explicitly marked as hidden.
But there are so many other tricks to hiding text -- margin-left:-10000px, font-size:0, color:white, and so on, that there's really no way to avoid this.
So I can't even imagine how a browser extension would 'fix' this -- no matter how clever it tried to be, there would almost always be some way around it.
The browser can generate some kind of map for which region of the screen is what font. If you don't have to guess the font, OCR should be easy and reliable. That takes care of the hidden text issue. But second, it means one would be able to copy/paste text that is in an image (because some web designers hate you).
You could do a per-character visibility test at the time of copying, but sometimes you want to copy text that is not currently visible on your screen.
For example doing Ctrl+A in a document.
You could probably modify your browser to defeat this trick, but doing so you would run the risk of breaking existing sites and making CSS even more complicated than it already is.
And malicious code writers would simply switch up their code to a new trick.
Yes, it's probably not easy to fix, and once fixed there are probably a dozen different ways of cheating. I'm just pointing out that it's certainly possible to fix this particular attack vector.
If you right click and say 'copy link' you'll get the t.co URL, but if you select the partially rendered URL and ^C you get the full (non-shortened) URL in your clipboard.
It works with this CSS:
position: absolute; left: -100px; top: -100px
/dev/null; clear; echo -n "Hello ";whoami|tr -d '\n';echo -e '!\nThat was a bad idea. Don'"'"'t copy code from websites you don'"'"'t trust!
Here'"'"'s the first line of your /etc/passwd: ';head -n1 /etc/passwd
git clone git://git.kernel.org/pub/scm/utils/kup/kup.git
It's probably easy to write a similar fix for bash.
Not only is this a good habit as far as security goes, it's also the best way I can think of to learn from problems.
Because they can put a newline in the malicious paste.
: At least, in the terminals I regularly use.
I then put a sample command line in HTML, with an embedded newline. Yep, copied and pasted just fine.
Which means my SOP for dealing with untrusted text isn't anywhere near as good as I thought it would be.
Thanks for pointing that out!
It would be an interesting experiment to sneak a harmless command after every snippet on a site like commandlinefu.com.
Edit: Also while playing around, I remembered irssi actually has a defense against this. If you try pasting multiple lines, it can detect this. It presents you with a prompt asking if you really intended to paste >5 lines into the text field. I wonder if something like this could be implemented in a shell?
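For bash, assuming a readline new enough to support it, bracketed paste gets you most of the way there:

    # ~/.inputrc: pasted text is inserted literally, so embedded
    # newlines no longer act as Enter
    set enable-bracketed-paste on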
That was my first thought too, but in fact, it's quite the reverse of clickjacking (even though the same basic idea is used).
With clickjacking, you would position some opacity:0 link on top of some apparently legit link. Here, on the contrary, the malicious content is within the apparently legit content, then moved away with CSS (setting opacity to 0 would not do the trick).
Not a big difference, but that would deserve a dedicated name, IMO.
A better solution would be to paste the text in to an editor.
If the tarball is not pgp signed by the author (e.g. Bazaar and Tor Project do that), checking the checksum is basically checking if the server you’re downloading from didn’t have any silent data corruption (see recent KDE hosting incident), because in transit TCP does its own checksumming anyway.
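For the projects that do sign, the verification step is short (file names are placeholders):

    # real protection: verify the author's detached signature
    gpg --verify someapp-1.0.tar.gz.asc someapp-1.0.tar.gz
    # a bare checksum only catches corruption, not tampering
    sha256sum someapp-1.0.tar.gz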
* When copying from the web, what you see on the page might not be what ends up in your clipboard,
* When pasting into a terminal, any text with newlines will execute immediately.
As a result, just pasting anything from the web into a terminal window might execute arbitrary code, without any further action on your part.
Good luck with that though. (Especially with things like, I don't know, browsers.)
It seems like a security hole for many reasons.
The default should be to copy plain text as highlighted, with an advanced right-click option for HTML-based copying.
You have two ways to combat this:
1) Always copy things to notepad first, so you can verify that what you copied is what you meant to copy.
2) Use the inspection tool of your browser to copy it from source where things can't really be hidden.
I usually do #1 anyway because of weird formatting and characters
Using Google Chrome 26.0.1410.43 on Ubuntu 12.10, 64-bit.