Time to hand over the reins before Capistrano costs me my youth? (groups.google.com)
472 points by codebeaker 993 days ago | past | web | 240 comments



I just wanted to point out how poisonous our community is. It's something that I've been struggling with for a long time, and trying to slowly change.

The fact that people read this article, and don't feel the need to mention his fear of releasing software just shows how broken things are. It shouldn't be an accepted fact of open source that if you release new code that might be backwards incompatible, you get vitriol for it.

His quote:

... but I'm too cowardly to release it and make it mainstream, as I'm afraid it'll destroy whatever good will for open source I have left when the flood of support questions inevitably comes in, followed by all the people who are unhappy with what I've built and feel obliged to tell me how bad I am at software.


I'm the author of a popular open-source project for my programming-language-of-choice. Before I launched it ~3 years ago, I reached out to a well-known OSS guy in the community. You would probably know his name.

I wrote to him asking if he had any feedback on my project, since I was about to release it, and because he had a similar project that didn't get much traction. He didn't reply... fair enough, I'm sure he's a very busy guy.

Fast forward a few months, I launch the project and it blows up on HN for a few days, and on twitter for a few weeks. The majority of people love it! But sifting through the feedback online, I'm shocked and disappointed to see that the very person I reached out to is trashing my project publicly in comment sections and on his twitter feed. No constructive feedback, only how the project itself was a horrible idea, and the code was disgusting. I was dumbfounded... before, I wasn't worth 2 minutes to reply to in an email, but now I am worth destructive criticism on social media from the same person?

I waited a few months and sent another email asking for advice on maintaining the project. No response still. Yet every now and then, to this day, I still see backhanded comments by him about my project whenever someone mentions it to him on twitter.

Some people are toxic, petty, and childish. I don't really have a lesson from this, but that's my little story.


Kudos on being a bigger person than me; I'd be tempted to name names.


Some people are just trolls, better known as haters. Don't let em get you down :)


...those are not the same thing.


Hey, I went through the same thing! It's funny how you can get a hundred positive comments, but one snarky asshole will get you down. I usually save the positive comments in a text file and re-read them later...that makes me feel better.


Clearly he does it for show.


I completely agree. I always tell new hackers that if they post something online, they should:

1. Focus far more on what the people they actually know and respect think than on anything they hear from strangers online.

2. Assume that anyone criticizing their effort is an armchair hacker who just likes to be better than others. It's not always true, but it's better than assuming that everyone else is better than you.

3. Always be proud of what you've done -- just because we've gone to the moon doesn't mean you shouldn't be proud of making your Arduino blink the LED finally.

I talk about it a bit near the end of this: http://blog.saikoled.com/post/62737468595/entrepreneurship-f...


Point 3 really struck a chord with me.

    The greats weren't great because at birth they could paint
    The greats were great cause they paint a lot
    (Macklemore - Ten Thousand Hours)
It's the tiny steps that bring us forward. A big project starts with a single commit.


Gladwell always points out the 10,000 Hour Rule, which only becomes more depressing as you age. In my mid (almost late) thirties, I now sit down and think about all the things I don't have time to become an expert in, even if I want to.

Even if I could dedicate two hours every single day of the year to becoming an expert at something, I'll be 50+ before I'm an expert in it. If I could dedicate about 15% of my waking life to becoming an expert in new things, there is only enough time in my remaining life to become an expert in two things at most. And that's assuming it doesn't take more hours or become more difficult or even impossible to become an expert in new things when you're much older. And, frankly, what's the use of becoming an expert in something in your mid 60s?! Just in time to finish the last bit of your life.

If only I'd known the value of time much earlier in life, so I could have jumpstarted that in my 30s and focused on more than one thing to become an expert in . . . you only realize that sort of thing when it is too late.

Aaaaaand now I'm bummed out. :)


I don't think that this rule (I don't know Gladwell's version, but I hear this often in Karate) is meant to say that nothing less than 10,000 hours is worth doing.

Why do you do anything, after all? Do you have to be an expert for it to be worth doing? If this were the case, certainly no one would be a parent. It's just a guideline to keep people from claiming expertise that they don't have yet, which is often a problem with people who have been doing things for a very short time.

Or for a specific example, I've been doing karate for 10 years very seriously. Overall, I've probably spent about 4,000 hours doing karate in classes. Of that, I've spent probably about 1,200 hours teaching, which I started doing after only maybe 800 hours of training. I've trained two students from white belt to black belt, and have introduced on the order of 100 students to karate.

But I'm not an expert. According to this rule I need about another 10 years to be one. Does that mean that I don't enjoy doing it? Or that me doing it doesn't contribute to the state of the art? Hardly. Being an expert and being able to contribute are not the same. I'm not going to have karate masters coming to me for advice. But I still have made the world a better place, in a small way, for ~100 people.

My father and mother both started new careers at 40, again at 45, again at 50, and now they're both moving to new things again at 53 (they were young when I was born). Neither of them will probably be experts at what they're working on in a technical sense, but they're still making substantial contributions to the world. Heck, my dad just got his very first journal article published this year in an area that he started working in at 50!

http://america.aljazeera.com/articles/2013/8/8/study-conflic...

In the end, don't do stuff because you want recognition as an expert, do it because you care. Expertise will follow, for the things that are worth it.


I love Josh Kaufman's new book: http://first20hours.com/

I find that in some ways Gladwell's book has the opposite effect: it makes me not even want to start something because of this "10,000 hour" rule.

At the end of the day, we don't need to be an expert in most things to be proficient. Josh's book lays this out in a nice way. On a more practical level, even 1 hour a day on something has a tremendous cumulative effect.

I believe the biggest gift we can give ourselves is to not lose our learning mindset. For those of us who have gone through college/university and gained the discipline to study, we tend to throw it out when we are done. In hindsight, that is the best skill that one should retain in their life. The ability to be disciplined and focus on studying/learning/doing new skills is what I am trying to continue to cultivate and I hope that when I am 50 I still have this mindset and discipline.


The 10,000 hour rule is only a metaphor. At best it should be treated as a reminder that mastering something requires a lot of effort.

But it's not like you spend 10,000 hours and then, at hour 10,000 and one second, you suddenly become an expert.

If you are deeply involved in something, working day and night, then within a year you will be close to swimming with the experts. Often that is all you need.


It makes me sad that interacting with a community has to come with disclaimers :x


I can't comment on the poisonous nature of the community, but what I have noticed is that at some point it's taken for granted that the open source solution will be very stable and solid.

It seems (and this is just entirely my opinion) that it is forgotten that people are doing this out of their own goodwill and are not necessarily being paid for it and really have no contractual obligation to keep doing what they are doing.

It's almost the worst of both worlds. You get all the responsibility for having your open source project be considered at the same level as a commercial solution but without it actually being a commercial solution.


I released something huge a while back:

https://github.com/zerotier/ZeroTierOne

I got some good feedback, but more than my fair share of "this is the stupidest fucking idea I've ever heard of" and such. Luckily I know to ignore pretty much all that shit.

I'm gonna tell it like it is:

There are a lot of people in tech who don't have great social skills. They're awkward, feel withdrawn, and have trouble dealing with other people generally.

One quick and easy solution some of them hit upon is to just be an asshole. If you're an asshole, people sort of sometimes defer to you or steer clear of you. It creates the illusion of power and influence while shielding a person from having to do any real work to improve themselves in that area.

If you think it's bad online and with OSS projects, try delving into the meat-space startup scene itself.


In my opinion, it's ridiculous to excuse people somehow on the basis that they don't have great social skills. Maybe that was the problem in the 90s. That really has nothing to do with it today. There are a lot of people in this scene who have plenty of social skills, they just choose to act like raging assholes. Some fraternity brother type in the startup scene is not necessarily any better than a nerd in this regard.


You can disagree if you want, but I think that someone who chooses to act like a raging asshole has a definite lack of social skills.

I'm sick of these two types of assholes, really. The first type is the "I'm an asshole online -- it's my persona."

The other is the hater / troll.

And I just have to think that both of those kinds of assholes result from a lack of social skills - as in, "I don't know how to get along with other people in society."

Damnit, I just want to code cool stuff and use other people's cool stuff. Can't we all just write cool stuff and help each other out?


Look closely. In many cases they don't actually have good social skills. That's the thing about being an asshole. It covers that up. Blind raging assertiveness is not good social skills any more than banging on something really hard is craftsmanship.


I have poor social skills, anxiety, etc. I can't afford to be an asshole; in fact, it's quite risky for me to take a stand even when I think I should. Someone who is consistently a jerk to others and yet has a good job is unlikely to have low social skills.


I think one contributing factor is that discernment, critical thinking, and even applied pessimism are all good qualities for a programmer. Many of us get grumpy because we are on high alert for such extended periods. We are looking for bugs and issues. We are objecting to proposals, finding fault, raising negative points. It's a critical mode of thinking, so we accept being critical. Or grumpy.

So when some other human is the source of the bug or blocking issue then we don't take care to be social and delicate. That's broken, that's badly built, hey that sucks. Or just channeling other pent up annoyances onto whoever is around.


> I just wanted to point out how poisonous our community is. It's something that I've been struggling with for a long time, and trying to slowly change (from inside the python community).

Do you mean to say that the Python community is poisonous? Or that the Ruby community is poisonous from the perspective of a Python community member?


I didn't put too much weight on whether it was Python or Ruby. I felt it was a statement about the whole open source community rather than a particular group.


Yea, that. The python bit was just a side note.

I went ahead and removed the python part, as it didn't really contribute to the message.


I thought the Ruby community was supposed to be nice? MINISWAN and all that?


Definitely! But the rails community prefers DHHIADSWAD, and I'll let you guess what the letters mean.


MINSWAN, you had an Extra I :p

In case anyone doesn't know it.

"Matz Is Nice So We Are nice." (Matz being the creator of Ruby and generally nice guy.)


I guess I wanted it to be Matz Is NIce So We Are Nice because it's more pronounceable.


Matz Is Nice And So We Are Nice.


It's "Matz Is Nice And So We Are Nice", because MINASWAN is a joke on the Japanese word for "everyone" (minasan). The -swan is like a super-cutesy suffix honorific thing.

... And yeah, it sucks that it feels like it's died a bit. But keep it up anyway; https://twitter.com/tenderlove is a great example to follow. :)


I don't understand why he or anyone, for that matter, would care what other people think if they can't articulate it in a civilized manner. Nasty emails are what the delete button was made for.


It's a cumulative effect: the individual e-mails can be laughed off, but you are still spending the energy to hit delete. It's e-mail without a spam filter of any sort. It's not the first e-mail that will get you, it's the thousandth. No one survives being told they suck thousands of times unless they build a hell of an attitude, and then we get projects that are "mean to newbies". A lot of legitimate newbie questions are indistinguishable from the opening volley of a troll.


I can totally understand why someone would still be emotionally drained from receiving lots of criticism, whether they intellectually know it's silly or not. It's hard to have the self-confidence and restraint to maintain a good attitude when you're surrounded by criticism.


We're social animals and we care what others think, at the DNA level. We're also evolved, and we can recognize what should be paid attention and what should not. But we can't entirely escape our DNA, and we vary from individual to individual on how far we can each escape. We're not code, we're wolves.


You can dismiss the infrequent and brief squealing in your ears, but once it becomes tinnitus, the incessant, constant high pitched tone makes it impossible to focus on anything.


I can empathize with much of what he wrote. I built a fairly popular site back in the late nineties that I only finally terminated in 2010. It had a very dedicated community of about 100,000 people and did many millions of dollars in business (not for my benefit - the service itself was free, on principle).

I spent so much of my life working on it. I wrote the software behind it, designed the presentation, handled the customer service, mediated disputes, handled the promotion, and helped with user-founded real-world gatherings built around the community on the site.

It required so much of my attention and, unfortunately, often was so hostile and venomous that the only reason I stuck with it for many of the later years was out of a sense of obligation to the community (many people made friendships through the site, met spouses, established real world businesses, and even put themselves through school thanks to the site) and a sense of obligation to myself . . . I put in all of my 20s and part of my 30s dedicated to the site and service. Hours and hours every day. I put in enough time that it was a second full time job -- not counting the $25k of cash I put in over the years.

I had real life stalkers via the site. I had harassment via my email, IM, and even via local authorities (with such a large community, there are bound to be a few crazies). Hell, even just my email inbox was depressing. For the last 3-5 years of the project, I felt sick every time I would visit my own site. Sicker when I would check my own email for it. To the point that I would go months without going to my website . . . and more without checking my email. At one point, it was so bad that my inbox had 1.1m messages (that is AFTER filtering out spam).

The only thing worse was considering shutting it down. It was a part of me. It was probably my biggest accomplishment at the time. A lot of people dream of building such a huge successful community from top to bottom with their own hands for so long, but never get anywhere close to achieving it. How could I give that up, no matter how much grief it caused me?

I'm not exactly sure what pushed me over the edge, but in 2010, I finally shuttered it. I felt bad for it. I still felt obligated to them and to myself . . . but I sensed I had to move on. And since then, I have had this great sense of relief. I have time for myself and time for other potential future projects. I am no longer forced to dismiss other opportunities, because of the obligation I had to this project. I no longer felt sick thinking of my email. I no longer felt like a prisoner. I felt like I had more control over each day of my life and what I did with it.

I have never been one of those people who sticks in a relationship just because leaving it would make all the invested time "feel like it was wasted". When a relationship goes bad, I move on and don't look back or even keep in contact. I'm good with that. This experience, though, gave me a little insight into all the people out there who have a hard time leaving relationships. Even unsatisfactory ones. Life is short and you only have so much time and energy to put into things. When you have given so much into one thing, it can feel like moving on is a bigger loss than sticking around. I totally get that, now.

But, in the end . . . it was a great choice. For him, it might also be the right choice. All of the "if only things were different" thoughts and all of the motivation to take the reins to change those things in the community are great, but . . . ultimately they can become a way of simply keeping you tied down to the very things that make you regret what you're doing. When that becomes clear, the choice to move on has to be at the forefront.


I was just wondering why you did all that without any benefit to yourself.

I have a feeling that anything you offer for free will often be treated as worthless. It's a problem with us humans: we associate quality with price, and we think that if something is expensive it must be of superior quality. Following that line of argument, something that comes for free is always going to be treated that way.

It also gives a good indication of how a person would spend his time. If you give away your work for free, how do you expect others to pay up or value your time? And look at it this way: if you put up with crap, people will assume you are perfectly OK with crap being thrown at you.

Take a tough stance and never climb down from there. At times that's harsh and rude to a lot of people. But ultimately that's the only thing that protects you.


The wisdom of Harry Browne comes to mind:

http://www.goodreads.com/book/show/82104.How_I_Found_Freedom...


Thanks for creating software which has been an immense service to the community, and which I rely on quite a bit.

Tangent mode on:

Somebody really, really needs to write the How To Deploy Rails Without Losing Your Sanity handbook. I will buy a copy. It will sell thousands.

A lot of the problems with people's interactions with Capistrano are environment/ops problems which have known solutions that work, but which rely on people having a great understanding of arcane trivia which is spread across conference presentations, blog posts, commit messages, and the practical experience of the best Rails teams. Unless you're prepared for an archaeological expedition every time you start a new Rails project, you're going to do something wrong. You should see the bubblegum and duct tape which I came up with, and it mostly works, but I know it is bubblegum and duct tape.

Example:

Non-deterministic deploys of code from (usually) un-tagged source control

I feel lucky in that I was mentored by an engineer who decided to teach me, one day, Why We Tag Shit. But for the Why We Tag Shit discussion, I would be like almost every other intermediate Rails engineer, and be totally ignorant of why that was a best practice until lack of it bit me in the keister, at which point the server is down and one has to rearchitect major parts of the deployment workflow to do things the right way. Why We Tag Shit is only about a 500 word discussion, but it's one piece of organic knowledge of the hundreds you need to do things right, and it is (to the best of my knowledge) not covered in docs/QuickStarts/etc because that seems to be out of the purview of the framework proper (I guess?).

I'm sure that I'm ignorant of several of the hundreds of pieces of things one needs to do to do deployment right, as evidenced by my fear every time I execute my deploy scripts. I, and I must assume many other companies, am willing to pay for an option which gets me to a non-bubblegum and duct tape outcome.

Seriously, folks: there is a product here.


I'll be downvoted by the "newer generation" but here's a pet peeve of mine

The same reason "why we tag shit" is the reason downloading packages from gem/pip is not enough

Who guarantees that when you do your deploy, the library you need is there? How many times did your build break because it had a glitch downloading the package?

Keep a way of rebuilding and deploying your software. All of it.
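
One low-effort way to do that with Bundler, if it fits your setup, is to vendor the gems so a deploy never has to reach rubygems.org at all. A minimal sketch (exact flags depend on your Bundler version):

    # cache every gem the app needs into vendor/cache, shipped with the code
    bundle package

    # on the server, install strictly from that local cache -- no network required
    bundle install --local --deployment
The same idea works for running a private mirror; it's just more moving parts.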


Mmm. I inherited a rather hardcore strain of liking entirely deterministic builds from being a Debian Developer and would never give up the dependability of it now. In fact, the thought of a build process connecting to the internet to download code just seems so utterly broken in my world view I actually find it difficult to explain it properly to others.

I'd love to be able to do it without embedding code copies, or at least have a smarter system than that.


Sounds like you might like https://github.com/heavenlyhash/mdm ... yummy hashes for determinism, good for binary blobs, embeds semantic links in git, and uses git transports so you can swap out network or local filesystem trivially.


But for the most part these build processes are for web apps... If your build can't connect to the internet, your web app isn't going to work (even if it can build).


Sure, but it's not like the entire Internet is up at once. If rubygems.org goes off-line, it shouldn't mean you can't deploy your Rails app.


Actually, your web server being able to make requests out on the open internet is considered a security liability.

If it lives behind a proxy, a web server being able to talk to anything but the proxy server and database is a matter of developer convenience.


Why does my personal (or intranet) webapp need an internet connection to work?


> I'd love to be able to do it without embedding code copies, or at least have a smarter system than that.

Maybe a directory on the server where the gems live, and a directory on the hard drive of your dev computer where the gems live - and then the build process finds the gems by a certain checksum or tag or something similar. That's an interesting idea you have.


This. People seem terrified of running their own gem or apt repositories to mirror their dependencies - or even to host their own code.

The dirty little secret is that these things are trivial to set up. Anyone whose build or deploy has been broken by github, rubygems.org, ruby-lang.org or even rvm.io falling over is Doing It Wrong. While I empathise, I have very little sympathy for people panicking when github gets DDoSed again.


Related: I've used this for running my own gem server for years. It really is trivial: https://github.com/geminabox/geminabox


It can be even simpler. I use Apache serving static files out of /var/www/gems, and a cron job to rebuild the index every 5 minutes if anything's changed. Caching gems is just `rsync -avz $GEM_HOME/cache <server>:/var/www/gems/`. Publishing internal gems I've built is just scp.
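
For anyone wanting to copy this: the cron half is basically just a `gem generate_index` run against the docroot. A rough sketch (paths and schedule are whatever suits you; I've left out the "only if something changed" check):

    # crontab entry: rebuild the RubyGems index under the docroot every 5 minutes
    */5 * * * * gem generate_index --directory /var/www/gems >/dev/null 2>&1
Clients then just add the box as a gem source, e.g. `gem sources --add http://gems.internal.example/` (hostname made up, obviously).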


Geminabox is simple until you want to run with a lot (500+) of gems. And then index-building is horrible - it basically never finishes.

The complexity comes in mirroring the right set of other people's gems, which match the exact combined set of all versions used in all your various projects.

There is not a good "pass-through" gem server that just automatically mirrors what it is asked for from RubyGems. There really, really should be.


> How many times did your build break because it had a glitch downloading the package?

This happens very infrequently in my experience, but it's simple on most CI servers to restart a build.


I also wouldn't normally expect a problem here, but this actually happened to me yesterday. Some 3rd-order transitive dependency resulted in trying to download a file from https://rubygems.org, rather than http. Apparently my SSL certs were out of date (on both dev and prod) and so I had to spend a few hours Googling how to make RVM + Ruby Gems update my certs.

It's really easy to write these kinds of things off as rare occurrences, but this is exactly the kind of thing that causes us to talk about "fear" when deploying.

Edit: solution was to run 'rvm osx-ssl-certs update all' on OSX, which is unsettling, as it installs certs all over the disk. Then on Ubuntu it was "yum update openssl", I believe.


Nitpicking: yum on Ubuntu is highly unlikely (it uses .deb packages and apt-get). Yum is what you'd use on an RPM-based distribution (Fedora, CentOS).


You're correct, it was apt-get. I also have a Centos image around, so I have yum on the brain.


I've got this problem on a machine at the moment, what was the solution?


[deleted]


-- Edit: parent was deleted; the suggestion was to use http instead of https. --

Security nerd mode activated; solutions like this make me a little twitchy, even when I have to employ them myself.

At the risk of stating something you already know, for the sake of pedantry the security implications of this fix are (at least) as follows:

- If you're checking the signatures of the packages you're downloading, this is probably OK, since even if an attacker spoofed your DNS to route to her own package archive, she would still have to compromise the package signing key to run her code on your system. On top of that, if you're using a hosting/PAAS provider, she'd have to compromise their DNS infrastructure first as well.

- If you're not checking package signatures, then hopefully your system doesn't have any "interesting" information (including username/password combinations that might be useful on your or other sites). The hosting/PAAS provider DNS system is still a barrier, but now you're down _two_ of the protections on the chain of code executing in your name.

As always, there are multiple-order-of-magnitude differences in the amount of effort any given element of security is worth; the above fix might be just fine for 99% of applications, while for the remaining 1% some extra thought would be worthwhile. TBH I have no idea how common such "code hijacking" attacks are in practice -- if any "real" security professionals have that info, I'd be curious to hear your thoughts.

Offered in the spirit of helping folks with managers asking "why can't we just turn off SSL?"


Wish grandparent had not deleted his reply; makes it look like I'm the one suggesting turning off ssl. :-(


If it happens even one time while you are in the middle of trying to resolve some "emergency", then it has happened one time too many. Customers don't care that you weren't thoughtful enough to keep a mirror of everything you were using.

I remember my first brush with this when Maven first started getting popular. Apache had to move their servers to a new datacenter and the server that had the drives for the main maven repo was lost in transit. It took five days to bring everything back up. During that whole time, nobody could build anything unless they had local copies.

Ever since then, I'm pathological about ensuring that I have a local mirror of all my dependencies.


I'm pretty sure that wasn't the point - however infrequently it occurs, it violates the principle that only changes introduced since the previous build could have made it fail.

In practical terms, this means that you never know whether change X is faulty or simply the world is temporarily faulty.

(Of course, this may or may not be useful for you to know.)


True, but a build could arbitrarily fail for any unexpected reason. Pragmatism is important; I'd avoid adding an inconvenient step to my build/deploy process just to mitigate an infrequent edge case.


> a build could arbitrarily fail for any unexpected reason.

Good engineering is the attempt to prove that statement wrong.


Infrequent (I don't remember it ever happening to me during the hundreds or thousands of deploys I've made with Capistrano) and mostly harmless.

If the deploy fails for any reason, it's aborted; you just have to run it again.


What do you do if the package servers go down for a couple of days? "Oops! Guess I can't deploy. Oh well!"


Maybe my brain is wired differently, but having to tell a new hire "Oh yeah, sometimes the build randomly fails and we just press retry until it works" would just feel kinda embarrassing to me.


That's not what I said. The point is that one might as well say it never happens, given how rarely it happens in practice. I have only had it happen once in recent memory, and even then it was because the hosted CI we use was experiencing issues.


Yeah, I'm right there with you.

Rubygems is a wonderful and powerful tool for getting libraries set up and adding new code. It's not really suited for putting things into production, especially when people think pulling old versions (for whatever reason) is acceptable behavior.

At work we install and set up libraries on development machines using bundler/rubygems, then go in and version lock everything after we know we'll be using it. Building a release-worthy version involves repackaging all of the gems as rpms so they can be installed along with all of our other software when building the new (virtual) server.
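
(For the curious: a tool like fpm can do that kind of gem-to-rpm repackaging. This is a generic sketch rather than our exact invocation, and the gem name is just an example:)

    # build an installable rpm straight from a published gem
    fpm -s gem -t rpm rails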

It's a pain in the ass, yes, but we have never had library issues crop up in production. I know exactly what is on the system, and fixing anything is an obvious and straightforward process.

You don't have to make things this formal/official, but it's pretty easy to achieve library version updates and know that what's on your server will work correctly at the same time.


For high availability you would be a fool not to either run your own mirror or package everything you need into a single bundle.


I'm about a third of the way through writing exactly that, covering deployment (Capistrano) and basic server provisioning (Chef), specifically for Rails apps. If you (or anyone) are interested, my email/Twitter is in my profile. Aiming to have it out end of 2013/start of 2014, but I'd be happy to release it to a few people early for feedback.


Capistrano, what a Byzantine way to deploy software; I haven't typed cap deploy in about 3 years. Heroku's model of Git-based deployment and 12-factor apps has never given me a problem. Also Heroku deploys are de facto tagged as Releases (https://devcenter.heroku.com/articles/releases). I can't wait for Flynn (http://flynn.io/) to be released, it provides a Heroku-esque Git-based deployment strategy on any host.

P.S. I enjoy reading your comments as do many others, but I feel you hand-wave over important points sometimes, e.g. Why you feel tagging is important. If it truly is a 500 word discussion it shouldn't be hard to synopsise.


It is tangential to the point of the comment, since this only gets you to 0.001% of a Rails deployment which works, but if you're interested:

Tags, plus sufficiently advanced dependency management and database migrations (n.b. not trivial!), allow you to track and reproduce past states of the deployed software.

For example: A customer reports a problem. You verify the problem exists. The customer reports that the system didn't have the problem last week Tuesday. If you want to know why, tagged releases (and a record of them -- incidentally, another one of the 643 things to know is "All deploys should be recorded somewhere") let you travel in time back to the software in use last Tuesday, verify if the customer's recollection is accurate (well, sorta), and then compare just what has changed rather than trying to run down the bug without any historical context.

Naming things also lets you talk about them, which is helpful, because you'll want to talk about individual releases of the software. (I mean, sure, you could use random hexadecimal numbers, but for whatever reason people find production_release_20131007_3 a bit more informative than 066630de00d242137efab6bf21b8ea04aeee7a1d. One glance at the first one tells you that it's the 3rd deploy from October 7th, assuming you've used the company's naming convention for more than a minute. The second one tells you nothing and is difficult to pull useful information out of without having to manually cross-tabulate it with your git repository, which would be unfortunate if it were e.g. attached to a log file, customer support request, or Wiki article about known-good releases for interacting with external systems.)
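
Concretely, once releases are tagged, the "what changed since last Tuesday" question becomes a one-liner (the tag names here just follow the hypothetical convention above):

    # everything that changed between last Tuesday's release and today's
    git diff production_release_20131001_1..production_release_20131007_3

    # or just the commits, for a quick skim
    git log --oneline production_release_20131001_1..production_release_20131007_3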


This is where triggering deploys through Jenkins is really helpful. Now you have a record of which SHA was deployed, when it was deployed and a copy of the console log for the deploy. Can't do much better than that.


Thanks. It's not for my benefit; it's for the engineers who might not know better and blindly follow something you say because of your standing in this community. Cargo-culting is not a good habit to encourage in software engineering.

FWIW dates in tags are redundant because the tag itself has a time stamp. Re: environment names in tags, IMO they don't belong in the repo.

Thanks for replying.


> FWIW dates in tags are redundant because the tag itself has a time stamp.

Having them in the tagname makes it much easier and more visible.


Times are in the git log. Same as the commit message/tag name/etc.


It's still much easier to scroll down a list of tags and find the right one by date than to look through the git log and look at both a tag name and timestamp.
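
(And if the list of tags itself gets unwieldy, you can narrow it without going near the log; the pattern here assumes a naming convention like the one upthread:)

    # only the production release tags, sorted chronologically by the date baked into the name
    git tag -l 'production_release_2013*'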


Personally I find git log easier to handle than an enormous list of tags.


"Good judgement comes from experience; experience comes from bad judgement."


  after "deploy:restart", "deploy:tag_release"

  desc 'Tag release'
  task :tag_release do
    `git checkout #{rails_env}`
    `git tag #{rails_env}_#{DateTime.now.strftime "%Y_%m_%d-%H_%M"}`
    `git push --tags`
    `git checkout master`
  end
Boom!


That's not going to match the release name that capistrano chose, right? Seems like you'd want to use latest_release or something.


We do the inverse. We tag first, then instruct Capistrano to grab the latest release (with a prompt). We have a release plan though, where software goes through QA before it's tagged. Only software ready for production gets a tag. Everything up to that point is done on a release branch.

    set :branch do
      default_tag = `git tag`.split("\n").last

      tag = Capistrano::CLI.ui.ask "Tag to deploy (make sure to push the tag first): [#{default_tag}] "
      tag = default_tag if tag.empty?
      tag
    end


Yeah, modify to taste. I only wanted to post some example code to show how easy it is to automate tagging.

My code won't be appropriate if you don't use master/staging/production branches, either.


Just use git flow for releases. For non-trivial deploys, you may want to check out teamcity.


> Capistrano, what a Byzantine way to deploy software...

Why would you come here to say that? Capistrano is one way to deploy software. Heroku's method is another. Neither is wrong.

This kind of thing is why authors of open source tools like Capistrano get burnt out and just want to run for the hills. Because people with no skin in the game have to come along and shoot their mouth off.

You're probably a really nice person, but when you do things like this, you hurt other people and you make yourself look like an asshole.


Well said. Heroku-style deploys are not a replacement for Capistrano-style deploys, merely an alternative. Capistrano is a much lower-level tool. I also use Capistrano for general sysadmin tasks that span multiple servers.


Is it really Byzantine? I've never had that thought. Once it's set up, you just cap deploy - how is this much different from git push heroku? To do anything complex with PaaS stuff you still have to write code to set up the environment - likely no more than for cap.


There definitely is a product here, and it's already being built. Docker is the underlying framework for which we'll see significant improvement for deploys in the near future.

In the meantime, and if you're on AWS, check out OpsWorks. It's the cat's pajamas. It makes much of the hassle of deploying a thing of the past, is extremely flexible, and makes it possible to have Heroku-like "git push" deploys without giving up control (or being forced into Beanstalk's opinionated ways). Amazon made a very smart purchase when they bought Scalarium.


You don't happen to have a good online reference on Why We Tag Shit, do you?


Not exactly a product, nor wholly Capistrano-centric, but I recently put together an article on how I deploy Rails on a new VM from scratch: http://robmclarty.com/blog/how-to-deploy-a-rails-4-app-with-...


This is one of the biggest gotchas of (some of) the high productivity full stack frameworks that are out there, in my experience. I've also had issues getting Meteor deployed, and while it's in many ways a smaller, simpler beast than Rails, it has the same kind of black magic and hitting-a-wall-at-100mph deployment experience.

Both Rails and Meteor will let you develop apps at breakneck speed, with their powerful, magic cores and their excellent module ecosystems, but when you get to deploying them you'll sometimes wish you'd stayed on LAMP or (god forbid) ASP.NET. Yes, I really said that.


I'm glad you said that. I'm seriously considering moving from RoR to the ASP.NET/C# platform.

We're spending $50k-$100k/yr on Linux system administration of all the small and big moving parts that always fall apart and require constant babysitting.

I want to give MSFT all this money and use MSVS, C#, ASP.NET and Azure for everything. I love MSVS and always loved MSVC (back in the day). I love the performance of C{++|#}.

I don't feel inspired to build great SaaS solutions with shitty interpreted languages, being forced to use made-in-the-basement plugins (or gems) and all the trailing crap of sluggishness, upgrades, fixes and weird behaviors that follows.

Now you all may downvote me!


I'm the OP of the mailing list post, and have maintained Capistrano for the last 5 years. I'm passionate about providing great open source tools, my business and reputation are built on Capistrano and I don't want to give it up, but it's destroying me.


Maybe you need to take some time off completely if you're this burned out. Go travel and come back to software and life in a few months to a year. Maybe you'll be inspired to start something new, or maybe there will be something completely different you'll want to do.

You'll always be the creator of Capistrano and for this your reputation will live on. If you've got clients that love you, your business will also thrive. I think your fear of letting go is probably unwarranted, and in any event your happiness should come first.


Have to second this; your love and dedication is too much to lose, both for the community and for yourself. Take a break, ignore the haters, stay happy.


I just want to say that I love Capistrano. Even though I don't use it all the time, I still think it's a really great way of deploying when you aren't in a container-based workflow and you just want a quick way of running remote commands from your local machine.

As for your comment, "Ruby is pathologically difficult to install correctly on modern Linux distributions", I disagree wholeheartedly with this. It may be more difficult to install than a simple `apt-get`, but if you're used to compiling from source and you know what the hard dependencies are, it's really not a huge problem. ruby-install and ruby-build of course make this easier but those tools don't require the use of RVM or rbenv, especially on a machine where you know there will only be one Ruby. However, you're correct in the assumption that Ruby could be a lot easier to install if the language creators would move towards that. That said, I think there are a lot of people who are unfairly blaming you for their own misunderstandings of what these tools do and how they are used. For example, chruby or rbenv is absolutely vital on my dev machine because I work with projects using different versions of Ruby. But it's just bad practice to have a production box running multiple versions of Ruby, in my opinion.

I really feel that the users you've interacted with may have been frustrated by their own misunderstanding of the tools they're using, because it's really not as hard as a lot of people make it out to be...


Ruby is awful to install for production.


I compile Ruby from source for all my deployments (Debian based). I only need to apt install a handful of build deps and the build goes off without a hitch every time.

Can someone direct me to the common complaints, or outline the common complaints here?


Your method is sane; the problem is with RVM and rbenv, which everyone uses and which depend on hacky implicit shell BS (rbenv less so, but it's still there). Building from source and using the full path to the interpreter is the only sane method IMO.
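
In practice that just means nothing on the box relies on shell init files to find Ruby; a cron entry or init script calls the interpreter you built by its absolute path. A minimal sketch (the script path is made up):

    # crontab entry: no rvm/rbenv shims, no PATH games, just the binary you built
    */10 * * * * /usr/local/bin/ruby /var/www/app/script/cleanup.rb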


That's the only thing I can figure. While sitting on the couch this evening, I put this minimal build script [1] together:

https://gist.github.com/bradland/6861980

That's 9 lines of shell if you exclude comments, and includes the MD5 check. I've updated these commands a few times as Ruby has advanced, but most of the changes come with my distro updates. For example, the Debian 7 release required an update to most of the libs that Ruby depends on. Otherwise, it's been relatively trouble free.

1: It's not really a build script. It's a listing of the shell commands you'd need to install Ruby manually. I extracted them from my build scripts.


I think the fundamental problem is that on your laptop you use the system's package manager to manage gems and dependencies, while on the server you suddenly are forced to use RVM.

Here's my take on using rbenv to have the same kind of environment on your laptop and the server - damn fast : http://www.lambdacurry.com/damn-fast-production-ready-ruby-s...


> ...while on the server you suddenly are forced to use RVM.

Why are you forced to use RVM on the server? A great number of the issues people face when trying to deploy Rails apps are environment based. Tools like rbenv and RVM are environment managers. They make all kinds of changes to things like PATH, GEM_HOME, etc.

In a production server environment, you want consistency and simplicity. Yes, you can use tools like rbenv or RVM, but you add layers of environment management on top of the base server environment. When you use these tools to manage Ruby on your server, any process or daemon that needs Ruby will first have to ensure that these tools are loaded and invoked properly. It's a layer of indirection that I feel a strong incentive to avoid.

Don't get me wrong. I'm not saying they're not good tools. Quite the opposite. I'm a huge fan of RVM, rbenv, and chruby. I have a deep appreciation of their contribution to the Ruby ecosystem, but I'm one guy. I have a reasonable understanding of Linux system administration, but the more layers of complexity I add, the deeper the water gets. At some point, I'm in over my head.

There are two really great options for installing Ruby on your production servers:

Note: Most Linux distributions give /usr/local/bin priority in PATH by default, which is why I use it instead of /opt. Using /usr/local/bin means you don't need to modify PATH.

1) Use ruby-build [1]. This is the plugin that rbenv uses to install Ruby, and you can use it without rbenv. Once it is installed, this simple invocation will get you a working Ruby install:

    ruby-build '2.0.0-p247' '/usr/local'
Because of the reasons outlined in the note above, this type of Ruby install will "just work", even for utilities that shell out with bare shells like `sh -c 'some command here'`. This is because we installed to paths that work with the bare minimum environment.

2) Install from source. In the example above, ruby-build really isn't doing all that much. You still have to have the basic deps (build-essential, zlib, ssl, readline) in place in order to build Ruby, and you have to remain aware of any caveats introduced by new OS or Ruby releases.

I use option 2.

A great way to keep up with what's required to install Ruby on your system is to look at what tools like rbenv and RVM are doing under the hood. These tools act as collectors for the little hacks and workarounds (e.g., ref URL 2 below) that are occasionally required to install Ruby. If you're running a mainstream distro on a recent release, you really shouldn't need much in the way of hacks/workarounds these days. OS X is where a lot of the trouble lies.

1: https://github.com/sstephenson/ruby-build

2: https://github.com/sstephenson/ruby-build/blob/master/bin/ru...

EDIT: Above I said, "[ruby-build] isn't doing all that much", which isn't entirely fair. Ruby-build does do some really nice things like check MD5 hashes. It also checks for common caveats, which I kind of played off. I do the work of looking out for these things, but I'd certainly understand why someone wouldn't want the hassle, and ruby-build does a fantastic job of keeping up with these hassles and automating them away. In short, I want to be crystal clear that I am super appreciative of the work that the rbenv and RVM authors do. They are a tremendous asset to the community.


I read this from time to time and I don't get it. This is literally what I do on Ubuntu LTS:

  sudo apt-get install build-essential libssl-dev zlib1g-dev
  wget http://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p0.tar.gz
  tar xvfz ruby-2.0.0-p0.tar.gz
  cd ruby-2.0.0-p0
  ./configure && make && sudo make install
This is not hard, and it's never caused me any kind of trouble.


A few months ago, I ran into psych/yaml problems doing just the above because I didn't have the right yaml libraries installed.

Also, if you did that and also did something like "apt-get install chef" or something else that installs the system ruby, you will have two copies of Ruby installed and will need to make sure your $PATH is correct.
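
A quick way to sanity-check that situation (plain shell, nothing Ruby-specific):

    # list every ruby on the PATH in lookup order, then confirm which one wins
    type -a ruby
    ruby -v
    gem env home    # the gem directory that winning ruby actually uses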


I just started learning Chef, and all I can say, vis-a-vis Ruby, is "F%!#, what a nightmare". Now, to be fair, the version of Ubuntu I was on was a bit out of date, and I'm not a "Ruby guy" so I knew bugger all about rbenv, RVM and the like going in. But that was downright painful.

I don't know what life is like for Rails devs, but just getting Chef up and running is painful enough that it killed any latent interest I might have had in ever diving into Ruby.


What was painful about it? Chef provides Debian packages. I added their repository, then apt-get install chef, and it was done. I use chef-solo, and every time it just worked. What sorts of problems did you experience?


I was following one of their tutorials and it had you doing things like using rbenv and bundler, and none of that crap worked right. I don't remember the specific error messages, but it was all crap like "Can't load Gem <such and such>" or "can't load required file <blah>", etc, etc.

In that case, part of the problem was documentation (as far as I can tell in retrospect, I never really needed anything other than the Ruby 1.8.7 I already had installed), but just being exposed to rbenv and the mess of that world left a bad taste in my mouth.

Of course, it's not just Ruby. I have no use for the similar tools for any environment, that purport to let you keep multiple versions installed simultaneously. Whether it's for Python, Groovy, Ruby, whatever. I've come to the conclusion that it's just plain A Bad Idea to do that sort of thing.


Maybe that tutorial was old. Use their Debian packages. It bundles a private Ruby that is only used for running Chef. It always works and never breaks.


Interesting... I guess I didn't find any packages that bundle a private Ruby, or if it does, it does it behind the scenes and I just don't know it. But what I'm fighting with now is installing some Gems that one of the tutorials has you install... when I run "bundle install", one of the Gems flakes out because it requires Ruby > 1.9.1, and doing "apt-get install rubygems" earlier apparently made Ruby 1.8.7 the default on this box again.

Ran "update-alternatives" and set Ruby back to 1.9.3 and now when I try to install the Gems, I get failures like:

"in `require': cannot load such file -- mkmf (LoadError)"

I think I'm making progress, but man, this is frustrating.

To be fair though, the basic Chef install is done and working, and I can run "knife client list" and see my clients. It's just that I'm trying to get through this EC2 tutorial and it has you installing this specific list of Gems, and that's where the problems are now. sigh


Yes, I can see why installing Ruby is frustrating with those kinds of errors. The thing is, Debian split Ruby into multiple packages, but made a lot of them not installed by default, which causes a lot of headaches for users. The mkmf thing is supposed to be a standard part of Ruby, and most developers assume it is always available, but Debian put it in `ruby-dev`, I think, so you get that error if you don't `apt-get install ruby-dev`.

If you're not a Rubyist and just want to use Chef, Chef's own Debian packages work great. You can ignore RubyGems and update-alternatives and other stuff; Chef's Debian packages provide everything you need.

I think that Chef should go through the tutorials and clean up the old things.


Being in a similar situation to you I was grossly confused with Ruby until I ran update-alternatives, then things started to make more sense.


Yes, you do have to know your deps, but the deps don't change very frequently. Common Ruby build tools will tell you what the deps are for your operating system, even if you're not going to use them in production (I don't).

As far as system Rubies go, I try to keep things simple. I use Debian for my systems, so I know that `/usr/local/bin` will always be first in my path, so that's where I install.

This falls far short of "awful" in my opinion. There are many pieces of software that are far more difficult to compile from source.


Let me reply to myself. Try to automate a Ruby installation on Debian without using a PPA (like the awesome service that Brightbox is providing everyone at http://wiki.brightbox.co.uk/docs:ruby-ng ) or installing a compiler on your production server.

I don't have a problem with Ruby, I have a problem with its clusterfuck of programs that reinvent the package management wheel and assume that you're going to send your work over to magicland where all of the Ops wizards will make it into a working product.

The reason it's that way is because most of these people develop on Macs then hand off to someone to deploy on Linux. They don't know anything but web dev, and with PaaS like Heroku, they're never likely to. My warning, though: if the reason you're on a PaaS is because you have no idea of how to deploy, you're going to be paying an ignorance tax. Make Ruby easier to deploy, and PaaS will end up cheaper.

edit: I have no problem with rvm (or virtualenv) as a container to run different versions of a language and dependencies on the same server. To use those tools to run a single system version for a single application would be silly, but since the tools are completely broken for doing that anyway, it's simply impossible to do and to retain dark, lush hair at the same time.


It might be a problem with your OS/distribution if you feel like it is awful to use in production. FreeBSD, for example, provides an excellent Ruby version in its package repository (and can have all of them installed: 1.8, 1.9, 2.0, Rubinius, JRuby, etc.), and it's very easy to manage. This is true, I am sure, for many Linux distributions as well, and many other OSes. I don't see how Ruby can be blamed here.


Hi Lee -

Our company also uses Cap extensively and we really appreciate all your hard work building and maintaining it. I totally feel your pain reading responses about bugs, test coverage, etc. I urge you to consider whether you can get a couple of trusted allies or colleagues involved in putting Cap 3 out. I think it could be a huge boon to those of us who like the idea of modularized code, and perhaps you can find a buffer or a person who is willing to field some or all of the influx of bug reports (directly ask the community if so!)


Tom `seenmyfate` Clements has been helping to support me over the last three months when I've been very burned out, and has really taken my vision for v3 and made it reality where I wouldn't have been able to. He'll always be around taking care of Capistrano as his company grants him some 20% free time to work on it, I trust him implicitly, and he already has deploy and unregulated commit access.

Tom, if you are on HN, you're the only reason this project has been able to come as far as it has. Thank you.


It's been a pleasure, I'm really proud of what we've built so far. I hope that following a long and well deserved break you'll find yourself itching to return. No-one can deny you've gone above and beyond with a very demanding project, maybe with v2 retired and some additional support the overhead will come down to a much more manageable level.


Lee - We're a small dev shop in the greater Boston area and we use Capistrano exclusively for our deployment process. Frankly, I'm blown away that you've taken this as far as you already have, and completely understand your stance on the community at-large. I think I'd have to echo others before me in saying that sometimes you just need to step back and grab a cocktail. Take a few months away for the sake of self preservation. It's only code :)

Thanks for all you've done.


if you can't find a successor - just keep in mind that people don't tend to step up unless there is a vacuum. if you were to step down, someone would step up, especially in a community as large as the one you're in.

if it's destroying you, leave. go on a long vacation. it's not worth it. there's no shame or guilt in taking a break or passing on your work.


We use capistrano at Shopify for our main rails project and countless smaller projects. Thanks for your efforts.


Lee, we use Capistrano at Scribd for our main Rails app, QA servers and countless other Ruby/Java services. In a world of complaints, we haven't had a single complaint about Capistrano. Thank you.


Just wanted to add my thanks for such a great tool. I've been using cap since the rails 1.x days and it's always worked for me like a charm - I'm so grateful you stepped up to the plate to create & maintain it.

Maybe you can find a way to delegate the bits you don't like about maintaining Capistrano so you don't need to directly manage it yourself? This is probably naive, and maybe you are doing this already with v3, but it occurs to me that if Rails is the main cause of the headaches, perhaps you can split that part into a cap-rails gem that is separately managed; then you could hand over the reins to someone else.

Anyways, as mentioned, thank you for a great tool - I very much hope you find a solution that helps you feel good about the situation.


As I have used Capistrano on many projects, I want to thank you for your hard work and assure you that it has been much appreciated.


Kickstarter?

I mean, for something as operationally critical as Capistrano, why don't you seriously raise money on Kickstarter, outsource some of the test cases, etc., and be more productive overall?

And please feel free to raise money to fix RubyGems as well ;)


I don't know how this would vibe in the open source community, but I'd definitely donate to help out a Capistrano project.


Running a Kickstarter project is not recommended for someone close to burning out.


Thank you so much for Capistrano. We use it on a daily basis (several times) and built http://capo.io to make it easier to re-use deploy scripts.

I'm going to take a closer look into the source code and try to help out with issues. It's something I've been wanting to do for a while, so I hope I can at least help out a little bit here.


Thank you for all of your hard work on Capistrano. We use cap extensively at Simply Measured... as well as a "TurboGC" version of Ruby on an LTS version of Linux (hah!). Perhaps in the future something like containers will provide a better way forward, but for now cap is irreplaceable.


Let's say that you decide to continue. I think that you should get advice from elsewhere than here, from someone whose professional expertise is in the emotions.

Before everyone leaps to conclusions, I'd better clarify - I am not saying there's anything wrong with you - I am not excusing the crass behavior which is getting you down.

What I'm saying is that there are almost certainly skills you can learn which will stop this getting to you. I don't know exactly what they are, and most people here don't either (although they will passionately sell you the hammer which worked for their particular nail). Ask an expert.


Hey Lee, just wanted to say thanks for making Capistrano. It's awesome and it's made the world a better place.


Hey Lee,

I have yet to look into the v3 stuff closely, but if you think it is better, why not make it mainstream? Is the main reason just community backlash?


As I wrote in the mailing list post, it'll be made mainstream this week, as I think it's a general improvement in all areas: at the very least it's faster, easier to debug, looks and works better, and has better out-of-the-box compatibility with Rails (3 and 4).


Just to say, I love Capistrano - it's a great product that I use almost every day. I introduced it at one place I worked and it took deployment time from painstaking hours to minutes.

Thanks very much for your efforts and I am sorry you feel the need to step down from maintenance.


Thank you so very much for your efforts. We use Capistrano at Bigcommerce to deploy our apps and components, and it's saved us from an inevitable pile of shell scripts. :-)


I have never commented anywhere about Capistrano. But, it has been part of a toolchain that has brought me success and improved my life. Thank you very much for all you have done.


Some unsolicited advice from someone who's never run an open source project as popular as Capistrano:

* Ditch v2 ASAP (seems like you've already decided on this). It's pretty obvious you aren't motivated to work on that codebase anymore. I've looked at v3 and it's much better thanks to relying on Rake tasks.

* Be selfish. It's your project so if you think v3 is the way to go forward, go with it and who cares what the "community" thinks.

* Seems like you already have a few people helping out, so continue and maybe make a formal "core" team. There's nothing wrong with taking a step back from the heavy coding yourself. But I believe that Capistrano would be better with your guidance than without it.

codebeaker: There was no mention of Harrow in that post. Are you still working on that? I'd assume that if you were you'd continue work on Capistrano since it's based on it.


> Be selfish. It's your project so if you think v3 is the way to go forward, go with it and who cares what the "community" thinks.

Or call the community's bluff. "I think v3 is the way forward, so that's what I'm going to be working on. If you want to stick with v2, you maintain it."


Seriously. It's open source, that's what open source is for.


And then prepare to be getting accused of elitism, being selfish, not caring about users, crippling their system, etc. For a good example, see how people treat Lennart (author of systemd).


and the author of pulseaudio and avahi. He's taken a lot of abuse over the years.


There's a great Japanese term, omakase, it means "fuck the community I'll do what I like".


Regarding Harrow [1] bretthopper, I absolutely am, and this is where I want to focus my work, and to be able to build a company around deployment best practices and Capistrano.

Regardless of frameworks, and of technologies and platforms, I believe if I can take the load off myself with Capistrano, by turning it into the open source component of a best-practice company, and build teams of passionate, skilled support engineers, then I'll be where I want to be.

Unfortunately Harrow would suffer if I give up on Capistrano, as part of the promise of Harrow as a SaaS is that it's all guaranteed to stay compatible, and work as we've all come to expect, just with improved workflow, etc...

([1] http://www.harrow.io, please excuse the missing graphic placeholder; I've not updated the landing page in some time as I've been focusing on building the product, and the landing page performs really well without that graphic in place)


Lee, why not try the sidekiq way? A capistrano pro, and a hosted managed version or maintenance fee...


Also, charge for support. List supported platforms and charge for support for strange configurations.


For a really long time, Capistrano v2 has been moving forward exclusively via pull requests, with next to no new development, while Lee worked on v3 on a separate branch, which looks like a rewrite.

As a result, various releases of v2 were buggy. Capistrano is a hard application to test, agreed, but its test coverage is plainly woeful.

About 6 months back, when the 2.4.12 release was broken (https://github.com/capistrano/capistrano/issues/434), I suggested removing the asset pre-compilation stuff from Capistrano. Capistrano is a general-purpose tool; at the company where I work we use it for deploying Java, PHP, Ruby and all sorts of stuff. I don't understand why it should have poorly tested asset pre-compilation things built in.

I don't know what made Lee work on a rewrite. I can only imagine how difficult it must have been for him to work on something so big singlehandedly while running a company.

His last point, about using RVM, rbenv, etc. in production, is very valid. I don't know why people do that. Does it really make things easier? Aren't people aware of something like https://launchpad.net/~brightbox/+archive/ruby-ng ?


The motivation behind the rewrite was in order to bring the project to a state where people who weren't intimately familiar with the internals would be able to contribute.

The rewrite is a ground-up re-think, but it's leaning on the best of what the open source community has come up with in the last 5 years, and leaning heavily on all the best practices we've learned as a community.

The rewrite was also a way for me to say "this isn't a rails tool anymore" (of course, those of us who knew Capistrano well could always cut out the core railsisms and use it for deploying pretty much anything). And a way to say "look, this tool isn't magic, it's an orchestration tool that glues together some other libraries".

Part of the rewrite was to split things into components, so that when people have a version that works for them, bug fixing changes to rvm or rbenv, or other extensions don't have to risk breaking the core functionality. Stability through modularity.

It was also a way for me to pay my envisaged debt to society for the good things that being the custodian of such a widely used project has brought me.

Many of the v2 releases were broken as I tried to let two people help me with maintainership; both of them went a little nuts accepting pull requests, and a lot of things got merged that probably weren't quite up to scratch, or caused subtle problems, which is a huge problem for Capistrano.

I'm grateful to both of them for their bravery, and willingness to try and help, but maintaining it has been such a hard task. A balance between maintaining compatibility, and not breaking people's production environments, and pushing the tool forwards, and improving it.


The reason I use RVM in production is that it makes it trivially easy to:

1) Use multiple parallel Ruby versions for transitions and rollbacks - for example, when we upgraded from Ruby 1.9 to 2.0, we installed 2.0 via RVM, ran our staging environment on 2.0 while production ran on 1.9. The instant-rollback safety net is great, because if we run into some obscure bug on one VM, we just change our RVM environment config var and deploy and we're back to known-good in seconds with no downtime.

2) Run applications that require different Ruby version levels. For example, Puppet 2.7 wasn't compatible with Ruby 1.9+, but our app was running on 1.9. With a single system install, we'd not be able to run both.

The point about stepping out-of-bounds in regards to LTS distros is extremely valid, but at least in our case, it's not about "just effin' compile already!", but rather about flexibility and resilience - by using RVM, we absorb the burden for making sure things work rather than delegating it to our distro, but it's worth it for the things we pick up.
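
For illustration, the knob we flip looks roughly like this (a sketch only, using the rvm-capistrano plugin; the env var name and Ruby versions are just examples):

    # config/deploy.rb -- sketch, assumes a system-wide RVM install
    require "rvm/capistrano"
    set :rvm_type, :system
    # flip this value (or the env var) to move staging/production between rubies
    set :rvm_ruby_string, ENV["RVM_RUBY"] || "ruby-1.9.3-p448"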


I would recommend using chruby, or just ruby-install and $PATH (chruby just changes the path variable), for those things since it is less invasive.

Personally I always try to run from apt in production.


+1 to this. I wrote up how to build a ruby .deb from ruby-install here: http://www.blackkettle.org/blog/2013/08/26/how-i-ruby-part-2...


My first thought was "I owe this guy, Capistrano is the main reason why I have spent ~ $100 per year on VPS servers and not $100 per MONTH on Heroku et al."

I'd suggest Lee runs a Kickstarter type thing and I'd very happily throw in $100. But I don't think he will because it doesn't seem quite right.

So here's a (wild and completely off the cuff) startup idea - a pre-emptive Kickstarter. Someone creates the project "Lee Hambley, continue working on Capistrano." and we all pledge into the pot. If Lee agrees to do it, he gets the money. If not, we don't pay anything.


Or just give them money on something like gittip. Things like this need to be sustainable, not just a flash in the pan of $10k, then going back to having nothing.


Really looking forward to Docker being 1.0.

What you want to do is build a single package of everything your application needs (which includes the application code and all dependencies -- libc and up), then copy that package to the production servers.

It shouldn't matter if your application server has Ruby 1.9.3 and you need 2.0.

It shouldn't matter if the last deploy of your app needs Nokogiri compiled against libxml 2.8 and you now need 2.9.

It shouldn't matter if you are running 5 different apps with 5 completely different set of dependencies on the same machine.

It shouldn't matter if you need to use the asset pipeline.

It shouldn't matter if github or rubygems drops out half-way through the deploy process.

All the production server should get is a single package of all that your application needs, then a 'restart application' command.

Docker should be able to handle all this simply.
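
To make that concrete, the flow could look something like this (just a sketch; the image name, port, and host are made up, and pre-1.0 Docker flags may still change):

    # build one artifact with the app, gems and native libs baked in
    docker build -t myapp:20131007 .
    # ship it to the production host and swap it in
    docker save myapp:20131007 | ssh prod 'docker load'
    ssh prod 'docker stop myapp; docker rm myapp; docker run -d --name myapp -p 80:3000 myapp:20131007'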


As always, it bears repeating: rvm/rbenv don't belong in production. They exist to allow developers on Macbooks to sync their version of Ruby with whatever is packaged in the Linux distro or BSD variant that runs in production.

If I had a Mac I'd skip the ad-hoc Ruby environment switchers and skip straight to Vagrant.


I used to agree with you. Why would you take a 20 or 30 percent performance hit when you are busy shelling out tens of thousands for hardware load balancers?

Then one day a rare instruction set on our colocated server (so not something super standard like Linode) was specified during the compilation step of Ruby, and due to an extremely subtle co-bug between Ruby and the compiler, it took us fucking weeks to find this heisenbug that was somehow causing workers to drop, but only during times of very high load.

Probably lost $500k worth of customers, dev time, and company morale.

Now I have a different view. Keep things as simple and as "normal" as possible. That way you can always upgrade to the next version, and you don't hit weird bugs when your libraries assume that Time.now is second-accurate, instead of sub-second accurate (MRI vs Enterprise Ruby).

RVM is made for production (http://stackoverflow.com/a/6282260/384700) and it saves a lot of headaches to just go with the flow. As for Mac dev, I agree that it is a waste of time compared to working out of Ubuntu, but designers like Photoshop, and Vagrant is non-trivial for them to set up, especially for people that work on multiple projects.


Pointing to the author's declaration that "rvm was made for production" is not exactly the best evidence of such, especially in light of my colleagues' and my endless hours trying to figure out how to fit it into a non-interactive-shell environment.


I have had similar experiences with rvm in production environments.

chruby (probably also rbenv, haven't used it) is easier to understand and trivial to set up with non-interactive shells.


I've had similar problems with RVM. Ultimately I used the system-wide RVM installation and this chunk of code:

  # put RVM's bin directory (not the rvm binary itself) on the PATH, then load RVM
  export PATH=$PATH:/usr/local/rvm/bin
  . /usr/local/rvm/scripts/rvm
  # load the project's .rvmrc, then the environment for the requested JRuby
  . $REPO/.rvmrc
  . $(rvm jruby-$JRUBY_VERSION do rvm env --path)
  cd $REPO
  bundle exec torquebox deploy
Definitely not ideal, and I spent a large number of hours figuring out how to use it non-interactively. I'm still somewhat worried that some things might go wrong if some of the code I didn't write in this repo attempts to call binaries directly. Basically, with bundler's --deployment mode, all gems are stored in $CODE/.vendor, which is necessary to allow users to install their own gems with the root installation. Bundle exec has to be used; I messed around with RVM wrappers, but they don't work with the --deployment bundler use.


You're doing too much. All you need to do is:

  source /usr/local/rvm/environments/$BUILD
You can put it in a wrapper script if you like. It will unset all conflicting environment variables first.
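
Something along these lines, for example (a sketch; $BUILD is whatever ruby/gemset name you built):

    #!/bin/bash
    # load the pre-built RVM environment, then run whatever was asked for
    source /usr/local/rvm/environments/$BUILD
    exec "$@"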


The thing is that the rvmrc has the jruby version env set. So I need to load it first. And it contains an 'rvm use' which I don't want to fail. I suppose I can slim the rvmrc down to just include the environment and then all I would need to do is source the rvmrc? Thoughts?


I don't believe you need to source the rvmrc. The line I pasted above does all the necessary things that "rvm use $BUILD" would do for a shell, without all the things that require an interactive shell.


The rvmrc is part of the repo and contains the java_opts env.


I'm confused.

Simple and normal, to me, means "use packaged Ruby." Chasing the bleeding edge and compiling the interpreter at every release is exactly the sort of practice that introduces edge-case heisenbugs.

If you need a nonstandard compile-time option you go to work on the package source, make the change, put your custom package up in your private apt/yum repo, and leave things alone until the next security update comes down the pike.

Me, I consider hosting to be entirely fungible. I'd rather change vendors than mess about with custom packaging big chunks of the stack to work around weirdness.

Wherever I have encountered Phusion's REE I have ripped it out, with good results.


> Chasing the bleeding edge and compiling the interpreter at every release is exactly the sort of practice that introduces edge-case heisenbugs.

I think most people that use rbenv or RVM in production are only compiling new versions if the specified version isn't there already.


Building a package wouldn't fix the bug. The ruby binary in the package would contain the same problematic compile-time option.


It absolutely would. You change the compile-time option and build the package with that change.


Right, so the ruby binary inside the package that you just built would contain the bug caused by that particular compile-time option.


I want to be sure that we're on the same page.

No.

3pt14159 said that he used to use distro-packaged Ruby, but stopped because his distro used a compile-time option that didn't play well with his vendor's hardware.

3pt14159's solution was to start using rvm on production boxes.

I think that is a bad idea, and that either:

1. 3pt14159 should just pull down the package source, change the compile time option, rebuild the package, and use the mildly customized package in production.

2. 3pt14159 should not waste time working around hardware weirdness, and just switch to a vendor that doesn't have these problems.


You shouldn't necessarily be compiling Ruby on your server anyway; you should be using a package (rpm, deb) to install it. Especially if you are losing $500k because you decided to compile Ruby on your production server.


One of the challenges of running a production application based on Ruby is the fact that relying on distro packages will fail miserably. Rubygems, in particular, does not work well with distro installed Ruby interpreters.


I was talking about creating your own packages.


Ah, I re-read your comment in an entirely different light now, and I agree that packaging your own deps is a great way to avoid surprises when you roll your infrastructure.


I use ruby-install to build precompiled MRI packages. I'd be very interested to know if that might have avoided your problem.


In mature production environments, sometimes you do need more than one Ruby environment on a host. I've seen this case where multiple programs must be deployed on a host, one of which was built a long time ago (i.e. legacy). (This applies not only to Ruby, BTW.)

Maybe rvm/rbenv is not the best solution, but some solution is needed. You can either invent one yourself, or adapt something that's already been built. We're using rvm, and while I'm not a huge fan of it and would rather switch to rbenv, I've managed to tame it so that it's not such a huge beast, using various wrappers so that non-interactive Ruby programs start up in the correct environment.


My money's solidly on ruby-install for this, but only to build packages - not to run in production. You just end up with rubies in /opt/rubies/ruby-X.X.X-pXXX/bin. Picking one is as simple as plonking that on the front of $PATH.
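
Roughly (a sketch; version numbers are examples, and the packaging step is whatever tool you prefer):

    # on a build box: compile once into /opt/rubies
    ruby-install --install-dir /opt/rubies/ruby-2.0.0-p247 ruby 2.0.0-p247
    # package /opt/rubies/ruby-2.0.0-p247 into a .deb/.rpm (fpm or similar), install it everywhere, then:
    export PATH=/opt/rubies/ruby-2.0.0-p247/bin:$PATH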


Generally agreed. rbenv is just a nice convenience layer atop that.


If the divide is ruby 1.8 and 1.9, Debian packages both, and they can be installed simultaneously.
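
For example (Debian/Ubuntu package names as they are at the time of writing):

    sudo apt-get install ruby1.8 ruby1.9.1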

If it's more arcane than that, it sounds like kvm or lxc is the answer.

If you really really need to run two pieces of software with differing ruby interpreters on the same machine, then your real problem is technical debt. One fix would be to update the ancient software so that it can use modern Ruby. Another fix would be to re-architect the solution so that the greater system could operate across multiple OS instances. The least attractive option would be to use packaged Ruby for the modern stuff, and a hand-compiled Ruby for the legacy stuff.


> If you really really need to run two pieces of software with differing ruby interpreters on the same machine, then your real problem is technical debt.

Why?

Why can't you have a stable app using Ruby 1.9 that hasn't needed any updates (other than security) in three years running on the same machine that also runs another, newer application which uses Ruby 2.0 and Rails 4?

To me, that's not "technical debt".


Maybe "wasting asset" is a better metaphor. At some point, there'll come an OS release on which that Ruby won't even build (and this can happen sooner than you'd think - 1.8.7 needs to be patched to build on Wheezy, for instance; by the time Jessie goes stable 1.9.3 will probably have been deprecated and unsupported too). At that point, you're forced into porting to a newer Ruby just so you can keep up with security patches to the underlying OS. It might not be "technical debt" per se, but it's definitely a maintenance cost.


kvm or lxc are interesting, but that seems like attacking a fly with a sledgehammer, and it's hardly cost-free in terms of monitoring, maintenance and resource economization. Why bother with virtualization when setting path-related environment variables is sufficient?


What happens when app 1 has been tested against libxml 2.7 and app 2 needs libxml 2.8?

I wouldn't want to worry about dynamic library loading flags.


I would actually worry about LDPATH first before trying lxc.


You are entitled to your opinion, but as another commenter pointed out, rvm was created for production.

> They exist to allow developers on Macbooks to sync their version of Ruby with whatever is packaged in the Linux distro or BSD variant that runs in production.

Or to allow developers who run Ubuntu to develop multiple projects which need conflicting versions of gems and interpreters seamlessly. Also, I sometimes deploy multiple projects on the same production box where the two might need different interpreters, and they almost always need separate gemsets. I can manage the interpreters and LOAD_PATH manually, but then I will end up creating an unholy mess which will somewhat resemble rvm.
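
For example, something like this keeps the two apps isolated on one box (a sketch; the versions and gemset names are placeholders):

    rvm use ruby-1.9.3-p448@legacy_app --create
    rvm use ruby-2.0.0-p247@new_app --create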

I am curious. What are your reasons for "rvm doesn't belong in production"?


As for rvm in production:

Superior package management is the reason that small teams of ops people can manage huge deployments.

In the bad old days of early Linux or traditional UNIX one had to hand-build and deploy those dozens of libraries that go scrolling by when you "apt-get install libxml2". Worse, the only way to really figure out the dependency tree was a recursive operation that involved picking a library, downloading it, attempting to build it, waiting for it to fail, figuring out what other library was missing, downloading that library, attempting to build it, waiting for it to fail, figuring out what library was missing...

Inevitably, the grad student that wrote one of the libraries somewhere in this dependency soup would have moved on, and the web page at http://morlock.iscs.random.edu/~gradguy would disappear. The next step would be to fire up an ftp client and start digging around sunsite.unc.edu and some surviving yggdrasil mirror in Australia with a banner that said "PLEASE DO NOT USE THIS SERVER IF YOU ARE OUTSIDE OF AU/NZ, WE ARE PAYING $10 PER MEGABYTE ACROSS INTERNATIONAL LINKS."

Once the software was compiled you then had to subscribe to every mailing list for each piece of software and keep an eye out for security notices. If the project didn't have a mailing list you had to keep an eye on the relevant USENET groups where something might get mentioned. If the project had neither you just had to visit the FTP site every so often and see if there was a new version. If the project had a changelog you could read that and hopefully make a decision as to whether it was necessary to upgrade. If it didn't have a changelog you had to diff the old source and the new source and try to figure out what the implications were.

Package management put an end to this insanity. The Linux Filesystem Standard was an important part of this, as well.

rvm turns its back on 20 years of progress and takes us back to ad-hoc what-the-fuckery.

A proper Linux distribution is an exercise in distributed responsibility. Instead of saddling every sysadmin in the world with the individual responsibility of making all of these decisions, expertise is allowed to accrete with the individual packagers. The operator evaluates the quality of the distribution and trusts the packagers to do the right thing.


So you're saying that using distro packages saves the sysadmin a lot of work because a lot of maintenance will be offloaded to the distro maintainers. Fair enough.

So what do you do if the specific Ruby version you need isn't packaged?

What's that I hear? Compile from source? How exactly is that any better than RVM? Every single piece of criticism you shouted against RVM applies just as much, if not MORE, to tarball compilation.


You fix your code so that it runs against distro-packaged Ruby.


What if the distro-packaged Ruby contains a bug, and the distro does not provide an update?

What if the distro only provides 1.9 but you need 2.0 features?

It sure is easy to wave your hands and accuse people of being insane.


The best example of why the rvm model fails on the dev side is nokogiri.

Yeah, sure, you can have multiple versions of Ruby and the Nokogiri gem on the same box. libxml2, borrowed from the GNOME project, is the C library that is doing all of the heavy lifting underneath. If the version of libxml2 on your dev machine is different than the version in prod, it doesn't much matter if you've got the same version of Ruby and the same version of Nokogiri.

Use Vagrant. You will get a complete Linux VM for each different project that will match up perfectly with whatever the prod environment looks like.
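
A minimal sketch of what that looks like (the box name and provisioning script are placeholders):

    # Vagrantfile
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"
      # install the same ruby / libxml2 / etc. versions that production runs
      config.vm.provision :shell, path: "provision.sh"
    end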


Do you deploy with Vagrant? I think containers might be the way forward, but I am not convinced, and I don't deploy Vagrant to production (containers might be the future for deployment, but Vagrant in its current state isn't). I don't use Vagrant in production because I use an Ubuntu box for development, and rvm/bundler, virtualenv/pip... cover my needs for maintaining different and isolated dev environments.


No, I don't deploy with Vagrant. Nor would I.

Use Vagrant to spin up Linux VMs locally so that you can have a dev environment that matches production.


Yes, but that won't solve the case where I have 2 applications deployed on the same production box. rvm solves it.


"Solves" in the same vein of "There, I Fixed It."

http://thereifixedit.blogspot.com/


I don't see any point in conversing any further since you are being intentionally obtuse. apt-get doesn't help me with different projects using different Ruby interpreters and isolated gemsets (and it shouldn't); rvm does. I deploy multiple applications to the same production box; rvm solves multiple interpreters and gemsets, so I use it. Your diatribe about package management and rvm is responding to imaginary arguments. Also, package management doesn't keep up with Ruby releases, and I would rather install from rvm than package my own and have a separate apt server. And as pointed out many times, it's not just about the interpreters but isolating gems.


Addressed: https://news.ycombinator.com/item?id=6505227

I don't work in a Ruby shop anymore, but I'd look at Bundler if I was looking to manage multiple gemsets on the same installation. As I point out in the above thread, maintaining multiple gemsets on the same machine is an antipattern that points to technical debt.

Ruby 1.8 is over. EOL. You need to port everything forward or suffer the consequences.


> Addressed: https://news.ycombinator.com/item?id=6505227

False dichotomy. There is no technical debt. Both Ruby 1.9.3 and Ruby 2 are functional and used in production. It will take some time to totally move to 2, and when that happens, there will be applications on different release versions of 2. I don't want to wait for os packaging to use a new ruby release, and I don't want to be forced by the packaging to use a particular version(older or newer).

> I don't work in a Ruby shop anymore, but I'd look at Bundler

Um. I use Bundler along with rvm.

> maintaining multiple gemsets on the same machine is an antipattern that points to technical debt.

Having a restriction of not more than one app on one machine is as stupid as it gets.


Say App 1 requires libxml 2.8.1 and App 2 requires libxml 2.9.

How does rvm fix that?


No it doesn't. But throwing away RVM doesn't fix the things RVM fixes. RVM doesn't solve all problems; that doesn't mean it isn't useful for problems it solves.


FWIW, current nokogiri 1.6.0 bundles and compiles its own libxml.


The problem is that the ruby versions packaged in the distros are not always as compatible as one could wish.

Sometimes it is the list of default gems that is different (like how Fedora packages bignum as a gem), and sometimes it is the distro-packaged gems that just don't look like the stock versions. I've seen examples of gems split into two gems, as well as gems that have had their version dependencies altered. Things like that easily make either rubygems or bundler very unhappy.

The thing about distro packaged software is that you have to trust the distro to do the right thing, and it only takes so many examples of them doing something stupid before that trust is lost.


Sometimes it's not even gems/libs, but distro-specific patches to the language itself.

RHEL used to be notorious for breaking their Perl 5 dists. The first rule of Perl was to compile your own on the server.


Does anyone know what's wrong with the Rails asset pipeline that is mentioned in the post as one of the issues?


There are a number of problems, but foremost is that there's no good way to "roll back" assets, and there's no concept of keeping assets that might be used by old versions of pages cached in CDNs once they have been replaced by newer assets. This is a problem of the manifest system, and of the `assets` directory always representing the current newest state, not the collective state since the beginning of time. Maintaining state from the beginning of time would bring its own problems, thus many of the workarounds for tracking old and new assets are time consuming and suboptimal, and unfortunately people need them. There's a cap task which touches the `mtimes` of all referenced assets, which can typically take 5 minutes to complete. It's naïve, and stupid, but it's the only solution (that we could come up with) to a real asset pipeline problem.

I'm also of the opinion that compiling assets in production as a part of your deployment process is insane; there's so much magic in the Rails asset pipeline that it's not uncommon to turn up bugs where tables don't exist and the Rails app can't initialize, or some JavaScript runtime isn't found, which can leave your deployment in a broken state.

I'm firmly of the opinion that assets should be compiled and checked in, but then of course you run into problems with rails serving those in development mode, rather than the development files.

All these issues are fixable, but they're all indicative of tools that aren't quite mature yet, and as Capistrano sits on the boundary of where these problems come to light, it seems to fall to us to deal with it, and to educate people on what they ought to be doing.

Education is no problem, I really believe that the de-facto standardisation of Rails-like deploys (i.e. timestamped releases, with common linked directories, and a symlink to the current active timestamp) is an excellent result for knowing what to expect in an environment where there are hundreds of ways to get Rails apps running, but it's still not as smooth a process as it could be.
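
(For anyone who hasn't seen that layout, it looks roughly like this:)

    myapp/
      releases/20131007120000/   <- each deploy gets its own timestamped directory
      releases/20131001093000/
      shared/                    <- logs, pids, uploads, linked into each release
      current -> releases/20131007120000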

I'm familiar with at least one project that's been re-written in Scala and Java because the previous version was prohibitively difficult to deploy as it was in RoR. (GrayLog2, to namedrop them)


> I'm also of the opinion that compiling assets in production as a part of your deployment process is insane, there's so much magic in the Rails asset pipeline that it's not uncommon to turn up bugs where tables don't exist, and the rails app can't initialize, or some javascript runtime environment isn't found which can leave your deployment in a broken state.

This is one of the pain points for me. Even when everything is set up, the asset compilation churns the disks and takes way too much CPU. I would rather do that on my local machine than disrupt the production server. I do a simple workaround for that:

    namespace :deploy do
      namespace :assets do
        desc 'Run the precompile task locally and rsync to shared'
        task :precompile, :roles => :web, :except => { :no_release => true } do
          run_locally "rake assets:precompile"
          # %Q so that #{user}, #{host} and #{shared_path} actually interpolate
          run_locally %Q{rsync -e "ssh -i production.pem" --recursive --times --compress --human-readable --progress public/assets #{user}@#{host}:#{shared_path}}
          run_locally "rake assets:clean"
        end
      end
    end


I ended up doing something like the following:

    # Don't recompile assets unless they change
    task :precompile, :roles => :web, :except => { :no_release => true } do
      from = source.next_revision(current_revision)
      if capture("cd #{latest_release} && #{source.local.log(from)} vendor/assets/ app/assets/ | wc -l").to_i > 0
        run %Q{cd #{latest_release} && #{rake} RAILS_ENV=#{rails_env} #{asset_env} assets:precompile}
      else
        logger.info "Skipping asset pre-compilation because there were no asset changes"
      end
    end
I'm not entirely happy with it and don't even know if it's something I should be doing. I rarely change my assets, so this at least allows me to do quick deploys until an asset change comes. (shrug) It feels incredibly hacky.


Not compiling when assets haven't changed is a problem. I have been experimenting with https://github.com/ndbroadbent/turbo-sprockets-rails3 - seems to work so far.

For me, the more important thing was not compiling on the production box. Compiling on the production box does lots of IO and uses way too much CPU. I am OK with a slight delay in deploys due to asset compilation happening on my local machine.

I see a lot of people in this thread criticizing the asset pipeline. But I think it's a good idea with an implementation which is yet to mature. For now, it needs some work on my part (do I check in the assets, do I compile on the production box, how do I ensure assets are only compiled when changed, etc.), but overall I am quite happy with the way it works.


My deployment process does this too. One thing to be careful of is to make sure you're not deploying a different revision than the one you are on locally. For instance, if you have uncommitted changes locally, they will be reflected in your assets but not in the rest of your deploy.


Yes. A different branch and uncommitted changes are both an issue. A different branch can be solved by switching the branch if `git rev-parse HEAD != git rev-parse branch`. Uncommitted changes can be handled by `[[ -n $(git status -s) ]]` (simply abort).
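
Put together, the guard might look something like this (a sketch; "deploy_branch" is a placeholder):

    # abort on uncommitted changes, and make sure we're on the branch we deploy from
    [[ -n $(git status -s) ]] && { echo "uncommitted changes, aborting"; exit 1; }
    [[ $(git rev-parse HEAD) != $(git rev-parse deploy_branch) ]] && git checkout deploy_branch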


    > some javascript runtime environment isn't found
This is a very familiar pain point for me. IMHO many problems with RoR deployment stem from the fact that Ruby depends on so many native code extensions. It's pathetically easy to break things like therubyracer or nokogiri because some .so file was updated or is not installed. I'm guessing that this is done for performance issues with the Ruby interpreter, but wow, it really makes for portability nightmares. Add RVM on top of this and it's just insane.

As someone coming from Java (I largely switched from Java to Rails about a year ago) I find this tendency toward native code deps incredibly painful. Turns out there's a really good reason to demand "100% pure Java" libraries. One can always throw rocks at Maven or CLASSPATH hell, but I find that Ruby deploys tend to be much more problematic than Java.

Whatever you decide on I hope you get some relief. FWIW I have benefited greatly from your work on Capistrano, and sincerely appreciate your efforts.


therubyrhino and nokogiri provide pure Java backends on JRuby, FYI


I wonder if one solution wouldn't be to have an actual build environment that is neither development nor production, but between the two. We have that where I work, and asset compiling fits very naturally into where we compile the RPM (which is how we deploy, by necessity).


Exactly. Build on a staging system. Take a server (or a couple of them) out of the LB, put the new, already built stuff in. Put it back into the LB, rinse, repeat until you have the new stuff everywhere. Don't actually build anything on a production server. Perfectly scriptable, possible to roll back (if you keep your releases tagged), and if you pay attention to monitoring you can even spot significant problems mid-deploy.


https://github.com/capistrano/capistrano/wiki/Upgrading-to-R...


For anyone interested in an overview of Capistrano v3, I wrote an introduction last week - https://medium.com/p/ba896a142ac


Tom's article is a really great run down of all the awesome things that we built.


A good decision - get out while things are still positive. Not enough people are brave enough to step down at the right time (or even when it's obvious it's already the wrong time).


I am a bit sad that he feels this way about it.

I have used Capistrano a lot, I built my "default" setup, compiled it into a gem, and released it here: https://github.com/kaspergrubbe/simple-capistrano-unicorn and moved on with my life as a developer.

I know of at least two bigger organizations that depend on Capistrano (and my gem) for their deploys. I feel like Capistrano is the way to go if you manage your own servers, and need to deploy to them.

Capistrano started my Rails experience, and I am very grateful for the work put into it. But I never wrote and said "Thank you" or "Great job", maybe we need to be more vocal to the people that put in time and energy to build the software that we use a lot.


It's sad to see when an open source project becomes overwhelming. On one hand the project is open source, so hopefully someone else can pick up the torch. We saw this happen in the node.js community and node's been moving along. On the other hand, based on what Lee is saying, it looks like the situation is pretty bleak. I'm not a Rails user, but I feel like most of the "hot startups" in San Francisco run a Ruby stack. As an observer looking into the community and platform through this post, I never realized how many challenges there were in that development environment.


As much as this is an open invitation to rail on the RoR community, I think this is a problem that is a lot more indicative of this brave new software culture, both open source and (independent) commercial.

If your tool sees any sort of uptake, it suddenly no longer is yours. The community suddenly expects you not only to continue modifying the base code to improve functionality, but also to adhere to a sort of backwards compatibility so that everything they know and love about your baby never changes.

I can't imagine how much more taxing this would be once the tools you built become an integral part of other teams' workflows. The burden and stresses of keeping "the world" afloat would cause many a sleepless night, even for people of strong constitution.


Capistrano really has saved us multiple times, sad that a vocal part of the community tends to exhibit such behavior.

At our company, we develop multiple RoR apps and we've run into many of these issues (mostly related to the asset pipeline), yet none of them were actual problems with Capistrano. Since it's the bridge between so many things, I can imagine why it's easy for it to become cannon fodder.

We've tried to standardize many of our recipes such as local asset precompilation into a single cohesive gem (https://github.com/innvent/matross). That has saved us the trouble of debugging the same issues over and over when they inevitably pop up across applications.


Thanks for the awesome software. I just started learning about Capistrano recently; I'm amazed by how simple it is.

I believe you when you say that PaaS is where things will go; the only reason I use Heroku and dokku (from Docker) is their easy deployment, and for no other reason than deployment.


Check out Fabric as a much faster alternative to Capistrano. Combined with cuisine.py it's a simple and powerful alternative to chef-solo.


I love ruby and rails, but yeah, I'd switch to any framework in any language that made deployment stress-free. Except php.


The JVM has a lot of tools that make SysAdmins happy (or at least not grumpy) to deploy projects. Some cost a fair amount, but the list of JVM languages is pretty nice.


Play's deployment process is more or less:

$ play dist

$ unzip app-name.zip

$ cd app-name

$ ./bin/start

Can dist on build server and copy archive to prod if that option is available (or upload from local dev machine).

It's ridiculously simple, and all generated at compile time.

I don't understand the hand-wringing over deployments; is Rails _that_ good?

I mean, outside of MVC, RESTful routing, form validation, AR, migrations, etc. (functionality that exists in other frameworks/libraries), what does Rails actually bring to the table that is unique?

Is it the gem ecosystem, where you can find x,y,z functionality that exists nowhere else, or some cutting edge Rails feature? I don't know, just asking; I spent a year on a Rails clone, Grails, and was not really taken with it. Tracking runtime MOP errors is no joy; give me a static compiler any day over that.


I love Play, but that deployment only works for the simplest solutions.

Need versioned assets? Need multiple versions of assets in prod? Play doesn't do it out of the box.

Need a specific version of the JVM? Play can't help you there.

Need to change your start script to do something different (ports, SSL, JVM config). Play can't do it.

Play by itself is very far from a full deployment solution.


> Need versioned assets? Need multiple versions of assets in prod? Play doesn't do it out of the box.

Just turn off assets entirely as the Capistrano author does with Rails. Running Bower + GruntJS here, snappy fast, well happy working with tools designed for the job.

Not sure about multiple asset versions; I just have the front-end web server handle an inbound /assets/img/foo.timestamp.png request as /assets/img/foo.png. It's not too difficult to create an asset helper trait to append a timestamp to asset paths at container start-up.

> Need a specific version of the JVM? Play can't help you there.

This is hardly a strike against Play; installing Java is not exactly an overwhelming task ;-)

> Need to change your start script to do something different (ports, SSL, JVM config). Play can't do it.

Excuse me?

./bin/start -server -Xms128m -Xmx512m -Dhttp.port=9001 -Dhttp.address=172.16.53.1 -Dlogger.file=${HOME}/logger.xml -Dconfig.file=conf/prod.conf


Heroku, EngineYard, etc.? Unless you're also adding "cheap" into the equation.

Me I run Rails on Linode and it's fine, but yes, it's unfortunate Ruby/Rails never reached LAMP-level simplicity on virtual hosts.


Go's static binaries are pretty fantastic - though there are other details that you may care about more.


I'm starting to wonder if all this developer time we're saving by using Ruby is leaking out in other places (chiefly, deployment and performance optimization). I'm new to Go, but it seems to me that deployment and optimization are considerably less of a problem while also not costing me much on the development side of things.

I'm not sure, I just think it's worth discussing.


I think most of these development worries are more likely related to language independent deployment issues, mostly static assets processing: optimizing images, minifying Javascript, running CSS preprocessors, versioning everything to prevent caching, generating and copying configuration files, other necessary file/folder transformations for creating a running application. Sure the dependency worries are bothersome too, but I'm not sure they're the whole problem.


Ruby is uniquely awful on the deployment side, I have to say.


Really, the core of this is about the asset pipeline. A valuable feature that does not exist in other frameworks. It sucks having to write that plumbing code yourself.


php is really simple to deploy, you know ;)


Python with virtualenv and pip has been fairly stress free in my experience.


I feel Python apps are quite a pain to install. I had to deal with a Graphite installation, and it consisted of 50% packages from apt-get and 50% packages from easy_install.

https://gist.github.com/kaspergrubbe/5792356

Is this an issue with Graphite? Because I feel this setup is quite elaborate compared to using Rubygems and bundler.


From experience with both Python/Django and Ruby/Rails, I think Python is generally simpler. Start with the fact that Python itself is usually pre-installed on most distros, and usually a recent-enough version to get you started. Ruby, on the other hand, is much harder to just get installed: choosing the right (minor) version, rvm/rbenv choices, etc. I tend to compile my Ruby, but it's a lengthy and rather fragile process.

Graphite is both a good and bad example. Good because it is really complex and documentation is a little sketchy. I've written a fabric script[1] that automates the process, and it's far from trivial. Bad example, because it's not really a single app, but a system - a collection of tools with dependencies. Even if we discount the web server (nginx or apache?), it includes things like the core "database" (whisper), the event listener (carbon, which in itself is complex depending on your setup), the graphic and processing libraries, and then graphite which is a pretty involved django app with its own sub-components.

So when you say graphite, it's really a full-blown system with lots of moving parts that need to fit in together. I can't think of an equivalent example in the rails world, but any rails app with a db, caching layer, and a few other external components won't be much easier to get set up and running.

[1]https://github.com/gingerlime/graphite-fabric


That's a rather alarming install script, especially things like:

    sudo apt-get --assume-yes upgrade
in it. But to your point, I think most of the easy_install packages could be handled by pip. The apt packages look like almost all non-Python major components like rabbitmq, apache, sqlite, etc., which are best provided that way.

I'm not sure about Graphite itself, but at a quick glance it's not clear why it's all 'sudo python setup.py' rather than in a package.
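
Roughly, the split could look like this (a sketch; the exact package lists depend on the Graphite setup):

    # system-level pieces from apt
    sudo apt-get install apache2 libapache2-mod-wsgi python-cairo
    # python-level pieces from pip instead of easy_install / setup.py
    sudo pip install whisper carbon graphite-web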


This is an issue with Graphite. There is no way that Python can fix Graphite's install breakages unless Graphite wants to fix them.


I guess that's because you can't pip install system packages, only Python packages.


I've generally had the same experience, although that particular toolchain only solves part of the deployment problem. The other caveat is that you will eventually run into a situation where, say, pip doesn't quite make it to the finish line, and you're left wondering why. If my memory serves me, the last time this happened to me was while I tried to install a project that used ipython (=zmq+readline+pyglets+..+..+) and needed a numpy that was properly linked to the Intel MKL. It failed at the 'pip level', but the solution wasn't clear until I dove much deeper (python build, MKL build, ZMQ build, setuptools interactions, autoreconf incantations, etc.) There are definitely times when it pays to understand things like the differences between pipping and easy_installing.

However, when dealing with projects that are closer to pure Python, and/or less complicated, workon proj && pip install -r requirements.txt has almost never failed me.

More and more I find myself moving to Ansible. So good.


"Whilst I believe strongly in Capistrano as a general purpose tool [...] I do think the future of software deployment is in small, containerised VMs and so-called PaaS, as what we're all doing right now has to end, some time."

Kudos. It takes a lot of courage to admit your baby is not going to fulfill the future you had initially imagined.


Check out 'Deploying Ruby Applications to AWS Elastic Beanstalk with Git' [1]

[1] http://ruby.awsblog.com/post/Tx2AK2MFX0QHRIO/Deploying-Ruby-...



