Agreed. It seems like the author either has no customers or very understanding customers. Pushing every commit to production is just nonsense by any metric. What about commits that muck with the data model, where bugs might destroy production data? What about complex developments that consist of more than one commit?

This almost reads like link bait. I somehow doubt the author really believes what he's writing there.




Well, it's certainly not "no customers"... Timothy is referring to IMVU. Whether the customers are understanding or not is another issue. However, when we implemented this "cluster immune system", we had outages all of the time. But every single time, we took steps to prevent that class of failure from happening again, and now we deploy code to the cluster twenty times a day.


Well, I'm not against frequent deploys. I just found the article a bit light-hearted in tone - as if no testing at all was going on.

No matter how many classes of failures you have ironed out, in any reasonably complex system you will still regularly have regressions that are not caught by a quick "two eyes" check.


"What about complex developments that consist of more than one commit?"

Obviously since you use git or another DVCS, you can do your mucking about locally in a branch, and when it's ready to go, squash it down into one atomic commit.
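Something like this, assuming git (branch name and commit messages are just placeholders):

    git checkout -b my-feature        # hack away locally, as many commits as you like
    git commit -am "wip: data model changes"
    git checkout master
    git merge --squash my-feature     # stages all of the branch's changes as one change
    git commit -m "Add new data model"

Only that single squashed commit ever reaches the shared repo, so whatever deploy pipeline is watching it sees one atomic change.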


Well, he didn't mention that in the article. Such a model generally requires a staging server, which implies integration testing, though. Unless he suggests the release manager (or even the developer du jour?) just merges the stuff on his local machine and pushes it out as he sees fit...



