
You're right. If everyone kept behaving the way they do with long deploy cycles even after moving to an extremely short one, you would have lots of downtime.

The point wasn't that it solves downtime by itself, but that it makes the sources of downtime much easier to understand. You then have to use that information to make your development and deploy process resilient to those classes of failures. Rinse. Repeat.

For example, at my company we would bring on a new engineer, and within about a month they would take out a database (or worse) with an extremely slow query. What did we do to fix the problem? We capped all queries at 20 seconds, after which we would fail the page request. In the best case, a developer pushes a page with a slow query in it, and the page fails often enough to cause the revision to roll back. In the worst case, the feature the developer was working on is broken, but the rest of the site keeps functioning.
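
Here's a minimal sketch of that kind of cap, assuming a Postgres backend and psycopg2 (the comment doesn't say what database or driver was actually involved, so the DSN and helper are hypothetical). The point is that the database enforces the 20-second ceiling, so a runaway query fails one page request instead of taking out the whole database:

    import psycopg2
    from psycopg2 import errors

    # Hypothetical connection; statement_timeout is in milliseconds,
    # so the server cancels any query that runs longer than 20 seconds.
    conn = psycopg2.connect(
        "dbname=app user=web",
        options="-c statement_timeout=20000",
    )

    def run_query(sql, params=None):
        """Run a query; let the 20s cap fail just this page request."""
        with conn.cursor() as cur:
            try:
                cur.execute(sql, params)
                return cur.fetchall()
            except errors.QueryCanceled:
                conn.rollback()
                # Surface a failed request instead of a hung database.
                raise RuntimeError("query exceeded the 20-second cap")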

And yes, yes, yes! You should be catching your problems before they go live. The list of techniques is staggering: unit testing, regression testing, functional testing, fuzz testing, exploratory testing... but those methodologies only go so far. Continuous deployment goes further.



