
They listed exactly what they are doing. The artifacts they are picking out should already be tagged with the release. The script to push those to production can use those tags to know what to push and what git commit hashes to check.
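To make that concrete, here is a minimal sketch of what such a push script could look like. The artifact names, tags, and commit hashes are invented for illustration; the point is only that release tags select what to push, and commit hashes are checked before anything goes out.

```python
# Hypothetical sketch: artifacts are tagged with a release, and the
# push script uses those tags to decide what to push and which git
# commit hashes to verify. All names and data here are made up.

RELEASE_TAG = "v2.3.1"
EXPECTED_COMMIT = "a1b2c3d"

# Artifact metadata as a build system might record it (invented data).
ARTIFACTS = [
    {"name": "api-server", "tag": "v2.3.1", "commit": "a1b2c3d"},
    {"name": "worker",     "tag": "v2.3.1", "commit": "a1b2c3d"},
    {"name": "old-tool",   "tag": "v2.2.0", "commit": "9f8e7d6"},
]

def artifacts_for_release(artifacts, release_tag):
    """Pick out only the artifacts tagged with the target release."""
    return [a for a in artifacts if a["tag"] == release_tag]

def verify_commits(artifacts, expected_commit):
    """Check that every selected artifact was built from the expected commit."""
    mismatched = [a["name"] for a in artifacts if a["commit"] != expected_commit]
    if mismatched:
        raise RuntimeError(f"commit mismatch in: {mismatched}")
    return True

to_push = artifacts_for_release(ARTIFACTS, RELEASE_TAG)
verify_commits(to_push, EXPECTED_COMMIT)
print([a["name"] for a in to_push])  # prints ['api-server', 'worker']
```

A real version would resolve the expected commit from the annotated tag itself (e.g. `git rev-parse v2.3.1^{commit}`) rather than hard-coding it, so the tag stays the single source of truth.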

ETA: I'll add that when pushing the wrong thing to production, the amount of time wasted, not just yours but everyone's down the line (users, developers, testers, etc.), can be huge. Calculating how much time you would save isn't a useful way to frame it.




I don't want to say you are wrong, because there are a lot of situations (most?) where deployment automation can greatly reduce errors.

However, there can also be a number of reasons why builds and deployments cannot be automated safely:

- "Production" is not a single environment, but multiple customer environments with multiple deployment version targets based on need/contract. Customer environments might not even be reachable from the same network as the build/deploy machines.

- Code is for industrial/embedded/non-networked equipment.

- Policies dictated by your own company or a regulatory body require that builds be manually checked and deployed by a human who can validate and sign off.

There really is no way of knowing. Automation can save hundreds or thousands of man-hours and reduce the margin of error, but it is not applicable to every scenario. Sometimes manual work reinforced by good habits and processes is the right tool for the job. As much as it pains me to say that, since my job is automation.


Not to leave you guys hanging about my specific case: the process we have is almost as automated as possible. All the heavy lifting is scripted, and deployment to every other environment is done with a single click. For prod, however, we do have additional checks, because deploying the wrong build could have some bad consequences.
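A rough sketch of that "extra checks for prod" step: the same pipeline runs everywhere, but the prod path adds a guard that must pass before the deploy proceeds. The function name and the specific checks here are invented for illustration, not a description of my actual pipeline.

```python
# Hedged sketch of a prod-only deployment gate. Non-prod environments
# deploy on click; the prod path runs this guard first and refuses to
# proceed if any check fails. Checks shown are hypothetical examples.

def prod_guard(build_tag, expected_tag, tests_passed, approved_by):
    """Return a list of failed checks; an empty list means deploy may proceed."""
    failures = []
    if build_tag != expected_tag:
        failures.append(f"build {build_tag} does not match expected {expected_tag}")
    if not tests_passed:
        failures.append("test suite not green")
    if not approved_by:
        failures.append("no human sign-off recorded")
    return failures

# Right build, green tests, signed off: nothing blocks the deploy.
print(prod_guard("v2.3.1", "v2.3.1", True, "alice"))  # prints []

# Wrong build: the guard reports it instead of pushing.
print(prod_guard("v2.2.0", "v2.3.1", True, "alice"))
```

The design choice is that the guard only reports; the surrounding script decides whether to abort, which keeps the check testable and easy to extend.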



