Fulfilling the Promise of CI/CD (stackoverflow.blog)
71 points by kiyanwang on Jan 22, 2021 | 77 comments



“If any engineer on your team can merge a set of changes to main, secure in the knowledge that 15 minutes later their changes will have been tested and deployed to production, with no human gates or manual intervention required, then congratulations! You’re doing CI/CD.”

We absolutely do this and I don’t think we are THAT unusual.

Also, as you hit larger scale it’s questionable whether this is still a good idea. We are about 50 engineers and our CD means we release to production about 10 times a day, on average. At that pace, some tricky things start to happen: a metric starts to move sideways and it’s hard to actually correlate it to a release - you may have to roll back across multiple releases, which can create a fairly confusing situation for the team.


I suppose there's an even larger scale at which the law of large numbers applies and you can more reasonably filter out "the noise".


If you want to get more into the details about the benefits of CI/CD and other best practices, Accelerate[0] is a great book.

It's especially helpful for getting the data and forming the arguments you need to persuade leadership that CI/CD is a good thing.

[0] https://smile.amazon.com/Accelerate-Software-Performing-Tech...


Can confirm, they really took a legitimate scientific approach and years of research in order to arrive at their results. Excellent book.


I've been at teams and on projects doing CI and CD for the last... 15 years.

Sure, our deployment process was 10-15 minutes in the best of cases (because of serializing deployments for different branches), but where does the author's impression that the collective "we" are not doing CD come from?


"Very few teams are actually practicing CI/CD" is mentioned in the article.

In an enterprise setting, due to segregation of duties, it's often true that CD is impossible due to restrictive change management processes that result in unpredictable and typically week-long cycles.


It depends on the nature of the business. My experience (regulated financial institution) is that both front office and back office have teams which release hundreds of times a week. It does depend on the maturity of the team (including the business which it’s part of), however. And not all teams are at this level, because of their own particular limitations.

Don’t do CD as a tech/ego benchmark thing, however (or Facebook/Google envy). Do it if you can agree that this brings some benefits to the business — eg reduced risk, higher throughput, etc. Also delivering faster really forces you to examine your process for bottlenecks and strip away overhead. I really like this line from the blog:

> The teams who have achieved CI/CD have not done so because they are better engineers than the rest of us. I promise you. They are teams that pay more attention to process than the rest of us.


> Don’t do CD as a tech/ego benchmark thing, however (or Facebook/Google envy). Do it if you can agree that this brings some benefits to the business

This is a really important point. Rapid releases are not always inherently better. I used to work in a company that dealt with heavy regulatory bureaucracy (in the sense that putting out wrong/misleading information could result in class action lawsuits), and because of that, workflows were highly optimized for waterfall delivery. This meant aggressive reliance on classical project management tools and techniques, and hyper-specialization (as opposed to the sort of full stack engineer approach taken by many bay area tech companies) - and this has worked well for that company for decades.

You can also do hybrid approaches, e.g. continuous deployment to a QA environment, and have dedicated QA teams, while delivering to client on a waterfall schedule.


> You can also do hybrid approaches, e.g. continuous deployment to a QA environment, and have dedicated QA teams, while delivering to client on a waterfall schedule.

I feel like this approach lets developers still spot-check and make sure their code looks good in QA, while still giving the more anxious folks who treat production as sacred their manual button press to get things live. Ideally, your pipeline is at that point also a simple button press to go from QA out to production. It also hopefully isolates any production issues to environmental differences, as you will be in the "bundled changes" territory referenced in the article.

From what I've experienced the FUD around letting things get into production rapidly really freaks out some organizations and having it hybridized is better than nothing.


Delivery method also matters. E.g. having actual continuous deployment to customer devices in a typical embedded context is very different from continuously deploying a web app. Depending on the circumstances not always impossible, but the factors feeding into the evaluation if it's worth it are very different.


FB/Google don't always or even usually do CD, either (unless you're talking about to some kind of staging or QA environment). There are always tradeoffs and different business requirements that need to be satisfied, and different approaches will make sense to address them.


Agreed. In the data points from my professional career it's only been small companies and teams that have actual CD to prod. The closest I've been able to come in enterprise environments is CD to a lower environment (got all the way to UAT once!) with manual promotion to prod. But that's not the same.


My workplace is probably one of the larger ones in HN (tens of thousands of people in technology), and we do CD all the way to production. We’re also extremely regulated.

I agree that we’ve been lucky though, to have great tech leadership on this. And also a very supportive business.

The regulators have not had any problem with CD or high cadence at all — because we can also show that our systems actually got more stable as we adopted CD “in spirit” and increased release cadence. Also, because we no longer do manual promotions, we’ve improved our Cyber-sec position by a lot — just one of the many advantages of automation.


I'm currently at a Fortune 500 and we're doing CD for our eCommerce site. I count myself very lucky to have been involved in the process.


I would like to hear more talk of how we can approach this problem. When you are in a place like that, what's the way to get management and customer buy-in, especially in certain industries (fin, gov, etc.) where they seem to think monthly deploys are almost too fast.


I guess I think CD could still be valuable in that kind of setting in terms of maintaining a deployment-ready release that is ready to enter an externally imposed release cycle (including delivering change mgmt artifacts, etc.).


That's not CD though.


Depends on which CD you're talking about. Continuous Delivery: artifacts get delivered and are ready to deploy when someone is ready to do it. Continuous Deployment: built artifacts get deployed automatically.


The article talks about Continuous Deployment.

IMO Continuous Delivery is kind of a nothingburger vague process term that lots of people can (and do) talk themselves into saying/thinking they are doing. That's kind of the theme of this blog post BTW. Continuous Deployment is the actual indisputable thing (you either automatically deploy new commits in mainline or you don't) that gives you the real benefits but which lots of orgs aren't yet doing.


Can anyone besides web get away with continuous deployment? iOS already wants to update too often; I don't want CD to my phones, or probably my desktop or server OS, especially in enterprise. And in cases where hardware hasn't even been released it is impossible, much less undesirable. It makes sense to talk about continuous delivery and how to squeeze extra benefits out of that model. I suspect it is mainly about automating as much as possible between CDel and CDep, but I have only ever worked in CDel on hardware that doesn't even exist yet.


Things like classic Ubuntu get continuously updated with security fixes, and with livepatch, even kernel gets updated without a reboot.

Even their process is very much like true CI/CD, but packages only get into the "proposed" archive (still accessible to the public) without further human vetting (I mean, a code review is human vetting too).

I personally want my desktop/server apps to automatically update according to traditional LTS rules (no breaking changes, just bugfixes).


> As long as you have manual processes, it's not continuous.

Agreed. I'm all-in for Continuous Deployment. Continuous Delivery is a half-measure at best.


> when someone is ready to do it

That's not automated. CD (however you (re)define it) is about automation and removing people from the loop.

As long as you have manual processes, it's not continuous.


Anecdotally this has largely been my experience, too.

TL;DR ... everyone has Jenkins but they can't just let it deploy stuff because there is tons of "professionalism theatre" bureaucracy baked into the SDLC implemented as crappy homegrown scripts that check you filled in fields in JIRA. On top of that the org likely has incorrect thinking that the way to reduce bad outcomes from software is to change it less often.


> incorrect thinking that the way to reduce bad outcomes from software is to change it less often

This. So much this. The way you turn a bug into a "bad outcome" is to make it slower to change. If you can deploy quickly and easily, you can fix any software issue before it becomes a big problem.


Deployment means shipping working code, but in my professional experience most people conflate that with publication. Publication is about making something available to an audience, whether internal or external, which is much more than just adding a functional piece to another place. This is just the start of getting a deployment process terribly wrong, but everything else wrong seems to result from this.


A 10-15 minute deployment process sounds very good, if you take the Accelerate research as a baseline.


As my startup [1] is in the domain of CI/CD, I've been doing a bunch of customer development interviews to better understand how teams deliver software. I was also surprised how few teams use full Continuous Delivery, even at cutting edge tech companies. It is indeed often used by small teams, even within large companies, where they deliver internal backend services.

The most common seems to be auto-deployed to staging or a dev environment, with some sort of daily or weekly process for promoting to production. One company built a Slack-based approval process using +1 or -1 reactions, and another has a zoom meeting where every author has to attend and is walked through a checklist before the release is approved.

My team also had a manual approval step to production, which theoretically meant the dev would check logs, dashboards, and alerts before approving, but in practice both with us and teams I talked to, that is followed about 50%-80% of the time.

What we built into our product, Sleuth [1], is a way to automatically promote staging releases when the staging release was determined to be healthy and soaked for a minimum amount of time. This allows the 80% case to simply flow through to prod without developer babysitting, whereas we can easily interrupt the process with a -1 reaction in Slack if it needs more manual testing. I think this is the ideal - the common case is the code flows but you still have an easy way to interrupt the process when the change needs it.
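Roughly, the shape of that gate looks like the following. This is a hypothetical Python sketch, not our actual implementation; the helper functions are placeholders for whatever monitoring, chat, and deploy tooling a team already has.

    # A hypothetical "soak then promote" gate. The helpers are placeholders,
    # not real Sleuth APIs.
    import time

    SOAK_MINUTES = 30

    def staging_is_healthy(release_id):
        # Placeholder: query dashboards/alerts for this staging release.
        return True

    def veto_received(release_id):
        # Placeholder: check chat (e.g. a -1 reaction) for a manual veto.
        return False

    def promote_to_prod(release_id):
        # Placeholder: trigger the production deployment.
        print("promoting", release_id)

    def auto_promote(release_id):
        deadline = time.time() + SOAK_MINUTES * 60
        while time.time() < deadline:
            if veto_received(release_id):
                print("vetoed; leaving the release in staging")
                return
            if not staging_is_healthy(release_id):
                print("staging unhealthy; holding the release")
                return
            time.sleep(60)  # re-check once a minute during the soak window
        promote_to_prod(release_id)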

[1] https://sleuth.io


All those years it has been in focus, and I still don't understand what problem CD is expected to solve.

Yes, there is value in CI, and there is plenty of value in very low-effort deployment. But, aside from a large downside risk, what exactly does automatic deployment bring?


To me, the main value of CD is ownership. The developer can own a change from beginning to end, and most importantly, its impact on production. That feedback loop creates better future changes and a better developer.

Automatic deployment takes it to the next level, where it becomes so easy to deploy that the dev starts fixing things they would previously have ignored because it was too painful, annoying, or time consuming to deploy. A typo here, a refactoring there, and now they can fix, push, and go back to what they were doing within minutes, but do so with the confidence they won't break things. It is kinda like the difference between CI taking 3 minutes and 10 minutes. It isn't much, but that longer time means you alt-tab over to reddit or whatever, forget what you are doing, and now that task takes orders of magnitude more wall clock time.


Ok, I was not explicit enough in my question. The benefit of low-effort deployments is very visible.

What benefit is there in having a computer decide that a version of the software is ready to deploy, instead of the developer pressing a button when they think the software is ready to deploy?


Right, got it. I think it is the lag time involved in the pipeline. Once a developer clicks merge, CI has to run, an artifact has to be created and uploaded, then usually a deployment to dev or staging is kicked off. That takes somewhere between 5 minutes and 2 hours in my research. Then, once done, the dev can click a button to push to prod, though often that is just to a canary, so another 5-30 minutes, then repeated for other prod environments. That whole process takes a lot of babysitting, and realistically, the dev has started on another task and forgot about it, making it take even longer.

By making it automated, the dev can immediately start on a new task and the deployment can go out quicker, meaning the next dev can start on their deployment. Therefore, I see it as helping prevent costly context switching and thereby improving efficiency.


Thanks. That makes a lot of sense.

Yet, it goes completely against my experience, so you've left me wondering what creates that difference, and I suspect it comes from organizational differences.

You are certainly assuming no post-deploy activity. That should have been obvious from the beginning, but I just didn't think of it. I suspect the post-deploy activities I'm used to fall to the product manager role, and you are assuming a more Jira-oriented environment than I was.

You are probably assuming larger software blocks than I am. I should have imagined that too, because I have a tendency to break software into many more independent pieces than is usual, and to impose that on my environment everywhere I go. I am used to people having something to do on a different project from the one whose CI they started, so they can start right away and come back after the CI is complete, so there is no wait. And there is no "next dev" waiting, because they are working on a different project too.

Also, having a different project to work on during CI means a major context switch if people need to fix anything on the previous change they made.

So, with that said, I can now imagine the kind of environment where CD makes sense. It's somewhere with a small number of projects and a large team of developers (or any place with a high developer-to-project ratio). It's also the kind of place where it would be hard to coordinate a release if it were manual.


I disagree with some of the conclusions.

CD is avoided because most software products should not be delivered continuously, as it is inversely correlated to software stability for customers. It may create value internally, but if it doesn't create value externally and isn't cheap to roll out, then it's probably not worth it.

Slightly related, I think CI/CD is today where VCS was before git. It all kinda sucks, and no startup or company is going to fix it (nor can they, it must be free as in beer and speech). We need something that sucks less and becomes as standard a tool as git or make.


Good point. Imagine if the developers of Postgres or Mongo did CD to your database.

IMO, the idea of CD only works for companies that control the production environment and basically offer a service, not a product. If that's you, you may get a great deal out of CD.

But if you have a product, and if that product manages your customer's data, is integrated with your customer's scripts, and runs on an environment that you don't completely control, you had better think twice before continuously updating the system. You can still do it, but it will be expensive to get it right.


> CD is avoided because most software products should not be delivered continuously, as it is inversely correlated to software stability for customers.

This has been shown to be flat out wrong. Stability has been shown to be highly correlated to the capacity to deploy quickly and often.

https://www.goodreads.com/book/show/35747076-accelerate

https://nicolefv.com/research


I led our team to full CI/CD (Deployment), and while it felt really counterintuitive to move quickly and have the occasional bug show up for our customers, the speed at which we could fix the bug has vastly offset it. Introduce a bug? Fix it same day. Introduce a breaking change? Roll back and fix it. No biggie.


Have you had data corruptions because of the bugs that could not be rolled back quickly?


This is something that I would be concerned with. If you have a multiplayer game like Runescape and have CD on the servers, a bug that corrupts data may be patched quickly, but in that hour a lot of players could have been given items that they shouldn't have had. Then you have to distinguish which players got the items properly and which didn't, and have processes set up to make those kinds of admin changes to the prod DB.


Goes without saying that you deploy to an identical test environment that duplicates the entire stack, with realistic test data. Automated testing protects from regressions, manual testing for the rest.

crickets

One (QA) can always dream!


Some stacks are quite complicated, involve various kinds of mobile devices with different app versions, custom hardware, offline functionality. It's just that not all systems are easy to replicate. But that's another problem.


Fortunately not, but we're deploying the frontend for an eCommerce site so we don't deal with (much) data that could be corrupted.


2016 was the last time I was on a team I consider not to be doing CD, in the sense that you hit merge and it goes to prod without manual intervention if automated tests pass.

However, I'm sure some people would argue that some of these teams are not CD enough due to items like the following:

- One team had very long (4+ hour) test suites. End result: devs hit merge in the morning to see their changes that day, and didn't see them if a test failed (and of course there were flaky tests). Eventually there was a project to pare down unneeded tests and address flakiness, but by the time I left it was still a 2-hour (though more reliable) path from merge to deploy. No human intervention needed, though.

- Release windows. The release pipeline on one team only ran 9-5, Monday to Thursday. If you merged after 5 it kicked off the following morning. If you merged Friday, it kicked off Monday morning. I'd accept a claim that the team was only doing CD 4 days of the week, but saying it was not doing CD at all is overstating it.

- Code reviews as requirements for merge. I still think this doesn't disqualify it, because the developer still chooses when to hit merge after review, and the change goes to production without anyone else's intervention. On 2 of the 3 teams, the CI pipeline that ran on branches was also sufficiently reliable that if your branch build passed, you were pretty sure the release build would pass too.


Anyone got some good resources on CD?

I think, especially in the cloud, I would fear updating a database automatically and losing data. Some of the resources in AWS/CloudFormation are "replaced" and not "updated", which gives me a bit of paranoia.


I’d start with reading the Phoenix Project (a “novel” about Devops) and also Accelerate. Also the Continuous Delivery book by Dave Farley and Jez Humble.

Beyond that however it’s about taking a look at your processes and your particular challenges.

Say you have an old-school DB with lots of stored procs, no tests or not enough tests, and one that people tremble to update; that DB is a business risk.

You should have backups and contingency plans (eg a hot standby?) to ensure updates are resilient even if something fails. And of course the “good hygiene” of adding columns, using feature flags etc as the other commenter wrote.

Once you’re less worried about the database, you can start refactoring it if you wish.

Refactoring a large, entrenched database is a big topic, but for publicly available case studies, have a look at Netflix's write-up on how they migrated their billing system away from Oracle[1].

[1] http://techblog.netflix.com/2016/06/netflix-billing-migratio...


+1 I really enjoyed the Phoenix Project.


Thank you very much!

Currently, I'm mostly interested in greenfield projects and how to do things the right way from start.


Writing data migrations that fail safely is tricky, and requires some extra care. Temp tables, adding new columns instead of modifying, feature flags, etc. all help with this. Basically, you want to run stuff in production with less risk.
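A minimal sketch of the "add a new column instead of modifying" idea (hypothetical table and column names, MySQL-flavored SQL, and conn being any DB-API connection):

    # Hypothetical additive migration: add a nullable column, backfill it in
    # small batches, and leave the old column alone so old code keeps working.
    def upgrade(conn):
        cur = conn.cursor()
        cur.execute("ALTER TABLE users ADD COLUMN display_name TEXT NULL")
        while True:
            cur.execute(
                "UPDATE users SET display_name = full_name "
                "WHERE display_name IS NULL LIMIT 1000"
            )
            conn.commit()
            if cur.rowcount == 0:
                break

    def downgrade(conn):
        # Rolling back only drops the new, still-optional column; existing
        # data is untouched.
        conn.cursor().execute("ALTER TABLE users DROP COLUMN display_name")
        conn.commit()

Switching the application over to read the new column can then happen behind a feature flag in a later deploy, so the schema change and the behaviour change never ship together.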

On the AWS/CloudFormation side, "Cattle, not pets" is the motto[0]. Terraform[1] specifically is really good at capturing the desired vs. current state of your infra, and showing you what those changes will be before you apply them. Point being - as long as your automation is captured in infra-as-code, you shouldn't care too much if something gets destroyed.

[0]- http://cloudscaling.com/blog/cloud-computing/the-history-of-...

[1] - https://www.terraform.io/


Are data migrations easier with MongoDB/DynamoDB? They don't really have a schema.


With document DBs you still have "schemas", they just won't be explicit at the database layer. If you rename a field your app uses, you will end up having to either rename that field in every document, or punt on migrating and instead ensure that your app can handle either field name when reading.
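To make the second option concrete, a hypothetical read path (field names made up) might look like:

    # Hypothetical app-layer shim for a field that was renamed without
    # migrating the existing documents.
    def get_display_name(doc):
        if "display_name" in doc:       # newer documents
            return doc["display_name"]
        return doc.get("username", "")  # older documents, with a default

    # Every further rename adds another branch like this, which is exactly
    # the "many schema versions at the app layer" pain described below.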

While the latter may sound easier (and often is easier when you're starting out and iterating quickly), you will soon start to feel the pain of managing dozens of different versions of your "schema" at the app layer. I've always found it easier to only have to deal with a single schema at a time in any situation where you can't just regenerate all records on the fly, like a cache, and for which you don't care about what the specific schema was at a point in time.

If you actually do care about what the schema was at any point in time at the app layer, your app WILL need to account for any version of the data—but if you need this you're accepting the tradeoffs of the additional complexity.

Definitely more nuanced than "no migrations = easier" though.


Yeah, just shovel the data in. No schema, no problem.

The tradeoff is that if you're actually relying on the structure of that data, you're introducing risk since its changes aren't being intentionally managed at the DB level.


How do you work around that with databases that don't have a (formal) schema?


You don't. Use DBs with schemas for structured data.


The Continuous Delivery book is the "bible" in this area.

https://martinfowler.com/books/continuousDelivery.html


"Is it hard? Yes, it is hard. But I hope I’ve convinced you that it is worth doing. It is life changing. It is our bridge to the sociotechnical systems of tomorrow, and more of us need to make this leap. What is your plan for achieving CI/CD in 2021?"

At Release[0] we discussed this in our Build vs Buy[1] page, specifically in the section "Is building a PaaS your core competency?" Companies should be focused on building their product and delivering the value of that product to their customers, not spending years building out a platform to achieve CD.

My hope is that people's plan for achieving CI/CD in 2021 includes looking at all the companies working in this space and give them a chance rather than trying to spin something up on their own.

* If it wasn't clear, I work at Release, so I'm letting my bias be known *

[0] https://releaseapp.io [1] https://releaseapp.io/build-vs-buy


The "Delivery/Deployment" distinction is baseless and used as a justification for not shipping by people who are attached to old and burdensome change management processes.

Software not in the customer's hands hasn't been deployed OR delivered.

Alternatively: Software undeployed is software undelivered.


Note for people attached to old and cumbersome change management processes, eg ITIL:

ITIL’s latest iteration, ITIL v4, has a track for “high velocity IT” which incorporates rapid release cadence, CD, etc and is described by them as suitable for “digital” organisations or organisations going through digital transformation.

I laughed a bit at this because this is just ITIL playing catch-up, but still, it's a useful data point to get some 'stuck in the past' people to see that the "old Enterprise IT ways" are no longer unchallengeable.


I've always found the change management tooling backwards. Force the developer to aggregate the data so that auditors have an easier life when they check a single change once a year. Why doesn't the tooling just hook into the existing data sources that already provide the data for auditors, like PRs, testing data, JIRA tickets, etc.?


This article addresses an issue I currently have but doesn’t actually address how to fix it. I recently set up a simple homeserver to use as a testing ground for sideprojects before eventually porting them to a cloud service.

As a result I looked into building a CI/CD pipeline for the first time and practically every article I came across talking about “CI/CD” really just talked about CI.

Even today I have no idea how to easily automate deployments. The only service I know of that does this is Heroku. Ideally I should be able to push any changes to a master branch on GitHub and have those changes automatically deployed to my server. In my experience, how this can be done is poorly documented and nowhere near as widely discussed as CI solutions.


I implemented CI/CD pipeline for a project I am working on recently. I used Azure DevOps and the app is self-hosted on our servers. It took some time to figure out but I got it working after some trial and error.

Right now, it is triggered by a push to the "master" branch: the pipeline builds the app, runs tests, and creates artifacts (files to be used for release/deployment). Then the release pipeline is triggered; this builds a "release" and deploys the application to the "staging" environment used for user testing. Once the application is approved, I have to go to Azure DevOps and confirm deployment to "Production". There are many different approaches, triggers, and settings.

Check out Azure DevOps they have a lot of good information on CI/CD pipelines.


In our production system, CircleCI has Github and AWS credentials. When CI is done building and testing the Docker image, it pushes it to a private container registry and updates the terraform infrastructure repo with the new container hash ID (either through a tfvar file or through env variables). Then another CircleCI job runs `terragrunt apply` in that repo to deploy to staging (prod/stage/dev are separate folders, automated jobs only update staging). Deploying from staging to prod is manual, by copying the container hash from staging to prod and pushing to master.


That's because building software & running tests is somewhat standardized, but deploying is very specific to each application's environment (and requires additional credentials, integration with monitoring, automated rollback, etc).

Some platforms provide this as part of their feature set (like AWS).

You talk about your side-projects - how do you deploy them today? Write a script for that and think about how many unique-to-yourself edge cases you are handling. If you believe your solution is generic and re-usable, then build that into a tool/platform for everyone and profit!


The key word that covers the Heroku-style experience is "GitOps".

Most tools are Kubernetes-centric (i.e. FluxCD[0] or Jenkins X[1]), but there are some simpler Docker-only options as well such as Dokku[2].

[0]: https://fluxcd.io/

[1]: https://jenkins-x.io/

[2]: http://dokku.viewdocs.io/dokku/


>As a result I looked into building a CI/CD pipeline for the first time and practically every article I came across talking about “CI/CD” really just talked about CI.

You don't mention what technology you are using, but maybe you didn't find the correct resources?

https://codefresh.io/docs/docs/yaml-examples/examples/#deplo...

Full disclosure: I work for Codefresh


In my hobby project I just have a script that pushes docker images and rsyncs the files (docker-compose.yml and some volume-mounted things), and then SSHes to the server and restarts docker-compose.
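Roughly like this (a hypothetical version with made-up host, image, and paths):

    # Hypothetical version of that script: push the image, rsync the compose
    # file, then restart docker-compose over SSH.
    import subprocess

    HOST = "deploy@example.com"
    IMAGE = "registry.example.com/myapp:latest"
    REMOTE_DIR = "/srv/myapp"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort on the first failure

    def deploy():
        run(["docker", "push", IMAGE])
        run(["rsync", "-az", "docker-compose.yml", HOST + ":" + REMOTE_DIR + "/"])
        run(["ssh", HOST,
             "cd " + REMOTE_DIR + " && docker-compose pull && docker-compose up -d"])

    if __name__ == "__main__":
        deploy()

Have whatever CI you use run something like this on pushes to master and you get a very small-scale CD.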


Where would you like to deploy? I've never had the problem you mention, but I knew how to deploy manually before using CI/CD; the problem, if any, is usually how to translate what you are already doing to deploy manually into the CD format/language and mechanism.

EDIT: I subscribe to what jasonpeacock is commenting above; I think it's better expressed in those words.


In my experience the opposite is true - no one is doing CI. But that's only because the definition of CI is unrealistic/impossible for large teams, specifically this requirement: https://en.wikipedia.org/wiki/Continuous_integration#Everyon...

I've never seen a team that operates with every developer committing every day. Small commits merged into a stable trunk as often as possible, yes. But every developer merging code every single day is unrealistic, counter-productive, and incompatible with code review practices.


> Everyone commits to the baseline every day

That's not a hard requirement, it's more like a principle.

In my experience, it's easier to review smaller things than gigantic PRs. I don't take this line item to mean literally committing every chronological day, but in the sense of not withdrawing from the world and building out entire systems in a cave, so to speak. It's a more granular version of "prefer agile over waterfall".

Another subtle aspect of the commit-often mantra - particularly when it comes to a workflow with tests running in CI - is that you are encouraged to build software in a bottom up fashion (simply because if you try to "tack on" an incomplete feature to an existing live system, it'll obviously not work).


I'm actually having this discussion with a client at this very moment. The client is expecting that check-ins to the baseline happen multiple times per day. On our distributed team, with junior and senior people in different locations, the ability to do async code reviews is critical to maintaining quality. Automated linting, unit testing, and other CD-friendly tools can't teach and enforce code quality the way we need with our junior developers.


The author seems to be asking for some things that just don't make sense.

My impression is that they claim that "true" CD means each deployment is of a single author's changes, and that batching defeats the spirit/purpose of CD.

But many orgs commit to master faster than deploys can go out (e.g. 15 minutes), especially if care is taken to ensure the change did not cause problems (automatically or manually detected).

I only skimmed parts; was there a solution to this problem mentioned? Or did I misunderstand?


What strategies can a team implement for CD if a merge to master takes about an hour to deploy today?

Our problem is that we need to manually cut a release branch with the right version number and then go through logistical steps before we can merge.

Docker alone takes about 30 minutes to build.


I suggest a pull-based approach, instead of GitLab/GitHub's push-based thing. Have a regular job pick up the latest master branch, tag it correctly (take care to order your releases), build, package, ship, and deploy. I would keep it synchronous: it doesn't make sense to build the latest commit when deployment takes hours. Just wait for the last deployment to finish before starting again (rough sketch at the end of this comment).

Also, a pull-based approach allows you to integrate multiple repos. I have never seen anyone from the CD camp talk about how a company should manage CD across n interdependent git repositories with the usual push-based approach.
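A minimal sketch of that kind of pull-based job (hypothetical; the build/deploy step is a placeholder for real tooling):

    # Hypothetical pull-based deployer: poll the repo, and only start a new
    # build/deploy once the previous one has finished.
    import subprocess
    import time

    def latest_master_sha(repo_path):
        subprocess.run(["git", "-C", repo_path, "fetch", "origin"], check=True)
        out = subprocess.run(
            ["git", "-C", repo_path, "rev-parse", "origin/master"],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    def build_package_deploy(sha):
        # Placeholder: tag (in order), build, package, ship, deploy -- and
        # block until the deployment has actually finished.
        print("deploying", sha)

    def run_forever(repo_path, poll_seconds=300):
        deployed = None
        while True:
            sha = latest_master_sha(repo_path)
            if sha != deployed:
                build_package_deploy(sha)
                deployed = sha
            time.sleep(poll_seconds)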


I have always wondered how companies managed CI/CD infrastructure with PCI or SOC2 requirements, given that there have to be a lot of manual approvals and acknowledgements through the delivery process.


Automate everything else but the approvals. Releases should still be built, tested, and pushed completely automatically, except that one of the release process steps is to present an "Approve" button to the release manager. After the release manager clicks the button, the rest of the release proceeds automatically.

Generally you would only need the manual approvals for prod. Dev, qa, staging, etc., can typically still be released completely automatically, so you just create a CI/CD infrastructure that can be used for both.


The whole point of continuous deployments is that they are continuous. This is in contrast to processes that require approvals.


Yes, but if you have regulatory requirements for a manual approval (as the parent comment of mine did), then that's the best you can do - everything is continuous except for the approval.


If you are not allowed to deploy continuously then don't do continuous deployments.

The tendency to bend words until they lose their meaning just to tick them off is a bit strange. No one can possibly use every trendy idea simultaneously. No trend is a great fit for every situation.



