I've browsed through the article you linked; one of the subtitles was "Realizing the future of DevOps is a single application". That's also a horrible idea: it locks developers into a certain workflow that's hard to escape. If you hit an issue with your setup that you can't figure out - happened to me with GitLab CI - sorry, you're out of luck. Every application is different; DevOps processes are something to be carefully crafted for each particular case with many considerations: large/small company, platform, development cycle, people's preferred workflows, etc. What I like to do is have small, well-tested parts constitute my DevOps. It's a bad idea to adopt something just because everyone else is doing it.
To sum it up, code should be separate from testing, deployment, etc. On our team, I make sure developers don't have to think about DevOps. They know how to deploy and test, and they know the workflow and commands - but that's about it.
Having CI configuration separate from the code sounds like a nightmare when a code change requires the CI configuration to be updated. If a new version of the code requires a new dependency, for instance, there needs to be a way to tie the CI configuration change to the commit that introduced that dependency. That comes automatically when they're in the same repo.
For example, as a use case: software has dozens of tagged releases; the organization moves from deploying on AWS to deploying in a Kubernetes cluster (requiring at least one change to the deployment configuration). Now, to deploy any of the old tagged releases, every release has to be updated with the new configuration. This gets messy because there are two orthogonal sets of versions involved: first, the code being developed has versions, and second, the environments for testing, integration, and deployment also change over time and have versions to be controlled.
Even more broadly, consider multiple organizations using the same software package. They will each almost certainly have their own CI infrastructure, so there is no one "CI configuration" that could ever be checked into the repository along with the code - each user would end up maintaining their own forks/patchsets of the repo, with all the pain that entails.
I had (and still have) high hopes for CircleCI's orbs to help with this use case. Unfortunately, orbs are private - which makes it a no-go for us.
But, in my dream world, we have bits of the deploy configuration that can be imported from elsewhere - and this is built right into the CI system.
In practice, for my org, the code and configuration for CI come from both the "infra" repo and the "application" repo. The configuration itself is stored in the app repo, but then there's a call to `python deploy_to_kubernetes.py <args>`; the `deploy_to_xxx.py` script lives in the "infra" repo.
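For illustration, the glue in the application repo can be no more than a thin wrapper around the infra repo's script. A minimal sketch, assuming the infra repo has already been checked out somewhere (the `--env`/`--image-tag` flags and the function name are hypothetical; only `deploy_to_kubernetes.py` comes from the setup described above):

```python
import subprocess
import sys
from pathlib import Path

def deploy(infra_dir, env, image_tag):
    """Call the shared deploy script that lives in the separate "infra"
    checkout, passing app-specific settings kept in this (app) repo."""
    script = Path(infra_dir) / "deploy_to_kubernetes.py"
    cmd = [sys.executable, str(script), "--env", env, "--image-tag", image_tag]
    # check=True makes the CI job fail if the deploy script fails
    return subprocess.run(cmd, check=True)
```

The app repo owns *what* gets deployed (config, arguments); the infra repo owns *how* (the script itself), so common deploy logic is fixed in one place.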
It also depends on your workflow - do you change the common deploy infrastructure more often, or the application-specific deploy infra?
Yeah, writing code to deploy code is sometimes fun, but sometimes nasty.
IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?
This way your devs won't have to merge; they can just rerun their tests, which should be the same workflow as if your CI config were separate from your codebase.
It's a CI service that lets you run your builds however you want, on any machine you want (cloud VM, in-house server, your laptop), using an open-source CLI that just wraps any shell script/command and streams the logs to a service to give you all the useful stuff like build history, team account management, etc.
In other words, how you configure and run your builds is up to you - scripts in the same repo, in another repo, in no repo - whatever you want, since you can just git clone or otherwise copy the source from wherever it is. There's no 1-1 relationship between the source repo and the CI, unless you want that. It'll be launching very soon :-)
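The core of "wrap any command and stream the logs" is small. A toy sketch of that idea - not the actual CLI, and `sink` stands in for whatever would POST each line to the build service:

```python
import subprocess

def run_and_stream(cmd, sink=print):
    """Run an arbitrary shell command, forwarding each output line to
    `sink` as it appears (a real CLI would send lines to the service
    here); returns the exit code and the captured log."""
    proc = subprocess.Popen(
        cmd,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave stderr into the same log
        text=True,
    )
    log = []
    for line in proc.stdout:
        line = line.rstrip("\n")
        log.append(line)
        sink(line)
    return proc.wait(), log
```

Since the wrapper doesn't care what the command is, the build definition can live anywhere - or nowhere - exactly as described.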
My initial thought was a guarantee that, should the company not work out, the management console software would be completely open sourced. Obviously this would just be relying on trust, though, which I can see could be an issue.
Though really, the thing that should happen in this case is more tooling that can be run on a local machine and has a deeper understanding of what a "project" is. Git is great for version control, but stuff like issue tracking and CI also exists. What's missing is some meta-tool that ties all of that together.
A bonus: if you make some simple-ish CLI tool that ties all this together, the "GitHub as the controller of everything" risk goes down, because it would become easier for other people to spin up all-encompassing services.
A tool like this would do for project hosting what Microsoft's language server tooling has done for building IDEs. A mostly unified meta-model would mean we wouldn't spend our time rewriting things to store issue lists.
Where else would you put these configs?
Technically, version control lends itself naturally to the now well-accepted infrastructure-as-code mantra.
Operationally, version control is the interface developers interact with most; shifting these interactions to that interface would be beneficial to users.
Of course, DevOps as a skill set is becoming less and less relevant, given the increasingly integrated tooling that interfaces directly with developers - that's for sure.
I think my ideal DevOps situation is one where you start with a simple deployment script on your local machine when it's a one-man show, and then scale it to a large organization not by switching to a large piece of DevOps software, but by gradually adding the pieces you need as your team grows and requirements change. That's exactly what happened to us, and I think our DevOps workflow is really great - I'm very proud of it.
Some companies have tons of old and new projects with very heterogeneous technologies in use. Imagine 50+ teams, several different programming languages, and things being deployed to different "hardware" (bare metal, cloud VMs, Kubernetes, etc.). It's just a lot of work to centrally manage CI configs for all those different teams/cases, handle "support" requests from different teams, fix issues, and so forth. Hence the "easy way out": have each team manage its own CI configuration as much as possible, spreading the maintenance cost across many capable developers.