
It's a horrible trend. CI should not be tied to version control. I mean, we all have to deal with it now, but I'd much rather keep my CI agnostic and not have its config files checked into the repo.

I've browsed through the article you linked to; one of the subtitles was "Realizing the future of DevOps is a single application". Also a horrible idea: I think it locks developers into a certain workflow that is hard to escape. If you have an issue with your setup that you can't figure out - happened to me with GitLab CI - sorry, you're out of luck. Every application is different; a DevOps process is something to be carefully crafted for each particular case with many considerations: large/small company, platform, development cycle, people's preferred workflows, etc. What I like to do is have small, well-tested parts constitute my devops. It's a bad idea to adopt something just because everyone else is doing it.

To sum it up: code should be separate from testing, deployment, etc. On our team, I make sure developers don't have to think about devops. They know how to deploy and test, and they know the workflow and commands - but that's about it.




I'm an operations guy, but I think I have a different perspective. The developers I work with don't have to think about CI/CD either, but the configuration still lives in the repo; I'm just a contributor to that repo, like they are.

Having CI configuration separate from the code sounds like a nightmare when a code change requires the CI configuration to be updated. If a new version of the code requires a new dependency, for instance, there needs to be a way to tie the CI configuration change to the commit that introduced that dependency. That comes automatically when they're in the same repo.


Having CI configuration inside the codebase also sounds like a nightmare when changes to the CI or deployment environment require configuration changes or when multiple CI/deployment environments exist.

For example, as a use case: the software has dozens of tagged releases; the organization moves from deploying on AWS to deploying in a Kubernetes cluster (requiring at least one change to the deployment configuration). Now, to deploy any of the old tagged releases, every one of them has to be updated with the new configuration. This gets messy because two orthogonal sets of versions are involved: first, the code being developed has versions; second, the environments for testing, integration, and deployment also change over time and have versions to be controlled.

Even more broadly, consider multiple organizations using the same software package. They will each almost certainly have their own CI infrastructure, so there is no one "CI configuration" that could ever be checked into the repository along with the code without each user having to maintain their own forks/patchsets of the repo with all the pain that entails.


> organization moves from deploying on AWS to deploying in a Kubernetes cluster

I had (and still have) high hopes for CircleCI's orbs helping with this use case. Unfortunately, orbs are private, which makes them a no-go for us.

But in my dream world, we'd have bits of the deploy configuration that could be imported from elsewhere - and this would be built right into the CI system.

In practice, for my org, the code and configuration for CI come from both the "infra" repo and the "application" repo. The configuration itself is stored in the app repo, but it then makes a call like `python deploy_to_kubernetes.py <args>`, where the `deploy_to_xxx.py` script lives in the "infra" repo.
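As an illustration, a stripped-down sketch of what such a shared script could look like (the flags, manifest details and kubectl calls here are hypothetical, not our actual setup):

    # deploy_to_kubernetes.py -- lives in the "infra" repo and is called
    # from each application repo's CI config. Flags and deployment/container
    # naming are hypothetical.
    import argparse
    import subprocess

    def main():
        parser = argparse.ArgumentParser(description="Deploy an app to Kubernetes")
        parser.add_argument("--app", required=True, help="application/deployment name")
        parser.add_argument("--image", required=True, help="container image incl. tag")
        parser.add_argument("--namespace", default="staging")
        args = parser.parse_args()

        # Point the deployment at the freshly built image, then wait for rollout.
        subprocess.run(["kubectl", "-n", args.namespace, "set", "image",
                        f"deployment/{args.app}", f"{args.app}={args.image}"],
                       check=True)
        subprocess.run(["kubectl", "-n", args.namespace, "rollout", "status",
                        f"deployment/{args.app}"],
                       check=True)

    if __name__ == "__main__":
        main()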

It also depends on your workflow: do you change the common deploy infrastructure more often, or the application-specific deploy infra?

Yeah, writing code to deploy code is sometimes fun, but sometimes nasty.


You can create a separate repo with your own CI config that pulls in the code you want to test, and thus ignore the code's own CI config file. When something breaks, you'd then need to determine in which repo something changed: the CI config repo or the code repo. And then you have CI events attached to PRs in the CI config repository.

IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?
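The closest thing I can think of is a meta-repo whose commits just pin matching (code, CI config) submodule pairs; then plain `git bisect run` works across both. A rough sketch of the driver script (the repo layout and test command are made up):

    # bisect_driver.py -- run from a meta-repo that tracks the code repo and
    # the CI-config repo as submodules, so each meta-commit pins a matching
    # (code, config) pair. Keep this script outside the work tree so bisect
    # checkouts don't rewrite it. Usage:
    #   git bisect start <bad-meta-commit> <good-meta-commit>
    #   git bisect run python /path/to/bisect_driver.py
    # The layout (code/, ci-config/) and the test command are assumptions.
    import subprocess
    import sys

    # Check out the submodule revisions recorded in the current meta-commit.
    subprocess.run(["git", "submodule", "update", "--init", "--recursive"],
                   check=True)

    # `git bisect run` treats exit 0 as good and non-zero (except 125) as bad.
    result = subprocess.run(["python", "ci-config/run_tests.py", "code/"])
    sys.exit(result.returncode)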


I hear you. I thought about it, and I think you need to reframe the problem: if a change in the application breaks your CI, it means you need to adjust the CI so it doesn't break when such changes are introduced. In my experience, these kinds of things happen very rarely.


In my experience maintaining an active project at a large enterprise, these kinds of things happen nearly daily. Sometimes I wake up and our EMEA team has merged in a change that requires a CI change as well, and they are able to self-service those through their PRs.


I'll give you a counterexample: whenever I change the CI workflow for something that has nothing to do with the repo - like a new scheme for deploying to staging - I have to go and ask people to merge/rebase master into their branches, or they won't be able to deploy. It happens pretty often, and I'd rather avoid it.


To fix this problem you can set up your CI server to automatically merge the base branch into the branch under test before running anything (in fact, you should probably do this by default anyway, so you're testing the merge result rather than a stale branch).

This way your devs won't have to merge manually; they can just rerun their tests - the same workflow as if your CI config were separate from your codebase.
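A minimal sketch of such a pre-merge step (the branch and remote names are assumptions; adapt for your setup):

    # ci_premerge.py -- run by the CI server before the build, so that CI
    # changes on the base branch apply to every build without devs having
    # to merge manually. Branch and remote names are assumptions.
    import subprocess

    def merge_base(base="origin/master"):
        subprocess.run(["git", "fetch", "origin"], check=True)
        # If this conflicts, fail the build early; the author then resolves
        # it by hand, which is the same situation as before, just rarer.
        subprocess.run(["git", "merge", "--no-edit", base], check=True)

    if __name__ == "__main__":
        merge_base()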


If your tests are lightweight and fast, you could even trigger this automatically.


I 100% agree, and I'm building https://boxci.dev to solve this problem for myself and, I hope, for others too.

It's a CI service that lets you run your builds however you want, on any machine you want (cloud VM, in-house server, your laptop), using an open-source CLI that just wraps any shell script/command and streams the logs to the service, which gives you all the useful stuff like build history, team account management, etc.

In other words, how you configure and run your builds is up to you - scripts in the same repo, in another repo, in no repo - whatever you want, since you can just git clone or otherwise copy the source from wherever it is. There's no 1-1 relationship between the source repo and the CI unless you want one. It'll be launching very soon :-)


Looking forward to it - very interested. However, I'm going to be honest with you: it'd be a hard decision to rely on a service rather than on open-source software. You guys could go bust at any time, and while the runner is open source, the management console for it, I imagine, won't be. I understand that's what people are paying for, but perhaps consider licensing it rather than running it on your own servers and charging for the service.


This is a really good point and something I'd been thinking about, thanks. I'll have a think about the licensing option - it's a good idea and something I'd not considered.

My initial thought was a guarantee that, should the company not work out, the management console software would be completely open sourced. Obviously this would just be relying on trust, though, which I can see could be an issue.


You might appreciate the sourcehut approach (I'm the author). Each build job is decoupled from git and can be submitted ad hoc, with zero or more git repos declared in the manifest. Your manifest is machine-editable too, so you can add extra build steps or tweak various options before submitting it in specific cases. postmarketOS does this, for example, to automate builds of all of their packages. SourceHut itself does this to meet the demand created by this trend, too: it pulls build manifests out of your git repo, tweaks the sources to check out specific commits, then submits the job over the API.

https://man.sr.ht/builds.sr.ht

https://sourcehut.org
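For instance, a rough sketch of submitting a pinned ad-hoc build from Python, assuming the REST endpoint and token auth described in the builds.sr.ht manual (the token, repo URL and commit here are placeholders):

    # Submit an ad-hoc build with the sources pinned to a specific commit.
    # The /api/jobs endpoint and "Authorization: token" header follow the
    # builds.sr.ht docs; everything else here is a placeholder.
    import json
    import urllib.request

    manifest = """\
    image: alpine/latest
    sources:
      - https://git.sr.ht/~user/project#deadbeef
    tasks:
      - build: |
          cd project
          make
    """

    req = urllib.request.Request(
        "https://builds.sr.ht/api/jobs",
        data=json.dumps({"manifest": manifest, "note": "ad-hoc build"}).encode(),
        headers={"Authorization": "token YOUR-OAUTH-TOKEN",
                 "Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(req).read().decode())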


Having all the tooling integrated makes it a lot easier to offer good feature interaction. Right now the atomisation of "software project management features" means that your project is spread over 10 servers, all held together by flaky webhooks, and requests that should be instant instead take 30 seconds because everything has to be queried remotely.

Though really, what should happen here is more tooling that runs on a local machine and has a deeper understanding of what a "project" is. Git is great for version control, but things like issue tracking and CI also exist; what's missing is a meta-tool that ties all of that together.

A bonus: if you make a simple-ish CLI tool that ties all this together, the "GitHub as the controller of everything" risk goes down, because it becomes easier for other people to spin up all-encompassing services.

A tool like this would do for project hosting what Microsoft's language server tooling has done for building IDEs. A mostly unified meta-model would mean we wouldn't spend our time rewriting things to store issue lists.


Even if code and deployment config are different (and managed by different teams), there's no reason why they can't be stored in the same repo.

Where else would you put these configs?


Into a separate repo. See my other reply: https://news.ycombinator.com/item?id=20647649


Why is it horrible?

Technically, version control lends itself naturally to the now well-accepted infrastructure-as-code mantra.

Operationally, version control is the interface developers work with most, so shifting these interactions to that interface would be beneficial to users.

Of course, DevOps as a skill set is becoming less and less relevant given the increasingly integrated tooling that interfaces directly with developers - that's for sure.


So, it's a very interesting situation. On one hand, I agree with you: when I start a new project, I am the master of it; I take care of everything, and I need to make sure deployment and testing are implemented as early as possible. But this needs to scale. When we hired more people, I quickly realized no one wanted to deal with CI. They wanted it to just work, and I wanted people to work on features and bugs, not to fight CI. So you can call me a devops guy by accident (it turns out that's a huge chunk of what I do as a CTO: remove obstacles by implementing and managing devops).

I think my ideal devops situation is one where you start with a simple deployment script run from your local machine when it's a one-man show, and then scale it to a large organization not by switching to a large piece of devops software, but by gradually adding the pieces you need as the team grows and requirements change. That's exactly what happened to us, and I think our devops workflow is really great; I'm very proud of it.


I'm interested to know how many different teams use your CI system, as well as how many different platforms (operating system, distro, programming language) you support.

Some companies have tons of old and new projects with very heterogeneous technologies in use. Imagine 50+ teams, several different programming languages, and things being deployed to different "hardware" (bare metal, cloud VMs, Kubernetes, etc.). It's just a lot of work to manage CI configs for all those teams and cases, handle "support" requests from different teams, fix issues, and so forth. Hence the "easy way out": have each team manage its own CI configuration as much as possible, spreading the maintenance cost across many capable developers.


Why devops, if everything can be dev'ed? If the dev experience is good enough, does anyone need ops?



