I do like that this post expresses gratitude for the many years of free OSS CI that Travis provided. I especially want to echo that and thank the original founders and engineers who helped grow Travis CI and the whole idea that 'every project gets CI', which is now prevalent (in most professional realms), just as the previous generation embraced the idea that 'every project is in a VCS'.
It's too bad what's happened, I think, but Travis does have a great legacy.
Not every setback is a pattern, but if you have no histamine response at all then you will miss a lot of preventable issues.
We really need to improve the "annoying authentication" so this doesn't become a reason to continue building silos with a single vendor.
They also support MIPS (and their BSD builds support SPARC).
> Note: support for multi-arch builds is underway, but not yet available.
Happily, I'd already gotten into the habit of making sure my repositories were "self-contained". When I first started using self-hosted Travis (Hudson?) back in the day, I did all the configuration in the UI, but I soon realized I should keep the CI system very simple and essentially just run "make test", "make build", etc.
Having the actual test/build scripts inside the repositories means that a human can easily run the same things interactively, and that moving between CI systems is painless: all you need to do is configure the location of the repository and enter the names of the appropriate scripts.
Even now my GitHub Actions assume that they can just run a script within the repository; for example, the most basic one:
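Something along these lines (a minimal sketch; the workflow name and script path are illustrative, not the commenter's actual file):

```yaml
# .github/workflows/ci.yml -- the workflow does nothing but call a
# script that lives in the repository itself.
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the repository's own test script
        run: ./.github/run-tests   # hypothetical path; could just be "make test"
```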
Also, while I'm not suggesting either moving or staying, Travis still does the job as it did before¹, so unless pricing or specific features are a requirement, the technical incentive to move may (may) not be high enough.
¹ I did notice one change, though: the list of builds slowed down at some point. Their solution: reduce the number of entries listed while loading (!!).
GitLab invented this, GitHub copied it. Just like Snapchat invented it and Instagram copied them.
Have we learned nothing?
What will you have us do, in the mean time? Will you pay to run my open source projects' CI servers?
And if you do, what exactly makes you better than "another Microsoft monopoly"?
Also other CI systems support their configuration format: https://blog.drone.io/drone-adapter-gitlab-pipelines/
And the disadvantages of the switch? None that I've noticed, so far. Perhaps the bloom will go off this rose as well, but that's life in the software world.
Honestly, we live in weird times.
Take a look at Tekton CI/CD for a good primitive that you can build on.
1. Build matrices. I was running Windows builds on AppVeyor and Linux builds on Travis for a long time (faster builds / iteration / feedback). Regression testing against multiple rustc versions can be done from within a script, but doing so via CI configuration makes for easier-to-read CI results that flag individual builds as failed, instead of one entire failed script whose logs you have to go diving into.
2. Custom runners. Be it custom hardware or licensing-burdened custom software, I frequently need to own the actual CI hardware, and using the native solutions for CI runners is typically easier than manually configuring whatever custom infrastructure I might come up with.
3. Images. Using prewarmed cloud-ci-specific images is often faster to spin up.
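The matrix in point 1 is exactly what the per-service configuration buys you; in Travis syntax it might look like this (versions and OS list are illustrative), with each cell reported as a separate job so a single failing rustc version is flagged individually:

```yaml
# Illustrative .travis.yml build matrix: 2 OSes x 3 rustc versions = 6 jobs,
# each with its own pass/fail status in the CI results.
language: rust
os:
  - linux
  - windows
rust:
  - stable
  - beta
  - 1.40.0
script: cargo test
```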
As long as you are looking for “free and someone else maintains it” you are going to be hopping services regularly to stay on something that has VC money to burn for goodwill.
If you host it yourself you either have a maintenance headache or a shortage of features.
Depends what your time is worth really; I use buildkite with runners on AWS for work, and it doesn’t suck.
For personal stuff it’s harder to pick.
I used Drone CI in the past and liked its focus on Docker containers for everything. It seemed to contain the sprawling monsters of bash that seem to occur in enterprise pipelines.
Taking this approach has two big upsides. First, the bot is just a binary running on a cheap VPS; so I know that it'll be fast and that I can ssh in at any time to debug if necessary. Second, if there's a feature that I want, I can just add it, rather than twiddling my thumbs. For example, I noticed that I was manually deleting my merged PR branches every time, so I added a few lines to the bot and now it deletes them automatically.
The obvious downsides of this approach are: 1) Implementing even an MVP of such a bot can consume significant time and energy, which you may not have a lot of; 2) If there's a bug, it's your fault and now you need to spend more time and energy tracking it down; and 3) Your bot might have a security vulnerability that exposes secrets, allows injection attacks, etc.
I'd love to see tooling/frameworks that make it easier to create custom CI systems. I think startups (and perhaps larger companies as well) could benefit a lot from building a CI in-house, since it allows you to optimize for your own specific needs. I see a lot of parallels to code linters, where spending a bit of time writing custom lint rules or static analysis tools can have a large payoff.
* A webhook would be fired at my host when a new pull-request was created, or updated.
* When the webhook was received, we'd store details of the repository, the branch name, and the PR ID from the webhook payload into a queue.
* A bunch of worker machines would poll the queue; when a new message was received, they'd check out the repository, run "make test", and add the output of the run as a comment on the PR.
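The core of the flow above fits in a few lines. This is a minimal sketch, not the actual system: the in-memory queue stands in for whatever durable queue was used, and the `run_tests` callback stands in for checking out the repository and running "make test".

```python
# Minimal sketch of the webhook -> queue -> worker flow described above.
# The queue is in-memory and the test runner is a stand-in; a real system
# would use a durable queue and actually clone the repo and run "make test".
import queue

jobs = queue.Queue()

def handle_webhook(payload: dict) -> None:
    """Store repository, branch name, and PR id from the webhook payload."""
    jobs.put((payload["repository"], payload["branch"], payload["pr_id"]))

def worker_step(run_tests) -> str:
    """Pop one job, run the tests, and return the comment to post on the PR."""
    repo, branch, pr_id = jobs.get()
    output = run_tests(repo, branch)  # would check out the repo and run "make test"
    return f"PR #{pr_id}: {output}"

handle_webhook({"repository": "example/repo", "branch": "fix-bug", "pr_id": 42})
comment = worker_step(lambda repo, branch: "all tests passed")
print(comment)
```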
This was done as a temporary thing, rather than spinning up Jenkins/similar, with the expectation we'd get Github Actions coming to Github Enterprise in the near future.
(Caveat: If you plan to do devops at a Big Corp, then you might as well get good at Jenkins because they already use it.)
I say almost because Gitlab CI lacks one critical thing: support for tasks independent of a commit or other event. Stuff like "take a dump of the production database and synchronize it to the integration environment".
Also, because its workers poll for jobs instead of the master pushing work to agents, and because it spins up a new container for each job instead of reusing the same environment, Gitlab CI is slower than Jenkins, which does matter for some people.
A particularly dumb case showing this is when something needs to be done on a remote server via ssh. In Jenkins, one clicks "SSH Agent", chooses the credential, and can then use "ssh user@host" just fine and do whatever one wants. In Gitlab CI, one has to check whether ssh is available on the runner image, install it if it isn't, eval ssh-agent, and only then does it work. All of this needs to be re-done on each run (additionally meaning that your jobs depend on an Internet connection plus the distribution's package servers!). In Jenkins, with a proper tool configuration, I can specify something like Maven or NodeJS and Jenkins will automatically install the tool if it is not present, and then never again; in Gitlab, as it's stateless, all of this needs to be re-done on every single build.
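For concreteness, that dance usually ends up as a `before_script` block along these lines, re-run on every job (the key variable name is an assumption, and the `apt-get` line assumes a Debian-based runner image):

```yaml
# Re-done on every run, since the job environment is stateless.
before_script:
  - apt-get update && apt-get install -y openssh-client   # if ssh isn't in the image
  - eval "$(ssh-agent -s)"
  - echo "$SSH_PRIVATE_KEY" | ssh-add -
```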
GitLab CI has the ability to do SSH on the runners: you deploy a runner and configure it to use the SSH executor, and then it won't use containers at all and will instead use SSH. The same is true for configuring the runner with the shell executor. You can then reuse the same environment over and over, just like Jenkins does.
As for Maven and NodeJS: if you're using containers, you simply build a Docker image with those baked in and use it for your builds. GitLab also has a container registry that allows your images to work seamlessly and quickly with the runners.
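As a sketch, "baking in" the tools is just a small image pushed to the registry and referenced via `image:` in the CI config (base image and Node version here are illustrative):

```dockerfile
# Illustrative build image with Maven (via the base image) and Node.js
# preinstalled, so jobs don't reinstall tools on every run.
FROM maven:3-eclipse-temurin-17
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y nodejs
```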
For independent tasks without commits, you can easily configure a GitLab job to trigger only if a pipeline variable is present, then trigger the pipeline via an HTTP POST request, via the UI, or via an event.
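In `.gitlab-ci.yml` that might look like this (job name, script path, and variable name are all illustrative):

```yaml
# Runs only when the SYNC_DB variable is set, e.g. from the UI, a schedule,
# or the pipeline-trigger API:
#   POST /api/v4/projects/:id/trigger/pipeline  with variables[SYNC_DB]=true
sync-database:
  script:
    - ./scripts/sync-prod-to-integration.sh
  rules:
    - if: '$SYNC_DB == "true"'
```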
I talk about and demonstrate all of these topics on my blog, www.lackastack.com. Shameless plug, but I hope it helps.
The problem is that each of the magic functions you listed above are separate plugins. They're not part of Jenkins itself. Each plugin may (and often will) push breaking changes and vulnerabilities to your Jenkins instance, if they haven't been outright abandoned by their maintainers. Over time, your builds will steadily accumulate hacks to work around broken plugins, and your Jenkins instance rots.
Here's an example for running pipelines manually (can even be triggered through the UI): https://docs.gitlab.com/ee/ci/yaml/#whenmanual
I believe the other comments address some of your other concerns as well.
Often better to have your scripts do most of the work and the build tool handle triggers, bookkeeping, and some statistics.
It is a pleasure to work with.
I found the docs to be thorough for the API, but really sparse on how to actually wire up a GitHub build pipeline or do a deploy to k8s. It seems it's really niche and not many people are writing guides, which concerns me.
BOSH is not a requirement to host it yourself, and I've never used it.
Probably the easiest way to run it is to use the helm chart and run it on Kubernetes. There are some things to keep in mind, like the workers having their own containerizer that is not Kubernetes-aware. You don't want it competing with Kubernetes for scheduling containers, so it's best to let the worker pods be the "owner" of their node, and you can use affinities to enforce that.
At one company where we ran it directly on VMs in AWS, I used Ansible to build the CloudFormation stacks for the database (Aurora) and autoscaling groups, and the machines self-configured with Ansible to download and extract the Concourse tarball and start it. I wish I could make that code public but it now lives inside the company where I used it. I guess my point is that running it isn't magic, it's just a program (actually two, web and worker) that you start with some arguments that are pretty well documented.
Yes, there should. If you're a VC and that interests you, I've got so many slides.
I suspect that support for the BOSH deployment will eventually be discontinued.
Disclosure: I work for VMware, which sponsors Concourse. My own views.
It's definitely easier than using something like Jenkins, though. Personally I'm a big fan of GitHub Actions, but I've never used it in a production setting, only for personal projects.
It will be a slow but easy migration.