Hacker News
Moving Away from Travis CI (ropensci.org)
116 points by datajeroen 3 days ago | 76 comments





I wrote a blog post about this a couple weeks ago [1], and I think as a result of writing it in the midst of the week-long annoyance that I had at having to quickly move my active projects over to GH Actions, I may have been a bit too negative in my overall tone.

I do like that this post expresses the gratitude for many years of free OSS CI that Travis did provide, and I especially want to echo that and thank the original founders and engineers who helped grow Travis CI and the entire idea of 'every project gets CI' that is now prevalent (in most professional realms), just like the previous generation embraced the idea of 'every project is in a VCS'.

It's too bad what's happened, I think, but Travis does have a great legacy.

[1] https://www.jeffgeerling.com/blog/2020/travis-cis-new-pricin...


I think in general we are tuned to have a stronger reaction to unexpected setbacks. The people who have no aversion to them end up dying by a thousand cuts, as a long series of predictable but ignored scenarios plays out one after another - in the most delusional cases blaming it all on bad luck instead of a lack of foresight.

Not every setback is a pattern, but if you have no histamine response at all then you will miss a lot of preventable issues.


> The native integration with GitHub takes away the annoying authentication dance that is required for third party services.

We really need to improve the "annoying authentication" so this doesn't become a reason to continue building silos with a single vendor.


The issue is that this is one of those things that’s a direct trade-off between ease of use and security. The smoothest authentication is “this […]”

Luckily, my CI platform supports "this" authentication.

I'm surprised people are still using Travis CI. The writing was on the wall the day of the acquisition. I moved all of my builds immediately.

We (radareorg/radare2 team) use Travis CI specifically to build and run tests on ARM64 (ARMv8 or AArch64), PowerPC64, and SystemZ (s390)[1]. No other CI service offers those. I wish there were also MIPS, SPARC, and RISC-V.

[1] https://travis-ci.com/github/radareorg/radare2/builds/202966...


sr.ht supports ARM64, PowerPC, and SystemZ from what I can see on the list of supported builds: [0]

They also support MIPS (and their BSD builds support SPARC).

[0] https://man.sr.ht/builds.sr.ht/compatibility.md


There are no checks in those - they are empty fields.

The lack of a checkmark signifies that it runs emulated rather than on real hardware - which I believe is the same for Travis CI (its SystemZ builds run atop AMD64 hardware, for example).

There's a supported box for "any support" and a native box for non-emulated support. There's also this statement:

> Note: support for multi-arch builds is underway, but not yet available.


If you have such niche needs I'd say you're better off with a self-managed Jenkins CI. AFAIK it supports whatever hardware you make available yourself.

I work with an important open source project - with large, full-time, professional and competent teams working on it - that had no idea travis-ci.org was shutting down until a couple of weeks ago. Their messaging wasn't enough.

The parent was perhaps a little pessimistic, but I admit the moment I heard there was an acquisition I too assumed that things would change, and I also preemptively started moving my stuff away from them.

Happily I'd already gotten into a habit of making sure my repositories were "self-contained". When I first started using self-hosted CI (Hudson, as Jenkins was then called) back in the day, I did all the configuration in the UI, but soon realized I should keep the CI system very simple and essentially just run "make test", "make build", etc.

Having the actual test/build scripts inside the repositories means that a human could easily run the same things interactively, and that moving between CI-systems is painless. All you need to do is configure the appropriate location of the repository and enter the names of the appropriate scripts.

Even now my github actions assume that they can just run a script within the repository, for example the most basic one:

https://github.com/skx/github-action-tester
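The pattern looks roughly like this as a GitHub Actions workflow - a minimal sketch, where the script path is an assumption:

```yaml
# .github/workflows/test.yml
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # All the real logic lives in the repo, not in the CI config
      - run: ./.github/run-tests.sh
```

Porting this to another CI system means rewriting a dozen lines of glue, not the build itself.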


I didn’t know till I read this blog post.

My company was using it on 400+ repos, it isn't like we could just migrate the next day.

I’ll bet/hope 80% of them were simple and static libraries without any mono-repo nonsense.

Moving may require significant effort for products with sophisticated configurations.

Also, while I'm not suggesting either moving or staying, Travis still does the job as it did before¹, so unless pricing or specific features are a requirement, the technical incentive to move may (may) not be high enough.

¹=I did notice one change though, and it's that the list of the builds slowed down at some point; their solution: reduce the entries listed while loading (!!).


Gitlab is a pioneer here. They did start with 2000 free minutes of shared runners, for private and public repos. I really think we should all thank them for pushing ahead on innovation.

Gitlab invented this and GitHub copied it, the same way Instagram copied Snapchat.


I don't know the timeline of when Gitlab CI started versus Travis, and Gitlab certainly made the innovation of integrating CI into the code host, but Travis and Circle CI were certainly popular before Gitlab was.

It's a shame that we are beating a path to another Microsoft monopoly.

Have we learned nothing?


When time comes, we will move away from GitHub actions too.

What will you have us do in the meantime? Will you pay to run my open source projects' CI servers?

And if you do, what exactly makes you better than "another Microsoft monopoly"?


Why not move to GitLab right now? Their core product is OSS and the CI is integrated and free for OSS projects.

I don't really see how it's any better. But I am using and loving gitlab ci for my projects on paid plans fwiw.

It's better because you could move to self-hosting if something happens to gitlab.com.

Also other CI systems support their configuration format: https://blog.drone.io/drone-adapter-gitlab-pipelines/


No free Windows and macOS runners.

It's easy enough to add your own. I use the cloud-based runners for Linux, then provide my own for Windows, Mac and FreeBSD, all VMs.

Maybe it's easy to add them, but I don't have any Windows or macOS servers in the first place.

When what time comes? When? :)

Travis' fault man.

This makes me wonder what Travis' endgame here is. When Github and Gitlab, the two sites that account for the vast majority of OSS development, offer free CI, who's their target market? Enterprise? They already use Jenkins or Bamboo (or Github Enterprise or Gitlab self hosted).

Probably cut costs and hope enough of the revenue sticks around until they’re profitable. GitHub Actions is a relative newcomer to the party, but they probably have the better pricing structure here, charging by the minute.

I moved to GitHub Actions (GHA) for an R package, and found these benefits of GHA over Travis CI:

1. It's much faster. I think this is because it does not have to build up an environment from scratch, but instead caches some elements.

2. It is more reliable. Those early steps that Travis CI takes in building up an environment seem to be error-prone, leading to failures long before it got around to checking my code. Usually these were bad-signature or failure-to-download problems with basic libraries, and they tended to go away after a few days or a week - but in the meantime my "badges" made it look as though my code had failed tests, when really it was the build environment failing of its own accord.

And the disadvantages of the switch? None that I've noticed, so far. Perhaps the bloom will go off this rose as well, but that's life in the software world.
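For what it's worth, caching on GHA can also be opted into explicitly with the actions/cache action. A sketch for an R package, where the path and cache key are assumptions:

```yaml
# Restore/save the R package library between runs, keyed on DESCRIPTION
- uses: actions/cache@v2
  with:
    path: ${{ env.R_LIBS_USER }}
    key: ${{ runner.os }}-r-${{ hashFiles('DESCRIPTION') }}
    restore-keys: ${{ runner.os }}-r-
```

This skips reinstalling dependencies on most runs, which is where Travis spent much of its build time.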


(Paraphrasing) "Therefore, we find ourselves looking to other open-source friendly solutions, the greatest of which currently seems to be the one by Microsoft."

Honestly, we live in weird times.


What I would do is find a way to decouple your test-running system from the lifecycle tooling. Develop a custom solution that runs CI pipelines in k8s and collates results, passing them upstream to the lifecycle tooling, whether that's GitHub Actions or Jenkins or whatever. By doing this you free your pipelines from lock-in and can utilize cheap compute (DigitalOcean, GKE pre-emptibles, etc).

Take a look at Tekton CI/CD for a good primitive that you can build on.


Just having your CI steps run scripts that install/build/deploy/??? already goes a long way toward decoupling from your CI solution, and can be a lot faster than spinning up a custom image for your CI build. Custom images don't necessarily do much for fixing the last 10% either - I'll typically want CI-specific:

1. Build matrices. I was running Windows builds on AppVeyor and Linux builds on Travis for a long time (faster builds / iteration / feedback). Regression testing against multiple rustc versions can be done from within a script, but doing it via CI configuration makes for easier-to-read CI results that flag individual builds as failed, instead of an entire script that you have to go log-diving into.

2. Custom runners. Be it custom hardware or licensing-burdened custom software, I frequently need to own the actual CI hardware, and using the native solutions for CI runners is typically easier than merely configuring whatever custom infrastructure I might come up with.

3. Images. Using prewarmed cloud-ci-specific images is often faster to spin up.
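The build-matrix point (1) can be sketched in GitHub Actions terms - each cell shows up as its own pass/fail entry in the results; the OS and rustc versions here are illustrative:

```yaml
jobs:
  test:
    strategy:
      fail-fast: false           # report every cell, not just the first failure
      matrix:
        os: [ubuntu-latest, windows-latest]
        rust: [stable, beta, 1.46.0]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - run: rustup default ${{ matrix.rust }}
      - run: cargo test
```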


Buildkite is BYO infrastructure. It's worked really well for us, as we have a fairly idiosyncratic build process/test suite that out of the box solutions like GH Actions or CircleCI don't really cater to.

Any recommendations on good open source CI / CDs? Right now I just use github actions, but am interested in other self hosted options.

It’s a difficult space.

As long as you are looking for “free and someone else maintains it” you are going to be hopping services regularly to stay on something that has VC money to burn for goodwill.

If you host it yourself you either have a maintenance headache or a shortage of features.

Depends what your time is worth really; I use buildkite with runners on AWS for work, and it doesn’t suck.

For personal stuff it’s harder to pick.


Buildkite for work, GH Actions for personal.

I used Drone CI in the past and liked the focus on Docker containers for everything. It seemed to contain the sprawling monsters of bash that tend to occur in enterprise pipelines.


I’ve been quite pleased with the free, unlimited GitHub Actions for public repos. I’m sure the party will end eventually, but it’s easy enough to move to whatever is best whenever that happens. And it’s not VC money that’s funding it, so who knows how much goodwill and market share Microsoft wants to buy?

GitLab CI is nice; if you're comfortable with GitHub as your primary remote, it's easy enough to set up a secondary remote (and have most git actions duplex to both).

I know this isn't super helpful, but I built my own bors-style CI bot over a few weekends, and I'm very satisfied with it. It looks for PR comments with a specific keyword, then pulls the branch, tests it, and pushes a merge commit automatically.

Taking this approach has two big upsides. First, the bot is just a binary running on a cheap VPS; so I know that it'll be fast and that I can ssh in at any time to debug if necessary. Second, if there's a feature that I want, I can just add it, rather than twiddling my thumbs. For example, I noticed that I was manually deleting my merged PR branches every time, so I added a few lines to the bot and now it deletes them automatically.

The obvious downsides of this approach are: 1) Implementing even an MVP of such a bot can consume significant time and energy, which you may not have a lot of; 2) If there's a bug, it's your fault and now you need to spend more time and energy tracking it down; and 3) Your bot might have a security vulnerability that exposes secrets, allows injection attacks, etc.

I'd love to see tooling/frameworks that make it easier to create custom CI systems. I think startups (and perhaps larger companies as well) could benefit a lot from building a CI in-house, since it allows you to optimize for your own specific needs. I see a lot of parallels to code linters, where spending a bit of time writing custom lint rules or static analysis tools can have a large payoff.


I did something similar for a self-hosted github enterprise installation:

* A webhook would be fired at my host when a new pull-request was created, or updated.

* When the webhook was received we'd store details of the repository, the branch name, and the PR ID into a queue, from the webhook payload.

* A bunch of worker machines would poll the queue, and when a new message was received they'd checkout the repository, run "make test", and add the output of the run as a comment to the PR.

This was done as a temporary thing, rather than spinning up Jenkins/similar, with the expectation we'd get Github Actions coming to Github Enterprise in the near future.


We’ve been running Gitlab on-prem for around 6 years. We run self-hosted Gitlab CI runners on an on-prem Kubernetes cluster. We’re a team of around 20 and collectively execute around 1000 Gitlab CI jobs every day across numerous client projects for web, mobile, cloud deployments, etc. It works amazingly well.

builds.sr.ht is another one. Hosted: https://builds.sr.ht/ Source code: https://git.sr.ht/~sircmpwn/builds.sr.ht

Definitely look at Jenkins. You can do pretty much whatever you want with it (not always a good thing though of course, use responsibly)

Chiming in to say: Avoid Jenkins like the plague. Jenkins is a bottomless pit of vulnerabilities and obscure bugs and outdated documentation that will waste weeks of your life.

(Caveat: If you plan to do devops at a Big Corp, then you might as well get good at Jenkins because they already use it.)


Gitlab CI could almost replace Jenkins.

I say almost because Gitlab CI lacks one critical thing: support for tasks independent of a commit or other event. Stuff like "take a dump of the production database and synchronize it to the integration environment".

Also, Gitlab CI is slower than Jenkins, which does matter for some people: workers poll for jobs rather than having the master push work to them, and each job spins up a new container instead of reusing the same environment.

A particularly dumb case showing this is when something needs to be done on a remote server via ssh. In Jenkins, one clicks "SSH Agent", chooses the credential, and can then use "ssh user@host" just fine and do whatever is needed. In Gitlab CI, one has to check if ssh is available on the runner image, install it if it isn't present, eval ssh-agent, and only then does it work - and all of this needs to be re-done at each run (additionally meaning that your jobs have a dependency on an Internet connection plus the distribution's package servers!).

In Jenkins, with a proper tool configuration I can specify something like Maven or NodeJS and Jenkins will automatically install the tool if it is not present - and then never again - while in Gitlab, as it's stateless, all of this needs to be re-done every single build.
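That ssh dance typically ends up as boilerplate like the following in .gitlab-ci.yml - a sketch, where the SSH_PRIVATE_KEY CI variable, the Debian-based image, and the target host are all assumptions:

```yaml
deploy:
  image: debian:buster-slim
  before_script:
    # Install the ssh client if the image doesn't ship it (needs network!)
    - apt-get update -qq && apt-get install -y -qq openssh-client
    # Start an agent and load the key from a CI variable
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    - ssh -o StrictHostKeyChecking=no user@host 'hostname'
```

All of the before_script lines run again on every single job.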


Hey op - wanted to chime in here: some of the things you said aren't accurate anymore.

GitLab CI has the ability to do SSH on the runners. You deploy a runner and configure it to use SSH; then it won't use containers at all and will instead run jobs over SSH.

The same is true for configuring the runner in a shell capacity. You can then reuse the same environments over and over just like Jenkins does.

As for Maven and NodeJS: if you're using containers, you simply build a Docker image with those baked in and use it for your builds. GitLab also has a container registry that allows your images to work seamlessly and quickly with the runners.

For independent tasks without commits: you can easily configure a GitLab job to trigger only if a pipeline variable is present, then trigger the pipeline via an HTTP POST request, via the UI, or via an event.
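A sketch of that variable-gated pattern - the job name, variable name, script, trigger token, and project id are all placeholders:

```yaml
# Trigger with the pipeline triggers API, e.g.:
#   curl -X POST -F token=<trigger-token> -F ref=master \
#        -F "variables[RUN_DB_SYNC]=true" \
#        https://gitlab.example.com/api/v4/projects/<id>/trigger/pipeline
db_sync:
  script:
    - ./scripts/sync-prod-dump-to-integration.sh
  only:
    variables:
      - $RUN_DB_SYNC == "true"
```

Regular push pipelines skip the job because the variable is absent.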

I talk about and demonstrate all of these topics on my blog www.lackastack.com - Shameless plug, but I hope it helps.


That's a good point, there is a vacuum for a good general purpose task automation tool that Jenkins has historically filled.

The problem is that each of the magic functions you listed above are separate plugins. They're not part of Jenkins itself. Each plugin may (and often will) push breaking changes and vulnerabilities to your Jenkins instance, if they haven't been outright abandoned by their maintainers. Over time, your builds will steadily accumulate hacks to work around broken plugins, and your Jenkins instance rots.


The same is largely true of GitHub Actions... so don't use any of that: if you have Jenkins doing anything other than task management and running shell scripts you maintain, I'd argue strongly you are doing it all wrong (if nothing else, you are buying into lock-in for no reason).

Here's an example of scheduling pipelines to execute automatically: https://docs.gitlab.com/ee/ci/pipelines/schedules.html

Here's an example for running pipelines manually (can even be triggered through the UI): https://docs.gitlab.com/ee/ci/yaml/#whenmanual
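Those two mechanisms might look like this side by side - a sketch, with placeholder script names:

```yaml
nightly_backup:
  script:
    - ./backup.sh
  only:
    - schedules   # runs only from a pipeline schedule configured in the UI

deploy_prod:
  script:
    - ./deploy.sh
  when: manual    # waits for a button press in the pipeline view
```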

I believe the other comments address some of your other concerns as well.


Manually running pipelines is much easier and more configurable in Jenkins. There are many open issues in Gitlab aimed at making migration from Jenkins easier, but those issues are not prioritized.

Couldn't you do something like have a dummy repo just for triggering things, and then make commits to it via cron?

Yeah been years since I used it and forgot about this. Very true. (and yep used it at several big corps)

Jenkins is powerful and painfully complex. You might need that complexity for large projects but personally I've found it to be so frustrating to set up simple stuff.

As we discussed in a previous thread, the features tend to become a trap of lock-in.

Often better to have your scripts do most of the work and the build tool handle triggers, bookkeeping, and some statistics.


That does seem like the best practice.

Also take a look at Drone CI, it's in good hands now.

I use self-hosted Gitlab CI runner with docker. It never misses a beat.

Also using self-hosted Gitlab CI runner w/ some basic shell scripts.

It is a pleasure to work with.


I have had good experiences with Concourse CI, especially for trunk-based development

I really liked the expressivity of the yaml syntax when I did some prototyping with Concourse recently, and the Resource abstraction is very nice.

I found the docs to be thorough for the API, and really sparse for how to actually wire up a GitHub build pipeline or do a deploy to k8s, seems it’s really niche and not many people are writing guides, which concerns me.


How do you host concourse? Did you adopt their entire stack? Last time I looked at it, it seemed cool but I had no interest in learning BOSH.

I also love Concourse and feel it has the perfect set of abstractions to be able to compose arbitrarily complex pipelines. You can extend it by creating your own resources (I actually wrote a tool to help make this easier[0]). The pipeline visualization is better than anything else I've seen. I'm not aware of a hosted Concourse solution, but there really should be.

BOSH is not a requirement to host it yourself, and I've never used it.

Probably the easiest way to run it is to use the helm chart[1] and run it on Kubernetes. There are some things to keep in mind, like the workers have their own containerizer that is not Kubernetes-aware. You don't want it competing with Kubernetes for scheduling containers, so it's best to let the worker pods be the "owner" of their node, and you can use affinities to enforce that.

At one company where we ran it directly on VMs in AWS, I used Ansible to build the CloudFormation stacks for the database (Aurora) and autoscaling groups, and the machines self-configured with Ansible to download and extract the Concourse tarball and start it. I wish I could make that code public but it now lives inside the company where I used it. I guess my point is that running it isn't magic, it's just a program (actually two, web and worker) that you start with some arguments that are pretty well documented[2].

0. https://github.com/cloudboss/ofcourse

1. https://github.com/concourse/concourse-chart

2. https://concourse-ci.org/install.html


> I'm not aware of a hosted Concourse solution, but there really should be.

Yes, there should. If you're a VC and that interests you, I've got so many slides.


Kubernetes deployments overtook BOSH deployments in the last Concourse community survey: https://blog.concourse-ci.org/community-survey-2020-results/...

I suspect that support for the BOSH deployment will eventually be discontinued.

Disclosure: I work for VMware, which sponsors Concourse. My own views.


Been using PKS for almost all of this year at work and it feels very much like 'the writing is on the wall'.

It all runs on a couple of ec2 instances in AWS. I'm not the guy who installed it (and I started at my current company after it was setup so maybe it was complicated?) but as far as maintenance goes it's been pretty much set and forget. Our team is pretty DevOps heavy and we never really need to do anything with it.

Definitely easier though than using something like Jenkins, but personally I'm a big fan of Github Actions. I've never used it in a production setting though, only for personal projects.


CD-wise I've had a decent experience with gocd.

I am/was using Travis for my open source projects. I started noticing my builds taking forever. I did some checking around and decided it's just easier to move to GitHub Actions rather than figuring things out.

It will be a slow but easy migration.


I also moved from Travis CI to GitHub Actions and was pleased with the results. It wasn't mentioned in the article, but GitHub Actions is free for both public and private repos. To my knowledge, Travis CI was never free for private repos.

There are limits on GHA minutes for private repos when not using self-hosted runners (currently 2,000 minutes per month allocated per org/user on "GitHub Free").

We are also moving away from Travis for openfaas, k3sup, arkade and inlets. That's about 50 repos and it's unfortunately taking the community away from delivering actual value.

We used to use Jenkins but have moved to AskJeevesCI as it is the same guy


