Yes, Jenkins has its issues (crappy UX, poor/awkward docs), but where it shines is the fact it's self-hosted, so I can SSH onto the instance to debug a failing build or replay it with a modified Jenkinsfile on the fly.
I'm quite proud of our current setup. We host our app on Google's Container Engine (Kubernetes), and on every build Jenkins creates a slave agent inside the same cluster as production (just a different node pool), so the environment the test containers run in is identical to production. What's more, the agent actually talks to the same Kubernetes master, which means I can, for example, test our Nginx reverse proxy against real backend endpoints and real certificates (which are mounted through GlusterFS).
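To make the idea concrete, here is a rough sketch of what such a build-agent pod might look like; the node pool name, labels, and volume names are invented for illustration, not taken from the actual setup:

```yaml
# Hypothetical pod spec for a Jenkins build agent scheduled into the
# production cluster, but pinned to a dedicated build node pool.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
  labels:
    role: ci-agent
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: build-pool   # keep CI off the production nodes
  containers:
  - name: jnlp
    image: jenkinsci/jnlp-slave
    volumeMounts:
    - name: certs
      mountPath: /etc/nginx/certs
      readOnly: true
  volumes:
  - name: certs
    glusterfs:                 # same GlusterFS volume that production mounts
      endpoints: glusterfs-cluster
      path: certs-volume
      readOnly: true
```

Because the pod runs against the same master, anything the production pods can reach (services, secrets, volumes) is also reachable from the test containers.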
Drone, for example, tries to sell itself as a "Jenkins replacement," but it has no concept of a build being triggered arbitrarily, or of a build that isn't intrinsically linked to a git repo. It's nonsense.
Once you set up a bunch of tooling jobs on Jenkins it's very nice to be able to use it as some form of control center for a bunch of different operations. Calling it a build server is underselling it.
1. Periodic jobs (not linked to a pushed commit)
2. Not linked to a git repo
At GitLab we try to have 'infrastructure as code'. We think builds should be under version control to allow for collaboration.
But periodic jobs make a lot of sense and it is discussed in https://gitlab.com/gitlab-org/gitlab-ce/issues/2989
I can do this by setting up a manual build that I just run occasionally, but what if I forget to run it for months? In Jenkins I could say "run this job for no reason whatsoever other than the fact that 30 days have gone by since it last ran."
This, IMO, is a huge use case for Ops. At this point I'll likely need to run both GitLab and Jenkins, which I really don't want to have to do.
Sounds like you really need the ability to run jobs on a periodic interval. Does https://gitlab.com/gitlab-org/gitlab-ce/issues/2989 that I mentioned before address everything you need?
Chatops sounds cool, it would be nice to have a slash command to trigger a build. I've created a feature proposal for it https://gitlab.com/gitlab-org/gitlab-ce/issues/25866
I like to run various reports. For example: run a report monthly that diffs my AWS/GCE firewall rules to show if anything that shouldn't have changed has changed, then email if there is a diff. Same with instance lists, load balancers, things of that nature. Then all of the server stuff: hourly snapshots using aws-cli or gcloud, yaddayadda.
Once the periodic thing is back in I can give that a shot. I've only used the Travis-style builds for about a year, so I'm not very advanced with them. I think once I can do a when: monthly, that will help greatly.
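Assuming the scheduler from issue 2989 lands, a monthly audit job in .gitlab-ci.yml could plausibly look something like this. The exact syntax is speculative until the issue is resolved; the job name and script path are made up:

```yaml
# Hypothetical periodic job: runs only when a GitLab scheduler fires,
# never on push. `only: schedules` is an assumed rule name here.
firewall-audit:
  script:
    - ./scripts/diff-firewall-rules.sh   # your existing report/diff script
  only:
    - schedules
```

The appeal of this shape is that the job definition stays versioned in the repo, while the "every 30 days" cadence lives in the scheduler.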
Re: ChatOps, I've upvoted the ticket. I'd LOVE it if there were a slack: step that GitLab could use that went beyond the checkboxes under Integrations. For instance, I'd like the ability to echo whatever I want to a channel under a Slack step. I can do this by making my own step, running a Slack container and doing webhooks, but that's clunkier than I'd like, especially when the GitLab server already has my Slack tokens/etc. Right now I need to run a container that pulls that info just to webhook to Slack.
echo: "$buildtag has been deployed, you can access this build at $server1 and $server2"
curl: #posttodatadog "$buildtag released on $date"
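Until a first-class slack: step exists, the same effect can be approximated today with plain curl against a Slack incoming webhook. This is a sketch: SLACK_WEBHOOK_URL is a secret variable you would configure yourself, and the job name is invented:

```yaml
# Hypothetical notification job using a Slack incoming webhook directly.
notify-slack:
  stage: deploy
  script:
    - >
      curl -X POST -H 'Content-type: application/json'
      --data "{\"text\": \"$CI_BUILD_REF_NAME has been deployed\"}"
      "$SLACK_WEBHOOK_URL"
```

It works, but it is exactly the kind of boilerplate a built-in step would remove, since the server already holds the Slack credentials.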
There are a lot of features that don't need to be added to the core because they can be resources instead.
For git-triggered builds, I use the git resource. Periodic, I use the time resource or a cron resource. S3 triggered, I use the S3 resource. Triggered on release of new software to PivNet, I use pivnet-resource. Triggered on a new docker image, I use docker image resource. And so on.
Any resource that has a 'get' operation can trigger a build. Third parties can add triggers without worrying about interfering with other resources, because every operation is isolated.
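As a concrete illustration, a minimal Concourse pipeline in which a time resource triggers a job might look like this; the resource and job names are invented:

```yaml
# A time resource "fires" on an interval; any job that gets it with
# trigger: true runs when it does.
resources:
- name: nightly
  type: time
  source:
    interval: 24h

jobs:
- name: integration-tests
  plan:
  - get: nightly          # any resource with a 'get' can act as a trigger
    trigger: true
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      run:
        path: sh
        args: ["-c", "echo running nightly tests"]
```

Swapping the time resource for a git, S3, or docker-image resource changes the trigger without touching the job itself.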
Disclosure: I work for Pivotal, which sponsors Concourse.
What I liked most, and what we're still working on, is cross-project triggers: https://gitlab.com/gitlab-org/gitlab-ce/issues/16556
The use of resources sounds cool.
Sadly, this does not fix bugs. The gitlab-ce issue tracker grows and grows, and the CI is not really stable.
The new permission model was great, but it broke stuff.
Unfortunately, my team and I are currently discussing switching to GitLab + Jenkins instead of GitLab + GitLab for VCS/CI, simply because webhooks + Jenkins is way more stable than the GitLab way (and webhooks are push-based, while GitLab CI polls).
This is sad, since GitLab CI was quick and easy, while even the new DSL for Jenkins is much harder to get right. But we actually can't retry half of our builds.
Another option would be the shell executor, but with that we'd actually lose some functionality (we use the Docker executor).
I know your vision (and I think some of the ideas are great), but after running GitLab since v8, I sadly think too much centralization is harmful; some things don't need to be reinvented by GitLab (they might just need improvement).
The issue tracker is growing because we also keep feature proposals there.
GitLab CI doesn't poll the repository. The GitLab Runner does poll GitLab for new jobs. We're currently working on making sure that GitLab can deal with those requests more efficiently. Polling makes setting up Runners a lot easier for our users.
Not being able to retry half of your builds sounds very bad. Please contact support@ our domain and add a link to this comment to receive help with that.
It was a big thing to add CI to GitLab and we had a lot of concerns about it before we did so. But after doing it the improvements in the interface, CD functionality, Pages, review apps, auto deploy, and many other things convinced us that this is the right path.
GitLab will work with other tools but out of the box it will offer you a great experience. For the benefits of that please see https://about.gitlab.com/2016/11/14/idea-to-production/
My last note is on your mention of the growing issue tracker. The issue tracker is not only for bugs and problems, but also for idea exploration. As GitLab becomes better known, the tracker will naturally keep growing. It is in everyone's best interest to let it flourish, but in an organized way, and we are hard at work in that area as well! For example, we have dedicated issue triage specialists who keep track of newly created issues and answer a lot of them.
This is required because the configuration is versioned in the git repository.
> has no concept of a build triggering arbitrarily
This is no longer entirely correct. Drone can trigger builds and deployments via the API or the command-line utility, and there is a PR to add this to the user interface. The caveat is that you need to trigger from an existing build or commit, because Drone needs a commit sha to fetch the configuration from the repository.
There are individuals using Drone + cron to execute scheduled tasks such as security scans and nightly integration tests.
As for me, I had used Travis just a bit, and the same goes for Jenkins, and I didn't like either option very much.
So when my team needed to setup a CI solution, we ended up using GitLab CI, and it brings the best of both worlds:
- A free hosted version if you use the GitLab.com deployment (granted, gitlab.com is a bit slow because it's the new thing and everybody is using it now), like Travis.
- Open source so that you can host it yourself in the future if you need to, like Jenkins.
- Easy to use and configure, like Travis.
- Free service for private repos in GitLab.com (neither Travis nor any Jenkins-service provider offer this, AFAIK).
I plan to never look back.
For an intro to Gitlab CI see https://about.gitlab.com/gitlab-ci/
Self-hosted CI, on the other hand, requires a lot of resources, especially when you're testing not only Linux or BSD but also proprietary OSes (OSX, Windows; although the latter at least has the Edge images that work everywhere).
Ultimately, I feel like CI adoption in many open source projects isn't very high, due to constant annoyances and the difficulty of debugging the CI environment.
My perspective may be driven by the communities I'm involved in, but this seems wrong to me. From my experience I would say most open source projects use CI, particularly because so many of the bigger platforms (Travis, Circle, CodeShip) make it free for open source.
It is an extremely valuable resource when you consider the infrastructure and integration you get with absolutely no effort. Installing more obscure libraries or cutting edge releases can be a pain, but it is minuscule compared to the time it would take to provision, secure, and manage a group of containers for an open source project.
Fully self-hosted looks like the only sane way. Harder to set up, but at least one can check the whole chain that way.
I've used TeamCity a lot and find the functionality decent to good, but the UI horrible. Perhaps I'm a bit biased because I had access to the server and agents in that case; a lot of debugging/discoverability issues are easier with full access, of course.
I've then used Travis a bit for small GitHub projects (and contributing to other people's projects) and it has a nice UI for simple stuff. However, it gives the impression that they are struggling to stay alive - many features are in beta or feel like they are; a lot is under-documented or planned soon (for multiple years). I know I can dig into their various projects' source code to find out how things work, but I always feel incredibly unproductive figuring something out in Travis. I worry for them. Great to hear you have a good experience with GitLab!
Did you know GitLab also comes with a private container registry? https://about.gitlab.com/2016/05/23/gitlab-container-registr...
Without dind, it's really hard/annoying to use CI/CD to build Docker images. But with GitLab, they also provide a place to store Docker images (the "registry") right next to your code, for free!
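For reference, a build-and-push job using dind and the built-in registry looks roughly like this; the group/project path is a placeholder, and the exact variable names may differ by GitLab version:

```yaml
# Sketch of building an image inside CI and pushing it to the
# project's GitLab container registry.
build-image:
  image: docker:latest
  services:
    - docker:dind          # Docker-in-Docker service for the build
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t registry.gitlab.com/mygroup/myapp .
    - docker push registry.gitlab.com/mygroup/myapp
```

The per-build token means no long-lived registry credentials have to be stored in the project.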
At work, we're investigating moving to Gitlab for everything except issues (which we'd need to keep on JIRA for now since we're so embedded with it). It looks like Gitlab does have some integration with JIRA, but it's project-level, and it would be nice if it could be group-level since we have many small repos. :)
(I don't have an affiliation with Gitlab, I'm just super happy with the service.)
We have a couple of open issues about integration with multiple JIRA projects: https://gitlab.com/gitlab-org/gitlab-ce/issues/25541 and https://gitlab.com/gitlab-org/gitlab-ce/issues/25758. A group-level integration could be a good solution for these requests. We will discuss it on those issues; feel free to join the conversation.
That said, our (Codeship) Docker support is the most "Docker native" on the market, in my opinion. We build your containers, by default, using a Compose-based syntax and all commands are natively executed by your containers. There's no interacting with a Docker host or running explicit Docker commands at all.
We don't offer self-hosting but we do have a local CLI that lets you run and debug your process locally with parity to your remote builds.
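To give a flavor of the Compose-based syntax: services are declared Compose-style in one file and the steps reference them by name. This is a minimal hypothetical example, with the service name and command invented:

```yaml
# codeship-steps.yml — each step runs inside a service container
# (the "app" service would be defined Compose-style in codeship-services.yml)
- name: unit-tests
  service: app
  command: ./run-tests.sh
```

The same files drive both the remote builds and the local CLI runs, which is what gives the local/remote parity.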
(We also went from paying $1k/mo to $0/mo, which is a very nice side effect)
That's not entirely true, because someone had to set Jenkins up and has to maintain it, but once things are rolling, it hardly needs any input.
By lack of proper caching, I mean reusing a volume (or similar). They do have an approach where they bundle up some files and throw them on S3, but that can actually take more time than re-downloading them. Not much of a cache, and caching is important for both Docker and large Java builds.
Supposedly the v2 should fix that, but I've been waiting forever to get my beta invite.
Swapping to Jenkins allowed me to choose faster hardware and optimize our build a bit better. I'm trying to get Blue Ocean to work right, but to be honest it's a supreme PITA; there seem to be bugs/undocumented workarounds on the GitHub authorization side. Once that's up, though, it _ought_ to work better.
Jenkins' usability issues are most of what's allowed these other products to become popular. Hopefully they'll focus on that a lot more, but past performance would suggest they won't. If I weren't so lazy, I would pitch in myself :)
It's something I'm willing to re-think if it turns out to be problematic on any level. But so far after half a year in production, we've seen no problems.
It seems you are using GlusterFS for your PersistentVolumeClaim... why not gce-pd?
As for GlusterFS: our Nginx reverse proxy also has a cron job that runs the Let's Encrypt automatic renewal once a week, so it needs to be able to write the new certificates. Because we need to run several reverse proxies, all with write permissions (we run one at a time, but during deployment two run simultaneously to avoid downtime), we chose Gluster, as gce-pd doesn't support multiple writers.
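The relevant difference shows up in the claim's access mode. A sketch of what the shared certificate claim might look like (names and sizes invented):

```yaml
# PersistentVolumeClaim for the shared certificate volume.
# ReadWriteMany (several pods mounting read-write at once) is what
# GlusterFS provides and gce-pd does not.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: letsencrypt-certs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

With gce-pd the claim would be limited to ReadWriteOnce, which breaks the two-proxies-during-deploy overlap.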
What about ceph ?