Hacker News

Seeing how much people recommend other solutions, I've actually moved from Travis to Jenkins, and never looked back.

Yes, Jenkins has its issues (crappy UX, poor/awkward docs), but where it shines is the fact it's self-hosted, so I can SSH onto the instance to debug a failing build or replay it with a modified Jenkinsfile on the fly.

I'm quite proud of our current setup. We host our app on Google Container Engine (Kubernetes), and on every build Jenkins creates a slave agent within the same cluster as production (just a different node pool), so the environment in which test containers are run is identical to production. What's more, it talks to the same Kubernetes master, which means I can, for example, test our Nginx reverse proxy with real backend endpoints and real certificates (mounted through glusterfs).




The problem with Travis and the like is that most do not support arbitrary builds.

Drone, for example, is trying to sell itself as a "Jenkins replacement" but has no concept of a build that triggers arbitrarily, or that isn't intrinsically linked to a git repo. It's nonsense.

Once you set up a bunch of tooling jobs on Jenkins it's very nice to be able to use it as some form of control center for a bunch of different operations. Calling it a build server is underselling it.


I think there are two aspects to arbitrary builds:

1. Periodic jobs (not linked to a pushed commit)

2. Not linked to a git repo

At GitLab we try to have 'infrastructure as code'. We think builds should be under version control to allow for collaboration.

But periodic jobs make a lot of sense, and they are discussed in https://gitlab.com/gitlab-org/gitlab-ce/issues/2989
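To sketch the idea being discussed there: a job in .gitlab-ci.yml could be restricted so it only runs when fired by a schedule rather than a push. The `schedules` keyword below is purely illustrative, not final syntax — the feature is still in discussion:

```yaml
# Hypothetical sketch only -- syntax under discussion in the issue above.
nightly-cleanup:
  script:
    - ./scripts/cleanup.sh   # example task
  only:
    - schedules   # would run only on a scheduled trigger, never on push
```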


The lack of arbitrary jobs in GitLab is one of my huge pet peeves. I've used GitLab EE for a year now and really enjoy it for CI/CD. But sometimes I want to throw up a quick shell job that's something super simple, like backing up a database or cleaning up Docker images. I'm used to storing these in Jenkins/Bamboo so that they aren't left hidden in a cron job on some server.

I can do this by setting up a manual build that I just run occasionally, but what if I forget to run it for months? In Jenkins I could say "run this job for no reason whatsoever other than the fact that 30 days have gone by since it last ran."

This IMO is a huge use case for ops. At this point I'll likely need to run GitLab and Jenkins, which I really don't want to have to do.


Thanks for being a customer of GitLab EE, and great to hear you enjoy the CI/CD stuff.

Sounds like you really need the ability to run jobs on a periodic interval. Does https://gitlab.com/gitlab-org/gitlab-ce/issues/2989 that I mentioned before address all you need?

ChatOps sounds cool; it would be nice to have a slash command to trigger a build. I've created a feature proposal for it: https://gitlab.com/gitlab-org/gitlab-ce/issues/25866


Periodic jobs are great for overnight regression testing, things of that nature, and the few examples I gave above. That'll be a great (re)addition. But still, the ability to create random jobs like in Jenkins, without having to create a repo etc., would be a huge plus, along with easing ops teams into performing server maintenance via CI.


If you use one 'catchall' repo for your random jobs you would only have to modify a file instead of creating a repo when adding a new job. Do you think this is acceptable considering the added convenience of having the job under version control?
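As a sketch, a single catchall "ops tasks" repo could hold several unrelated manual jobs in one .gitlab-ci.yml (job names and scripts below are made up):

```yaml
# One repo, many independent one-off jobs, each run on demand.
backup-database:
  script:
    - ./tasks/backup_db.sh   # example script
  when: manual

prune-docker-images:
  script:
    - docker image prune -f
  when: manual
```

Adding a new task then means adding a job stanza to this file, not creating a new project.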


So perhaps something like an infrastructure/tasks repo, then make various "environments" for assigning different env variables (like server IPs, etc.). I'll fiddle around with this and see if it's viable. I just really don't want to have to create a project/repo for every tiny little task/report I want done automatically.

I like to run various reports. For example, monthly I diff my AWS/GCE firewall rules to show if anything that shouldn't have changed has changed, then email if there is a diff. Same with instance lists, load balancers, things of that nature. Then all the server stuff: hourly snapshots using aws-cli or gcloud, yadda yadda.

Once the periodic thing is back in I can give that a shot. I've only used the Travis-style builds for about a year, so I'm not very advanced with them. I think being able to do a when: monthly will help greatly.

Re: ChatOps, I've upvoted the ticket. I'd LOVE it if there were a slack: step that GitLab could use that went beyond the checkboxes under Integrations. For instance, I'd like the ability to echo whatever I want to a channel from a Slack step. I can do this by making my own step, running a Slack container and doing webhooks, but that's clunkier than I'd like, especially when the GitLab server already has my Slack tokens etc. Right now I need to run a container that pulls that info to webhook to Slack.

  slack:
    if: successful
    channel: deploys
    echo: "$buildtag has been deployed, you can access this build at $server1 and $server2"
    curl: #posttodatadog "$buildtag released on $date"
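FWIW, as a stopgap the webhook can be hit straight from a job script without a dedicated container. Something like the below, where SLACK_WEBHOOK_URL is a secret variable you'd set yourself (names are mine, not a real GitLab feature):

```yaml
# Sketch of the curl workaround as a plain CI job.
notify-slack:
  stage: deploy
  script:
    - >
      curl -s -X POST -H 'Content-Type: application/json'
      -d "{\"channel\": \"#deploys\", \"text\": \"$BUILDTAG has been deployed\"}"
      "$SLACK_WEBHOOK_URL"
```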


Look at StackStorm for that kind of stuff (and lots of other cool things). Right tool for the job, etc.

https://docs.stackstorm.com/rules.html


I think one thing GitLab CI lost in taking inspiration from Concourse (amongst other tools!) was the centrality of resources. It wasn't obvious at first why this is so important; I and most others fixated on the other visible differences (containers for everything, declarative config, the pipeline view, etc.).

There are a lot of features that don't need to be added to the core because they can be resources instead.

For git-triggered builds, I use the git resource. Periodic, I use the time resource or a cron resource. S3 triggered, I use the S3 resource. Triggered on release of new software to PivNet, I use pivnet-resource. Triggered on a new docker image, I use docker image resource. And so on.
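As a sketch, a time-resource trigger looks roughly like this in a Concourse pipeline (resource and task names are examples):

```yaml
# A nightly trigger expressed as a resource rather than a built-in feature.
resources:
- name: nightly
  type: time
  source:
    interval: 24h

jobs:
- name: regression-tests
  plan:
  - get: nightly      # fetching the resource is what triggers the job
    trigger: true
  - task: run-tests
    file: ci/run-tests.yml
```

Swapping the time resource for a git, S3, or docker-image resource changes the trigger without touching the rest of the pipeline.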

Any resource that has a 'get' operation can trigger a build. Third parties can add triggers without worrying about interference with other resources, because every operation is isolated.

Disclosure: I work for Pivotal, which sponsors Concourse.


Concourse is great and it was certainly an inspiration for GitLab CI.

What I liked most and what we're still working on are cross project triggers https://gitlab.com/gitlab-org/gitlab-ce/issues/16556

The use of resources sounds cool.


Well, I know you have great ideas, but sadly a lot of GitLab's focus at the moment is either on monetizing with EE (which I can understand, I need to develop for my income as well..) or on adding features to CE (which of course feeds the first).

Sadly this does not fix bugs. The gitlab-ce issue tracker grows and grows, and the CI is not really stable. The new permission model was great, but broke stuff.

Unfortunately, at the moment my team and I are discussing using GitLab + Jenkins instead of GitLab + GitLab for VCS/CI, just because webhooks + Jenkins are way more stable than the GitLab way (and webhooks are push-based, while GitLab CI polls).

This is sad, since GitLab CI was quick and easy, while even the new DSL for Jenkins is way harder to get right. But we actually can't retry half of our builds.

Another way would be using the shell executor, where we actually lose some functionality (we use the Docker executor).

I know your vision (and I think some of the ideas are great), but having gone with GitLab since v8, I sadly think too much centralization is harmful. Some things do not need to be reinvented by GitLab (they might just need an improvement).


We spend a lot of development time fixing bugs. Over the last year GitLab has gotten a lot more stable. Of course it is not perfect and the rapid addition of new features creates new bugs.

The issue tracker is growing because we also keep feature proposals there.

GitLab CI doesn't poll the repository. The GitLab Runner does poll GitLab for new jobs. We're currently working on making sure that GitLab can deal with those requests more efficiently. Polling makes setting up Runners a lot easier for our users.

Not being able to retry half of your builds sounds very bad. Please contact support@ our domain and add a link to this comment to receive help with that.

It was a big thing to add CI to GitLab and we had a lot of concerns about it before we did so. But after doing it the improvements in the interface, CD functionality, Pages, review apps, auto deploy, and many other things convinced us that this is the right path.

GitLab will work with other tools but out of the box it will offer you a great experience. For the benefits of that please see https://about.gitlab.com/2016/11/14/idea-to-production/


Thank you for your thoughts regarding your use of GitLab CI. As the UX Designer on the GitLab CI team, I can say that the upcoming release will focus on making the system more robust and on fixing bugs. Stability and dependability are certainly a priority, and increasingly so. It's about finding the right balance between focusing our scope and broadening it.

My last note is on your mention of the growing issue tracker. The issue tracker is not only for reporting bugs and problems, but also for idea exploration. The more widely GitLab becomes known, the more it will grow. It is in everyone's best interest to let it flourish, but in an organised way. We are hard at work in that area as well! For example, we have dedicated Issue Triage Specialists who keep track of, and answer, a lot of the newly created issues.


> or that isn't intrinsically linked to a git repo

This is required because the configuration is versioned in the git repository.

> has no concept of a build triggering arbitrarily

This is not entirely correct anymore. Drone can trigger builds and deployments using the API or the command line utility, and there is a PR to add this to the user interface [1]. The caveat is that you need to trigger from an existing build or commit, because Drone needs a commit sha to fetch the configuration from the repository.

There are individuals using Drone + cron to execute scheduled tasks such as security scans and nightly integration tests.

[1] https://github.com/drone/drone-ui/pull/69


IMO moving from Travis to Jenkins seems like a very disruptive change.

As for me, I had used Travis just a bit, and same for Jenkins, and didn't like any of those options very much.

So when my team needed to setup a CI solution, we ended up using GitLab CI, and it brings the best of both worlds:

- Free hosted version if you use the GitLab.com deployment (granted, GitLab.com is a bit slow because it's the new thing and everybody is using it now), like Travis.

- Open source so that you can host it yourself in the future if you need to, like Jenkins.

- Easy to use and configure, like Travis.

- Free service for private repos on GitLab.com (neither Travis nor any Jenkins service provider offers this, AFAIK).

I plan to never look back.


Thanks for your kind words about GitLab. We're working hard on making GitLab.com faster; see https://gitlab.com/gitlab-org/gitaly/issues/ for the latest. A workaround would be to run your own GitLab installation; all the CI features are in the open source GitLab CE: https://about.gitlab.com/products/#comparison

For an intro to GitLab CI see https://about.gitlab.com/gitlab-ci/


Honestly, my experience is that SaaS/hosted CI is just generally annoying due to constant stability and performance issues (both transient and systemic, e.g. caching in Travis).

Self-hosted CI on the other hand requires a lot of resources, especially when you're not only testing Linux or BSD, but proprietary OSes (OSX, Windows, although the latter at least has the Edge images that work everywhere).

Ultimately I feel like the use of CI in many open source projects isn't very high due to constant annoyances and difficulty debugging the CI environment.


> Ultimately I feel like the use of CI in many open source projects isn't very high due to constant annoyances and difficulty debugging the CI environment.

My perspective may be driven by the communities I'm involved in, but this seems wrong to me. From my experience I would say most open source projects use CI, particularly because so many of the bigger platforms (Travis, Circle, CodeShip) make it free for open source.

It is an extremely valuable resource when you consider the infrastructure and integration you get with absolutely no effort. Installing more obscure libraries or cutting edge releases can be a pain, but it is minuscule compared to the time it would take to provision, secure, and manage a group of containers for an open source project.


Just a note - you can host your own CI node (runner) for your project(s) on gitlab.com. It's super simple to set up, too.

https://docs.gitlab.com/runner/install/linux-repository.html


This doesn't solve issues in runner-to-base comms. I tried this kind of setup relatively recently, and almost every 10th-15th build failed because GitLab wasn't healthy (502 from the Docker registry, problems up/downloading artifacts between stages, etc.).

Fully self-hosted looks like the only sane way. Harder to set up, but at least one can check the whole chain that way.


Sorry about GitLab.com having performance problems. Self hosted is indeed the only way to work around it in the short term. I hope you can set it up quickly. Please use the Omnibus packages that should install in a few minutes https://about.gitlab.com/installation/


Do you really find Travis easy to use and configure?

I've used TeamCity a lot and find the functionality to be decent to good but the UI horrible. Perhaps a bit biased because I had access to the server and agents in that case. A lot of debugging/discoverability issues are easier with full access of course.

I've then used Travis a bit for small GitHub projects (and contributing to other people's projects) and it has a nice UI for simple stuff. However, it gives the impression that they are struggling to stay alive - many features are in beta or feel like they are; a lot is under-documented or planned soon (for multiple years). I know I can dig into their various projects' source code to find out how things work, but I always feel incredibly unproductive figuring something out in Travis. I worry for them. Great to hear you have a good experience with GitLab!


I'll definitely check it out - but the problem with many of these tools is a lack of "proper" Docker support, along with the ability to self host and debug. Thanks for the heads up!


GitLab CI has proper Docker support. You can set a default docker image for each runner `--docker-image ruby:2.1` or set one for the project in the .gitlab-ci.yml file. For more information see https://docs.gitlab.com/ce/ci/docker/using_docker_images.htm...
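For example, a minimal .gitlab-ci.yml that pins the image for the whole project looks roughly like this (the job itself is just an illustration):

```yaml
# Every job in this project runs inside this image unless overridden per job.
image: ruby:2.1

test:
  script:
    - bundle install
    - bundle exec rake test
```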

Did you know GitLab also comes with a private container registry? https://about.gitlab.com/2016/05/23/gitlab-container-registr...


Also notable: services like Docker-in-Docker (and privileged Docker containers) are allowed, which is a huge win over services like Atlassian's Pipelines.

Without dind, it's really hard/annoying to use CI/CD to build Docker images. But with GitLab, they also provide a place to store Docker images (the "registry") right next to your code, for free!
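For reference, the usual dind pattern for building and pushing an image to the built-in registry is roughly the below; the registry host and project path are placeholders, and the exact token variable may differ by version, so check the docs for your install:

```yaml
# Sketch: build an image inside CI and push it to the project's registry.
build-image:
  image: docker:latest
  services:
    - docker:dind          # provides the Docker daemon the job talks to
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.example.com
    - docker build -t registry.example.com/mygroup/myapp .
    - docker push registry.example.com/mygroup/myapp
```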

At work, we're investigating moving to GitLab for everything except issues (which we'd need to keep on JIRA for now since we're so embedded in it). It looks like GitLab does have some integration with JIRA, but it's project-level; it would be nice if it could be group-level, since we have many small repos. :)

(I don't have an affiliation with Gitlab, I'm just super happy with the service.)

EDIT: typo


Hi! Thanks for your kindness =)

We have a couple of open issues about integration with multiple JIRA projects: https://gitlab.com/gitlab-org/gitlab-ce/issues/25541 and https://gitlab.com/gitlab-org/gitlab-ce/issues/25758. Group-level integration could be a good solution for these requests. We will discuss it on those issues; feel free to join the discussion.


Thanks for the comment! Group-level integration with JIRA is a nice idea. I've created an issue for this [1]. If you would like to give us more details on how we could achieve this, or give us some insights of your use cases, please comment in this issue!

[1]: https://gitlab.com/gitlab-org/gitlab-ce/issues/25867


Thanks for your kind words for GitLab! Our JIRA support is pretty extensive https://docs.gitlab.com/ee/project_services/jira.html but extending it further is a priority. Felipe Artur is working on this full time. Consider creating an issue for group level integration, it sounds interesting.


Thanks for clarification - I'll definitely have a look :)


Full disclosure, I am a Codeship employee that helps onboard new customers for our Docker support.

That said, our (Codeship) Docker support is the most "Docker native" on the market, in my opinion. We build your containers, by default, using a Compose-based syntax and all commands are natively executed by your containers. There's no interacting with a Docker host or running explicit Docker commands at all.

We don't offer self-hosting but we do have a local CLI that lets you run and debug your process locally with parity to your remote builds.


FWIW, CircleCI offers the ability to SSH into build nodes, and free nodes for public projects.


...sort of. I eventually rage-quit Circle and set up a Jenkins cluster because of all of the heisenbugs we found on Circle. Builds would fail 5-10% of the time for totally unreproducible reasons (for example, pip install into a venv would fail with a permission error), and you can't SSH into a build that's already failed. We very rarely had problems with Jenkins builds, and when we did, we could go look at the environment it had run in and diagnose what went wrong. I love Jenkins and would absolutely choose it over a hosted solution.

(We also went from paying $1k/mo to $0/mo, which is a very nice side effect)


> (We also went from paying $1k/mo to $0/mo, which is a very nice side effect)

That's not entirely true, because someone had to set Jenkins up and has to maintain it, but once things are rolling, it hardly needs any input.


Agree completely. I think new users are perhaps turned off by Jenkins due to its (deserved) reputation for an ugly UI. But it's a very dependable, stable, and reliable solution. And there are plugins for every conceivable thing. We have a job manager to store the config in CI, and it's such a breeze to work with.


I had much the same experience. Their lack of (proper) caching support meant our already-slow build took 2.5 times as long as it should have, not to mention overburdening the Maven repos.

By lack of proper caching, I mean reusing a volume (or similar). They do have an approach where they bundle up some files and throw them on S3, but that can actually take more time than re-downloading them. Not much of a cache, which matters for both Docker and large Java builds.

Supposedly the v2 should fix that, but I've been waiting forever to get my beta invite.

Swapping to Jenkins allowed me to choose faster hardware and optimize our build a bit better. I'm trying to get Blue Ocean to work right, but to be honest it's a supreme PITA: there seem to be bugs/undocumented workarounds on the GitHub authorization side. Once that's up, though, it _ought_ to work better.

Jenkins' usability issues are most of what has allowed these other products to become popular. Hopefully the project will focus on that a lot more, but past performance suggests it won't. If I weren't so lazy, I'd pitch in myself :)


Was looking for this comment! CircleCI is amazing. We use it and it seems to offer us everything we need in a very nice shiny box.


It's really well implemented too: it grabs the authorized public keys from the GitHub repo directly, so if you can push to the project's repo, you can just SSH straight into the build machine. Magic.


We run Drone and are very happy with it. Really easy to install and get working, similar but better usage than Travis, open source, Docker powered. I really hope it catches on more strongly because it's fantastic.


Are you running test code in the production environment? Not sure about this one.


I wasn't sure at first either, but the benefits outweigh the risks (which are all security-related).

It's something I'm willing to re-think if it turns out to be problematic on any level. But so far after half a year in production, we've seen no problems.


How are you deploying secrets to your Kubernetes CI pipeline? For example, on a build, you create (I'm assuming) a new namespace.

It seems you are using glusterfs for your PersistentVolumeClaim. Why not gce-pd?


Jenkins' Kubernetes plugin allows mounting secrets just like on ordinary pods, so we deploy test secrets the same way we normally deploy production ones.
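To illustrate, the agent pods the plugin creates end up with a plain Kubernetes secret volume mount, conceptually something like this (all names below are examples, not our actual spec):

```yaml
# What the generated agent pod spec roughly amounts to.
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
spec:
  containers:
  - name: jnlp
    image: jenkinsci/jnlp-slave
    volumeMounts:
    - name: test-secrets
      mountPath: /etc/secrets   # secrets appear as files here
      readOnly: true
  volumes:
  - name: test-secrets
    secret:
      secretName: test-secrets
```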

As for glusterfs: our Nginx reverse proxy also has a cron job that runs Let's Encrypt automatic renewal once a week, so it needs to be able to write those new certificates. Because we need to be able to run several reverse proxies, all with write permissions (we run one at a time, but during deployment there are two running to avoid downtime), we chose Gluster, as gce-pd doesn't support multiple writers.
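Concretely, the certificate volume needs a ReadWriteMany claim, which gce-pd can't satisfy but glusterfs can. Roughly (names and size are examples):

```yaml
# Claim that both reverse proxy pods can mount with write access.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: letsencrypt-certs
spec:
  accessModes:
    - ReadWriteMany   # multiple writers; gce-pd only supports ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```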


Oh wow, I have never used glusterfs on GCE. How difficult was it to set up and manage?

What about ceph ?


We are doing the exact same thing and also couldn't be happier :)



