
I'm wondering, who uses Jenkins nowadays and why?

A few years ago, at one of my first gigs in an enterprise environment, I used Jenkins for the first time to test and build my stuff. When I needed a newer version of my compiler or specific linter, some fellow had to install that on the VM that was running Jenkins.

Later, working in a more modern environment, we started using GitLab CI. It was bliss. I specified the Docker image with my favourite tooling and my stuff got built in there. When some tooling changed, I updated my image.

At my current gig, again an enterprise, it is Jenkins everywhere. They do the most complex things with it, orchestrating entire releases, integration tests, etc. I don't know what to think of this yet.

How does the HN crowd see this?



Not sure if I can speak for the HN crowd overall. I've never used Gitlab CI, so I can't comment on how I like it. But my experience with Jenkins has been good overall.

Some thoughts:

  - Jenkins is FOSS, which I like.
  - If I want commercial support, I can buy it.
  - Jenkins works effortlessly with Active Directory.
  - Jenkins gives good group-level access control.
  - I don't like the UI very much, but I can live with it.
  - Jenkins is designed to work with remote build agents, which I like a lot.

I think Jenkins' greatest strength is also what some people hate about it: everything is configurable. Its flexibility is a burden if you've got simple build needs. But it's perfect if you've got weird build flows.

I do embedded Linux work. For me, that means I have to deal with weird cross-compilation issues, obscure toolchains, and lots of glue logic between different parts of the build. Jenkins gets the job done better than Travis or Circle, and lets me do it on remote build agents (which is very expensive with Bamboo). I could maybe use Buildbot, but that needs too much customization IMO.
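
To give a rough idea, a cross-compile job in a declarative Jenkinsfile can be as small as the sketch below. The agent label, toolchain path and script name are made up; substitute whatever your build nodes and Makefiles actually use.

  pipeline {
    // hypothetical label for an agent with the ARM cross toolchain installed
    agent { label 'arm-cross' }
    environment {
      // assumption: the toolchain prefix lives here on that agent
      CROSS_COMPILE = '/opt/toolchains/arm/bin/arm-linux-gnueabihf-'
    }
    stages {
      stage('Cross-build') {
        steps {
          sh 'make ARCH=arm CROSS_COMPILE=$CROSS_COMPILE'
        }
      }
      stage('Glue') {
        steps {
          // made-up script standing in for the glue logic between build parts
          sh './scripts/assemble-image.sh'
        }
      }
    }
  }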

Yes, it's a little crusty. And yes, the UI isn't as pretty as some other CI tools. But it gets the job done with a functional UI and great access control. And you can't beat the price.


The biggest thing is that Jenkins isn't a CI tool, it's a job execution tool. Whether those jobs are scheduled, manually triggered, triggered by a webhook, or otherwise... that's it. It runs the job, records the output and the exit status, and makes it easy to read the log of any of those jobs, as well as handling log rotation for them. It can also manage multiple nodes, distribute those jobs across them, or isolate certain jobs to certain nodes.

All a CI server does is install dependencies, run the job, and then do something with the output. There are tons of plugins for Jenkins to handle those specialized bits of output.

If you want to use Docker images for it, you can do that. If you want to use it to run cron jobs across multiple servers over SSH but centralize the logging and error notification...you can do that. If you want to schedule scaling jobs for expected traffic...you can do that. If you want to trigger Ansible execution...you can do that.
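
As a rough sketch of the cron-style case, a scheduled pipeline is little more than a trigger, an agent label and a post block. The label, script and address below are made up, and the mail step assumes the Mailer plugin is installed.

  pipeline {
    // hypothetical label for the node that should run this
    agent { label 'batch-host' }
    // nightly; 'H' lets Jenkins spread the start time within the hour
    triggers { cron('H 3 * * *') }
    stages {
      stage('Run job') {
        steps {
          // made-up maintenance script; the console log is kept per build
          sh './scripts/rotate-backups.sh'
        }
      }
    }
    post {
      failure {
        mail to: 'ops@example.com',
             subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "Console log: ${env.BUILD_URL}console"
      }
    }
  }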

Trigger, Run, Track, Parse. Jenkins does all of it, general-purpose, for free, with the flexibility to handle many different types of work and different auth tools in one place.

The UI could be better... but it's also hard to tune a UI unless the tool is specialized for a certain type of work.

The real question is...why bother with a special CI-only tool when you can use Jenkins for that and a whole lot more?


I’d like to find something friendlier than Jenkins, but the trade-offs with the other CIs have made them not worth it or just plain not possible.

If you want or need to host your CI internally, a lot of the shiny new ones are off the table. Additionally, paying per user or per job for our CI is really unpalatable.

We host our Jenkins on a few different nodes in our VPC and use them for WAY more than just building and deploying code: content and database migrations, temporary stack creation and teardown, etc. We have code we build once and ship to multiple environments with different configurations and content, so we have different pipelines that run based off the commit branches. We push to our private Docker, Maven, and npm registries, and auto-deploy via bastion nodes where necessary. On top of that it’s hooked into our LDAP for auth, which is usually an ‘enterprise’ option that jacks up the price a ridiculous amount.
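
To sketch just the branch-gated publish part (the credentials ID, registry host and image name are placeholders, and the when/branch condition assumes a multibranch pipeline):

  pipeline {
    agent any
    stages {
      stage('Build') {
        steps {
          sh './mvnw package'   // or docker build / npm, as appropriate
        }
      }
      stage('Publish') {
        // only publish from main
        when { branch 'main' }
        steps {
          withCredentials([usernamePassword(credentialsId: 'registry-creds',
                                            usernameVariable: 'REG_USER',
                                            passwordVariable: 'REG_PASS')]) {
            sh 'echo "$REG_PASS" | docker login -u "$REG_USER" --password-stdin registry.example.internal'
            sh 'docker push registry.example.internal/myapp:${BUILD_NUMBER}'
          }
        }
      }
    }
  }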

There’s not much out there that’s mostly batteries-included and can handle what we’re doing. Where possible, it would be nice for a team or company to have a Jenkins-focused person (back in ye olden days it would’ve been an SCM or build engineer position), because the UI and configuration can definitely be complex.

If you don’t need the complexity, services like BitBucket Pipelines, Travis, DeployBot, and their ilk are certainly much more friendly and likely less error prone. Jenkins definitely still has a valuable place in my book, though.


We're using it because it costs nothing and it's what everyone already knows. We're making a big architectural shift to microservices and I made a pitch for the Travis / CircleCI style workflow, but after Jenkins pipelines were discovered, that was the compromise that was made.

We've only got our toes in it now, but from what it looks like, you can theoretically use a Jenkins pipeline (with a Jenkinsfile) to get some of the benefits of those systems. The problem is that nothing rules out the old habits: your Jenkinsfile can assume certain plugins are installed, assume other jobs are configured on the same Jenkins instance... basically all the things that led to Jenkins becoming a carefully configured sacred cow that must be meticulously backed up and that everyone is scared to update.

Having builds trigger on push isn't easy if you're not using GitHub or BitBucket, and having a series of pipelines that trigger off of each other is... not clean. You can certainly trigger another job as a "post" action just like you could in any other Jenkins job, but now your upstream job contains the logic for whether or not a downstream job is triggered. What if your downstream project (like a VM image) only wants builds from a certain branch? Or should hold off on new builds from a certain microservice while QA completes their testing? I guess you'll need to edit the Jenkinsfile for the upstream project (likely someone else's project) and be careful not to break it.
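
Concretely, the pattern ends up as something like the snippet below in the upstream Jenkinsfile (the downstream job name here is invented). The branch policy for someone else's downstream project now lives in your repo:

  post {
    success {
      script {
        // the downstream project's branch filter, maintained upstream
        if (env.BRANCH_NAME == 'release') {
          build job: 'vm-image-build', wait: false
        }
      }
    }
  }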


> Your Jenkinsfile can assume certain plugins are installed, assume other jobs are configured on the same Jenkins instance

I think what worked for us in this case was using a Jenkins shared library [1]. We provide a common template for the common stacks and expose only a few configurable options (rough sketch below). This really helps in maintaining sanity across the Jenkins environment, and since you maintain the shared lib, you can control the dependencies.

[1] - https://jenkins.io/doc/book/pipeline/shared-libraries/
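
For anyone curious, a minimal version of that pattern (all names here invented) is a step in the library's vars/ directory that wraps the whole declarative pipeline, so the per-repo Jenkinsfile shrinks to two lines:

  // vars/standardBuild.groovy in the shared library repo
  def call(Map config = [:]) {
    pipeline {
      agent any
      stages {
        stage('Build') {
          steps {
            // fall back to plain 'make' if the caller passes nothing
            sh(config.buildCommand ?: 'make')
          }
        }
        stage('Test') {
          steps {
            sh(config.testCommand ?: 'make test')
          }
        }
      }
    }
  }

  // Jenkinsfile in the application repo
  @Library('my-shared-lib') _
  standardBuild(buildCommand: './gradlew build', testCommand: './gradlew test')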


We use it because of legacy. It's fragile, un-updatable, and virtually un-automatable. Use plugins? They'll break. Use JJB (Jenkins Job Builder) to make things repeatable? Someone has made a change in the GUI. At last count we had something like 70+ Jenkins masters (because we couldn't share slaves, because people kept on cocking up the master).

The rise of the CircleCI-style YAML interface is wonderful. The build script is there in the repo, for all to see. Yes, there is less logic, but that's a good thing. Build, test, deploy, alert if it fails.

GitLab's runner is also good. (Just don't use the SaaS version, as the only thing it does well is downtime.)


What's the difference between CircleCI YAML files and declarative Jenkinsfiles (apart from Groovy vs. YAML)?


A few things: 1) Plugins. They routinely break, the API they provide breaks, functionality changes, or the API they rely on changes. It's a minefield.

2) Bootstrapping a _secure_ Jenkins server to the point where it's able to accept jobs is a monumental faff. (GitLab Runner is far, far simpler if you want über-free; CircleCI if you don't mind paying.)

3) It's just so much _effort_.

Jenkins was great compared to the field in 2010/11 (after it forked from Hudson), but ever since it's failed to move in the right direction.

It over-complicates what is essentially a cron/at daemon with a web GUI.


> At my current gig, again an enterprise, it is Jenkins everywhere. They do the most complex things with it, orchestrating entire releases, integration tests, etc.

You just answered your own question -- people use Jenkins because it can do all of those things.


It's possible to use Jenkins to do all of those things. It's also enormously costly in terms of manpower, etc.

It's truly absurd just how much shepherding Jenkins requires once you have build slaves, etc. The pipeline stuff is heaps better, but unfortunately it doesn't really work for the workflow we're using in our shop, so... sad faces all round :(.


Yes, it is harder than it should be - there is a project underway to try to make this config and plugin pain go away: https://jenkins.io/blog/2018/04/06/jenkins-essentials/ (not sure if that name will stick; it may just be Jenkins at some point in the future). Hopefully that will help one day and be a far more efficient use of time.


I don't like it any more than you do, but I'm not aware of a competing enterprise-grade solution that has all of the necessary features and won't take a year to migrate to.


Jenkins' strength is also its weakness. It's like a Swiss Army knife that, especially combined with plugins, can do just about anything in any way. This makes it hard to find support, best practices, etc.

Anyway, you can also have your jobs run completely inside containers, but apparently they chose not to do that at your first job (or containers were simply not a thing yet). See: https://jenkins.io/doc/book/pipeline/docker/
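
For reference, the containerised variant from that page boils down to something like this; the image and commands are just examples, and it assumes the Docker Pipeline plugin plus Docker on the agent:

  pipeline {
    agent {
      // every stage runs inside this container
      docker { image 'node:16-alpine' }
    }
    stages {
      stage('Build') {
        steps {
          sh 'node --version'
          sh 'npm ci && npm test'
        }
      }
    }
  }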


I’m by no means a DevOps expert, and while I have used Jenkins before, the only times I’ve had to design a CI/CD system from scratch I used either Microsoft’s hosted version of TFS with Git (VSTS) or AWS CodePipeline.

With AWS CodePipeline, you can also specify a custom Docker image with your build environment and all of the tools you need - it’s in fact required if you want to build on Windows with the .NET Framework (not .NET Core) - and it’s also orchestrated with YAML files.

I much prefer that approach to Jenkins.



