Why on earth did we choose Jenkins for 2019? (rookout.com)
27 points by ariehkovler on Dec 18, 2018 | hide | past | favorite | 22 comments


Does it need to be justified just because it's old? All the new build systems are cool for cookie-cutter 2019 webapps, but for anything more, I don't know of anything else that is as easy to deploy and set up, and that comes with so many plugins and such a wealth of supporting information.

It just seems strange to have to justify something that has "just worked" for so many years. It isn't even really "old" compared to many tools you use every day.


Yep, the UI might be a bit dated and often a bit convoluted but it's really a powerful tool which does the job well.


It's possible to deploy Jenkins "right", but good grief is that rare. The typical Jenkins deployment feels like every new project or pipeline demands a new plugin, and it will harass the ever-loving piss out of you to upgrade if any of those plugins get a bit stale.

Why run a command in a shell build step when you can install a plugin that abstracts everything away in a non-intuitive manner?


We keep Jenkins job config/logs in a git repository which is checked out into /var/lib/jenkins.

When we want to add a plugin or whatever, we just add it to the plugins repo, git add/commit the repository, and re-kickstart the Jenkins VM.

The Jenkins kickstart then just restores the previous git config/logs back into /var/lib/jenkins. Everything comes back as it was, but the new/upgraded plugins are available.

Easy peasy. But then again, we don't have a huge number of Jenkins jobs or a massive cluster of Jenkins masters/slaves, so yeah.

You could feasibly do it without re-kickstarting the machine, but we like to show off how the Jenkins server can rebuild itself with a self-hosted Jenkins job.
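The restore step described above could look roughly like this — a minimal sketch, assuming a hypothetical config repo URL and the standard /var/lib/jenkins home (not the poster's actual setup):

```shell
#!/bin/sh
# Hypothetical sketch of the kickstart restore step: clone the
# git-tracked Jenkins config/plugins back into JENKINS_HOME and
# start the service. Repo URL and paths are assumptions.

restore_jenkins_home() {
  config_repo="${CONFIG_REPO:-https://example.com/ops/jenkins-config.git}"
  jenkins_home="${JENKINS_HOME:-/var/lib/jenkins}"

  # DRY_RUN=1 prints each command instead of executing it, so the
  # sequence can be sanity-checked before a real re-kickstart.
  if [ "${DRY_RUN:-0}" = "1" ]; then run="echo +"; else run=""; fi

  $run git clone "$config_repo" "$jenkins_home"
  $run chown -R jenkins:jenkins "$jenkins_home"
  $run systemctl start jenkins
}

# Uncomment on the actual machine:
# restore_jenkins_home
```

Because the entire state lives in git, the same script works for both the "fresh VM" case and the "self-rebuilding Jenkins job" trick mentioned below.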


In your opinion, what does a good Jenkins deployment look like? Do you have any examples? Our Jenkins looks like it's about to blow up from all those plugin warnings.


The key, in my book, is to avoid plugins absolutely anywhere possible.

Don't allow your Jenkins master to create the Jenkins slaves -- instead, create the slaves via the same tooling you use to create the master, and register them with the master via the same (or similar) discovery mechanisms you use elsewhere in your environment.

Don't use plugins that configure your build tool of choice. Yes, it's initially spiffy that you can click around and change Ant's behavior, but when things break you can almost never repro manually.

Shell steps. Shell steps. Shell steps. Make sure it's trivial to manually execute any step in the pipeline. It should be rare that you actually manually execute the step, but it helps you avoid landing in situations where it's impossible to debug the Jenkins build, because your plugin or non-shell step insists on cleaning up after itself in a manner you can't manually override.
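The "shell steps" advice above boils down to keeping each pipeline step as a versioned script that Jenkins merely invokes. A hypothetical sketch (the script name and variables are illustrative, not from the thread):

```shell
#!/bin/sh
# ci/build.sh -- hypothetical layout: the Jenkins "build" stage is
# nothing but `sh './ci/build.sh'`, so the identical step can be run
# by hand on any machine when the build needs debugging.

build_step() {
  # Default every Jenkins-provided variable so the script also works
  # outside Jenkins; that is what makes manual repro trivial.
  workspace="${WORKSPACE:-$(pwd)}"
  build_number="${BUILD_NUMBER:-dev}"

  echo "build #$build_number in $workspace"
  # Real commands would go here, e.g.: make -C "$workspace" all test
}

build_step
```

Running `./ci/build.sh` locally then exercises exactly what Jenkins runs, with no plugin in the way to clean up state you wanted to inspect.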

Finally, read this before even getting started: https://queue.acm.org/detail.cfm?id=2841313

Once you've read it, read it again.


Title is clickbait. They didn't choose Jenkins in 2019, but 8 years ago:

> eight years in

Maybe they chose to stick with Jenkins in 2019 and not discard their 8 year old Jenkins setup?


I think it's Jenkins that's 8 years old. Rookout, the company that wrote the blog, only launched a few months ago.


We chose it a few months ago. Jenkins itself is just old...


Ah. In that case, the wording is just really weird.


Pipeline is very cool, but you very quickly hit roadblocks due to uneven plugin support. To my knowledge, it's up to each plugin to support Pipeline.

A year ago, there wasn't even a supported way to call `scm checkout` and get the revision back. You had to go all the way around by calling out to the shell to get the git revision, write it to a file, and read it back. The Groovy DSL supports it now, but given how things were back then, I feel like even the Jenkins team didn't push Pipeline hard enough.
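The workaround being described reads roughly like this — a self-contained sketch (the throwaway demo repo is just so the snippet runs anywhere; a real pipeline would already be inside the checked-out workspace):

```shell
#!/bin/sh
# Sketch of the pre-DSL workaround: since `scm checkout` didn't return
# the revision, shell out, write the SHA to a file, and read it back.
set -eu

# Throwaway demo repo so the snippet is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m init

git rev-parse HEAD > .git-revision   # the "write to a file" half
REV=$(cat .git-revision)             # the "read it back" half
echo "revision: $REV"
```

The pipeline would then read `.git-revision` from the workspace to tag artifacts — exactly the kind of plumbing that should not need a detour through the filesystem.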

Configuring the Pull Request Builder, GitHub Hook Trigger, and Build Context is hit or miss with the Groovy DSL too :(.

I think in 2019 there's no reason to continue using Jenkins other than that we're too familiar with it and don't want another approach. Concourse CI has a steep learning curve, but it's way better and the right approach, to me. At least you don't have a UI where people go in and trigger builds; Jenkins makes it easier to mess things up.


Well - I dealt quite a bit with Jenkins shared libraries when porting jobs from Jenkins 1.x to 2.x recently. Declarative pipelines are often too restrictive, leading to a mixture of declarative and embedded scripted pipeline scraps. In the end, I would have preferred to be able to program the pipeline in Python.

The only reasons to still use Jenkins in 2019 are SCM polling and e-mail notification. For manual builds, I just use Python with Fabric for remote builds.


I wrote the "only reasons" as I see them, feel free to correct me.

Also we use scripted pipeline all the way


I guess the reasons depend on the environment, in my case a multi-platform embedded product. E.g. in my case I don't have a use for dynamic node instantiation. Testing partly requires fiddling with hardware and diverse system architectures, so these parts have to be done manually. Delivery bundling isn't fully automatable either, so there isn't a place for Jenkins in the go-to delivery pipeline I have in mind. It would just complicate things imho. So what's left is SCM-based triggering of builds, unit tests, dynamic tests (valgrind, sanitizers), and system tests.


One of the key pieces that we've been missing is an HA setup for Jenkins. We use JJB to automatically create jobs and have everything set up with configuration management, so that works, but we still have some downtime if our master loses connectivity, etc.


I built a nasty "HA" solution for Jenkins out of Ansible, JJB, and Consul/Vault once.

Basically, Consul would monitor the Jenkins master for liveness. If it discovered that the master had gone down, it would spin up the cold standby machine, first by attempting to use a recent disk snapshot, and then by re-running plugin installation, JJB, and copying in a secrets-store file from Vault (essentially starting the server fresh again). Then all the slaves would self-configure using Consul to figure out which "master" node was actually master.

It was gross, and I hated it, but it gave us Jenkins failover in under a minute in most cases. We only lost all our job history once in a year and a half, and this was in a flaky-ass openstack environment.
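The failover logic described could be sketched like this — a hedged illustration only, with an assumed health URL, threshold, and a placeholder where the real setup re-ran plugin install, JJB, and the Vault secrets copy:

```shell
#!/bin/sh
# Hypothetical sketch of the failover watchdog: poll the master's
# health endpoint and, after N consecutive failures, kick off the
# standby promotion. URL and threshold are assumptions.

MASTER_URL="${MASTER_URL:-http://jenkins-master:8080/login}"
MAX_FAILURES="${MAX_FAILURES:-3}"

master_alive() {
  curl -fsS --max-time 2 "$MASTER_URL" > /dev/null 2>&1
}

promote_standby() {
  # The real setup re-ran plugin installation, JJB, and the Vault
  # secrets copy on the standby; this is just a stand-in.
  echo "promoting standby after $MAX_FAILURES failed checks"
}

watch_master() {
  failures=0
  while [ "$failures" -lt "$MAX_FAILURES" ]; do
    if master_alive; then failures=0; else failures=$((failures + 1)); fi
    sleep "${CHECK_INTERVAL:-10}"
  done
  promote_standby
}
```

In the setup described, Consul's own health checks played the `master_alive` role and triggered the promotion, rather than a hand-rolled loop like this.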


Mmm, actually you are right. Did you try https://jenkins.io/doc/book/architecting-for-scale/ ?


It's old, it's clunky, but it does just work most of the time.


OP here, feel free to ask me anything :)


In the "Use code instead of a template language (Groovy vs YAML)" section, you write that you prefer a DSL over a programming language. Is that because you consider YAML a programming language, or how is it supposed to be understood?


Not what I expected but pretty cool.


Just out of curiosity - what did you expect? :)



