Jenkins Is Getting Old (itnext.io)
461 points by zdw on April 29, 2019 | 328 comments

Disclaimer: I'm pretty biased towards GitLab -- I write about the things you can do with it from time to time, and they gave me some free swag once.

Best CI I've ever used is Gitlab CI[0]. The runner is completely open source[1] and you can use your own runner with your gitlab.com (or local instance) projects -- set it up in an autoscaling group[2] for savings.

I run https://runnerrental.club but GitLab also recently released the ability to pay for minutes in 11.10 [3], so my product is more-or-less dead in the water. I don't mind, though, since GitLab is such an excellent tool; I'm glad to see them fill the need.

But back to Gitlab CI -- the YAML configuration documentation[4] is pretty fantastic: most easy things are easy, and hard things are possible. I suspect that one could run an entire startup like circleci/travis/drone based on just the software that GitLab has open sourced and made available already.

[0]: https://docs.gitlab.com/ee/ci/

[1]: https://gitlab.com/gitlab-org/gitlab-runner

[2]: https://docs.gitlab.com/runner/configuration/runner_autoscal...

[3]: https://about.gitlab.com/2019/04/22/gitlab-11-10-released/#p...

[4]: https://docs.gitlab.com/ee/ci/yaml/

If you know a bit about GitLab and Docker, GitLab CI is pretty easy to grok. I really enjoy that you can run your CI jobs inside any old Docker container (with a shell). GitLab CI is built up from very simple concepts and functionalities, but still enables some powerful use-cases.
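For instance, a minimal .gitlab-ci.yml is just an image plus some commands (the image name and commands here are placeholders, not from the comment):

```yaml
# Hypothetical minimal .gitlab-ci.yml: the job runs inside any
# Docker image that has a shell.
test:
  image: node:11-alpine   # any old Docker container
  script:
    - npm ci
    - npm test
```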

The artifacts feature is great and some artifacts, like unit test report files, can be interpreted by GitLab and used in various parts of the GitLab web UI. A lot of this just works and most of the CI features are available in the GitLab community edition which is open source.
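As a sketch, wiring up artifacts and a JUnit report that GitLab can render looks roughly like this (paths and commands are illustrative):

```yaml
# Sketch: save build output as artifacts and hand GitLab a JUnit XML
# report it can interpret in the web UI.
test:
  script:
    - npm test -- --reporter junit --out report.xml
  artifacts:
    paths:
      - dist/
    reports:
      junit: report.xml
```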

I have not used Jenkins actively since before the Jenkins pipeline file format was common. So for me Jenkins always appeared to be this game of checking the right checkboxes and clicking the right buttons in the Jenkins UI. The new pipeline feature is probably much nicer. However, now that I use GitLab I don't really see any reason to switch back to Jenkins.

(Disclaimer: CloudBees cofounder here)

Our attempt to simplify away the checkboxing and plugins is a ready-to-go distro (free, of course): https://www.cloudbees.com/products/cloudbees-jenkins-distrib...

You mention Docker, and it is super great for CI: it's probably one of the most widely used features of Jenkinsfiles (you can specify what image or Dockerfile you want a stage to run in -- simple but powerful, as you probably know)
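A sketch of that feature in a declarative Jenkinsfile (image names and commands are placeholders):

```groovy
// Sketch: each stage runs inside its own Docker image or Dockerfile.
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-jdk-11' } }
            steps { sh 'mvn -B package' }
        }
        stage('Test') {
            agent { dockerfile { filename 'Dockerfile.test' } }
            steps { sh 'mvn -B verify' }
        }
    }
}
```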

Good to hear CloudBees seems on top of this issue. I've seen multiple posts from you guys here :)

Last I checked, there were a lot of little things that made it not possible to move to GitLab CI. E.g.:

- Can't customize your git checkout process (e.g. shallow clone with depth, or merging source branch with target branch with certain strategy)

- Can't make job run/not run based on filter on source branch/target branch/etc. of a merge request

- Can't dynamically make certain jobs only run on certain agent machines

So I'm still stuck with Jenkins for now. I know we love bashing Jenkins, but I have yet to come across anything that offers the same amount of flexibility.

You can definitely do all of these, though likely as a result of recent features:

1) you can customise the git checkout depth and style: https://docs.gitlab.com/ee/ci/yaml/#shallow-cloning https://docs.gitlab.com/ee/ci/yaml/#git-strategy

2) https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-basic

3) https://docs.gitlab.com/ee/ci/yaml/#tags
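Putting those three together in a .gitlab-ci.yml might look roughly like this (values are examples only):

```yaml
# Sketch combining the three pointers above.
variables:
  GIT_DEPTH: "10"          # 1) shallow clone
  GIT_STRATEGY: fetch      # 1) checkout style

deploy:
  script:
    - ./deploy.sh
  only:
    - master               # 2) run only for certain refs
  tags:
    - my-fast-runner       # 3) run only on runners with this tag
```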

I'm not associated with gitlab at all but happy to give pointers if anyone wants to contact me direct

2) was released in October, about six months ago [0]. I'd also like to add: as a user of GitLab for about a year now, their steady rate of feature releases has been pretty pleasant, even if I can't take advantage of some right away.

[0] https://about.gitlab.com/2018/10/22/gitlab-11-4-released/#ru...

I'm not sure that's what I needed. For example, I wanted to trigger a job only for a merge request, and only if the target branch of the merge request is e.g. master. Is that possible? Triggering a job only for an MR is possible, but I don't know how to do the latter branch filtering on top of that.

When using a merge_request pipeline, GitLab defines additional variables for the run. That includes `CI_MERGE_REQUEST_TARGET_BRANCH_NAME`.

So it would look something like this (the job name and script are placeholders):

    test_master_mr:
      script: ./run-tests.sh
      only:
        refs:
          - merge_requests
        variables:
          - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"

Oh wow, I didn't know you could do conditionals with a variable like that. I'll give it a try later, thanks!

We generally only merge into master at my org, but I didn't know you could conditionally trigger jobs using an equality expression on a GitLab-provided variable. I'll have to keep this in the back pocket for future reference, thanks! Any docs that describe this feature in more detail?

I will second this: GitLab has the best CI I've ever used, and I don't know exactly what it is. The UI is just so clean; it does what I need and is easy to configure. I put all my projects on GitLab mostly because of the CI, but also because of the other great features.

GitLab product director for CI/CD here - thanks so much for the feedback, everyone. It's really great to read how much you're getting value out of what we built.

We have an overall CI/CD direction page up at https://about.gitlab.com/direction/cicd from which you can drill down into the individual stages' plans. Feedback is always welcome; we love building things in partnership with real users. You can reach me at jason@gitlab.com any time.

Have you started working on providing a UI for the XML test result artifacts produced in CI runs?

They are displayed in merge requests but not anywhere in the normal CI pipeline UI, and I'd love to be able to see those and see what works and what doesn't.

GitLab Product Manager for Verify (CI) here (who works for the Director above).

Yes - that's something we want to get to this year. We call it "CI Views" right now, but we want to better expose all the types of test results that GitLab collects. Today, for XML (JUnit) results, you can see what fails in the merge request with https://docs.gitlab.com/ee/ci/junit_test_reports.html.

But we want to make the XML, JSON or HTML output of _any_ type of test first class with CI Views: https://gitlab.com/gitlab-org/gitlab-ce/issues/35379

Yes, but that only works for the merge request view so far, are there plans to add a view of these to each CI job, so even if something on master fails, I can see the results?

Not sure if it will make the first iteration (as merge request centric workflow will be "first")...but I agree the vision should include the ability to see the tests on any pipeline.

I haven't checked out Gitlab in a long time. My, they've come a long way! I love a lot of what I'm seeing including the Web IDE and their bias towards making CI/CD a priority.

Inclusion of the free docker registry was also pretty visionary, and it's been a while since they added that -- it's crucial for just about all my new projects.

I'm the GitLab product manager for the Package stage, which includes the container registry. Thanks for the feedback! You can see the updated vision and direction for the container registry here: https://about.gitlab.com/direction/package/container_registr...

If you have any questions or feedback, you can email me at trizzi@gitlab.com. I'd love to hear more about how you are using the docker registry today and any improvements you'd like to see made to the product.

Just saw this, thanks for the link to the vision! I can't say I have any great recommendations for the container registry but hopefully someone that does sees this.

No debug over SSH: that's why I still use CircleCI.

GitLab Product Manager here

I'm not super familiar with Circle CI's SSH debugging, but we do have a feature called "Interactive Web Terminals" https://docs.gitlab.com/ee/ci/interactive_web_terminal/.

This allows you to connect to a terminal in the running job on GitLab CI for debugging. Would love to understand how this does or does not meet your use-case here.

Great, seems it does the job. 'Cause sometimes 'test passed'/'test failed' isn't enough, you should go deeper.

CircleCI supports an SSH connection for a max of 2h, then shuts down the task.

Is GitLab CI available as a standalone solution? I don't need a source code hosting solution, just a CI solution.

If you prefer to host your code on GitHub, that's fine! You can keep hosting your code on GitHub but build, test and deploy from GitLab with GitLab CI/CD. Take a look at https://about.gitlab.com/solutions/github/!

This is not a full standalone mode, because GitLab CI is a built-in solution that cannot be easily separated, but it might work for you.

Multirepo pipelines behind a paywall is a good reason to say "No, thanks".

TBH, it's easily done in shell via the REST API. We use a custom pipeline runner made from simple PowerShell scripts that works even better for us than the default style.

We set all jobs to manual and then our script triggers them depending on commit message, person, moon phase etc.
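The GitLab side of that setup is tiny -- a sketch (job names and scripts are placeholders):

```yaml
# Sketch: every job is manual, so nothing runs until an external
# script "plays" the chosen jobs through the API.
build:
  script: ./build.sh
  when: manual

deploy:
  script: ./deploy.sh
  when: manual
```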

But really, this should be in core. It's very hard to do multi-repository stuff. It's not that easy to do monorepo stuff either: I really need a pipeline within each sub-project, a mother pipeline, the option to run whichever one I want, etc... GitLab pipelines could be a lot better.

I find it very easy to do multi-repo stuff; I've also written about it: https://vadosware.io/post/fun-with-gitlab-ci

Also, running pipelines in different projects whenever you want is accessible from the web interface... It could be cleaner, but IMO adding even more syntax to the YAML file can also be a rabbit hole -- curl from the script section seems like not a bad middle ground.
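As a sketch, triggering a pipeline in another project from a job's script section can be a single curl against GitLab's pipeline trigger API (the project ID and token variable name are placeholders):

```yaml
# Sketch: kick off a pipeline in a downstream project via the
# trigger API, straight from a job's script section.
trigger_downstream:
  script:
    - curl --request POST
      --form "token=$DOWNSTREAM_TRIGGER_TOKEN"
      --form "ref=master"
      "https://gitlab.com/api/v4/projects/1234/trigger/pipeline"
```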

Am I misunderstanding what is being referenced?

This post seems very focused on projects with just a single developer. The "recognizing and tagging" versions for example completely breaks down when you have multiple people commit and merging things to master and multiple pipelines running that are trying to recognize and tag versions.

The post is very much not complete -- there is much more a team would have to standardize to fully take a setup like this into production, but I'm not sure that it "completely breaks down" with more people contributing.

As long as there is some release coordination, a system like this can work. In particular, I've found that the lightest way to get a system like this working for x > 1 developers is to have a release-vX.X.X branch for releases that are going out and vX.X.X tags for releases once they have actually landed.
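A minimal GitLab CI sketch of that convention might look like this (job names, scripts, and regexes are illustrative, not from the post):

```yaml
# Sketch: one job keyed off release-vX.X.X branches, one keyed off
# vX.X.X tags once a release has landed.
prepare_release:
  script: ./prepare-release.sh
  only:
    - /^release-v\d+\.\d+\.\d+$/

publish_release:
  script: ./publish.sh
  only:
    - /^v\d+\.\d+\.\d+$/
```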

You mention recognizing and tagging versions being an issue -- are you imagining a world where two people are releasing something at the same time? I'm not exactly sure what you mean.

Even if we assume that what that post suggests isn't feasible: regardless of how you deploy and do your CI, if your release process can be done by a human, it can likely be automated. GitLab CI is the most robust and yet easiest-to-understand automated CI system I've seen for making that happen (at least that was my intended point).

This is what interests me a lot.

I have found that there are thresholds where CI starts to behave chaotically, influenced by the number of people working on a monorepo and some other factors like code maturity: 1 dev, 2-5 devs, 5-10 devs and 10+ devs...

Something that works great for one team size might start to break at another. Something effective for a huge team may be very bad for a smaller team. Something that works great at one moment (the first several months of the project, when things are still taking shape in 0.X.Y versions) sucks when you move to a 1.x version, and vice versa.

I am not aware of anybody giving this more detailed thought, nor examining what the most optimal solution for each type of team could be (if such a thing exists), or at what precise moment you should start adopting certain practices (for example, protecting the master branch).

I know about this way, but it's basically building CI upon CI, which doesn't sound good from any perspective.

Multi-repo pipelines are not paywalled? I can have one repo trigger jobs in another today; I've written about it a little bit: https://vadosware.io/post/fun-with-gitlab-ci/

Am I misunderstanding what you meant?

Answered in the other sibling comment, but I could elaborate further for you.

In order to drive other pipelines through GitLab's API, you have to build another CI on top of an existing one. What I mean is that you're making a pipeline to handle projects, but there's a catch: you have to run another pipeline to set the variables in the One Pipeline to Rule Them All, because you can't override variables from the jobs themselves! And the main pipeline has to run a bunch of scripts to pass variables around and handle the state.

Now let's compare that to other systems from my experience. The Jenkins way: one Groovy script (which might be written as a special job type from the UI or kept in VCS) to do the job, with some fancy features like pausing to confirm the next step from the CI's web UI, and full control of everything. The TeamCity way: it can either be built from the UI (with a reverse dependency chain, because you need to connect the pipeline from the last jobs to the first) or written from scratch with the Kotlin DSL (there's still no direct support for pipelines[1]); you can handle the process through dependency settings and by setting variables from the jobs themselves. [2] is an example of the TeamCity approach (the black boxes are there because of my NDA).



edit: added picture.

edit2: added a link to the TC blog and rewrote the link to a screenshot.

GitLab Product Manager for Verify (CI) here.

This is really valuable feedback, thank you for it. It's something we are thinking about and working on fleshing out a vision for. Two epics we have around these ideas include [1] Making CI lovable for monorepos and [2] Making CI lovable for microservices.

Both of these challenge our current assumptions around the project::pipeline relationship and will help make room for improvements around that model and provide flexibility to build more complex models that work for "real world" problems like the ones you've stated. However, I don't believe we have all the answers yet so I would love more feedback on these and the issues attached.

[1] https://gitlab.com/groups/gitlab-org/-/epics/812

[2] https://gitlab.com/groups/gitlab-org/-/epics/813

Thanks for clarifying, I see exactly what you mean now. It looks like some Gitlabbers are in this thread now so hopefully they see this as well

what is your alternative please ?

In one of my previous jobs I used Jenkins Pipelines (and oh boy, it was a PITA to set up in 2016) to handle jobs across different projects. At my current one we are using TeamCity (still cheaper than a GitLab Silver license for us), which supports this feature as long as branch names have a common variable in their naming convention.

I.e. Project1 has branches in the form feature/TICKET_NAME or fix/TICKET_NAME, Project2 uses only TICKET_NAME, and TeamCity VCS settings are based on branch name masks, so ticket 12345 would be branch 12345 in P2 but fix/12345 or feature/12345 in P1, if one exists (priorities can be tweaked too).

Hello, I see a lot of great feedback in this post. I am a product manager working at CloudBees, the primary corporate sponsor of Jenkins. Jenkins is now in the Continuous Delivery Foundation as well.

While it is easy to bash on an inanimate object, there are some very dedicated and empathetic people who care deeply about the project. Some of those people do this work in their off-hours, and some do this work as part of their daily work activities AND also in their off-hours.

In that spirit, we want to make Jenkins better and created a separate group at CloudBees in the Product and Engineering teams late last year. They focus on open source work for Jenkins and on some proprietary things for CloudBees Core (built on Jenkins). We also have a dedicated user experience/product designer who started working on the project a few months ago. One of the first things he and I worked on was creating a curated, tailored version of Jenkins via the CloudBees Jenkins Distribution. This distribution will focus more and more over time on a guided workflow for continuous integration and delivery with Jenkins. These patterns will also be shared with open source Jenkins - some through direct contributions and others through suggestions (better plugin categorization, documentation, etc.).

Please use this comment thread to share your constructive, honest feedback about how we can improve Jenkins.

I think Jenkins Configuration as Code has already been mentioned. But in general, stick to one configuration method; I have had to migrate from JJB > groovy scripts > init.d groovy > Jenkinsfile.

And can we PLEASE open up the issues tab on the GitHub repos, especially for plugins? There is currently no way to provide feedback or report issues on these plugins because the "issues" tab is disabled. Our current options are:

- Put a comment on the plugin wiki page
- Hunt down the relevant support forum for the plugin
- Find the original source repo and post an issue

All of which are not ideal and do nothing to directly help the development of the plugins.

Jenkins has used Jira for bug tracking since before GitHub existed, so although the project's code and all plugins are now managed on GH, bug tracking continues to live in Jira. Anyone can create an account and add issues here: https://issues.jenkins-ci.org/secure/Dashboard.jspa All plugins track issues there, so you just put the appropriate plugin, or "core" for Jenkins itself, in the 'component' field and the issue should be assigned to the right person.

> While it is easy to bash on an inanimate object, there are some very dedicated and empathetic people who care deeply about the project. Some of those people do this work in their off-hours and some to this work as part of their daily work activities AND also in their off hours.

I've been using Jenkins heavily over the past year since a client of ours purchased the enterprise version. It was billed to me as a mature open source product that has been refined over the years by people trying to optimize their dev operations. While I'm sure some people really care about it, the user experience is so utterly disappointing that it's almost impossible to imagine how it got to this state without neglect.

Even if you ignore all the complicated stuff, the web UI is embarrassing. While I don't suggest a "pretty" UI is necessary for devops, I would think you'd have a quick win just by having some people re-style the existing UI to make it look and feel like something built this century, and not require a dozen clicks to get to important information. There are also bugs so painfully obvious it makes me wonder how they still exist. One example: if your GitHub branch has a slash in it (e.g. feature/something), you get a 404 error when you try to navigate to that build's results.

There are also features that appear to have almost no value yet are in the core UI and clearly took some time to build. The weather icons representing various permutations of previous build states is one ridiculous example that comes to mind.

I would respectfully suggest you run through some real world Jenkins experiences like the ones mentioned in the article. Also setting up a new server, configuring non-trivial SCM settings, debugging Jenkinsfiles, etc. To echo the article's sentiment - it feels like I'm constantly fighting with Jenkins to do what I need instead of being guided into a mature set of features.

Conversely - Octopus Deploy is a related product I have been using alongside Jenkins which has been an absolute joy to work with. Everything from initial setup to configuring its agent software on host servers has been straightforward. It has a simple, elegant UI that provides access to important information and actions where you would hope to see them. And most importantly - everything works. I have yet to encounter a bug or experience any broken UI states.

I'm glad to hear CloudBees is making some effort to improve things and I hope PMs like you continue to be involved in the community and solicit feedback, even if it's hard to hear sometimes.

Jenkins configuration as code. It's hard to configure without going through all the files and options. You have several things to take care of: server config files, plugins, credentials, projects, etc. Going through XML files is not fun. Also, having predetermined pipeline plugin configuration based on runner type would be nice. For example, for k8s I would expect the k8s plugin + credentials + organization plugin. In many cases, the only thing I want is a pipeline with which I can deliver code. Do something with Jenkins declarative and get rid of Groovy scripting: the latter makes a mess of your files, and the former is very limited. In the end you have to write Groovy if you want things more bespoke. It would be nice to just use any language in a container for your plugins, as Drone does.
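For what it's worth, the Configuration as Code (JCasC) plugin already moves some of this into a single YAML file. A rough sketch of the idea (keys and values here are illustrative, adapted from the style of the plugin's examples, not a verified config):

```yaml
# Sketch: server config in one YAML file instead of scattered XML.
jenkins:
  systemMessage: "Configured entirely from code"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: admin
          password: "${ADMIN_PASSWORD}"
```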

I've been involved in pioneering Jenkins, maintaining Jenkins, and using Jenkins for the past 7 or so years off and on in different roles, often that of the Jenkins admin. The original article has a lot of good feedback that I'd suggest addressing; it hits a lot of the pain points I've had in the deploys I've had my hands in.

For myself, I'd prioritize -

An in-depth improvement of the Config as Code system as applied to Kubernetes, both for management of the config and of relevant plugins.

Plugin compatibility and management of plugins. It's not... smooth.

CloudBees shop here. I can say that because of how fragile the plugin environment is, we never, ever update our Jenkins. We can't; we would spill money the instant it went offline. So every few years we just roll out a new Jenkins and force the devs to migrate to it, slowly, also over the course of years. Every half a decade the cycle restarts. OK, we have been through exactly one of these, but it's a full cycle, and it's starting up again. This has always been the biggest pain point: developers begging for a new plugin they can't have, because updating a dependency is forbidden. So either they don't, or they do, by making their own island CI/CD, or by doing it so poorly that just as much risk comes into Jenkins as it would rolling the dice on an untested upgrade.

CloudBees Support Engineer here! We offer a free Assisted Update service to customers for exactly this reason. We will examine your existing Jenkins installation, compare it with the target version you would be updating to, and outline any possible snags that you would need to address during the process. We also help ensure that you have a good backup so that you can roll back if need be, and if you are a Platinum customer a Support Engineer will hang out on a conference call while you perform the update. We've done loads of successful updates with customers this way, I think it's one of the most useful services we offer. Updates don't have to be as painful as you've described!

Debugging Pipelines/Groovy is horrible. The stacktraces are horrible. You don't even get a line number for the error!

Also, the documentation badly needs updates and examples.

(Also, amusingly, we chatted a bit by mail on April 25th 2018, but there was no follow up on your side, I guess priorities changed...)

You can use the YAML-like declarative syntax [1] instead to configure the pipelines for 90% of what you do, and just use Apache Groovy for the more complex logic, or interfacing with plugins.

[1] https://jenkins.io/doc/book/pipeline/syntax/
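For example, a mostly-declarative Jenkinsfile might drop into a script block only where plain declarative syntax runs out (the stage, names and commands here are placeholders):

```groovy
// Sketch: declarative for the easy 90%, a script block for the rest.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                script {
                    // arbitrary Groovy for the complex logic
                    def regions = ['eu', 'us']
                    regions.each { region ->
                        sh "./deploy.sh ${region}"
                    }
                }
            }
        }
    }
}
```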

I am using the declarative syntax. It's just as horrible to debug. Play around with it a bit, delete some characters, misconfigure it. You won't even get a line number, just as I said.

I sincerely wish I could move away from Jenkins for the reasons stated in TFA (GUI-oriented, slow, hard to backup/config, test-in-production mentality and boundless plugins) but I've never found something that fits the bill.

The much-touted repo integrations (travis, circle...) all have an exclusive focus on build-test-deploy CI of single repos.

But when you have many similar repos (modules) with similar build steps you want to manage, and want a couple of pipelines around those, and need to handle the odd Windows build target, these just give up (it's Docker or bust). Sadly, only Jenkins is generic enough, much as it pains me to admit.

Anyone got a sane alternative to jenkins for us poor souls?

TeamCity from JetBrains is the same kind of thing as Jenkins, except the core features are working core features instead of broken plugins. It's paid software, though; you get what you pay for. https://www.jetbrains.com/teamcity/

On the other hand there is Bamboo from Atlassian. https://www.atlassian.com/software/bamboo

I really don't understand this mentality that there are no better tools, when there are better tools than Jenkins and they've been around for a while.

Of the CI tools I've used (most of them), TeamCity was my personal favorite -- but the advantage of Jenkins is that it's very widely used, has a greater breadth of capabilities due to the huge plethora of plugins, and has a huge amount of support info readily available online. Some plugins are even maintained by the external vendor that produces the tool you're trying to integrate with, and are either better supported or the first to get timely updates.

Bamboo, on the other hand, is IMO the worst of the commercial CI tools by far, and where I work it has gone down the most. Atlassian itself doesn't appear to be investing in it much anymore, judging by the slow pace of development in recent years; at their most recent conference, you could hardly find it mentioned or see much presence for it anywhere.

In all the CI systems I've used though, there has not been one that I haven't encountered some major difficulties with.

Beyond that, anything to do with build automation for a large number of users quickly becomes a support & maintenance quagmire. Users frequently want to install a new (barely maintained) plugin to solve a problem they have, and complex interactions lead to difficult-to-understand failure modes that require time-consuming investigations ("your CI tool is the problem and broke my build" ... "No, your build is broken" ...).

Fine, when you're a single-vendor shop, like a JetBrains or Atlassian stack, and you have plenty of financial power, there are always cool features that can bring benefit. But in the end, CI and CD systems are glorified semi-smart cron runners. Are these tools 10x better than Jenkins? Not so much. CI/CD is, from one standpoint, the most important tool and at the same time the least important; delivery would have to suck very badly to justify migrating to a new platform. Jenkins shines here: it's not perfect, but it works. It's more or less free from a licensing standpoint, so you don't have to go through Corporate procurement hell. It's not free from a workforce perspective, but none of these tools are zero-configuration. With x, y, z, some YAML or other crazy configuration still needs to be written (like the Bamboo DSL).

Of course it can be 10 times better. It's so trivial to be 10 times better.

First, you check out the project from the repo and it just works, no matter whether it's Git, SVN or whatever. How many plugins does it take to check out a project in Jenkins? Is there even a git plugin working nowadays?

Then you build the project. If it's C# or Java, for example, the ant/maven/sln/nuget files are detected automatically; just click next and it's built. Does Jenkins even understand what a requirements.txt is? Hint: it's not a bash script.

The JVM and Visual Studio are detected automatically on all the build slaves, and the project is already building in the right places. If you want to target specific tool versions, there are preset variables on all hosts to filter where to build and to use in build scripts, so paths are always right. How is the build matrix plugin in Jenkins lately? Broken as usual?

The project is built -- or is it still building? It's easy to tell, because there is a clear colored status icon and the estimated time to completion is displayed. TeamCity has offered that out of the box for maybe 15 years now. Well, Jenkins finally got a progress bar too, a couple of years ago. I guess I'm defeated: Jenkins caught up on basic core functionality only a decade late, so I can't justify paying for working and polished tools anymore. Well, I hope our sysadmin will install the Extra Status Icon Plugin, or we'll have to live without the big colored circles next to the build.

TeamCity is indeed quite good.

> But in the end CI and CD systems are glorious semi-smart cron runners.

I think you’re not appreciating and misrepresenting the complexity and power that comes with these solutions.

You say that, but you don't mention in what way.

You may find our direction page for CI/CD at GitLab interesting if you're looking to learn more about the possibilities involved here. We do all of our planning and roadmapping in public, so you can read a bit about our overall technical challenges and approach there, and drill down into the stages (CI, packaging, and CD) that make up the capabilities within GitLab, each of which has its own videos and other planning content.


Those tools can build anything, which is the point of a CI/CD system. There is no specific vendor you have to be invested in; it's just paid software.

Hear hear.

And actually, you can get quite far with the free TeamCity license of three build agents and 100 build configs. I’m also fairly sure that Jetbrains would take kindly to license requests from open-source projects and academia.

> I’m also fairly sure that Jetbrains would take kindly to license requests from open-source projects and academia.

They do: https://www.jetbrains.com/buy/opensource/

TeamCity doesn't handle downstream builds properly. Bamboo has severe stability problems. I've worked at places that evaluated them and always found Jenkins was still the least bad option.

Could I ask you to elaborate on the downstream build issues? Thanks!

We had the problem that whenever we built a project, it would trigger builds of any project that transitively depended on that module. So if you have e.g. 26 projects depending on each other in a line and you make a change to the first one, in Jenkins this will run 26 builds as it builds A, then B, then C, ... Whereas in TeamCity it will run 26 + 25 + 24 + ... builds: it'll build A, then B-Z immediately, then the build of B will trigger another build of C-Z, then the build of C will trigger a rebuild of D-Z, and so on.

It sounds like those builds weren’t quite set up correctly. I’ve used TeamCity’s build chains quite a bit and haven’t seen this behavior. Depending on exactly how the builds are triggered it will sometimes enqueue redundant builds, but as the duplicates come to the top of the queue the server realizes they’re unnecessary and doesn’t run them.

There was no "TeamCity build chain", just normal maven dependencies. We raised the issue with their official support (we had a commercial contract) and they couldn't fix it either. Whereas Jenkins did the right thing by default. Shrug.

TeamCity has an extremely generous 100 build configuration limit; if you're exceeding that, then in all likelihood you're getting far better value from it than the additional licensing cost.

+1 for TeamCity. We tried GitLab CI before, but are happier with TeamCity now.

What can we do better in GitLab?

At my work we use TeamCity for some things and Gitlab CI for others. Things that are good about TeamCity:

- Templates

Gitlab has something called templates but it's a very different thing. In Gitlab, a template is used to bootstrap a project, but that's it. In TeamCity a template is attached to a project such that if you change the template, changes are applied to all projects that inherit from the template. Each project can override any settings or build steps it got from the template, without losing the association to other settings. A project can have multiple templates attached to control orthogonal aspects of its behavior. From a template, you can see what projects inherit from it, and you can freely detach and attach to a different template. It makes managing a large number of projects with similar configs, that all evolve at somewhat different rates really easy.

- Build results

TeamCity has very good integration with xUnit and code coverage tools to quickly see test results and coverage as part of a build. Gitlab recently got better at this (it can now at least parse xUnit results), but you can still only see test results in the merge request view. TeamCity can also do things like track a metric over time and fail a build if it drops (i.e. PR builds should fail if code coverage drops more than X%). TeamCity also supports adding custom tabs to the build page, so reports generated by the build are easily viewable in the UI (vs. in Gitlab, where you have to download the artifact and then open it to view).

- Overall view of runner status

It's very easy in TeamCity to see the build queue, and an estimate of when your build will run, and how long it's expected to take based on past builds.


- Dashboard

For me it's easier in TeamCity to see the overall status of deployments to a set of environments (i.e. what's on dev/stage/prod) that might span multiple source code repos. At a glance I can see what changes are pending for each environment, etc. In Gitlab things are too tied to a single repo or a single environment, and the pages tend to present either too much or too little information. Also, in TeamCity I can configure my own dashboard to see all of the stuff I care about and hide other things, all in one place.

- System wide configs

There are some settings that apply to the whole system (repository urls, etc). There's no easy way in Gitlab to have system wide settings, they have to be defined at the group or repository level. In TeamCity, you can configure things at any level, and then override at lower levels.

- Extensibility

TeamCity supports plugins. I know this can lead to the Jenkins problem of too many plugin versions, etc, but in TeamCity you tend to use far fewer plugins, and the plugin APIs have been super stable (I've written plugins against TeamCity 8, which is 4 major versions old, and they work fine on the latest). It's really nice to be able to write a plugin that performs common behavior, applies easily across projects, and is nicely integrated into the UI.

To me, Gitlab CI seems useful for simple things, but overall it's 70% of the way to being something that could replace TeamCity.

Thanks for the incredible feedback! CI/CD product director here. A few thoughts:

- Templates

We actually have done a lot here recently, we've improved includes so that they have a lot more flexibility (https://docs.gitlab.com/ee/ci/yaml/#include), and have even refactored our own Auto DevOps implementation to take advantage of this: https://docs.gitlab.com/ee/topics/autodevops/#using-componen.... In this way, you can have included behaviors across your projects that can then be updated in bulk.
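Concretely, that include-based reuse looks roughly like this (the group/project path, file name, and job names here are made up for illustration):

```yaml
# .gitlab-ci.yml in a consuming project
include:
  # pull shared job definitions from a central repo (hypothetical path)
  - project: 'my-group/ci-templates'
    file: '/templates/build.yml'

# extend the shared hidden job, overriding only what this project needs
build:
  extends: .default-build
  variables:
    BUILD_FLAVOR: "release"
```

The shared file defines `.default-build` (a hidden job carrying the common image, stage, and script), so updating that one file changes behavior across every project that includes it, which is roughly the bulk-update property TeamCity templates give you.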

- Build results

We are planning on adding testing results over time in our vision for this year, thank you for confirming this is important from your point of view. https://gitlab.com/gitlab-org/gitlab-ee/issues/1020

- Overall view of runner status

We did recently add pipeline info to the operations dashboard (https://docs.gitlab.com/ee/user/operations_dashboard/), which I know isn't exactly what you're looking for here but we are making progress in this direction and recognize the gap.

- Dashboard

The next improvement we're making to that operations dashboard is adding environments. You can see the in-progress issue here: https://gitlab.com/gitlab-org/gitlab-ee/issues/3713

- System wide configs

This can be achieved by using includes to set the variables, which is admittedly a workaround. We do have an open issue (https://gitlab.com/gitlab-org/gitlab-ce/issues/3897) to implement instance level variables that would solve this.
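As a sketch of that workaround, with a hypothetical group and file name, the values live in one shared file:

```yaml
# my-group/ci-config/global-vars.yml -- the one place the values are defined
variables:
  NEXUS_URL: "https://nexus.example.com"
  DEPLOY_REGION: "us-east-1"
```

and each project's .gitlab-ci.yml pulls them in:

```yaml
include:
  - project: 'my-group/ci-config'
    file: '/global-vars.yml'
```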

- Extensibility

This is an interesting one, because plugins are, at least in my opinion, what makes Jenkins a mess to use in reality. Believe me, I've managed plenty of Jenkins instances in my career with lots of "cool" plugins that do something great, at least while they work. It is one of our values that we play well with others, though, so I'd be curious to work with you to understand specifically what you'd like to be able to make GitLab do that can't be done through your .gitlab-ci.yml. Our goal is that you should never be blocked, or really have to jump through hoops, but still not have to be dependent on a lot of your own code or third-party plugins.

I hear you on plugins, and I agree they are problematic. I went back and forth on whether to include this or not TBH.

I'll give you a couple of examples of use cases for plugins:

We have an artifact repo that can store NPM, Python and other artifacts (Nexus if you're interested). I wrote a plugin for TeamCity that can grab artifacts from a build and upload them to the repository. Obviously this can be done in a script, but there are a couple of things that make doing it in a plugin nice:

- You can set it up as a reusable build feature that can be inherited from templates (i.e. all builds of a particular type publish artifacts to Nexus)

- You can get nice UI support. The plugin contributes a tab to the build page that links to the artifacts in Nexus.

- The plugin can tie in to the build cleanup process, and remove the artifacts from the repository when the build is cleaned up. This is useful for snapshot/temporary artifacts that you want to publish so people can test with, but have automatically removed later.

Another example of where plugins have proved useful is influencing build triggering: we have some things that happen in the build server, and then other stuff happens outside of the build server. When all that completes, we then want to kick off another process in the build server (that sounds abstract - think an external deploy process runs, and once the deploy stabilizes you kick off QA jobs). In TeamCity you can write a plugin that keeps builds in the queue until the plugin reports that they are ready to run.

While plugins aren't the first tool I reach for when looking at how to provide reusable functionality in a build server, I have written several plugins for both Jenkins and TeamCity. Overall, I don't think Jenkins/TeamCity's model of having plugins run in-process is a good one, and it leads to most of the problems people have with them (although TeamCity is much better here: Jenkins basically exposes most of its guts to plugins, which makes keeping the API stable virtually impossible, while TeamCity has APIs specifically designed for plugins that they've been able to keep stable very effectively). A model where a plugin is just a Docker container that communicates with the build server through some defined APIs, combined with some way for it to attach UI elements to a build that could then call back into the plugin, would be much nicer. This seems to be more like what Drone is doing, but I haven't played around a lot with that.

I think Gitlab has a strong philosophy of wanting to build out everything that anyone will ever need, all nicely integrated, and that's a great ideal. I think in practice, it's REALLY hard to be all things to all people. People have existing systems and/or weird use cases that it just doesn't make sense to handle all of, and plugins are a useful tool in addressing that.

Thanks for sharing, this is really great feedback.

If you work at gitlab, you can download the free version of TeamCity from their website. Set up a few projects and it will be obvious what it does better.

You may want to try C#, Java, Python and Go projects to see the differences, with slaves on Windows and Linux. There are some pretty tight integrations for some of these.

My experience of Bamboo has included:

* Broken base Ubuntu images being recommended by Atlassian as the default for agent Image configuration, only to be fixed to a usable state months later;

* Being generally years behind other CI tools, even the traditional ones;

* Data exports corrupting themselves despite apparently succeeding, blocking migrations after upgrades or server changes;

* The official documentation somewhere recommending copying across password hashes directly to set up a new admin for a migration, but I can't find this anymore so they've hopefully improved the process and documentation for this;

* A bug in an earlier version in which a strange combination of error cases in an Elastic Bamboo image configuration could spam EC2 instances in a constant loop, which we thankfully spotted before it ate through our AWS bill;

* No clear messaging from Atlassian about how the future of Bamboo compares to Pipelines. The official line seems to be that Bamboo is for highly custom setups who want to host their CI themselves, but you get the impression from the slow pace of development that they're slowly dropping it in favour of Pipelines. I'd rather they be honest about it if that is the case.

Those are just the problems I can think of off the top of my head, anyway.

I agree. I used TeamCity and liked it. It was like Jenkins, but easier to set up, less messy, and it just worked for what we needed. It was worth paying every penny for it.

When I used it, I found that devs loved to set up jobs as it was way easier than Jenkins configuration

We use TeamCity even though we have Gitlab for source control. TeamCity has worked for years, which is what we needed. Don't know if we'll ever switch to Gitlab for CI.

TeamCity is amazing. Well worth the money ... especially if you're a Java shop. It's really not too pricey either.

And it's free for up to three build agents.

> GUI-oriented

Not with pipeline files. I am a total Jenkins noob, but I was able to (relatively) quickly set up a minimal job that automatically pulls config from the relevant GH repo.

Ah yes, pipelines do make a difference in configuring jobs. However, how are you managing your plugins? Your Jenkins configs? Most likely those are manual (however if you've found a way that works well, please share). I've also found that for some functionality, I've had to add Groovy into my pipelines.

That said, pipelines have made a HUGE difference. I still want to migrate, but this fixes a large pain point.

> (however if you've found a way that works well, please share)

Not extremely well, but I did a small PoC where Jenkins is running in Kubernetes without persistent storage. Plugins are installed on boot with install-plugins.sh (part of Jenkins' Docker image) and configuration is done via Groovy scripts in init.groovy.d (stuff like https://gist.github.com/Buzer/5148372464e2481a797091682fabba...). It's not perfect (e.g. I didn't have time to find out good way to store & import old builds) & it does require some digging around to find how plugins actually do their configuration.

Plugins and configuration are a given, but it’s something you do once upfront as part of the setup. Not sure how other tools handle this, though.

And yes, you do end up needing to use Groovy for anything non-trivial.

And then somebody needs a different, incompatible, version of plugin X and you set up another Jenkins master.

Or upgrade the Jenkins master and watch other jobs fail.

And not to mention plugin Y and plugin Z crashing Jenkins when being run together because they share the same classpath.

While in the meantime one of the developers is trying to migrate his pipeline from one master to another, and he finds out that of course it won't work because the plugins and configuration are not exactly the same.

This is exactly what OP was complaining about. You don't set up plugins and configuration just once. You want them to be replicable, but Jenkins does not provide a good way to do that.

Most other CI/CD system handle this issue very simply. They just don't have plugins, and have very little (if any) global configuration. This means you can start up an entirely new cluster and chances are your pipeline files will run without a hitch.

And prior to that there were other solutions built around manipulating the Jenkins API, eg: https://docs.openstack.org/infra/jenkins-job-builder/

(My company switched from JJB to pipelines in the last year and has found it pretty decent.)

Buildkite: run your own workers using their agent while they manage the UI. It's a pretty simple system.

I use it for build and test automation and it's been pretty solid.

Have you observed any limitation or problem with it? I've been very interested in transforming our internal Jenkins CI into something lighter and modular with less maintenance which still allows multi-platform slaves, and BuildKite seems like a very interesting new player.

Not really - it's about as simple as buildbot with a nicer UI. All our builds trigger off of Github pushes - I have a handful of cheap Ubuntu VMs on Linode doing builds and tests for our code, and one Mac Mini doing builds for some developer tools - the latter is in a small rack in our office, but it works all the same.

Each build pipeline is just a small shell script which does some setup and runs make to build or invokes our test entry points.
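A pipeline in that style is just a few lines of YAML; something like this (the script names and agent tags are made up):

```yaml
# .buildkite/pipeline.yml
steps:
  - label: "build"
    command: "./ci/build.sh"   # hypothetical setup-and-make script
    agents:
      os: "linux"

  - wait                       # block until the build step passes

  - label: "tests"
    command: "make test"
    agents:
      os: "linux"
```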

Gitlab and Concourse both support windows runners as far as I can see. They also don't require docker, but you might actually want that for most of your jobs.

My biggest gripe about gitlab is you can't schedule a job in code, and I suppose it's less than ideal to support 3rd party repos in hosted gitlab, but I don't know why you'd not use it as an SCM.

The bigger problem would be using a group job that triggers a bunch of other jobs to do the many-modules type of development you spoke about, but I'd just develop my modules separately, and build them in parallel steps if need be.

Hey there, CI/CD product director here. We do allow scheduling/editing/deleting pipelines via code, at least via an API: https://docs.gitlab.com/ee/api/pipeline_schedules.html

Or are you looking more for putting the values in the .gitlab-ci.yml itself? This is something we have thought a bit about, but it gets strange with branches and merges where it's not always clear you're doing what the user wants as the different merges happen.

To your second point, you might be interested in some of the primitives we're looking at building next here: https://about.gitlab.com/direction/cicd/#powerful-integrated.... These, in concert, will help with a lot of more complex workflows.

Indeed I meant in the .gitlab-ci.yml. I would assume you'd name the branch in the schedule, and if not default to the default branch. Similarly, it's sad you can't set a variable in one stage and have it available in another, and there's a couple of other niggles that one needs to work around.

With that said, the product is fantastic and I'm just pointing out some flaws so the parent understands I've actually used the product, and not just a fanboy yelling. :)
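The variable-between-stages limitation can at least be worked around by smuggling the value through an artifact file, roughly like this (file and variable names are illustrative):

```yaml
stages: [build, deploy]

build:
  stage: build
  script:
    - echo "APP_VERSION=$(git describe --tags)" > build.env
  artifacts:
    paths:
      - build.env   # hand the file to later stages

deploy:
  stage: deploy
  script:
    - source build.env          # re-load the value written in build
    - echo "deploying $APP_VERSION"
```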

We do have an issue about making it easier to use GitLab Serverless for that use case. Please take a look at https://gitlab.com/gitlab-org/gitlab-ce/issues/61171!

Same boat as you. I'm very happy with Gitlab CI. Do look into it, it's extremely flexible. Not quite as flexible as Jenkins, but far more than Travis/Circle CI, without it becoming an issue.

They have an integrated Docker registry as well!

They now have configuration includes and cross project pipeline triggers, which is part of what GP seems to be looking for.

Personally I've found that for my past and present use cases, generating any needed steps (e.g. a test matrix) with a script is much more flexible, predictable, and reproducible, since the generated result can be versioned.

I also successfully used various custom runners including baremetal Windows ones and virtualised macOS ones inside VirtualBox.

I don't think Jenkins is GUI-oriented, slow or hard to back up/configure, but I did enjoy using TeamCity a few years back. Sure, it costs an arm and a leg, but it worked well without any plugins.

Happy Buildkite user here across two companies. We've built some custom tooling around the GraphQL API here but have since found it solid for both periodic jobs and CI needs.

I’m experimenting right now with how far I can simplify the abstractions, and writing my own thing in rust.

Since my use case is integration with gerrit, I poll the updated changes over ssh, and have regex-based triggers which cause a "job" to launch. A job consists of making a database entry and calling a shell script, then updating the entry upon completion. Since a job is just a shell script, it can kick off other jobs either serially or in parallel, simply using GNU parallel :-)

And voting/review is again just a command so of course is also flexible and can be made much saner than what I had seen done with Jenkins.

So the “job manager” is really the OS - thus killing the “daemon” doesn’t affect the already running jobs - they will update the database as they finish.

The database is SQLite with a foreseen option for Postgres. (I have made diesel optionally work with both in another two year old project which successfully provisioned and managed the event network of about 500 switches)

Since I also didn’t want the HTTP daemon, the entire web interface is just monitoring, and is purely static files, regenerated upon changes.

Templating for HTML is done via Mustache (I also use it in the other project; very happy with it).

For fun I made (if enabled in config) the daemon reexec itself if mtime of config or the executable changes.

You can look at the current state of this thing at http://s5ci-dev.myvpp.net and the associated toy gerrit instance at http://testgerrit.myvpp.net

I am doing the first demo of this thing internally this week, and hopefully should be able to open source it.

It’s about 2000 LOC of Rust and compiles using stable.

Is this something that might be of use ?

I think these kinds of home-grown systems are pretty hard to "sell" to others. I know that I've written a couple; my general approach was to:

* Get triggered by a github (enterprise) webhook.

* Work out the project, and clone it into a temporary directory.

* Launch a named docker container, bind-mounting the temporary directory to "/project" inside the image.

* Once the container exits copy everything from "/output" to the host - those are the generated artifacts.

There's a bit of glue to tie commit-hashes to the appropriate output, and a bit of magic to use `rsync` to allow moving output artifacts to the next container in the pipeline, if multiple steps are run.

But in short I'd probably spend more time explaining the system than an experienced devops person would be creating their own version.

Oh yeah, agree absolutely. I don’t plan on “selling” this toy outside its original intended audience, just the timing was funny :)

Zuul-ci.org recently caught my eye, particularly because it fully supports heavy integration testing of multi-repo apps. It doesn't yet have support for Bitbucket Server though, which is sort of a deal breaker for me.

It's just a simple matter of code. You're not the first to want BitBucket support.

> But when you have many similar repos (modules) with similar build steps you want to manage

How many teams do you have? In all seriousness, if you aren't talking at least one team per repo, have you considered a monorepo setup? Aren't you burning time managing those many similar repos with many similar build steps?

That said, even in a monorepo, I still prefer Jenkins compared to cleaner seeming cloud offerings due to its limitless flexibility.

Internal libraries and similar fun stuff. Common build step ~~ same packager commands run on them. Management is fairly simple with a template + seed jobs. It's just ... everything else is annoying.

I don't understand what you mean by one team per repo?

I agree, as I keep saying at $WORK: Jenkins is the least-worst system out there.

side note: I am confused by your usage of "TFA". I looked it up and it stands for what I thought it does, which has a pejorative connotation. That doesn't seem to be what you meant?


Heyo, sorry about that, I was playing on the fact that common parlance has tamed the usage to have "TFA = The FINE Article" in civil discourse =) My bad, will check my assumptions some more!

Don't worry too much, I also thought that TFA as an abbreviation for "the fine article" is at least as well known as it is for "the fucking article".

Literally the first time I'm hearing this. :P

It's used so much in conversation that it has ceased to be pejorative.

I'd argue that while RTFA had negative connotations, TFA never did. It was just a humorous reference to the former.

Agreed. I think of it more as a joking reference to RTFA. The humor of this probably originates from the early slashdot days.

Hrm... Totally anecdotal but I see it used that way just frequently enough that I'm familiar with the more-general usage but not nearly frequently enough for it to feel "right".

I see this a lot on HN as well and have been equally confused. I tend to rethink of it as "the featured article".

I am not OP but I am 100% sure the author meant "the fucking article".

Yeah but that is derived from RTFA or RTFM, but that meaning doesn't apply here at all.

I think the people using TFA don't know what it means... Whenever I see that I think they are angry about something or arguing, but he's instead supporting the point of the article. Doesn't make sense.

> Yeah but that is derived from RTFA or RTFM, but that meaning doesn't apply here at all.

The meaning of the "TFA" part does. The meaning of the "R" doesn't, which is why it is dropped.

> I think the people using TFA don't know what it means...

They generally do. You, however, seem to be confusing "derived from" with "means the same as". TFA is derived from RTFA, but it does not mean RTFA, nor does the argumentative implication of RTFA come along with it, since the argumentative implication is associated primarily with the implicit accusation that the target has not done what is expected in a discussion and read the source material that is the subject of discussion, which is carried entirely by the "R".

(One can read anger into the "F", but that's tamed by the fact that even in the context of RTFA/RTFM, that's often reconstructed into a non-profane alternative ["fine" is the one I've most frequently encountered.])

Based on the way it's often used on HN, I think reading it as "The Featured Article" makes the most sense.

If it offends, you should be reading it as ‘The Featured Article’

TFA is in reference to actually Reading TFA, or RTFA. Historically, it has very strong roots in Slashdot culture, which was sort of the Hacker News of the late 1990s and all of the 2000s. Using TFA somewhat indicates you RTFA, as opposed to everyone else who is just speculating on the content of the linked article (didn't RTFA).

Some of us here have been using terms like RTFA and TFA for twenty years, maybe longer.


Actually, historically its use doesn't necessarily have a pejorative connotation. You can take it to mean "The Fine Article" just the same. It's more of a joke reference with roots to 'RTFA' used frequently in discussion forums like this.

I think it was here on HN that someone introduced me to reading it as The Fine Article.

While I am a conservative christian myself (hah, most of you didn't guess that) I try to make a point out of not getting offended for such things, and if I can do it so can most people :-)

I'm helping clients move from Jenkins to Azure Pipelines which is part of Azure DevOps (formerly VSTS, TFS). If that doesn't make you dizzy then it's a pretty good product. It has a free tier. Windows build targets shouldn't be a problem since it's from Microsoft. Obviously it's not OSS.

We run our infrastructure off of cloudflare, so we can easily spin up a staging environment that's an exact replica of production (only difference is # and size of instances). We also run a staging jenkins server that's defined in the cloudflare config.

We keep our jenkins jobs version controlled by checking in each job's config.xml into a git repo. In the past I've seen the config.xml files managed by puppet or other config management tools.

This helps us get around the "hard to backup" and "test in production" issues. We can test out jenkins changes in staging, commit those changes to our jenkins repo, and then push up the config.xml file to the production jenkins server when we're ready to deploy.

>Anyone got a sane alternative to jenkins for us poor souls?

I haven't tried this yet myself but AWS CodePipeline lets you have Jenkins as a stage in the pipeline. You use Jenkins only for the bits you need without the extra plugins. The resulting Jenkins box is supposed to be very lean and avoid the problems you describe.

Performance isn't great. We're using codepipeline/codebuild (triggered via jenkins), and it's common to wait 30 seconds while the step is being created

Cloudbuild on the gcp side has had much better performance

I'm quite happy with Buildbot.

I'm still on buildbot, but it's definitely showing its age and I'm hoping to move off of it within a year. I've been keeping an eye on Chromium's buildbot replacement, LUCI (https://ci.chromium.org/). It's still light on documentation and the source is very internal google-y (they seem to have written their own version of virtualenv in go). However, based on the design docs it does look like they ran into a lot of the same problems I have with buildbot, specifically the lack of support for dynamic workers, and how underpowered the buildbot build steps can be.


Doesn't buildbot have dynamic workers implemented as latent workers? [1]

What do you mean by underpowered buildbot steps? Are you implementing your own step classes?

[1] https://docs.buildbot.net/current/manual/configuration/worke...

I'm not on buildbot nine (I think the new waterfall UI is a big regression), but what that is describing looks like a statically defined list of workers that scale up and down dynamically. What I'm looking for is the ability to add and remove workers at will, without having to add them to the configuration list and restart the master.

In terms of underpowered build steps, I have several fairly complicated, 1k-2k line build factories, with multiple codebases and hundreds of steps (some custom, some from the stdlib). There's many dependencies between the steps, and many different properties that can be toggled in the UI. All these steps need to be defined up-front in the master, but their actual execution often depends on runtime information, so it becomes a mess of doStepIfs. I think it would be an improvement to give a program on the worker the power to tell the service what it wants to do, rather than the other way around.

One way to scale up / down workers in Buildbot is to have more workers defined in the configuration than actually needed with generic names (e.g. worker1, worker2, etc) and then start / stop them when required.

Agree with you on the waterfall UI regression. It seems console view is preferred over waterfall in the recent versions. It's slower than the waterfall UI though.

Try Jenkins X. It's only named after Jenkins, but works without it. It supports Kubernetes only, though.

OP really needs to try Concourse. Same container-based workflow as Drone that is touted as a solution, but more mature, and much more testable than Drone.

Concourse really hits his requirements for ops-friendliness and testability. It's easy to upgrade because the web host and CI workers are completely stateless, and the work you do with Concourse is easy to test because the jobs themselves are all completely self-contained. Because Concourse forces you to factor your builds into inputs, tasks, and outputs, it becomes somewhat straightforward to test your outputs by replacing them with mock outputs.
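To make that inputs/tasks/outputs factoring concrete, a minimal pipeline looks something like this (repo URL and task file path are placeholders):

```yaml
resources:
  - name: source
    type: git
    source:
      uri: https://github.com/example/app.git
      branch: master

jobs:
  - name: unit-tests
    plan:
      - get: source      # input: fetched into the job's workspace
        trigger: true    # run when the resource sees a new commit
      - task: run-tests  # task config is versioned in the repo itself
        file: source/ci/unit.yml
```

Because the task's own config lives in the repo, the job is self-contained and easy to re-run or test in isolation.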

The main issue with Concourse is that it has a huge learning curve to use effectively. When you have many pipelines, you end up learning the undocumented meta pipeline pattern to help manage them. You end up implementing stuff like mock outputs by yourself, since it's not really a first-class concept. Concepts like per-branch versioning that have been in products like Jenkins for years are only now entering development as "spaces". All of the configuration can be version controlled, but it's all YAML, so to wrangle everything together, you end up using something which will compile down to YAML instead. RBAC is much improved in v5 but still leaves much to be desired. There are no manual steps, only manual jobs. Job triggering is relatively slow due to a fundamental design tradeoff where webhooks trigger resource checks which trigger jobs instead of triggering jobs directly, to make resources easier to maintain. Nobody really tells you this on the front page.

It has its flaws. But if you delve into it you see very quickly that it's an incredibly solid product. The ability to run one-off jobs from developer workstations on CI workers, and to easily SSH into those CI worker containers from developer workstations, is incredibly helpful for debugging issues. Because everything is stateless, everything tends to "just work". If you go in with your eyes open about its limitations, you'll have a good time.

Tekton [1] works in a similar manner where the pipeline stages define inputs, outputs and tasks. The great part about Tekton is it provides a set of building blocks that can be integrated into a larger system.

I hope to integrate Tekton into Drone [2] and allow individual projects to choose their pipeline execution engine. Projects can choose to start with a more basic engine, knowing they have the option to grow into something more powerful (and complex) when they need to.

[1] https://tekton.dev/ [2] https://github.com/drone/drone/issues/2680

The thing that turned me off concourse last time I checked it out is that their documentation assumes (assumed?) you're going to use BOSH. I don't want to have to learn and maintain yet another infrastructure as code tool, just for my build server. I know you can run concourse without it, but all their examples seemed to use it and I didn't want to hit edge cases that they didn't account for. So I gave up before too long.

There's a helm chart available for Concourse, I haven't tried it yet though. Definitely agree about BOSH making it harder to get started.


You can get Concourse as a Helm chart and completely avoid BOSH (which is how we deploy it).

> ops-friendliness

Certainly, as a release engineer I love the idea of Concourse, but the lack of a nice UI for dev feedback/monitoring makes it a hard sell as a drop-in Jenkins replacement.

So, there is a resource that will fetch your pull requests so that they can be built. It's not quite as good as per-branch builds, but with GitHub's new draft pull request feature (if you use GitHub), it does the trick for us, but we're also a relatively small dev team.

Either way, it's not a drop-in Jenkins replacement. It really does have a high learning curve because it forces you to wrap your mind and your code to Concourse's model. Probably, a lot of your build and deployment scripts would need to be rewritten. The reason why you would do so is to get the benefits described above - everything (including environments) is version controlled, ops are stateless, running code in the CI environment from a developer machine, etc.

Our setup runs Jenkins master and slaves as Kubernetes pods, with plugins limited to only the very few required to get GitHub integration and slaves working.

Jobs are configured by adding an entire GitHub organization. All repositories with corresponding branches, pull requests and tags are automatically discovered and built based on the existence of a Jenkinsfile.

Everything is built by slaves using Docker, either with Dockerfile or using builder images.

Job history and artifacts are purged after a few weeks, since everything of importance is deployed to Bintray or Docker repositories.

By keeping Jenkins constrained in this fashion, we have no performance issues.

That is exactly how we're doing it as well, though I am interested in checking out CloudBees' Jenkins. We've recently incorporated Zalenium (a Selenium grid which autoscales nicely and natively in Kubernetes) - just had to work a little magic with automatic service creation during builds.

I receive vulnerability notifications for Jenkins pretty much regularly... mostly XSS and RCE.


I'm just waiting for Apache to adopt it, and then it'll sit and fester like everything else in the Apache graveyard, full of vulnerabilities and slowly decaying.

Those are just Jenkins core exploits too... there are so many many more for Jenkins plugins.... https://www.cvedetails.com/vulnerability-list/vendor_id-1586...

Jenkins is now part of the CD Foundation (https://cd.foundation/) which is one of the linux foundation sub-foundations. Don't expect it to show up in the apache foundation.

I don't think tinix was expecting it to literally become an Apache project - he was just saying it's in a state of decay that Apache is infamous for.

Last place I was at had their unmanaged Jenkins servers get compromised and used to run crypto miners.

Were they using an older version of Jenkins on the public internet? There's been a randomized GUID applied to the initial Jenkins admin password, which you can only access if you have direct access to the Jenkins install. I think this was added in 2016.

It was an older version with a vulnerability but as far as I know not a default password.

Jenkins's plugin manager is absolutely terrible.

If you're stuck on an older version of Jenkins, you better not click the "refresh" button in the plugin management page, cause otherwise the page is just filled with red warnings saying that the latest version of each plugin is incompatible or has dependencies that are incompatible with your current Jenkins version.

There is afaik no way to install the last plugin that was compatible with your version of Jenkins.

Check this out, too. Free forever: https://www.cloudbees.com/products/cloudbees-jenkins-distrib.... We have included Beekeeper, which is an implementation of the plugin manager that provides a list of known-compatible, recommended plugins that CloudBees verifies and tests with each long-term support release of Jenkins.

> There is afaik no way to install the last plugin that was compatible with your version of Jenkins.

Probably not directly, but if you know the version, you can download the HPI and install it manually. Jenkins's Docker image also contains install-plugins.sh (https://github.com/jenkinsci/docker/blob/7b4153f20a61d9c579b...) that you can use to install a specific version of a plugin via the command line.
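
One common pattern built on that script (a sketch; the plugin names and versions are just examples) is pinning exact plugin versions into a custom Jenkins image, so the whole plugin set is reproducible:

```dockerfile
# Sketch: bake pinned plugin versions into the official image
FROM jenkins/jenkins:lts
# plugins.txt contains one "name:version" pair per line, e.g. "git:3.9.1"
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
```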

Thanks! That script definitely gives some pointers on how to update things.

We use Jenkins at work and have found a pretty damned sweet spot.

Let me start by saying that we used to use GitLab, a lot of it was because of the CI but I didn’t have a great experience trying to manage it on top of Kubernetes, they’ve since introduced a Kube native package and I’ve been told it’s much easier, but with the deployed omnibus we ran into a lot of issues with runners randomly disconnecting, it became frustrating to the point where I had developers not wanting to use GitLab and finding interesting ways to work around it.

So I set up Jenkins on a dedicated EC2 instance with a large EBS volume for workspace storage and installed the Kubernetes plugin. Then I wrote a Jenkins shared library that exposes a function to read a GitLab-style YAML file and generate the appropriate stages, with parallel steps that execute as pods in Kubernetes - it took about a week to get the things we actually used from GitLab CI and their YAML DSL working correctly.

Now we very happily use Jenkins, mostly through YAML, but on the occasions where it's much easier to go directly to Groovy to interface with plugins, developers can.

I want to be clear that this is my own experience, and I think a lot of our issues may have come from mismanaging our self-deployed setup. I've had a lot more experience managing Jenkins.

GitLab and their approach to CI (easy to use yaml) really facilitated developers writing CI, which increased our software quality overall.

Have you used the Jenkins Job DSL plugin?

I'm just getting started using it, but it seems like the solution to scaling up to a lot of Jenkins jobs. There's a good talk about it, and since you're one of only two people in the thread who used the word DSL and you are having a good experience with Jenkins, I thought I'd ask.

My config is similar except my single EC2 node is actually running Kubernetes via kubeadm, it's a single node Kubernetes cluster and has enough room for about 3 of my worker pods to execute concurrently before the rest have to wait.

(But that's just my setup and has nothing to do with Job DSL.)

For me, managing Jenkins via the helm chart has been the best part of the deal, but I'm a pretty big fan of Helm already...

I haven't used it; we use the GitHub org folder scan to automatically add a job for every repo that has a Jenkinsfile - so I don't have to do anything, it's pretty great.

Awesome, that also sounds good

The big advantage of the job DSL plugin is that if you have many similar repos, you don't just treat them all the same with multiple copies of the same Jenkinsfile, you actually can build the jobs from the same source, inheritance-style.

There could be some reasons not to do this, but if you have 100 jobs and 20 or more of them are mostly identical in form except for some parameter, it's a better strategy than multiple Jenkinsfile.

But then again if your Jenkinsfile isn't changing, it might not matter.
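
To make that concrete, here is a rough Job DSL sketch (repo names and URLs are invented) that stamps out one pipeline job per repository from a single template:

```groovy
// Hypothetical Job DSL seed script: one template, many near-identical jobs
def repos = ['service-a', 'service-b', 'service-c']

repos.each { repo ->
    pipelineJob("build-${repo}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url("https://github.com/example-org/${repo}.git") }
                        branch('master')
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
```

Changing the template changes all generated jobs at once, which is the inheritance-style reuse described above.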

I'm not at all ashamed, nor a single bit remorseful to comment on the fact that it took a catastrophic data loss for a team I once worked with to finally sit down and look at our CI/CD pipeline before deciding "Maybe jenkins is overkill for what we need".

Which was something I had been kvetching about for months and had expressly warned our release manager about, multiple times, as a point of concern given how quickly plugin vulnerabilities are reported (as someone commented elsewhere in the thread).

The day finally came when someone from one of our other offices went to update some infrastructure as code repos, poof. Jenkins server gone. They didn't have a roll back plan, and to complete the trifecta, they somehow also killed all of the instance volume backups. An entire sprint was summarily dedicated to creating a new build pipeline, I had resumes out the door the next day.

This article hits so many of our pain points I joked to a current coworker who followed me out of that place that I wanted to print it and mail it to our former RM.

What do they/you use now?

No clue. Didn't stick around long enough to find out; that moment was officially the last straw for me. There was a lot of hemming and hawing, and when I left, product, project, and dev still hadn't made their minds up. Ops (where I lived) would give the team ideas and proposals, but dev gatekept everything from tickets to the VCS.

This was one of those shops that hired people and gave them devops job titles, but demanded they maintain very monolithic status quos with everything from tickets and stand-ups to release management.



Jenkins is terrible but I've not found something better yet.

For me, the great thing is that it is an automation platform where we can integrate builds, scheduled tasks (cron), and very basic end-user UIs with Plugins that let me customize the whole experience.

Azure Pipelines comes close in capabilities. If they made it easier to manually start a job and have it accept parameters from the user, then it'd probably have everything I need.

There are lots of alternatives for Jenkins as a CI builder but I haven't found many for Jenkins as a web-based cron with a nice UI that keeps track of history, has tons of options for notifications on failures, and quickly allows re-building (even with user-supplied parameters!)

We use Jenkins for running backup jobs, periodically updating data, and building quick little jobs for support staff to run--infrequently enough that they don't warrant adding to our admin app, but frequently enough that bothering a developer adds up.

For periodic admin tasks, you might consider giving Ansible AWX a try (https://github.com/ansible/awx). It is a web interface for managing Ansible playbooks, and lets you configure jobs that another user can then execute and provide parameters. It keeps track of all run results, has a fair number of supported integrations for notifications, and can schedule runs.

I work for Red Hat, and AWX is the upstream community project for Ansible Tower, which we provide support for. AWX is one of our newer open source projects (we open sourced it after acquiring Ansible), so you'll sometimes have better luck searching for information on "Ansible Tower".

I spent quite a while grappling with Jenkins pipelines, and while it definitely has its warts, once you get over the hump and get one pipeline the way you want it, it's quite easy to use it as a starter for other projects.

Personally I go to great lengths to minimise the plugin dependencies and push the logic into bash scripts or Maven/Gradle. The Jenkinsfile just calls the bash scripts, so I can run it before I change it. Where I use plugins in the pipeline I document what they are and what they do in case we ever need to go from scratch.
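
A sketch of that thin-wrapper style (script paths are placeholders), where the pipeline does nothing you couldn't reproduce from a shell:

```groovy
// Hypothetical plugin-light Jenkinsfile: all real logic lives in versioned scripts
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './ci/build.sh' } // runnable locally before you push
        }
        stage('Deploy') {
            steps { sh './ci/deploy.sh' }
        }
    }
}
```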

It is hard to test but the linter picks up quite a lot of errors before push, and if you're doing a multi-branch pipeline you can just test your changes by pushing to your own branch. It's a long cycle, but the variation should go down the more you have them.

It has also been indispensable as an ad-hoc task scheduler. For example, I have a job that cleans up old branch artifacts every Saturday night. It was easy to alert on failures and see the results and history of past runs. I don't know anything else that would have fit the bill.

When you make software over 10 years old in such a fast-changing environment, the legacy of the software can weigh you down. I've worked on similar products that were tied to their legacy counterparts (sharing a database), and every change meant carrying the baggage of backwards compatibility with the legacy platform. If I could do it all again, I'd have done it the Basecamp way: Get a clean break from the legacy version, and have a migration path to the new system.

I would like to see CloudBees take all they've learned over the years, take the great work they've done on Blue Ocean and their core job scheduling system and put it all together without carrying the history with it.

Another container based option is Screwdriver, which started as a Yahoo in-house solution built on top of Jenkins. But it is now fully open source and no longer makes use of Jenkins in any way:


This looks really good! For a long time Jenkins has seemed like the only option for flexible jobs. Tools like Drone and Travis and the many others that use a "config file" approach just aren't able to scale with your needs. They're fine for a run-of-the-mill build, but that generally just isn't enough.

Drone is working on support for Starlark, a python-inspired configuration language used by the Bazel Build system [1]. For complex pipelines, this should be more analogous to Jenkins scripting and may improve scaling Drone for larger projects (time will tell).

[1] https://github.com/bazelbuild/starlark

I work at a big enterprise that uses Jenkins. We support a large group of legacy applications and the waterfall type development processes that lately have been papered over with agile-lite. I work about 80% of the time administering (Cloudbees) Jenkins. I have lived through Freestyle projects, migrating to Job DSL, and now we are moving to declarative pipelines in shared libraries. I agree with 95% of the article's pain points. Still, I'm a fan, perhaps because each iteration is significantly less painful from the previous one.

Still, Jenkins works well for us in our environment. Our waterfall-style development processes make for ops-friendly, cookie-cutter type builds: one deployable asset per repository, one deploy target, all projects laid out the same way... Jenkins shared libraries provide powerful ways to capture the commonalities among the projects and a way for us to manage them centrally, while providing some (small) flexibility for customizations by the development teams. Shared libraries also allow testing changes using selected versions (branches) of the pipeline for specific builds. Looking forward to what's next (I still have a few mortgage payments to make...).

Jenkins also routinely has massive security holes.

Last major exploit I heard about - the matrix.org exploit, was from privilege escalation through a Jenkins vulnerability [1]

[1] https://www.zdnet.com/article/matrix-hack-forces-servers-off...


This isn't necessarily a knock against Jenkins itself, but it's certainly a knock against the thousands of orgs running their own unpatched Jenkins servers, often on the same machine as their other apps.

It's true that Jenkins is old, and has systemic architecture problems which mean that I'm sure we haven't seen the last RCE for it. RCEs in build systems are a nightmare because build systems necessarily end up knowing secret tokens.

But drone.io is no kind of replacement. It only works with Docker containers! Even small Jenkins installs can quickly end up with Windows targets, mobile builds, etc.

> It only works with Docker containers

This will only be true for another week or two. We have devised a framework for alternate runtimes, including running pipelines directly on the host machine or in traditional virtual machines, that will be released end of week. Reference issue https://github.com/drone/drone/issues/2680

Does it support running on macOS and Windows, too? CI for native apps is IMHO the most common lock-in for Jenkins at the moment. Anything web or container related can almost always be implemented using pretty much any other CI/CD solution out there.

Yes, it supports windows, macos, and linux. I expect it will also support bsd although I have not tested this yet. This should land in master some time this week.

Unrelated to the content, but the stock photos throughout this article felt completely needless to me. We're not 5 year olds, we don't need pictures unless they add to the content.

Jenkins-X is really cool. I love how you can group services together into an environment and deploy that to a single Kubernetes namespace, and how well it uses github releases.

I really hope GitLab's Auto DevOps team is looking at it closely and stealing all the great ideas it has, because that's what our team is using.

Jenkins X requires Kubernetes and Helm. Not all companies are deploying like this.

(I work for Codefresh, a competitor of Jenkins X)

The most interesting part of this is the very last paragraph.

There really isn't a good self-hosted solution for build metadata/metrics, unless you write something yourself. It would be interesting to see a GUI dashboard for Drone/whatever builds and pipelines.

I never understood the purpose of dashboards. In the context of CI, I see three use cases and there is always a better solution than a dashboard.

1. Detect broken builds. It is better to actively notify somebody by mail, chat, whatever.

2. Prevent broken master. It is better to reject the pull request/patch before it becomes a problem.

3. Analyse the system. It is better to download the data and enable the use of whatever analysis tools are suitable.

Direct emails can't replace the collaborative "hey why are these still red" moments without spamming the whole team.

They can show non-event information such as build time trends at a glance.

There was an interesting community-driven project to create a dashboard for Drone, see https://github.com/drone/drone-wall. I would love to see more like it.

Anyone who agrees with the OP should give Zuul-ci.org a look.

I run Zuul in production with GitHub and AWS, and it's been really useful for us. It scales to the moon, parallelizes everything, and having all job/pipeline configuration in git means you can test things like CI job changes before they're committed.

Cross-repo pre-merge dependency management means I can build whole speculative futures in PR's before it lands and breaks the build.

Funny story, the Zuul build status badge is just a static image, because Zuul does not allow code to merge if it fails tests: https://zuul-ci.org/docs/zuul/user/badges.html

Full disclosure: I worked on a defunct zuul based service for a little while, and still am a core developer with lots of code in Zuul. A few presentations I've given are here:

http://fewbar.com/the-build-is-never-broken/ http://fewbar.com/zuul-ci-crossing-streams/

AWS CodePipeline with CodeBuild supports GitHub webhooks, exporting artifacts between stages, many types of caching (S3 artifact caching, Docker caching, file/volume caching), Secrets Manager environment-variable integration, queueing between stages, pipeline retries, stage retries, manual approval steps, integration with CloudWatch and many other services like Lambda, and provisioning of all of the above via API, CLI, SDK, or CloudFormation.

> queueing between stages

Doesn't it only support a queue depth of 1 at each stage? Last I checked, if you have a change that is queued up waiting to get into a stage, and a newer change comes along, that newer change will supersede the old one. That makes CodePipeline only good for workflows where you only care about deploying the 'newest' of something.

I think the worst thing about CodePipeline is how hard they make it to run custom code from your pipeline. Your options are Lambda (limited to 15 minutes per run, or you have to rewrite your Lambda function to be called once every 30 seconds, essentially using the first run to start your deployment and every subsequent run to check to see if it's done yet or not) or CodePipeline Custom Actions (where you have to write the AWS SDK code for interacting with CodePipeline).

The AWS Developer Tools team could learn a thing or two from Azure Pipelines. They did it "right", IMO: you can create a 'bash' stage in your pipeline which runs whatever script you want on the build agent (which can either be hosted by Microsoft or hosted yourself). That's all I really want. CodePipeline could support that with a custom action, but it's more stuff that I would have to set up.

And the beauty of CodeBuild is that if you need a custom build environment just for your special snowflake build, it’s as easy as creating a Docker container and it won’t interfere with anyone else’s build environment.

I'd never used Jenkins before; a few weeks ago I needed to do some CI that was more complicated than just pytest or whatever. Installed Jenkins, put together a Dockerfile, connected the GitHub hooks, and It Just Works. I'm a happy customer

The problem is people want to do really complex things with Jenkins. A few hundred repos, each with dozens of builds; Multiple operating systems... There is a reason a team of 10 works full time to administer our CI system: it is complex (the CI system is more than Jenkins)

It is good to support open-source software over closed source. GitLab is a good, modern open-source alternative to Jenkins (which is also open source). GitLab defaults to YAML configuration files, which are easy for humans to read; Jenkins uses XML, which is hard for humans to read.

One should not use closed-source solutions just because they are easy to use. Just because something is easy does not mean it is good for you in the long run.

"cripple-wear" made me cringe. Shouldn't it be "-ware", anyways?

I really love the ideas/architecture behind Concourse, but there's a few things that disqualified it in favour of Jenkins for a new CI pipeline during prototyping:

- no hooks to manage worker scaling (https://github.com/concourse/concourse/issues/993). Our builds are _heavy_ and we'd run up an enormous AWS bill without something like (https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Plugin)

- no way to restart a build, resorting to 'empty commits', which is a huge red flag for usability (https://github.com/concourse/concourse/issues/413)

- limited documentation/examples (network effect)

Wow, I never knew Jenkins could scale its workers like that. We definitely have performance issues with our Concourse cluster running in K8s. It's got its own set of dedicated nodes, but we need to scale the workers better, as they're often under heavy load throughout the day when the devs are pushing code and a bunch of tests are running in Concourse (and PR checks as well)!

Concourse definitely feels more refined than Jenkins. Like other commenters have said, it's a steep learning curve to grasp how things move between Tasks/Jobs.

Co-founder of cloudbees here, just some corrections:

1) Restartable builds/stages have been in open source for a bit over a year now (IIRC) - the article claims they are not in open source.

2) Jenkins X is something VERY DIFFERENT from Jenkins (despite the name). It is "master-less"; yes, it requires a Kube cluster to power it, but it has no Jenkins instances as you know them, and in a lot of ways it is quite different, as it uses Tekton as the engine for running pipelines (which has a bunch of advantages). So I wouldn't group it in with the same challenges and constraints. It is something new that shares a name and some of the ideas.

Jenkins, along with Spinnaker and Tekton and Jenkins X are now part of https://cd.foundation - worth a look to see how they are evolving (expect some changes).

Unfortunately people are looking for simplicity and this is going in the opposite direction. A kubernetes cluster is just offloading a large part to yet another component you have to run. Spinnaker is another completely separate system.

This is the opposite of what teams want. Drone, Gitlab, Teamcity as mentioned in these comments is a far better approach for 99% of companies who want a solid working solution.

Soon, "running on Kubernetes" will be like "running on Linux", i.e., it won't add any operational complexity because you anyway have a Kubernetes cluster running.

So maybe you are not there yet, but for a future-oriented CI/CD platform with self-hosting option, using Kubernetes as basis is a good approach.

On the point of simplicity, much pain has been described here about running Jenkins - https://www.cloudbees.com/products/cloudbees-jenkins-distrib... is hopefully a better way to handle that.

On Kubernetes - imagine if that was already managed for you, and all the powerful things that can be done on top versus doing it yourself (preview apps, OWASP vulnerability testing, progressive delivery - all without writing a pipeline). This is stuff that can already be done.

I note gitlab is mentioned a lot here: they noted this power as well (see auto devops) and have started building things on top of kubernetes too.

Agree Kube is not for everyone, no question, I was just trying to clarify what was in the article. (if you are offloading complexity - I would hope it is to a GKE or EKS or AKS or similar, in which case it is very much offloaded...)

I didn't mean to imply that you mix all those things together from the CDF - was just mentioning some interesting projects (some are unrelated) in the mix.

Agree also on simplicity - myself I don't like to run anything, so CodeShip is what we have for that (but it sounds like you are referring to self hosting only solutions?)

(edit: and thanks for insightful comment)

Tangentally I've had great success with Drone. Holding similar frustrations that other commenters are expressing here it was a breath of fresh air to see how dead simple making custom steps in Drone was compared to actually needing to write code for Jenkins.

Every step in Drone is just a container, so if I want a Go build container I can just set the step image to golang:latest and start running build commands. And if you want to encapsulate some logic into a "plugin", you can just pre-make your own Docker image, and from there any settings you pass in with the step are converted to environment variables that can be accessed inside the container. Many of the default Drone plugins, like the Docker registry publish plugin, are just a couple of lines of bash on GitHub.
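
A sketch in Drone 1.0-style YAML (image names and settings are illustrative) - each entry under `settings` surfaces inside the plugin container as an environment variable such as PLUGIN_REPO:

```yaml
kind: pipeline
name: default

steps:
  - name: build
    image: golang:latest      # any stock image can be a step
    commands:
      - go build ./...
      - go test ./...

  - name: publish
    image: plugins/docker     # a "plugin" is just a pre-made image
    settings:
      repo: example/myapp     # becomes PLUGIN_REPO inside the container
      tags: latest            # becomes PLUGIN_TAGS
```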

Is buildbot old too? I sure enjoyed the flexibility. https://buildbot.net/

One complaint about Buildbot is that if you get too creative, your buildmaster.cfg gets very hard to maintain, but if you stay very diligent, just having Python (and being able to print or log whatever is happening) makes debugging and building complex setups very easy.

Also, if you ever need to schedule jobs/tasks (not just ci builds) across multiple machines, buildbot is great because all you need is a master, and slave python processes which just need a network connection to the master.

I think a better headline would have been something more like "Jenkins doesn't have the features I require". There are plenty of examples of older pieces of software or older languages that still work perfectly fine today.

You should really read the article before you comment on it.

What I'd like to see is a CI/CD engine backend. Most CI/CD solutions have relatively the same feature set:

- Code builds

- Code deploys

- Git Hooks

- Parallel builds

- Matrix builds

- Build as configuration

- Etc

If there were an engine that supported these features, the community could create a number of frontends that target their specific needs.

Jenkins was held back, in my opinion, for a number of years by its antiquated UI. And while Jenkins does have an API of sorts, writing a new UI for it is not a simple task.

I've used Drone for a while, and it gets a lot of these things right, but at the end of the day it is a holistic, integrated solution.

I'd like to see a composable, modular CI/CD framework.

I'd argue that the components are:

- job config

- agents & resources

- logs/results

- triggers/schedules

- web UI

- system config/'plugins'

- user directory with optional permission/ACL

and default ancillaries for secrets, artifacts, alerts, etc., with the idea that they could all be replaced with a purpose-built product that's actually good at it (i.e. Vault, Artifactory, PagerDuty/Sensu/whatever)

I think deployment is a separate problem space to CI and shouldn't be included

Sounds like you want https://buildbot.net

What is a "matrix build" in the context of CI/CD?

When your software needs to build on, or run on multiple targets, it's useful to define it in a matrix. Here's an example: https://0-8-0.docs.drone.io/matrix-builds/
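
In Drone 0.8 syntax (the version that link documents), a matrix fans a single pipeline definition out over variable combinations - a sketch, with versions chosen arbitrarily:

```yaml
pipeline:
  build:
    image: golang:${GO_VERSION}   # interpolated per matrix entry
    commands:
      - go test ./...

matrix:
  GO_VERSION:
    - "1.11"
    - "1.12"
```

Each combination runs as its own build, so you get a pass/fail per target.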

An even better outline would be without all the terrible irrelevant stock photos.

Yours is not the first post that contains nothing more than an outline version of the link. Why is this becoming a thing? The page seems perfectly readable as it is...

The page is actually a medium post which immediately pops up an annoying modal trying to get you to give them your personal information and phones home who knows what kind of analytics.

I'm not sure about the ethics of outline.com, but I can totally see why people are using it.

I'm not sure about the ethics of taking other people's content and putting it behind a paywall.

But it's Medium's content. You gave it away when you posted it here. In exchange, you got exposure[1]

[1]: https://theoatmeal.com/comics/exposure

As far as I know, Medium writers opt-in to this, in exchange for money if a paying user reads the article.

It's readable as long as you're willing to log in via Google or Facebook. Some people aren't.

I did neither and it opened fine, even when I opened the link in incognito mode. Maybe some A/B-testing thing?

Anyway, good to know there is a reason for it. One of my pet projects is a self-hosted open source paywall-buster and read-it-later service.

I wonder if the HN admins would be interested in integrating with the site?

I'm not an HN admin, but I'd love to learn more about your pet project.

It is still very basic, but basically it is an alternative to Pocket and Wallabag, implemented in Python/Django/Celery. I actually wanted to use this as a way to learn Go, but working on the idea itself plus the potential work (ActivityPub, IPFS integration) became more interesting than learning a new language, so I switched to something I'm more familiar with.

Anyway, the code is at https://github.com/lullis/nofollow. It would be great to have more people curious enough to actually run this.

Thanks, @rglullis, looks really cool! If I find time to play w/ it I'll let you know.

That would be great. I need to make some updates in the code and in the documentation regarding the sunset of the Mercury Parser service. Hope I can get some of it done this weekend.

It tells me to sign up if I want to read the article.

Try the "Just Read" extension.

They should definitely dig more into Jenkins X: it evolves a lot and it's pretty new but they're gonna have a stable version soon!

I'm still pretty new to devops and I personally love Jenkins, but man is it a PITA at times. My biggest gripe is that every plug-in seems to be no longer actively maintained. We rely on a huge list of plug-ins, and in some cases upgrading Jenkins will break them. It's gotten to the point where I'm afraid to update to even minor or patch versions.

I'm a very strong Concourse proponent. We've been using it exclusively for three years, and it's better than anything we've experienced that our customers use (I run a consultancy).

We run Concourse London User Group, and the first few instances were like some kind of Jenkins survivor's support group.

Trying to integrate Jenkins with GitHub Enterprise proved to be a real pain for me. It wasn't clear which git* plugin was the best to use for how I wanted to set up builds. It's literally impossible to deploy Jenkins in a repeatable manner with just configuration files; you must get it online and use the API to configure even simple things like API keys (e.g., specifying the ID of the key so your jobs actually reference the key and don't break).

Another big pain point, for me was, no obvious openstack integration. I wanted my jobs to run on ephemeral instances so all my dependencies could be defined in the job themselves, not having to rely that the Jenkins worker is setup just right.

I think Gitlab runners get this right, but I have not investigated too far.

You definitely can configure Jenkins in a repeatable manner. See https://wiki.jenkins.io/display/JENKINS/Groovy+Hook+Script

You can write all the configuration in groovy and it will execute when the Jenkins process starts up.

There is also "Jenkins Configuration as Code" plugin which allows you to have a single yaml file to configure most of the Jenkins system.
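
With that plugin, even the credential-ID problem mentioned upthread can be expressed declaratively. A rough sketch (field values are illustrative; the secret itself is injected via an environment variable rather than committed):

```yaml
jenkins:
  systemMessage: "Configured from code - do not edit via the UI"
  numExecutors: 2

credentials:
  system:
    domainCredentials:
      - credentials:
          - string:
              id: "github-api-token"     # fixed ID that jobs reference
              secret: "${GITHUB_TOKEN}"  # resolved from the environment at startup
```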


I managed to accomplish the task at hand ~2.5 years ago, so yes, it's possible to deploy jenkins in a repeatable manner.

I guess what I'm driving at is that it's not very declarative and it's cumbersome.

Jenkins is to CI,

as Nagios is to monitoring.

Both work ?

They both work perfectly fine for the very limited use cases they were meant to, at the time they were made, with the limited resources that were available.

Just like foot is the best way to travel and commute to work every day, as proven by the last 10000 years of people using theirs.

I find when people say the tools are limited they are trying to use the wrong tool for the job.

2 cents

> I find when people say the tools are limited they are trying to use the wrong tool for the job.

The tool is wrong because the tool is limited?

No, the key difference is that Nagios has perfectly viable alternatives (Zabbix admin here and it's not the only one). Jenkins however has no alternatives.

Bamboo or whatever it is called now is typical Atlassian crapware. Expensive as fuck, eats more resources than the stuff it builds, and did I mention it is yet another half assed product that got shoddily integrated into the usual Atlassian lineup?

GitLab CI is great for anything that is code (think build, test, deploy), but it is not suited for abstracting "non-development" jobs, which can perfectly well be automated in Jenkins (e.g. creating a dev environment with fresh data from production). Plus it is Docker-based and the runners poll for work, which means at minimum ~10s of startup time, compared to milliseconds for a Jenkins shell-script job running on an SSH-connected agent!

GitHub and friends are cloud-hosted, which is a big no-no. We're placing too much power in the hands of AWS, GCE, and Azure already; there's no way in hell it is a good idea to put private source code on a cloud provider.



Our org is happy with buildkite. We moved to that after fighting Jenkins for a number of years.

See https://build.gocd.org/ for the live instance of GoCD used to build Gocd (apache licensed CD orchestration tool backed by Thoughtworks).

I agree with this article on all points. Currently we have a massive Jenkins pipeline sprawl that's difficult to maintain. It is also difficult to create new jobs in Jenkins itself, especially if you are using pipelines. My average is around 100 test builds before I can get a full pipeline success for anything of modest complexity.

If all you are doing is using Jenkins to run simple bash scripts, you may be able to get away with it. The problems start when you want to add some logic to the pipeline, which you inevitably do, otherwise why bother with a pipeline?

First things first: are you going to use the scripted pipeline or the declarative pipeline? The declarative pipeline is a bit better, but it lacks examples, has lots of bugs (I've littered my code with references to JENKINS-XXXXX issues) and is very restrictive (arguably by design). Of course, you can have 'script' blocks inside your pipeline->stages->stage->steps blocks.
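To make the distinction concrete, here is a minimal declarative Jenkinsfile sketch that drops into a `script` block where the declarative syntax is too restrictive. Stage names and commands are made up for illustration.

```groovy
// Jenkinsfile (declarative) -- hypothetical stages for illustration
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                // Drop into scripted mode for logic (loops, variables)
                // that pure declarative syntax does not allow.
                script {
                    def targets = ['unit', 'integration']
                    targets.each { t ->
                        sh "make test-${t}"
                    }
                }
            }
        }
    }
}
```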

Then you want to take advantage of parallelization or conditional steps, and to visualize that you want Blue Ocean. Problem is, not all plugins are compatible with Blue Ocean, and it doesn't have all the features of the classic UI, so you often drop back down to 'old' Jenkins.

People will want to have a whole bunch of tools with incompatible versions in their builders. Not all are supported natively, so you need to figure out your versioning.

Once you figure all that, congratulations. Next guy to automate something will either find a similar pipeline to copy from, or will endure all the pain again. At this point you may want to use Groovy.

Groovy was absolutely the wrong tool for the job. Yes, I get that it works with Java, which Jenkins is based on. Still, it is the wrong choice. You see, the kind of things you want to automate often involve passing commands around, be they bash, Ansible, SQL statements, what have you. Groovy's string-escaping rules will ensure your life is pretty miserable (https://gist.github.com/Faheetah/e11bd0315c34ed32e681616e412...)
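The pain is easiest to see with a concrete `sh` step, where Groovy string interpolation and the shell's own `$` expansion fight over the same characters. A small sketch:

```groovy
// Inside a Jenkinsfile: layers of quoting fighting each other.

// Groovy double quotes interpolate ${...}, so shell variables need escaping:
sh "echo \$HOME"    // \$ stops Groovy interpolation; the shell sees $HOME

// Single-quoted Groovy strings don't interpolate, so $ passes through:
sh 'echo $HOME'     // the shell sees $HOME directly

// Mixing a Groovy variable with awk's $1 gets ugly fast:
def file = 'data.txt'
sh "awk '{print \$1}' ${file}"   // escape awk's $1, interpolate ${file}
```

Every extra layer (a bash command wrapped in ansible wrapped in a Groovy string) multiplies this bookkeeping, which is what the linked gist documents at length.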

You could get around most of these by perhaps moving most of the logic to containers and then running those. There again you'll run into problems with declarative pipelines, random things won't work and you'll be scratching your head.

However, if you are going to do that anyway, you're better off using a more modern system for CI, any system. Drone was already mentioned, there's also Concourse and a bunch of others. For CD, you can use Spinnaker as well.

Or maybe keep jenkins around but forget all the fancy stuff. Delegate all the 'thinking' to scripts and pretend the more recent development has never happened. You'll be saner that way.

If it takes you 100 tries for a moderately complex Jenkins pipeline, you have other problems that are not Jenkins's fault.

I wish I could bring you here to see you do better.

Or do you mean systemic corporate problems? In that case, I agree.

It still doesn't change the fact that Jenkins does not make my job any easier. I'll spend a day worrying about Jenkins idiosyncrasies ("why can't I use a pipe in sh", "why did my bash escaping disappear completely", "why 'dir' doesn't work with a container build agent?! (JENKINS-33510)", "why this input plugin won't work with blue ocean", "why can't I use a for loop in this piece of code in particular but it works elsewhere" (JENKINS-27421)).

Whereas with concourse or other newer build systems I can write a simple YAML description, which is modular and uses an existing standard, and test that in isolation. And then provide it as a building block for other tasks.
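For contrast, a Concourse task is a small standalone YAML file you can run locally with `fly execute` before wiring it into a pipeline. The names below (input, image, script path) are illustrative:

```yaml
# task.yml -- a self-contained Concourse task, testable in isolation
platform: linux
image_resource:
  type: registry-image
  source:
    repository: alpine    # any image with a shell works
inputs:
  - name: repo            # hypothetical input mounted into the container
run:
  path: sh
  args:
    - -ec
    - |
      cd repo
      ./run-tests.sh      # placeholder for the actual test entrypoint
```

Because the task file declares its own image and inputs, it can be composed into larger pipelines as a building block, which is the modularity being described here.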

I feel you, but why are your jenkins pipelines so complicated? I feel like your workplace's deployable artifacts should follow a familiar pattern and there should not be much guessing/re-inventing the wheel with jenkins scripts. I feel like complicated builds are usually the result of an application that is not very well thought out in the first place.

> I feel like complicated builds are usually the result of an application that is not very well thought out in the first place.

Welcome to the world of enterprise Java or .NET programming. Loads upon loads of crap. Best served with multiple frontends (e.g. web + mobile) that need different npm versions to compile, all of it out of a single fucking pom.xml, which is a nightmare in itself!

If you're mixing npm with pom files, you are asking for trouble. Jenkins's shortcomings have nothing to do with npm's crappy package management. (Not saying you said that, just pointing it out.)

> I feel you, but why are your jenkins pipelines so complicated?

You have an excellent point. Individual microservice containers are not complicated (then again, all they do is call a standardized script). The script builds an image from a Dockerfile and pushes it to the registry. I would classify it as a 'trivial' Jenkins job; pipelines aren't even used.

The pain starts when you want to do more than CI and try to get into CD. Or even worse, automate 'devops' tasks. That's where you run into all those warts.

A job could call Terraform, or spin up VMs, or run vacuum on a database, or any number of tasks. Or it may perform tasks on K8s to deploy a complex app. It may need to call APIs to figure out where to run things. And so on.

Since Jenkins is not only a CI/CD system and can do anything, people will try to make it do increasingly complicated stuff. And I'm arguing that this is wrong. If you have complex logic, it should be moved out of Jenkins so it can be more easily maintained and tested, and its dependencies isolated. One of the easiest ways to do that is with containers. At which point, Jenkins loses most of its usefulness and other, newer tools shine.

Alternatively, use more specialized tools. If it is for CD, and Spinnaker works for you, please use that instead.

I agree with you that bash heavy complex things are best suited for something like ansible.

However, deploying containers to environments like OpenShift and Kubernetes is extremely simple with Jenkins. I don't think that's complicated at all. As a rule of thumb, you should be able to hide all the complexity of your deployment in the Dockerfile. In addition, you can always use Jenkins's "build with container" functionality to build your application in a dedicated container on the fly. There are many ways to hide complexity with Jenkins.
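As a sketch of how small such a deploy job can stay when the complexity lives in the Dockerfile (the registry, image name, and deployment name below are made up):

```groovy
// Jenkinsfile -- hypothetical build-and-deploy stages; all image
// complexity lives in the Dockerfile, Jenkins just orchestrates.
pipeline {
    agent any
    environment {
        IMAGE = "registry.example.com/myapp:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t $IMAGE .'
                sh 'docker push $IMAGE'
            }
        }
        stage('Deploy') {
            steps {
                // Roll the Kubernetes deployment to the new image tag
                sh 'kubectl set image deployment/myapp myapp=$IMAGE'
            }
        }
    }
}
```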

I do agree with you that Jenkins is abused because it is more than a CI/CD tool. You need some experience using it to know what works well and what doesn't. Unfortunately, in the new-age "sprint agile" world, some random guy has to pigeonhole crap into Jenkins in 2-week time windows that shouldn't be there in the first place.

I also think that many devs underestimate what you can do running a local Jenkins from its war file on your MacBook. I like using Jenkins to automate tedious tasks for myself. As an example, it is trivial to write yourself a custom GitHub code scanner that will scan all files and folders in as many repos as you want. I like using Jenkins for outside-the-box things like that.


Um, so doesn't his workflow buy an essentially infinite maintenance problem with all his containers? This sounds like tomorrow's Node left-pad debacle in a different form.

Of all the undifferentiated heavy lifting you can offload to your cloud provider, CI makes a lot of sense:

- You probably want to store artifacts in its object store anyway.

- Rent exactly the right amount of compute for the concurrent builds running in any given minute.

- If your projects have sufficiently simple build-and-test entrypoints i.e. "make" then you're highly portable and not too worried about lock-in. You don't even need to be deploying to the cloud provider you're building on.
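That last point can be as small as a two-target Makefile at the repo root, so the CI job itself is just `make`. The targets and scripts below are placeholders, not a prescription:

```make
# Makefile -- a portable build-and-test entrypoint; any CI provider
# (or a laptop) runs the same thing, which limits lock-in.
.PHONY: all build test

all: build test

build:
	./scripts/build.sh

test:
	./scripts/test.sh
```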
