
It's great to see that Jenkins is following the path blazed by GoCD[1] and Concourse[2] to make the pipeline concept more central.

That said, this appears to be achieved by promoting the plugin into the default installation.

It also misses some of the additional advantages Concourse holds over Jenkins and GoCD: build configuration is purely declarative and can be checked in with the project. You know which version of your pipeline built the software at any point in its history. And you have a reasonable shot at recreating that build, because every task in every job runs in a fresh container.

These are killer features, in my view. Jenkins can be extended with plugins to try to sorta-kinda do either or both, but it's not part of the central organising concept of how it works. Windows can run some POSIX apps, but it's not a *nix.

Further out, Jenkins pipelines are tricky to do fan-out/fan-in with; in Concourse it's trivial. You have to lay out your pipeline by hand in Jenkins, whereas Concourse lays it out automatically based on declarative information about each job. Rather than a very rich API for plugins, Concourse boils the unit of extension down to "resources", which with three actions (check, get, put) can model points in time, git commits, S3 files, version numbers, interactions with a complex product repository and so on.
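To make that concrete, here's a rough sketch of a pipeline snippet (resource names, URIs and file paths are invented for illustration; S3 credentials omitted):

    resources:
    - name: source-code        # git resource: "check" discovers new commits
      type: git
      source:
        uri: https://github.com/example/app.git
        branch: master
    - name: nightly            # time resource: models points in time
      type: time
      source:
        interval: 24h
    - name: release-tarball    # s3 resource: "put" uploads, "get" downloads
      type: s3
      source:
        bucket: example-releases
        regexp: app-(.*)\.tgz

    jobs:
    - name: unit
      plan:
      - get: source-code
        trigger: true          # new versions of the resource trigger the job
      - task: run-tests
        file: source-code/ci/unit.yml

Those same three verbs are what keep the model small: check finds new versions, get fetches one, put publishes one.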

I used to tolerate CI/CD, as a necessary and worthy PITA. Now I find myself actively looking for regular tasks, resources, sanity checks and so on I can put into Concourse, so that I don't have to remember them or write them up in a wiki.

Disclaimer: I work for Pivotal, which sponsors Concourse development. But I wouldn't rave about it if I wasn't such a convert.

[1] https://www.go.cd/

[2] http://concourse.ci/




I've been a heavy Jenkins user for the last three years and I can completely see where you (and the Concourse team) are coming from. The page comparing Jenkins and Concourse hits all my right buttons (complexity in build specification, minimal pipeline, etc.), but having to BOSH the hell out of a new system (pun intended) just to get CI running seems like a PITA to me. It's the one thing preventing me from recommending it as a solution at the moment.

Maybe I'm missing something, but it seems like a lot of complexity is added to the deployment/worker management system, which is pretty much a requirement if you want slaves external to the master. Is there a way to run Concourse builds without BOSH (BOSH Lite still tries to shoehorn in things I don't want/need)?

We use a combination of VMware/KVM VMs, LXC and Docker containers for our builds, and we have our own working deployment system (Puppet successfully manages all our state for VMs and LXC containers) which I would like to integrate with Concourse. I really hope I'm missing something, because what I saw when I tried Concourse for a week made me swoon.


@vito from the Concourse team here.

We've recently started building standalone binaries which should lower the barrier to entry. Concourse itself has never been too tightly coupled to BOSH, it's just been the quickest feedback loop for us during development, so it ended up being the first thing we documented, primarily for internal use as we haven't really "launched" yet.

Binaries are available for download in the GitHub releases[1]. Check the README.md[2] for details. We'll be launching 1.0 very soon and part of this will include a major docs/website revamp which promotes the binaries to the "main stage". It also switches to BOSH 2.0, which drastically simplifies the deployment configuration necessary, but it still takes a backseat to the lower-upfront-cost distribution formats in the new site.

Glad you liked Concourse otherwise, and hopefully this helps. :)

[1]: https://github.com/concourse/concourse/releases

[2]: https://github.com/concourse/bin/blob/master/README.md


Does this mean it will be (is?) possible to deploy Concourse on a single machine without the headache of BOSH Lite? I've wanted to use Concourse, but when all you've got is a Mac Mini, doing a full BOSH deploy (or even BOSH Lite) is quite a big ask.


Yup (is). You'd just run `concourse web` and then `concourse worker` on the same machine. If all you have is a Mac Mini there's one gotcha, though: currently none of the resources will run on OS X, as they're built as Docker images. So you'll still need at least one Linux worker somewhere.

I think the next step from us may be to start building Vagrant boxes that just spin up a worker, parameterized with credentials to register with a Concourse `web` instance. That way you can run Concourse on OS X for iOS testing/etc. and still have all your resources and Linux containerization when you need it via a tiny local VM.


OK, that makes sense. Anything that makes it easier to deploy would be awesome, particularly when dealing with iOS-related CI/CD.


Are there plans to support other container managers (other than Garden-based managers)?


The nature of Garden is to support container managers as Garden backends. Garden itself is just a client/server API spec.

For example, Guardian (https://github.com/cloudfoundry-incubator/guardian-release) is in the works to replace the Linux backend with a thinner runC-based backend.

The main value we get from it is having a nice Go API and not having to overhaul everything using Garden every time some shiny new container tech comes out.


Yeah, version control of Jenkins itself has always scared me. There seems to be a pattern that we go through.

(in the beginning, there was light...)

* Create a small, tight, single-purpose Jenkins job

* Add a small tweak to it

(repeat adding tweaks)

(realize the Jenkins job now contains MANY different configuration options and the job itself is now a shell script in its own right)

* Sweep the "job" into a shell script. Check in said shell script

* Back up the Jenkins config, and hope no one asks why something's happened.

I now have a plugin that automatically checks the Jenkins config into source control, but again it doesn't solve the problem of matching up a particular Jenkins artifact to exactly what built it, and why.


We use Netflix's Job-DSL to keep Jenkins job configuration in source control (and to allow easier reuse than offered with job reuse plugins).

https://github.com/jenkinsci/job-dsl-plugin


I use http://docs.openstack.org/infra/jenkins-job-builder/, which is great as well: Jenkins configuration in a simple YAML file under source control.
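For anyone who hasn't seen it, a Jenkins Job Builder definition looks roughly like this (job name, repo URL and commands are invented; see the docs above for the real options):

    - job:
        name: example-app-build
        description: 'Managed by Jenkins Job Builder; UI edits will be overwritten.'
        scm:
          - git:
              url: https://example.com/example-app.git
              branches:
                - master
        builders:
          - shell: |
              make test
              make package

Running `jenkins-jobs update` against a directory of these files creates or updates the corresponding jobs on the Jenkins master.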


Apache Bigtop relies on this, and does a pretty spiffy job of configuring itself out of the box this way.

Will look more closely...


We're using this as well. It's got warts, but it's 100x better than authoring jobs in the web UI.


At my work we're running all Jenkins jobs in Docker containers using some simple scripting [1].

Works really well. Jobs can run on any slave, there are no snowflakes, and the full CI config is versioned in the repo along with the code. The Jenkins job just points to a single script and that's it.

[1] https://github.com/kabisa/jenkins-docker



I think you would find Concourse to be a very appealing alternative; out of the box it gets you a lot closer to reproducibility than Jenkins does.


GoCD pipeline config is actually (sort of) declarative: it's stored in XML format in an internal git repo, so it's versioned and you can recover/replay any version of it. In fact, you can visualise the very first version of a pipeline you ran three years ago in the Value Stream Map and execute it again.

Agreed that the format isn't ideal, but there are non-trivial problems in breaking it down, due to the advanced way GoCD tackles the diamond dependency problem (aka the fan-in functionality).

Totally agree with you re: pipelines as a first-class citizen. I wrote this back in 2014, so it's kinda old now and Jenkins has surely got better, but I think it still applies by and large: https://highops.com/insights/continuous-delivery-pipelines-g...


Concourse avoids the fan-in/fan-out problems of GoCD's XML configuration by performing planning for you.

That is, you define the inputs and outputs of a job, then Concourse derives how to carry out your jobs in the correct order under the correct conditions.

A Concourse YAML file is pretty flat as a consequence.
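For example (job names invented), fan-out/fan-in falls out of `passed:` constraints rather than any explicit graph layout:

    jobs:
    - name: unit-linux          # fan-out: both unit jobs trigger on the same commit
      plan:
      - get: source-code
        trigger: true
      - task: test
        file: source-code/ci/test-linux.yml
    - name: unit-windows
      plan:
      - get: source-code
        trigger: true
      - task: test
        file: source-code/ci/test-windows.yml
    - name: package
      plan:
      - get: source-code
        trigger: true
        passed: [unit-linux, unit-windows]  # fan-in: only versions that passed both
      - task: build
        file: source-code/ci/package.yml

Concourse works out from the `passed:` declarations that `package` should only run against a commit once both unit jobs have succeeded with it.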

That's a great writeup. I think you'd find Concourse attractive.


I played with Concourse when it was initially released, but it wasn't there for us yet, and at the moment there's no reason to change. I definitely need to test it again, though.

I couldn't find any docs on intra-pipeline dependency management, though. Any links?

Thanks!


I'd have to ask around; I imagine the Cloud Foundry release engineering team have some tricks up their sleeves.

As for my own project, we have an increasing group of downstream consumers of our API. There are a few ways I could signal upgrades. They could just watch my repo and narrow it to the directories they're interested in. I can also publish a file on S3 or push into a git repo that they watch.
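Sketching the S3 option (bucket and names invented, credentials omitted): the upstream pipeline `put`s a versioned artifact, and downstream pipelines declare it as a triggering `get`:

    # upstream pipeline
    resources:
    - name: api-release
      type: s3
      source:
        bucket: team-api-releases
        regexp: api-(.*)\.tgz

    jobs:
    - name: publish
      plan:
      - get: source-code
        trigger: true
        passed: [integration]
      - put: api-release        # downstream pipelines watch this resource
        params:
          file: built/api-*.tgz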


I have been using GoCD exclusively and have been quite satisfied. The pipeline as a first class citizen concept really resonates well.


Cloud Foundry used to be built on GoCD, and before that Jenkins. But it's close to 100% Concourse now.


> You know what version of your pipeline built the software at any point in its history.

Source control can tell you when changes to the pipeline were checked in, but they actually take effect when they're applied with 'fly set-pipeline'. That may be before or after they were checked in. Perhaps a sensible team would set up a pipeline to watch the pipeline repository for changes and apply them automatically. Mine hasn't.
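A sketch of what that self-applying job could look like (resource names, the helper script and the credential wiring are all invented; assumes the task can obtain a `fly` binary, and uses the `{{var}}` templating that `fly set-pipeline` substitutes):

    jobs:
    - name: reconfigure
      plan:
      - get: pipeline-repo      # git resource watching the repo that holds pipeline.yml
        trigger: true
      - task: set-pipeline
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: alpine}   # any small image with sh will do
          inputs:
          - name: pipeline-repo
          params:
            ATC_URL: {{atc-url}}
            CI_USERNAME: {{ci-username}}
            CI_PASSWORD: {{ci-password}}
          run:
            path: sh
            args:
            - -ec
            - |
              # hypothetical helper that fetches a fly binary from the target Concourse
              ./pipeline-repo/ci/install-fly.sh
              ./fly -t ci login -c "$ATC_URL" -u "$CI_USERNAME" -p "$CI_PASSWORD"
              ./fly -t ci set-pipeline -n -p main -c pipeline-repo/ci/pipeline.yml

With something like that in place, the version of the pipeline that built any given artifact really is the version that was checked in at the time.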

> Concourse boils the unit of extension down to "resources"

Which you can only actually use by forking the Concourse BOSH release to add them to the workers [1]. I'm not sure I'd honestly call that a point of extension.

[1] https://github.com/concourse/concourse/blob/master/jobs/grou...



Oho! That's excellent. I see that came in 0.74; we're still on 0.70, I think, so I look forward even more to upgrading.


Yeah, a few of us griped at the Concourse team about this large escape hatch from historical reproducibility and they took us seriously.


Wait, what? Jenkins can do whatever pre-processing/post-processing you need by means of tasks. Regular Ant/Maven/Java tasks. And that has been there, in production, since time immemorial. People have been doing it for a decade now.

GoCD and Concourse add... what?


Or I could use Makefiles. Or just write a giant single bash script which only works on three workstations in the office.

GoCD and Concourse make the connections as important as the individual jobs. Concourse goes further and makes checked-in configuration and disposable containers the basis of the build environment.

This sounds like no big deal, but it's critical.

The point is not whether Jenkins can "do" these things. With the right stew of plugins, it can. The point is that Jenkins does not really think this way. It starts with the Single Ball of Build as the unit of work and retrofits other possibilities.


> build configuration is purely declarative and can be checked in with the project.

Seems like you are missing the point of pipeline and the Jenkinsfile concept, namely that the pipeline is part of the SCM.

Yes you can have the pipeline defined in the old-style text box in the job config, but that is intended to be used only while you develop the pipeline. Once you get it developed it gets checked in to source control.

Of course there are downsides to having the build instructions in source control, e.g. a drive-by hack via pull request... granted, this is nothing new... you can always do the drive-by hack in a unit test... but it does take a little more work. In that regard, having the Jenkinsfile in a text box (or better yet, in a separate SCM) can be a useful protection (as can mandating that PRs use the target branch's Jenkinsfile rather than the Jenkinsfile in the PR... again, a feature in Jenkins pipelines).

For me the real advantage in pipeline is the organization folders support. You can tell Jenkins to scan all the repositories in your GitHub org and automatically create Jenkins jobs for any branches of any repositories that have a Jenkinsfile... PRs will automatically (subject to basic drive-by hack protection) be built and the commit will be tagged with the result.

So as with all things Jenkins, you have choices... we are providing some opinionated defaults (which is a change from the 1.x series)

> That said, this appears to be achieved by promoting the plugin into the default installation.

So the thing to remember is that the core of Jenkins is better viewed as a platform; the plugins are where the functionality really lives. I would expect to see more of the current core functionality shipped out of core and into plugins. There is no reason why the Freestyle job type needs to remain in core. The advantage of having this functionality outside of core is that we can be more responsive in developing features.

> Further out, the Jenkins pipelines are tricky to do fan-out/fan-in with

I might have a different view on that claim, but hey I'm significantly biased.

OTOH, my personal view is that for 99% of jobs pipeline is overkill and literate builds are actually a better fit... but sadly most people don't seem to like the idea of their source control having a README.md with a - shock horror - "build" section that contains the verbatim commands required to build the software (perhaps with an "environments" section that describes the build/test toolchains and environments)... I guess too many people have signed up to the mortgage-driven development manifesto [1] to want to leave a README file in source control explaining how to build and release the software!

Disclaimer: I created the weather report column and I am an active Jenkins developer.

[1] https://refuctoring.wordpress.com/2011/01/13/the-mortgage-dr...


> Seems like you are missing the point of pipeline and the Jenkinsfile concept, namely that the pipeline is part of the SCM.

I guess I missed that Jenkins is heading that way. It's what Concourse does and I'm a fan of having CI/CD live in the repo.

> For me the real advantage in pipeline is the organization folders support. You can tell Jenkins to scan all the repositories in your GitHub org and automatically create Jenkins jobs for any branches of any repositories that have a Jenkinsfile... PRs will automatically (subject to basic drive-by hack protection) be built and the commit will be tagged with the result.

For PR-building on Concourse, the resource I'd recommend is: https://github.com/jtarchie/pullrequest-resource
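Wiring it in is a custom resource type plus a resource, roughly like this (repo name and token variable invented; the resource's README has the exact parameters):

    resource_types:
    - name: pull-request
      type: docker-image
      source:
        repository: jtarchie/pr

    resources:
    - name: app-pr
      type: pull-request
      source:
        repo: example/app
        access_token: {{github-token}}

    jobs:
    - name: test-pr
      plan:
      - get: app-pr
        trigger: true
      - task: unit
        file: app-pr/ci/unit.yml
      - put: app-pr             # report the result back to the PR's commit status
        params:
          path: app-pr
          status: success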

> we are providing some opinionated defaults (which is a change from the 1.x series)

I see this more and more in the Java ecosystem and I think it's a good thing.

> I might have a different view on that claim, but hey I'm significantly biased.

Me too! :)

> OTOH my personal view is that for the 99% of jobs pipeline is overkill

I am starting to head in the other direction. We've historically fallen into creating "big ball of mud" build systems because it was just too hard to decompose and manage them as smaller units that could be rearranged quickly and safely.

Concourse makes it so trivial that the gradient for what is easy points in the other direction. It is less painful to lay out a pipeline (a graph, really) of builds that are composed of small pieces, than to have one gigantic Build To Rule Them All.

At Pivotal the practices around Concourse are evolving extremely quickly, because teams are discovering that it's really easy to delegate more and more to it. You start with a simple git->unit->feature->deploy pipeline, but soon you realise it's easy to assemble all sorts of things. The best is yet to come.





