
Jenkins pipelines as YAML - eloycoto
https://jenkins.io/blog/2018/07/17/simple-pull-request-plugin/
======
colemickens
This is highly confusing to me. Pipelines in Jenkins are fragile. You can
spend entire days trying to do _very_ common things and run into tons of
friction. The differences between scripted and declarative are severe, in
terms of how errors and continuation are handled, and the way that plugins
interact depending on how you've nested stages and steps. Even things like
environment vars and CWD can be a pain to coordinate. Trivial pipelines take
far, far, far too long to set up and get working well.

This just appears to be a layer on top of the still-limited declarative
pipelines. I'm not sure trading a groovy DSL for an extra layer (of yaml, no
less) is a good deal.

It doesn't solve the weirdness around multibranch, and it doesn't address the
mismatch of having a parameterized pipeline where that parameterization has
to live outside the Jenkinsfile (or, if it is encoded in the Jenkinsfile, it
acts really oddly: running the Jenkinsfile actually reconfigures the job that
invoked it, and so on). This is why everyone has to continue using Groovy DSL
and/or JJB to reinstantiate parameterized jobs or handle jobs that deal with
multiple Jenkinsfiles in a project.
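
To make the oddity concrete, here's a sketch of the pattern I mean (parameter
name and scripts are hypothetical). Declaring parameters inside the
Jenkinsfile means each run rewrites the parameter configuration of the very
job that invoked it:

    pipeline {
        agent any
        // On every run, Jenkins re-applies this block to the
        // configuration of the job that ran it
        parameters {
            string(name: 'KUBE_VERSION', defaultValue: '1.10',
                   description: 'version under test')
        }
        stages {
            stage('Test') {
                steps {
                    sh "./e2e.sh ${params.KUBE_VERSION}"
                }
            }
        }
    }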

I know Jenkins isn't going anywhere, but its legacy shows a lot.

~~~
kohsuke
First, full disclosure, I'm the creator of Jenkins.

I'm sorry to hear that you had a bad experience with Jenkins Pipeline. I can
see that you know a lot about it.

There's actually no difference in how continuation is handled between
scripted and declarative, so I'm curious to know more about what hit you,
because I suspect it's something else (though obviously equally
frustrating!). I'm similarly curious about improvements to error reporting,
because I think that's something the Pipeline team cares about, and it's one
of those things where how errors are made in the real world is always more
interesting than what we can imagine. I used to work on compilers, so I know
the frustration of a poor error message pushing you down the wrong lane, only
to discover a few hours later that all it took was a one-line fix! Modern
improvements in Pipeline (like declarative and this one) are in no small part
motivated by making those error checks more thorough, easier, and more
upfront. So I think this is a change in the right direction.

My perception has been that parameterized jobs are in decline, in part
because more people are triggering automation implicitly through commits,
rather than through an explicit "run this" button. Parameters are more often
implied from the context (branch, commit message, creation of tags, etc.), as
opposed to given explicitly from the UI.

Stepping back from those specifics, I think regrettably software has bugs,
and there are always more usability improvements to be made, so we are just
working on those one at a time, which kinda summarizes my entire journey with
Jenkins :-) So in that spirit, I want to make sure we learn from your
suffering.

~~~
colemickens
Thanks for the response. Mine is a bit terse as I'm on mobile for a few days.

For one, the always/finally block in declarative pipelines flatly doesn't work
in scripted mode. You have to set/maintain job status yourself and
throw/catch. It's very unpleasant.
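
To illustrate (stage contents hypothetical), declarative gives you a built-in
post section:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh './build.sh' }
            }
        }
        post {
            always {
                sh './cleanup.sh' // runs whether the build passed or failed
            }
        }
    }

In scripted mode you approximate it by hand with try/catch/finally, setting
the build result yourself:

    node {
        try {
            stage('Build') {
                sh './build.sh'
            }
        } catch (err) {
            currentBuild.result = 'FAILURE' // maintain job status yourself
            throw err
        } finally {
            sh './cleanup.sh'
        }
    }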

I am shocked to hear that parameterized builds are in decline. Every project
I've seen or touched that used Jenkins had multiple pipelines in a repo that
needed to be executed with a number of different configurations. If there
were better native support for pipelines calling other pipelines, this
_might_ not be such a problem.

Example: a kube-related project. It has a Jenkinsfile for release builds and
another for general checkin/PR builds. Each of those needs to be tested with
two versions of kube, with flannel and calico, and with and without RBAC. I
don't know of a clean way to do that without Job DSL/JJB and parameterized
jobs, and in my experience this type of scenario is not uncommon. We need
visibility at the config level; it's not helpful to shove all of that into a
single job with coarse-grained reporting rolled up.
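
The Job DSL workaround looks roughly like this seed script (repo URL and
naming are hypothetical), generating one visible job per configuration:

    // Cartesian product: kube version x CNI x RBAC
    [['1.9', '1.10'], ['flannel', 'calico'], ['rbac', 'norbac']]
        .combinations()
        .each { kube, cni, rbac ->
            pipelineJob("e2e-kube${kube}-${cni}-${rbac}") {
                definition {
                    cpsScm {
                        scm {
                            git('https://example.com/kube-project.git')
                        }
                        scriptPath('Jenkinsfile.e2e')
                    }
                }
            }
        }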

Even if you put parameters aside, there is just a fundamental mismatch,
because the pipeline (today) only encodes steps. What about triggers? Having
them be part of the pipeline, and having the pipeline modify its own job in
Jenkins, feels very, very, very wrong to me. There's a reason Job DSL and JJB
kept those separate. In fact, if it weren't for the Pipeline visualization, I
could've gotten the same functionality with JJB/Job DSL alone, with some bash
scripts, with considerably less heartache.

I don't know that I know a "lot" about Jenkins, but I do know that I strive
for easily repeatable setups that need git triggers, need to post back
status, and need to run bash scripts. This most recent attempt was the third
time I've owned that all-up, and it was still very tedious. And it was really
much easier to achieve these simple requirements with k8s's Prow... even with
having to dive in and hack some of Prow's plugins.

All of the mature, repeatable Jenkins setups I've seen are small mountains of
Groovy scripts with their own workarounds embedded. There needs to be a
system config DSL too, because maintaining huge blobs of XML and writing
groovy.init.d scripts keeps me awake at night.

Anyway, Jenkins is crazy good stuff, especially for its age, but for newer,
smaller projects that don't need the 10,000 Jenkins plugins, it feels
unwieldy. I had hope that Jenkins X was going to tackle some of these things,
but so far I'm not sure.

Thanks again for taking the time, I hope my further feedback is constructive.

~~~
kohsuke
Thanks, this is really helpful.

I need to find someone from the Pipeline team to pull into this, so in the
meantime I'm just responding to what I can contribute.

Just to make sure: I'm not saying parameters are disappearing, I'm just
making an observation that fewer people seem to be using them. Take your
example of release vs. general checkin/PR builds. I see more and more people
doing releases as automation that kicks in after creating a tag or cutting a
release branch. Or the master branch is always deployable.
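
As a sketch of what I mean (stage contents hypothetical), a single
Jenkinsfile can cover both cases, with the release stage keyed off the tag
rather than a UI parameter:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { sh './build.sh' }
            }
            stage('Release') {
                when { buildingTag() } // implied from context, no "run this" button
                steps { sh './release.sh' }
            }
        }
    }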

I agree with you that reporting capability needs to be better in order for
one Jenkinsfile to pack lots of different test cases. I believe the team
totally gets its importance.

The "system config DSL" you mention has evolved into "Jenkins config as code",
and I've referred to it in my other comment. I think we are on the same page
that it's a crucial part of a repeatable mature Jenkins setup. I think I also
totally get what you mean by "unwieldy", and Jenkins Essentials in that
comment is making steps to attack that challenge.

I'd love to hear where Jenkins X fell short for you, because I think it
should speak to some of these challenges by embracing certain best practices.

------
kohsuke
Creator of Jenkins here.

First of all, this is a Google Summer of Code project. Abhishek, who is
driving this work, is doing great work, so I hope people can give him
encouragement and feedback to push him forward. I'll make sure he sees the
feedback and stops by to answer any questions you might have.

This is one of the efforts pushing the envelope of Jenkins to solve problems
people have had with it. Reading some of the reactions here, I wanted to use
this opportunity to introduce other, bigger efforts currently going on in
Jenkins that I think address various points raised in this thread.

* Jenkins Essentials is aiming to be the kind of "readily usable out of the box" Jenkins distribution that is a lot less fragile, because it's a self-updating appliance with sane defaults and an obvious path to success.

* There's an architecture effort going on to change the deep guts of Jenkins so that data won't have to live on the file system and can instead go to managed data services.

* Jenkins configuration as code lets you define the entire Jenkins configuration in YAML and launch Jenkins as a Docker container to do immutable infra. Jenkins Pipeline lets you define your pipeline in your Git repo, so that's the other part of immutable infra. Between modern Pipeline and efforts like this one, there's no need to write Groovy per se; it's just a configuration syntax based on brackets, like nginx's, which happens to conform to Groovy syntax, so that when you need to do a little bit of complicated stuff you can, but you don't need to.

* Finally, Jenkins X is focused on making CD a whole lot easier for people using and developing apps for Kubernetes. It's a great example of how the community can take advantage of the flexibility and OSS nature of Jenkins for the benefit of users.

* A few people mentioned container-based build environments, which are very much a central paradigm in modern Jenkins (and thus obviously in Jenkins Essentials and Jenkins X). See the very first page of the tutorial! [https://jenkins.io/doc/pipeline/tour/hello-world/](https://jenkins.io/doc/pipeline/tour/hello-world/)
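
Roughly, the style that tutorial teaches looks like this (the image name is
just an example):

    pipeline {
        agent {
            docker { image 'node:10-alpine' } // each build runs in a clean container
        }
        stages {
            stage('Test') {
                steps {
                    sh 'node --version'
                }
            }
        }
    }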

~~~
slavik81
Jenkins Essentials sounds great. I've become very good at not breaking Jenkins
installations. Jenkins is a powerful tool, but it's hard to recommend when
that's a skill you must acquire.

I remember when I installed the git plugin, which forced the credentials
plugin to update, which caused runtime failures in communication with some
other core plugin, so we reverted the install/update, which broke the whole
system because the update had renamed fields in the configuration and the old
version didn't understand them...

Since then, I have always updated all core plugins together in lockstep. Based
on the name and description, that sounds like what Jenkins Essentials would do
too. If so, that's a good sign.

Simpler, more reliable administration is exactly what Jenkins needs and you
seem to have a credible way of achieving it, so I'm excited to see the
results.

------
zdw
Looks interesting, although it seems very similar to another project, Jenkins
Job Builder ([https://docs.openstack.org/infra/jenkins-job-builder/](https://docs.openstack.org/infra/jenkins-job-builder/)),
which can describe jobs as YAML or JSON and is designed one level higher, as
a meta-job creator that can create many jobs, all with slight variations,
with all of the config kept in version control (no manual fiddling with the
Jenkins GUI, other than for troubleshooting). With JJB, most jobs aren't
pipelines, but traditional Groovy-based pipeline scripts can be included and
run.

It would be great if there were a pre-run remote lint step like the one that
can be run on Declarative Pipeline jobs
([https://jenkins.io/doc/book/pipeline/development/#linter](https://jenkins.io/doc/book/pipeline/development/#linter)).

~~~
rhencke
Also highly recommended is Jenkins Job DSL, which, despite also using Groovy
like Jenkins Pipeline, takes a far different approach.

In Job DSL, you write Groovy scripts to declare your job. These scripts, in
turn, write the config.xml that Jenkins uses to actually define and execute
jobs.

One of the big benefits to us is that it is _not_ an alternate execution
engine like Jenkins Pipeline is. It just takes over the chore of writing the
job XML for you, instead of you editing it through the GUI.
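
As a sketch of what that looks like (job name, repo URL, and schedule all
hypothetical):

    job('nightly-build') {
        scm {
            git('https://example.com/app.git')
        }
        triggers {
            cron('H 2 * * *') // nightly, spread out by Jenkins' H hash
        }
        steps {
            shell('./build.sh')
        }
    }

Running the seed job generates or updates the corresponding config.xml, and
Jenkins then runs the result like any hand-configured job.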

We had a large, complex, manually 'versioned' job chain that did not lend
itself to being re-written in Jenkins Pipeline, as it made heavy use of
plugins that were not supported by Pipeline at the time.

By using Job DSL, we were able to incrementally replace the manually
maintained jobs with script-generated ones. Importantly, with nothing breaking
for end users! All download links, URLs, etc of the new jobs exactly matched
the old ones, so it was a very painless, incremental adoption that people did
not even notice!

If you've got an existing job chain you are trying to get under control, Job
DSL comes highly recommended.

~~~
humbleMouse
Agreed, I love the groovy jenkins job DSL. It's so awesome and simple. Easy to
put everything on github and forget about it.

~~~
vorg
Besides all the special cases in the normal use of Apache Groovy (e.g. `==`
is defined differently than in Java), you also have to remember all the
Jenkins-specific special cases (e.g. Groovy's collection methods don't work).
I wouldn't call it simple or easy at all.
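
For example, even plain Groovy holds surprises for Java folks:

    // In Java, == on objects compares references; Groovy's ==
    // calls equals() (or compareTo() == 0 for Comparables)
    def a = new String('build')
    def b = new String('build')
    assert a == b    // true in Groovy; Java's == would say false
    assert !a.is(b)  // .is() is Groovy's reference-identity check

And then you have to remember the Jenkins-specific layer on top of that.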

~~~
rhencke
Are you sure you're not thinking of Jenkins _Pipeline_?

Jenkins _Pipeline_ definitely has these issues. Jenkins _Job DSL_ has always
worked fine, including all the special collection method operations (grep,
find, collect, each...)

------
SOLAR_FIELDS
I think it totally makes sense to do stuff like this as long as you keep a
very thin layer of abstraction between your YAML declarations and Jenkins
config. Like, someone familiar with Jenkins and the naming conventions should
easily be able to come in and map what buttons and boxes they fill in the UI
to what is declared in the YAML. I’ve worked with several build systems that
try to get way too clever with this and all you end up with is huge amounts of
config files that only a few people are able to understand and manage (even if
there is nice documentation!). Build systems are just as important for
following the Principle of Least Astonishment as codebases are.

------
Chico75
This just lowers Jenkins to the same level as every other CI tool. Sure, YAML
is a lower barrier to entry, but it's also a low ceiling.

YAML doesn't let you express logic easily compared to any programming
language, and build logic always tends to grow. Programming languages also
allow you to create abstractions to hide some of the complexity from users.

~~~
scarface74
My experience with yml-based builds and deployments is basically using the
yml file to tell the build system what code to run to do the build - usually
other shell scripts or some other scripting language that can also live in
the same git repo.

~~~
jdi92
I read YAML for Ansible like an ordered array.

It's a namespace with the values you want inline, in my head.

They're INI files or .cfg files to me. Not code.

The code is the Python in Ansible, or the Go in Terraform, that applies those
values.

Using config mgmt tools like they're vanilla programming languages is weird.

~~~
scarface74
Exactly. Why would I want to learn another pseudo scripting language when I
have Powershell, bash, Python, etc at my disposal?

------
ruffrey
I am so grateful for this. As a non-Java person, I found the Groovy pipelines
incredibly painful to incrementally fix on the server. Nothing like pushing
20 commits in a row and forcing a build, only to find yet another odd syntax
error each time.

~~~
stefan_
Still have to do that to build the thing in the first place. I do not
understand at all how every CI solution out there makes it entirely impossible
to trivially test a pipeline. Here we are building something whose entire
point is that it can run _wherever_, but the one place it can't run is my
local machine? It's like they are doing it out of spite.

~~~
eddieh
I seriously don't understand how anyone can use most of these CI solutions.

Here is what everyone should demand from a CI solution:

* Actual code that specifies a pipeline (not a GUI/config-file/DSL)

* Supports any SCM, workflow, build system, and language

* Supports complex mixed-language, mixed-build-system projects

* Doesn't make assumptions about project's structure and processes

* Able to run locally (but scales to clusters, etc)

* Written in a common scripting language

* Completely end-to-end customizable in the scripting language it's written in

The only one I know that fits is BuildBot, but practically nobody uses it.

[edit: formatting]

~~~
scarface74
Well if the whole world has a different perspective than you, have you thought
that maybe their wants/needs are different?

~~~
eddieh
Sure, that could be the case, but I think most people who set up and use CI
just pick Jenkins or whatever on a whim, and then we all read about horror
stories and bad experiences because the solution they picked is a square peg
and they're forcing it into a round hole.

~~~
scarface74
Why do I need all of that bespoke complexity? My yml file is just there to
tell the build system the scripts or shell commands I want to run - which are
also in my git repo.

Fundamentally, a CI system for a simple application just has to gather
dependencies, build a package/run automated tests and store the artifact
somewhere.

A deployment system just has to copy the deployment package to a server/group
of servers and do some type of installation based on an automatic
(dev/integration environment) or manual approval process (whoever is
responsible for pushing to any other environment has to approve the release).
There are a million ways to skin a cat, but if your build process or
deployment process is too complex, maybe it's more a question of the maturity
of your framework or tooling.

My build process for .Net web apps is:

- nuget restore

- msbuild

- run nunit

- zip artifacts based on version.

My deployment process is:

- run a CloudFormation file to configure AWS resources

- deploy code to a VM

- run one or two commands to configure IIS or install a Windows service.

- run a few AWS CLI commands to start autoscaling, reconfigure API Gateway,
etc.

No build servers to maintain, just an agent running on the target VM for
deployment and a slightly custom Docker image for builds.

---

For Lambda functions and scripting languages it's even simpler...

Build:

- import dependencies

- create zip file

Deployment:

- run a CloudFormation YML file to deploy the lambda

The CF file can be the deployment step. If I need something programmatic, I
can create a custom lambda-backed resource that is run when the CF file is
run.

------
mart187
We have Jenkins connected to ECS as a build agent pool. Every team has its
own Jenkins instance running, secured with AD login. Pipelines use
Jenkinsfiles. My only critique is that it's not fully CI-as-code, because you
need to manually create the pipeline, connect it to the git repo, etc.

~~~
michaelneale
It may not work for you, but "organization folders" (a feature of
multibranch) were designed to discover projects from Bitbucket, GitHub, etc.
(i.e. a new repo appears with a Jenkinsfile, and a new project is
automatically configured). Some people have had luck with them (they don't
always work how you want, but when they do, you don't need to manually create
the pipeline beyond dropping a Jenkinsfile into a new repo).

------
rollulus
I'm wondering, who uses Jenkins nowadays and why?

A few years ago, at one of my first gigs in an enterprise environment, I used
Jenkins for the first time to test and build my stuff. When I needed a newer
version of my compiler or a specific linter, some fellow had to install it on
the VM that was running Jenkins.

Later, working in a more modern environment, we started using GitLab CI. It
was bliss. I specified the Docker image with my favourite tooling and my
stuff got built in there. When some tooling changed, I updated my image.

At my current gig, again an enterprise, it is Jenkins everywhere. They do the
most complex things with it, orchestrating entire releases, integration tests,
etc. I don't know what to think of this yet.

How does the HN crowd see this?

~~~
moduspol
We're using it because it costs nothing and it's what everyone already knows.
We're making a big architectural shift to microservices and I made a pitch for
the Travis / CircleCI style workflow, but after Jenkins pipelines were
discovered, that was the compromise that was made.

We've only got our toes in it now, but from the look of it, you theoretically
_can_ use a Jenkins pipeline (with a Jenkinsfile) to get some of the benefits
of those systems. The problem is that they also let you throw those benefits
away: your Jenkinsfile can assume certain plugins are installed, assume other
jobs are configured on the same Jenkins instance... basically all the things
that led to Jenkins becoming what it has become: a carefully configured
sacred cow that must be meticulously backed up and that everyone is scared to
update.

Having builds trigger on push isn't easy if you're not using GitHub or
BitBucket, and having a series of pipelines that trigger off of each other
is... not clean. You can certainly trigger another job as a "post" action just
like you could in any other Jenkins job, but now your upstream job contains
the logic for whether or not a downstream job is triggered. What if your
downstream project (like a VM image) only wants builds from a certain branch?
Or should hold off on new builds from a certain microservice while QA
completes their testing? I guess you'll need to edit the Jenkinsfile for the
upstream project (likely someone else's project) and be careful not to break
it.

~~~
fatninja
>> Your Jenkinsfile can assume certain plugins are installed, assume other
jobs are configured on the same Jenkins instance

I think what worked for us in this case was using a Jenkins shared
library[1]. We provide a common template for the common stacks and expose
only a few configurable options. This really helps in maintaining sanity
across the Jenkins environment, and since you maintain the shared lib, you
can control the dependencies.
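
As a rough sketch of the pattern (library, step, and option names are all
hypothetical), the template lives in the library as vars/standardBuild.groovy:

    // vars/standardBuild.groovy
    def call(Map config = [:]) {
        pipeline {
            agent any
            stages {
                stage('Build') {
                    steps {
                        // only a couple of knobs are exposed to teams
                        sh(config.buildCommand ?: './build.sh')
                    }
                }
            }
        }
    }

and a team's entire Jenkinsfile shrinks to:

    @Library('our-shared-lib') _
    standardBuild(buildCommand: 'make test')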

[1] - [https://jenkins.io/doc/book/pipeline/shared-libraries/](https://jenkins.io/doc/book/pipeline/shared-libraries/)

------
empath75
This would be great if you didn't have to use scripted pipelines for so much
stuff. Declarative pipeline isn't there yet.

~~~
zdw
From what I can tell, this creates a mapping of YAML to Declarative pipeline
syntax, so it's unlikely to improve on that.

~~~
empath75
Yeah that's what I meant, though I guess it was ambiguous.

------
tuyiown
Ha, YAML. Whenever it's mentioned, it reminds me that taking a look at the
spec[1] is a great demonstration that things can derail badly even without a
large committee.

[1] [http://yaml.org/spec/1.2/spec.html](http://yaml.org/spec/1.2/spec.html)

------
xfalcox
Amazing. I had this project at my previous job but left before getting it
done. It's so cool to see that someone, somewhere, had the same need.

------
IloveHN84
Maybe they will be cleaner than Groovy pipelines... those look really ugly
once your build process involves parallel steps.
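
For comparison, the declarative form of parallel stages reads a bit more
linearly (stage names and scripts are hypothetical), and it's presumably what
a YAML layer would map onto:

    pipeline {
        agent any
        stages {
            stage('Tests') {
                parallel {
                    stage('Unit') {
                        steps { sh './run-unit.sh' }
                    }
                    stage('Integration') {
                        steps { sh './run-integration.sh' }
                    }
                }
            }
        }
    }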

------
merinowool
There are technologies that are built using COD: consultant-oriented
development. One of its principles is that only a trained consultant should
be able to configure the product, and that doing so should require knowledge
impossible to find in freely available documentation. It also needs a
multitude of gotchas that can't be solved using common sense. Jenkins, I
believe, is one such tool. We have completely moved away from it.

~~~
zorkw4rg
I guess that rather than dealing with the agony of moving away from a
sufficiently large and complex Jenkins setup, people would rather delude
themselves into believing it's actually good software.

Just the amount of obtuse setup that thing requires is crazy, and it always
ends in a mess. I have no idea why there seem to be so many people praising
it in this thread. I don't fault Jenkins, though; it's just old, obsolete
software that should be replaced.

------
gm-conspiracy
Isn't this how drone.io works?

------
nrclark
I'm excited about this! I would love for pipelines to get easier to configure.

------
jlebrech
I like YAML for editing preexisting files, but they should come with a
generator.

~~~
bryanlarsen
Jenkins has a declarative pipeline generator; it should be relatively
straightforward for it to output YAML rather than the standard Groovy-based
declarative pipeline DSL.

------
jeena
Why does everything always need to be in the root of a git repo?

~~~
digitalsushi
I never put my Jenkinsfile in the root, and it works just fine.

If I left it in the root, I wouldn't have to configure it... but I can live
with that.

------
nik736
What are the reasons one would use Jenkins over Drone?

