I love Jenkins, and use it professionally.

With that said, I'd really like to see better documentation on the Jenkinsfile Pipeline format. I've tried to get started with it a few times, and haven't had tons of success. Stuff like "How do I pull in secrets", and "How do I control a plugin". I appreciate that it's Groovy-based, but that's not particularly helpful information (for a hack like me, at least).

The snippet-generator is nice, but it doesn't necessarily produce working code. Especially for things like getting secrets into a build. And it doesn't give me a broader picture for "How do I even write one of these from an empty text-box".
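
From what I've pieced together, pulling a secret into a declarative pipeline looks roughly like this (the credential ID 'my-api-token' is made up; yours comes from Manage Jenkins > Credentials), though I'd love official docs that confirm it:

  pipeline {
    agent any
    stages {
      stage('Build') {
        steps {
          // Bind a secret-text credential to an env var, scoped to this block only
          withCredentials([string(credentialsId: 'my-api-token', variable: 'API_TOKEN')]) {
            sh './build.sh'  // build.sh reads $API_TOKEN from the environment
          }
        }
      }
    }
  }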

I recently tried the job-to-Pipeline exporter plugin, and that didn't work on my jobs - it generated stuff that didn't match the input job, and also wasn't structured like the example snippets Jenkins provides natively.

Maybe some kind of a sandbox I could experiment in? Or a REPL or something? It would really help to have something that gave great discoverability, with fast feedback. Faster than I can get by editing a job, saving it, running it, waiting, then realizing I still don't have the syntax right.




Unfortunately, it’s not very easy to do. If you want syntax validation, you can POST your Jenkinsfile to an endpoint on your Jenkins master.
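
For declarative Jenkinsfiles that's the linter endpoint; roughly something like this (JENKINS_URL is your server, and depending on your security setup you may also need a CSRF crumb or an API token):

  curl -X POST -F "jenkinsfile=<Jenkinsfile" "$JENKINS_URL/pipeline-model-converter/validate"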

If you want a “sandbox”, you can just replay a pipeline run and make modifications. Neither option is very useful IMO, and both slow down development substantially. Don’t get me started on integrating Groovy shared libs.


Same here - I really struggled to set up declarative pipelines starting out. The docs don't do a great job of distinguishing between the full Groovy syntax and the new declarative syntax, and there is a relative dearth of examples.

I think the swiss army knife nature of Jenkins contributes to this - there's just so much you can do.


After working on a team that was heavily using Jenkinsfiles & scripted pipelines, I started to believe that writing Jenkins scripted pipelines is a bit of an anti-pattern: you end up with lots of build-script code that can only run inside Jenkins, perhaps coupled to plugins, which hampers your ability to develop and test changes locally.

Perhaps sometimes using Jenkins scripted pipeline is a good idea, but if you've got the choice of implementing something as a Jenkins pipeline script or some other script that isn't coupled to Jenkins, prefer the latter.


I work with Jenkins day in, day out.

Doing anything build-script related in Jenkins, whether in Pipeline or freestyle jobs, is definitely an anti-pattern. All build-related scripts should live as standalone scripts / build-tool config files (make or whatnot), for the reasons you describe.

Jenkins should be there to handle the "side effects", as I view them. In our case that's stuff like integration with git PRs (posting results of linting, building, unit tests), sending emails when new builds are available, integration with JIRA (we automate some workflows), publishing artifacts to an internal server, etc.

Conversely, putting any of those side effects or stateful steps inside build scripts is a bad idea: it means you can't run your build scripts locally without worrying about messing up a JIRA workflow or spamming people with build emails. Those steps should live only in Jenkins.

These are all mistakes of my predecessors that I am still living with to this day.
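
For what it's worth, the shape we're slowly moving toward looks roughly like this (stage names and the mailing address are made up); all the real build logic stays in the Makefile, which anyone can run locally:

  pipeline {
    agent any
    stages {
      stage('Build') {
        steps {
          sh 'make build'  // build logic lives in the Makefile, not in Jenkins
        }
      }
      stage('Test') {
        steps {
          sh 'make test'
        }
      }
    }
    post {
      // The "side effects" stay in Jenkins, so running make locally can't spam anyone
      failure {
        mail to: 'team@example.com',
             subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "${env.BUILD_URL}"
      }
    }
  }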


I think that's a great rule of thumb. Declarative Pipeline came after scripted was "invented", which is slightly unfortunate; had it come first, it would have encouraged the practices you describe (declarative is just for orchestration), and script would have been mainly an escape hatch (I think many people get the idea now, though).


It's probably not quite so clear cut. For example, suppose you deploy to AWS, and have automated this to be triggered by a Jenkins job. It's advantageous to be able to run this automated deployment from outside of Jenkins, even though deploying is one real big side effect.


With "declarative pipeline" introduced a few years back, this is the direction we are pushing people toward.

Programming capabilities are useful for ecosystem developers to create higher-level primitives from existing ones; they offer a new way of extending Jenkins without plugins.


Where I work it started like this: every "component" had some source in a directory and a couple of simple scripts: build.sh and test.sh.

Then, when we wanted to run them in parallel we just used the parallel Jenkins pipeline statement, so that every step had its own captured output stream (and distinct build statuses too).
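
Roughly this (scripted syntax; the component names are made up):

  node {
    checkout scm
    // One branch per component: each gets its own log stream and status in the UI,
    // but the actual work is still just the component's own scripts
    parallel(
      frontend: { dir('frontend') { sh './build.sh && ./test.sh' } },
      backend:  { dir('backend')  { sh './build.sh && ./test.sh' } }
    )
  }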

This was a slippery slope: more and more build-orchestration complexity moved into Groovy code, and undoing that isn't obvious, because with very long builds it's genuinely useful to see which component failed and which didn't, while fighting the Groovy code happens only relatively infrequently.

How can I follow your rule of thumb and still let Jenkins capture (possibly in real time) the output and status of work units it doesn't describe and spawn itself?


I think as part of the new Jenkins architecture we should be able to make it much easier to stop at a point in a pipeline & open a terminal/REPL to test out steps.

Also, I'm hoping for a nice validated YAML-based pipeline syntax that should make editing/validating pipelines easier.


We have almost the exact opposite request re: stop and REPL. Our Jenkins pipelines have escalated privileges in that they can deploy code, so they're a juicy attack vector. We'd largely like these things to be read-and-execute only, with any modifications going through review.


You can't just use the docs at Apache Groovy's website because Jenkins pipeline uses a crippled version of Groovy -- none of the functional collections-based methods work.
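
If you hit that, the usual workarounds are a plain for loop, or moving the closure-heavy logic into a @NonCPS helper (which runs as ordinary Groovy but must not call any pipeline steps). A rough sketch:

  // Collection methods like .collect { } can misbehave under the CPS transform,
  // so keep them inside a @NonCPS method that takes and returns plain values
  @NonCPS
  def upcaseAll(List<String> names) {
    return names.collect { it.toUpperCase() }
  }

  node {
    echo upcaseAll(['alpha', 'beta']).join(', ')
  }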


Yes!

Pipelines can be incredibly rewarding if you spend the time to really dig into them and do a bunch of trial-and-error.

Convincing other people on your team to do that with the current state of documentation is painful, and understandably so.

The documentation really needs a lot of attention.


Sandbox, REPL, and a testing platform to unit-test my Jenkinsfiles are amazing suggestions. Basically an environment I can actually test in, as opposed to write-run-fix.


Totally agree. I tried to move our freestyle jobs to Pipeline a few times and the experience was horrible. Official documentation was scarce, main concepts weren't clearly explained, there were barely any examples, and god forbid you use some unpopular plugin, because chances are it does not support Pipeline, or if it does, good luck finding the syntax for it.


Secrets are scoped, and the snippet generator will put in the correct GUID (or whatever that ID is) for you.



