With that said, I'd really like to see better documentation on the Jenkinsfile Pipeline format. I've tried to get started with it a few times and haven't had much success with questions like "How do I pull in secrets?" or "How do I control a plugin?". I appreciate that it's Groovy-based, but that's not particularly helpful information (for a hack like me, at least).
The snippet-generator is nice, but it doesn't necessarily produce working code, especially for things like getting secrets into a build. And it doesn't give me a broader picture for "How do I even write one of these from an empty text-box?"
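For what it's worth, the pattern I eventually landed on for secrets is the `withCredentials` step from the credentials-binding plugin. A minimal scripted sketch (the credential ID `deploy-token` is a made-up placeholder for a Secret Text credential configured in Jenkins):

```groovy
node {
    stage('Deploy') {
        withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
            // TOKEN is exposed as an environment variable and masked in
            // the build log while inside this block. Note the single quotes:
            // the shell expands $TOKEN, not Groovy, so the secret never
            // appears in the interpolated script text.
            sh 'curl -H "Authorization: Bearer $TOKEN" https://example.com/deploy'
        }
    }
}
```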
I recently tried the job-to-Pipeline exporter plugin, and that didn't work on my jobs - it generated stuff that didn't match the input job, and also wasn't structured like the example snippets Jenkins provides natively.
Maybe some kind of sandbox I could experiment in? Or a REPL or something? It would really help to have something with great discoverability and fast feedback - faster than editing a job, saving it, running it, waiting, then realizing I still don't have the syntax right.
If you want a "sandbox", you can just replay a pipeline run and make modifications. Neither is very useful IMO, and both slow down development pace substantially. Don't get me started on integrating Groovy shared libs.
I think the swiss army knife nature of Jenkins contributes to this - there's just so much you can do.
Scripted Pipeline is sometimes the right tool, but if you have the choice between implementing something as a Jenkins pipeline script or as some other script that isn't coupled to Jenkins, prefer the latter.
Doing anything build-script related in Jenkins, whether Pipeline or freestyle jobs, is an anti-pattern. All build-related logic should live in standalone scripts / build tool config files (make or whatnot), for the reasons you describe.
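In practice that leaves the Jenkinsfile as a thin orchestration layer. A declarative sketch of what I mean (the `scripts/` paths are made up - the point is that the same scripts run locally, outside Jenkins):

```groovy
// The pipeline only sequences stages; all real build logic lives in
// versioned scripts that developers can also run on their own machines.
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh './scripts/build.sh' } }
        stage('Test')  { steps { sh './scripts/test.sh' } }
    }
}
```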
Jenkins should be there to handle the "side effects", as I view them. In our case that's stuff like integration with git PRs (posting results of linting, building, unit tests), sending emails when new builds are available, integration with JIRA (we automate some workflows), publishing artifacts to an internal server, etc.
Conversely, putting any of those side effects or stateful steps inside build scripts is a bad idea: it means you can't run the build scripts locally without worrying about messing up a JIRA workflow or spamming people with build emails. Those side effects should live only in Jenkins.
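Declarative Pipeline's `post` section is a natural home for this split. A rough sketch, assuming a plain `make` build and the core mailer step (the address is a placeholder):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') { steps { sh 'make all' } }
    }
    post {
        failure {
            // The notification lives in Jenkins, not in the Makefile,
            // so running 'make all' locally never sends email.
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```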
These are all mistakes of my predecessors that I am still living with to this day.
Programming capabilities are useful for ecosystem developers to create higher level primitives from existing ones, as it creates a new way of extending Jenkins without plugins.
Then, when we wanted to run them in parallel we just used the parallel Jenkins pipeline statement, so that every step had its own captured output stream (and distinct build statuses too).
This was a slippery slope: more and more build-orchestration complexity moved into Groovy code. Undoing that isn't straightforward, because with very long builds it's genuinely useful to see which unit failed and which didn't, while fighting the Groovy code only happens relatively infrequently.
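Roughly what that looked like, simplified (the unit names and `build.sh` are made up):

```groovy
// Scripted-pipeline sketch: each parallel branch wraps a standalone
// script, so Jenkins captures a separate log stream and status per
// unit while the build logic itself stays outside Jenkins.
node {
    def branches = [:]
    ['frontend', 'backend'].each { name ->
        branches[name] = {
            stage("build-${name}") {
                sh "./build.sh ${name}"
            }
        }
    }
    parallel branches
}
```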
How can I follow your rule of thumb and still let Jenkins capture (possibly in real time) the output and status of work units it doesn't describe and spawn itself?
Also, I'm hoping for a nicely validated YAML-based pipeline syntax, which should make editing and validating pipelines easier.
Pipelines can be incredibly rewarding if you spend the time to really dig into it and do a bunch of trial-and-error.
Convincing other people on your team to do that with the current state of documentation is painful, and understandably so.
The documentation really needs a lot of attention.