I built this using OpenAI Codex, Google Cloud Run, and Streamlit (visualisations by Graphviz - Codex is not bad at translating pipelines to Graphviz without much prompting, which somehow surprised me). Let me know what you think. No fine-tuning was required for this (Codex already knows quite a lot).
You mention Docker, and it is super great for CI: it's probably one of the most widely used features of Jenkinsfiles (you can specify what image or Dockerfile you want a stage to run in - simple but powerful, as you probably know).
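For anyone who hasn't seen it, a minimal sketch of that feature in a declarative Jenkinsfile (the image names, filename, and shell commands here are illustrative, not from the original comment):

```groovy
// Hypothetical Jenkinsfile: each stage declares its own container environment.
pipeline {
    agent none  // no global agent; every stage picks its own
    stages {
        stage('Build') {
            // Run this stage inside a stock Docker image.
            agent { docker { image 'maven:3.8-jdk-11' } }
            steps {
                sh 'mvn -B package'
            }
        }
        stage('Test') {
            // Or build the stage's environment from a Dockerfile in the repo.
            agent { dockerfile { filename 'Dockerfile.test' } }
            steps {
                sh './run-tests.sh'
            }
        }
    }
}
```

The nice part is that the build environment lives in version control next to the code, so stages stay reproducible across agents.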
I think this "lesson" is not lost on many, including those who built and work with Jenkins and its plugins. The Jenkins X project moved to a very different approach, a fresh start.
See https://cd.foundation & https://jenkins-x.io (it works with Jenkins if you like, but natively uses the new pipelines and has no SPOF masters). Might not be for everyone, but worth a look.
Disclosure: co-founder of CloudBees here (I wanted to comment on the original article but for some reason was not able to on Medium).
Co-founder of CloudBees here, just some corrections:
1) Restartable builds/stages have been in open source for a bit over a year now (IIRC), contrary to the article's claim that they are not.
2) Jenkins X is something VERY DIFFERENT from Jenkins (despite the name). It is "masterless": yes, it requires a Kubernetes cluster to power it, but it has no Jenkins instances as you know them. It is quite different in a lot of ways, as it uses Tekton as the engine for running pipelines (which has a bunch of advantages). So I wouldn't group it in with the same challenges and constraints; it is something new that shares a name and some of the ideas.
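For context on what "Tekton as the engine" means: pipelines become Kubernetes resources, each step running as a container in a pod rather than on a Jenkins master. A minimal sketch (the task name, image, and commands are illustrative):

```yaml
# Hypothetical Tekton Task: runs entirely as a pod on the cluster,
# scheduled by Kubernetes - no long-lived Jenkins master involved.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test
spec:
  steps:
    - name: build
      image: golang:1.21   # illustrative image
      script: |
        go build ./...
    - name: test
      image: golang:1.21
      script: |
        go test ./...
```

Because the state lives in the cluster's API server rather than in a master process, there is no single process whose crash loses in-flight builds.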
Jenkins, along with Spinnaker, Tekton, and Jenkins X, is now part of https://cd.foundation - worth a look to see how these projects are evolving (expect some changes).
Unfortunately, people are looking for simplicity, and this is going in the opposite direction. A Kubernetes cluster just offloads a large part of the work onto yet another component you have to run. Spinnaker is another completely separate system.
This is the opposite of what teams want. Drone, GitLab, and TeamCity, as mentioned in these comments, are a far better approach for the 99% of companies who want a solid working solution.
Soon, "running on Kubernetes" will be like "running on Linux", i.e., it won't add any operational complexity because you already have a Kubernetes cluster running anyway.
So maybe you are not there yet, but for a future-oriented CI/CD platform with a self-hosting option, using Kubernetes as the basis is a good approach.
On Kubernetes: imagine if that was already managed for you. Think of all the powerful things that can be done on top of it versus doing it yourself (preview apps, OWASP vulnerability testing, progressive delivery - all without writing a pipeline). This is the stuff that can be done.
I note GitLab is mentioned a lot here: they noted this power as well (see Auto DevOps) and have started building things on top of Kubernetes too.
Agree Kube is not for everyone, no question; I was just trying to clarify what was in the article. (If you are offloading complexity, I would hope it is to GKE, EKS, AKS, or similar, in which case it is very much offloaded...)
I didn't mean to imply that you mix all those things together from the CDF - I was just mentioning some interesting projects (some are unrelated) that are in the mix.
Agree also on simplicity - myself, I don't like to run anything, so CodeShip is what we have for that (but it sounds like you are referring to self-hosted solutions only?).
Charles Sturt Uni is a fairly well-regarded Australian university. It isn't one of the so-called "sandstone club", but I believe it does legitimate work (it isn't that easy to be called a university here). They also often specialise in rural/earth-science matters (I always thought of them as a rural uni). If that helps?
Yeah, it is my instinct to use credentials, but then I pull myself up and think "is this a high-brow ad-hominem thing?" Still, it probably makes sense to be curious/skeptical at first.
Ruling out potential bias isn't ad-hominem fallacy, IMO. Ad-hominem fallacy involves irrelevant personal attacks; bias is relevant. Likewise, citing an author's history of proven fraud would not be ad-hominem fallacy, because it is relevant.
Or, at least, that's how I see it. Anyway, I think this is fundamentally different from logical arguments. The burden of proof very much rests on the paper or study or critique, and shaky credibility raises the bar of proof. Remember, in science, new papers are not de-facto gospel until proven wrong; they must be repeatedly supported by successive studies and challenges before they are accepted as truth.
Consider, for example, datasets. There is a certain amount of implicit trust in the dataset provided in some new paper. The dataset can be fudged in ways which are undetectable and which no logical argument can refute: data is not argument. Therefore, until you are prepared to replicate the dataset, you are trusting the author.
> Remember, in science, new papers are not de-facto gospel until proven wrong; they must be repeatedly supported by successive studies and challenges before they are accepted as truth.
Yet we read that the findings in most published research are either never reproduced or not reproducible.
I completely agree there's a lot of danger of ad hominem analysis in these kinds of cases. From my perspective, I probably would not post about this sort of thing at all, if it were a more mundane issue. But there seems to be a lot at stake in the climate crisis issues, so I'm trying harder than usual to figure out how to get my bearings.
This can be done with plugins - kubernetes-plugin, for example. I'm looking forward to seeing what they come up with. It was good to see Knative on their roadmap; this could be something major.
Part of the point of the ideas mentioned is to resolve these stability issues and not depend on in-process plugins (a new extensibility architecture that won't hurt stability). There are many things in plugins which should be core functionality (and will be).
That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?
Like, there's no official backup functionality? And why is version control not standard in 2018? This isn't something you just bolt on, or incorporate as a response to competing products.
I think they should abandon all hope of Jenkins being competitive. They should remain the weird old school universal tool it always was, and let it become relegated to legacy systems, like the Apache web server.
Jenkins was useful, but it's living in the past and trying to solve the wrong problems.
>That's just the thing. Why has there been so little adoption of commonly needed functionality as part of core, or at least as officially supported plugins?
Oh, there are core bundled plugins, official ones, etc. - they are just core functionality that happens to be implemented as plugins.
>And why is version control not standard in 2018?
It is, and always has been. "git" support used to not be included by default, but that was a while ago (it is included now).
I think that's a great rule of thumb. Declarative Pipeline came after Scripted Pipeline was "invented", which is slightly unfortunate; had it come first, it would have encouraged the practices you describe (declarative just for orchestration), with script mainly as an escape hatch. I think many people get the idea now, though.
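That split can be shown in a few lines of Jenkinsfile (the stage name, commands, and branch logic here are made up for illustration):

```groovy
// Hypothetical example: declarative handles orchestration,
// and script {} is used only as a small escape hatch.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Declarative steps cover the common case...
                sh 'make deploy'
                // ...and script {} drops into Scripted Pipeline
                // only where real logic is needed.
                script {
                    def target = env.BRANCH_NAME == 'main' ? 'prod' : 'staging'
                    echo "Deployed to ${target}"
                }
            }
        }
    }
}
```

Keeping the script blocks small is what makes the pipeline easy to read, validate, and visualise.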