I have used and vouched for Jenkins in several companies, and some decent-sized licenses were bought mainly because of my input.
But to me, CloudBees made a major dick move by making stages not restartable in Jenkins 2.0, among other things, e.g. dropping stage view out of nowhere and focusing only on Blue Ocean. I complained about it in the channels I had at the time, and the response was that it's going to be unsupported from now on.
It's a super sketchy thing to have such a useful feature bundled with a bunch of support and other stuff I don't care about, and then charge me per node. I am migrating away from Jenkins to GoCD after close to 10 years of using it, and don't get me wrong, I don't feel happy doing this, but it's hard to justify staying.
Fortunately the future looks bright; there are several interesting solutions available. Argo is super interesting to me; looking forward to Argo CD!
Thanks for using Jenkins for close to 10 years, and sorry to see you move on, but I just want to correct the record here because I don't think your timeline of events and description are accurate.
First, pipeline stages have never been restartable in open-source Jenkins, from the very beginning of Jenkins Pipeline. It wasn't as if we started with restartable stages and decided to close-source them at some point; from the start, it was a feature we developed exclusively for CloudBees products.
From time to time, we do move some features from products to Jenkins. As somebody later in the thread pointed out, in JENKINS-45455 we are doing just that. Another example of this from early days is the folders feature, which is now used by many.
Any company building enterprise products on top of OSS will likely keep some features in products. And for any given person, only some of those features are likely useful. So while I understand the frustration of "that feature should be in OSS" or "I should be able to just get this one thing for a small price", I don't think there's anything inherently bad about these practices.
As for Pipeline stage view, it is still available today, and IIRC it is also still part of the Jenkins 2 default experience. Now, you are right that, as a contributor to the project, CloudBees is focused on pushing Blue Ocean forward. We think Blue Ocean solves the problem of pipeline result comprehension a lot better, and we'd rather make one solution better than work on two separate things that solve the same problem simultaneously. That is not to block other people from carrying pipeline stage view forward, though, if anyone is willing.
I hope that helps,
Feel free to correct me, here is my take on it.
You are right that stage restarting in Declarative Pipeline was not open first and then closed. But I didn't say that; I said not having it was a dick move.
My point is that, to the regular Joe who doesn't work on Jenkins code, a Stage is just an alias for a Job (or several), and jobs were always restartable at any time in my pipelines (Delivery Pipeline plugin, Build Pipeline plugin, etc.). So when we started writing the old and new pipelines as code, we assumed the feature would be the same; while learning the new DSL at the time, it was not clear, at least to us, that it wasn't.
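For context, a minimal declarative Jenkinsfile sketch of the kind of pipeline-as-code we were writing (stage names and shell commands here are illustrative, not our actual pipeline):

```groovy
// Minimal declarative Jenkinsfile sketch; names and commands are made up.
// To a user, each stage() reads like its own job, which is why
// restarting a single stage felt like a given.
pipeline {
    agent any
    stages {
        stage('Build')  { steps { sh 'make build' } }
        stage('Test')   { steps { sh 'make test' } }
        stage('Deploy') { steps { sh 'make deploy' } }
    }
}
```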
Lots of people thought the same; the link below is an example. This was 2016 if I am not mistaken. It's now open source, and congratulations for changing that, as I said when it was pointed out to me, but I am not following the topic anymore.
I have no problem with the business; as I said, I vouched for it, and in the end they bought the licenses. My problem was that the main reason to buy was not support or something juicy like CloudBees cluster management features; it was that we wanted restarts.
Is it bad? Absolutely not. But it's not something that made me feel happy, and it was not a thing I could easily communicate to the people making the decisions. It felt like punishing engineers for a problem they can't solve. Give managers a reason to buy it: cluster management is a nice one, automatic backups another; having miserable engineers is a terrible one.
About stage view: I was frustrated after training support and others to use it on a complex pipeline, only to see a new tool take over, and from what I remember it was a very fast switch. There was no gain in changing to Blue Ocean at the time, as our restarts were not working in it. So we had one UI that worked without support and another that didn't with it. Again, I am not following the topic anymore, so this might be fixed by now.
I hope they improve their ways; it's a nice tool and I invested a lot of my time in it, but GoCD is my default option from now on.
I'm the founder/CEO of Codeship and we got acquired by CloudBees earlier this year. And I want to make sure that Jenkins + all CloudBees products get better :)
Last year I was in contact with the Jenkins team and had a couple of meetings to discuss the things mentioned above and some more.
The developers were very nice and interested. I know they were doing their best, but my problems were not top priority. I saw some problems solved over the months, but I completely lost my will to help when Blue Ocean started and stage view was abandoned, plus the super dick move of non-restartable stages.
It was not a small deployment, mind you; the system was a critical one (payments) with multiple sites and the works, and not a small license either. I left that company, but before leaving we were already migrating away.
We had a complete pipeline dependent on it; we were early adopters of the whole Jenkinsfile approach (a developer myself, I pushed hard for it), and stage view, even without being awesome, was already part of our way of working. We wanted more features, which I was already discussing, and bugs fixed. Out of the blue they changed to a completely different thing that was pretty but didn't solve any of my old problems, that will have its own problems, and that was/is not even close to complete.
I just feel I wasted my time, I don't plan to make that mistake again.
But I hope Jenkins X can improve on the past mistakes and become a contender again.
Sorry to hear about those issues. I shared your feedback with a couple of people, and we will work hard on being more mindful when making such changes going forward.
GoCD is a super simple CD platform (from a user/developer point of view), easy to learn and with little room to snowflake it, with fan-in dependency resolution that works nicely.
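For anyone unfamiliar with the term, fan-in means a downstream pipeline waits until all of its upstream pipelines have built the same revision, instead of re-triggering once per upstream. A toy sketch in Python (not GoCD's actual algorithm):

```python
# Toy illustration of fan-in dependency resolution: 'deploy' should run
# once per source revision, only after *all* upstream pipelines have
# built that same revision.

def fan_in_ready(upstream_results, revision):
    """upstream_results maps pipeline name -> set of revisions it has built."""
    return all(revision in built for built in upstream_results.values())

upstreams = {
    "unit-tests": {"abc123", "def456"},
    "lint":       {"abc123"},
}

# 'deploy' may run for abc123 (both upstreams built it)...
assert fan_in_ready(upstreams, "abc123")
# ...but not yet for def456 (lint hasn't built it).
assert not fan_in_ready(upstreams, "def456")
```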
The UI is not awesome but does the trick for support teams and others.
BTW I have no association with any of the CD/CI companies; I just like to work on this topic.
With regards to your question: I agree with user kerny.
Stop packing stuff on top.
What I want from Gitlab is to use it to manage my repos. If you improve that aspect (which is already fine though IMO), you'll make me happier.
If you improve Gitlab integration with other CI tools, you'll make me happier.
If you improve your CI solution (which I found lacking when I evaluated it 1.5 years ago -- no idea how it's now), I still won't use it -- I explicitly don't want to rely on one, integrated solution.
In my experience, such integrated solutions are fine for a while, until they aren't. My use cases tend to expand to things the integrated solution doesn't provide, and then I'm stuck.
Do one thing, and do it well. Doing yet more things detracts from Gitlab's appeal. Personally, I wouldn't mind you utterly removing Gitlab's CI tool (I know, not gonna happen, and that's fine -- just saying).
And GitLab Omnibus also does one thing. (It bundles those.)
Making it more configurable (as in making more features disable-able) would be nice though.
We think there is a lot of value in a single application for the complete DevOps lifecycle https://about.gitlab.com/2017/10/11/from-dev-to-devops/ and we'll continue to build that in 2018.
We'll also keep improving the existing parts of GitLab, such as the managing of repos https://about.gitlab.com/direction/#code-review
We want to play nice with other CI tools and for example have great support for Jenkins https://about.gitlab.com/features/jenkins/
If you haven't used our CI in the last 1.5 years, give it a shot; it is very good https://about.gitlab.com/is-it-any-good/#gitlab-ci-is-a-lead...
We don't want you to get stuck with an integrated solution that is bad. We'll make sure that we keep improving every aspect of GitLab together with the wider community (100+ contributions in the last month). And if you want to use GitLab with something else you're welcome to https://about.gitlab.com/features/github/
And as a GitLab user you can turn off the things you don't want under project settings https://docs.gitlab.com/ee/user/project/settings/#sharing-an...
Please let us know if there is anything we should add there.
My primary concern is that instead of polishing the features that have already been released, the platform is trying to do too many new things. Some of that stuff is cool (k8s monitoring integration, though EEP is too expensive for me, and my Grafana dashboard does basically the same thing), while some of it seems a bit bloated (SAST/DAST for example, which was a few lines of code to implement ourselves).
I really want the core Github replacement use-case to be as ergonomic as Github is. And the CI/CD piece is also great, but still has plenty of rough edges (e.g. Environments are a great feature, but I still can't clean up stale ones, which makes the environment list basically useless).
My experience with support has been a bit lackluster; e.g. see https://news.ycombinator.com/item?id=16897644.
General reliability in CI/CD is not great; I'd say something like 0.25-0.5% of my build jobs fail from intermittent infrastructure failures (mostly gitlab runner/API issues from what I can tell), which wasn't a problem when I was using Jenkins.
Ops is still a significant concern; site reliability has improved in the last year, but that's not saying a lot; it's still a fairly frequent occurrence to get errors during/after a deploy. I'm not sure if this problem would be better or worse if I self-hosted, as I don't know how hard it is to run a GL instance (seems like it's hard, given how often the gitlab.com site has issues).
Performance has also improved in the last year, but the site is still on the slow side (e.g. compared to contenders like gitea).
Oh, and the pricing model is a bit broken -- all of the other SaaS platforms that I use let me pay monthly (at a higher rate); when I was evaluating paying for gitlab.com vs. doing self-hosted EE, I really wanted to pay for my team to use the hosted offering for a few months to see how things went, but I wasn't prepared to lock in for a year, so I didn't end up trying out the hosted paid offering.
None of these points in isolation is enough to make me leave the platform and go back to Jenkins, but they are enough to make me pay close attention to the alternatives.
Regarding polish, we're improving existing features all the time; we're very close to a big refactor of the merge request view.
Regarding CI/CD environment deletion I can find it in the API https://docs.gitlab.com/ee/api/environments.html#delete-an-e... but I'm not sure about the interface.
Support has had trouble scaling; we just hired a director to make sure we get on track. Sorry about that.
Having your builds fail intermittently is bad; this should be a problem only on GitLab.com. Reliability there is not where it should be and we're taking drastic actions to improve it. If anyone reading this is up for the challenge please see https://jobs.lever.co/gitlab/a9ec2996-b7b6-4d87-aed0-1fc2ce3...
Regarding the yearly pricing, this is a tradeoff we made, see point 7 of https://about.gitlab.com/handbook/product/pricing/#when-is-a... I thought we offered a 30-day money-back guarantee, but I can't find it on https://about.gitlab.com/pricing/ so I'll ask product marketing what is up with that.
Environments currently can't be deleted from the UI. We have an issue open about it at - https://gitlab.com/gitlab-org/gitlab-ce/issues/25388
The issue gained a lot of traction. It's scheduled for 11.1.
Could you elaborate on that please?
# This check runs OpenStack Bandit, a Python static analysis tool that checks for common security issues
- bandit -r -x 'tests,test_,/migrations/,./src/' -c bandit-config.yaml -ll .
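For context, that command lived in a GitLab CI job along these lines (the job name, stage, and image here are illustrative, not the exact original):

```yaml
# Hypothetical .gitlab-ci.yml job wrapping the Bandit check above;
# job name, stage, and image are made up for illustration.
bandit-sast:
  stage: test
  image: python:3
  script:
    - pip install bandit
    - bandit -r -x 'tests,test_,/migrations/,./src/' -c bandit-config.yaml -ll .
  allow_failure: true
```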
Of course there's also some window dressing to display the errors on the main MR, instead of having to dig into a step failure, but that doesn't make a meaningful difference to me.
(This feature could well have moved on since it was first implemented, that was the only time I dug into it).
I had to do lots of Jenkins integrations some time ago, and even though I tried to minimize the number of plugins and make things as simple as possible, things would randomly break from time to time or exhibit weird behaviour etc.
I had the impression that Jenkins is deeply confused about some of its own concepts, e.g. how builds are triggered. Also, it is a huge pile of untestable spaghetti code, which explains the weird bugs.
I modified an open-source plugin only to find out that it's almost impossible to write meaningful tests for it: you can't even mock Jenkins API without using the darkest Java mock magic. Jenkins classes are just written in an old style that makes testing _really_ hard but probably can't be changed without breaking all of Jenkins.
I tried Jenkinsfile, which was only even more unreliable (at the time at least; this was > 1y ago). The whole idea of using Groovy and modifying the hell out of it, making it even more weird, surprising, and edge-casy, just didn't go well for me.
I ended up generating a _lot_ of very simple jobs for each project and connecting them via triggers instead. It was not very pretty, but it was the most reliable setup I could get out of Jenkins.
So the thought of integrating Jenkins deeply into your deployments, talking to Kubernetes and sitting in the middle of a huge pile of complexity (Jenkinsfile, Dockerfile, Helm, ...) and magic "that you don't need to worry about" scares the hell out of me.
But then again, if you want to do CI/CD with Jenkins that's what you might want, right?
(I would prefer more simple approaches if forced to use Jenkins, though)
To make it work, they had to cripple Apache Groovy so you can't use its functional collection methods. Not sure if you can really call it "Groovy" with that handicap.
One of the big changes to traditional Jenkins is we don’t expect folks to have to configure Jenkins, add/remove/edit plugins or even write Dockerfiles or Jenkinsfiles.
If you really wanna do that Jenkins X won’t stop you - but we are trying to help automate and simplify CI/CD through standard tools, standard file formats (Dockerfile, Jenkinsfile, skaffold, helm etc).
Is the cloud, kubernetes, docker, helm & istio complex? Sure - but our goal is to simplify, automate & avoid folks having to look at all that detail.
It’s still early days and a challenge. Eg even Lambda & the AWS API Gateways is complex. But we hope to keep improving to make things easier to use & to help folks go faster by providing automated CI/CD as a service on any kubernetes cluster / cloud
Having a good set of plugins that integrate well should limit the complexity explosion and the number of edge cases users run into. So that's an improvement. It won't fix the Jenkins Heisenbugs though, of course :)
Also you get the close integration of your CI tool and your git repos, which is very nice from a visibility point of view.
Having said that, GitLab is trying to own all parts of the build and deployment process, which, from previous HN discussions, is a great annoyance to a lot of people who want to cherry-pick what they use GitLab for.
Is there something we can add to GitLab to make it more composable?
We are going to start experimenting with the new cloud native GitLab chart, but it would need to gain some maturity before we use it in production.
Do you know if the new GitLab cloud native helm chart will allow you to turn-off certain things like mattermost and prometheus? That was something that we didn't like about the omnibus chart because it exposed several extra services/ports that we didn't really want to manage/think about at the time.
Thanks for giving the charts a try in alpha/beta, please pass along any feedback. We'd love to get it!
As a counterexample I present Jetbrains' TeamCity (if running a build server yourself is necessary).
If you use proper DSLs (https://github.com/jenkinsci/job-dsl-plugin, https://github.com/hmrc/jenkins-job-builders, etc.) you can at least version-control things, quickly clone/rebuild servers, create new builds, and in general automate most of the CI/CD process.
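As an illustration of the Job DSL approach, a chain of simple jobs connected by triggers can be generated from a few lines of Groovy in a seed job (the project name and build commands here are made up):

```groovy
// Hypothetical Job DSL seed script: generates one freestyle job per
// step and wires each job to trigger the next one downstream.
def names = ['build', 'test', 'deploy']
names.eachWithIndex { name, i ->
    job("myproject-${name}") {
        steps {
            shell("make ${name}")
        }
        if (i + 1 < names.size()) {
            publishers {
                downstream("myproject-${names[i + 1]}")
            }
        }
    }
}
```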
But do I recommend Jenkins? No.
We have a few issues with it, like Jenkins suddenly deciding to build & test all branches/PRs in all repos, killing the server.
• What is Jenkins X exactly, and how does it relate to Jenkins? Is it just a CLI utility that generates git repos, k8s clusters and Jenkinsfiles for us? Is it a fork of Jenkins?
• What would we gain from switching to Jenkins X?
• How does it work with our existing setup?
* automated CI/CD for your kubernetes based applications using Helm Charts & GitOps to manage promotions (manual or automated)
* a single command to create a kubernetes cluster, install Jenkins X and all the associated software all configured for you OOTB (including Jenkins, Nexus, Monocular etc): http://jenkins-x.io/getting-started/create-cluster/ - ditto for upgrading
* a single command to create new apps or import them via build packs to create docker images, pipelines and helm charts with GitOps promotion: http://jenkins-x.io/developing/create-spring/
* automated release notes + change logs with links to github/JIRA issues etc
* feedback on issues as they move from Staging -> Production
i.e. more automation around CI/CD and kubernetes so you can spend more time focussing on building your apps and less time installing/configuring/managing Jenkins + Pipelines
> What is Jenkins X exactly, and how does it relate to Jenkins? Is it just a CLI utility that generates git repos, k8s clusters and Jenkinsfiles for us? Is it a fork of Jenkins?
This seems like the best resource on what exactly this is
So it looks like it's just a CLI tool that generates a bunch of stuff for Kubernetes and Jenkins. Is that right?
I notice that kops recently added support for Digital Ocean
So we should be able to add a command `jx create cluster do` for using kops on DO - the current `jx create cluster aws` uses kops under the covers to spin up the kubernetes cluster.
I’ve raised this issue to track it: https://github.com/jenkins-x/jx/issues/705
I suspect you're using the "organization folder" plugin-thing that periodically re-scans your origin (Github/Bitbucket) and builds all branches that it discovers. Check this out: https://stackoverflow.com/questions/45832235/how-to-restrict...
> Relationship between Jenkins and Jenkins X
Jenkins is the core CI/CD engine within Jenkins X. So Jenkins X is built on the massive shoulders of Jenkins and its awesome community.
> We are proposing Jenkins X as a sub project within the Jenkins foundation as Jenkins X has a different focus: automating CI/CD for the cloud using Jenkins plus other open source tools like Kubernetes, Helm, Git, Nexus/Artifactory etc.
> Over time we are hoping Jenkins X can help drive some changes in Jenkins itself to become more cloud native, which will benefit the wider Jenkins community in addition to Jenkins X.
We recently deployed Auto DevOps in our self-hosted GitLab instance and combined the SAST container checks with our production policies; it's been rock solid.
Add to this that we are able to manage all the production policies via the pipeline APIs and Auto DevOps templates, and the whole Jenkinsfile approach seems far less scalable and more difficult by comparison.
I have no affiliation with gitlab.
Or the automated feedback on releases to all your issues as they move through Environments: http://jenkins-x.io/about/features/#feedback
Or the automatic publishing of Helm charts to the bundled Monocular for all versions of your apps for your colleagues to easily be able to run via helm?
Or that it works great with GitHub, GitHub Enterprise & JIRA and has awesome integration with Skaffold?
Or easily set up a kubernetes cluster with Jenkins X on any public cloud in one command: http://jenkins-x.io/getting-started/create-cluster/
I am using gitlab, though we quickly grew beyond auto devops.
Your definition of auto devops is different from gitops. Gitops is the practice of using commits and pull requests to execute changes and do releases.
Weave uses it to mean git as the source of truth.
Kelsey Hightower talked about it and has demoed the workflow of using pull requests to initiate promotion and deployments.
Gitlab's auto devops does not seem to tackle promotion via environment repos, so in my understanding does not fit gitops and would be confusing to call it such.
Kelsey Hightower's kubeconf talk - https://www.youtube.com/watch?v=07jq-5VbBVQ
And a better writeup on Weave's site: https://www.weave.works/blog/gitops-high-velocity-cicd-for-k...
Don't get me wrong, I think auto devops is a good thing, but it's most certainly not gitops.
To add to that, GitOps as defined by Weaveworks and Kelsey Hightower is a technology agnostic approach.
IMHO a lot of the stuff in this space is either focused on making life harder through added complexity to up-sell support or more services, or on solving a narrow problem in such a way that you still have to take care of a lot of other stuff.
I really don't like the idea of my ci/cd tooling being responsible for provisioning its own k8s cluster....there are a lot of other more mature projects out there for doing this.
Is the idea that the ONLY thing running on this cluster is jenkins-x and review/preview environments or something?
The default is to use separate namespaces in kubernetes for each teams developer tools & pipelines, Staging & Production environments (plus Preview Environments). Multiple teams can obviously use the same cluster with different namespaces.
We'd expect that ultimately folks may want to use a separate cluster for development & testing versus Production. GitOps makes that kind of decoupling pretty easy, but we've still got work to do to automate easily setting up a multi-cluster approach:
* I use one namespace per env (staging, prod, etc), is this supported or must I go with the default (slightly wacky) staging and prod releases side-by-side in the same namespace?
* How are bugfix releases handled? If I pushed 1.2.0 to staging, and want to hotfix the prod release 1.1.0 with 1.1.1 (a common bugfix flow), can I promote releases from the hotfix branch?
* Is there a permission model? Does it bottom out to GitHub permissions for each env repository? E.g. can I have a smaller set of users approved to promote releases to production?
Promotion is either automatic or manual. By default Staging is automatic and production is manual. You can manually promote any version to any environment whenever you wish: http://jenkins-x.io/developing/promote/
For promotions we're delegating to the git repository for RBAC; so you can setup whatever roles you want for who is allowed to approve promotions & if you need code reviews etc
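Concretely, with GitOps promotion the "promote" step becomes a pull request against the environment's git repository that bumps the pinned chart version, so the repo's normal review/approval rules gate who can promote. A rough sketch of such an environment-repo Helm requirements file (repository URL and app name are illustrative):

```yaml
# env/requirements.yaml in an environment's git repo (illustrative
# values). Promoting version 1.1.1 to this environment means a PR
# bumping the version line below.
dependencies:
  - name: myapp
    repository: http://chartmuseum.example.com
    version: 1.1.1
```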
I haven't looked into the current state of this recently, but I ran into a lot of problems with this with a bunch of hosted CI services in the past. Somewhat ironically, as of a couple of years ago, if you needed to build your own Docker container as part of a build, you had to specifically steer clear of CI services that mentioned Docker at all, because that meant they ran their builds inside containers, and it was a pain to figure out how to run my own docker build, much less spin up a cluster per build with something like docker-compose, inside a running container.
Curious if and how Jenkins X solves this. Or have things changed and it's now easy to build and run docker containers inside of a container?
(Aside from that, I'm not sure how I feel about Jenkins coordinating with a Kubernetes cluster. I've always found their monolithic approach to be a pain to work with, and always wished that, for example, I could just have Jenkins trigger jobs by pushing them onto an ActiveMQ queue or something and read back the results on another queue. Then I could just set up an autoscaling group of build servers, and provision them with whatever tools I'm already using to just start up and listen on this queue. Instead, jenkins wants me to duplicate a lot of this work I already have CM tools doing, and set it up manually through the UI, using community plugins that are often out of date).
Offloading the build queue from Jenkins to another service, auto-scaling of build servers, and configuring Jenkins with your configuration management tools are all things we are thinking about / looking into / actively working on. Some of them haven't gotten to the point of a proper write-up yet, but see
Yes we can support things like parallel steps & tests spinning up separate clusters, namespaces or environments (we do this ourselves to test Jenkins X).
We delegate to an OSS tool called Skaffold to actually build docker containers that gives us the flexibility to use different approaches for docker image creation (e.g. kaniko or Google Container Builder or use the local docker daemon etc)
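A minimal skaffold.yaml for that kind of flow looks roughly like this (the apiVersion varies by Skaffold release, and the image name and chart path are made up):

```yaml
# Illustrative Skaffold config: build one image, deploy it via a
# Helm release. Image name and chart path are hypothetical.
apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
    - image: myorg/myapp   # built with the local docker daemon, kaniko, etc.
deploy:
  helm:
    releases:
      - name: myapp
        chartPath: charts/myapp
```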
Using Kubernetes as an engine for orchestrating containers works very well; that's kinda what Kubernetes was designed for. Though you are free to extend & integrate tools like ActiveMQ into Kubernetes if you think it'll help your use cases.
I'm not a huge fan of the demo video, since it doesn't really handle what I can only imagine is a very common use case: I already have a Jenkins 2.0 instance with Jenkinsfiles; how easy would it be to migrate to Jenkins X? Is it isofunctional with added capabilities? How much will I lose?
Bootstrapping a java spring app from scratch is fun, but I suspect most people have an already existing codebase with already existing CI/CD tools.
It’s mostly about automation of install, configuration, environments, pipelines & promotion with GitOps and more feedback.
Just out of interest what kind of demo would you like to see?
You can define what a Preview Environment is in the source code of your application; it's just a different Helm chart really. You can of course opt out of Preview Environments completely if you wish. http://jenkins-x.io/about/features/#preview-environments
Though I've personally found them to be super useful - especially if you are working on web consoles - it lets you try out changes visually as part of the Pull Request review process before you merge to master.
e.g. so you could deploy just your front end in a Preview Environment but link it to all the back end services running in the Staging or Production environment. Each team can configure their Preview environment helm chart however they wish really.
Using separate namespaces in kubernetes is a great way to keep software isolated and avoid apps interfering with each other; but at the same time it's really handy to be able to link services between namespaces too.
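One common way to do that linking (a generic Kubernetes sketch, not something specific to Jenkins X) is an ExternalName service in one namespace aliasing a service in another:

```yaml
# Makes "backend" resolvable inside the preview namespace while
# actually pointing at the copy running in staging.
# Namespace and service names are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: preview-pr-42
spec:
  type: ExternalName
  externalName: backend.staging.svc.cluster.local
```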
I use Jenkins and k8s and the objective is generally Spring services, so it sounds like this is for me.
OpenShift also includes some Jenkins support; e.g. you can add BuildConfig resources via a YAML file in the OpenShift CLI, which will create a Jenkins server and a pipeline. Jenkins X isn't yet integrated into OpenShift, but it's easy to add yourself for now :)
If you are pondering which kubernetes cluster to try for developing Spring services: OpenShift is a good option if you are on premise. If you can use the public cloud then GKE on Google is super easy to use; AKS on Azure is getting there & EKS is looking like it will be good if you use AWS.
On the public clouds the managed kubernetes services are looking effectively free; you just pay for your compute + storage etc. So it's hard to argue with free + managed + easy-to-use kubernetes - if you are allowed to use the public cloud!
(Disclosure: I run the Certified Kubernetes program at CNCF.)
I.e. it’s not using the upstream distribution of Kubernetes like the public cloud vendors or Heptio etc.
It’s great it’s a Certified Kubernetes though!
So our focus is currently anyone looking to automate CI/CD on kubernetes, the cloud or any modern platform like OpenShift, Mesos or CloudFoundry which all come with kubernetes support baked in.
You can use just the CI part and do CI & releasing of non-cloud-native apps if you want - we use Jenkins X itself to release jars, plugins & docker images - but doing so misses out on all the benefits of automated Continuous Delivery & GitOps promotion across environments.
Given that Jenkins is pretty popular, you'd think that they'd be able to sort something out with GitHub to get bumped up the list for something along these lines.
There's always the Cloudflare option, but I've never felt that this was an ideal solution when HTTPS should be extremely straightforward for GitHub to set up on their pages.
Looking at the dns records, it looks like they didn't do this, and instead set up an A record.
I realized Github probably documents this, and found: https://help.github.com/articles/using-a-custom-domain-with-...
My suggestion would likely work for a www subdomain, but not for the apex domain.
Edit: Previous posts for reference
See dang's comments on the issue for the official position: