- improved version of the Gitaly service, called Gitaly Cluster, for high-availability Git storage
- simplified deployment of GitLab to Amazon ECS
- added epic hierarchy on roadmaps (plus other improvements to our epic and milestone features)
Puma reduces the memory footprint of GitLab by about 40% compared to Unicorn.
Considering that one of the most frequent complaints about GitLab is memory usage, 40% is huge.
The biggest issue with GitLab, I find, is their stubborn refusal to price reporters differently from developers. It really makes no sense, other than that they just want more money (even though they are losing a bunch, because lots of companies refuse to pay it).
If you've got some users at a higher tier how are they going to interact with other users within a group that isn't on that tier?
Features recently moved down to silver from gold enabled us to leave jira which has been great. Epics + roadmaps
There are a lot of companies with 10 developers but 100 reporters. Why should those two groups of people cost the same? The interaction is easy - GitLab already works on the permissions with the different user types. And calculating cost is just as easy - how many reporters vs how many other types.
Enforcement of that seems less than ideal. You would have to have GitLab check the number of Reporters and Developers any time someone gets Developer access to a repo. Which means that the repo owner gets the error when you run out of licenses, which is less than ideal. As opposed to a user limit, where presumably whatever team owns Gitlab gets the error when they add the user.
I would guess that most startups or small dev shops using GitLab are going to have 5-6 devs to maybe 1 reporter user.
Edit - I'm not disagreeing with you that a reporter user needs less features and in theory has a lower cost to gitlab. I think the target customer for the gold/ultimate tier that has reporter users won't blink at paying for it.
I would like to see parent/child pipelines receive some love; currently they do work, but quirks are everywhere. For example: it's not easy (or sometimes even possible) to pass pipeline variables from parent to child; the pipeline UI behaves differently when it's part of a relation (and is often unusable or shows the wrong thing); you can't repeat a manual job with the same variables it was initially passed; you can't even run it again with any other variables unless you delete all previous executions; there are strange limitations on masked secret variables; a $ in your password gets evaluated as a variable; etc.
Seems like an afterthought, rather than a carefully designed feature set. Too organic for my taste (I guess Conway's law is full blown there).
While GitLab CI is getting better (or more capable, rather than better) with every release, it does seem bloated, overly complex, full of surprises, not reproducible locally, and very slow (caching is a ridiculous feature that often makes a job run longer than it would without it), with a strange design that doesn't let me view my build log full screen or collect the entire pipeline output easily.
Windows is also generally lagging behind (I still can't use pwsh as the runner shell after it has existed for four years?). The worst recent offender: you display "WARNING: Failed to load system CertPool: crypto/x509: system root pool is not available on Windows" on every job, in the failure color. This tricks everybody into thinking the job failed when it didn't.
I think the GitLab folks need to start taking CI/CD and the runner more seriously, or at least bring some fresh minds to it. After all, this is one of the major reasons many people use it.
For a list of planned follow-up issues as we iterate on the parent-child pipeline MVC, please check out our epic for this feature: https://gitlab.com/groups/gitlab-org/-/epics/2750. We welcome your comments and up-votes on the issues that matter most to you.
EDIT: Sorry; misread your first sentence; it seems you saw the announcement that there is an alternative to passing variables via artifacts. Please consider opening an issue (https://gitlab.com/gitlab-org/gitlab/-/issues/new) to let us know how you would like jobs to communicate.
For being THE feature that made GitLab big, the whole CI area seems to be getting away from them.
Then wrap your job in an untar/tar:
tar --use-compress-program zstd -xf node_modules.tar.zst || true
YOUR NORMAL JOB
tar --use-compress-program zstd -cf node_modules.tar.zst node_modules
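End to end, the pattern above looks like this in a job (a sketch, assuming GNU tar and zstd are installed; the marker file just stands in for real cached content):

```shell
set -e

# Pretend a previous job produced a dependency directory
mkdir -p node_modules
echo 'cached content' > node_modules/marker.txt

# The "tar" half (end of job): pack the directory with zstd compression
tar --use-compress-program zstd -cf node_modules.tar.zst node_modules

# Simulate a fresh job: the directory is gone, only the archive survives
rm -rf node_modules

# The "untar" half (start of job): restore the cache; '|| true' keeps a
# cold-cache job alive when the archive doesn't exist yet
tar --use-compress-program zstd -xf node_modules.tar.zst || true

cat node_modules/marker.txt   # prints "cached content"
```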
It is unusable for NPM or composer, or the version of yarn we use (maybe newer ones too but I know they were trying to improve their internal cache a while back). Anything with a ton of small files or entire git histories. It’s awful, and it’s not weird or specific.
I hope they begin to focus more on production stability; this week alone we've had 3 disruptions due to issues (although one looked like a Google Cloud outage).
Anybody else having that problem?
I wonder if they have a whitelist and only accept users who use the big boys like gmail etc. Or if they for some reason have totally legit email providers on their blacklist.
Edit: @gmx. address by any chance?
It could be that there is a legit email provider that got on the banned list inadvertently. Would you mind sharing the provider?
If you want to not do it in public you can DM me at olearycrew on Twitter or email boleary [at] gitlab.com
Could you DM / email me the details and a screen shot? I’ll try and help as best I can.
1 error prohibited this user from being saved:
Email is not from an allowed domain.
You don't need to have a Yandex account for that. Just put in firstname.lastname@example.org as your email and you get the above message.
I can understand that not everything has to be free, but the current split of project management and scanning features among the tiers feels a bit haphazard, with some crucial things at the highest prices, making it hard to justify the in-between tiers and impossible to buy the highest.
Relevant convos https://gitlab.com/groups/gitlab-org/-/epics/1887
My company would switch if the cost was the same or close, but I can't justify paying double even with all the extra features in gitlab.
I wish they'd add some kind of add-on system for these features.
Short answer: yes.
Long answer: Also yes, but with a link to the relevant epic: https://gitlab.com/groups/gitlab-org/-/epics/2875
Most companies simply ignore questions like these because letting users know the roadmap is somehow scary.
You can't delete attached files if some of them were uploaded by mistake. (It's better to use the issue tracker for a feature request.)
Last time I worked on the attachment code, there wasn't a persistent database relationship between uploads and notes, which would make this feature hard to implement. It would be a great feature, though!
1. Gitlab CI with DinD kills the docker cache on every build. For a large monorepo this is a huge pain. Really needs to be addressed with some host volume mount of the docker daemon layer cache per concurrent job.
2. No way to specify CPU/disk/memory usage for a CI stage. I have a large number of builds that need ~512 MB of RAM per build and another build that needs 4 GB. Because of this I need to make sure every instance of the runner has 4 GB of RAM available, which is a large waste of resources.
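On point 1, for what it's worth, one common partial mitigation (with the classic Docker builder) is to seed DinD's empty layer cache from the last pushed image via --cache-from. A sketch, assuming images get pushed to the GitLab registry; the image tags and Docker versions are illustrative:

```yaml
build:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # Pull the previous image so its layers can serve as a cache source;
    # '|| true' handles the first build, when no image exists yet
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

It only helps up to the first changed layer, so it is a workaround rather than a fix for the per-build cache loss.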
GitLab is a fantastic product; I'm super happy to have migrated multiple companies to it, and glad they keep innovating.
On the runner side you can tell it which tags to accept, and on the .gitlab-ci.yml side you can tag your jobs.
We mainly use this to make most jobs run on AWS EC2 spot instances, but a few use on-demand instead. We also use it to give a couple of jobs a larger instance size.
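Concretely, that looks something like this in .gitlab-ci.yml (the tag names here are our own illustrative ones; the runners are registered with matching --tag-list values):

```yaml
unit-tests:
  script: make test
  tags:
    - spot          # runner pool on EC2 spot instances

deploy:
  script: make deploy
  tags:
    - on-demand     # runner pool on on-demand instances

big-build:
  script: make all
  tags:
    - xlarge        # runner pool on a larger instance size
```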
As for the actual upgrade, you should be able to follow the standard process outlined here since you are persisting your data outside the container: https://docs.gitlab.com/omnibus/docker/#update
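The Docker side of that standard process is roughly the following (a sketch of the documented flow, assuming CE; the hostname and volume paths are illustrative, and you should reuse whatever run options you originally started the container with):

```shell
docker stop gitlab
docker rm gitlab
docker pull gitlab/gitlab-ce:latest
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

Because config, logs, and data live on the host volumes, the new container picks everything up and runs the migrations on start.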
We have to do the same actions on different environments and it is very troublesome.
For example, we have:
Models A, B, C, D
Builds Debug, Release
Normally this would be A-Debug, A-Release, B-Debug, B-Release, C-Debug, C-Release ...
And so on, and so on. In our case, where we have about 90 different models, it would be extremely useful to be able to configure, via a GUI of some kind, which of those intersections we want, rather than rebuilding the entire set for a configuration option which only affects one or two models.
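For what it's worth, newer GitLab releases (13.3+) added a parallel:matrix keyword that expresses this kind of cartesian product directly; a sketch using the model/build names from the example above (the build script is illustrative). Listing several entries under matrix lets you enumerate chosen subsets rather than taking the full product:

```yaml
build:
  stage: build
  script: ./build.sh "$MODEL" "$BUILD_TYPE"
  parallel:
    matrix:
      - MODEL: [A, B, C]
        BUILD_TYPE: [Debug, Release]   # A/B/C get both build types
      - MODEL: [D]
        BUILD_TYPE: [Release]          # D only gets Release
```

It is still YAML editing rather than a GUI, though, so it only partly addresses the 90-model selection problem.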
At this point, the only option available to us is the Jenkins Matrix Build Plugin, which is awful in a few fairly frustrating ways, but is also the only thing that does what we want.
Examples: configuring an SCM in a matrix build job results in one svn checkout for the matrix build job and one checkout each for the cartesian product, and then, since we have a separate child job for doing builds, one checkout each for the child jobs. We don't want the matrix job to do any checkout, but if you configure it with a Git or SVN URL then it will do it regardless.
So messy that at the moment it is just a little toy and it will never actually work in production.
With 90? No, there is no way you can create a reliable GitLab CI setup with so many environments.
Besides, there are several bugs in GitLab CI, and support is basically useless even for an organisation like ours, where I believe we are one of their largest customers. Our tickets (or at least the ones I create) are simply ignored.
We do this in our Jenkins.
The matrix that you see is one of the most complex ones; the jobs that build the documentation are much simpler.
If you need more, please let me know.
You can install GitLab and only _use_ it for 'git' (source control management). But the install is the same: GitLab is a single application.
Last I checked, I gave up after trying to set up keys for each user under a 'git' user on the VM running Git. Any guidance here would be of great help.
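For reference, the manual setup you describe usually boils down to one shared 'git' account whose authorized_keys holds everyone's public keys; GitLab automates exactly this bookkeeping for you. A sketch, assuming root on the VM and an illustrative key file name:

```shell
# One shared 'git' account; per-user access is controlled by public keys
sudo adduser --disabled-password --gecos "" git
sudo -u git mkdir -p /home/git/.ssh
sudo -u git chmod 700 /home/git/.ssh
sudo -u git touch /home/git/.ssh/authorized_keys
sudo -u git chmod 600 /home/git/.ssh/authorized_keys

# Append each user's public key (repeat per user)
cat alice_id_ed25519.pub | sudo tee -a /home/git/.ssh/authorized_keys

# Optional: restrict the account to Git operations only
# (git-shell may need to be listed in /etc/shells first)
sudo chsh -s "$(command -v git-shell)" git
```

With GitLab installed you never touch this file by hand: users paste their public keys into the web UI and GitLab maintains authorized_keys (or a database lookup) itself.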