Our teams evaluated GitLab for a year before migrating away from it completely (to GitHub Enterprise for SCM, Confluence for wiki, YouTrack for issues, and TeamCity for CI).
Not a single team (out of 20) was happy with the overall performance (and especially the performance of code search).
As far as the wiki is concerned, Confluence's UI has its own share of issues, even after their recent overhaul, but all in all it is a much better product. YouTrack and TeamCity are simply more polished and stable products, and are (in our experience) easier to administer than the competing offerings from GitLab.
Hi! I'm the current PM for our search and, as Sid mentioned, we've steadily been working to improve that. It's been getting a lot better, but most of the improvements rely heavily on also having Elasticsearch enabled. Without that, there's really no way for us to provide an optimal search experience for the amount of data and content there.
If you have any specific search feedback, please feel free to open an issue or reach out to me @phikai on GitLab.
Apparently it can deadlock processes on the server, stopping the clone -- and they never get cleaned up. While they don't prevent anyone else from using the machine, they do sit around wasting resources, effectively causing a denial of service.
I found bug reports for GitHub's issues, but I haven't been able to find one for GitLab's, and they've been pretty silent on the bug that someone else filed about this, as far as I can tell.
It didn't feel like a particularly easy-to-trace system when I tried debugging it.
Hi ori_b, do you have a link to an issue for this? Happy to bring this up to the team for prioritization.
Besides, if you manage to find their issue you'd see the carnage this decision caused for some of the customers. Even those with their own runners found them swamped in jobs doomed to fail, blocking actually important pipelines.
Auto DevOps as a product feature targets a very niche audience: teams running GitLab but without a dedicated DevOps team. It is by definition a very opinionated view on CI/CD, and I cannot see how it was acceptable to suddenly enable it on all existing projects across gitlab.com overnight.
We are working to make Auto DevOps smarter and only run in cases where it can add value (https://gitlab.com/gitlab-org/gitlab-ce/issues/57483).
I agree with you, we don't currently cover all the use cases we'd like to, but we're working to expand the feature set and technologies we cover.
If you had a particular use case that was not well covered, I'd love to learn more about it so we can evaluate related improvements. You can reach me at daniel at gitlab dot com. Thanks.
The current state of the wiki isn't great, but we're also not seeing a lot of people care about it. Many people are switching to static websites. Therefore we're not investing in getting the wiki beyond its current half-circle state.
BTW, we've recently measured experience baselines in GitLab, and we agree we still have a lot of work to do: https://about.gitlab.com/2019/09/05/refining-gitlab-product-...
It's not data loss, but it deletes text on the same line as wiki-internal links. And what kind of wiki doesn't use lots of internal links? https://gitlab.com/gitlab-org/gitlab-ce/issues/67132 This basically makes the wiki unusable, but it's marked as "backlog".
If you really do feel this way about the wiki then it should be clearly marked in your marketing. I read about your "integrated devops experience" and was on board, but if your intent is as you say, then you need to put a parenthetical next to the wiki item in the feature list that says "this is crap and we intend for it to stay crap", and we'll know to value it appropriately when choosing a provider.
I'm surprised that this wiki bug that two people here mention has only one upvote https://gitlab.com/gitlab-org/gitlab-ce/issues/48641 I'm not sure if the complaints are over multiple issues, if the wiki doesn't get a lot of usage, or something else is going on. The bug sounds legitimate.
Hi! I'm the current PM for our wikis and I agree that the experience isn't up to par at the moment. As Sid mentioned, historically wikis were not a priority, as many users weren't coming to GitLab for those features. We're starting to see shifts in that thinking as we've penetrated deeper into some markets, so we're working to adjust accordingly.
> It's not data loss, but it deletes text on the same line as wiki-internal links. And what kind of wiki doesn't use lots of internal links? https://gitlab.com/gitlab-org/gitlab-ce/issues/67132 This basically makes the wiki unusable, but it's marked as "backlog".
That issue is interesting because it's a link mechanism that relies on the underlying wiki project we use (i.e. it's not common Markdown syntax). Discoverability of things like that is limited to some power users who know that we're using Gollum underneath and know some of the supported syntax. What we've seen is that most users don't use that functionality and instead just use absolute links in Markdown. This likely explains why the upvotes are so low on the bug report as well.
For comparison, we've seen more demand to support linking to projects: https://gitlab.com/gitlab-org/gitlab-ce/issues/20726, than within the wiki itself.
As an FYI, I've also asked a couple of our engineers to take a look at that specific issue to see if it's something that can be relatively easy to fix. No guarantees on anything here, but we'll try to get a bit more relevant engineering information to assist in understanding scope.
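For readers following along, the two link styles being contrasted look roughly like this (page names and paths are hypothetical):

```markdown
<!-- Gollum-style wiki-internal link: the syntax at issue in the bug report -->
Our process is documented in the [[Deployment Guide]].

<!-- Plain Markdown link, which most users reportedly use instead -->
Our process is documented in the [Deployment Guide](deployment-guide).
```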
> If you really do feel this way about the wiki then it should be clearly marked in your marketing.
That's a lot of what our maturity pages are trying to do. We're being honest with ourselves and with our users about where we think the functionality of certain features is. In fact, when I started we had the wiki listed at `Complete` maturity, which I reduced to `Viable`: https://gitlab.com/gitlab-com/www-gitlab-com/merge_requests/...
If you take a look at the wiki strategy (https://about.gitlab.com/direction/create/wiki/) our next focus is going to be on making editing easier and improving navigation. We think these things will start to move the conversation forward for users and we'll see more adoption.
Even more, if you think we're not moving in the right direction with the wikis, please be vocal on the issue tracker, open an issue against the wiki strategy, or even open a merge request to change something. We're happy to have the feedback, and don't hesitate to tag me on wiki issues; I'm @phikai on GitLab as well.
But anyway, double brackets are not a Gollum-specific feature. They're a general wiki convention, going back to Wikipedia.
Since you're here, I think this issue is also misprioritized: https://gitlab.com/gitlab-org/gitlab-ce/issues/66898 You should reconsider, as a lot of integrations rely on webhooks, and broken images make the whole thing unusable.
That's fair, the point was that it's syntax specific to the wiki system vs. the rest of our markdown filters.
> Since you're here, I think this issue is also misprioritized: https://gitlab.com/gitlab-org/gitlab-ce/issues/66898 You should reconsider, as a lot of integrations rely on webhooks, and broken images make the whole thing unusable.
I'll take a look at this - it popped on to my radar recently but I need to dig in further to understand the use case and functionality here. Thanks for bringing it up.
I work for a GitLab shop, but we don't use the bugtracker (Jira) or wiki (Confluence), we mostly don't use the CI (Jenkins is just more flexible), and we definitely don't use the container stuff. We currently do use the review workflow but at various points have considered moving to a different tool for that as well.
So yeah, we would have been fine with a GitLab that did less -- a lot less -- and instead prioritized supplying first-class integration points and supported plugins for other best-in-class tools.
If they did this I wouldn't want to use it any more. The whole point of GitLab is that it does everything, preventing the need for any integrations.
Removing features and making them into extensions, would be a different thing.
GitLab then introduced another way of specifying dependencies using the keyword "needs" ("dependencies" was already taken by the half-baked implementation). This allows jobs from multiple stages to run concurrently, as you would expect. However, you are still required to shoehorn your jobs into stages, even though they should be completely deprecated by now. Worse, the restriction that jobs cannot depend on other jobs from the same stage still remains. On top of that, the visualization of a pipeline pretends that the new way of specifying dependencies does not exist, so all the arrows between the jobs are meaningless.
I would much rather have had a proper implementation of the simple, more general approach, than having to deal with the legacy of a hacky and half-baked solution.
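To make the complaint above concrete, here is a hypothetical minimal `.gitlab-ci.yml` sketch (job names invented) of what `needs` looks like: jobs can start as soon as the jobs they `need` finish, yet each job must still be assigned to a stage, and a job still cannot depend on another job in the same stage.

```yaml
stages:          # stages remain mandatory, even when using needs
  - build
  - test
  - deploy

build-app:
  stage: build
  script: make app

unit-tests:
  stage: test
  needs: ["build-app"]   # starts as soon as build-app finishes,
  script: make test      # without waiting for the whole build stage

deploy:
  stage: deploy
  needs: ["unit-tests"]  # but could not "need" another deploy-stage job
  script: make deploy
```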
The items you mention are scheduled for follow-ups in our epic https://gitlab.com/groups/gitlab-org/-/epics/1716. Your feedback on sequencing or how we are approaching the different improvements is more than welcome.
I would have loved for the feature to be useful for you too in the MVC iteration, and I'm sorry it wasn't. We are still working on it, though, and I hope that it does become valuable for you also. In the meantime you should still be able to use GitLab in the same way you always have - let me know if you're having trouble running pipelines without the DAG.
Also, this is just a single example that I pointed out to outline what I think is a problem with the whole development culture around GitLab, which puts too much focus on releasing early, and too little focus on quality assurance. Of course, you shouldn't spend years perfecting the next release, but you release too many features too early for my taste.
Releasing early to get feedback on issues is important to us but we can do better communicating around what's an early preview vs. a mature feature. The maturity page that's linked to in this discussion is actually part of how we are trying to improve our communication around that. This is more at the stage and category level, but features have a maturity level as well and it's worth us reflecting on how that can be made more clear.
In the DAG model (which really is just the model implemented by every sane build tool ever made), stages give you no extra expressive power; they only add arbitrary restrictions to that model and aren't useful at all.
As for your second point, and this is speaking personally and not really officially, I do think our feature goal has been historically a bit too ambitious (although hopefully transparently so: https://about.gitlab.com/company/strategy/#breadth-over-dept...) seeing as how we're barely out of the startup phase. But we have had a big hiring push this year and have almost tripled our headcount (which did introduce growing pains of its own, lol). Once we've stabilized a bit, we will be able to dedicate more resources solely to maturing features.
I hope this doesn't come across like making excuses, these are just my observations as a user-turned-employee. I will take your feedback about not being heard into consideration though so we can improve on that!
The point of the GP and others was that, as of now, there is nothing positive about the "unified experience for DevOps"; it is very buggy and lacking. Maybe someday.
Because _if_ I feel like the feature might be useful, I'm going to try to use it, and invest time in trying to use it, and only then find that it's not reliable or well-polished after all. The more such features there are, the more chance I'll spend at least some time being frustrated by at least one of them.
One of the joys of really well-polished software is that if the feature wasn't worth doing right (and not every feature is!), it simply isn't there at all, so if a feature is there I can count on it being well-thought-out and well-executed.
That may be because people who do care use a different product. Be careful you don't metric your way to extinction.
At some point tomorrow, once I'm on an actual computer, I'm going to have a dig and verify that they also don't cookie you until you've consented - if that's the case then I'm deeply impressed.
"Job marked as success when job terminate midway in Kubernetes" https://gitlab.com/gitlab-org/gitlab-runner/issues/4119
IMO a production CI bug that falsely reports success merits a high priority patch release but Gitlab doesn't seem to see it that way.
To find the relevant Product Manager see https://about.gitlab.com/handbook/product/categories/
Please note that even if you're not a customer our issue trackers are open so you can @mention the Product Manager to help them understand the severity and help to diagnose and fix things if you're so inclined.
We concluded the same internally recently, hence the focus on bugs and wider community contributions in 12.4. We focused too heavily on getting the DAG https://docs.gitlab.com/ee/ci/directed_acyclic_graph/ and Merge Trains https://docs.gitlab.com/ee/ci/merge_request_pipelines/pipeli... out the door and missed a bad bug; we're sorry.
The runner self-modifies the config file and uses it as state, which makes it basically impossible to deploy in an immutable way (like on NixOS). I found out that most config options (but not all of them) are also available as environment variables and CLI flags, so that's how I hacked around the issue, but it means I can't support all the features the runner has, because the environment variables don't expose all the available config options.
The fact that it's so hard to automate a piece of (in theory) stateless software is really annoying :(
In particular, the UX of runner registration was improved with the config template feature: https://gitlab.com/gitlab-org/gitlab-runner/merge_requests/1...
There is still the problem that a self-modifying config file doesn't fit nicely with Nix, though.
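For illustration, a hypothetical sketch of the file in question: the `config.toml` that `gitlab-runner register` writes and later self-modifies, which is exactly what clashes with an immutable filesystem. Field names follow the runner's config format, but treat the exact values and the set of fields as assumptions for your runner version.

```toml
# config.toml -- generated and later rewritten by `gitlab-runner register`,
# so it is both configuration and mutable state
concurrent = 4

[[runners]]
  name     = "nix-runner"             # hypothetical runner name
  url      = "https://gitlab.example.com/"
  token    = "RUNNER-TOKEN"           # issued at registration; this is the state
  executor = "shell"
```

The workaround described above is to pre-seed these values through environment variables and flags on `gitlab-runner register --non-interactive` (e.g. `CI_SERVER_URL`, `REGISTRATION_TOKEN`, `RUNNER_EXECUTOR`), but as the parent notes, not every `[[runners]]` field has such a counterpart.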
The Kubernetes story is quite telling. They released "cloud native charts" a year or so ago, which is their official way to run GitLab on Kubernetes, yet when they first dipped their own toes into the Kubernetes world, switching their Docker image registry to run on Kubernetes, they quickly discovered glaring omissions like a missing liveness probe and a storm of errors on any version update. That is what their customers were sold under the label "ready for Kubernetes".
Same goes for almost every feature.
On this episode, an over-ambitious family restaurant decides to expand like crazy and ends up with a huge menu of stuff that the overworked chefs can't cook properly.
Some TV chef should be yelling at them for wasting the opportunity of a lifetime by ignoring good business sense, or something like that.
384 engineers, out of a total of 873 employees. I'd say that's a pretty reasonable size.
You are right, we had some missteps with the Helm chart that were unfortunately not discovered by us or by others. Our test cases of scaling the registry up and down worked perfectly in all the synthetic tests we did, so it was not obvious to us that the liveness probe was missing. In hindsight it is quite obvious, but at the time we had 16 other charts to write and some things slipped through. For a number of other services, community and paying users reported issues that we solved as we went further. The registry is one of the components that receives traffic in bursts, so for the majority of users this was probably never an issue, or it was one of those "gremlin" moments.
For GitLab.com, things are quite different. We hold 2PB of data in Docker images alone, and there is a continuous flow of traffic. At GitLab.com scale, none of the services we are porting to K8s have the luxury of bursty traffic, so we are careful in how and when we switch over traffic. The good thing is that all the edge cases we found are fixed and now in the Helm chart releases, so users really can treat this as ready for K8s. If you are curious about all the issues we had to cover during this process, see the main registry migration epic: https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/70
Also, the GitLab UI is a huge mess. It has all the features, sure. But the UI is not that user friendly. Don't use flat design the wrong way. Use contrast for buttons, PLEASE. GitHub is sort of flat too, and their interface is great, with adequate use of contrast.
You can follow along on some of the progress we are making in each release post in the "Performance Improvements" section. For 12.2 you can see we had 58 MRs related to performance: https://gitlab.com/groups/gitlab-org/-/merge_requests?scope=....
On the UX front, two efforts under way are establishing experience baselines (https://about.gitlab.com/handbook/engineering/ux/experience-...) as well as a common design system (https://about.gitlab.com/handbook/engineering/ux/pajamas-des...). Hopefully these efforts allow us to look at our UX holistically, and to focus on making high quality components that are used throughout the product.
Again thanks for the feedback, and hopefully we will have some more concrete improvements here soon.
Keep up the good work.
Anecdata, but to me it seems more like their official server is the slow part; I run my own instance and it seems pretty snappy.
We are currently working on addressing performance, having a dedicated workstream for testing and measuring performance across the different GitLab versions.
Our plan is to publish the results across the versions as they have gone through testing. https://gitlab.com/gitlab-org/quality/performance/wikis/Benc...
We currently test for latency. I will take the average 500ms feedback to the team.
We have already identified slow endpoints with high latency so they can be improved. E.g:
I was so shocked that I re-visited the page in incognito mode. 10+ seconds of waiting happened again.
Hope the improvement goes well.
If you don't mind can you please share the area you are located (roughly), I will bring this feedback to our infrastructure team.
Plus you can make it as fast as you want, if self-hosting is an option, by throwing more hardware at it.
No straight upgrade path when you skip major-ish releases. I end up wasting a day sifting through cryptic issues and docs from vague error messages.
Suddenly can't commit because now my branch is "protected" and failed a deploy pipeline I didn't implement courtesy of GL's "vision".
Not to mention all this comes at the cost of increased complexity and resource usage.
I wish there was a "thanks but no thanks" setting to keep our instance the bare minimum without having to adopt features whole hog every time they get inspired to incorporate the next "big" thing.
We've put a lot of effort towards making the upgrades as painless as possible, with few surprises. One way we've achieved this is by requiring users to update to the last minor version, before making the jump to the next major version, as you note. The reason for doing this, is that we add a lot of validation to that last minor package which checks for any deprecated features/configuration.
This way if you try to upgrade with a configuration that would result in GitLab no longer functioning, we abort the upgrade and tell you exactly why, leaving you with a functional instance in the interim.
Please upgrade to the next GitLab version before trying to upgrade to the latest. The command for upgrading to a previous version is as follows:
sudo apt-get install gitlab-ce=10.8.7-ce.0 -V
See the url below for available releases:
Going from 10 to 11 was even more difficult because the gitlab.rb file format changed, and I was missing required information that was not apparent from the error messages after running "gitlab-ctl reconfigure" (I think the error was Chef-related, but my memory is hazy; below are the GitLab issues that finally provided a hint). This wouldn't be so bad if my GitLab 10 instance were working, but after going from 9 to 10 all I got was a blank page, so at that point I had no choice but to go all the way.
After finally replacing the gitlab.rb file with the template and adding back info from the old file, I was able to finally run reconfigure and update as needed. Of course, I was soon alerted by coworkers that their commits were suddenly being blocked for failing a commit pipeline. Overall it was not a fun experience and I'd rather not upgrade GL unless I have to but I also don't want to skip versions if I have to repeat the hellish upgrade process by manually babying the upgrade.
It is very important for the team to ensure upgrades are smooth. Right around 50% of our installations are running a version at most 3 months old, and we want to continue to improve that number so everyone can take advantage of the latest fixes, features, and security updates.
The team has opened another issue to explore ways to make it more apparent which version should be upgraded to: https://gitlab.com/gitlab-org/distribution/team-tasks/issues....
We are currently working on upgrade testing for single-hop upgrades. We are hoping to have this for 11.9 going forward.
This effort is currently being tracked here https://gitlab.com/gitlab-com/www-gitlab-com/issues/4852
One of the main reasons I moved was the price (free) and the available CI. I use a $5 DO droplet to do all my CI running, which gets me unlimited usage. It's awesome, and it has only had to be rebuilt once in 2 years due to unresponsiveness.
I've been a big fan of the CI - I use it to build my Docker images, then they get run on k8s. Note - I do not use the AutoDevops feature - so I really don't know how that works for people.
I pack the `gitlab-ci.yml` file with my main building steps and all the other tasks I need to run: deploying to k8s, deleting pods if necessary, running db migrations, stuff like that. It works great - everything just works at the push of a button. And now that I've ironed out the `gitlab-ci.yml` file, I don't even have to think about it anymore when it comes to a new project.
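As a hedged sketch of what such a reusable `gitlab-ci.yml` might look like (image names, job names, and app labels are hypothetical; the `$CI_REGISTRY_*` and `$CI_COMMIT_SHA` variables are GitLab's predefined CI variables):

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl            # hypothetical kubectl image
  script:
    - kubectl set image deployment/app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  only:
    - master

migrate-db:                         # run on demand, e.g. after a schema change
  stage: deploy
  image: bitnami/kubectl
  when: manual
  script:
    - kubectl exec "$(kubectl get pod -l app=app -o name | head -n1)" -- ./manage.py migrate
```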
Honestly - I like GitHub's UI a bit better as it's a little more modern, but with GitLab's free private repos - it just lured me at first and I'm content for now.
Thanks for the issue!
"So breadth over depth is the strategy for GitLab the company. GitLab the project should have depth in every category it offers. It will take a few years to become best in class in a certain space because we depend on users contributing back, and we publish that journey on our maturity page. But that is the end goal, an application of unmatched breadth and depth."
Now my experience is exactly the opposite. I like all of the things GitLab does; it is an obvious and significant value add. But... when I experimented with it in a home lab and elsewhere, there were several things that gave me pause, and I've seen the same expressed elsewhere, which alleviates my worry that I may simply be wrong.
Now I'm in the position where I like something but hesitate to recommend it, worrying that the details will make my life and my users' lives more difficult, not less; not because of missing big features, but because of quality and depth, as you call it. Maybe it's the right thing for your business to go for breadth for years longer and seek depth in the future to solidify your position. But for now, as a potential user, GitLab might not be right for me, which I think is unfortunate.
NPS = Net Promoter Score
We'll try to link from the legend but since it is automatically generated this is hard.
Each release cycle there'd be a new release with great new features - yay! - followed by 3-4 patch releases to fix the bugs. You eventually get worn down by it and stop bothering to use these things.
I see the same with other dual-licensed stuff. Kong is a good example: bugs that actually stop routing traffic are eventually acknowledged, but with no feedback unless I keep chasing them. However, I'll get regular emails with new products. They've got a new service mesh they announced recently. There's not a chance in hell I'll use it until they fix the bugs and issues I've reported in the core product.
Not cool, it’s only in Ultimate. No way am I going to be able to get my company to shell out for that.
We do our best to align features with the likely buyer according to our pricing model (https://about.gitlab.com/handbook/ceo/pricing/). Sometimes we don't get it right and need to fix our mistake after a feature is launched based on feedback from our wider community (example: https://gitlab.com/gitlab-org/gitlab-ee/issues/13856). We have been and will always be transparent about our pricing model and our mistakes in aligning new features with it.
If you're interested in collaborating with us as we build out requirements management, we would love that. It will help us build the right features for the right likely buyer. Please leave some additional thoughts on the epic discussion if you have them!
Convincing them to spend $20/developer/month is going to be hard. Convincing them to spend $100/dev/month is going to be a whole different ballgame (even if it’s ultimately perfectly great unit economics if gitlab increases productivity by at least 2%).
I fear that GitLab has simply bitten off more than they can chew. They try to implement so many features that many things feel half-baked. Ultimately, those features that aren't finished are useless to paying or enterprise subscribers.
GitHub has been successful because they focused on doing one thing very well. Now that they have established market dominance, they are slowly introducing new features - but in a way that makes them feel like useful pro-level products right away.
I don't know how GitLab recovers from that other than continuing to march forward. If I were them, I would stem the tide of new product introduction and focus on building out what is already there. It's a very ambitious roadmap they have given the number of features they want to implement.
I like the GitLab flow, and I absolutely love the "Create merge request" button in the issue detail. I miss it dearly whenever I'm working on GitHub, and `hub pull-request` isn't quite the same.
I don't like the code review functions: for example, in the Changes tab, where I can both resolve discussions and double-check the code, cycling through unresolved discussions breaks every time, and I have to go back to the Discussions tab, where checking the code means opening the linked diff in another tab.
I don't like the settings UI, with all those "expand" buttons so far on the right side. I mean, there's even one of those buttons in places where you have only one fieldset worth of settings.
I did notice there's a bug where the hover text doesn't go away after clicking on the "Jump to first unresolved discussion" button and I've created an issue to fix that (https://gitlab.com/gitlab-org/gitlab-ee/issues/15462.)
Also, the auto-collapse size is far too small, and the fact that you can't even see the diff for some large changes is infuriating. It makes using it for code review a struggle. My company is trying to move us all to gitlab-ee, but the lack of features and polish in the code review tool is preventing some teams from making the jump.
I feel like the code review portions are more like a half-circle, and not a heart. There are so many deficiencies that make it unusable for large projects.
If I were GitLab I would look at Gerrit for inspiration on what to use for code review. It's ugly, but it's very functional and performs quite well on meager hardware.
I know there are feature requests to make code review more usable but I can't understand how you could dogfood that code review tool and not go insane.
Also, merge trains for FF-only repos, please!
Improving the performance and scalability of merge requests is the priority for the Source Code team right now, and we are starting with progressively loading the diffs over coming releases. See https://gitlab.com/groups/gitlab-org/-/epics/1816. Currently we load them in a single request which is a serious bottleneck.
In parallel we are exploring a range of UX changes to streamline and improve the code review experience. If you have any specific ideas please let us know, or create an issue!
> Also, merge trains for FF-only repos, please!
Absolutely! We are iterating towards this https://gitlab.com/gitlab-org/gitlab-ce/issues/58226!
Made me think of https://www.reddit.com/r/dataisbeautiful, so I submitted it there https://www.reddit.com/r/dataisbeautiful/comments/d1xqfz/git....
Since then, I've moved most of my personal projects over to GitLab and try to evangelise them whenever possible.
Now, most of my code is hosted on GitHub, with a few projects hosted in a Gitea instance that I run myself.
This is calculated through those surveys you get sometimes that ask something along the lines of "From 0 to 10, how likely are you to recommend X to a friend or colleague?"... or something like that.
So I think it's determined by a combination of personal reach-outs and probably the same questions scoped to particular features.
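For reference, since NPS keeps coming up in this thread: the standard calculation buckets the 0-10 responses into promoters (9-10), passives (7-8), and detractors (0-6), then subtracts the detractor percentage from the promoter percentage. A quick sketch:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6 (passives, 7-8, are ignored);
    the result is promoter% minus detractor%, ranging from -100 to 100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 10, 9, 9, 9, 8, 8, 7, 6, 3]))  # → 30
```

Note that passives drag the score down indirectly: they count toward the denominator without adding to the promoter tally.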
You can see some of the discussion here (https://gitlab.com/gitlab-com/Product/issues/323) which continued to some degree here (https://gitlab.com/gitlab-com/Product/issues/386).
If you have feedback on some other metrics to use, please contribute! I don't think we've landed on the right formula just yet. But directionally, we'd like these to be based on user outcomes rather than our own opinion.