> We evaluated CI/CD minute usage and found that 98.5% of free users use 400 CI/CD minutes or less per month.
Okay, so that 1.5% of free users, each using at most 1600 minutes more than the new 400-minute limit... generates enough cost to actually matter and make this change worth it?
Or they anticipated that number going up if they didn't make the change?
It seems odd to me to say "This hardly affects anyone at all, almost everyone can keep doing exactly what they are doing for the same price (free in this case)", AND "this was necessary for the sustainability of our budget."
What am I missing?
Of all the free users, how many of them even have a repository? Out of all the users who have a repository, how many of them actually make any use of CI/CD?
They are saying "1.5%" to make it sound small, but those 1.5% could account for a significant portion of the total CI/CD minutes used.
That said, I'm not complaining about free.
Man, I was expecting some sort of business/budget answer that I wouldn't have insight into, not being in this kind of business. I was not expecting straight up "misleading statistics, that number probably doesn't mean anything like it seems." :(
And I guarantee you that they did the math to optimize for conversion and churn. Behind all product decisions is an army of analysts.
So then it's more like 1 in 6 of the people that do use CI/CD need more than 400 minutes per month.
Just to be clear, these numbers are completely made up. They seem reasonable though.
They could have grandfathered the existing accounts into the 2000 minutes. Then the segment would no longer be growing and would stay small, no?
For a toy example, imagine one user uses 2k minutes, 10 use 100 minutes, and 1000 use 1 minute. In this case you have "Over 99% use 100 minutes or less", but 50% of your cost is going to the one 2k minute user.
I have no idea if this is their exact curve, just showing how a fat-head distribution could explain what they are saying. I'd expect the "demand" (i.e. minutes used if they were not constrained / all paid for) to be a power law distribution.
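To make that concrete, here's a quick sketch of the toy example in Python (the numbers are the made-up ones from above, not GitLab's actual data):

    # Toy distribution: 1 user at 2,000 min, 10 at 100 min, 1,000 at 1 min.
    usage = [2000] + [100] * 10 + [1] * 1000

    total = sum(usage)                                   # 4,000 minutes
    light = sum(1 for m in usage if m <= 100) / len(usage)
    heavy_share = max(usage) / total

    print(f"users at <= 100 min/month: {light:.1%}")           # ~99.9%
    print(f"cost from the one heavy user: {heavy_share:.1%}")  # 50.0%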
Also worth noting that there's a sort of bimodal selection effect going on here -- it's unlikely that you use exactly 2k minutes/month, since you'd be hitting your limit and that would be disruptive. So the closer you get to using 2k minutes, the more likely you are to already pay for more than the free tier. So I'd expect this pricing change to also force some users that were previously at 400 ± 100 minutes to upgrade too; this will impact some of the 98.5% of free users who are using <= 400 _on average_ per month.
I am a little dismayed to find out that open source projects no longer get unlimited minutes, though.
Do you have a blanket position against being curious about the business models or policies or statements of companies offering free stuff about their free stuff? Does that apply to facebook and google too?
Do CI/CD minutes ever go down once you start? I suspect not.
They probably plotted the trend line and realized that they had to do something.
Or, it could be bifurcated. Most people use like 2 minutes but 1 in 100 use 2000 minutes.
They may also be tracking and saw that people were using the free 2,000 minutes as "overflow" with multiple accounts.
Which doesn't necessarily seem like enough to justify a disruptive change that might scare customers... but I don't really know; I'm hardly an expert, nor do I have experience in this kind of business. Your hypothesis is that it is enough?
That said, this move seems 100% reasonable. I care about having a free tier. If they were killing the free tier, I'd be sad. But if I'm not paying anything, I'm okay being required to make my CI/CD pipeline efficient for my benefactor. I'd even take less than 400, gladly.
I don't think he feels guilty. I think he feels the risk of becoming dependent upon a service when the service is clearly unprofitable.
Paradoxically, a lower free tier makes me a lot more likely to use Gitlab CI now, since I now know they know their costs and limits, and that they're no longer eating costs that set up a future drop of the hammer at some undetermined time.
The idea is that instead of paying Google/Facebook to display ads in the hopes that they will convert people as leads, and hopefully down the line as paying customers, it's usually much, much cheaper to provide a free plan instead, which serves as a "trial" of the final product.
Besides the sales factor, it also helps you get real users early on, who can provide invaluable feedback and help you prioritize the parts of the software with real demand versus what you imagine people would want.
To be clear, you store all your code on Gitlab's servers (i.e. not self-hosting git instance) but just "outsource" the CI/CD work to your homelab? That's my ideal.
The omnibus installer (for GitLab itself) works - but knowing a bit about Rails (GitLab is a RoR monolith) - it's a bit terrifying.
But the runner is very nice.
The runner will start jobs that do the real building (or whatever you have coded in your CI). Again you can choose to have jobs executed in docker containers. Also documented by gitlab, easy to set up. Of course you need to provide a suitable Docker image where your build can work. Nobody can do that for you. In simple cases you can pull something existing from Docker hub without further additions.
I think for CI/CD having a generous free tier is great because it makes it easier for people to get started and really dig into a project, not to mention the obvious benefit to open source that works as a continuous PR machine. Practically everyone knows what Travis CI and CircleCI are.
I still agree that it can be unnerving at times. I worry about services that seemingly offer no paid tiers. Like, draw.io. Thankfully draw.io actually seems to be sustainable, but you wouldn't guess it based on their very unobtrusive app!
This describes how I've felt about Discord for years.
On the other hand Slack's pricing is pretty crazy.
There are a lot of audits and regulations, in addition to tighter security, that Slack needs to satisfy to prove to its enterprise customers that they can trust their employees blasting confidential information over it every day of every year.
The impedance mismatch brings a lot of the cost to software, in many different forms.
Nitro is for individual accounts and provides features for you as a user, boost is for the server and provides features for every user of the server.
Maxing out a server takes 30 boosts but the level 3 perks seem pretty… thin on the ground:
- +100 emoji (from 150 to 250)
- 384Kbps audio (from 256)
- 100MB uploads (from 50)
- custom URLs
Only the third one is somewhat useful, but 15 boosts for that doesn't really seem worth it.
As to price, a boost is $5, so a level 3 server is indeed $150 (level 2 is half that at 15 boosts). However, Nitro ($10) provides 2 boosts and 30% off all boost purchases, meaning you can max out a server for $108, or $55.50 for a level 2. Nitro Classic is only $5 and also provides 30% off boosts, but doesn't include the free boosts, so it comes out at $110 to get a level 3 server on your own.
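For anyone checking that math, here's the arithmetic spelled out (prices as quoted above; assuming the 30% discount applies to every purchased boost):

    BOOST = 5.00
    face_value = 30 * BOOST                        # $150 for level 3 outright
    # Nitro ($10/mo): 2 included boosts + 30% off purchased boosts
    nitro_total = 10 + (30 - 2) * BOOST * 0.70     # $108.00
    # Nitro Classic ($5/mo): 30% off boosts, but no included boosts
    classic_total = 5 + 30 * BOOST * 0.70          # $110.00
    # Level 2 takes 15 boosts
    level2_nitro = 10 + (15 - 2) * BOOST * 0.70    # $55.50
    print(face_value, nitro_total, classic_total, level2_nitro)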
Is it worth the cost? Probably not. But I'd put it above larger uploads in terms of importance, and my discords hit the upload limit pretty often.
I agree that a free tier is moderately reckless, as it invites people like me, who bookmark https://free-for.dev, to devour your service with no gain.
The pricing question though is tough. How much would an instance in AWS cost you for 4000 minutes? Two dollars? Pretty sweet markup if you can find a buyer.
But you are right: such a model is not sustainable for very long.
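For a rough sanity check of that markup, in Python (the hourly rates are assumptions based on published on-demand pricing; check current prices for your region):

    # What ~4,000 build minutes might cost as raw compute on AWS.
    hours = 4000 / 60                    # ~66.7 hours
    t3_micro = 0.0104                    # $/hour, assumed on-demand rate
    m5_large = 0.096                     # $/hour, assumed on-demand rate
    print(f"t3.micro: ${hours * t3_micro:.2f}")   # ~$0.69
    print(f"m5.large: ${hours * m5_large:.2f}")   # ~$6.40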
What kind of server are we talking about? What CPU? How much RAM? How fast is the storage access? Is my instance virtualized? And if so, do I have dedicated resources?
I have a build that takes around 70 minutes on an 8-core i9 with 32 GB of RAM and M.2 SSDs. What does that translate into for Gitlab "minutes"?
In the future, for Linux and Windows Runners, we will offer more GCP machine types. For our soon-to-launch macOS Build Cloud beta, we are planning to start with one virtual machine size and then possibly offer different machine configurations at GA.
And yes - the virtual machines used for your builds on GitLab.com are dedicated only to your pipeline job and immediately deleted on job completion.
Finally, the only way to know how long your current build job will take on a GCP n1-standard-1 compared to the 8-core machine is to run the job and compare the results. I assume that your 8-core machine is probably a physical box, so you should, of course, get better performance than a 1-2 vCPU VM.
A few reference links:
Darren Eastman: Product Manager GitLab Runner
I can see free plan users starting to see their CI jobs randomly timeout if they have access to those machines.
I get that execution time isn't a bad metric, all things considered. But I would have expected actual CPU `time` (1), maybe mixed with memory usage.
That your load may spend more or less time waiting on IO instead of actually using the CPU... I would not expect that to affect your charge. That's the main difference between wall time and actual CPU time, right?
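A minimal illustration of the difference in Python -- sleeping (standing in for IO wait) advances wall time but barely touches CPU time:

    import time

    wall = time.perf_counter()        # wall-clock time
    cpu = time.process_time()         # CPU time of this process only

    time.sleep(2)                     # "IO wait": consumes almost no CPU
    sum(i * i for i in range(10**7))  # actual CPU work

    print(f"wall time: {time.perf_counter() - wall:.2f}s")  # ~2.x s
    print(f"CPU time:  {time.process_time() - cpu:.2f}s")   # ~0.x s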
Under a system like that, users could maybe choose between a couple of different worker types. Or if there's only ever the one type, periodically the 'n1-standard-1' could be swapped out for whatever is the latest-greatest for the same price.
For instance, "time to compile XYZ well-known project"?
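One way to implement that would be billing in "standardized minutes": scale wall-clock minutes by how the current worker benchmarks against a fixed reference. Purely a sketch; both benchmark numbers below are hypothetical:

    def standardized_minutes(wall_minutes, worker_bench_s, reference_bench_s):
        # bench values: seconds to compile some well-known project
        # (lower is faster). This bills by work done, not wall time.
        return wall_minutes * (reference_bench_s / worker_bench_s)

    # A worker that builds the benchmark in 300s vs a 400s reference is
    # ~1.33x faster, so each of its wall minutes counts as 1.33 minutes.
    print(standardized_minutes(30, worker_bench_s=300, reference_bench_s=400))  # 40.0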
It's presumably in a big standardized DC. They don't have a continuum of instance configurations; they probably upgrade rarely and systematically. If they are mid-upgrade, just have 2 instance types available, then sunset the older one. Since the upgraded instance is a new instance type, it can have new (or the same) pricing. In addition, they could publish benchmarks for each instance type if they want.
It is literally what we see with cloud providers having v1/v2/v3 names for some instance types.
I am being very pessimistic with these numbers, but I am continually amazed at how slow computers in the cloud are compared to my desktop. And when you're being charged by the minute, there is no incentive to make the computers faster, of course -- the business incentive is to make them slower! Buyer beware. (To be fair, they are getting a lot better Wh/build out of their system than you are. If you were paying for the electricity and cooling and got paid no matter how slow the build was, you'd make the same decision.)
They give you a certain power for a number of minutes. It seems reasonable.
If they upgrade their servers, they can fit more minutes per machine, or give it away by speeding up runs for free.
If you want to think about “minutes on XYZ machine”, then build your own CI on AWS and pay exactly what you want.
Either they went the cheap route and stuck it on some price efficient EC2 instances, or they went the vogue-but-expensive route of lambdas for "rapid processing and ease of development"
Exactly. Microsoft-owned GitHub is more interested in getting more customers for Azure than for GitHub itself.
So as you get going initially with GitLab SaaS, you don't have to set up your Runners for your first CI/CD jobs. Then, depending on your requirements, and as your use cases evolve, you can easily set up your own Runners but still benefit from the included minutes.
Darren Eastman - Product Manager GitLab Runner
“We want to reduce cost and make more money, therefore today we reduce the number of free minutes from 2000 to 400 for free accounts. There are options to buy more minutes. kthxbye.”
So tired of corporate PR BS.
Does anyone else think this is a Gitlab campaign against overuse of monomorphization in Rust projects? I just can't get myself to use dyn...
No, not really. I mean, in healthy projects build times are dwarfed by the time it takes to run tests. In web development projects, even the delivery and deployment steps dwarf build times.
I don't doubt that's often true but many Rust projects may be outliers here. A full, non-incremental build of a Rust project involves building all of its dependencies. This can add significant amounts of time if a project uses a big framework like Actix-web, which adds many dependencies.
My tests however run very quickly, ~1ms each. So running thousands of tests only takes a few seconds, even on relatively slow gitlab runners.
It definitely helps but not as much as in C++.
This assertion on the amount of tests makes no sense at all. Tests are not about language features. Tests are about checking invariants, checking input and output bounds, and checking behavior. Tests focus on the interface, not the implementation. Tests only work if test coverage is high.
-James H, GL Product Manager, Verify:Testing
If you haven't touched your gitlab pipelines for a few months, check out DAG pipelines - I got my web project deployment pipeline down from 25 to ~12 minutes by running tests sooner.
I presume this is some kind of joke. If not, why on earth would GitLab campaign against the use of dyn types? GitLab doesn't even use Rust.
Even if those companies aren't losing revenue (they probably are), there is less investment money too, so they are still impacted.
Those things the OP is talking about are all investment, so it's natural that they get cut.
1.5% of 6m existing free tier users is 90k accounts who now need to reduce usage, pay Gitlab, or move to another platform. Only one of these options is fast!
I don't think this is the only reason they are doing this, but it does sweeten the pot.
edit: I had a look at the docs but they're quite overwhelming, with lots of options. I run Linux on my laptop. If the setup is too complicated I'll just purchase some minutes and call it a day.
Now I am going to see if they fixed the bug where copy-to-clipboard stopped working a few months ago. Why would I want some JSON blurb instead of the branch name when copying it?
I am asking because my use is definitely in the minority of the user base which is just slapping projects into a managed git repo that is not owned by Microsoft.
This was before github decided to allow private repos for free.
AWS CodeBuild will always be an order of magnitude cheaper; it's just slightly harder to set up, but it works very well. It's unclear how all these other services will ever compete with that.
For example, to run a CI server on Gitlab for a team of 8 that never spun down, it would cost $492 per month on their 'shared' runners. On AWS CodeBuild, you get a DEDICATED EC2 instance for $223 per month and only pay for what you use when it's running.
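Plugging those figures in (my numbers above, not verified against current pricing), the dedicated box wins even before accounting for idle time:

    # Figures quoted above; not verified against current pricing.
    shared_monthly = 492.0    # GitLab shared runners, always-on, team of 8
    ec2_monthly = 223.0       # dedicated EC2 instance, full month
    # If you only pay while builds run, cost scales with utilization:
    for busy in (1.0, 0.5, 0.25):
        print(f"{busy:.0%} busy: ${ec2_monthly * busy:.2f}/mo vs ${shared_monthly:.2f} shared")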
If you’re using a dedicated ec2 instance, most providers (gitlab, buildkite, github, etc) will let you connect it as an agent for free.
IMO using your own runner is a better way to go in general because the standard ones tend to be very underpowered and you can get much faster builds without spending much.
At that point different providers are largely competing on price and UX (imo Buildkite have the best developer UX and the time saved as a result is well worth the price).
(Not affiliated with buildkite other than as a user).
And no, we don't do this on purpose in order to have you buy more minutes.
Raspi and similar devices will some day hopefully rid the world of this tyranny of having to trust a website like this, pay them for eternity, hope they don't go down, hope they don't raise prices (oops), and hope they don't obfuscate pricing like, ermmm, well, every cloud provider has.
The devices will pay for themselves within the first year, and generally my philosophy is to avoid building your operation around 50 services cobbled together, because there is a real possibility you will spend more dev time trying to understand a service's idiosyncrasies than actually just rolling your own.
At my last employer, we used the free tier of CI/CD through CircleCI, as it was sufficient and easy to spin up for testing a couple of small internal libraries we needed to hook things together with a SaaS product we were using. We weighed the benefits and came up with a number that balanced the estimated implementation and running costs of self-hosting against the free tier offering and the estimated cost of implementation there.
Once you factor in engineering costs and the additional server to maintain, it made sense to go hosted for us. But every team is different, and hardware cost isn't the only thing. You need to consider the running cost of utilities, maintenance, and in the case of larger equipment, even cooling costs.
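That weighing can be sketched as a simple break-even; every number here is hypothetical, just to show the shape of the calculation:

    # Build-vs-buy break-even sketch; all numbers are made up.
    setup_hours = 20          # engineer time to stand up self-hosted CI
    hourly_rate = 100.0       # loaded cost of an engineer-hour
    maint_hours = 2           # upkeep per month
    server_monthly = 50.0     # hardware, power, hosting per month
    hosted_monthly = 0.0      # free tier, as in our CircleCI case

    months = 12
    self_hosted = setup_hours * hourly_rate + months * (maint_hours * hourly_rate + server_monthly)
    hosted = months * hosted_monthly
    print(f"self-hosted: ${self_hosted:.0f} vs hosted: ${hosted:.0f} over {months} mo")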
That said, for personal projects, yeah, I just kick things onto my home file server, since it's running anyways, and normally has nearly no load other than managing my ZFS and occasional backup operations.
You can get a Ryzen 5 with 64G of RAM for like $40/month on providers like Hetzner. While I get your point about maintenance, it's not right to say that dedicated servers are that expensive (you also can't compare the performance of 2vCPU/4G with a dedi, but that's beside the point).