There’s also a proposal https://gitlab.com/gitlab-org/gitlab-ee/issues/4517 to support federation between GitLab instances. With this approach there wouldn’t even be a need for a single central hub. One of the main advantages of Git is that it’s a decentralized system, and it’s somewhat ironic that GitHub constitutes a single point of failure.
In theory this could work similarly to the way Mastodon works currently. Individuals and organizations could set up GitLab servers that would federate with each other. This could allow searching for repos across the federation, tagging issues across projects on different instances, and potentially failing over if instances mirror content. With this approach you wouldn’t be relying on a single provider to host everybody’s projects in one place.
But that can change.
Recently (just as most of the world has apparently moved on from desktop computing, haha) Linux is pretty much fine for traditional desktop computing. I have a current Mac, Windows, and Ubuntu on my desk, and they are essentially interchangeable except for a few special-case purposes (say, Final Cut video editing, or opening one of those weird wonky Excel sheets that only open on Windows).
Firefox, too, is suddenly performant and I've switched to it as my main browser (for default website use) — something absolutely, utterly unimaginable two years ago.
I hope that GitLab is reaching the same kind of transition point. I've heard horror stories from people that used it 2-3 years ago and are happy to be back on GitHub. But I don't hear much recent grumbling. I moved a toy project to it and it seems nice. As fast as GitHub for me (though I am in Japan, and GitHub is slow in Japan). More features. A nice, sort of adult/professional aesthetic. And — yay! — the open source core it's always had.
I might be wrong, since I haven't used it for reals, but it looks like they might have hit that critical usability threshold?
Open source and worse isn't a very compelling sales pitch. But an open source tool that is broadly equivalent to a closed source one is generally more attractive, especially when you're talking about services that will be used as part of your core infrastructure.
Its market share was actually much higher in the past and has only gone down (the introduction of Quantum didn't really help market-share-wise).
And in terms of polish... Firefox was MILES ahead of any browser for a very long time. When Chrome launched, it was as horrible as you can imagine in terms of functionality, and Firefox at that time was already as solid as it is today. But Chrome advanced very fast. People often attribute Chrome's success (merely) to Google's push, but Chrome's technological development plays a bigger role IMO.
You click enough links in Slack that open in Firefox and don't work, and then you have to copy and paste the Meet link into Chrome... eventually your commitment wears down and you just start using Chrome as the default again.
And I hate that. It's a very IE6 type move on Google's behalf. Short of applications / my system giving me a clear way to say "always open links on this domain in this browser" it makes the workflow a pain.
Maybe the Facebook stuff will make that type of thing more popular. If I could make Firefox my default browser, but always open Facebook and Google links in Chrome I'd be pretty happy. Currently running Linux Mint.
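There's no standard switch for this today, but one way to fake it is a tiny dispatcher script registered as the system's default browser that routes URLs by domain. Everything below (the domain list, the browser commands, the script itself) is a hypothetical sketch, not a built-in feature of any desktop:

```python
#!/usr/bin/env python3
"""Hypothetical per-domain browser dispatcher.

Register this script as the default browser (e.g. via an xdg .desktop
entry on Linux Mint); it routes listed domains to Chrome and everything
else to Firefox. Domain list and browser commands are assumptions --
adjust for your own setup.
"""
import subprocess
import sys
from urllib.parse import urlparse

# Domains that you want opened in Chrome (example list).
CHROME_DOMAINS = {"meet.google.com", "facebook.com", "www.facebook.com"}

def pick_browser(url: str) -> str:
    """Return the browser command to use for a given URL."""
    host = urlparse(url).hostname or ""
    if host in CHROME_DOMAINS or host.endswith(".facebook.com"):
        return "google-chrome"
    return "firefox"

if __name__ == "__main__" and len(sys.argv) > 1:
    url = sys.argv[1]
    subprocess.Popen([pick_browser(url), url])
```

The awkward part is the registration step, which differs per desktop environment; on freedesktop-compliant systems a `.desktop` file plus `xdg-settings set default-web-browser` should do it.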
When I left Chrome a few years back, the sluggishness of Firefox was quite palpable. I eventually moved to Edge, which, while crashier, overall performed better. But after both the Electrolysis project, and Quantum, Firefox won me back quite solidly.
Long story short: Speed > features when it comes to web browsers.
That's quite a weird thing to think in 1998 with respect to a browser, given that they'd only had any sort of a browser for less than 3 years at that point.
To which I gave a related anecdote and responded:
"So I would say that releasing very buggy software without blinking is not a new concept to Microsoft."
Since they released an entire OS that was massively unstable (win98), while claiming their browser was an integral part of it, it doesn't surprise me that they would release a buggy browser at a later date and expect people to use it.
I wasn't only addressing browsers, but rather my perception of the general quality history of Microsoft products at that time. It affected my perception of Internet Explorer, which at that time was not as popular as Netscape Navigator.
The reason this is a problem is because when a malicious webpage hijacks your browser, and you have to forcibly close it to escape... Edge helpfully reopens the malicious webpage each time you relaunch Edge until you find out you need to hack on the registry to fix it.
However, I think you are probably correct in that I think it is far more likely that Chrome has a good automated regression test suite than it has a good unit test suite.
For a small unit of code, or a library, the unit tests effectively prove that the code/library does what it says on the box.
For a continuously worked-on application, regression tests hold the guarantee that the application continues to work correctly for the thousands or millions of use cases built up over its lifetime - even when the implementations, algorithms and libraries used change underneath.
Acceptance tests are incredibly important. They tell you if the system is working. No amount of unit tests are going to help you with that. Once you have accepted the behaviour, what you're really interested in is whether or not the behaviour has changed. You do not need your acceptance tests for that -- your unit tests will tell you.
I'll write it a bit more concisely because I think it is important: acceptance tests tell you whether or not the code is working correctly. Unit tests tell you whether or not the code is doing the same thing it was doing the last time you ran the tests.
The reason I don't favour a large suite of acceptance tests is because they are unwieldy. It's fine for a few months, but once you get a few tens of thousands of lines of code, you will end up with a lot of acceptance tests. These acceptance tests are extremely hard to refactor. It's extremely hard to remove duplication. Over time, they get more and more problematic until you are spending more time trying to figure out how to make your acceptance tests pass than you are trying to figure out how to make your production changes.
Unit tests, when written in specific ways, have less of a problem with this. Some people think of a "unit" as being a class. But really a "unit" is anything that you might want to isolate in your clamp. It can be a function. It can be a class. It can be a module. Your unit tests should probe the behaviour in the unit (and by "probe" I mean, expose the internal state). Michael Feathers has a great analogy of a "seam" which runs through your code. You try to find (or make) that seam and you insert probes to show you the state in various circumstances.
IMHO, you should write unit tests the exact same way you write any code. Your "circumstances" (or scenarios, I guess) consist of creating the data structures to give your initial state. Your "tests" consist of probing the state along the seams and comparing it to expected values. This is simple code. You should be able to maintain this code using the same tools you use to maintain any code. You should write functions. You should write classes. You should write modules. You should use all the tricks of your trade to reduce the complexity of your "test" code.

Your goal is to create specificity when tests "fail" (the probe detects behaviour different than your expectation -- or the clamp detects that your wood has slipped). When behaviour changes, only a few tests (ideally one) should "fail". It should report the "failure" in a way that immediately describes the difference between the state you expected and the state that you probed. It should be easy to change the probe when the behaviour is intentionally changed (ideally changing only one place). It should be easy to probe new behaviour (just build your data and add an expectation).

Finally, it should be easy to reason about the behaviour of the code by reading the "tests". Refactoring your tests and removing duplication is very important here.
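A minimal sketch of the probe idea, with made-up names (the unit and the discount rules are purely illustrative): build the initial state, run the unit, compare the probed state to an expected value. When behaviour changes, exactly the probe that covers it fails, and the failure message is the difference between expected and actual state.

```python
# Unit under test: a hypothetical pure function.
def apply_discount(order_total: float, coupon: str) -> float:
    """Apply a (made-up) coupon code to an order total."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(order_total * (1 - rates.get(coupon, 0.0)), 2)

def test_known_coupon():
    # Probe: with a recognised coupon the total drops by its rate.
    assert apply_discount(100.0, "SAVE10") == 90.0

def test_unknown_coupon():
    # Probe: an unknown coupon leaves the total unchanged rather than
    # raising. If that behaviour ever changes, exactly this test fails.
    assert apply_discount(100.0, "BOGUS") == 100.0

test_known_coupon()
test_unknown_coupon()
```

Note the tests say nothing about whether "leave the total unchanged" is the *right* behaviour - that's the acceptance test's job; these only pin down that the behaviour is the same as last run.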
As for acceptance tests, like I said, they are incredibly important. What I don't find particularly useful is a large suite of regression acceptance tests. The unit tests already tell me when the behaviour has changed. When written well, they even tell me exactly where in the code the behaviour "slipped". I often write manual acceptance tests. Once I have tested it, it is not necessary to test it again (as long as I have a good unit test suite).
My personal opinion as to why people find automated acceptance tests suites important is because they have never worked with a good unit test suite before. There is a general lack of experience in the industry with these concepts. Quite a few people's experience with well tested code is with green field projects. Often these people leave after a year or so. It's not until you have a lot of experience working with the legacy of various testing techniques that you can understand the advantages and disadvantages. I think this is why Michael Feathers is so respected -- as far as I can tell he specialises in legacy code.
Having said all that (and I'll be surprised if you make it to the bottom :-) ), I do value a small automated acceptance test suite. It's my canary in the mine. If it ever fails, then I know I've really stuffed something up and I launch an immediate investigation. Also, there are some things that can't be unit tested effectively (for example testing a web application across both client and server) -- you end up faking the boundaries, which leads to the possibility of skewing. Again, in those cases, I try to find a few end-to-end tests that will hit the majority of possible problems.
I hope you found that interesting. I've typed up essentially this same message in at least 10 different threads over the past couple of years. I think it's slowly getting better, but I think I still haven't managed to explain the concepts as well as they need to be explained. If you've made it this far, thanks for reading :-)
GitLab today, in my opinion, is a better piece of software than GitHub.
I've been using Linux on the desktop for over 15 years with no intention to ever stop in the near future.
I had been using Firefox since its first release, under the names Phoenix, then Firebird, then Firefox (I had been using the Mozilla suite before that). I stopped, with no intent to ever come back, when Mozilla finally killed what made Firefox useful to me after a lengthy agony. (BTW, the claim that Firefox was not performant enough 2 years ago is totally unsubstantiated: I used it daily with over 250 tabs open concurrently without a single hiccup, despite having 35 extensions loaded, as they were required to put back the useful features Mozilla had removed, to remove the unwanted cruft Mozilla had added, and to add the necessary features Mozilla refused to add.) I have now switched to Waterfox, and its name says it all.
So really comparing gitlab to linux on the desktop means gitlab will never happen, and comparing gitlab to firefox means it will be mishandled into irrelevance by a shady finance operation aiming for market domination.
That GitLab seems a better alternative to a proprietary and centralized GitHub that will be bought at some point in the not-so-distant future has been my stance on this matter. That Microsoft is the one buying would not have been my first bet, but it is not a huge surprise either, considering their PR shift to jump on the open-source bandwagon as an attempt to extend their agony further.
It's been fine for the past decade, it's just the trope of "Linux on the desktop" that's slow to die.
As opposed to Windows or OSX where you're just screwed.
There are rough edges in Linux on the Desktop, but people seem to be completely house-blind* about Windows and OS X. If you spend a lot of time on Linux and go back to Windows or OS X, the rough edges in those platforms become immediately obvious.
* If English isn't your first language, house-blind is where you get so used to something being out of place that it becomes part of the decor. e.g. a jumper on the back of a chair that stays there for a week+.
But we have to put this in the context of an elderly lady who is terribly frightened of breaking the computer and always believes it's her fault when software screws up, because the feedback loop of computers is terrible (either absent, or opaque jargon, or marketing lies). That is, my mother.
And keep in mind that I live abroad. Helping her out remotely is a very difficult and slow process - if I lived close by the story would be different. In that context, when it comes to Windows or OSX, my mother has a lot of people who can help her out other than me - my sisters, my father (they're separated but still get along), some of her friends.
Now, my younger sisters are getting into programming (because all professions need it) so maybe I can get them into Linux too - they're definitely capable but the question is whether they consider it worth the investment. But still.
Maybe we're so used to expecting Year of the Linux Desktop to mean Year of the FOSS Linux Desktop that we ignore the successes.
Let’s not be disingenuous. I don’t know about Windows, but macOS has had absolutely wonderful critical failure recovery for a while now: There is the recovery partition, which acts like a mini-macOS and lets you do various things like drop into Terminal.app, use Disk Utility for drive scanning and repair, or do an ‘archive and install’ (extremely useful for the technically challenged) which keeps all your files but sets up a fresh macOS install. If even the recovery partition is borked you get the option of ‘Internet Recovery’, which connects to WiFi and automatically downloads and installs a fresh copy of macOS (with the aforementioned archive function, if an old install is detected).
Compare this to Linux, where you either get dropped into GRUB or a bare shell...
If someone asks me to help with the administration of their Linux machine, I would accept, because it is so easy and so little work compared to Windows. I think Linux is perfect for the noob who is willing to delegate administration.
It's not that I'm not willing to help, but I live abroad. If I lived close by, I would gladly install something like Ubuntu or KDE Neon on her machine (probably Ubuntu though - the mainstream would make it easier for her to find things on her own).
Whenever I'm home I help her with her computer. The whole thing is very educational for me as an interaction designer as well. It often shows how modern interfaces make her feel like she is the dumb one, when honestly it's often the arrogant UI designers who think everyone is on board with modern UX paradigms. Or worse, abuse dark UI patterns for evil purposes.
UI-wise I would have helped her switch to one of the distros ages ago, and I agree that Elementary is a good candidate.
Trying to fix her Lenovo Yoga I had to navigate a forest of dark UI patterns, with pre-installed apps trying to trick me into sharing private data every step of the way.
I really fucking hate this user-hostile attitude that can only be explained by greed. I've "fixed" her computer one more time, but I think the next time I'll let her try a bootable Linux distro, and see if she likes it enough to be willing to give it a try.
I switched from Windows to Ubuntu last December, and it has given me a whole new appreciation for Windows. The polish (things just working well) of recent Windows versions is just amazing, in comparison to Gnome/Ubuntu.
PS. Will stick with Ubuntu though.
PPS. Gitlab is an awesome product and company.
I wonder if this move by GitHub is motivated by them seeing this writing on the wall.
But all in all, good product, I hope they succeed!
Yeah, on the desktop things are getting better. But ... everybody is moving to the smartphone. And there things are getting worse. For example, my Banking app works only on 2 platforms, which are not open.
It's exactly because the world has moved away from desktop computing that Linux on the desktop has become viable: collaboration tools are increasingly web-based (or at least web-enabled for the 80% use case), and those tools are exactly what tends to anchor an organisation on a single platform. These days, even Outlook has a perfectly usable web interface that works fine in Chrome and Firefox.
That's because change is the only constant.
So either they do it themselves, which is a risk, or it ends up looking like GitLab.
Gitlab is essentially salting the earth for dev tool startups. I had my issues with Github, but at least they had built a business around a dev tool, behaved ethically and gave back generously, and so I wished them well. To see so many people dropping them for a fauxpen-source competitor whose primary selling point is “it’s free!” just makes me sad.
If you want nice things, you have to pay for them. If you aren’t, I guarantee you that someone else is, and they’re the ones with control.
> I had my issues with Github, but at least they had built a business around a dev tool ...
That's a strange claim given that in the current top story - of Microsoft buying Github - the following claim is made:
"The acquisition provides a way forward for San Francisco-based GitHub, which has been trying for nine months to find a new CEO and has yet to make a profit from its popular service ..."
Perhaps it comes down to your definition -- can something be non-profitable for a decade and still be called a business?
”can something be non-profitable for a decade and still be called a business?”
This is such a bizarre talking point...do you honestly believe that Gitlab is a better business? Their model is “just like Github, but with even more stuff given away for free!” And let’s not forget that Github has to compete with Gitlab cannibalizing the low end of the market. I’m sure that hurts margins.
Someone has to pay those Gitlab engineers who are writing the bulk of the code. At the very least, as soon as the dumb money dries up, the velocity of development on Gitlab will drop like a rock. In the worst case, you’ll get a conflicted corporate hydra, like MySQL.
> You can’t build a business competing with someone who is using VC money to give away their product.
This is delightfully worded, given it could apply to both github and gitlab.
Remember that GitHub started in 2008, while gitlab.com started four years later (the first commit to their codebase was in 2011).
Github is running on $350m of VC funding.
In response to my question 'can you call a 10yo company that still isn't profitable, a "business"?', you avoided the question, called the matter bizarre, and tried to distract from the question by claiming github is a _better_ business.
Your claim that github has 'built a business and .. gave back generously' is also weird in that gitlab has released the source to their core product, but github hasn't. This also speaks against your claim that you're more likely to be abandoned if you commit to gitlab than github.
Finally, the idea that the 'low end of the market' is where all the money is does not match any other tech startup's experience, is belied by the pricing structure of both companies, and further invalidated by the fact that gitlab is not swimming in cash from their cornering of the frugal user segment.
And what that means is, yeah, either they keep burning $$$ every month and selling more of the control to VCs to feed the war chest until they maybe buy 2nd place, find an acquirer (and with that much ever-increasing VC control, a likely push), or yeah, layoffs will happen. Gitlab is extra interesting because their definition of innovation is biting off even more surface area (e.g., CI), and therefore even more burn.
Keep in mind.. all this says zero about how nice the product quality is or how friendly the people are. But just in the same way you don't get mad at what happens if you stick your hand in a lawn mower (https://www.youtube.com/watch?v=-zRN7XLCRhc&t=34m7s) ... there are financial forces at play from being a high-spending bottom feeder that are hard to escape. Possible, and I wish them luck, but that's a real bet.
>Keep in mind.. all this says zero about how nice the product quality is or how friendly the people are.
Then don't use the term bottom feeder since that means the people are making a shitty product with no ethics to really innovate. It says the people are shameful hacks and the quality of the product is bad.
In reality Gitlab is a better product and the people involved should be proud of their work.
Based on that, having 275+ employees, and their stated IPO targets, I ran the numbers recently. My guess is their costs are ~$40M/year (admirable: I expected way higher, but they focus on non-US hires and pay only at the 50th percentile in _local_ markets: super low!). Likewise, their stated IPO and growth targets make me guess they make ~$20M/yr. So two different reasons to believe they're burning... ~$20M/yr. The positive thing for them, which they're not public about but I'd guess, is that while they're probably growing OK in regular accounts (hard competition vs bitbucket, github, etc.), they're probably Super Great on retention + internal expansion, so net negative churn, compounding factors, etc. I think they _can_ stop hiring and let revenue catch up, though other forces take hold then: so it does look like they're on the classic growth-over-control VC treadmill (despite saying they're not), and will keep ceding control to VCs.
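For transparency, a back-of-envelope version of that estimate. Every input below is a guess (headcount from their public team page, a plausible fully loaded cost per hire, revenue inferred from growth targets), not a reported figure:

```python
# Back-of-envelope burn estimate; all inputs are guesses, not reported figures.
employees = 275
avg_fully_loaded_cost = 145_000        # assumed average $/employee/yr, non-US-heavy
est_costs = employees * avg_fully_loaded_cost   # roughly $40M/yr
est_revenue = 20_000_000                        # guessed from stated IPO/growth targets
est_burn = est_costs - est_revenue              # roughly $20M/yr

print(f"costs ~${est_costs / 1e6:.0f}M/yr, burn ~${est_burn / 1e6:.0f}M/yr")
```

Change any input and the burn figure swings accordingly; the point is only that costs plausibly run about double revenue.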
During phase 2 there is a natural inclination to focus only on on-premises since we make all our money there. Having GitHub focus on SaaS instead of on-premises gave us a great opportunity to achieve phase 1. But GitHub was not wrong, they were early. When everyone was focused on video on demand, Netflix focused on shipping DVDs by mail. Not because it was the future but because it was the biggest market. The biggest mistake they could have made was to stick with DVDs. Instead they leveraged the revenue generated with the DVDs to build the best video on demand service.
No label is ever 100% accurate, but a lot of that dynamic has played out here pretty clearly..
Gitlab is also running on a bunch of VC money.
> In response to my question 'can you call a 10yo company that still isn't profitable, a "business"?'
Gitlab also likely runs at a loss. Gitlab has certainly never claimed to be profitable, and some estimates are that as few as 0.1% of their users pay for Gitlab.
> I understand that you're claiming gitlab is salting the earth, but still don't understand why / how.
It's pretty clear to me at least that neither Github nor Gitlab have sustainable business models. The OSS community is crazy to think that either business will continue to subsidize OSS development while losing millions of dollars a year. All of the anger against Github and the new "faith" in Gitlab is pure delusion. Both these companies subsidize OSS development while losing millions of dollars. This will go on until it stops. It certainly can't go on forever.
Personally I suspect the absolutely best thing to happen to both Github and Gitlab would be being bought out by real companies that heavily depend upon OSS and, you know, actually make money.
It came up before and now the chatter has started up again around Gitlab. I think it still makes a lot of sense for AWS to purchase Gitlab. There's a fundamental strategy alignment there (both Gitlab and Amazon aim to be a "one stop shop"), Gitlab offers the potential to lure a bunch of developers into the AWS platform with a free offering and, ultimately, Gitlab offers the same computational economics as other Amazon products because it is just another hosted product that requires a database. Wouldn't be surprised at all to see such a transaction in as little as 2-3 years.
That is what I said.
Uh... Gitlab is built upon libgit2, rugged and github-linguist. In other words, the core parts of Gitlab (the ones that interact with git) are built, maintained and open-sourced by GitHub. And these are just the obvious dependencies. GitHub people contribute heavily to open-source projects that most Ruby websites use.
If you’re going to fanboy all over the place, fine, but at least know what you’re talking about when you do it. And don’t try to weasel out of it by talking about “core products”: without GitHub’s substantial technical contributions to the infrastructural code that interacts with git, it’s a safe bet that Gitlab wouldn’t exist. That’s core.
Originally you declared:
> If you want nice things, you have to pay for them.
And I don't know how that fits in with people releasing / maintaining free software.
I responded to your first rant because you appeared to be 'going all fanboy' over github, declaring them a successful, superior business. I asked you if a company that hadn't turned a profit despite first mover advantage and a decade of trying could be termed a business ... and you weaselled out of that question.
If you believe Github isn’t a business, then you’re going to be sorely disappointed by Gitlab, whose business model is worse.
I’m done talking to you now.
The challenge discussing this with you is all your comments about Github are based around comparing them (favourably) to Gitlab.
> I'm done talking to you now.
This is a shame, as I'm consumed with curiosity on your take of today's news that Microsoft spent US$7b buying github.
From what you've described it sounds like they should have just cloned libgit2, rugged, and github-linguist, and rattled up their own gitlab over the weekend.
But the damage is already done - people think MariaDB is some bastion of good intentions and open source software now, because they very rarely look deeper.
There was strong precedent for fearing what may happen with MySQL. Knowledge of what happened to Hudson, OES, OpenOffice, Solaris ... this would concern the stewards of any bit of software that got swallowed up by Oracle.
(Edit: Also I recall some worrying stories coming out from Monty and other key developers.)
What's this 'bullshit licence' that MariaDB has? I thought the source was (L)GPL all the way down?
This is like a scavenger hunt.
I've looked up MariaDB MaxScale ... and found an optional / add-on product that is aimed at Enterprises, seems to require an Enterprise support licence for the Enterprise edition of MariaDB ... and I completely fail to see how any of this demonstrates that the 'MariaDB story is a bit of a fuck you'.
Basically - their formerly GPL proxy for doing HA deployments is suddenly not open source.
They can of course make this decision - it's their code to do with as they wish. But it's quite fucking rich for Monty to claim Oracle would close-source MySQL, create a fork and a company which then uses that fear to grow in popularity, only to do the very thing he accused Oracle of doing: closing an open source product.
Also, if you think only "enterprise" customers need database clusters that survive individual nodes being offline, you're in for a big shock.
Definitely not the most extraordinary story over the last decade.
And trumped by IBM's famous first $1b spend on 'Linux' just shy of twenty years ago, and their subsequent announcement that they'd recouped that money within a year.
Coincidentally this speaks to your claim:
> This is likely the greatest act of charity the planet has ever known.
These guys aren't in it for the charity. There's doubtless plenty of positive PR spin from contributing to free software -- but don't mistake pragmatism or happenstance for altruism.
And IBM's contribution was, frankly, marketing. It does not compare to the volumes of high quality technology that the companies I mentioned have simply given away for free.
Many on HN and others are perhaps too close to it but I think people will look back upon this extraordinary corporate charity as a decisive event of the century.
I think you're being overly charitable to think these tech corporations had charitable intentions when they contributed resources to tech projects that happened to improve their tech business prospects.
> And IBM's contribution was, frankly, marketing. It does not compare to the volumes of high quality technology that the companies I mentioned have simply given away for free.
Bizarre you didn't mention that up front when you named 'the big five contributors'.
On what do you base your bold claim that IBM's contribution was marketing, and the other corporations weren't?
> Many on HN and others are perhaps too close to it but I think people will look back upon this extraordinary corporate charity as a decisive event of the century.
IBM announced their first billion spend last century.
Just ask Amazon
Maybe we should ask ourselves first if it's a fair comparison. Amazon kept investing the profits into themselves. I don't think that is the case with Github, though.
People are on GitHub not just to share code, but also to get in touch with each other. GitHub succeeded because it's not just a free online Git repo service, but also a developer + user community where you can put your code, share it, and 'earn' stars & forks as feedback. And stars + forks can help you stand out in a job interview and many other occasions.
Bitbucket is another Git repo service, but it sort of failed to build its community. The result? It receives less attention compared to GitHub.
So, while GitLab is also trying to be 'yet another' Git repo + you-can-host-it-yourself service, the benefits of being a community can't be ignored. And federation can help with that by connecting all the GitLab instances together to form a bigger, global community.
Even better, the federation protocol itself could be an open, public standard, so all the other Git repo software could implement it in their products. The potential is huge.
But then, as I understand it, Bitbucket was an acquisition rather than a brainchild of the Atlassian team, so you can expect a certain amount of neglect.
An omen for github?
Devil's advocate: why would Microsoft invest in improving, say, the issues functionality of GitHub when it could instead integrate and push users towards its existing products and tools for project management, like SharePoint?
See their GitHub for more:
Also, their focus is now on developers. They made VS Code completely free and MIT-licensed, not for themselves, but for developers.
For example, if a Gitlab instance posts a Pull Request object over to a Mastodon instance, what is the latter supposed to do with that? Presumably it won't have any UI widgets to display the content, and no way of acting on it semantically.
As far as I can tell, Activity Pub is a way of federating instances of the same application, with the same semantics. But on the internet I'm seeing some dialogue which seems to presume that AP would make it possible to federate instances of all sorts of different applications and have them all Just Work™
(Apologies if I've misunderstood your post, I'm kinda rambling at this point)
For example, GitLab projects could publish their activity feed to Mastodon. You could follow it to see what commits they make, issue discussions, and so on. Meanwhile, federating things like pull requests would happen across Git-based services. So, if Gogs decides to implement a compatible ActivityPub protocol, then it could integrate into the federation of GitLab servers.
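To make the activity-feed case concrete, here is a hypothetical sketch of the kind of ActivityPub payload a GitLab instance might emit for a push. Only `Create`, `Note`, and the `@context` URL are real ActivityStreams vocabulary; the actor URL, the repo name, and the idea of wrapping a push in a Note are illustrative, since no forge-federation vocabulary is standardised:

```python
import json

def push_activity(actor: str, repo: str, commit_sha: str, summary: str) -> dict:
    """Build a hypothetical ActivityPub Create/Note for a git push.

    A Mastodon instance receiving this would just render the Note as a
    status; only forge-aware software could act on it semantically.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": {
            "type": "Note",
            "attributedTo": actor,
            "content": f"Pushed {commit_sha[:8]} to {repo}: {summary}",
        },
    }

activity = push_activity(
    "https://gitlab.example/users/alice",   # hypothetical actor URL
    "alice/toy-project",
    "9fceb02d0ae598e95dc970b74767f19372d61af8",
    "Fix flaky CI job",
)
print(json.dumps(activity, indent=2))
```

This is exactly why a Mastodon instance can follow a project's feed (it only needs to render Notes), while federating pull requests would need a richer, shared vocabulary between forges.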
In any case, this highlights once again the issue: the community will have to put enough pressure on the parent company to release certain improvements as Open Source. And their interests are most likely not aligned.
GNOME already planned to self-host and have enough of a community to maintain a close fork.
Does anyone know of any more substantial differences? Which should I choose?
The following shows how active the two products are:
https://rhodecode.com/ Dual licensed AGPL / commercial
https://kallithea-scm.org/ GPL3 fork (pre-AGPL)
Now with LFS, code review, etc features. Built on the JVM - blazing performance.
GitLab's main codebase is hosted on GitLab, though.
To the extent that I think really the only thing I'd want to see to not need Github any more would be some sort of federated social/discoverability/search layer similar to what you're describing.
Nobody wonders that, as every week there's a post on HN praising GitLab.
So, GitLab is not quite Mozilla (which is itself not quite Wikimedia or GNU level dedication). But GitLab is still a standout in FLO-commitment compared to the mediocre norm.
We have no relation to Oracle.
But they do though. It might be significantly more expensive than GitLab and also only sold in packs of 10 user licenses and also not allowed to be run in a public-facing capacity. But they definitely do have a self-hosted option.
In fact I have been literally considering migrating our internal GoGS install to GitLab for the last week or two.
The end of my day on Friday was downloading Gitlab and figuring out where to host an evaluation install.
Migrating my personal account over from Github to GitLab.com is a good chance to get some hands on time. Plus I can consolidate my personal CI setups at the same time, and I don't have to pay monthly for private repos.
PS: Always interested when your content comes up in my RSS reader. I do wish it was easier to share links without the unsafe content warning though. :P
If anything, I hope that Microsoft's acquisition of GitHub means that GitHub is going to keep growing in features for varied enterprise uses, and that we're going to see even more competition in this area.
- I can't push or something in general goes wrong with one of my repos (but not others).
- Gitlab's status page is green
- Other people are having issues on Twitter and tweeting @gitlabstatus about it, but there is no general, across-the-board outage
This seems to indicate that GitLab tolerates (and very often has) a considerable amount of instability and errors across its platform, but just takes the average of these as its performance baseline: i.e. it's a very spiky graph with a reasonably high average line fit.
This tweet supports this impression:
"Errors should be down to normal" - the idea that there is a non-zero error rate that is openly described as "normal" is worrying. Not that I'd expect a constant zero error rate, but at least aiming for one should be a consideration.
Services at this scale will have errors for all sorts of strange reasons, it doesn't mean the service is poorly engineered. In fact, if users don't notice these problems it usually means the service is resilient and robust when it encounters strange situations.
There are other strange problems that come with large services which means all components should be fault tolerant if possible.
Also, please don’t make disparaging comments about other people’s experience unless it’s highly relevant. It doesn’t add anything and will likely derail the conversation.
As per the really simple example: generally you'd be better off rolling out a second endpoint for the new API and then winding down the old one. First, this doesn't break everyone who had your page open, and second, you can halt the rollout safely if you find a problem with the new API.
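A minimal Python sketch of that rollout pattern — the route names and payload shapes are invented, and in a real service these would be handlers registered in your web framework:

```python
# Side-by-side API versioning: keep serving the old endpoint while the
# new one rolls out, instead of breaking clients with an in-place change.
def handle_v1(params):
    # Old response shape: a flat list.
    return {"items": ["a", "b"]}

def handle_v2(params):
    # New response shape: paginated. Old clients never see this.
    return {"items": ["a", "b"], "page": 1, "total": 2}

ROUTES = {
    "/api/v1/items": handle_v1,
    "/api/v2/items": handle_v2,  # added alongside, not instead of, v1
}

def dispatch(path, params=None):
    return ROUTES[path](params or {})
```

Clients still on v1 keep working while v2 is validated; if v2 turns out to be broken, removing it from `ROUTES` is a safe rollback with no user-visible breakage.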
Of course, and as I said, zero errors is not practically achievable in this type of context. The issue is with metrics, though: taking averages instead of looking at the troughs is still problematic.
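To illustrate why averaging hides the spikes, a tiny Python sketch with invented numbers:

```python
import statistics

# A latency series where 5% of requests spike to 4s. The mean barely
# registers the problem; a high percentile exposes it immediately.
latencies_ms = [50] * 95 + [4000] * 5

mean = statistics.mean(latencies_ms)
p99 = sorted(latencies_ms)[int(0.99 * len(latencies_ms))]

print("mean:", mean, "ms  p99:", p99, "ms")
```

A dashboard fitting an average line to this data would look fine, while one in twenty users is seeing a multi-second request — which is exactly the "spiky graph with a high average line fit" pattern described above.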
> In fact, if users don't notice these problems it usually means the service is resilient and robust when it encounters strange situations.
True. But in the case of GitLab, users are noticing these problems. Constantly. It's just GitLab's own metrics that could be ignoring them (I've done no more than browse their Grafana instance a bit, so my comment is somewhat speculative) because they're focused on averages instead of specifics or thresholds.
> Consider a really simply example ...
lallysingh has already pointed this out, but I'll reiterate that this is an apt example of bad practice. You're right that ideally components should be fault tolerant, but frankly that's a big ask. Especially for highly scaled services with many, many components of various types, ensuring that all of them are completely fault tolerant is much more difficult than simply keeping the old API running for a grace period while the new one is served from elsewhere.
I think your example is apt, because it's indicative of a common excuse for bad engineering: the assumption that downtime or disruption is necessary because of necessary software upgrades/improvements and poorly planned orchestration.
Other comments further down show that others are seeing it too. Hacker News hug of death?
It's not a great look.
Thanks to Comcast for creating Trickster.
Hope my post didn't come across as snarky as some others have... HN is like the Spanish Inquisition: no one expects it.
But you know, when the internet decides it's time for everyone to look at your site, some random new stuff might be better than serving 5xx all day. :-D
HTTP 512 - Social Media induced DDOS due to media related frenzy :)
So while they still have improvements to make it would be a lie to say they haven’t improved at all.
Also, I don't think GitLab has had a long downtime recently. At least not for any of my projects.
That mostly depends on whether you're using CI/CD, I'd think; that's had some day-long outages/problems lately. Of course, GitHub doesn't even have its own CI/CD, and GitLab's is amazingly flexible, so it's still the better product. But it'd be nice if it were more stable.
(Note: all this is on GitLab.com. If you self-host, it's presumably much better.)
(I am using the free tier, though, so take this as information rather than a complaint.)
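For anyone who hasn't tried GitLab CI: the flexibility mostly comes from the whole pipeline being a single `.gitlab-ci.yml` file in the repo. A minimal sketch, with invented job names, image, and commands (not any real project's config):

```yaml
stages:
  - test
  - build

run_tests:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

build_app:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
```

Every push runs the pipeline on shared runners (or your own registered runners), which is why runner availability matters so much when the hosted service is under load.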
However, I re-evaluated and did migrate about two years ago, and it has been fine since. There have been a few hiccups, but none lasting more than an hour or so. I've had a team of 4-7 devs working in it all day for the last two years and we have not had performance problems. We run our own CI runners as well, and while the cloud runners do often have delays, I've never had delays with my own runners unless they were all busy.
I love GitLab and its UI, but recently the performance of the hosted version has been awful (not sure why; just being overloaded?).
In fact, even their own status page reflects it: https://status.gitlab.com/ - the current “project HTTP response time” is around one second which makes me cry when using the UI.
I wish them the best but would be moving to a competitor (or maybe a self-hosted GitLab) in the meantime until they sort it out.
I find it boggling that a commercial team chooses to accept this kind of external dependency. What do they offer which makes it worth the extra risk?
Then again, I come from a largely non-web background where external dependencies aren't just accepted as inescapable. I guess if your entire business is producing an add-on for some other company's web service (not saying yours is but many out there seem to be) then what's one more on the pile?
That’s a myth: https://en.wikipedia.org/wiki/Boiling_frog
I ask because it's not a particularly good look if visitors from HN can hug the site to death with bad gateways.
I do hope they're using Prometheus federation to expose this instance to the fickle internet and that they have one or more internal Prometheus instances that aren't directly queried by this instance. After all, that stuff is responsible for paging if something goes wrong in prod.
We used to use Federation, but now we just have the public server scrape the same targets as our private one.
I'm adding a caching proxy (https://github.com/Comcast/trickster) to the public server now to improve performance. :-)
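For anyone curious how the two setups mentioned above differ, here's a sketch of the relevant Prometheus scrape config. Job names and target addresses are placeholders, not GitLab's actual configuration:

```yaml
scrape_configs:
  # Old approach: the public Prometheus pulled selected series from the
  # internal one via the /federate endpoint.
  - job_name: 'federate-internal'
    metrics_path: /federate
    params:
      'match[]': ['{job="gitlab-web"}']
    static_configs:
      - targets: ['prometheus-internal.example:9090']

  # Current approach: the public server scrapes the same targets
  # directly, so neither Prometheus depends on the other being up.
  - job_name: 'gitlab-web'
    static_configs:
      - targets: ['web-01.example:9090', 'web-02.example:9090']
```

The direct-scrape approach keeps the internal, paging-critical instance completely isolated from public query load, which is the concern raised upthread.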
This link is for the monitoring page. The imports are going through.
Not to mention, it’s visually customizable.
Many other reasons to be concerned about performance, but there's no evidence that they're withholding essential features like this from their free version.
Reminds me of the classic "The main Rails application that DHH created required restarting ~400 times/day. That’s a production application that can’t stay up for more than 4 minutes on average".
"We’re already in the process of migrating GitLab.com to Google Cloud Platform."
So it looks like they won't be on Azure for much (?) longer?
1 - https://about.gitlab.com/2018/04/05/gke-gitlab-integration/
Edit: Found this super-interesting architecture overview: https://about.gitlab.com/handbook/infrastructure/production-...
In my case I simply want to move since I really dislike Microsoft and their business decisions. I simply don't want to be involved in any "direct" way with this company. Microsoft doesn't have much control over GitLab just because they use their servers. But Microsoft will have a lot of control over GitHub.
GitLab offers free private repos. I expect that's a factor in many people choosing it.
Meanwhile Google is GitLab's largest investor, and I've been burned more times than I can count. Google Wave? Google Wave?
There are many articles covering this, for instance:
imgur has figured out how and when it's safe to redirect PNG/JPG requests to a "JS blob" (of advertising), unfortunately. They tried to pull this a few months ago, completely bungled <img> embeds, and had to turn it off in a hurry. I think they've figured it out this time, sadly!
Time for a new image host... imgur has gone all high-level and "scale"-ey, it would seem (particularly with the new video with sound thing).
I was going to say something about toxicity, but this is sadly just a scaling problem. Now that sound - and competing with youtube - is the new "major consideration", just being a competent works-anywhere image host has been relegated to the region of rounding errors, so it doesn't matter in the same way if they get that right anymore.
What?! Holy shit. Is that considered a positive thing these days?! I must be quite out of the damn loop.
So the answer is "it can be" considered a positive thing.
If you're way out of the loop then all you need to know is that PHP 7 is quite awesome.
I run some OSS stuff on PHP, but in containers to keep it all isolated. Although things have gotten better with PHP7, knowing what I know about the language makes me hesitant to use it on any new project for anything except the most trivial systems.
What exactly is it you know about the language that makes you hesitant to use it?
Although there are namespaces now, the idea of everything being in the global scope was insane (and still even with namespaces, much is still in the global scope).
There are no type-safe comparisons for greater-than or less-than (you have ===, but no <== or >==).
This article: https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/
This subreddit: https://reddit.com/r/lolphp
The number of bugs that are closed as not-a-bug/wontfix.
I'm not sure how much of this has been fixed, but I've moved on to different languages and career choices and don't really want to look back that way.
I think that has not been true for a while now. Maybe it's just the world I live in, but putting a bunch of files into a directory on a vanilla LAMP configuration as a deployment scenario is not something I encounter a lot nowadays.
edit: specifically referring to the "everyone is used to" part.
I read small parts of the source code and it doesn't adhere to any of the modern standards set by the PHP community.
- The UI is lacking, esp. when viewing large diffs, compared to GitHub, VSTS-git, etc.
- `arc` CLI is awful
My own company is moving to GitLab - attempts to upgrade Phabricator have failed:
`git pull` is not a modern upgrade strategy - it ought to have an RPM.
> We'll have a liquidity event. We're aiming for an IPO https://about.gitlab.com/strategy/#goals but we can't rule out an acquisition.
And it's worth pointing out that you can run GitLab on your own server, so even if GitLab.com ever ends up disappointing you, you don't need to go through the trouble of migrating to the next best thing.
Meanwhile, Sundar Pichai's Google hasn't been the friendliest.