Why GitHub won (gitbutler.com)
652 points by hardwaregeek 26 days ago | 504 comments



Actually, Google Code was never trying to win.

It was simply trying to prevent SF from becoming a shitty monoculture that hurt everyone, which it was when Google Code launched. Google was 100% consistent on this from the day it launched to the day it folded. It was not trying to make money, or whatever

I was there, working on it, when it was 4 of us :)

So writing all these funny things about taste or whatnot is totally beside the point.

We folded it up because we achieved the goal we sought at the time, and didn't see a reason to continue.

People could get a good experience with the competition that now existed, and we would have just ended up cannibalizing the market.

So we chose to exit, and worked with Github/bitbucket/others to provide migration tools.

All of this would have been easy to find out simply by asking, but it appears nobody bothers to actually ask other people things anymore. I guess that doesn't make as good a story as "we totally destroyed them because they had no taste, so they up and folded".


You sound like you're proud of this work and this plan and this sequence of events.

code.google going away, without the excuse that google itself was going away, after I had started to rely on it and link to it in docs and scripts all over the place, is what taught me to never depend on google for anything.

If google had said "The purpose of this service is an academic goal of Google's, not to serve users' needs. This service will be shut off as soon as Google's academic purpose is met." I would not have used it.

But Google did not say that. Google presented the service as a service whose purpose was to be useful to users. And only because of that, we used it.

Do you see the essential problem here? Effectively, Google harnessed users for its own purposes without their consent by means of deception. The free-ness of the service that the users received doesn't even count as a fair trade in a transaction, because the transaction was based on one party misinforming the other.

So thanks for all your work making the world a better place.


> code.google going away, without the excuse that google itself was going away, after I had started to rely on it and link to it in docs and scripts all over the place

It didn't go away, though. It got archived and that archive is still up and running today. Those links you put all over the place should still be working.

> If google had said "The purpose of this service is an academic goal of Google's, not to serve users' needs. This service will be shut off as soon as Google's academic purpose is met." I would not have used it.

That's not an accurate representation of what DannyBee said. Moreover, what DannyBee did say is in line with what Google itself said was its goal when the service launched: https://support.google.com/code/answer/56511

"One of our goals is to encourage healthy, productive open source communities. Developers can always benefit from more choices in project hosting."

> Effectively, Google harnessed users for its own purposes without their consent by means of deception.

This does not appear to be a good faith argument.

None of what DannyBee said in their comment aligns with that interpretation. Neither does that interpretation line up with Google's publicly stated goals when they launched Google Code.


There is a difference between

> Actually, Google Code was never trying to win.

> It was simply trying to prevent SF from becoming a shitty monoculture ... . Google was 100% consistent on this ...

And

> One of our goals is to encourage healthy, productive open source communities. Developers can always benefit from more choices in project hosting.

These are not the same. One of these makes it out to be the singular goal. The other does not.


Yes, there is a difference, but one thing you should do when you see this difference and think it is critical is to consider whether you are parsing words way too strongly in a colloquial discussion on hacker news.

I'm trying to participate in a discussion. Not write comms that go to hundreds of millions of people. That is why the rules of hn are very clear on assuming the best, not the worst.


"It didn't go away, though. It got archived and that archive is still up and running today. Those links you put all over the place should still be working."

I think this is misrepresenting what the commenter stated. He appears to have stated the project hosting service "went away". This fits with the context of the OP, which is comparing project hosting services, e.g., Google Code, Sourceforge, Github.

If the context was software archives, e.g., software mirrors, instead of project hosting, then we could indeed claim "Google Code" still exists. However, the context is project hosting. And no one can upload new projects or revisions to Google Code anymore. Google Code project hosting did in fact "go away":

https://codesite-archive.appspot.com/archive/about

The old https://code.google.com/p/projectname URLs need to be redirected to https://code.google.com/p/archive/projectname

Google then redirects these /archive URLs to storage.googleapis.com

Too much indirection:

https://code.google.com/p/projectname becomes

https://storage.googleapis.com/download/storage/v1/b/google-...

Downloads:

https://storage.googleapis.com/download/storage/v1/b/google-...
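
For the curious, here's a minimal sketch in Python (third-party requests library; "projectname" is a placeholder) that prints each hop of the chain described above. It assumes the hops are plain HTTP redirects; if any step is done in JavaScript, requests won't see it:

  import requests  # third-party: pip install requests

  # "projectname" is a placeholder; substitute a real archived project.
  url = "https://code.google.com/p/projectname/"

  resp = requests.get(url, allow_redirects=True)
  for hop in resp.history:           # each intermediate redirect response
      print(hop.status_code, hop.url)
  print(resp.status_code, resp.url)  # the final URL after all redirects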


He made that claim but then muddied it by talking about "those links being gone", which isn't true.

I'm going to defend Google on this. They don't need to maintain products forever, and this case is a good example of how to shut down a service: allow access to existing projects but make clear that active projects need to go somewhere else. The commenter can be upset that they can't use Google Code as a product, but they shouldn't misrepresent the situation by saying the code is inaccessible. I checked a university project I created 15 years ago and it's still there. The commenter is objectively incorrect.

> Too much indirection

I don't think this is a valid criticism. The web is designed explicitly to do this. You can still access the code, that's good enough.


No, it's not good enough. The service went away.


You're free to expect for-profit businesses to fully support a free product forever. Just as they're free to decide they don't want to do that.


That's funny; no business ever does anything for free.

People/businesses who do stuff for free ask for donations.


Sometimes businesses do things that they believe to be in their own self-interest, but is not.

Sometimes businesses do things to create good will (which has tangible value).

Sometimes businesses do things which destroy good will and create animosity (which has a tangible cost).

Google seems to have managed to have accumulated significant animosity by shutting down services that could instead have been left to die a slow death on a long tail. I still remember a time when I could plausibly believe that "Google Is Not Evil". Shutting down those services prematurely when people still depend on them is evil. That's the cost.

And, by the way, sometimes people do things for no other reason than because it is virtuous to do so. I suppose Plato does make the argument that you should do virtuous things so that you can hang out with other virtuous people, and not have to put up with *ssholes. Darwin would probably argue that people do virtuous things for free in order to increase the survival rates of their progeny. But those are both deep and nuanced arguments that are best left to incurable pessimists.


The web, i.e., HTTP and HTML, is designed so that Javascript is not necessary to redirect URLs or publish direct download URLs. But here a $2 trillion online advertising company tries to coerce the computer user into enabling Javascript for basic redirection.
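
To illustrate how little is needed, here's a minimal sketch in standard-library Python (the example.org target is hypothetical): a 301 status plus a Location header is all any client needs to follow a redirect, whether it's a browser, curl, or a crawler, with no JavaScript involved:

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class Redirect(BaseHTTPRequestHandler):
      def do_GET(self):
          # A plain HTTP 301 with a Location header redirects
          # every client, no JavaScript required.
          self.send_response(301)
          self.send_header("Location", "https://example.org" + self.path)
          self.end_headers()

  HTTPServer(("", 8080), Redirect).serve_forever()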


> It didn't go away, though. It got archived ...

"It got archived" means it went away for actual use. i.e. in not-just-read-only fashion


But that's ok! It's easy to switch to a new code host. It's hard to change all the links on the internet if your link rots.

Putting a service like code.google into read-only mode is pretty much the ideal outcome for a discontinued service.

Google should be praised for how they behaved here.


Coding is a solitary activity? Switching everyone to a new environment is hard.

Also, "Google sunset their project really well" is damning with faint praise.


>Switching everyone to a new environment is hard.

Sure, but this is the danger you get when you rely on an outside vendor's service for anything. If you don't want to deal with this danger, then you should never, ever use an external vendor's service for anything; you should only use your own self-hosted solutions.

Of course, Google does have a worse track record than some when it comes to their services being EOLed if they aren't search, Maps, etc., but still, this can happen with anything: it can be shut down, or bought out by a competitor, etc.

>Also, "Google sunset their project really well" is damning with faint praise.

I don't think so in this case. I'd say Google has done a poor job of sunsetting other projects of theirs, but if this one actually keeps all the links alive albeit in read-only mode, that's really a lot better than most other EOLed or shut-down (like due to bankruptcy) services (from Google or anyone else), where it just disappears one day.


> Of course, Google does have a worse track record

Yes, that's the point of the above comments. The repeated lesson is that there's always a risk of shutdown, but don't trust google in particular to keep services running.


This is a repeatable pattern:

  1. A well-known major company sees that people are relying on something broken and it's hindering progress.
  2. The company decides to compete with that thing, but it's not part of their mission, so they make it free.
  3. Because the new thing is free, and run by a major company, lots of people come to depend on it, even though it's intentionally not the best.
  4. Another company builds more competition and works to actually be the best.
  5. The major company sees that their original objective of replacing the bad version is fulfilled, so they sunset the thing.
  6. People who came to depend on the thing feel betrayed.
This is why we should all be cautious about any non-core free service from major companies.


Is GitHub a core service of Microsoft?


Github is free for a large class of users, but it's also an enterprise product that feeds off the funnel created by the free userbase. Almost every developer knows github by now, knows how to use it and integrating your own source control management with the place where all of your open source dependencies live is a significant lock-in effect. And while I don't know the numbers for Github, I'd expect that GH itself is profitable or at least could be profitable.


Very close actually. Their strategy has always been to build essential developer tools. Developers, developers, developers.

So I think it is core to the way Microsoft expands and holds market share. And that market has changed to not only want windows only tools, so they change with it. Microsoft culture has always been kind of pragmatic. They were a walled garden for developers when they could get away with it (and still are for some), but now they have opened up a bit out of need.


Microsoft has over six thousand repos on GitHub, including flagships like TypeScript and VSCode. For all intents and purposes, it is a core service.

Microsoft is a different beast because so much of their revenue is B2B. Continuity of operations is kind of their whole deal. Google Workspace is the equivalent Google product and they're much less likely to dump a service, unlike the rest of Google.


And, how will the existence of source control management in both GitHub and Azure DevOps be reconciled?


They're sharing a lot of the resources as pragmatically as possible it seems. GH Actions and DevOps workflows are really similar, and afaik run on the same clusters of hardware.

There's also some pretty decent integration features for DevOps projects with source control in GitHub for that matter iirc. Not to mention the potential upsell to/from GH Enterprise and/or Azure DevOps.

I can imagine at some point, GH Enterprise and Azure DevOps may become more similar and share more infrastructure over time.


I hate to break it to you, but GitHub is going to shut down too. Everything does.


People are just now catching on that Google services always fold and leave you hanging. Your comment is insightful. You're ahead of the curve predicting that eventually non-Google services will screw you too.

What's the solution? Is the future self-hosted? Making it accessible for non-technical people?


> Google harnessed users for its own purposes without their consent by means of deception

Every profitable company on earth, ever.


[flagged]


You imply their time has no value, and that it will not be factored into future decisions concerning "Google" and "code hosting".


I mean to imply that expecting a for-profit company to continue giving you something free forever is foolish, and that thinking they ought to continue giving it to you is morally unsound at best.

>You imply their time has no value

They have chosen to risk wasting their own time. Absent longevity guarantees from Google, the responsibility for GP's choice to use Google Code lies with GP, not with Google. (I'm not saying it was a bad decision at all -- for, say, personal hobby projects, I actually think it would have been a very sensible choice to take the value on offer -- and accept the risk that it could all just go away at a moment's notice.)

>and will not be factored in future decisions in what concerns "Google" and "code hosting".

I don't understand this part, sorry.


I've read comments from founders, on launches of new products, stating that free tiers are skipped when applicable because free users tend to be the most entitled. (That said, Google isn't a stranger to shutting down paid services as well.)


That says more about your lack of empathy than anything else.


If I give you something for free, does that oblige me to continue giving it to you for free, forever?

If so: Watch how much I give.


Incidentally, this is why I don't use any Google products other than search. It's never clear to me which ones are real products that are trying to win and make money, and which ones are just fun side projects that a multibillion dollar corporation is doing out of the goodness of their hearts, but could get shut down at any time.


Every project by Google I was willing to jump platforms for, they killed. So outside of email, maps, and search, sometimes voice, they have nothing left that is worth investing time into. It will disappear within a couple years or less.


Voice? Could easily fold and be sold to Sprint. I’m honestly surprised (and thankful) it’s been working for 15+ years. Completely out of Google character. I hope they forgot about it running on a server under someone’s desk in the basement just below a red stapler.


Hush now, don't remind them of it!! ;P

In all seriousness, though: if I were a user of Google Voice, I'd be seriously concerned that they would shut it down in a way that caused me to lose access to my phone number (even if only indirectly, as happened recently with their divestiture of Google Domains or whatever their registrar was called)... you are much braver than I ;P.


Presumably because it's still providing value. It was originally about mass collecting voice data so they could train their speech-to-text capabilities with various dialects.

Dialects are always in flux and there are plenty of languages out there they haven't conquered, so ... I'd guess they're just leaving it running to detect new speech or languages or dialects or ... personal data gathering or... ?


Google Voice was also supposed to revolutionize how many parts of cell phone service were to be provided -- signing in to an account instead of SIM cards, multiple phone numbers on one device, visual voicemail, advanced calling features. Android was supposed to be the vehicle of delivery.

In fact, Google Voice on Android was supposed to make calls via data only (no minutes usage) on a smartphone, but carriers would not allow Google to do that. Hence the strange random-number dialing, which utilizes a 3-way call to make a 2-way call by default. Years later, they quietly allowed calls over data, but by then GV was already on life support.


Youtube has significantly better voice data


The audio quality of Youtube videos is not representative of the audio quality of someone speaking into their phone.

The speech patterns also differ heavily, with Youtubers (usually) talking in a presentative manner and speaking very clearly.

I think the way people talk in private phone calls, as well as the call audio quality, are much more similar to that of someone speaking to an assistant.


Man, they collected a lot of prank calls from my younger-self lol.


Same here. I'm equal parts glad and puzzled that it hasn't been killed. I genuinely don't know what I would replace it with.


I used it since a few years before Google bought GrandCentral in '07. A couple of years ago I moved to voip.ms as part of my de-googling and am happy with it. There are oodles of such providers.


Oodles of providers that give you a permanent US number for free?


I didn't look for a free option after leaving Google Voice. I'm happy to pay the low cost to be the customer instead of the product. Same with gmail to fastmail.


I wouldn't worry about voice. They offer it as a paid feature of GSuite to business customers. It's not going anywhere.


They're now calling it "Google Workspace" for the moment.

Like Google Domains?

https://support.google.com/domains/answer/6069226


I've been burned by Google too many times, but I'm definitely thankful for them continuing to maintain Voice, and even actively improving it over the last few years. I've been using it as my primary phone number since the GrandCentral days, so it'd be a pretty big pain to have the rug pulled out from under me at this point.

It would also be pretty shortsighted from a business perspective, IMO. Voice could still be extended to compete with iMessage and WhatsApp, like they sort of tried to do back when they were going to merge it with Hangouts.


> compete with iMessage and WhatsApp,

God please no more messaging apps from Google for the love of god.


lol, I'm certainly not advocating for that, but I'd rather they release 100 new messaging apps than kill Google Voice.


I think there are some executives at Google/Alphabet who utilize GV so much that they couldn't kill it if they wanted to. I used to have a voip device that worked with my GV number, and Google extended the life of the API it used by several years; I've since stopped using that device, but the service was still working for it.

I do miss when GV and SMS were integrated with Hangouts though... was imo a much better experience than you get today with all the broken out apps.


Does Sprint even exist anymore


no, Sprint was folded into T-mobile.

The only reason that merger was even allowed to happen was because it was extremely clear that Sprint would not have been a going concern into the next year if it had stayed independent.


And as a result, T-Mobile has gotten worse, because now they don't have to try as hard to compete with AT&T and Verizon.

I feel like Sprint failing may have been better for the market; I expect more Sprint customers would have switched to AT&T or Verizon than to T-Mobile, and T-Mobile would still have to fight for customers.

Instead, T-Mo has jacked up prices just like everyone else, reduced nice perks, and their "uncarrier" movement is but a memory.


Sprint is T-Mobile now


Fiber is also still running, somehow. I don’t think they’re expanding much if at all but I’m shocked it hasn’t been sold off.

Frankly, I’m just as shocked they didn’t go full throttle with it either, because talk about a data-selling gold mine with all that traffic.

While I’m on that subject, it could have been a real opportunity for them to push fiber + YouTube TV as a package. Google isn't good at making original content, but at some point they could have made a software + services play that makes such a package more palatable and user-friendly; imagine subscribing to a YouTube channel and it becoming a channel in the TV app, for instance. A lot of people watch channels like this as it is.


They've been expanding again, and are now offering 8gbps in my area. I've been very happy with it, and I'm still only paying $70/month.


That’s absolutely amazing. Besides hosting at home, do you feel any difference vs say a 1gbps line? Surely most servers won’t saturate this when downloading or browsing?


Our ISP (Sonic) offers 10Gb uncapped for $50/mo.

I don't feel any difference over our previous 1G service, other than it never lags, even if multiple kids are streaming video and I have a backup running. The biggest difference is that it's half the price of the 1G service that ran over AT&T's fiber.


I really wish Sonic would expand out to West Marin. My only options are comcast and starlink.


Ouch. I was ecstatic when they started offering service in my neighborhood in Alameda.


Google literally abandoned Fiber in at least one city and left roads in disrepair via “shallow trenching”

https://www.spglobal.com/marketintelligence/en/news-insights...


> talk about a data selling gold mine with all that traffic

IIRC, Google doesn't sell user data. But it's plenty valuable to them all locked up.


Google Fiber is actually working on expanding to where I live, I keep getting ads on Youtube for it.


Search?

DuckDuckGo is great for most purposes.

The best product Google has is Maps. That's about it.


DuckDuckGo is literally Bing but with location-based ads added to the bottom of your search results.


When did that change? I haven’t tested it in a while, but in the past DuckDuckGo indexed some stuff that wasn’t on Bing.

I’ve been using Google and Duck on different kinds of searches these days, because neither is universally better but I might try Bing again.

Edit: NM, Bing's UI is still annoying. It slides stuff in from the side when you scroll down.


> When did that change?

It didn’t. The person you’re replying to is either exaggerating or uninformed.

Maybe give Ecosia a try. The interface is decent and I’ve been pleasantly surprised by the results.


You're right, it didn't change. It was always just a thin wrapper around Bing.


>The best product Google has is Maps. That's about it.

Are we just going to pretend that Chrome and Android don't exist just like that, or do I have a different definition of "has"?


> fun side projects that a multibillion dollar corporation is doing out of the goodness of their hearts

Back then it felt like it was actually just possible that they just were that cool. Noughties Google was really something compared to the staid, implacable incumbents. 15GB (with a G!) of email storage that would grow forever! Google Earth! YouTube! Live long enough to see yourself become the villain indeed.


Google's leadership made their intentions clear with the purchase of the much reviled DoubleClick in 2007. They didn't become villains; it was always all about the money, just like everyone else.


I think at least at first it was genuinely not villainous. The highly technical founders, I think, did mean it with "Don't Be Evil". PageRank wasn't designed from the start to be a universal panopticon. After all, the global crash hadn't happened, rates weren't zero yet, no one had smartphones, social media wasn't a word, mobile meant J2ME and Symbian, and the now-familiar all-pervasive Silicon Valley financialisation MBA cancer was yet to come. That said, the realisation that "all the data" wasn't just about search index data and book scans did clearly hit for them (or, if that was the plan since 1998, surface) by the mid 00s.

DoubleClick was the year after Google Code. They had a good few years of being considered cool and, well, Not Evil. Google Earth was 2005; Gmail, Scholar and Books were 2004; Reader and Gtalk (RIP x 2) 2005; Patents in 2006; and that year they dropped 1.65 billion plus (which sounds trivial now that everyone and their mum is worth billions for whatever dog-walking app, but not then) on YouTube, even though it was only a year old. Mad times, you could barely sign up for a beta before another one landed. The search engine itself was of course revolutionary and peerless. And you could use it on your WAP phone until something called the "iPhone" happened.

For those of us who were young and, yes, naive, and who weren't paying attention to the first dot com crash in those newspapers adults read while we played (8 year olds in the 90s rarely using IRC to get the lowdown on Silicon Valley), it seemed like it was possibly a new age. "Going public" in 2004 wasn't yet another obvious "oh here we go again" moment, because they were among the first in the generation, with the previous generation mostly fizzling before pervasive internet access (Amazon made it, but it was a slower burn).

Chrome and Android were 2008, and I remember first hearing around then the phrase "Google Mothership". Though I never stopped using Firefox (and Opera; I don't remember when I switched to Firefox), Chrome was undeniably a technical coup at the time. Being cool and shiny and actually good at being a browser, while kicking that evil Microsoft in the teeth, helped too. It took time to burn through that goodwill. Even today, very many otherwise FOSSy people won't move from Chrome.


This made me nostalgic for the low bandwidth, no javascript, almost all text gmail interface. It felt so snappy.


It was actually fun to go poke around and see what new sites they were cooking up. I'd forgotten.


As for real products, at Google's current size it has become near impossible to launch new ones worth their time.

Google currently has a quarterly revenue of $70-80B.

Imagine an internal team launches a new product to collect $100M in quarterly revenue. An earth-shattering success for any entrepreneur.

For Google...it doesn't even move the needle. Does nothing for stock, it's not strategic, and may become a liability later on.

You would need to launch a multi-billion new sub business for it to be of any interest to Google, which is impossibly hard.


This is why they should go full conglomerate and spin off companies all the time.

Otherwise, with those expectations, it's impossible to build something good and impactful.


I really thought this was what the plan would be when Alphabet was formed.


Google X/X Development LLC is the Google incubator for possible spin-offs as far as I can tell.


What would be the benefit of doing it spun-off vs. in-house when it's still owned by the same company and making the same amount of money?


Less red tape, more freedom to operate.

If done inside a conglomerate, it can be the best of both worlds. Access to Google tech and funding, but free from rigid corporate hierarchies, at least at the beginning.


Spinning it off limits the downside/risk to the parent company. Basket of options worth more than an option on a basket and all that.


... which is the reason why many large corporations acquire products: only once they are big enough are they relevant.

Issue for Google: they have to be careful that antitrust regulators don't block the acquisition for some reason.


That's the big question mark I have for Flutter. Looks like a pretty nice platform from the outside, but I cannot see Google NOT killing it


Googler, opinions are my own.

You have to look at what teams at Google are using Flutter. Any dev tool Google officially releases tends to be funded by some product that actually likes/uses it.

Current list from https://flutter.dev/showcase: GPay, Earth, Ads (this is a big one), and others.

There are a lot of teams using it, which is why it's still getting so much support. If you see Google apps moving away from it, then it's time to start looking for an alternative.

It's also why AngularDart still exists and is getting updated. There are large projects that use it, so they will keep supporting it.


The fact that Pay and Ads both use it, along with YouTube Create even, is a pretty good sign, because if they have a non-trivial codebase of Flutter/Dart app(s) then killing it would impact all those teams who are doing important work. I've debated trying Flutter/Dart a few times and this makes me feel more willing to try it.


And none of Google’s flagship cross platform apps use Flutter.


> Ads (this is a big one),

How is Ads using Flutter? It likely doesn't, or uses it for some largely irrelevant things like mobile SDK integrations


Ads is the reason Dart is still around; they saved the team after the project folded. After migrating from GWT to AngularDart, they weren't into doing yet another rewrite.


> Ads is the reason Dart is still around; they saved the team after the project folded.

Wasn't it Analytics which had Dart? But I do remember something about Ads saving it.

> they weren't into doing yet another rewrite.

It all depends on politics within the company. For example: YouTube was hastily rewritten using alpha/beta versions of Polymer targeting Custom Elements v0. The moment they did that, v0 was deprecated.

So within the next 4 years they rewrote YouTube again with a Polymer version targeting Custom Elements v1. The moment they did that, Polymer was deprecated and replaced with lit.

Even though they haven't rewritten YouTube again, they've now spent time integrating Wiz, an internal Google framework (that also got merged with Angular).

The costs of rewrites don't matter at Google as much as promotions.


I think Ads has a mobile app for people who run them?


I assumed Flutter is open source; if they do kill it off, is there a reason not to expect the community to fork and maintain it? Presumably they'd have to rebrand it without Google giving permission for the name, but that alone doesn't seem like enough to stop it from existing in some form.


The base ROI of Flutter to Google isn't all that clear because it's relatively complex to maintain. Worse, it requires maintaining Dart, which appears to be a dead-end language in terms of adoption.

If Flutter and Dart were donated to the community, they would most likely slowly die because no one outside of Google gets enough benefit from them to justify maintaining an entire programming language.


It's worse: Flutter actually works against Google's own interests. Webpages that render text in a canvas are not as easily indexable as webpages that emit HTML. It's funny too, because the same is true for sites made with Flutter: they aren't SEO-friendly.

You could suggest using AI to OCR the canvas, but even that would be subpar, because most sites that use HTML provide multiple screens' worth of info, while sites that render to a canvas render only what's visible. The rest of the data is internal. You'd not only need the AI to successfully OCR the text, you'd need it to interact with the page to get it to render what's not currently displayed.


Accessibility interfaces are ideal for this situation, allowing an LLM to interact with a GUI strictly through text. Unfortunately, they're often an afterthought in implementations.


The same can be said of GWT.


> is there a reason to not to expect the community to fork and maintain it?

I think you should ask the opposite question instead, and this goes for any project that is corporate "owned"/sponsored like Flutter: why should we expect the community to maintain it if the company abandons it?

Are there any non-Googlers with commit access to the project? Do non-Googlers comprise a decent percentage of contributions? If not, the bar goes up quite a bit for a fork, or for a "peaceful handoff" of the existing project.


Google Maps isn't going anywhere. I'd even argue that it's more important than search; it's far more difficult to switch to a competitor.

GMail isn't going anywhere either.

YouTube is a bit less clear: it doesn't really have any competitors, and is extremely popular and widespread, but it surely costs a fortune to keep running.

It'd be interesting to see the financials on these different business units to see which are the healthiest and which aren't.


How is it difficult to switch to a maps competitor? Really depends on how much you use it, but most of my use involves looking up a location, then getting directions to it. There's no cost to switching, except perhaps setting up my commonly used addresses, which are basically in my contacts anyway. I'm sure there are cases that are harder to switch, but I'd guess that they don't apply to the majority of people.

Gmail, on the other hand, like any email service, is much harder to switch. Until we get mandated email address portability, like was done with phone numbers some years back.


>How is it difficult to switch to a maps competitor?

>but most of my use involved looking up a location

Right, and that's where most of Google Maps' value is: it's really a business directory combined with a navigation system. I'm not going to look up an address for some place, I want to just search for "<business name> near me" and find all the nearby locations and pick one and go there. Even better, Google Maps has reviews built-in so you can see if people hate a certain place.

>but I'd guess that they don't apply to the majority of people.

If you think the majority of people keep business names and addresses in a contacts list, you're really out of touch.

Also, GMaps has features competing systems usually don't, depending on locality. Here in Tokyo, it's tied into the public transportation system so it'll tell you which train lines to ride, which platform to use, which car to ride in, which station exit to use, when a train is delayed, etc. It'll even tell you exactly how much each option costs, since different train lines have different costs. Then, inside many buildings, it'll show you level-by-level what business is on what floor.


> If you think the majority of people keep business names and addresses in a contacts list, you're really out of touch.

This is not at all what they were saying, and I'm not sure where you got that. What they're saying is that Google doesn't have a monopoly on business location data, so searching for a business on a competitor (especially a large one like Apple, but mostly even OSM) does just work.


So you are not aware that public transport companies in general have a public API for that data, and that openstreetmap exists?


"In general" is quite a stretch.

I am aware of plenty of public transport systems in European countries where you'd better print out those PDFs from their website, assuming they have them in the first place.


Those are all great features, but they are not features which lock you into the platform. If bing or Apple maps had the same utility I could switch to them at the drop of a hat.


It is the only map application that allows you to check public transport (bus/metro/tram) with changes, other than (if it exists) a local app for the city.

As far as I know, there is no other map application that does that.


Exactly. Here in Tokyo, there is a local app that's tied into public transit just like GMaps, but all it does is tell you how to get from Station X to Station Y. If that's all you want to know, it works quite well. But most people want to know more than that: where is Business A, and what's the fastest/cheapest way to get there from my current location? Which station exit should I use? And oh yeah, how are the reviews on it? And can I see the menu, and place a reservation too?


Apple Maps does this


Transit is generally pretty awesome.

https://transitapp.com/


> YouTube is a bit less clear: it doesn't really have any competitors, and is extremely popular and widespread, but it surely costs a fortune to keep running.

There was a brief moment when it looked like Twitch would kill YouTube; then, much like Instagram responding to Snapchat, YouTube took the best parts of it and added them to itself. I'd be amazed if YouTube wasn't profitable now, with the combination of more ads, more subscription/superchat options, and the pandemic-era boom in streaming.


Business strategy is more than just launch product, make money. Companies operate in a dynamic space, and if you play any game (basketball, chess ...) you know that moving forward at all costs is not how you play the game. Sometimes you side-step, sometimes you take a step back, sometimes you sacrifice a piece.

If you expect your team to just go-go-go, you might gain something in the short term but you'll fail miserably in the long term.


That is totally fine, but Google is a case where they go into new business units and fairly often kill those units quite soon.

It's not like they're doing cereals and now they're doing other cereals, so you can fall back to their previous cereals. You always have to find a new supplier, or else just start buying bread.


Yeah, there are so many examples. It's one of the reasons I was unwilling to jump on board Google Stadia... I kept thinking "let's wait and see how it does". Particularly since you had to buy non-transferable game licenses.

And, of course, they shut it down. At this point, Google is going to have to be in a space for at least 5 years and clearly be highly successful before I would even think about using them.


And I worked at AWS Professional Services and sat in on sales calls all the time where one of our talking points was “a company couldn’t trust Google not to abandon a service they were going to be dependent on”.

Was that partially FUD? Probably. But there were plenty of services sales could point to.


I don't even use search anymore. Kagi schools it


Eh, when it happened the world was ready. It felt like a different internet back then, and it was pretty great for a company to say that everyone was already on github anyways, so let's all go there (Microsoft followed almost the exact same timeline with Codeplex, and really who cares).

More Google style would be shortly after shutting down Google Code starting up a new code hosting project, migrating several times to always slightly incompatible frontends requiring developer changes, shutting down the project, and shortly afterwards starting up a new code hosting project...


Unrelated to the thread - do you use any email providers with a custom domain? If so, would you suggest them? Who are they?


Fastmail has been solid for the last several years. I would recommend it.


Fastmail is just about perfect. Feels like email 30 years ago but with a spam filter.


Also a happy fastmail paying customer. The aliases are really good.


You can also use Gmail with a custom domain. Helps with not being locked in to Google; it doesn't, of course, help with them selling your data.

E.g. https://juri.dev/notes/email-routing-gmail-cloudflare/


I agree with a lot of the other options, but I'd be remiss if I didn't mention one that isn't always obvious.

With all the Big Corp asterisks, Microsoft Business Basic can be a pretty great deal at 6 USD/month. Solid reliability, aliases, (too many) config options, 1TB of OneDrive storage, cloud MS apps, etc.


Apple hosts custom domains at no extra charge if you have iCloud+. I've used it for a couple years now to host my family's domain and a couple of hobby ones for myself.


I would probably still be on Apple Mail and using its generated email aliases, except it went through a stage of not dealing with various bounce-back spam, so I switched to Fastmail and have had no issues.

Either now is probably a good choice.


If you need something cheap and are willing to deal with a tiny company, have a look at <https://purelymail.com>. I've been happy with them for two years, never had any problems with delivery, and they support infinite domains/aliases, and custom Sieve rules. But do not use it if you need 99.999999% SLAs or anything like that, because again -- it's a one-man show.


Zoho Mail. Cheapest, no nonsense, very straightforward UI, all the settings you'd ever wish for, support replies quickly even on the cheapest tier.


I've used Fast Mail, liked it a lot, but I think at the time the pricing wasn't the best.

I then used some other platform that was quite "old school" that is recommended here. The mail admin was very opinionated, which led to some mail being blocked. That wasn't cool.

I'm now with Migadu on the cheapest plan and it's been fine. Had a few outages here and there, but otherwise solid.

I'd happily rec Fastmail or Migadu.



i really, really like purelymail. great pricing, good customer service, reliability, and documentation.

other popular options are fastmail, migadu, and mxroute.


Proton mail


Fastmail


uberspace because of SSH access


Migadu


mentioning the sunset of a google products, in general, in any thread about some specific google product, is a kind of godwin's law.

because of ordering by vote, we can't see how quickly it happened in this case. in godwin's law the likelihood increases by length of discussion, implying a slow/linear kind of probability, but for google product sunset i would argue that the likelihood ramps up very aggressively from the start.

i hereby dub this `jiveturkey's law`.

like godwin's law, the subthread becomes a distraction. unlike godwin's law, the sunset distraction is always a legitimate point. it's just that it has become tired.


a simple heuristic: does it make Google a boatload of money? if yes, it's safe


Anything besides ads, GCP, and Apps in that bucket?


I won't disagree with you, but if you can put ads in the product and have them be front and center, that also counts as ads.


Photos and email, as sources of personalization data and AI training data, are also part of the ad supply chain.


I'm still dealing with the fallout of the domains selloff.


    All of this would have been easy to find out simply by asking
I'm not a journalist, but in an ideal scenario, how would somebody have known that you were one of the key members of the project?

It's not like Google (or anybody else) makes this easy to know. And call me jaded, but something tells me official Google PR channels would not have been really helpful for this.

And also - are most engineers in your sort of position even free to remark on such projects w.r.t. NDAs, etc?


"I'm not a journalist, but in an ideal scenario, how would somebody have known that you were one of the key members of the project?"

It's not about asking me, it's about not asserting things you don't know.

Instead of saying "people did x for y reason" when you have literally no data on x or y, you could say "I don't know why x happened, i only know about z". Or if it's super important you try to put something there, beforehand, you could say "hey does anyone know why x happened? I'm working on a blog post and want to get it right".

Then, someone like me who saw it could happily email you or whatever and say "hey, here's the real story on x".

Or not, in which case you can leave it at "i don't know".

The right answer is not to just assert random things you make up in your head and force people to correct you. I'm aware of the old adage of basically "just put wrong stuff out there and someone will correct you", but i generally think that's super poor form when it's about "other people or things and their motivations". I care less when it's about "why is the sky blue".

In this case, it also happens that there are plenty of on-record interviews and other things where what i said, was said back in the day.

So a little spelunking would have told them the answer anyway.


This explains so much about modern media, news, and story-telling. It's easier to make up a plausible narrative that supports your story than simply admitting you don't know.

You can see how, as the article develops, they go from being "uncertain what made GitHub succeed" to being definitively sure about why it succeeded. It doesn't surprise me that details were glossed over as the story rose to the ultimate crescendo of "GitHub dominates".

This is how a good tale is spun and the people lap it up. What's a good tale without a bit of embellishment? (said every bard since antiquity)


You think you can ask google and get an honest reply?


I think you just got an honest reply from 'DannyBee.


To be a bit more generous: I think from Scott Chacon's point of view, "They had no taste and we beat them in the market" is a fair way to hold the elephant. Lacking the Google-internal perspective, it's a reasonable conclusion from the signal he has. I don't get the sense from this post that he's trying to publish a doctoral thesis on the historical situation in the industry; he's providing some primary-source testimony from his point of view.


I guess i'm going to disagree with you.

He's not just providing primary source testimony from his point of view, he's trying to pretend he has primary source testimony on what others were doing as well.

If he left out the parts where he has no data (or said i don't know), it would have IMHO been a better post, and actually primary source testimony.

You also don't run into the Gell-Mann amnesia problem this way.

To each their own, of course.


I'd imagine that if you can say "I don't know why x happened" then you can also save some breath and say nothing at all, and there are billions of folks doing that right now.

Putting out a general inquiry of "why did x happen?" also has a lot of stigma attached to it in RTFM, LMGTFY internet culture. The result will not be a cloud of other people interested in the answer upvoting the question to make it visible to folk who might actually know. Snide comebacks notwithstanding the question will languish forgotten in a dusty corner of whichever forum it was asked within.

But bold assertions based on plausible conjecture? That can earn upvotes, drive engagement, draw ad clicks, and sometimes even prompt corrections from actual experts.

Certainly not an ideal situation but this does appear to be where we're at.


Scott Chacon had absolutely zero obligation to reach out to you.

No more than a movie critic has an obligation to speak with the director before printing their review. He is comparing/contrasting GitHub with Google Code, the product you released. That is all.

As for his claims, I don't even see how the linked article significantly contradicts what you yourself have said about the genesis and goals of Google Code.

You claim that the purpose of Google Code was mainly just to break the SourceForge monoculture. In other words, it wasn't a product intended to be a polished worldbeater. Not something great. Not the next Gmail. Just something functional. A monoculture-preventer.

Okay.

(I am a software engineer as well, so I understand that even this level of creation involves many thousands of engineer-hours of blood, sweat, and tears. Not a knock.)

So yeah, it doesn't sound like you created Google Code with "taste" which the linked article seems to be using as shorthand for "lots of product passion and UX polish."

While the tone of the linked article seems a little more aggressive than it needs to be, it... seems correct, based on what you've said?


He had an obligation not to assert as fact things he didn't know to be facts. I'm not going to high-horse it; it's a character fault I share, and I think we all do at times. But at the same time, you can't flip it around on the person who knew he was wrong, and publicly cleared the air. Chacon was wrong about something. That's his problem, his fault, nobody else's.


Specifically, what "facts" did Chacon get wrong?

From the article I'm going to quote Google Code mentions.

There are other assertions about Google's internal adoption of Git, but these seem to be backed up by e.g. the email he screenshotted.

    Furthermore, the players (Sourceforge, Google 
    Code, etc) who eventually did care, after seeing 
    Git and GitHub rising in popularity, simply had 
    no taste.
This is a subjective and opinionated statement, for sure. It doesn't seem like an assertion of fact to me.

    In 2011, both Google Code and BitBucket added Git 
    support, which I’ll mark as the year that the nail 
    was in the Mercurial coffin. 
First part fact, second part clearly opinion.

    Just 4 years later, in 2015, Google Code just 
    completely gave up and shut it’s service down. In the 
    email they sent out, they basically said “just move to 
    GitHub”. 
What is non-factual here? He screencaps the email.

    So, Why Not Google Code?
This section is really about what Github achieved, no direct Google Code assertions.

    The original article is correct, the other 
    hosts focused on distribution and revenue streams. 
    We cared about developers.
Well, this is a speculation that (according to one Google Code member) is not correct - "DannyBee" claims they just wanted to avoid a SourceForge monoculture.

Is this really the point of contention?

I read the article in the context of "a guy who worked at Github talking about his experience at Github, which unavoidably will also mention externalities like the competition" and not at all in the context of "hey! this is the inside scoop on google! I got facts about Google's inner workings!"

I just don't think there's a reasonable assumption that this should have been like, a rigorously fact-checked statement.

Expecting a personal blog to adhere to the standards of some other kind of publishing is misguided and unrealistic. This is clearly a personal account and I'm just baffled that anybody would confuse a personal account like this with capital-j Journalism.

This probably reads as pedantry (if anybody actually reads this post) but it's really, an honest attempt to understand.


Yeah but you said "nobody bothers to actually ask other people things anymore" and I don't think it's reasonable to expect someone to ask about this when the probability of getting an answer is so low.


[flagged]


Really letting your opinion on Google dictate the rest of your response here huh?


The way it used to work is that tech journalists (or sports journalists, or any other type) had contacts in the industry. If those people were not directly involved, they probably could at least suggest someone else who might know. Leads were followed up, and eventually the writer got the story.

I'm not sure how it works now; cynically, I would suggest that the writer asks an LLM to write the story, gets a rehash of Wikipedia and other sources, and then maybe makes some attempts at firsthand verification.


That is neat, but Scott Chacon is not a journalist, does not act like a journalist and what you are reading is not tech journalism.

You are reading the personal diary of someone with personal connection to a topic and complaining that it is not up to the standards of professional journalism.


I'm complaining about nothing, here.


The linked article doesn't claim to have insight into the inner workings of the Google Code team, or what Google's leadership hoped to accomplish with that product.

Rather, he is comparing the actual released products.

Specifically, he says that competitors to Github had no "taste", which he seems to be using as shorthand for "making a polished product and really focusing on developer/user experience."

You don't need to interview the folks who worked on Google Code to make that claim, any more than I need to interview Steven Spielberg before I comment on one of his movies.

(Based on my memories of Google Code, I'd say the linked article's observations about Google Code are also absolutely correct, although that's really beside the point)


Well, finding, vetting, and getting comments from sources is like half of journalism. If you can't or won't do that, whatever you are doing is probably not journalism. It's just an editorial, think-piece, or whatever.


Journalists are supposed to investigate, not speculate because finding an email is too hard.


Maybe a cofounder of GitHub has the reach and network to ask for the e-mail of someone who worked on the Google Code team. A journalist might not, that's true.

Just flat out saying they had no taste in product development, however, is a bit of trash talking for no reason.


I don’t see a contradiction; it’s all part of the story.

Understanding your (Google’s) motivations explains why Google Code didn’t improve as much. It doesn’t contradict that Github had better UI, or their explanation of their motivation to build a better UI.


> It was not trying to make money, or whatever

If Google Code succeeded, it’s hard to imagine that Google would not have tried to monetize it someday.

This also reminds me of Google’s (also initial) position on Chrome vis-a-vis Firefox: create a product “not trying to make money, or whatever” but just to limit the market share of a competitor.

The less flattering term for this in the context of anticompetitive behavior is “dumping”: https://en.wikipedia.org/wiki/Dumping_(pricing_policy)


"If Google Code succeeded, it’s hard to imagine that Google would not have tried to monetize it someday."

Google code did succeed in that sense. It had hundreds of thousands of 30-day active projects, and some insane market share of developers.

I don't honestly remember if it was even shrinking when we decided to stop taking in new projects.

I doubt we would have monetized it directly (IE sell an enterprise version) - the entire market for development tools is fairly small.

In 2022 it was ~5 billion dollars, and future estimates keep getting revised downwards :).

CAGR has been about 10-14% in practice, sometimes less.

I don't remember if it's still true, but most of that 5 billion dollars was going to Atlassian (80% at one point).

Now, if you project backwards to 2006, and compare it to other markets google could be competing in, you can imagine even if you got 100% of this segment it would not have actually made you a ton directly.

Indirectly, eh, my guess is you still make more off the goodwill than most other things.

It's actually fairly rare to make any significant money at development tools directly.

Nowadays, the main source even seems to be trying to sell AI and productivity, rather than tools.


It seems highly likely that a successful Google Code would be used as an onramp to Google Cloud. IOW, indirect monetization so it likely would still have a generous free component.


Yeah exactly, it is extremely easy to imagine this, because when Github was acquired by Microsoft in 2018, Diane Greene (who headed Google Cloud at the time) commented on this

It sounds like Google would have paid some amount of billions for Github, but not the amount that Microsoft paid

I personally don't think Google would have tried to monetize Google Code, even if it had 10x or even 50x the users that it eventually had. (And I say that having worked at Google, and on Google Code briefly!)

I think it made more sense as complementary to another product (which in some sense explains a big problem with developing products at Google)

---

https://www.cnbc.com/2018/07/24/google-cloud-ceo-diane-green...

CNBC previously reported that Google was looking at buying GitHub, but Greene wouldn’t confirm the report.

“I think the only thing I’ve said is that I wouldn’t have minded having them,” said Greene.


Relatedly, it is hard to understand how operating Google Code in a manner “never trying to win”, “not trying to make money, or whatever” and “just to prevent … a monoculture” was in the best interests of Google the corporation and its shareholders


I'm not sure what "SF" means in this context. San Francisco? I can't figure out what you want to say Google Code was for exactly. If Google launches a major project, I find it hard to believe that it's just for fun.


It’s short for SourceForge, which is still around, technically, but is a shadow of its former self.


There is still some code hosted on SourceForge that has no other public source. This is unsettling because I don't know how long SourceForge will continue operating and Wayback Machine captures of SF pages don't include tarballs. Download backups yourself whenever you find something like this.

I'm contributing to someone's software that started as an academic project. The current version is on GitHub with history back to 2014 but early releases back to 2008 (from before the author started using version control) are on SF in the form release_1.0.tgz, release_1.1.tgz, etc. I stumbled on these old versions this weekend while looking for related material. Once I decompressed them I found that they contained notes and old code that really helps to understand the current project's evolution and structure.
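
If you want to grab such tarballs before they vanish, here's a minimal sketch using only Python's standard library. The project and file names are hypothetical, and it assumes SourceForge's usual downloads.sourceforge.net layout (exact paths vary per project):

  import urllib.request

  project = "someproject"  # hypothetical SF unix name of the project
  for name in ["release_1.0.tgz", "release_1.1.tgz"]:
      url = f"https://downloads.sourceforge.net/project/{project}/{name}"
      # urlretrieve follows the redirect to whichever mirror serves the file
      urllib.request.urlretrieve(url, name)
      print("saved", name)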


Yeah, what especially irks me with SourceForge is the common habit of projects regularly deleting all outdated releases (due to some per-project size limit? or just not to clutter up the list?). In old projects with messy releases, it can be very hard to piece together exactly which revisions went into a version "x.y.z" that everyone else depended on, except by actually looking into the released files. If those files don't get archived anywhere, they just get lost to the ether. (At least, short of a manhunt for anyone with the files in an ancient backup at the bottom of the sea.)


It was an early example of the "enshittification" phenomenon. It was a particularly bad example of advertising and other spammy distractions, because sites for developers have the lowest CPM of anything except maybe anime fan sites.

It is super hard to break into a two-sided market, but it is possible when a competitor has given up entirely on competing, which might have happened in the SourceForge case because the money situation was so dire they couldn't afford to invest in it.


  ...lowest CPM of anything except maybe anime fan sites.
I am not in ads, so could you expand on this? Why are anime sites low value vs. other niches? I would naively expect that anime has huge numbers of under-20 fans who are more susceptible to merchandise advertising.


They pirate or already subscribe to Crunchyroll. Many sites are nowhere near brand safe (one reason Danbooru is as good as it is is that it's not even brand safe for porn ads). Some of them buy figurines, but they already know where to get them. Good Smile would rather invest in 4chan than buy ads on western anime sites. I have no idea what it is like in Japan.

I am skeptical of claims that “advertisers want to target young people because they have a lifetime of consumption ahead of them”. Maybe that is true of Procter & Gamble (who expect to still be selling Dawn and Tide 50 years from now), but few advertisers are thinking ahead more than one quarter, if that, particularly in the internet age where an ad click can be tracked to a transaction.

People today will say all the ads on TV target oldsters because only oldsters watch TV. But circa 2000 there was no streaming, and watching TV made me think “I want to die before I get old”: there were so many ads for drugs, embarrassing health conditions, and personal injury lawyers, and relatively little for things you would spend your own money on, because not a lot of people in the audience had money to spend, particularly after paying the cable bill.


A small SourceForge retrospective for those not around at the time:

This post's overview of contributing to Open Source is largely correct. You'd get the source tarball for a project, make some changes, and then e-mail the author/maintainer with a patch. Despite the post's claim, Open Source existed long before 1998.

Rarely did Internet randos have any access to a project's VCS. A lot of "projects" (really just a program written by a single person) didn't even have a meaningful VCS; running CVS or RCS was a skill unto itself. There was also the issue that a lot of Open Source was written by students and hosted on school servers or an old Linux box in a dorm.

SourceForge came along riding the first Internet bubble. They let a lot of small FOSS projects go legit by giving them a project homepage without a .edu domain or a tilde in it. Projects also got a managed VCS (CVS at first, then Subversion later), contact e-mail addresses, forums, and other bits that made the lives of Linux distro and BSD ports maintainers much easier. They also had a number of mirror sites, which enabled a level of high availability most projects could never have had previously.

Then SourceForge's enshittification began as the bubble money ran out. The free tier of features was cut back, and then they started bundling AdWare into Windows installers. SourceForge would literally repackage a Windows installer to install the FOSS application plus some bullshit AdWare; IIRC a browser toolbar was a major one.

As the official upstream source for FOSS projects bundled by package managers, the AdWare wasn't much of a problem there. But SourceForge was also the distribution channel for a significant number of Windows FOSS apps, like VLC, MirandaIM, and a bunch of P2P apps, which were impacted by the AdWare bundling at various points.

A GitHub founder patting themselves on the back for the success of GitHub is sort of funny, because GitHub followed a similar track to SourceForge but got bought by Microsoft instead of a company coasting on VC money. I can easily imagine a world where an independent GitHub enshittified had it not been bought by a money fountain.


> Open Source existed long before 1998

It did not. Free software did. The term "open source" was coined by Christine Peterson at a meeting in January of 1998, as Netscape was contemplating releasing their source code as free software. The Open Source Initiative was founded a month later, and forked the Debian Free Software Guidelines, written by one of the OSI founders, Bruce Perens. This was a deliberate marketing exercise, both to avoid the unfortunate "as in beer" connotations of free software, and to distance the concept from the FSF and Richard Stallman.

All of this is well documented.


Does something only exist once it is named? Free software is also open source software, even if that name was only coined later on.


We don't have to get philosophical about it, this is what the Fine Article says:

> Most of you kids probably don’t remember a time where there weren’t millions of open source projects around, but the phrase was only coined in 1998.

100% and unambiguously true.


In 1998 there definitely weren't millions of open source projects. Debian 1.3.1 (released in 1997[0]) had over two thousand packages. I pick Debian here because they only packaged software with unambiguously Open licenses. That's just packages shipped by Debian and not a full accounting of all open source software packages available in 1997. I'm sure some Walnut Creek CDs had a bunch more tarballs with more ambiguous licensing.

Open source software existed before 1998. I don't know why you're trying to quibble about the exact branding; just because the Open Source term wasn't coined until a certain date doesn't mean that is the start of all software with open licenses, or of people publicly releasing the source of their software. The GPL, MIT, and BSD licenses are all from the 80s.

[0] https://www.debian.org/News/1997/19970708


I'm not the one quibbling here. You are.

You said this:

> Despite the post's claim, Open Source existed long before 1998.

I've illustrated two things: it is not true that the term "open source" existed before 1998; and the Fine Article's claim is, to quote it again:

> Most of you kids probably don’t remember a time where there weren’t millions of open source projects around, but the phrase was only coined in 1998.

That is, it is explicitly a claim about the phrase, meaning no possible interpretation of your expressed contradiction of the article can be correct.

All of your points could have been made without incorrectly stating that the author was wrong. In that case I would have upvoted and moved on.


I agree w/ the analysis in the article and yours. “Good taste” for them must have been influenced by “let’s not be SourceForge”.

Part of the enshittification story is the tragedy of the non-profitable site that has a large user base. Here is a recent example

https://news.ycombinator.com/item?id=41463734

Potentially investments in a site could have huge leverage because of the existing user base.

I (and most of my team) lost our jobs at one of the most well-loved open access publishing sites in the world because of a complex chain of events rooted in the site not having a sustainable funding source, despite the fact that it was fantastically cheap to run if you divided the budget by the number of daily users. Fortunately they figured it all out, and the site is still here; if you work in physics, math or CS you probably used it today.

Still, it was painful to see "new shiny" projects that had 4x the funding but at most 1% of the user count, or to estimate that we could save users $500M a year with a $500k budget.

Thus you can overthrow SourceForge, but you cannot overthrow something profitable and terrible such as Facebook, Match.com, or the "blob" of review sites that dominates Google; see

https://news.ycombinator.com/item?id=41488857


> GitHub followed a similar track to SourceForge

Can you provide an example?


GitHub offered a better experience than the existing offerings. They then scaled massively, to the point where it was expensive to run without a lot of good options for monetization. Thankfully for GitHub, they got bought by Microsoft and not, say, Verizon or GameStop (which owns the corpse of SourceForge, for unfathomable reasons).

GitHub could have easily enshittified in an effort to make money had they not been bought by someone with deep pockets.


I thought GameStop sold SourceForge off over a decade ago and that it has changed hands a couple times since.


GitHub was not on the same track as SourceForge, and I would hazard they were in a completely different world than the one SourceForge developed in. For instance, GitHub is far less likely to host an executable for any software, which is where you're going to get bundled installers with AdWare or malware. I know that GitHub allows installers to be uploaded, but if we're going to compare the time period before Microsoft purchased GitHub, I really don't think this is fair. I understand the history of not trusting Microsoft, and even as someone who is deeply involved in using GitHub and Microsoft software and features, I can understand a level of distrust. Everything you said about SourceForge is correct, so I don't mean to put down your entire comment here.

I believe GitHub's underlying use of the Git SCM, as well as the interface that allowed web users to look at "pull requests" as a concept was the real value in GitHub, far before hosting binaries or attracting the attention of Microsoft. The attraction to Microsoft was the ability to pull in the growing network of git users, the popularity of GitHub to a growing number of developers, and the ability to integrate the service into their developer offerings (this was the key that would have made the other two "positives" worthless to them).

I think with any tool or technology you should have an "out", in case another corporation/company takes over and doesn't align with your values. Being stuck on SourceForge, Google Code, GitHub, Bitbucket, etc. is a recipe for being put out to pasture because you couldn't adapt and realize that there is a huge world out there, and tools and tech come and go. Always have an alternative for whatever you do, because things change too quickly; plus you get another point of view on solving problems (if that's your thing, and you aren't just developing for the money, which is fine if you can admit it to yourself).

The fact that you are able to dive back in time with SourceForge tells me you are one of those people who have been into technology since before the dot-com bust, but probably got burned by Microsoft in some form. I'm not defending Microsoft for their past practices, only coming at this from what they have done with GitHub to this point. Hopefully I'm not wrong, but I do have a plan in place in case I am, and I think that's the most important thing in software.


I don't think GitHub's situation is completely analogous to SourceForge. You're right that GitHub doesn't have a huge moat by virtue of the way git works. I think Microsoft realizes that, no one necessarily loves GitHub so much they'd not jump ship if GitHub became too user hostile.

To be clear I'm not trying to be down on GitHub here. They made a good product and a very good alternative to SourceForge. I think they just got lucky getting bought by Microsoft when they did. By 2018 I think they'd gotten to the point where their costs would have required to start chasing revenue.


Didn’t they also start packaging malware into binary downloads?


Was it actual malware? I thought it was just pre-checked bundled software, like the Ask.com toolbar for Internet Explorer that also shipped along with the Oracle Java JRE. Maybe I was just more careful than most, but I have been using FileZilla for many years and never had any of these issues, as long as I paid attention to the installer and what was included.


You got a chuckle out of me with such a specific reference that I know I got auto-installed too. I don't know, personally, but I do recall reading accounts of malware from them from around that era.

I suppose I would call what I saw "Grayware" [1], which is debatably not malware (but debatably is, too). It was enough of a smell for me to stop using their site, though. I'd actually just forego software I was seeking out from them instead.

[1] https://en.wikipedia.org/wiki/Malware#Grayware


I don't deny that what they did was definitely not on the "up and up", for sure. Dark patterns, and I hadn't heard the term "grayware" before, but it definitely fits! I was only lucky because I actually watch installers and tend to mess with install locations, strictly when it comes to Windows software. I would prefer to be on a Linux distro for work, but it's honestly just easier for me to stick with Windows, because I have to help so many other people, and I know I would eventually lose my (current) knowledge of what can go wrong with Windows. Over time, I would be out of touch.


I only learned the term grayware from this thread. I wasn't sure precisely what did and didn't qualify as malware, so I checked Wikipedia and found it there.

As for the Windows fear, I understand that. That was me 10+ years ago, and I _am_ less knowledgable at helping people with Windows these days. I can still figure some things out by searching, and "I couldn't figure it out after 5-10 minutes" has turned out to be an acceptable answer too sometimes :)


Yep.


By then they were already desperate from the previous enshittification.


"shadow of its former self"

Was there ever a point in time when it wasn't something that basically sucked? For some reason there are still some widely used ham radio packages hosted on SourceForge, and it annoys me greatly. When you click the big green "Download" for the project you get... a .dll file. Why? Because the actual release artifact is some other zip file, and for some reason it doesn't deserve the "Big Green Download" button.

SF has always been this bad. Their core data model just doesn't jibe with how people actually interact with open source projects.

... and for that matter didn't they stir up some controversy a long while ago for tampering with project artifacts and adding extra "stuff" in them? (spyware / nagware / **ware?)


Yes, they were cool once upon a time. It was the place to be: you got CVS hosting free of charge instead of having to run your own (no git back then; hell, even SVN was released a few years after SF). It was like GeoCities.

It looks almost impossible today, but launching a service was really hard and expensive back then. It cost a lot of money/effort just in software. All that stuff you can just download and it actually works? No way man, it didn't exist yet.

That is why the LAMP stack was so great back then: it was free, it worked, and it was reasonably low-maintenance and super easy to set up.


Yes, they used to be great for open source projects. They did get wrapped up in controversy when another company took over and started including other software in the installers if you weren't careful to uncheck the optional (and unrelated) software. There is still great software hosted there, like FileZilla if you use a Windows environment. FileZilla did have the optional software installs for about a year or so, but as long as you paid attention it was easy to get around (not that that's an excuse for what they did).


> Was there ever a point in time where it wasn't something that basically sucked?

Yeah, when it launched it was cool and hip. Free public CVS server to host your open source cool project was cool. Probably went downhill as the ad market fell apart post dot-com, and the only way to get revenue was big green download buttons.


It's been a while since I bothered with SF but AFAIR the maintainer can select which file is the default download that you get through the big button.


The fact that letters "SF" may need explanation in a context of code hosting and building says how thoroughly the job has been done. A number of good alternatives exist, there's no monoculture (even though some market domination is definitely in place, but now by a mysterious GH).


> A number of good alternatives exist, there's no monoculture

That doesn't sound true to me at all, except maybe in some very small niches. I've used Bitbucket at exactly one job; I've found Codeberg, but no project I've used was actually hosted there; and literally everything else I see or use is on Github.


GitLab is relatively more widely represented, but of the projects I encounter, about 2-3% are on GitLab. I encountered projects on Codeberg, too, and even on sr.ht.

A bunch of larger projects have a mirror on GitHub for easier access.

BTW there's launchpad.net, which is often overlooked, but it's vital for Ubuntu-specific projects.

At paid day jobs, I had to use BitBucket at least twice, and I miss better code review tools, like Phabricator.

GitHub definitely dominates the market, partly due to the network effects, but I don't think they have a lot of moat. If something goes badly enough wrong there, there will be plenty of viable alternatives with an easy to trivial migration path.


> GitHub definitely dominates the market, partly due to the network effects, but I don't think they have a lot of moat. If something goes badly enough wrong there, there will be plenty of viable alternatives with an easy to trivial migration path.

Their moat is a billion development tool vendors that have "integrate with Github" as a must-have and expected functionality.


> BTW there's launchpad.net which is often overlooked, bit it's vital for Ubuntu-specific projects.

It's overlooked because in true Canonical fashion they went hard in on their not-invented-here-syndrome VCS that nobody asked for or wanted. That and also the integration with Ubuntu and nothing else.


I've used Bitbucket at almost every job I've had. I suspect its usage is much higher at private companies than people realize - if you've already bought into Atlassian for JIRA or Confluence, Bitbucket is an obvious selection.


Why is that? Jira's GitHub integration is nice and simple.


Why wouldn’t it be? Simpler to use first party integration and have centralized user management. Bitbucket works just fine.


Bitbucket is also just good - legitimately. I prefer the UI for a lot of stuff.


Everything has at least a mirror on GitHub, but quite a lot of projects are either on GitLab (e.g. KiCad) or self-host (freedesktop).

There is also a lot of stuff on Gitee (China), but due to the language barrier, it's hard to judge.


A decent number of larger open source projects self-host.


It reminds me of how Stackoverflow won so successfully that to even know about the old "expert sex change" joke is to thoroughly date oneself in modern conversation.


Earlier today I said that "what your github stars say about you" site was slashdotted. No one reacted so maybe I'll write about it on my LJ.


SF is SourceForge, which at the time effectively had a monopoly (and also sucked)


It still sucks, but it sucked then too.


I miss mitch hedberg


I now realize that it's SourceForge. :)


I thought he was talking about SpaceForce. Wait until we get to 2050, SpaceForce develops a really shitty monoculture. That's why I came back.


Sourceforge.


This post replied to a post talking about Source Forge. Had the same problem :)


I think it means SourceForge.


SourceForge, probably.


SourceForge


> "Actually, Google Code was never trying to win."

Wasn't Google reported among the bidders for GitHub?

https://www.cnbc.com/2018/06/05/github-interest-from-google-...

Maybe Google Code itself was never trying to win, but tendering a significant offer in the auction suggests Google was trying to win something.


> Actually, Google Code was never trying to win.

Herein lies the tragedy. Google could've offered, even sold, its internal development experience (code hosting, indexing and searching, code reviews, build farms, etc...) which is and was amazing, but it decided that it wasn't worth doing and let GitHub eat its lunch.


Developer infrastructure at Google reported into Cloud from 2013 to 2019, and we (I was there) tried to do exactly that: build products for GCP customers based on our experience building internal developer tools. It was largely a disaster. The one product I was involved with (git hosting and code review) had to build an MVP product to attract entry-level GCP customers, while also keeping our service running for large existing internal customers, who were servicing billion+ users and continuously growing their load. When Thomas Kurian took over GCP, he put all the dev products on ice and moved the internal tooling group out of Cloud.


I had this theory that generations raised on the internet and exposed to it from birth would be the most humble generations ever, because we all look for ways to be uniquely valuable, and it became nearly impossible to be egotistical when faced with the entirety of even just a mature youtube platform.

Instead what we got was higher degrees of selective attention, and very elaborate and obscure flip-cup tricks.


The only real tragedy here is that Google really did have best-of-industry semantic search integrated into their code searching tools, something that nobody has been able to replicate.

GitHub is great, but it's absolute ass for search. To the point where for any nontrivial question I have to pull down the repo and use command-line tooling on it.


The new GitHub full text search [1] is amazing. It is so good that for me it often replaces StackOverflow - I just use it to see how some API function is being used. It's especially useful if you're searching for an example with a specific argument value.

[1] https://cs.github.com/
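
The same kind of query can also be run from the terminal with the GitHub CLI; a minimal sketch, where the query string, language, and limit are illustrative placeholders rather than anything from the comment above:

    # search public code for a specific call with a specific argument value
    gh search code "json.loads(strict=False)" --language=python --limit=10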


have you used google's internal code search though? the link you posted is amazing in its performance, for sure. but once you are in some repo and doing what most of us call "code search", github drops off in utility vs google's internal tooling pretty quickly.

i'm only remarking on this because of the context in the parent you are replying to, whom i agree with. local tooling is better than what github provides. as a standalone comment i would simply upvote you.


Chances are that a random stranger on the Internet has not used Google's internal code search. Even if that person has, it would be useful to provide the context for others to understand.


They're talking about Kythe

https://en.m.wikipedia.org/wiki/Google_Kythe

https://youtu.be/VYI3ji8aSM0?si=D3Z3FIsB8wa7MTE6

Think jump-to-def and listing cross references working with c++ aliases, across generated code boundaries, etc.


> The entirety of the Google team working on Kythe was laid off in April 2024, as part of a company push to move certain roles overseas.

ouch just OUCH


Yea :(


Local code search is great in JetBrains products. I use PyCharm, and even on large codebases the search is almost instantaneous, and there are enough filters and options to nail down what you need. While JetBrains often drops the ball on the responsiveness of their products, the search remains fast, as far back (and as recently) as I remember.


I've used both. Google's code search is usually better if you know what you're looking for. It's less so if you need to do something that involves cross-language references (e.g. code file that references a translation string).


Why is this not the default?


I believe it is nowadays. For a while it was in beta.


Do you mean the non-semantic indexing, which covered most of Google Code? Like grep-style search support, but no real semantic data?

Or are you talking about the few repos that had semantic indexing via Kythe (Chromium, Android, etc)? We never got that working for generic random open repos, primarily because it requires so much integration with the build system. A series of three or four separate people on Kythe tried various experiments for cheaply-enough hooking Kythe into arbitrary open repos, but we all failed.


Isn't it working here: https://cs.opensource.google/bazel/bazel/+/master:src/main/s...

I remember there were docs how to onboard a repo to that list.


Yea it's still there, that is backed by Kythe.


I'm talking about Kythe, and learning that it ran into issues generalizing it for non-Google-controlled APIs explains a lot of the history I thought I knew!


Yea we never had it for even all Google controlled repos, just the ones that would work with us to get compilation units from their build system.

I was the last one to try (and fail) at getting arbitrary repos to extract and index in Kythe. We never found a good solution to get the particular insanity that is Kythe extraction working with random repos, each with their own separate insane build configs.


It almost makes me wonder if the right approach (had Google been willing to invest in it) would have been to wed Kythe and Bazel to solve that "insane build configs" problem.

"Okay, you want generic search? Great. Here's the specific build toolchain that works with it. We'll get around to other build toolchains... Eventually maybe."

Would have been a great synergy opportunity to widen adoption of Bazel.


Yea Kythe + Bazel is a billion times easier to do


  > Actually, Google Code was never trying to win.

  > It was simply trying to prevent SF from becoming a shitty monoculture that hurt everyone
Being an insider at Google might make one completely out of touch with reality. Was Google Video trying to prevent YouTube from becoming a shitty monoculture that hurt everyone, too? That one clearly failed, then.


Being a Google insider comes with the gift of being able to see the real rationales behind many products, and also the curse that nobody outside will believe you.

This is perhaps true of all big companies, but Google also seem to adopt a more passive PR strategy and don't try too hard to explain things, and it's just that much more difficult to understand Google when everyone else is louder.


I hope you never have to work for Apple ;)


Different projects have different objectives? The guy literally worked on the project! There were 4 people at its peak! Why would you think that would be the source of a major initiative?

Though there probably is some deeper critique about how you have a pretty amazing service running with "just" 4 people and aren't able to turn that into something useful beyond that objective. Innovator's Dilemma I guess.


>Google Video was created to prevent Youtube from becoming a shitty monoculture, too?

Like Google+ and all the other attempts:

https://killedbygoogle.com/

Google is actually the good guy to prevent monopolies, we just don't understand them ;)


To be fair, you "lacking taste" and "not trying to win" are not mutually exclusive. You could argue they are, respectively, the proximate and ultimate causes of GitHub's win.


> We folded it up because we achieved the goal we sought at the time, and didn't see a reason to continue.

In 2018, MS bought Github for 7B.

Google Code started to be shut down mid-2015. In 2015 it wasn't yet clear that it would be valuable for Google to host the world's code?


Well, it is hard/very distasteful to put ads on a source code hosting website, so this likely isn't aligned with Google's interest. No, I am not joking.


GitHub shows me ads for their other tools, conferences, and whatever all the time. And my company is a paying customer.


> It was simply trying to prevent SF from becoming a shitty monoculture that hurt everyone

Initially I thought SF means San Francisco, and I thought "Wow, what kind of monoculture can be prevented by Google Code", and then I realized that SF meant Source Forge.


This is a very bold statement, isn't it? So Google says the credit for making SourceForge (very deservedly) irrelevant is theirs, and not GitHub's (or SourceForge's own)?

As a complete nobody, why can't I think that for every product launch, if it wins it wins, and if it fails, I can claim it was never intended to win?


Props for serving as primary source material. This is one of the reasons I basically no longer trust "tech" journalism.

Aside from too many hit/bait/paid-for pieces, writers have simply gotten "lazy", as they're no longer incentivized to "get it right" - just "get it out".

Granted, this post is essentially meant to be a whitepaper for their product offering, but c'mon guys, you had the references to reach out, but were lazy for... reasons?!

AWS also has a history of some of these "built-to-marketize" products (e.g. CodeCommit), but at least there's a "solid core" of a stack to re-build these niche services from scratch.

What's Google's "reliable core" anymore, aside from Compute Engine and Search? Don't get me started on Cloud SQL.


Whether Google wanted to win or not: don't kid yourself into believing they _could_ have won even if they had tried. Google has a history of being terrible at executing on product ideas, and Google Code never had the same "feel" that GitHub did. There's a reason Microsoft paid billions for GitHub. If Google could have created that in-house themselves, they would have done it.

Not saying Google tried and failed - they may just have realised that actually winning this was never an option.


Why make this about Google Code? It reads a little like Main Character Syndrome.

GitHub beat basically every code hosting platform out there. Probably dozens of GitHub-like startups tried and failed because GitHub did so well.

So whether Google Code "wanted to win" or not doesn't really take anything away from the post's argument that GitHub won due to timing and taste.


> I was there, working on it, when it was 4 of us :)

raises eyebrow

When there were 4 of you?

So about $800k a year for that time period?

Just out of interest?


[flagged]


Not sure where you're seeing failure? I remember that Google Code was very popular back then, alongside MS's CodePlex (for .NET stuff at least), both being better than SourceForge, but neither really attempting to be a business - just providing free code hosting. They both shut down around the same time with migration tools for GitHub, and as we all know, MS eventually went on to buy GitHub outright.

For me, GitHub had only entered my sphere of knowledge when the Google Code phase-out started. It was pretty painful, because I was already struggling to teach myself to code, and git seemed incomprehensible compared to svn. SourceForge was up there with CNET as a somewhat sketchy site that could occasionally have something genuinely interesting to download, but usually did not. So I can kind of believe that GC/CodePlex weren't necessarily aiming to be profitable products. Selling access as an enterprise product was a common model even back then, and it would've been an obvious route if they were actually aiming for profit.


Curious whether you read the (pretty short?) post you replied to...

That post said that their goal was to make sure SourceForge, which was truly awful and also the only game in town back then, did not become (or remain) dominant.

Pretty hard to argue they failed at that goal when SourceForge is so not-dominant today that some people here didn't even recognize its acronym!


[flagged]


Google engineering culture ¯\_(ツ)_/¯


Your claim is Google Cloud Code is what brought SourceForge down?

And why is it a believable goal that as long as you bring down a competitor, it's OK if you fail?

No one (sane) has such a goal.


What evidence do you have, exactly?

I've been super-consistent on this for at least 10 years: https://news.ycombinator.com/item?id=8605689

It was also written in our OKRs, etc. at the time. I probably have plenty of internal evidence.

In the end, we would have been happy if SF had become a great and reliable place as well. You are assuming the goal was to destroy it. But in practice, we expected them to compete for real and become better as a result.


> No one (sane) has such a goal

I can't speak to modern Google, but old Google definitely did things to ecosystem-shape. It wasn't "sane" in the sense that, yes, it does buck simple calculations of profit-maximization; being able to avoid those simple calculations is the reason the founders structured the IPO to maintain executive control, so they could make wild bets that broke with business tradition.


If the goal is to slow or stop a competitor from gaining market share, and you slow or stop them from gaining market share, how is that failing?


So a third competitor appearing and taking the market share from both of them doesn't count? Such great logic to get promotions. Of course it would come from Google.


You are again thinking in terms of winning, and in literally the first sentence I wrote that we were not trying to win anything.

The goal was to get SF to not be a shitty monoculture. Ideally by SF being forced to compete with us, and making their product less shitty. It does not require destroying or harming SF to achieve this goal.

They chose, ironically, not to compete, and were destroyed as a result of their own choice.

It also happened that third parties came along and helped make it both not shitty, and not a monoculture.

Goal achieved. Why does marketshare have to enter into any of it?

Nobody was trying to destroy anything. The person who started the project (dibona) came from VA linux and slashdot, and was very good friends with the people who ran both (still is!).

He also started summer of code and lots of other things.

Stop being so amazingly cynical.


Why does it matter?

If you send troops to block the enemy from advancing, if those troops block the enemy from advancing, they've succeeded. Even if those troops didn't "win the war". Even if they all died in combat. The mission was a success.

If you want to say this is revisionist history from a googler... sure - make that case. But simply deploying a service to try to 1) prevent a competitor from gaining marketshare and/or 2) get the competitor to suck less... it's a valid move.

Personally, I don't think google code alone made much of an impact on SF directly, but google code and ms codeplex together probably did get people to start considering using something beyond SF.


> Why does it matter?

Because the truth matters? It is quite annoying to see people trying to bend reality to make themselves or their projects more important than they really were. Google Code sucked, like many Google projects.

Github wasn't successful because Google Code made people "start considering using something beyond SF". Github succeeded because git is great and social network features allowed it to reach a much bigger audience.


Google code did not at all suck, compared to the alternatives - essentially just SourceForge! - at the time it launched.

A lot of what the OP wrote rings true to me, Github obviously hit on a better model, but they clearly had different goals.

I'm sympathetic to people who came into this landscape after Github was already around feeling like google code was a lame also-ran, but as someone who thought "why is everything hosted through this shitty SourceForge website" when I first started using open source, it was a huge improvement.


DannyBee didn’t claim that they “brought SourceForge down” or even attempted to. They said Google Code was intended to prevent a monoculture, i.e. SourceForge being the only popular option.


Again, I'm sorry, but you really need to work on your reading comprehension! My post in no way claims that "Google Cloud Code is what brought SourceForge down".

It just says that the original post says their goal was for SourceForge to not be the shitty but dominant monoculture that it was when they started, and that fast forwarding to today, it clearly is not.

It may well be the case that they had absolutely nothing to do with that! But the goal, as stated, was achieved either way.

> And why is it a believable goal that as long as you bring down a competitor, it's OK if you fail? No one (sane) has such a goal.

I just can't fathom why you think this. Can you explain it more? You have stated it a few times, but not yet explained it.


In hindsight, the success of GitHub could be seen as a missed opportunity for Google with Google Code, but at the time SourceForge was a website with some advertising; the commercial opportunity was minuscule compared to what GitHub is today. I'm sure you can go back to Hacker News from 2007/2008 and find discussions that confirm what the parent said.


So wait, you tried to prevent SF from becoming a "shitty monoculture"?

First: That sounds completely unlike Google.

Second: Now you have GH as the "shitty monoculture" (owned by MS, which disregards your license for Copilot).

Third: >>We folded it up because we achieved the goal we sought at the time, and didn't see a reason to continue.

Yeah, OK, that sounds like Google: try to enter another market just to hurt them, then fold ;)


This was 2006 Google, which did stuff semi-altruistically all the time.

At that point, SF was serving malware and stuff. It was really not a great time.

Github became a monoculture years later when others folded. Google code was shut down in 2016. Github wasn't quite a monoculture then.

I also said, back in 2014, that it might be necessary to do something like google code again in 5-10 years: https://news.ycombinator.com/item?id=8605689

10 years later, here we are i guess :)

Though i think what i said then still holds - Github is not anywhere near as bad or unreliable as SF was.


SF served "malware" in 2013 NOT 2006:

https://en.wikipedia.org/wiki/SourceForge#Adware_controversy

After Slashdot was purchased from Condé Nast (I think?)


They had a bad name for the download pages being ad-infested even before they bundled the malware in the installers.

(And yes, fake download buttons on a site serving binary downloads went exactly where you'd expect.)


>fake download buttons

Yes, and today AdSense (Google) has taken the crown as the biggest deployer of scam ads.

And really, I don't think that's true before they were sold. Ads, sure, but scam/malware stuff? I don't think so... at least I can't remember any.


I mean, that's just when they did it fairly deliberately. Regardless, I think you would be hard pressed to argue SF was a great hosting environment when Google Code launched, which was the point.


>hard pressed to argue SF was a great hosting environment when Google Code launched

But SF had FTP, websites, SVN hosting, and I think even a wiki, so you can hardly compare it with Google Code... and hey, at least they open-sourced their "forge":

https://allura.apache.org/

IDK, I don't have such bad memories of SF; even today you serve big files over SF because of GH limits.


SourceForge was originally open source, but they later closed it. GNU Savannah (https://savannah.gnu.org/) runs on a fork of the last open version of SourceForge.


>SourceForge was originally open source

True, after Slashdot got bought; they also served malware AFTER the takeover (2013). And now look at that year:

Allura graduated from incubation with the Apache Software Foundation in March *2013*

https://en.wikipedia.org/wiki/Apache_Allura

And Google Code was in 2006, right?


I do wish there were enough incentive to have a strong commercial gerrit offering. There are some very good ideas in gerrit, and it would have strong differentiation vs github-styled offerings.

Not just because I like gerrit, but because the github monoculture is wearing on me.


For a comparison of the scale of harm from the monoculture, recall that SourceForge was bundling malware with downloads, and it still has a full page of ads when you download from it.

If I recall correctly, SVN was also more popular than Git at the time, so migrating hosts was a lot more painful than now...
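
As a side note, the standard escape hatch for that migration was git-svn; a minimal sketch, assuming the usual trunk/branches/tags layout (URLs and names here are placeholders):

    # one-time conversion of an SVN repository, preserving history
    git svn clone --stdlayout https://svn.example.org/project myproject
    cd myproject

    # then push the result to any git host
    git remote add origin git@github.com:user/myproject.git
    git push -u origin master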


SVN's model is what everyone is using. Sure, you use git, but almost nobody uses the distributed parts - they all sync to a central server (GitHub). SVN just couldn't get user management or merges right - those should be solvable problems, but somehow they were not. (I don't know enough about SVN to speculate on why.)


The main reason is that SVN's model of branching/tagging is based on directory copies within the repository tree, whereas git's model (and to a certain extent, that of commercial VCSs like ClearCase and Perforce) is that branching/tagging covers the entire repository and is not tied to the file tree structure.

This is a fundamental difference and the reason that git's model works much better when branching/merging.
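
A minimal sketch of the difference (repository layout and names are placeholders):

    # SVN: a "branch" is a directory copy inside the repository tree
    # (run from inside a checked-out working copy)
    svn copy ^/trunk ^/branches/feature -m "create feature branch"
    svn merge ^/branches/feature        # merge is tracked per-directory

    # git: a branch is just a ref to a commit of the whole repository
    git branch feature
    git switch feature
    git merge main                      # merges whole-repo snapshots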


And then you forced people to change from Google Code to something else just to prove a point, since people back then were unable to set up an SVN server ;)

Even today you find dead links from Google Code repos.


It's a bit of a cop out to say "we were never trying to win"

If you were never trying to win, that's a product failure

You should have been trying to win, you should have built a strong competitor to GitHub and you shouldn't have let it rot until it was shut down

The world would have been a better place if Google code tried to be as good as GitHub


> It's a bit of a cop out to say "we were never trying to win"

It's literally not? We had a goal from the beginning - create enough competition to either force SF to become better (at that time it was infinite ads and malware), or have someone else win.

> You should have been trying to win, you should have built a strong competitor to GitHub and you shouldn't have let it rot until it was shut down

That's your goal, not mine (or at the time, Google's). Feel free to do it!

You don't like what we had as a goal - that's okay. It doesn't mean we either failed, or had the wrong goal. We just had one you don't happen to like.

> The world would have been a better place if Google code tried to be as good as GitHub

One of the things to ask before you either start something or keep doing something is "who actually wants you to win?" If the answer is "nobody", it might not make any sense to do.

It's not obvious in 2016 anyone would have wanted us to win. By then, Google had lived long enough to see itself become a villain. There was reasonable competition in the space.

I don't believe we would have really served people well to keep going.


It sucks you're getting so much anti-Google sentiment when you're not at all attached to the reasons google sort of sucks.


It was similar with Chrome. Internet Explorer was the monoculture browser and stagnating. Google had things they wanted to do on the web but needed better browsers. The original goal was to introduce competition in the browser space so that all browsers would get better. They may have changed goals along the way, but that was the original stated goal. In the end they killed IE and now they are the monoculture outside of Safari.


When Chrome was released, Internet Explorer was not the monoculture browser, and 1/3 of the users had Firefox installed.


Different groups of people have different goals. Not every group of people has "winning" a market as their primary goal.


> If you were never trying to win, that's a product failure.

what?


Well, SourceForge literally bundled malware for a while. So everyone had to move.

https://news.ycombinator.com/item?id=31110206

This article's about the open source distribution side, but I will also point out that the number of developers who don’t realise your remote GitHub repo can be located on any machine with an ssh connection and nothing more is surprising. As in, people use private GitHub repos thinking that's THE way you work with git. If GitHub were just for open source hosting, I suspect they'd have had trouble monetising, like SourceForge clearly did, which led to scammy attempts to make money. But they always had this huge usage of private GitHub repos supporting the rest. This must have helped a lot imho.
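
For anyone who hasn't seen it, a minimal sketch of that ssh-only setup (user@server and the paths are placeholders):

    # create a bare repository on any machine you can ssh into
    ssh user@server 'git init --bare ~/project.git'

    # point your local clone at it and push
    git remote add origin user@server:project.git
    git push -u origin main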


This doesn't match my recollection, at least of the time. I remember meeting with one of the SourceForge founders and being a little starstruck. SourceForge was a huge deal at the time, and we totally felt like we were the underdogs in that arena. Perhaps later they got more desperate, but in 2008, SourceForge was the 900 lb gorilla.


2013 is when the binaries had malware included, although even in 2008 they were guilty of having 5 download buttons, thanks to excessive and unpoliced inline advertising, with only one of those buttons being the holy grail that linked to the download you actually wanted. Choose wisely.


I completely forgot about the absolute gamble that was "which big green button is the actual download button this time" when using SourceForge and Tucows back in the day.


To help our recollections, let's look at Sourceforge's browse page, from back in 2008: https://web.archive.org/web/20081118033645/http://sourceforg...

They did indeed host quite a lot of stuff, and it was undeniably popular as a place to get your binaries hosted free of charge.

But at the same time, was it being used as a source code repository? A lot of those projects don't show the CVS/SVN features. And SourceForge never hosted the biggest and most established projects; Linux and GNU and PHP and Java and Qt and Perl and Python were all doing their own thing. And pretty much every project visible on that page had its own separate website; very few projects were hosted on SourceForge exclusively.


No, you’d upload source tarballs. Live public access to VCS wasn’t a thing for most projects.


SourceForge was the upstream source of truth for a huge percentage of small apps bundled by various distros (and BSD ports etc). Even when the upstream maintainers just uploaded the latest tarball to SF and didn't use their hosted VCS, just the hosting was a major boon to all of the tiny teams and individual maintainers of FOSS projects.


Who is "we"?


Sorry, "we" is GitHub. I'm the author of the article and one of the GH cofounders.


Well damn. So much for the “Github had better taste” thesis.

Still, SourceForge was a terrible user experience. GitHub was a breath of fresh air.


Oh! Heh, that makes sense now.


> If GitHub was just for open source hosting I suspect they’d have trouble monetising like sourceforge clearly did

It made it harder to monetize, but it enabled SourceForge to use a huge amount of voluntarily-given bandwidth, which saved them a fortune at a time when bandwidth was crazy expensive.

Bandwidth costs were one of the reasons something like GitHub didn't appear earlier, and then a lot of them suddenly popped up out of nowhere.


> your remote GitHub repo can be located on any machine

It's such an easy mistake that you made it while explaining you don't need GitHub for git repos. :)


> your remote GitHub repo can be located on any machine with an ssh connection

Technically true, but GitHub provides so many more tools that it's almost silly to do so. Aside from the "hub" in GitHub (it is often the first and only place some people will look for projects they're interested in), you also get the nice web interface for quick code browsing, issue queues, the ability to add and remove people with a simple GUI rather than SSH key management, wikis, email notifications, and so on and so on.

Some of this can be mitigated by using a self-hosted web-based Git tool like GitLab, Gitea, Phorge, etc. But you still lose the "everyone uses it because everyone uses it" factor of GitHub on top of whatever GitHub features the clones may lack.


> Sourceforge literally bundled malware for a while. So everyone had to move.

This was after SourceForge hugely declined in popularity.

The correct sequence of events is:

1. SourceForge massively declined in popularity,

2. and then in a desperate attempt to extract cash they started bundling malware.

Not the other way around.

All of this had little to no effect on the migration away from SourceForge, which was already well underway in 2013 when the first controversy started. It may have expedited things somewhat, but I'm not even sure about that. See for example [1] from 2011, which shows GitHub already beating SourceForge by quite a margin. I found that article because it's used as a citation for "In response to the DevShare adware, many users and projects migrated to GitHub" on Wikipedia, which is simply flat-out wrong - that DevShare incident didn't happen until 2013 (I have removed that from the Wikipedia page now).

It baffles me how people keep getting the sequence of events wrong on HN.

The reason is simply that SourceForge is not very good and never was. Part of that is the ad-driven business model; part of it is that many features were just not done very well. Who actually used the SourceForge issue tracker or VCS browser? Almost no one, because it's crap.

[1]: https://redmonk.com/sogrady/2011/06/02/blackduck-webinar/


I suspect that Microsoft has just accepted losses year after year after year. That’s what they do. They are very willing to invest heavily in projects that they think will work out in the long run; see also their OpenAI investments.


I distinctly remember that what annoyed me about SourceForge was that it hid the source code behind multiple clicks. GitHub was a breath of fresh air because it made the source code front and center.


The malware bundling was long after SourceForge had precipitously declined in popularity, not the original cause of it, no?


The celebrity of Linus definitely helped Git win, and GitHub likely benefited from that by the name alone. Many people today mistakenly equate Git and GitHub, and since GH did such a good job of being a friendly interface to Git, to many people it _is_ Git. They made an early bet on Git alone, at a time when many of its competitors were supporting several VCSs. That early traction set the ball rolling, and now everyone developing in public pretty much has to be on it.

Tangentially: it's a pretty sad state of affairs when the most popular OSS hosting service is not only proprietary, but owned by the company that was historically at the opposite end of the OSS movement. A cynic might say that they're at the "extend" phase of "embrace, extend, extinguish". Though "extinguish" might not be necessary if it can be replaced by "profit" instead.


I do go into Linux and Linus in the article in some depth, but even Linus credits the Ruby community to a degree with the explosion in popularity of Git, which is fairly clearly due in large part to GitHub. But, it's certainly a chicken/egg question.

I would also argue that MS is nothing like the company that it was 30 years ago when that philosophy was a thing. The truth today is that via GitHub, Microsoft hosts the vast majority of the world's open source software, entirely for free.


>I would also argue that MS is nothing like the company that it was 30 years ago when that philosophy was a thing.

This is like saying that a cannibal has stopped eating people because there have been no disappearances in the last two days. Sure, technically correct, I'd still not eat their curry.


It is not anything even remotely like that. Fuck, I love programmers.


Yep, they became worse.

30 years ago your PC was at least your PC; now they shove all kinds of cloud and AI services down users' throats and put ads where they don't belong.


You are making the mistake of thinking that the Microsoft that owns Windows and the Microsoft that owns GitHub are the same org[1].

1 / https://i.insider.com/4e0b340dcadcbbdd35120000?width=700&for...


The head is the same.


MS has realized that producing the right kind of important open-source software gives them even more strength than producing closed-source software. Hence TypeScript, VS Code, a few widespread language servers, etc.


MS has long known developers were critical to their success. For a while they were worried that projects like Linux would take away their market, but it is now clearer to everyone where Linux is going, so they don't have to worry as much (so long as they are not stupid).


They were smart enough to offer MS SQL Server for Linux, and to support (rather than oppose) Mono and Xamarin early enough.


> The truth today is that via GitHub, Microsoft hosts the vast majority of the world's open source software, entirely for free.

To be fair, though, y'all did 90% of the work before the acquisition. MS only hosts the vast majority of the world's open source because they backed dump trucks full of cash up to the houses of the people who actually built that capability.

> I would also argue that MS is nothing like the company that it was 30 years ago when that philosophy was a thing.

I don't think I can ever truly trust their motives, though. I will agree that it's a different company in many ways, but their history is still that of a company that, through anti-competitive practices, set personal computing back decades. And worked tirelessly to keep open source at bay where they could.

At this point MS realizes it's more profitable to work along side open source than against it. If at any point they no longer believe that's the case, you better believe we'll see a reversion to their former behavior.


The truth is that Microsoft trains Copilot using the vast majority of everyone's code, entirely for free.

Corporations like Microsoft don't do charity.


GitHub won not just because of taste, but also because of providence and comms. The whole article is written from the perspective of someone looking out, not in - you can't play a football match and watch it at the same time.

For the rest of the world, GitHub came along when blogs and RSS feeds were also close to their zenith. IIRC GitHub used to employ a rather sweary chap who blogged a lot, and he appeared in the feeds of everyone I knew, promoting GitHub.

Whereas Bitbucket and Fog Creek's Kiln had little comparable publicity or comms.


It really was because of GitHub, and not Linux. If GitHub had had Mercurial support from the get-go, I would expect both to be heavily used today.


That's actually interesting. Was there any concern at any point in the early days about supporting other VCSs, or about being too focused on git?


There was concern, actually. We debated the concept of naming the company "GitHub" a bit, since "git" is baked into the company name. We worried a little about what happens when the next big VCS thing comes along, not knowing that git was going to be dominant for at least the next 20 years.


> Though "extinguish" might not be necessary if it can be replaced by "profit" instead.

Let me bite: why is this bad?


I didn't say it was. If anything, it's preferable to "extinguish". :)

Though a core philosophy behind the OSS movement is creating software for the benefit of humanity, instead of driven by financial reasons. Not that developers shouldn't profit from their work, but it's ironic that a large corporation who was historically strongly opposed to the movement is now a leader in it. It's understandable to question their motives if you remember the history, regardless of their image today.


I believe that the only sustainable model for software that benefits humanity has to have a profit motive. Even a delayed one, as in the case of universities and government grants.


That depends on your political beliefs. :)

But certainly the free software movement has provided incalculable benefits to humanity, and its authors were not chasing profits. The only reason this is unsustainable _in some cases_ is that we haven't established a good model to support this work yet. There are some attempts with varying success, but even in its current state, I would argue that more good is produced with this model than with one whose primary goal is profit.


GitHub won because Git won. It was obvious by the late 00s that some DVCS was going to upend subversion (and more niche VCS like TFS). It ended up a two horse race between Git and Mercurial. GitHub bet on Git. Bitbucket bet on Mercurial.

Git took the early lead and never looked back. And GitHub's competitors were too slow to embrace Git. So GitHub dominated developer mindshare.

It seems strange now but there was a period of time during the late 00s and early 10s when developers were pretty passionate about their choice of DVCS.


Not just that. They invented "pull requests" and offered (initially minimal) code review tools. This made contributing in the open much easier, and making small contributions vastly easier.

Something like git had to take over svn/cvs/rcs. It could have been Perforce; it could have been BitKeeper, which apparently pioneered the approach. But it had to be open source, or at least free. Git won not just because it was technically superior; it also won because it was free software.


Pull requests predate Git. The kernel developers used them in the Bitkeeper days:

    I exported this a patch and then imported onto a clone of Marcelo's
    tree, so it appears as a single cset where the changes that got un-done
    never happened.  I've done some sanity tests on it, and will test it
    some more tomorrow.  Take a look at it and let me know if I missed
    anything.  When Andy is happy with it I'll leave it to him to re-issue a
    pull request from Marcelo.
https://lore.kernel.org/linux-acpi/BF1FE1855350A0479097B3A0D...

I do not know to what extent Bitkeeper had browser-based workflows. Moving cross-repository merges away from the command line may actually have been innovative, but of course of little interest to kernel developers.


That's interesting. I know BK had "pulls", but iirc it didn't have a "request-pull" command, so clearly the "pull" terminology came from BK and the "request" part came from how people talked about it in email.

I actually just shot a video showing how BitKeeper was used. I'll post that and a blog post on our GitButler blog soon.


Mercurial also supported pull requests. The unique thing about github was an easy central place to do them from, and ensuring they didn't get lost. Once you have a github account you can fork a project, make a change, and open a pull request in a few minutes. Emailing a patch isn't hard, but with github you don't have to look up what address to email it to; if you just open a pull request, it typically goes to the right place the first time.


I remember we used a tool, I think it was Gerrit, before I'd heard of GitHub or Pull Requests. It worked with patches, which is also how we used to share code: through email. GitHub won because it had a cleaner UI and a likable name.


I found Gerrit recently.

I love it so much, I hate how the other code review systems kinda suck in comparison but people prefer them.

I guess it's proof that features and shiny are more important than a good idea.


Git also massively benefitted from GitHub. Do you know a single person who even knows you can use git without a "forge" like GitHub, let alone knows how to or actually does it?

It's hard to remember but there was a time when git was resisted. When I first started to use it, a lot of people were saying you don't need it, you only want to use it because it's hipster and the kernel uses it, but you're not the kernel etc. It's exactly the same as k8s is all these years later (the tide seems finally turning on k8s, though).

Without GitHub (or something else), git would have remained a weird kernel thing. But, equally, without git GitHub would have had no raison d'être. It's a symbiotic relationship. GitHub completed the picture, and together they won.


I taught my research group Git version control in college. It was part of a "new student/researcher onboarding" series that we put all the new grad students and undergrads through. But we were in Radio Astronomy, so there was a lot of data processing and modeling stuff that required being comfortable within a remote ssh session and the basics of Linux/bash/python. I know it was already being used in Radio Astronomy (at least in the sub-field of Pulsar Astronomy) at the time and was part of the reason I didn't get pushback when I proposed making sure our group was trained up on using it.

We switched to Git as a whole in early 2009 since it was already a better experience than SVN at the time. Could be off by a year or two, given how long ago this was and the fact that I was working with the group through the end of high school in 2007-2008.

We only added GitHub to our training later, in the 2011-2013 era, but we ran our own bare git repos on our department servers until then. And students/groups were responsible for setting up their own repos for their research projects (with assistance/guidance to ensure security on the server).

My last job also made use of our own internal bare repos, admittedly mirrors of our private GH projects, and our stack pulled from that mirror to ensure we always had an instance that was not dependent on an external vendor.

Current role also makes use of bare git repos for similar reasons.

I think the knowledge is there and plenty of people do it; it's just not news/blog-worthy anymore. It's not new or groundbreaking, so it gets little attention.
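For anyone who hasn't seen it, a minimal sketch of the no-forge setup (host and paths are made up):

  ssh user@server 'git init --bare /srv/git/project.git'       # create a bare repo anywhere you can ssh
  git remote add origin ssh://user@server/srv/git/project.git  # point your clone at it
  git push -u origin main                                      # done; no forge involved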


> Git took the early lead and never looked back. And GitHub's competitors were too slow to embrace Git. So GitHub dominated developer mindshare.

And Mercurial spent an enormous amount of effort going after Windows users and basically got absolutely nothing for it.

In my opinion, this was what really hurt Mercurial. Nobody in Windows-land was going to use anything other than the official Microsoft garbage. Consequently, every ounce of effort spent on Windows was effort completely wasted that could have been spent competing with Git/Github.


> GitHub won because Git won.

Sorry, but Git won because Github won. Lots of people loved (and still use) Mercurial. It lacked the network effect because Github didn't support it.

> GitHub bet on Git. Bitbucket bet on Mercurial.

Bitbucket didn't lose because of Mercurial. They lost because Github had a better product (in terms of sharing code, etc.). Bitbucket was also neglected by Atlassian after the 2010 acquisition.

> It seems strange now but there was a period of time during the late 00s and early 10s when developers were pretty passionate about their choice of DVCS.

Sorry buddy, but there are still plenty of us Mercurial users. Maybe, just maybe, even dozens!

(Seriously, I use Mercurial for all my projects).


Me, I picked Python and Mercurial for primary language and DVCS, respectively: one of those worked out really well. I still miss hg and have never really gotten the hang of git.

Regarding Mercurial, would you happen to have recommendations for a GitHub/Bitbucket-like service that still works with hg?


> Regarding Mercurial, would you happen to have recommendations for a GitHub/Bitbucket-like service that still works with hg?

Use "jujutsu" (jj).

It's the goodness of Mercurial but works in the crappy world that Git has bestowed upon us.

I made the switch from Mercurial because it's just getting too hard to fight the git monoculture. :(
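For anyone curious, a rough sketch of the day-to-day flow (the repo URL is a placeholder):

  jj git clone https://example.com/repo.git
  jj new                       # start a new change; the working copy is snapshotted automatically
  jj describe -m "my change"   # name it before or after editing, whenever you like
  jj git push --change @       # push it as an auto-generated branch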


This looks cool; I might wait for 1.0, but the idea of git underneath and something better on top is appealing.


If you just want an online repository, go with Sourcehut (https://sourcehut.org/)


As someone who used both git and hg, I must say I'm sorry git won. Its chrome sucks (though less than it did) and the naming is confusing as hell. Still, if everyone uses git, and you have to use BitBucket for hosting instead of GitHub/Lab... Nah, not worth it. Kudos to you for sticking with it!


As someone who tried out both git and hg around 2012 with only svn experience, I found hg confusing and git easy to understand.

Unfortunately it's been so long since then I don't remember exactly what it was that confused me. Something around how they handle branches.


I've only recently started to use mercurial in earnest, for one project (legacy reasons). It's branches for me too, at least considering that my experience with it is limited.

I don't like how, every time you pull someone else's changes, you end up by default in a state that is probably similar to git's detached HEAD. With git, most of the time you are on a named branch; you know where you are and you pull/push stuff out of said named branch. With mercurial some of the branches are unnamed, and it's still confusing why I'd want that. Perhaps the original designers didn't like having private local-only named branches, I don't know.

This may just be an artefact of my very limited experience with hg though.


When not sharing with others, bookmarks are the way to go - not branches. Mercurial bookmarks act more like git's branches. I think they've now made it so that you can share them too, but since no one else at work uses mercurial, I don't have experience with distributed bookmarks.
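Roughly, for anyone who hasn't tried them (names are made up):

  hg bookmark my-feature    # create and activate a bookmark, much like "git branch" plus checkout
  hg commit -m "work"       # the active bookmark advances with your commits
  hg update other-bookmark  # switch lines of work, much like "git checkout"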


> Still, if everyone uses git, and you have to use BitBucket for hosting instead of GitHub/Lab

Isn't this supporting my point? That a barrier to using mercurial was that people preferred Github over Bitbucket?

> Kudos to you for sticking with it!

It's simply a lot easier to use than Git! Kudos to whoever suffers through the latter!


Are Git and GitHub a DVCS, though?

Have you ever checked out code directly from a colleague's machine? GitHub is very central-looking from where I'm standing, and the differences between Git and SVN are very academic and do not really apply in practice any more.

GitHub allowing forks of repos to open PRs against one another is probably the only DVCS thing about all this. But this model does not apply to orgs hosting their proprietary code on GH, where the developers don't have their own forks of their employer's code repos. I'm pretty sure it would have been possible to replicate pull requests with SVN on GitHub in some alternative reality.


Git is still a DVCS, even if today it's not being used in the way it was designed to be used by Linus and co.

The key distinguishing characteristic is the fact that every git checkout contains the full repo history and metadata. This means a consistent network connection to the master server isn't necessary. In fact, it means that the concept of a "master server" itself isn't necessary. With Git, you only need to connect to other servers when you pull down changes or when you want to push them back up to the remote repository. You can happily commit, branch, revert, check out older revisions, etc. on just your local checkout without needing to care about what's going on with the remote server. Even if you treat your remote repo on GitHub as your "master", it's still a far cry from the way that centralized VCS works.

If you've never worked with true centralized VCS, it's easy to take this for granted. Working offline with a system like Perforce or SVN is technically possible but considerably more involved, and most people avoid doing it because it puts you far off of the beaten path of how those systems are typically used. It basically involves you having to run a local server for a while, and then later painfully merging/reconciling your changes with the master. It's far more tedious than doing the equivalent work in Git.

Now, it's important to note that Git's notion of "every checkout contains all the repo data" doesn't work well if the repo contents become too large. It's for that reason that things like sparse checkouts, git-lfs, and VFS for Git exist. These sorts of extensions do turn Git into something of a hybrid VCS system, in between a true centralized and a true decentralized system.
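A rough sketch of what those extensions look like in practice (the repo URL is a placeholder):

  git clone --filter=blob:none https://example.com/big-repo.git  # partial clone: file contents fetched on demand
  cd big-repo
  git sparse-checkout set src docs                               # only materialize these directories locally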

If you want to understand more, here's a great tech talk by Linus himself from 2007. It's of note because in 2007 DVCS was very new on the scene, and basically everyone at the time was using centralized VCS like SVN, CVS, Perforce, ClearCase, etc.

https://www.youtube.com/watch?v=MjIPv8a0hU8


I'd say that the key distinguishing characteristic is that your non-merge interactions with Git do not rely upon a connection to the master server.

I think it would be possible to have a DVCS without the full repo history and metadata. Doubt that it would be worth the effort though.


Having been on call 24/7 for production services, I found "git log" absolutely essential, almost more so than the latest code. We were usually expected to roll back rather than take additional risk fixing forward on outages, so the question was "roll back to what?"


Git is a DVCS though. Just because GitHub exists does not exclude Git from the category of a DVCS. You get a local copy of the entire history with Git, which is what puts it in that category; it has nothing to do with GitHub. SVN is centralized in the sense that you are not grabbing an entire copy of the repo locally. These are not academic differences.


It's been a hot minute since I've used SVN at work, but in my last job where it was SVN, each dev checked out the entire repository locally. Even though you /could/ check out a section of the repo, it made no sense to do that, because you need the entire codebase to run locally. Branching was still a mess though, and Git has really innovated in this space. We used to all dev on `develop` branch, and we'd daily pull from the server, fix merge conflicts locally, and then push up to the server. On releases our lead dev would merge dev with master and run off a build.

I still maintain the differences are academic. Even though Git is a DVCS (and I agree it is) and it is possible to use it as one, GitHub is the de facto standard and everyone uses it for work and OSS. So I posit we are actually using Git as a CVCS, and any argument about Git being better than SVN because it's a DVCS is moot, because nobody is using Git's distributed features anyway.


I think we are missing something here; I would like to be corrected if wrong.

Git is a DVCS because when you clone/pull a repo it includes the entire working history of the repo. That's why it's distributed: you could pull a repo from somewhere and never need to touch that source again. It has very little to do with Github.

With SVN, which I have not used in recent history, your local copy historically did not include the full change history of the repo and relied on an SVN server for that information.

I actually don't quite follow your arguments, because while, yes, we tend to set up Git so that it is "centralized", the distinction is not about Github but about the fact that your local working copy is everything.


I think it was a misunderstanding based on different views of what "the whole repo" means -- all the files or all the history.

It quite nicely demonstrated the difference in philosophies, albeit accidentally. :)


So much has gotten better thanks to distributed VCS that I think this perspective is a bit like a fish in water.

Every commit is identified by a globally unique content-addressable hash instead of a locally unique or centrally managed revision number. This means two people on opposite sides of the globe can work on the same project with no risk that they will think the same revision identifies different code, nor that they must coordinate with a distant server constantly to ensure consistency.

Moreover, core VCS operations like committing and branching require no server interaction at all. Server interaction is a choice that happens when synchronization is desired, not a mandatory part of every VCS command. "Commit early and commit often" could never happen with CVS or SVN on a large or geographically distributed team. And, of course, you can continue working on a cloned Git repo even if the server goes down.
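To make that concrete, a quick sketch; every one of these runs fine with the network unplugged:

  git commit -am "wip"    # record a snapshot, no server involved
  git branch experiment   # branching is a local pointer update
  git log --oneline       # the full history is already on disk
  git rev-parse HEAD      # the content-addressed hash everyone will agree on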

Finally, forking a repository is still common even in the age of GitHub dominance. In fact, GitHub natively understands Git's internal graph structure, and makes forking and pulling across forks pretty painless. Yes, those forks may all be hosted on GitHub, but there can be far more dynamic collaboration between forks than was ever possible on say SourceForge.

So sure, everybody working on the same code may have the same GitHub repository as their origin more often than not, but we are still miles ahead of the world of non-DVCS.

It's probably worth noting too that even the canonical example of Git in practice, Linux, is essentially "centralized" in the same way. Linus Torvalds's clone is "the" Linux kernel, any clone that differs from his is either not up-to-date or intentionally divergent and thus unofficial. A lot of work gets merged first in other people's clones (with some hierarchy) but Linux also has tens of thousands of contributors compared to the average Git repository's handful or less.


You only checked out the latest version of each file in the entire repository. You did not check out the entire repository, as you do in Git.


Git is a DVCS, and GitHub uses Git, so it's a DVCS because I can clone locally. GitHub is a central location, that's true, but I can still have my local clones, and I can host my own forks locally, anywhere I want, at a GitHub competitor, under multiple GitHub orgs / users, whatever. So, yes, it's a DVCS.


Yes, the article seems to miss this. I believed (at the time, and still do) that git won because the cost to host the server side of it is orders of magnitude lower than for the competitors (svn, perforce, etc.). All those other revision control systems ended up with a server cost too big to justify a free hosting service. Plus, git provided a reasonable (but still not great) solution to "decentralized development", which none of the others attempted.


I'm curious how you came to this conclusion. GitHub has always had fairly insane hosting problem sets. When someone clones the Linux repo, that's like 5 GB in one go. The full-clone issue and the problems of a few edge-case repos create sometimes crazy hosting costs and scaling problems. Most centralized systems only have to deal with one working tree or one delta at a time. There is not much that goes over the wire in centralized systems in general, comparatively.


Multiple other distributed version control systems in the 2000s had support for easy hosting. Darcs was actually the best in this era, IMO, because it was far simpler than both Hg and Git -- a Darcs repository was just a directory, and it supported HTTP as the primary pull/patch sharing mechanic. So, you could just put any repository in any public directory on a web server and pull over HTTP. Done. This was working back in like 2006 as the primary method of use.

In any case, the premise is still wrong because, as mentioned elsewhere, the distribution of repository sizes and their compute requirements is not smooth or homogeneous. The cost of hosting one popular mirror of the Linux kernel, or a project like Rails, for 1 year is equivalent to hosting 10,000 small projects for 100 years, in either SVN or Git. The whole comparison is flawed unless this dynamic is taken into account. GitHub in 2024 still has to carve out special restrictions and exemptions for certain repositories because of this (the Chromium mirror, for example, gets extended size limits other repos can't have.)

Git also lacked a lot of techniques to improve clones or repo sizes of big repos until fairly late in its life (shallow + partial clones) because 99% of the time their answer was "make more repositories", and the data model still just falls over fast once you start throwing nearly any raw binary data in a repository at any reasonable clip (not GiB, low hundreds of MiB, and it doesn't become totally unusable but degrades pretty badly). This is why "Git is really fast" is a bit of a loaded statement. It's very fast, at some specific things. It's rather slow and inefficient at several others.
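For reference, the late-arriving mitigations mentioned above look roughly like this (the URL is a placeholder):

  git clone --depth=1 https://example.com/huge-repo.git        # shallow: history truncated to the tip
  git clone --filter=tree:0 https://example.com/huge-repo.git  # partial: trees and blobs fetched lazily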


Why didn't mercurial win then? There were almost a dozen other distributed version control systems built in those early days, most of which I cannot remember, but all had the same distributed ideas behind them and should have been as easy to host (some easier).


At my university, performance. The CS department was clued into Linux development but also the Haskell world, so darcs use among students was high. Our underpowered lab machines and personal devices struggled with darcs for reasons I no longer remember, and a group of us made use of mercurial for an OS project and had a rough go of it as the patch sets got more and more convoluted. Back in those days mercurial's core was C but a lot of the logic was Python, which struggled on the memory-constrained devices available. One of us learned about git while trying to get into Linux kernel work, told the rest of us, and it was just comically fast, as I remember. I spent a tedious weekend converting all my projects to git and never looked back, myself.

Some years later Facebook did a lot of work to improve the speed of mercurial but the ship had sailed. Interesting idea though.


Thank you for sharing this, Scott! He mentions "Taste" throughout the post and this intangible quality makes all the difference in an early-stage winner-take-all market dominance race.

In 2007 I was teaching myself programming and had just started using my first version control tools with Mercurial/Hg after reading Joel Spolsky's blog post/love letter to Mercurial. A year or two later I'd go to user group meetups and hear many echo my praise for Hg while lamenting that all the cool projects were on GitHub (and not Bitbucket). One by one nearly everyone migrated their projects over to git, almost entirely because of the activity at GitHub. I even taught myself git using Scott's website and book at that point!

"Product-market fit" is the MBA name for this now. As Scott elegantly states this is mostly knowing what problem you solve, for whom, and great timing, but it was the "flavor" of the site and community (combined with the clout of linux/android using git) that probably won the hearts and minds and really made it fit with this new market.

Edit: It didn't hurt that this was all happening at the convergence of the transition to cloud computing (particularly Heroku/AWS), "Web 2.0"/public APIs, and a millennial generational wave in college/first jobs-- but that kinda gets covered in the "Timing, plus SourceForge sucked" points


I learned git first because it was already very popular when I decided to learn it. But when I later learned hg for fun, I realized how much of a better user experience it is:

* After using hg which doesn't have the concept of an index, I realize I don't miss it and the user experience is better without it. Seriously, even thinking about it is unnecessary mental overhead.

* As someone who modifies history a whole lot, `hg evolve` has superior usability over anything in git. The mere fact that it understands that one commit is the result of amending another commit is powerful. Git doesn't remember it, and I've used way too much `git rebase --onto` (which is a poorer substitute) to be satisfied with this kind of workflow.

* Some people, including the author, say cheap branching is a great feature of git. But what's even better is to eliminate the need to create branches at all. I don't need to use bookmarks in hg and I like it that way.

I sometimes imagine an alternate universe where the founders of GitHub decided instead to found HgHub. I think overall there might be a productivity increase for everyone because hg commands are still more user friendly and people would be stuck less often.
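To illustrate the `git rebase --onto` point above, the manual dance that `hg evolve` automates looks roughly like this (ref names are placeholders):

  # you amended commit A into A2, and the feature branch still sits on the old A
  git rebase --onto A2 A feature   # replay everything after A onto the amended commit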


>imagine an alternate universe where the founders of GitHub decided instead to found HgHub.

I'm reading this as "HugHub" and audibly laughing!


Git really does need a commit header to track the original commit.


> After using hg which doesn't have the concept of an index, I realize I don't miss it and the user experience is better without it. Seriously, even thinking about it is unnecessary mental overhead

... for some people. I know a substantial audience who believe $(git add -A) is The Way, and good for them, but if one has ever had the need to cherry-pick a commit, or even roll back a change, having surgical commits is The True Way. JetBrains tools now even offer a fantastic checkbox-in-the-sidebar way of staging individual hunks that only git-gui used to offer me

I just had a look at $(hg add --help) and $(hg commit --help) from 6.8.1 and neither seems to even suggest that one may not want to commit the whole file. I'm glad that makes Mercurialistas happy


I love surgical commits too. Having surgical commits is extremely easy in hg. You simply specify what you want committed using `hg commit --include` or `hg commit --interactive`. You probably didn't look very closely at the help page.

And if you accidentally committed something that should have been broken up, the user experience of `hg split` exceeds anything you can do with git commands. Quick tell me: if your git commit at HEAD contained a three-line change but each line should be its own commit, what do you recommend the user do? I guarantee `hg split` is so much easier than whatever you come up with.


> You probably didn't look very closely at the help page.

  $ hg help commit
  hg commit [OPTION]... [FILE]...
  [snip]

  commit the specified files or all outstanding changes
  [snip]
   -i --interactive         use interactive mode
Yup, it sure does explain that using interactive mode would select individual hunks, sorry for my "if you know you know" comprehension failure, especially in light of the absolutely orthogonal context of that word used further down the help page you allege I did not read:

   -y --noninteractive    do not prompt, automatically pick the first choice for
                          all prompts
> the user experience of `hg split` exceeds anything you can do with git commands

And yet:

  $ hg split --help
  hg: unknown command 'split'
  'split' is provided by the following extension:

      split         command to split a changeset into smaller ones
                    (EXPERIMENTAL)
without saying what makes it experimental - is that "lose work" kind of experimental?


I still miss hg. I migrated to github years ago because github offers a much better workflow, but I miss hg, which can answer questions that git cannot.


There are no real winners in business.

Just people/products that are temporarily on top.

SourceForge was probably "the winner" for some time.

The same will be for GitHub.

Someone just needs to build an actual superior product and provide a service that GitHub will not provide. Then build a sufficient audience.

One such service is an end-to-end encrypted Git repo service.

Some anarchists I know don't want everyone to know what they are working on.

The same goes for algorithmic trading. I need strong guarantees that my code will not be used to train an LLM that will leak my edge.

I am shocked a superior Git service to GitHub has not been built.

I really liked source hut. But the custodian is a bit arrogant (crypto projects, for instance, are banned)


> One such service is an end-to-end encrypted Git repo service. Some anarchists I know don't want everyone to know what they are working on.

I doubt there is a big enough market of anarchists for Github to even bother worrying.

> One such service is an end-to-end encrypted Git repo service.

There are so few people who need this that they can just use client-side tools and store all data that reaches remote servers encrypted


>I doubt there is a big enough market of anarchists for Github to even bother worrying.

A lot of people writing proprietary code bases would definitely use it.

I don't think a founder wants the startup's codebase to leak via an LLM?


A ton of proprietary code lives on GitHub, on closed paid repos. A lot of people reasonably think that GitHub's security chops are better than theirs.

But if you care, there is a whole gamut of on-prem solutions, from running bare cgit to fluff like Gitea and GitLab.

Lock up your central repo machine all you want, the code is still checked out to developers' laptops. For more security, don't allow that, and let your devs connect to a server with all necessary tools and access to the code, but without general internet access, for instance.


I don't think founders care if parts or the entirety of the codebase leaks; it's not that valuable.


It’s already feasible with Keybase (although I wouldn’t trust them any more, because of the Zoom debacle).


I wish something like Forgejo/Gitea had federated identities so that I could fork a project on the server you're hosting and submit a PR as easily as I can do that if you're hosting it on GitHub today. Everything you're asking for is available today in self-hosted services. I mean, consider that you don't even need a Git server. You can swap code with your pals via SSH/email/whatever right now, today, without the rest of the world even knowing about it.
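The email route is only a couple of commands, roughly (the file name is made up):

  git format-patch origin/main --stdout > my-change.patch  # sender: bundle commits as a mail-ready patch
  git am my-change.patch                                   # receiver: apply it with authorship intact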


>Everything you're asking for is available today in self-hosted services

There is a reason why people use hosted Git services: it's not practical for everyone to "self host".

We can run a self-hosted Signal app for privacy. But it's neither convenient nor practical for everyone.


That's true, but if you have unusual requirements that make GitHub impractical, there are other options. Devs can update their origin to point at a shared SSH server and coordinate merges through email or Signal or anything else. I think that's a lot more practical than hoping GitHub adds something like end-to-end encryption, or worrying that they might train their LLMs against private code.


For an end-to-end encrypted git repo:

git remote add origin ssh://user@host/srv/git/example

Where the host is simply an ssh server you have access to. Encrypt the server's drive itself however you see fit. This is how git is traditionally used, btw. GitHub is a third party to the git ecosystem, and really there's little reason to use it for private repos. Just use ssh for the remote connection.


Generally people mean "E2E Encrypted" as "the hosting service cannot see it". Git-over-SSH does not achieve this, it just encrypts in transit.
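If you want the host to only ever see ciphertext, one blunt but workable sketch is to encrypt a bundle client-side before it leaves your machine (tools like git-remote-gcrypt automate the same idea):

  git bundle create repo.bundle --all      # pack all refs and history into a single file
  gpg -c repo.bundle                       # symmetric encryption; produces repo.bundle.gpg
  scp repo.bundle.gpg user@host:backups/   # the host only ever stores ciphertext
  # restore: gpg -d repo.bundle.gpg > repo.bundle && git clone repo.bundle repo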


> really there’s little reason to use it for private repos

Admin costs? I paid $7/month to github for years for private repos (atm private repos are free, so I switched to not paying when the card I was using acted up and I couldn't be bothered to fix it). I'm sure the time I would have spent admining an ssh-based server would have cost more, even at 1 hour/month.


If you don't want your edge to leak, why is your code on GitHub?

Who trusts GitHub's private repos?

Simply store encrypted files somewhere like Dropbox or another cloud storage solution. (Encrypt before you upload.)


Plenty of large companies do. The risk is much higher that an individual's computer gets compromised, which often exposes a lot worse than just source code.


It won't be another git service that replaces github. It will be something completely out of left field that replaces git and that method of code collaboration. There are only incremental improvements to be made to git. It will take a brand new hotness with a brand new way of doing things that shakes things up.


> Someone just needs to build an actual superior product and provide a service that [...] will not provide. Then build a sufficient audience.

I wish this were true for social media and instant messaging platforms, operating systems...


It's extremely difficult to unseat the leader with a superior product alone. Once sufficient traction is established, people will flock to where everyone else is, further cementing their position. It also requires monumental fumbles by the leader to actively push people away from the platform. Unfortunately for those who don't like GitHub, it's run by a company with limitless resources to pour into it, or to flat-out buy out its competition. Microsoft has a lot of experience with this.

> I really liked source hut.

Sourcehut never has and likely never will be a serious competitor. Its UX and goals are entirely different, and it's directed towards a very niche audience unlike GH.


I used both GitHub and BitBucket during the early days. There was no comparison. GitHub was simply nice to use. The UX was phenomenal for its time and made sense. BitBucket was horrible but my then employer wouldn’t pay for hosting and GitHub didn’t provide free private hosting.

One of my biggest gripes was that switching back and forth between code view and editor mode would wipe whatever you had written. So you'd better have them in separate tabs. Also be sure not to press the backspace key outside a text window.


Idk, I loved BitBucket and I loved Mercurial. It was much easier to use and had native JIRA integration. I always thought (and still do) that github looks too cute and not very serious.


As a "younger" programmer it always shocks me how things like git were only created in 2005. It feels so ubiquitous and the way it functions has the "feeling" of something created in the 80s or 90s to me.


Subversion (svn) was absolutely fine before git. Before that, there was CVS but that really was painful.

Svn gets a lot of hate for things it doesn't deserve, even this article talks about "checking out" and the difficulty of branching, but that doesn't track with subversion.

Branching in subversion was just as easy as in git; it had shallow branches. You could branch largely without overhead, although unlike git it was a server-side operation. (Imagine it like git branch with auto-push to the remote.)
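For reference, a branch was one cheap server-side copy (URLs are placeholders):

  svn copy https://svn.example.com/repo/trunk \
           https://svn.example.com/repo/branches/my-feature \
           -m "Create feature branch"
  svn switch https://svn.example.com/repo/branches/my-feature  # repoint the working copy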

Most software also automatically checked out files as you modified them, and it was a local operation; there wasn't any locking or contention on that. It was the older CVS/SourceSafe-style version systems that did that.

I still maintain that most workplaces with less than, say, 10 devs would be better off with subversion rather than git, if not for the fact that most of the world now works on git.

Subversion solves problems with less mental overhead than git, but it's not worth doing anything non-standard, because everyone now knows git and has learned to put up with the worse developer user experience, to the point where people will argue that git doesn't have bad UX, because they've internalised the pain.

Before subversion there were CVS and Visual SourceSafe. These are much older. They solved a source control problem, but were based on the concept of locking and modifying files.

You'd "checkout" a file, which would lock the file against modification by all other users. It was a bit like using a global locking file repository, but with a change history.

It was as painful as you might imagine. You'd need to know how to fix the issue where someone would go on holiday having checked out a critical file: https://support.microsoft.com/en-us/topic/5d5fa596-eb9c-d2b5...

Or more routinely, you'd get someone angrily asking who had such-and-such file checked out.


Branching in Subversion was fine, but merging was quite painful (at least at the time I was using it, around 2008ish). From my recollection, SVN didn't try to figure out the base commit for a merge - you had to do that manually. I remember having a document keeping track of when I branched so that I could merge in commits later.
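That bookkeeping looked roughly like this (revision numbers are made up):

  # pre-1.5 svn: you supplied the merge range yourself, from your own notes
  svn merge -r 1200:1450 https://svn.example.com/repo/trunk
  svn commit -m "Merged trunk r1200:1450 into branch"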

And even if I was using it wrong, or SVN improved merging later, the fact was that common practice at the time was to just commit everything to the main branch, which is (IMO) a worse workflow than the feature-branch workflow common in git.

But you're right, SVN was largely fine and it was better than what preceded it and better than many of its peers.

Edit: Forgot to mention - one of the biggest benefits to git, at least early on, was the ability to use it locally with no server. Prior to git all my personal projects did not use version control because setting up VC was painful. Once git came around it was trivial to use version control for everything.


Subversion was great up until your working directory somehow got corrupted. Then you'd be in some kind of personal hell cleaning it up.

And honestly, it was always a pain in the ass setting up "the server". Unlike with git, you needed a server/service running 24/7 into which to check your code. Which was always a pain in the ass at home... you needed to keep some stupid subversion service running somewhere. And you'd have to go back into that service and remember how to create new projects every time you got a wild hair up your ass and wanted to create a new thing.

With Git you just do "git init" and boom, you have a whole complete version control system all to yourself with no external dependency whatsoever.

That being said, TortoiseSVN was the best GUI for version control ever.


Subversion didn't get working merge support until years after git. Like CVS, it basically required trunk-based development. Feature branches were not supported. You needed a separate checkout for any work in progress. You could not checkpoint your work with a commit before updating to the latest head. Every update was a forced rebase. It sucked.


CVS was absolutely not oriented around locking files.

It was about merge and conflict resolution like SVN or Git.

VSS was oriented around locking. And it also broke all the time. Oh, and it also lost data... And it was also the expensive one used by everybody that kept saying "you get what you pay for".


One thing that Git and the other DVCS's massively improved over Subversion is that commits are local, and you only need to talk to a remote endpoint when you push/pull. In Subversion, every commit would require uploading changes to the repository server, which encouraged larger commits (to amortize the overhead).


Yeah, this was huge at the time. Laptops were good but Wifi wasn't as ubiquitous. If you wanted to work on some code while you were traveling or at a cafe or something, you'd at best be working without version control, making ".backup" files and directories and stuff. With DVCSes you could branch, commit, etc. as much as you wanted, then sync up when you got back somewhere with good internet.


This was not my recollection. The big thing about git over subversion at the time (at least before everyone started putting their repos up on github with pull requests and all) was that it was truly distributed, i.e. everyone maintained their own copy of a repo and no repo was the "master" or privileged source of truth. And merging changes was/is relatively seamless, with fine-grained control over what you want to merge into your copy, which svn simply didn't provide.

Svn, on the other hand, is a server/client architecture. Although you could have multiple servers, it was kind of pointless, as keeping them in sync was more trouble than it was worth. For most workflows there was the server or master repo, and your local copy was not under source control. And if that master/server went offline for any reason, you were not able to "check in" code. I remember this being such a pain point, because if the master was offline for a significant amount of time you essentially had no way to track the changes you made, and it would all just be one big commit at the end (which was also a major pain for the administrator/repo maintainer, who would have to merge in a whole bunch of big breaking changes all at once).

Maybe git vs mercurial was a close fight with no immediately obvious winner, but subversion's days were pretty much numbered once git showed up.


I never had the time (thankfully) to get good at Subversion. But now that I’ve “internalised [the pain]” of a DVCS I could never go back to a centralized VCS. Interacting with a server just to check the log? Either promiscuously share everything I do with the server or layer some kind of second-order VCS on top of Subversion just to get the privilege of local and private-to-me development? Perish the thought.


SVN is fine as long as you don't have multiple people editing the same file at the same time. In that case, generally one person gets their work overwritten. Committing on SVN is basically the equivalent of "git push --force".


Git clients became popular interfaces to SVN. This is how the organizations I was at moved from svn to git -- we/devs started preferring git locally and eventually migrated the backends to match.
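git-svn was the usual bridge; a rough sketch of that workflow (the URL is a placeholder):

  git svn clone -s https://svn.example.com/repo  # -s assumes the standard trunk/branches/tags layout
  # ...work with ordinary local git commits, then:
  git svn rebase   # fetch new svn revisions and replay your work on top
  git svn dcommit  # push each local commit back as an svn revision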


Git is great when you use it from the CLI. There is no single good git GUI app. Ideally, a GUI git app would give you an interface for rebase and visualize the history, branches, and trees. You'd drag commits around and then it would construct a rebase operation for you. In case of conflicts, you could either abort/rollback the operation or open the conflicting files in an editor and fix the conflicts. That's it.

GitHub's desktop app in particular is a mess, because it does not try to be a real UI; it's simply a frontend for the CLI, basically having buttons for the CLI operations, and it does everything worse than the CLI.

In hindsight, it's hard for me to take seriously a programmer who can't spend some time learning git; it isn't hard.


As an "older" programmer, I feel the opposite. Git became mainstream very recently, though admittedly it's been a good ten years or more. I sometimes think younger programmers' attitudes toward git are borderline cultish—git or GitHub is not required to do programming—it's just another tool. I half expected something would have replaced it by now.


Git's been around for almost 20 years now. I would say fairly dominant for 15 or so.


Git is overrated for a DVCS. But it’s not overrated considering the old-school competition like SVN.

The assumptions of SVN make it feel like a dinosaur now.


Already in 2010-2012, the majority of projects I encountered were using git. The last time I saw an SVN-based project was in 2015, before I migrated it to git.


I have to agree on the cult aspect. This is unfortunate because better tools exist already today, but lots of people refuse to even entertain that possibility.


I feel about git how I presume vim users feel. Maybe there are better ways, but I've become so accustomed to how it works that I doubt I could easily switch to anything else.


All the more reason to look beyond your comfort zone now and then!


The idea of Github having a unique "taste" advantage resonates with me a lot. I don't like the fact that Github is using my code to feed Microsoft's AI ambitions, but I dislike Bitbucket and Gitlab more simply on the grounds that they "don't look fun".

It's tricky, because any serious Github competitor would implicitly have to compete by attracting the deep pockets of enterprise clients, who care little for "fun". Getting revenue from solo devs / small teams is an uphill battle, especially if you feel obliged to make your platform open source.

Still, I wish someone would make a Github competitor that's fun and social.


This. GitHub is a joy to use compared to its competitors. Using bitbucket at work is frustrating, and it reminds me of a lot of Microsoft web interfaces, which is ironic, given that it's GitHub and not Bitbucket that Microsoft owns now


You don't need enterprise clients. Projects like KDE self-host and are enough to keep you around and getting new features, if you can get them on board. Plus, enterprises often look at their bottom line and ask if something else is a better value, so if you are "free", some of them will switch to you.


sourcehut is always worth a mention, though I have never used it in a collaborative environment.


sr.ht doesn't go for "fun"; brutal minimalism is its main selling point.


This article reinforces a lot of my biases around early bets. Taste is so so important, everyone looks at you weird when you say you're betting on "niche, tasteful solution" (git) instead of "common, gross solution" (SVN). Github bet on Git and made tasteful choices, and that was a huge propellant for them.

I feel the same way about tscircuit (my current startup), it's a weird bet to create circuit boards with web technologies, nobody really does it, but the ergonomics _feel better_, and I just have to trust my taste!


I would argue that hg was more tasteful than git at the time github began. The one thing git had going for it was that the most common operations were absurdly fast from the beginning, while hg took a bit of time to catch up.


I agree with this take. I think hg could have overtaken git for a while, but git catered a bit better to the OSS communities and hg catered a bit more to big companies from a DX perspective. Maybe in this case, the important thing is knowing that your partners/technologies are aligned with your vision of the future - git has been more open-source-first (I would argue).


It's just survivorship bias. If GH hadn't won, no one would be trying to reverse-justify its success.

This bet worked, the mercurial ones didn't.


Not sure what is even meant by "taste" here; what I see over and over is that convenience wins, where winning is defined as widespread use.


The article uses "taste" pretty broadly compared to many folks in the comments. The first mention is about the site being pretty. But later he says "We had taste. We cared about the experience", which aligns more with your perspective of convenience.


> fundamentally interesting thing that I think showed in everything that we did was that we built for ourselves. We had taste. We cared about the experience.

The "taste" thing is weird, making an analogy with Apple vs. Microsoft, who explicitly competed many times.

As DannyBee said, there was never any Google vs. Github competition, because Google's intention was never to build something like Github

In fact, one thing he didn't mention is that the URL, as I recall, was

   code.google.com/hosting/
not

   code.google.com/   # this was something ELSE, developer API docs for maps, etc.
SPECIFICALLY because management didn't want anyone to think it was a product. It was a place to put Google's own open source projects, and an alternative to SourceForge. (There was also this idea of discouraging odd or non-OSS licenses, which was maybe misguided)

That is, the whole project didn't even deserve its own Google subdomain!!! (according to management)

(I worked on Google Code for around 18 months)

---

However if I try to "steel man" the argument, it's absolutely true that we didn't use it ourselves. The "normal" Google tools were used to build Google Code, and we did remark upon that at the time: we don't dogfood it, and dogfooding gives you a better product

But it was a non-starter for reasons that have to do with Google's developer and server infrastructure (and IMO are related to why Google had a hard time iterating on new products in general)

I also think Github did a lot of hard work on the front end, and Google famously does not have a strong front end culture (IMO because complex front ends weren't necessary for the original breakout product of search, unlike, say, Facebook)


I think the analysis is largely correct. But not entirely. My take on this is that 1) Github fixed the one problem that Git had: terrible UX. Github made it more like subversion. Enough so to consider switching. Indeed a lot of small companies treat it like a central repository with commit rights for everyone. 2) It fixed a big problem OSS projects had: it was very hard to contribute to them.

The first reason is why Git became interesting; the second one is why it won.

Prior to Github, the way to contribute to OSS projects was a protracted process of engaging with very busy people via mailing lists, issue trackers, and whatnot, and jumping through a lot of hoops to get your patches considered, scrutinized, and maybe merged. If you got good at this, you might eventually earn commit privileges against some remote, centralized repository. This actively discouraged committing stuff. OSS was somewhat elitist. Most programmers never contributed a single line of OSS code or even considered doing so.

Github changed all that. You could trivially fork any project and start tinkering with it. And then you could contribute your changes back with a simple button push: create pull request. It actively encouraged the notion. And lots of people did.

Github enabled a bunch of kids who were into Ruby to rapidly scale a huge OSS community that otherwise would not have existed. That success was later replicated by the Javascript community, which pretty much bootstrapped on Github as well. What did those two communities have in common? Young people who were mostly not that sophisticated with their command line tooling. This crowd was never going to be exchanging patches via some mailing list, like the Linux crowd still does today. But they could fork and create pull requests. And they did. Both communities had a wild growth of projects. And some of them got big.

Github gave them a platform to share code, so they all used it. And the rest is just exponential growth. Github rapidly became the one place to share code. Even projects with their own repositories got forked there, because it was just easier. A lot of those projects eventually gave up on their own central infrastructure; accepting contributions via Github was easier. In 2005 Git was new and obscure; very elitist. In 2008 Github popped up. By 2012 it hosted most of the OSS community. Game over by around 2010, I would guesstimate. By 2015 even the most conservative shops were either using it or at least considering it.


Another big advantage of Git for sites like GitHub is that you are never putting all your eggs in one basket. You have your local copy of all history in a project. GitHub is merely a mirror. Sure, some features have been sprinkled on top, like pull requests and an issue tracker, but those are not the most critical part. If GitHub goes down you can move your whole Git history to another site like GitLab, sourcehut, or just self-host it, and you can even start doing that right now with minimal effort. This was never the case with CVS and Subversion.
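Concretely, moving wholesale is about two commands (URLs are placeholders):

  git clone --mirror https://github.com/you/project.git  # grab every ref and all history
  cd project.git
  git push --mirror https://gitlab.com/you/project.git   # replay it all at the new host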


> They never cared about the developer workflow.

Man, given how terrible GitHub's developer workflow is in 2024... there is still no first-class support for stacked diffs, something that Phabricator had a decade ago and mailing list workflows have been doing for a very long time.

I personally treat GH as a system that has to be hacked around with tools like spr [1], not a paragon of good developer workflows.

[1] my fork with Jujutsu support: https://github.com/sunshowers/spr


You can't even see a commit graph (no, the insane glacial-js "network" tab doesn't count). You can see it in bitbucket for heaven's sake. The basic data structure of git, invisible. On a GUI.
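Locally, at least, the graph is one flag away:

  git log --graph --oneline --all   # the commit DAG, drawn in the terminal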


I professionally used RCS, CVS, Subversion and Perforce before Git came along. Hell, I was actually in a company that FTP'd its PHP files directly to the production server.

People in the field less than 20 years might not appreciate the magnitude of this change (though, adding my two cents to the author's article, branching in p4 was fine). People may have also dealt with ClearCase (vobs!) or Microsoft Visual SourceSafe.

Git did as much for software development velocity as any other development in recent history.


That's all true for me, too, although I hadn't used p4. I resisted Git for a little while because I didn't see the massive appeal of a distributed system in an office with a central server. CVS... worked. SVN was a much more pleasant "faster horse". And then I made myself try Git for a week to see what the fuss was all about, and my eyes were opened.

Git is not perfect. There are other products that did/do some things better, or at least more conveniently. But Git was miles ahead of anything else at the time that I could use for free, and after I tasted it, I never wanted to go back to anything else.


I was a late adopter, also, and git is definitely not perfect. Mercurial did some things better: at the time, notably, the forest extension. Git's flexibility is a two-edged sword, and the history-rewrite footguns should be harder to use. Git does come close enough to solving a fundamental problem that it will be very, very durable, though. As long as it is used for Linux kernel development I expect it to continue to be the dominant DVCS.


God I hated ClearCase, did a migration from it in 2016(!) for a project that had been around since the late 80s. People were really resistant to moving but once it was done were like "Oh wow, it's really fast to create a branch, this means we don't have to have one branch for three months!"


Around about that time, I was working on a Mercurial frontend https://github.com/tanepiper/hgfront - it was around the time GitHub was starting to pick up, and BitBucket also appeared around then (we spoke to the original developer at the time, but nothing came of it). Funnily enough, we also built a Gist-like tool that had inline commenting, forking and formatting (https://github.com/tanepiper/pastemonkey).

I always wonder what would have happened if we had a dedicated team to make something of it, but in the end git won over hg anyway so likely a moot point.

Edit: there's a low-quality video of the early interface we worked on - https://youtu.be/NARcsoPp4F8


Fun fact, I (original author), wrote the original version of Gist. That was my first project at GitHub. Gist #1 is my claim to fame: https://gist.github.com/schacon/1


> one of the Linux developers reverse engineered the protocol, breaking the licensing terms

Tridge never accepted the license terms of BitKeeper and re-implemented client libraries only by observing black box behaviour of the server, so at no point did he break the terms of the license. He also told Linus what he was doing and asked, "how do you think [Larry McVoy] will react?". Linus said he thought it would be okay.

https://lwn.net/Articles/969221/


I clicked on this link thinking "timing and product quality", so I was satisfied to see that GitHub co-founder Scott Chacon credits it to "GitHub started at the right time" and "GitHub had good taste".


git won because linux used git, and the vast majority of open-source code was written for linux. Simple as that. GitHub won because it made remote collaboration on a code base easier than anything else.


I think that if GitHub hadn't come out, something other than git would have won. While git did have Linus behind it, the others were objectively better in some ways and were working on the areas where they were objectively worse, and eventually those advantages would have got everyone to switch. However, the others never had anything like GitHub - even 20 years later they still aren't trying (rumor is they are not dead)


I don't think that GitHub is the most popular forge because it is _good_. I have never heard anyone say "I use GitHub because it has a good UI", "... has good accessibility", "... is well designed", "... I prefer proprietary services" or anything remotely similar.

The main reason I hear people say they use GitHub is due to network effect. "It's what everyone else is using", "You'll get more visibility and more contributions" or "My employer forces us to use it".

Essentially, the same reasons why Windows is a popular platform; not really technical reasons, mostly business and ecosystem factors driving people towards it.

Sure, GitHub was better than the alternatives in its initial days. But things have changed in the last decade. GH has continuously declined while alternatives have surfaced and improved dramatically.

Personally, it pisses me off that GitHub presents itself as "an open source hub" when it is a proprietary, hosted service.


I wonder if there's an alternative universe where Fog Creek pushed Kiln - their hosted Mercurial product - harder and exploited the mindshare of Stack Overflow somehow. Perhaps if they'd tried to get open source projects and their maintainers onto a shared platform to manage both code and (potentially paid) support they would have earned a mention here.


A commercial proprietary platform that came around at the right time to cash in on the open source movement, and was then acquired by Microsoft for $7.5 billion¹ in 2018.

Becoming a jewel for (one of) the world's most profitable proprietary software and platform companies.

And GitHub is still super popular, evolving more and more into a social network, where the users work for free to increase the value of the platform and promote it for the chance at more GitHub stars.

With the common echo chamber: "Windows is bad, eww, proprietary bloated spy software and evil Office. Open source and Linux are goood." and then

"I have 3000 stars on GitHub now, check out my repos."

Did Microsoft predict the value of having trivial access to all those codebases for training software models?

The kind of insight they have with Microsoft GitHub and Microsoft LinkedIn is quite something.

¹ In stock


Github won because sourceforge was ruined already.


SF lost its way. My vague memory is that SF and also CollabNet were focusing on higher-level functionality and, while that was valuable, neglected the basic code-sharing growth and ease of use that was the rationale for their existence. They went into higher-level functionality too early.


I worked for CollabNet from pretty early on. CollabNet had one major customer... HP. Everything got developed for a relatively stodgy old customer. They wanted centralized version control. They wanted tons of ACLs. It was all very corporate-sales focused.

Github "won" because they were not CollabNet.


At the time I was in an industry collaboration and we needed a place to work together, and we looked at SF and CollabNet. Internally we also would have benefitted from such a tool, so I even got an onsite demo by SF. It was impressive but also, as you say, enterprise focused and very different from their free offering. And that was imho the key reason: there was not one offering but two, with different tech and customers. Focus was lost.


The guy who reverse-engineered the Bitkeeper protocol, triggering the creation of git, is the same guy who created rsync - Andrew Tridgell.


Also Samba. https://en.m.wikipedia.org/wiki/Andrew_Tridgell

Reverse-engineering proprietary protocols is definitely one of his things.


I work at Microsoft, so I write a lot of pipelines and interact a lot with git.

This is my own opinion:

- GitHub looks nice but PR merge window between forks is still bad

- GitLab CI is so much more intuitive than GitHub CI, and there is a lot of existing code that you can copy/paste, especially if you use Terraform, AWS, or K8s

- I am biased, but Azure DevOps looks the most intuitive, and its pipeline system is the best


Want to expand on that second point? Github actions have more prebuilt workflows than I can shake a stick at; no copy pasting anything, you just say "uses: actions/whatever@v123" in your own yaml, configured with some "with" statements.


I'm not GP, but I firmly agree with the observation. I also readily admit that I'm for sure biased because I was on GitLab before GHA was even a dream in someone's eye

The hazard to "prebuilt workflows" is that one needs to know about them, and load their assumptions into your head before using them, which can be true of any sprawling namespace but tends to be less true within a single organization. That's not even getting into the risk of folks who do both things: copy-paste someone else's "uses:" statement eliding the version pinning because "what's the worst that can happen," amirite?!
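
For what it's worth, the usual mitigation is a one-time lookup: pin "uses:" to the full commit SHA a tag points at, instead of the movable tag itself. A quick sketch, with actions/checkout purely as an example and <sha> standing in for the 40-hex commit id:

  # resolve the tag to the commit it currently points at
  $ git ls-remote https://github.com/actions/checkout refs/tags/v4
  <sha>  refs/tags/v4

  # then reference the immutable SHA in the workflow, keeping the tag as a comment:
  #   uses: actions/checkout@<sha>  # v4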

As for the "for terraform, AWS, k8s" part, that is 110% why I am a GitLab fanboy because the platform natively speaks those technologies - I don't need to (deep sigh) set up an S3 bucket with a DynamoDB to have Terraform State - it ships with GL. I don't need to do crazy "uses:" with some rando shit to use AWS federated credentials, it ships with GL. I for sure don't need to do crazy "uses:" to have k8s rollouts, rollbacks, status checks, and review environments: they are built-in concepts in GLCI

Also, unless something has gravely changed in the past little bit, how in the universe can anyone use GHA with a straight face without "show me the expanded and linted version of this yaml" as with https://docs.gitlab.com/ee/ci/yaml/lint.html#simulate-a-pipe...
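
For reference, a rough sketch of running that simulation against the API - the project ID and token are placeholders, and the exact parameters are spelled out in the linked docs:

  $ curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
      "https://gitlab.com/api/v4/projects/<project-id>/ci/lint?dry_run=true"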

I'll fully admit that $(gitlab-runner exec) is a cruel joke, but every time I hear someone claim that Act (or its like 50 forks over in Gitea/Forgejo-land) are "local GHA" I throw up in my mouth, so I consider that pretty much a wash

---

ed: I realized this debate is also very similar to the Maven-vs-Gradle argument: do you want executable junk in your build process, or do you want declarative steps? I am firmly, 1000000000000000000% in the Maven camp, which also explains why the last thing I want is some minified .js files someone else wrote to be injected into my CICD process


Sourceforge was horrible to use. GitHub was widely used, but it only really reached proper dominance I think when it started offering free closed-source repositories to people that weren't paying them, which was what, 2019 or so? Until then it was pretty common in my experience for people to use Bitbucket for Git for private stuff.


The ease of creating, deleting and merging branches and patches is what sold me on git over subversion (which itself was a vast improvement over the only VCS I had experienced up until that point, CVS). When the author describes the literal jaw-dropping demos, I remember having a similar reaction.


I’m a little stunned by “taste” as the defining factor, but GitHub has certainly brought the industry a long way!

Whenever I’ve asked for help using GitHub (usually because I’m getting back into coding), the dev helping me out often stumbles, forgets, and gets confused. What’s surprising is that’s true no matter how senior they are.

GitHub did a ton to smooth out dev workflows, for sure, but there’s something almost intensely counter-intuitive about how it works and how easy it is to miss a step.

I’d assume good product taste is reasonably indexed to “intuitive to use” but GitHub doesn’t seem to achieve that bar.

What’s an example of GitHub’s good taste that I’m missing?


Have you used SourceForge or SVN? Or sent a zip of files named "v23_backup_(copy).zip" to other engineers?

Compared to everything that came before, Github may as well have been Nirvana.


I still miss CodePlex from Microsoft ;) it was a really beautiful website


I was there from the ground floor and Gitlab.com failed miserably in SEO.

There are ancient threads about it on their issue tracker, that go nowhere for years and years. It's almost as if they were trying to sabotage themselves.

SEO was hugely important because when you searched for something, GitHub projects actually came up in Google; gitlab.com's did not. Even if there was an interesting project there, it wouldn't have been known.

So I'm not surprised Github became synonymous with Git forge online.


I think GitHub won because its UI/UX is the best among competitors - and I don't mean it's perfect, just that it's the best among competitors.


Because it was a lot better than the alternatives and free?

I introduced Subversion at my first job, we were sharing updates over FTP and heavily coordinating who worked on what before.

Subversion was definitely a step up, but branching/merging was very problematic and time consuming.

I still find Git confusing as hell sometimes, and would guess most developers use like 50% of the features tops; without GitHub or something similar it wouldn't have gone anywhere.


And they STILL don't have IPv6 support >..<


I always enjoy replying to these comments with

  $ dig +short gitlab.com. AAAA
  2606:4700:90:0:f22e:fbec:5bed:a9b9
and, just to pour more salt in that wound

  $ dig +short bitbucket.org. AAAA
  2401:1d80:321c::bbc:1:df7c
  2401:1d80:321c:2:0:bbc:1:df7c
  2401:1d80:321c:1:0:bbc:1:df7c

  $ dig +short git.sr.ht. AAAA
  2a03:6000:1813:1337::155
and it's not like Microsoft doesn't know how to run IPv6, both microsoft.com and portal.azure.com are on V6
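
For completeness, the same query against github.com itself returns nothing at all (at least as of this writing):

  $ dig +short github.com. AAAA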


I wish they at least had some official statement as to why this hasn't been implemented yet, but from what I can tell, basically nothing has been said about it


vi: 1976
GNU Emacs: 1984
BIND: 1986

are (along with too many other projects) from way before Nov 1993, where "The Total Growth of Open Source" graph starts from 0.


It all depends on how you're counting. For one, "open source" was not a phrase before 1998, so there is some retrofitting of Free Software projects. But also, there isn't a registry, it's rather difficult to be more than approximate with this. The article is very specific about their methodology, I'm only using one graph as a general example.


From the paper

> "The database contains data from January 1990 until May 2007. Of this time horizon, we analyze the time frame from January 1995 to December 2006. We omit data before 1995 because it is too sparse to be useful"

> "Large distributions like Debian are counted as one project. Popular projects such as GNU Emacs are counted as projects of their own, little known or obsolete packages such as the Zoo archive utility are ignored"

So even though the methodology is "very specific", it seems very incomplete/inaccurate/selective. Even the Linux kernel, as per their source, started in 2005 (https://openhub.net/p/linux).

Source: https://www.researchgate.net/publication/45813632_The_Total_...


I remember the days when on sourceforge, you sometimes had to find a small link as opposed to the big download button that gave you an installer bundled with "offers". As far as I know this is something SF added on top of the binaries the author was trying to distribute.

That left a market opportunity for something better. I think that _might_ have had something to do with it.


> we won because we started at the right time and we had taste.

2012, https://a16z.com/announcement/github/

  We just invested $100M in GitHub. In addition to the eye-popping number, the investment breaks ground on two fronts:
    It’s the largest investment we’ve ever made.
    It’s the only outside investment GitHub has ever taken.
2018, https://web.archive.org/web/20180604134945/https://a16z.com/...

  Six years ago we invested an “eye-popping” $100 million into GitHub. This was not only a Series A investment and the first institutional money ever raised by the company, but it was also the largest single check we had ever written.. At the time, it had over 3 million Git repositories — a nearly invincible position.. if I ever have to choose between a group of professional business managers or a talented group of passionate developers with amazing product-market fit like GitHub, I am investing in GitHub every time.


Bitbucket got funded before GitHub did, and yet GitHub was still bigger before they got any investment.


What is the point you're trying to make here?


Did $100M investment help Github to win, or had Github already won in 2012 with profitability and 3M git repos?


I would argue that GitHub already won in 2012. The investment helped us grow in a different way, but I don't think anyone involved in that deal would have said that we had almost any serious competitive threats at the time, which is to some degree why it was such a great deal.


Did the investment encourage corporate buyers to sign up for Github Enterprise, where corp developers were already using the free product?


That was certainly one of our internal arguments, that the institutional investment would be helpful for large company trust.


Personally I really liked darcs, always felt more natural and intuitive to me.

Though fortunately it was compatible with and natively convertible to git, which made the git takeover mostly seamless.

At the time it felt like GitHub, and the rapid tooling and integrations developed in response, cemented git's rise and the downfall of everything else, including darcs.


Surely GitHub is the most popular git hosting nowadays, but fortunately there are good alternatives like Gitea for those who don't want to give Microsoft free access to any code they host on GitHub. Spinning up a new instance can be done in one afternoon and is not complicated.
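
As a rough sketch of how little it takes, using the official gitea/gitea Docker image - the ports and data path here are just examples, not a production setup:

  $ docker run -d --name gitea \
      -p 3000:3000 -p 2222:22 \
      -v /srv/gitea:/data \
      gitea/gitea:latest

After that, the web setup wizard at http://localhost:3000 handles the rest.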


I don't know about that; it was a dead platform for my projects by the time US government policies blocked the accounts and projects of some Middle Eastern developers.

Since then I'm happy self-hosting Gitea. GitHub is still a decent place to contribute to others' projects.


LWN linked to a summary of the origin of git: https://lwn.net/Articles/974914/

They also provided a set of links to LWN articles from that era.


It’s also never too late to use a different VCS. Especially if you have no one who’s interested in your code, like me. “Winning” isn’t everything, as they say.



Have to weakly agree here. But a very important point is missing: GitHub Actions. It is a prime example of vendor lock-in.

Also, the interconnected PR references by #prnumber across many repos will make it very hard to keep track of future development in any FOSS GitHub alternative a project moves to.

So until git-bug etc. mature enough to include everything in git itself, or people start using Gerrit, email patches, etc. - which are all non-trivial compared to GitHub PRs - giving up GitHub is harder than giving up Google search.
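
To be fair, the email workflow itself is only a couple of commands once git send-email is configured - the non-trivial part is the setup and the list etiquette, not the tooling. A sketch (the list address is made up):

  # turn the last three commits into patch files, with a cover letter
  $ git format-patch -3 --cover-letter -o outgoing/
  # mail them to the project's list
  $ git send-email --to=dev@lists.example.org outgoing/*.patch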


[flagged]


You only feel this way because it's written about something you worked on.

* Co-Pilot is trained on copyrighted code without attribution: ludicrous?

* Github works with ICE: incoherent?

* All of Github's hosting code is proprietary and secret: ridiculous?

* Github tries to discredit copyleft: hyperbolic?

* Github is wholly owned by Microsoft who also doesn't like copyleft: conjecture?

As to SFC not having anything better to do... this is exactly what their mandate says they should do. This is what they collect money to do: to point people towards free software. If you want SFC to find something better to do, I assume you want them to completely shut down their organisation and stop complaining about undermining copyleft and stop complaining about non-free code.


I don't particularly want to engage, but why not?

* Co-Pilot is trained on copyrighted code without attribution: ludicrous?

This is debatable, but yes, it's a ridiculous reason to not use github. If your code is on the internet, which it is with every other alternative host they mention, it will be crawled and used for training, just as it will be read by humans and learned from. We can have a fair use debate, but GitHub is not alone in this stance or problem set.

* Github works with ICE: incoherent?

No, I would file this under ridiculous. GitHub is a government contractor. It licenses its software to lots of organizations. People can disagree with a lot of them, but trying to manage that and dictate changing political morality at a company level is insane. All of these other solutions are certainly used by organizations that are a lot more controversial than a major federal institution, as much as I may even personally disagree with policies under certain administrations. But again, singling out GitHub is just finding some reason to be mad; it's not GitHub specific.

* All of Github's hosting code is proprietary and secret: ridiculous?

The computers that you're writing your FOSS code on, that indeed they wrote this article on, have proprietary chips, have software you can't access. You think everyone at SFC uses a Stallman-esque laptop? They're probably happily typing away on their Macbooks, full of non-FOSS software. The software community is an ecosystem of lots of models, nobody is all-FOSS, it's not possible. GitHub has open sourced Electron, libgit2, a thousand other things. Core code was never helpful to anyone and would not be helpful today. Git isn't a lock-in proprietary thing, you can always easily transfer your code elsewhere.

* Github tries to discredit copyleft: hyperbolic?

It doesn't try to discredit copyleft as a corporate stance. Several individuals have pointed out weaknesses in copyleft vs permissive OSS licenses. But it's never that you should use closed source instead of copyleft or something; it's always something like: instead of copyleft, it's generally better for the community to use MIT or Apache or something more permissive that doesn't need to involve lawyers and gives you more freedom.

* Github is wholly owned by Microsoft who also doesn't like copyleft: conjecture?

This is a 30 year old take on what Microsoft cares about. Microsoft probably never thinks about copyleft these days, any more than the rest of us do. MS contributes to Linux, contributes to Git, both GPL projects. Almost certainly they contribute more to GPL projects globally than you or I or the SFC do.

> As to SFC not having anything better to do... this is exactly what their mandate says they should do

My point was that they can actually help the Git project by sponsoring meetings, educating people in a useful way, etc. I have helped donate a fair amount to the SFC through GitHub events and was hoping the money would be better spent on actual community building and project fostering rather than cheap think-pieces like this.


I could be wrong but I don't think hyperbolic conjecture is going to swing anyone the other direction.


Which is ironic, because that entire article is hyperbolic conjecture.


The rise of GitHub also coincided with the enshittification of sourceforge.net. Although SF was not git-based at that time, it had the mindshare of a lot of open source projects, and it went completely downhill.

So, a downfall of a potential alternative was also a factor IMO.

Edit: after I commented I realized that SF was already mentioned in another comment


I would argue that SF was always pretty shitty, because it focused entirely on advertising. I remember Chris giving a talk comparing the signup process of GitHub and SourceForge. SF had like 8 fields and GitHub had 2. This was because SF wanted to know ad demographic info - where did you hear about us, etc. GitHub just wanted a name and a password. But this was the difference in everything - SF cared about advertisers, not developers. GitHub never thought about what anyone other than the developers using the product wanted.


Agreed, but my point is that when they saw a new and better rival, instead of pivoting, SF became even worse and became malware-ised.

Also, SF was based on SVN. They failed to understand and capitalize on a better tech on the market, i.e. git.


They actually did so very early.

In early 2009 they added Git, Hg and Bzr support: https://arstechnica.com/information-technology/2009/03/sourc...

That's less than a year after GitHub launched, when it was still very small.


I stand corrected! Thanks!


Sourceforge was always awful to navigate, because it was dependent on ad revenue, not subscriptions. It was trying to compete with consumer-focused things like download.com (remember that?), where the end user just wanted a tarball of the executable, and the host was trying to make money selling ad space on the page where the download link was.

The fact that end users could peek at the folder structure of the source code was a novelty at best


I am seriously worried about the Microsoft acquisition, because products acquired by MS almost never end up well.


Well, I can login and use core functions on github with a noscript/basic (x)html browser... gitlab... well...


They had the timing, the name and the reputation boost.

Though I wonder if Bitbucket would have won if they had had GitHub's name


I didn't realize Scott Chacon was a founder of Github. Did they all cash out equally?


cough. its, not it's.


Github won in part because git won. And git won because, for complex sociological factors, the software engineers were able to argue that their needs were more important than the needs of other parts of the companies for which they worked.

For a counter-point (which I've made many times before) from 2005 to 2012 we used Subversion. The important thing about Subversion was that it was fun to use, and it was simple, so everyone in the organization enjoyed using it: the graphic designers, the product visionaries, the financial controllers, the operations people, the artists and musicians, the CEO, the CMO, etc. And we threw everything into Subversion: docs about marketing, rough drafts of advertising copy, new artwork, new design ideas, todo lists, software code, etc.

The whole company lived in Subversion and Subversion unified every part of the company. Indeed, many products that grew up later, after 2010 and especially after 2014, grew up because companies turned away from Subversion. Google Sheets became a common way to share spreadsheets, but Google Sheets wasn't necessary back when all spreadsheets lived in Subversion and everyone in the company used Subversion. Likewise, Google Docs. Likewise some design tools. Arguably stuff like Miro would now have a smaller market niche if companies still used Subversion.

At some point between 2008 and 2015 most companies switched over to git. The thing about git is that it is complex and therefore only software engineers can use it. Using git shattered the idea of having a central version control for everything in the company.

Software engineers made several arguments in favor of git.

A somewhat silly argument was that software developers, at corporations, needed the ability to do decentralized development. I'm sure this actually happens somewhere, but I have not seen it. At every company that I've worked, the code is as centralized as it was when we used Subversion.

A stronger argument in favor of git was that branches were expensive in Subversion but cheap in git. I believe this is the main reason that software developers preferred git over Subversion. For my part, during the years that we used Subversion, we almost never used branches, mostly we just developed separate code and then merged it back to main. Our devops guy typically ran 20 or 30 test servers for us, so we could test our changes on some machine that we "owned". For work that would take several weeks, before being merged back to main, we sometimes did setup a branch, and other times we created a new Subversion repo. Starting new repos was reasonably cheap and easy with Subversion, so that was one way to go when some work would take weeks or months of effort. But as ever, with any version control system, merge conflicts become more serious the longer you are away from the main branch, so we tried to avoid the kind of side projects that would take several weeks. Instead, we thought carefully about how to do such work in smaller batches, or how to spin off the work into a separate app, with its own repo.
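
For readers who never used both, the mechanical difference looked roughly like this (assuming the conventional trunk/branches repository layout; names here are illustrative):

  # Subversion: a branch is a server-side copy, created and switched to over the network
  $ svn copy ^/trunk ^/branches/feature -m "create feature branch"
  $ svn switch ^/branches/feature

  # git: a branch is a local ref; creating and switching is instant and offline
  $ git switch -c feature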

A few times we had a side project that lasted several months and so we would save it (every day, once a day) to the main branch in Subversion, just to have it in Subversion, and then we would immediately save the "real" main branch as the next version, so it was as if the main branch was still the same main branch as before, unchanged, but in-between versions 984 and 986 there was a version 985 that had the other project that was being worked on. This also worked for us perfectly well.

The point is that the system worked reasonably well, and we built fairly complex software. We also deployed changes several times a day, something which is still rare now, in 2024, at most companies, despite extravagant investments in complex devops setups. I read a study last week that suggested only 18% of companies could deploy multiple times a day. But we were doing that back in 2009.

The non-technical people, the artists and product visionaries and CFOs and and CMOs, would often use folders, when they wanted to track variations of an idea. That was one of the advantages of having something as simple as Subversion: the whole team could work with idioms that they understood. Folders will always be popular with non-technical people.

But software developers preferred git, and they made the argument that they needed cheap branches, needed to run the software in whole, locally on their machines, with multiple variations and easy switching between branches, and needed a smooth path through the CI/CD tools towards deployment to production.

I have two criticisms of this argument:

1. software developers never took seriously how much they were damaging the companies they worked for when they ended the era of unified version control.

2. When using Subversion, we still had reasonably good systems for deployment. For a while we used Capistrano scripts, and later (after 2010) I wrote some custom deployment code in Jenkins. The whole system could be made to work and it was much simpler than most CI/CD systems that I see now. Simplicity had benefits. In particular, it was possible to hire a junior-level engineer and within 6 months have them understand the whole system. That is no longer possible, as devops has become complex, and has evolved into its own specialty. And while there are certainly a few large companies that need the complexity of modern devops, I've seen very few cases myself. I mostly see just the opposite: small startups that get overwhelmed with the cost of implementing modern devops "best practices", small startups that would benefit if they went back to the simplicity that we had 10 to 15 years ago.


its


Why GitHub was started. To ease ass pain.


> Why GitHub Actually Won

> How GitHub _actually_ became the dominant force it is today, from one of it's cofounders.

> Being at the very center of phenomena like this can certainly leave you with blind spots, but unlike these youngsters, I was actually there. Hell, I wrote the book.

Downvote all you want for being “non-substantive” but for some reason I can’t voluntarily tolerate such a density of well-actually phrasing. It’s grating.

It also seems to be everywhere these days but maybe I’m too attuned to it.


They might have taste but they still don't have IPv6. Sorry for the rant, but I'm always baffled that they haven't switched yet. Does anyone have insight into the challenges they are facing?


git won because of empty hype; bzr was far superior in basically every aspect. Much easier to program with, either for plugins or to be embedded; a much saner "hide your development commits" model with log levels; a much saner command line interface. It's just better.

It's not the first thing to be carried by hype instead of careful comparison.


That's simply untrue. Bzr was dog slow on repos with lots of history. It had lots of early users and support from hosting services like Launchpad, Savannah, and SourceForge. I'm certain that everyone didn't migrate to git because of hype. I mean, it's not credible to say the Emacs team stopped using it because it wasn't fashionable.

There were lots of DVCS projects at the time, like arch, darcs, and dcvs. People were running all kinds of experiments to explore the Cambrian explosion of new ideas. Some of them did some things better than git, but git handled most of those things reasonably well and it was fast. We all mostly ended up on git because it was generally the better option. It earned the hype, but the hype followed the adoption, not vice versa.


So in exchange for a little speed we are stuck with one of the most user-hostile tools out there. That's not the deal I would have wanted to make. The interface is atrocious, as some switches completely change what the command does -- this was partially acknowledged and fixed in git switch, but there's so much more: it loses work way too easily and some of the concepts are near impossible to grok. (I did learn git eventually but that doesn't mean I like it. It's more of an uneasy truce than a friendship.)


It wasn't a little speed. Other options were many times slower. I just renamed a large subdir in a large project. `time git status` took 41ms. That kind of speed lets you add all sorts of interactivity that would be impractical if it were slower. For instance, my shell prompt shows whether the current directory is managed by Git, and if so, whether the status is clean. I would never tolerate my terminal being detectably slowed by such a thing. With git, it's not.

There are a thousand little ways where having tooling be fast enough is make-or-break: if it's not, people don't use it. Git is fast enough for all the most common operations. Other options were not.
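
That kind of prompt is only a few lines of bash - this is a rough sketch of my own, not a standard (git also ships a fuller __git_ps1 helper in contrib):

  # show "(branch)" in the prompt, with "*" when tracked files have changes
  git_prompt() {
    local branch
    branch=$(git symbolic-ref --short -q HEAD 2>/dev/null) || return
    if git diff --quiet HEAD -- 2>/dev/null; then
      printf ' (%s)' "$branch"
    else
      printf ' (%s*)' "$branch"
    fi
  }
  PS1='\w$(git_prompt)\$ '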


I think PR and network effects of GitHub definitely played a role in the success of Git over other options like bzr, but you should also remember that bzr had tons of issues. It was slower, there was no index/staging area, there was no rebasing, etc. Mercurial was very good too, but while there were pluses and minuses with all of them, I think there was a _lot_ of careful comparison too. None of them were clearly and in all aspects better than the others.



