GitHub Sunsetting Subversion Support (github.blog)
319 points by mikece on Jan 20, 2023 | 228 comments



As one of the GitHub cofounders and the brainchild of this particular feature, I want to let everyone know that this is maybe the funniest thing I've ever done.

We released this feature and published the announcing blog post on April Fool's Day, 2010. I remember demoing it to the other GitHub guys and saying how funny it would be if we made this an April Fool's Day post, as though it was a big stupid joke, but then it actually completely worked on every repository we had, and we all thought it would be great. Until nobody believed us. Which in hindsight we should have seen coming, since that was the joke, but nobody actually tried it. Then people tried it and it worked and they thought it was a trick or something.

It was really helpful for people migrating from legacy SVN-based systems to us (CI and stuff), but I'm surprised to some degree that it's still running 13 years later, when nobody is really facing that issue anymore. And I'm still undecided on whether the joke was worth the massive confusion it caused. But if pressed, I would say that I would 100% release it on April Fool's Day again.


As the PM who ended up finally killing it, I (on behalf of the team) thank you for your "joke" that wasn't really a joke. It did help some customers who had legacy SVN workloads land on GitHub.

I wanted to announce this on April Fool's Day, but just couldn't make the timing work.


> I wanted to announce this on April Fool's Day, but just couldn't make the timing work.

Thank you for not doing that. Releasing a wacky feature on April 1 is funny. Discontinuing a service that people might rely on is distinctly un-funny.


I disagree. It would probably prompt a few support calls and emails, but otherwise I think it would be great.


I don't read the news on April 1 because it got old decades ago. Sunsetting a service, rather than killing it outright, is a kindness. Making the announcement on April 1 means people will miss it or ignore it. Uncool.


“A few support calls” = at least a few companies who have unnecessarily faced issues major enough that they worked through internal investigations of many different pieces of the stack and then eventually decided to reach out to external support for GitHub, potentially having reached out to other vendors’ support as well, because some PM thought it would be “funny”.

I don’t actually think it would cause any problems personally, but I’m astonished at the flip dismissal of a “few support calls”.


Instead add FTP version control support with the ability to create backup files.


I'm still angry at Mozilla and Google for deprecating that. Being able to set up a simple anonymous FTP server and send normies a link to it that they could open in their browsers was really convenient. Now if I want to use FTP, I have to explain to them that they have to copy it into the Windows file explorer or download FileZilla.


Why not an HTTP server with a directory index?

FTP is a pretty clunky protocol: it's round-trip heavy, not friendly to NAT on the client-side when you don't use PASV, not friendly to firewalls on the server-side when you do. I'm not sure what benefits there are to it these days.


https://github.com/sigoden/dufs

A file server that supports static serving, uploading

MIT/Apache-2 in Rust


Yeah a simple `python3 -m http.server` works pretty well.


Python http.server doesn't handle concurrent requests, so it works pretty well until it doesn't. For example, don't use it to serve an ISO to iLO/iDRAC: the boot will fail.


I believe this was updated in a recent version of Python to use threads.
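A quick way to check, if you want to verify locally: Python 3.7 added ThreadingHTTPServer, and the `python3 -m http.server` CLI has served with it since then, so each request gets its own thread.

```shell
python3 - <<'EOF'
# Since Python 3.7, `python3 -m http.server` serves with
# ThreadingHTTPServer, which mixes in socketserver.ThreadingMixIn,
# so a slow client no longer blocks everyone else.
from http.server import ThreadingHTTPServer
from socketserver import ThreadingMixIn
print(issubclass(ThreadingHTTPServer, ThreadingMixIn))  # True
EOF
```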


FTP is dead; enterprise and mid-level companies just won't let go. It boggles the mind why it has persisted so long.


Though I typically use rsync these days, FTP can be useful for copying files to and from a Linux server when the client is a Windows machine.


Recent windows versions natively support ssh, scp, sftp on the command line.


The types of companies that use Windows will typically take a long time to get on a modern version for their stack, the desktop fleet notwithstanding.


There is tons of old crappy software that cannot understand anything else, so it persists. Like fax machines.


But how can we pop a box without it having wuftpd installed?


I didn't even use it all that often but, yes, I also feel like this was useful to have. Every now and again I come across an ftp site that I now have to copy into a dedicated program instead.


I made an online tool for this, from which you can create deep links to an FTP server like this: https://demo.filestash.app/login?type=ftp&hostname=ftp.gnu.o...

It works not only with FTP but also with SFTP, S3, WebDAV, and pretty much any other storage backend you can think of.


It's not just FTP being deprecated; WebDAV (DeltaV) is too, and Subversion's HTTP access is deeply based on it via Apache's mod_dav_svn integration. WebDAV layers over regular HTTP GET/POST routes to provide quite sophisticated versioning of web resources at the same endpoint using just additional HTTP verbs (though it also is/was often routed at URLs with a special prefix).


source.php.old.ReallyOld.bak.thistimeitsold.old


I wonder how much PHP source code is publicly reachable out on the Internet because people would do this without realizing that mod_php wouldn't treat it like a PHP file anymore, so an HTTP request to it would dump the file's source code.


index.php~ for anyone using Emacs


Exactly. When it is a small change, just comment it out. But when it’s a large change, create a .old.backup.2013-12-07.dontdelete.


source.php_newnewnew


I tip my hat to your team and I owe you all beers, my friends. You are doing the lord's work.


I love the internet dearly


> Until nobody believed us.

I remember having this problem with Gmail's "1GB for everyone!" April 1 announcement.


I think April 1st should definitely be "Crazy-but-real announcement day". It'll take some attention off the unfunnier-every-year "jokes" which have turned a pleasant and fun yearly occasion into the internet being unusable for ~36 hours.


Lee Valley Tools traditionally “announces” a new fake tool on April first, but will sell it if there’s enough interest. Most have been discontinued by now, but one is still available for sale: a blank tape measure so you can make up your own dimensions.

https://www.leevalley.com/en-ca/shop/tools/hand-tools/markin...


The BBC usually does a “stories which sound like April 1st jokes, but are actually true” compilation on that day. I always found it funnier than the real April 1st jokes. It also kind of shows off their fact-checking and news-gathering muscles, that they are able to pull the crazy-but-true out of the sea of general craziness.


You mean the internet being actually lighthearted and fun (like it used to be) for 36 hours?


There is nothing funny or lighthearted about modern-day Internet April Fools. It is too ingrained in ‘social media marketing’ culture at this point. What used to be ‘people having fun’ is now a temporal dumping ground for a bunch of unfunny forced jokes pumped out by soulless marketing teams as some sort of brand awareness exercise.

Bring back Google TISP I say.


Bring back Slashdot, OMG Ponies!


You sound like the kind of person who thinks saying "it's just a prank bro" makes a variety of awful behaviour ok.

AF is a day with a mix of corpos trying So Hard To Be Cool (remember this shit? https://www.theverge.com/2016/4/1/11344044/google-gmail-mic-...) and plainer-than-usual disinformation campaigns.

The only redeeming thing about that day is I get to have an excuse to not be reachable for a day and can be disconnected for a while. Hopefully my loved ones don't end up in the hospital on bad timing.


Google used to have some great jokes though. TiSP was great.

https://archive.google.com/tisp/index.html

As with any prank or joke there is a skill and art to telling a good one. But I don't think we should let bad pranks outlaw humour in general.


That’s full of subtle gold:

> #6 Insert the TiSP installation CD and run the setup utility to install the Google Toolbar (required) and the rest of the TiSP software, which will automatically configure your computer's network settings.

As well as some less subtle jokes :)


Amazon's Dash had the most prominent response, imo.


I recall needing/wanting to clone only a specific folder from a repo and the best solution being to use SVN. I'm pretty sure I used that trick a few times and even shared it a couple of times. Don't know if it's possible with git now, but that was how I discovered this feature and it seemed eminently useful to me at the time!


That's called a "sparse checkout" in git-land.

https://www.git-scm.com/docs/git-sparse-checkout


A sparse checkout isn't quite the same thing. It still has to clone the whole repository, it just only puts part of it in the working tree. With svn, you don't have to pull down anything for other directories. More recently, you can combine a sparse checkout with partial clone to get more similar functionality, but it isn't exactly simple.
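A rough sketch of the combination, against a throwaway local repo (all names here are illustrative; adding `--filter=blob:none` to the clone makes it a partial clone, which GitHub's servers support):

```shell
set -e
# Build a throwaway repo with two top-level dirs to demo narrowing
# a clone down to one directory.
tmp=$(mktemp -d); cd "$tmp"
git init -q src
(cd src && mkdir docs code \
  && echo a > docs/readme.txt && echo b > code/main.c \
  && git add . \
  && git -c user.email=a@b -c user.name=a commit -qm init)
# Clone, then keep only docs/ in the working tree. With
# `--filter=blob:none` on the clone, the other blobs would never
# even be downloaded (requires server-side support).
git clone -q "$tmp/src" wc
cd wc
git sparse-checkout set docs
ls   # only docs/ remains checked out
```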


FWIW, it's called a shallow clone:

https://git-scm.com/docs/shallow


That is yet a different concept, where it doesn't include the full history, but without a partial clone it still pulls the entire tree for the head commit.
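To see the difference on a toy repo (names made up): the shallow clone below still contains every file at HEAD, just not the history behind it.

```shell
set -e
# Local repo with two commits; a shallow clone of it carries the
# full tree at HEAD but only one commit of history.
tmp=$(mktemp -d); cd "$tmp"
git init -q src
(cd src && echo one > f && git add f \
  && git -c user.email=a@b -c user.name=a commit -qm c1 \
  && echo two > f \
  && git -c user.email=a@b -c user.name=a commit -qam c2)
# file:// forces the real transport so --depth is honored.
git clone -q --depth 1 "file://$tmp/src" shallow
cd shallow
git rev-list --count HEAD   # prints 1, though upstream has 2 commits
```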


Thank you.

A few different SVN -> Git migration tools failed to migrate a large legacy repo at a previous employer. I thought I was doomed to be stuck on subversion forever.

It was GitHub's migration that finally worked and got us migrated across.

Funnily enough, we were only on svn because I had previously been part of an effort to migrate them to svn when I'd joined a few years prior; before that they were using an old TFS system where developers would block each other by checking out files, which would lock them for the whole repo.

I knew that switching immediately to git would have been too much of a culture shock, so I got them over to a centralized VCS (svn) where people could at least work independently first. (I also have a soft spot for svn anyway; I think for many small teams it works just as well as git, with fewer opportunities to shoot oneself in the foot.)


FWIW we aren't planning to turn off Subversion import [1]. Only the two-way bridge where you could keep using `svn` against the Git repo.

[1] https://docs.github.com/en/get-started/importing-your-projec...


Is SVN really just not used anymore? Why not continue supporting (or even adding more) alternative version control systems?


This is the right way to do April Fool's.

Similarly, I added support to del.icio.us for color: urls for april fool's. It worked, correctly, everywhere.


what did it do?


instead of a url, it would just show a rectangle of color.


And I just want you to know that I know of SVN's shortcomings, but I do like its comprehensibility very much, and though I use git, it has caused me a lot of unnecessary pain. FYI.


If my memory is correct, the content on svnhub.com used to look different back then. Was it related to the April Fool's Day joke?

https://svnhub.com/


As part of the deprecation plans, I updated the look of that site to match modern GitHub branding a little better.


I've always been really curious how this works - from what I remember back in my subversion days, both sides have things which are very hard to represent in one another.

I imagine some of those are overcome by enforcing heuristics (e.g. the branches/tags/trunk hierarchy is mandatory and has business logic run based on it), but I'm really curious if there was ever a more detailed writeup on how it works?


> As one of the GitHub cofounders and the brainchild of this particular feature, I want to let everyone know that this is maybe the funniest thing I've ever done.

Obviously it's unimportant compared to the rest of the post, but, in case you like to know, the feature is your brainchild. You would be its brainparent, I guess.


It sounded right in my brainpan.


> brainchild

I have to point out that you've reversed the meaning of this word...


Oops, so I have. Or perhaps GitHub’s svn invented me…


> … I'm surprised to some degree that it's still running 13 years later …

Nothing lasts longer than a temporary fix.


Thank you for building this. Best 13 year “joke” ever — and I do think it also did some good.


I fondly remember that blog post, having a play, and my sheer delight in finding it worked. I've always thought of it as the April Fools' joke that kept on giving.


> when nobody is really facing that issue anymore

heh heh heh, yeah nobody


Y’all should sunset it on April 1st.


My company only moved from subversion to git 3 years ago.


This was such a help when selling to Enterprises!


Where did the name slumlord come from?


> I'm surprised to some degree that it's still running 13 years later when nobody is really facing that issue anymore.

It's still running because Subversion has a better CLI.

PS The joke is not that funny when it lasts for 13 years. Ha-ha, how funny (not).


Only partially related, but since they mention it at the bottom of the post: I really like GH's "brownout" approach to discontinuing services. They've done it a couple of times before (like this one[1]), and it's always struck me as a very sensible way to handle deprecations on such a massive service.

[1]: https://github.blog/changelog/2021-08-10-brownout-notice-api...


Make sure to get the rate right, like 1/1,000 or 1/100,000. Poetry did a brownout of its install script at a 5% failure rate. With a matrix of builds, users' CI had closer to a 100% chance of failing.
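Back-of-the-envelope, assuming independent jobs: with a per-request failure rate p, a matrix of n jobs fails with probability 1 - (1-p)^n.

```shell
python3 - <<'EOF'
# Why a "small" brownout rate breaks CI matrices: at p = 5% per
# request, the chance that at least one of n independent jobs
# fails is 1 - 0.95**n.
for n in (1, 20, 60):
    print(n, round(1 - 0.95**n, 3))
# A 20-job matrix already fails ~64% of the time; 60 jobs, ~95%.
EOF
```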


These are also known as scream tests. We use them internally at $dayjob as part of EOL'ing APIs. Are they not common?


Lol, "brownout" is such a nicer name, but I've always known them as a scream test.


They're ever so slightly different.

A scream test is to discontinue the service entirely and see who screams, akin to pulling the plug on a server in a rack when you've got no idea what's running on it, then plugging it back in when you find out it's the finance team running payroll software.

Brownouts are typically short-lived, purposeful errors or aborts, either to shed load and prevent a failure from capacity exhaustion, or to cause issues that a human investigates and traces to the sunsetted service, raising awareness that something needs to be fixed before it's totally broken.


I have no clue! I just appreciated them as a lowly user.


We started using feature flags to disable, rather than enable capabilities. Turn them off and see if anyone notices. Ramp up the rate. If you identify some cohort that needs the feature, keep it on for them, but isolate them until you figure out a migration plan to close whatever gap you may have (or at least some way to massage unhappy customers if you aren't going to support some legacy thing).


This reminds me of when Slack shut down its IRC bridge [0], and in a pretty negative way. I get that usage is down, but that tends to happen. Supporting niche users is how you win customer goodwill. The active users of SVN on GitHub clearly aren't going to switch to git, and I can respect the maintenance costs, but they're probably just going to switch to a different provider. For them, their workflow is now in danger, and they have to solve that problem in addition to whatever they're working on. I always thought it was a lovely little feature GitHub supported, and I'm sad to see it go for this reason.

[0]: https://it.slashdot.org/story/18/03/08/2049255/slack-is-shut...


There are some differences as well: GitHub gives a year of lead time, Slack gave two months. IRC and XMPP are fundamentally different workflows from that horrible web UI; subversion (for basic commands) is not all that different from git (`alias svn=git` can almost work).


I relate, with Atlassian discontinuing their Jira Server product. I suppose they did the analysis and it wasn't worth their dev time, but there are some customers who CAN'T move to Jira Cloud for compliance reasons.

So now we are stuck. Stick with unsupported Jira Server, pay $40k/yr for Jira Data Center (!!) or switch with all the business costs associated with that.


They want those subscription revenues, and it's as simple as that. In 2-5 years, Server will be back like never before, but you'll be paying a subscription rate.


Sounds about right! And look, so much less expensive than Data Center per year!


Given the datacenter product still exists, I'm fairly sure dev time is not why Atlassian does that.


True, that makes sense. Although managing two products vs. one (Server and Data Center), even if they are mostly the same, is not free. As @adra noted, subscription revenue (although there are always support payments) is a likely driver.


To be honest, the timeline GitHub has given here is very generous (1 year). Most companies (even big ones) these days will shut down with a short 30 days' notice.


> The active users of SVN on GitHub clearly aren't going to switch to git

Seems likely to me that some significant percentage of GitHub SVN usage comes from decade-old scripts running on a build server somewhere. I suspect most of those users will choose to rewrite the script rather than moving to a new provider.


This reminds me of the time on Something Awful that radium banned the one user that was using WebTV.

> About 91% of visitors are on Windows. Mac users make up 5% and Linux is 2%. The other 2% are permabanned IRC trolls browsing the forums with a text-based browser written in Ruby on OpenBSD. Oh yeah, we have one guy using WebTV but I banned him because WTF.


Lowtax banned Phoenix/Firebird at one point, early on. He didn’t like people complaining that stuff didn’t render correctly. He only allowed Internet Explorer for some time. It wasn’t funny.


Hadn't heard that one before. Radium's incompetence was really something.


He may very well have just made that up.


The one thing I really miss about svn is the central, authoritative, auto-incrementing revision numbers. Git hashes are less friendly to use; in the svn days it was easy to tell at a glance if you had an earlier or later version.

Yes, I know the many ways that git is better in practice, and would miss some of the workflows git's nature allows if they were gone. But we really did lose something too, especially when running git in "quasi-centralized" mode such as on GitHub.


It's not as simple, but you might be able to get what you want with `git describe`. E.g., a working copy of mine now says `v0.7.5-43-g9060dbf`: 43 commits after tag `v0.7.5`, hash `9060dbf...`.
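If you want to see the format on a toy repo (all names here are illustrative):

```shell
set -e
# Minimal demo of `git describe`: one annotated tag plus one commit.
tmp=$(mktemp -d); cd "$tmp"
git init -q demo; cd demo
g() { git -c user.email=a@b -c user.name=a "$@"; }
echo 1 > f; git add f; g commit -qm c1
g tag -a v1.0 -m "release v1.0"
echo 2 > f; g commit -qam c2
git describe   # e.g. v1.0-1-g<hash>: 1 commit after tag v1.0
```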


It can be done, but probably shouldn't: https://github.com/zegl/extremely-linear/commits/main


> With the shit ("short git") wrapper, you can use commands like shit show 14, and shit log 100..150

I found this amusing.


There is no immutable and consistent way to do this in a distributed system.


Mercurial supports those, but they're only guaranteed to be consistent per clone. So it mostly helps you when working locally. On the downside, inexperienced developers love to refer to commits by their sequential revision numbers, potentially causing confusion.


Maybe you can count the first parent commits on mainline, and then reverse that list.
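Roughly, with a throwaway repo (names made up; this needs a reasonably recent git for `init -b`):

```shell
set -e
# Two commits on main, one on a feature branch, then a merge commit.
# --first-parent then walks only the mainline side of the history.
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main demo; cd demo
g() { git -c user.email=a@b -c user.name=a "$@"; }
echo 1 > f;  git add f;  g commit -qm c1
g checkout -qb feature; echo 2 > f; g commit -qam c2
g checkout -q main;     echo 3 > h; git add h; g commit -qm c3
g merge -q --no-ff -m m1 feature
# An svn-style sequential revision number for mainline:
git rev-list --count --first-parent main   # prints 3 (c1, c3, m1)
```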


Git doesn't have a good concept of a mainline branch when going back past a merge. It simply records more than one parent for that commit and not which branches they were on before they got merged.

Mercurial does somewhat better in that department by storing the branch name in the commit, so you can pick the parent on the same branch reliably.


I don’t understand what you’re saying. If you have a main branch which is always merged into then `--first-parent` will only list merge commits and regular commits on that branch. I have never seen it fail on our main branch.

In `git merge feature`, `feature` will become the second parent while the commit that you are on will become the first. The linear mainline history falls out of that.


It fails if someone merges main into their feature branch, and then performs a fast-forward merge of their feature branch to main. Now the two parents are in the opposite of the expected order, and --first-parent will follow along the feature branch. This can cause the new state of main to have a lower number of first-parent-commits than prior to the merge!

However, if you have infrastructure that prevents fast-forward merges to main (prevent developers from pushing directly to main, allow only PR merges and disable fast-forward merges for that), then the `--first-parent` approach can work.


> However, if you have infrastructure that prevents fast-forward merges to main

I did say merge, did I not? Yes.

> > If you have a main branch which is always merged into then

If fast-forward is a “merge” too then fuck it, I can’t be bothered to use Git lingo since every idiotic little thing needs to be qualified (see also: Git tag, which is sometimes an annotated tag but sometimes also its cousin, the “lightweight tag”).

I guess I should have said “merge merge”. Huh.


> I did say merge, did I not? Yes.

You said "git merge feature", which _does_ perform a fast-forward when possible. In my experience, developers on a large enough project will inevitably break Git in a creative way, and this is one of them.

This is the default because "git pull" performs a merge by default. You _want_ that to fast-forward when possible or everyone will create a merge commit every time they pull updates.


All these technicalities.

For someone to break the mainline parentage they would have to:

1. Do that ugly merge of mainline into their feature branch[1]

2. When they are ready to “incorporate” their commits into mainline: do a fast-forward since that’s apparently possible

But the proposed setup contradicts this chain of events: if their Git config was set up to do a fast-forward when possible, then a proper merge (a merge commit) wouldn’t have happened in stage 1 if mainline and `feature` had not diverged. And since that means that the two have diverged (evidenced by the merge commit), you cannot do a fast-forward in step 2.

And even if the above somehow is not true: step 2 is impossible because now `feature` contains a merge commit from mainline into `feature`, which mainline does not have. So a fast-forward is impossible.

What am I missing here?

[1] Oh right, I have to be specific now: a proper merge, a merge commit. The one with two parents, not an octopus one. Explicit enough already?

> You _want_ that to fast-forward when possible or everyone will create a merge commit every time they pull updates.

Surely this makes only a tiny difference in practice. We’re just a handful of developers and people usually diverge from mainline before they get their PR merged. So there are merge commits going from mainline into feature branches absolutely everywhere, because only two or three of us use rebase.

But I do in fact seem to recall talking to one of the other developers about him breaking the conventional commit parentage at some point in the history. So there is some bump there, a few years back.

But the usual story is that the people on our project do all those nasty merges on their own branches and then use the “merge” button in the Web UI when they get their PR approved. And there is in fact a fast-forward option there, but it’s not like we ever get to use it (unless we rebase) since mainline has diverged already. (I wouldn’t personally use it for “merging” (incorporating my changes) into mainline.)


People will tend to think of a fast-forward as a "merge" of sorts, since it will happen when you run `git merge`, when possible.


I've seen a number of places use the "Build Number" from their CI tool on the mainline branch to represent a similar notion. Presuming you've "centralized" things, this tends to work well.


The one thing that I also miss. It even gamified my development somewhat. It was satisfying to see how much the revision number went up, and I also loved when I hit a special number, like 1024.


Didn't even know GitHub had svn support... it's been about a decade since I've even had to interact with a subversion repo professionally.


I remember when they added Subversion support; I thought it was hilarious. And it worked!

This quote from the linked blog post made me raise my eyebrows though:

> In 2010…it was not yet clear that distributed version control would eventually take over, and even less clear that Git would be the dominant system.

I think it was actually extremely clear that Git would win. It had a guaranteed audience by virtue of hosting the Linux kernel. And forget even about its distributed nature; Git was already better at the centralized model than "centralized-only" version control systems ever were.

When Subversion came out I was overjoyed; finally someone had fixed most of the broken things about CVS. I jumped on it with gusto. But when Git came out it was so much better than SVN that I was blown away. And it wasn't long before there were great tools to translate between SVN and Git. By the time Github released its SVN bridge feature I think the place I was working at had already used those freely-available tools to move our old SVN repos to Git with full history. It was so easy to do! The only hard part was organizing all the developers to stop using the SVN server and switch to pushing to the Git server at the same time.

Sometimes a new technology comes out and it's immediately obvious that it's the future. The ones that come to mind for me are Git, the iPhone, and node.js.


> I think it was actually extremely clear that Git would win. It had a guaranteed audience by virtue of hosting the Linux kernel. And forget even about its distributed nature; Git was already better at the centralized model than "centralized-only" version control systems ever were.

I think you're forgetting about Mercurial, which was also created by a Linux kernel developer around the same time. It brought all the same benefits of git, but had a simpler command line API and better cross platform support. Mercurial saw wide use, especially in large corporations like Facebook.

Indeed, a lot of people will say that it was the success of github itself that pushed git over the top. In a world where bitbucket won instead of github, we could all be using mercurial.

Although I've never used it, I'm not sure how much of an improvement either of these was over BitKeeper. The primary impetus to create git and mercurial was licensing changes in BitKeeper, not technical deficiencies.


Maybe I'm the weird one, but Mercurial never made sense to me. I thought that Git's model made perfect sense.

I never tried BitKeeper, so I can't speak to that. But being proprietary seemed to doom it.


I also do not understand why people claim that Mercurial is more beginner-friendly. In particular, putting relatively basic features into optional modules is very confusing in the beginning.


Just because I don't get to talk about this much: as far as centralized version control goes, I liked Perforce when I used it. It could store everything (assets, builds, source code) all in one, and you could do fine-grained checkouts (I think git approximates this with sparse and shallow clones, but they're hard to get right).

It also let you put permissions on the repository itself, so if you didn't have the correct ACLs you couldn't see part of the code. It was really good for monorepos.

It has its quirks, and when I used it, could be a bit slower on the execution side, but generally thought it was really nice.

I also maintain that Mercurial & Fossil are superior to git, but git won the marketshare so it's kinda moot now.


Perforce is still popular with games developers. It scales pretty well, even if you have tons of stuff in your depot, and since all the state is on the server the disk space cost locally is just the data. As you say, you can just put everything in there, and use it as your centralized file store for distribution of absolutely everything. Not unusual to have the CI system just commit the results back into Perforce, and that's how everybody gets updated builds or data.

History is per-file, so (in effect) you can treat any folder as a submodule, which simplifies having an enormous multi-project monorepo. This is something I've never done much with personally, but colleagues have had good results leveraging this to look after shared code that's used by multiple projects at different revisions.

The check in/check out model is also quite easy to reason about, and works well for the non-mergeable binary files that most games developers work with. (Programmers seem to kind of like it when they have some horrid complicated thing to get to grips with, that comes with a pile of new terminology and endless ways to cause yourself increasingly painful problems. Artists, designers and production staff... not so much.)

Main issues with Perforce:

- nobody seems to work on the product any more. If it's changed at all in the past 10 years, it's changed in some part that I, user of p4v (the GUI client) and p4 (the command line client) haven't found obvious

- the command line experience is pretty terrible, as it's inconsistent, and not very well documented, and the supposedly machine-readable output mode quite often just feeds you nothing more than an array of the same strings you'd see using the normal mode. But I can't deny that you can usually eventually get it to do what you want, provided you can assume the server charset

- the offline experience is rather poor! But this is increasingly less of a problem over time, and that process will continue

Somebody please figure out how to eat Perforce's lunch. We discuss this occasionally at work, but we're too busy working on actual projects...


What is the best public source on how Perforce works?


Mercurial isn't superior to git; it's simpler. The stuff that people tend to "criticize" git for is useful complexity. Although I suppose most of these features are available via extension now?


> At that point in time, it was not yet clear that distributed version control would eventually take over, and even less clear that Git would be the dominant system.

Literally reading it on GitHub's blog, where the whole endeavor has the name of a source control system in it.

Jokes aside, did they know Git was going to win, or did they just take on a massive risk that could otherwise have gone wrong?


The risk. It's probably hard to imagine, but GitHub was promising yet small back then. You can't put one egg in three baskets.


My guess is that GitHub is what ensured Git won.


Git was better for everything except binaries, but if you could afford it Perforce did much better there.

CVS though, uf da. We had some rough times in gamedev land until we moved over to SVN and then Perforce with proper proxies for a distributed team.


None of those except for the iPhone were obvious to me. And in hindsight I’m happy I only switched to Git and Node when they’d matured a bit.


I'd love to hear from someone who's still using SVN professionally and can explain why they prefer it to Git.

I used SVN very early in my career. I think the only good thing I can say about it is that it's easier to learn. It's quite easy to teach a junior dev how to use SVN. Git takes much longer to master.


I don't "prefer" svn (or anything else), but if you are interested in what svn does differently than git, which can suit some use cases better, here are some suggestions:

1. The trivial commit graph makes some concepts easier. You know from the commit number which is more recent. There are no complex merges.

2. The above makes for useful GUIs. There are no git GUIs that a user can not aim at their own foot, and needs the real client to clean up the mess.

3. The lack of complex operations makes access control easier. You still have to manage hooks, but it's easier or at least with less unintended consequences.

4. All clones are sparse. Together with zero-copy data storage, this makes for some cute concepts, such as branches, tags, and directories actually being the same object. Sparse clones can be very useful.

5. The system tracks file system operations, such as moves and copies. It can actually be used for things, directly and without heuristics. (This is actually the one point where I can say I prefer the svn way. Git could have done something similar without breaking the conceptual model and it would have been useful.)

6. The metadata is, or at least used to be, more developed for non-unix file systems. I'm not sure if this is still the case.

7. Storing binary data is somewhat less bad. But still bad.

All that said, outside of specialized applications I'm not sure there's much reason to use svn for any new projects. And if you use it, keep in mind that git works perfectly well with svn repositories. They just become a linear graph with funny commits, and you can use your familiar git tools.
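A minimal sketch of that last point, using the git-svn bridge against a throwaway local repository (it assumes the svn tools and git-svn, often a separate package, are installed, and skips quietly otherwise; the repo contents are invented for illustration):

```shell
# Skip quietly unless both the svn tools and the git-svn bridge are available.
command -v svnadmin >/dev/null 2>&1 || exit 0
git svn --version >/dev/null 2>&1 || exit 0

# Throwaway SVN repository with one commit on trunk.
tmp=$(mktemp -d); cd "$tmp"
svnadmin create svnrepo
svn mkdir -q -m "layout" "file://$tmp/svnrepo/trunk"
svn checkout -q "file://$tmp/svnrepo/trunk" wc
echo hello > wc/README
svn add -q wc/README
svn commit -q -m "add README" wc

# Clone it as a git repository: the SVN history becomes a linear git graph.
# (Use --stdlayout when the repo has the usual trunk/branches/tags layout.)
git svn clone -q "file://$tmp/svnrepo/trunk" gitclone
git -C gitclone log --oneline   # ordinary git tooling works from here on
```

From there, `git svn rebase` and `git svn dcommit` sync local git commits with the SVN server.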


I’ll add to that:

8. Naming of commands is sensible and in line with other similar tools.

9. No major issues with large files in your repo.

10. No flamewars break out online if you admit you don’t understand how part of the tool works.


SVN is still simply unsurpassed by any other open source version control I know when it comes to the handling of large files and fine grained access control.

The truth is that git is pretty abysmal overall. There are commercial offerings that easily surpass it, if you are willing to pay.


I use Subversion at my gig in several places. I don't use it to manage source code revision control, but I have several processes that require business users to manage binary files (such as audio files) in order for them to be automatically deployed to production. Git or Mercurial are pretty awful at this kind of role. Subversion lets me check out at a subfolder level of a repository, and not pay the cost of having the full revision history sitting in my clone.


Core Git is not the greatest at large binaries, so for that you indeed probably want an addon like git-annex or Git LFS, or an alternative such as dvc. But in recent versions it absolutely can[1] give you only part of the history or only part of the objects: the former is known as "shallow clones", the latter as "partial clones" (there are also "sparse checkouts"[2], but those only concern the working tree IIUC).

[1] https://github.blog/2020-12-21-get-up-to-speed-with-partial-...

[2] https://github.blog/2020-01-17-bring-your-monorepo-down-to-s...
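A rough sketch of the sparse side of this, demoed against a throwaway local repository (the `--filter=blob:none` partial-clone form needs a server that supports it, such as GitHub, so it only appears as a comment here):

```shell
# Against a real server you would also skip downloading unneeded blobs with:
#   git clone --filter=blob:none --sparse <url>
tmp=$(mktemp -d); cd "$tmp"

# Build a small repo with two top-level directories.
git init -q origin
mkdir -p origin/docs origin/src
echo d > origin/docs/readme.txt
echo s > origin/src/main.c
git -C origin add .
git -C origin -c user.email=a@b -c user.name=demo commit -q -m "two dirs"

# Clone it, but materialize only docs/ in the working tree (Git 2.25+).
git clone -q --no-checkout origin copy
cd copy
git sparse-checkout set docs
git checkout -q
ls    # only docs/ is present; src/ stays out of the working tree
```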


Similarly, we used to use SVN for a mechanical engineering team doing CAD models. SVN supported a reserved checkout mode (I guess SVN calls it "locking") which meant only 1 person could check out a file at a time. Perfect for non-mergeable binary files, and with TortoiseSVN it was easy for non-SW engineers to use.

That's one thing I doubt Git will ever be able to do since it is a distributed model rather than server-based.
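For reference, the SVN locking workflow looks roughly like this, sketched in a throwaway local repository (the filename is invented; the block skips quietly if the svn client isn't installed):

```shell
command -v svnadmin >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
svn checkout -q "file://$tmp/repo" wc
cd wc
echo model > part.step
svn add -q part.step
# svn:needs-lock keeps the file read-only until a lock is taken.
svn propset -q svn:needs-lock '*' part.step
svn commit -q -m "add CAD file"

svn lock -m "editing the part" part.step   # reserve the file
svn info part.step | grep "Lock Owner"     # the lock is recorded centrally
svn unlock part.step                       # release it when done
```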


File locking exists for LFS, which you could argue is the main place you'd need it... if you want to "lock down" other files, there's ways to handle that in the process, eg. pull requests


It does? That’s a requirement for version control for cad I think (just wrote a huge comment yesterday in here about it). I might actually try to use svn in 2023.


If you're using lfs for cad, it has file locking
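Roughly like this, assuming git-lfs is installed (the lock/unlock calls themselves need an LFS server such as GitHub, so they are only shown as comments; the file pattern is invented):

```shell
# Skip quietly if git-lfs isn't installed.
command -v git-lfs >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git lfs install --local
# Track CAD files in LFS and mark them lockable (read-only until locked).
git lfs track --lockable "*.sldprt"
cat .gitattributes   # the pattern now carries the 'lockable' flag

# Against a real LFS remote you would then reserve and release files with:
#   git lfs lock path/to/part.sldprt
#   git lfs locks                        # list current locks
#   git lfs unlock path/to/part.sldprt
```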


Been working on a version control system to solve this large binary file problem as well.

It's called Oxen and is initially targeted towards unstructured machine learning datasets, but could be good for this use case as well. Would love to get any feedback on it!

https://github.com/Oxen-AI/oxen-release


Git is much better at it nowadays thanks to Git LFS which just stores the blobs.


in exchange for yet another set of things to worry about to explain to/debug for non-technical users.


I don't think the intended audience for Git LFS is non-technical users.


Then why do you mention it in response to someone describing their use of SVN because it's better for non-technical users?


> It's quite easy to teach a junior dev how to use SVN. Git takes much longer to master.

This is pretty much it: ease of use. No one really knows git (beyond 5 commands). Everyone knows how to google git commands or talk to the wizard in the company who can fix the mess someone created. While in theory git is more powerful, in the art of getting things done easily SVN wins: simple, stupid, and it works.


I remember having all kinds of strange issues with subversion that were hard to resolve as a novice, not being able to commit for cryptic reasons, etc... Perforce is the only tool I've used that was actually pretty simple, and even then I'd say that dealing with merging streams was not nearly as easy as merging git branches.


Heh, funny story about Perforce. I bumped into one of their founders at a camp I go to, real down to earth dude. We had struck up a conversation over a few drinks, and the conversation drifted over to what we did. He asked "You know version control software?", and I replied "Like Git?" and he just responded with an exasperated sigh.


I am using it for Sciter development.

For my case (one or two developers, hundreds of read-only observers, and a number of patch senders) it has been a perfect tool so far.

TortoiseSVN (Windows) and analogous tools (MacOS and Linux) are really unbeatable.

I also use Git from time to time, but it is really far behind usability-wise.

Five commands (commit, update, revert, merge, and switch) combined with FS explorer visualization are perfectly enough. Rarely: shelve (create patch) and unshelve (apply patch).


As others have commented, my team at work uses SVN because it's what somebody is confident managing over the svn+ssh protocol, and we want to keep it internal. Some projects use a "release" branch to which we merge when a deploy makes sense; most projects don't have this branch and just use "trunk"/master. No more is needed. SVN works.


We use SVN still. I'd like to switch to Git soon, given tooling like CI/CD is better/more available with Git, but apart from that it works.

That said, SVN works, and I do think the learning issue is the primary obstacle for us moving to Git.

It should be said we're a small team though, 10 devs. I can imagine Git looking a lot more attractive to a larger team.


Almost a decade ago, I was part of a 10 person team (that was growing) and while there was consensus around transition to Git, it kept getting pushed back. I eventually forced the issue by changing the deployment job in Jenkins for one of the main projects to deploy from Git instead of SVN.

I had to give several talks to existing and new employees about how Git works and how to use it. I was also that #1597 guy (https://xkcd.com/1597/). Back then most hires didn't work with Git before joining us, today of course it's different. Over time, all projects moved to Git as whenever someone was working on two projects, one Git and one SVN, they would ask "why is this still on SVN?" and soon it would be on Git.


I worked for General Electric back between 2012-2014. My department brought a trainer in who worked for GitHub for a two day class on git.


"easier to use" a great reason to prefer it. I want to write software, not manage a source control system.


"I want to write software, not manage a source control system."

Exactly!


What's better about git? I haven't used svn, but have used perforce/mercurial professionally/git professionally, and use git personally, but I find all of them to provide the same "feature set" when doing basic development: have your own branch, and merge it in to the main branch when done/reviewed.

Merging seems the same on all 3 version control systems I've used... I've heard that git branching is better(?), but haven't seen that being used anywhere really.


> I haven't used svn

Yeahhhh. Svn is centralized. You must be connected to the server to do any source control work. There is no local repository, there are no local commits. Every commit is global on the server. When you make a commit, it is pushed to the server and your coworkers can fetch it. You don't make commits locally and fiddle around and then push.

Also Svn doesn't have branches per se. You just use subdirectories. It does have facilities for merging directories and managing these "branches", but it feels real weird to be switching branches with 'cd'.

It's a very different world.

A quick read: https://svnbook.red-bean.com/en/1.7/svn.tour.cycle.html


> You must be connected to the server to do any source control work. There is no local repository...

That's not correct, technically speaking. You can create a repository on your own machine, on the local FS.

But of course it is more reliable to run it on a server, even for yourself. If on Windows, VisualSVN is a one-click solution for those who just want to have the thing working.


Am I understanding you correctly if I compare it to making the argument that, technically, you could run any Internet website from your laptop and network connectivity isn't actually required for using the web?


I am referring exactly to "must be connected to the server to do any source control work. There is no local repository...", which is plainly wrong.

The SVN client supports the "svn:" protocol and "file:" equally well. A server is not mandatory with SVN: you can work with repositories on your local HD or a network share.
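A minimal sketch of that, against a throwaway directory (skips quietly if the svn tools aren't installed):

```shell
command -v svnadmin >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo                    # a repository on the local disk
svn checkout -q "file://$tmp/repo" wc   # no server process involved
cd wc
echo hello > notes.txt
svn add -q notes.txt
svn commit -q -m "first commit"         # commits land straight in the local repo
svn log "file://$tmp/repo"
```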


"Server" was perhaps the wrong word. "Central repo", if you like, regardless of connection method. What I meant is that "svn checkout" does not make a new repository, as "clone" does in decentralized source control systems. You must interact with the central repository (wherever it is stored, locally or over the network) to do version control work.


I have used svn in a decentralised manner, pretty much as the person above me describes, for years. It's super simple to do. You create a local repo, populate it from "origin", and when you want to commit your local changes, you make a "foreign merge" request from "origin".

Your argument that this can't be done in svn because "there's a central repo" could have been translated to git as: "yeah, you can't work on your machine without internet because you won't be able to push to github"


> Also Svn doesn't have branches per se. You just use subdirectories. It does have facilities for merging directories and managing these "branches", but it feels real weird to be switching branches with 'cd'.

Also, this means that it's possible to do some horrifying things with branches and tags, like making a merge commit which is isolated to a single directory, or checking out a tag and making a commit to it.

Hopefully no one is actually depending on these workflows being possible, because they make project history extremely hard to follow.


> I've heard that git branching is better(?), but haven't seen that being used anywhere really.

How are you merging without branches?

Git is mostly faster and more flexible than svn, and the merging works far better. Unless svn's merging has improved in the past decade or so, which is entirely possible.

When I switched to git from svn, the main differences were: merging was usable, making new branches and switching branches and such were _instant_ instead of many seconds, and I could work more flexibly (git doesn't require being connected to the server).


Yes, that’s actually the thing about branching in SVN. Everyone remembers how awful it was 10+ years ago under SVN 1.4 and earlier, but it has improved immensely since then. Combined with modern client tooling, i.e. TortoiseSVN, problems with merging have been almost non-existent for a long time.

I certainly wouldn’t call SVN modern but it’s very well maintained and has never lost code on me. Many git-like features also exist now such as being able to stash some changes in order to pivot to something else for a bit. Except for the central server being a problem for some use-cases, SVN just works.


I meant, using branches in a way that's "better" - whatever people who use git mean when they say that.

As I said, I haven't used SVN. It just seems like perforce and mercurial are basically "identical" for the ways I use them at least.


> Merging seems the same on all 3 version control systems I've used... I've heard that git branching is better(?), but haven't seen that being used anywhere really.

Much better working merges were a reason many people moved away from SVN, but since then SVN has just gotten better at it.


I still use SVN professionally, but new projects use git.

When you've got a ton of projects and their associated deployment pipelines all using SVN, the cost of migrating is non-zero. Someday I'll probably start (slowly) migrating, but the benefits just aren't great enough to make it a priority.


My workplace still uses SVN. Easy to colocate code & hardware/mechanical files. For the kind of work I do, the advanced features in Git wouldn't really provide any benefit but changing would be a pain.


I used svn in my previous job, not out of preference, but because that's what the boss preferred. But, I have to say, as a user (who doesnt care about the under-the-hood-efficiency arguments) I don't get what the fuss is about.

E.g., people always like to point out that git is decentralised and svn is centralised. Bullshit. Centralisation and decentralisation are concepts, not absolute requirements. It's similar to the whole "you can't write object oriented code in c, you need c++" argument. Nope, you totally can. And also you can totally write completely non-object oriented c++. Case in point, most people who use git these days rely on a centralised use of git, namely through github. At my last job my use of svn was completely decentralised and worked like a charm.

I think the big thing for me is that, svn vs git shares a lot of similarities to the c vs c++ scenario: svn feels more low level, in that it is super flexible through its simplicity, but relies on good conventions as a result. git, on the other hand, tries to be opinionated about how things are to be done, and provides a specialised command for each task.

git feels very much like a "take-it-or-leave-it" solution to me. svn feels more like the early linux days: if you can afford to play around a bit to set up things just how you like them, then you can end up with a pretty sweet gig, that does exactly what you want it to.


One of my friends is using it at work. They have been using SVN for a long time. They have been planning to migrate to Git for a few years. The problem is that customers pay for features, not for the version control you are using. SVN still works, and the inertia is not zero.


Not using it now, but the last project I was on that did was at a defense contractor about 5 years ago.

Basically, the lead engineer that was in charge of the new project set up what he liked before the rest of the team was in place. So when I was hired on the team consisted of me, a senior engineer who hadn't used subversion in years, and a bunch of junior engineers who were completely unfamiliar with it. No amount of pleading would get him to migrate to git.

It was partially a control thing, I think. He liked having full control of the repo to himself. The system architecture was also... dated.


> Why do people still use Subversion on GitHub, anyhow? Besides simple inertia, there were workflows which Git didn’t support until recently. The main thing we heard when speaking with customers and communities was checking out a subset of the repository–a single directory, or only the latest commit. I have good news: with sparse checkout, sparse index, and partial clone, Git can now do a pretty decent job at these workflows.

There is also this: https://github.com/msofficesvn/msofficesvn


Migrations exist for two reasons:

1) adoption. You're early in the Adoption Curve and you're trying to grease the wheels to make it easier for people to come to you

2) modernization work. You're brought in to consult for a company because someone has declared moral bankruptcy and decided that neutral third parties are more likely to dig them out of the problem. So as a consultant you show up week one and you say, "Dear God, they're still using X?" about five times. So now you're proposing to do migrations that all of your peers finished up seven years ago.


We used SVN in an application to grab a single folder and its contents from a repo on GitHub. It was the quickest and easiest approach at the time (that I knew of) and worked well.

We switched a year or so ago to git sparse checkout, which was released in 2.25.


While I'm deeply thankful that I don't have to use it, the most famous one I know if is Oracle's VirtualBox: https://www.virtualbox.org/svn/vbox and /svn/kstuff-mirror and /svn/kbuild-mirror


Less of a problem with LFS, but it's simple, functional, and you can store whatever large files without any worry.

<I am a git user, who prefers it to anything>


There is only one thing about subversion I miss when using git -- the deletion of branches (which were, admittedly, just directories in a folder called branches) was version controlled, so you could recover deleted branches.


Just in case - you can in git as well. Branches are just pointers to a commit SHA, so you can always create another pointer to the same commit and your branch is back.

https://stackoverflow.com/questions/3640764/can-i-recover-a-...
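A sketch of the recovery in a throwaway repository, assuming the commit hasn't been garbage-collected yet:

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m base
git checkout -q -b feature
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "work on feature"
git checkout -q -          # back to the original branch
git branch -D feature      # oops, deleted

git reflog                 # the feature commit is still listed here
git branch feature 'HEAD@{1}'   # point a new branch at it: it's back
git log --oneline feature
```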


I suspect GP knows this, but there's no way to go back and see what the branch ref was at a prior point in time. I believe they wish that refs themselves were versioned.

And yes, I know they're kept in the reflog. But that is short-lived and (I believe) local rather than eternal and global like everything else in git.


The reflog does exist on the server too, but it's not really accessible without access to the git repo on the server, so not much use in eg, GitHub. That being said, GitHub also exposes some of these details, for example: in PRs when you force push on your branch.

So in the general sense, there's no reason we can't have exactly what's being discussed, but it just doesn't exist yet.


> The reflog does exist on the server too, but it's not really accessible without access to the git repo on the server

Though it only logs refs that the server itself has seen, which is more or less what I meant by "local". None of its state is shared during push/pull, it's computed entirely based upon what some individual client/server directly observes.


Yes, as you say, I'd have to write the commit names down, and eventually they would get garbage collected.

You could tag, but then you are just polluting your tag namespace with old branches.


You can also search logs to recover a commit, besides the other ways mentioned.

> ok but then I lose its nice handle.

Then don't delete the handle? Maybe rename it with a `deleted/` prefix?

It's hard to imagine what the expected behaviour is when branch names are just local convenience labels.
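The rename idea, sketched in a throwaway repository (the branch names are invented):

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m base
git branch old-feature
git branch -m old-feature deleted/old-feature   # archive instead of delete
git branch --list 'deleted/*'                   # still there, out of the way
```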


I miss the name: SVN was pronounced Svend (SVeN) among my friends, which is a Scandinavian male name.

"Have you committed your changes to Svend yet?", "No, Svend is experiencing some downtime right now", that was some fun interactions that got us through computer science.

Git is a fun name too, but it is not nearly as funny to me.

TortoiseSVN was a great SVN client integrated into Windows File Explorer, and it looked great when you used the classic theme on Windows XP.


Conversely, the deletion of branches is version-controlled in git, because when someone force-pushes or deletes a branch on the remote it can still be restored from anyone's clone :)

I've seen this happen a few times with a dev running `git push -f` with the intent to update their dev branch, except that it would force-push all branches in their clone that were associated with the remote, like master. It probably doesn't happen these days though, because the default was changed at some point to only push the current branch.
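For reference, the setting involved is `push.default`: before Git 2.0 it defaulted to "matching" (push every branch that exists on both ends), and the modern default, "simple", pushes only the current branch. Shown in a throwaway HOME so real config stays untouched:

```shell
# Use a throwaway HOME so the real global config is not modified.
export HOME=$(mktemp -d)
git config --global push.default simple   # explicit, though it's the default since 2.0
git config --global push.default          # prints the configured value
```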


<sigh> I still use the svn bridge as my primary way to access GitHub repositories: I get around much faster in svn than in git and have little need for a VCS except in special circumstances. Kinda dreading this.


Not sure I’m familiar with any projects using Subversion. I think the most odd-ball version control I’ve seen in prod is SQLite using Fossil (which does seem to have benefits over git) and OpenBSD sticking with CVS.


SVN was a relatively sane option, especially before git was published. But in 2023 it's mostly been replaced.


Subversion fixed all the truly horrific things about CVS, it's quite adequate for centralized source control.


Reminds me of Linus Torvalds' Google Tech Talk on Git. https://youtu.be/4XpnKHJAok8

> I see Subversion as being the most pointless project ever started, because the slogan for Subversion for a while was "CVS done right" or something like that and if you start with that kind of slogan, there's nowhere you can go. There is no way to do CVS right.


I mean, he was definitely selling a product there. SVN was pretty good, barring the merge logic being a PITA.


I don't know if he was selling a product. Git is a FOSS utility, after all.

GitHub and others came much later, and have no affiliation with the Git project or Torvalds (some might not know this).

Torvalds created Git for the Linux Kernel folks, and didn't really care if it was used anywhere else.

Torvalds only went about creating Git because BitKeeper and the Kernel folks had a very public falling-out (much to BitKeeper's ultimate misfortune - who even knows about BitKeeper these days). He, and the Kernel folks, were very happy with BitKeeper up until then - and a lot of the BitKeeper features that were unique at the time ended up landing in Git.

This tech talk was because people outside of the Kernel became interested in a similar workflow - namely distributed and "free" branching.

Also, I can't help but marvel... this one person created not only the most prolific Kernel in existence - but also the most prolific VCS. I have a hard time just getting out of bed in the morning...


Selling has more connotations than earning money. If he is at a Google event he is promoting it.


"cvs add image.jpg" and then discovering you can't just do that 2 years later after a fresh checkout was fun.


SVN had a great feature set, a great way to handle merges, and sane defaults.

Too bad they hit some limits as repos became larger and larger, tried to implement a new storage engine, and basically never managed to iron all the bugs out of it in time, while the older storage engine entered feature freeze and was basically abandoned.

I kept checking in on them for years, and there were file corruption issues for so long that I lost interest in tracking the status, as by that point I had wrapped my head around the git model.


> and a great way to handle merges

Nope. That took a very long while to be added. When the current crop of DVCS started appearing, one of their significant draws was actually that, by necessity, merges worked well. At the time, SVN made it extremely easy to fork branches, and almost impossible to merge them.

Per the official doc (https://svnbook.red-bean.com/en/1.7/svn.branchmerge.basicmer...) merge tracking was introduced in subversion 1.5, released in May 2008. That's 3 years after the initial release of Git, Mercurial, and Bazaar.

Before then you had to keep note of what you'd merged by hand so you didn't double merge it, and that was distinctly not fun.


> SVN had a great feature set, and a great way to handle merges, and sane defaults.

Are you kidding? Subversion had a notoriously awful way of handling merges, which was a huge driver of people onto Git as soon as it appeared. You truly had to have been there to believe it, but in all but the simplest of merge scenarios, declaring branch bankruptcy and manually moving things back into the target branch by hand was your only real option. Early to mid 2000s, the most common team branching strategy I saw with subversion was "there's only one branch and everybody does all the development in it because god help you if you try to put it back together after branching for something".

It wasn't until well after the momentum was clearly in Git's favour and a huge chunk of the user base was gone that Subversion finally fixed it to not be complete dogshit.


It was great: it basically forced developers into the equivalent of a git rebase. Maybe the CLI wasn't the best, but that was a problem that Tortoise solved quite well, in 2002.


Apache httpd uses subversion.

That's the only example of a well-known project I could find that's not directly related to subversion itself. I'm sure there are others, but all the other ones I could recall have since migrated (usually to git).


Well, I think it is a bit related, Subversion being an Apache project: https://subversion.apache.org/


Many Apache projects use git these days. I looked at some other well-known Apache projects, and they all used git; sometimes on the Apache git servers, sometimes on GitHub. I'm sure there are some others that use subversion, but it's not common.

AFAIK for the most part Apache (the foundation) is fairly hands-off in telling people how to run their Apache project, and every project is mostly free to "do whatever".


Subversion was the main thing in the open-source space in the before-git, before-GitHub mid-2000s, when projects were hosted on SourceForge. Old times!


I was about to respond with WebKit but it looks like they switched to git last July.


Is there an alternative way to download select folders without svn?


> Why do people still use Subversion on GitHub, anyhow? Besides simple inertia, there were workflows which Git didn’t support until recently. The main thing we heard when speaking with customers and communities was checking out a subset of the repository–a single directory, or only the latest commit. I have good news: with sparse checkout, sparse index, and partial clone, Git can now do a pretty decent job at these workflows.


With these features I can have a local clone with just a subfolder, and without downloading all the history. But it still comes with git - which I'll then have to remove for my use case.

I really think that grabbing some files from a certain commit should be a completely trivial oneliner.


If you are already using GitHub you can write your own short scripts with GitHub's file raw URLs (which include a specific commit hash) and curl/httpie.

Obviously, that's not easily portable to other hosts (though most of the big ones have similar URL patterns that you can discover), but depending on your use case may be a handy option.

> But it still comes with git - which I'll then have to remove for my use case.

I don't know what your use case is, or why you don't have a local history anyway of the repo so that you need to spot-download sub-trees, but if this is a somewhat common workflow this seems to indicate you may possibly be looking for something like git-archive [1]. That's a tool included in the "corners" of the git "suite" to take a `treeish` and build a tar or zip directly from it (with no git metadata inside). You can use a commit or a tag name for a `treeish`, but if you wanted it to be a specific folder inside a commit you can pull the commit, examine the tree it points to until you find the tree ID inside that representing just the subfolder you are looking for. You can pass that tree ID to git-archive. (And other things that take a `treeish`.) It's not quite a "completely trivial oneliner" to write a script to do that, but it may not be that much more complicated of a script to write.

[1] https://git-scm.com/docs/git-archive
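A sketch of the git-archive route for a subfolder, in a throwaway repository (in simple cases `HEAD:subdir` works directly as the treeish, sparing the manual tree-walking; the paths here are invented):

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q repo
mkdir -p repo/docs
echo manual > repo/docs/guide.txt
git -C repo add .
git -C repo -c user.email=a@b -c user.name=demo commit -q -m docs

# HEAD:docs names the tree of just the docs/ subtree; extract it plain.
mkdir out
git -C repo archive HEAD:docs | tar -x -C "$tmp/out"
ls out    # guide.txt, with no git metadata attached
```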


In the many years I've used github, I honestly never even realized they supported svn.


When are we going to admit to ourselves that everything else is worse than git?


About time... I don't know a single person that still uses it. I can't tell you how many times I got burned doing merges in Subversion where the entire repo became corrupt. Git was a life saver when it came out and I switched immediately over to it.


> I can't tell you how many times I got burned doing merges in Subversion where the entire repo became corrupt.

Earlier in my career, in my first week at one company, I ended up corrupting the Subversion repo. I was a nervous wreck. My boss reassured me and told me how he had also corrupted a Subversion repo previously, as he restored the repo from backup.


Uggh. If there's one thing I would want to be super stable it would be my version control!


I do.

I sometimes check out a single directory from a huge repository with enormous history. Since subversion only cares about the latest commit and the directory I'm checking out, it's instantaneous.
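Roughly what that looks like, sketched against a throwaway local repository (directory names invented; skips quietly if the svn tools aren't installed):

```shell
command -v svnadmin >/dev/null 2>&1 || exit 0
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
svn mkdir -q -m "layout" "file://$tmp/repo/big" "file://$tmp/repo/small"
# Check out only the 'small' directory; the rest of the repo is never fetched.
svn checkout -q "file://$tmp/repo/small" just-small
svn info just-small | grep URL
```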


This part made me laugh "Late in 2023, we’ll run a few hours-long and then day-long brownouts to help flush out any remaining use of the feature."

I'd prefer just spam emails saying the system is shutting down


And after the email spam, it just shuts off? Wouldn't a temporary outage be a better way to alert those who aren't reading the old mailbox they used to sign up for GitHub that they need to update their software?


They had Subversion support?


github had subversion support ??? :)


what does sunsetting mean?


No!


I had no idea. This is fucking great. Hahaha. Best thing I've read all day.


How about Visual SourceSafe? I actually loved that for a small team.


> I actually loved that for a small team.

That's a joke right?

VSS is by far the shittiest VCS I've had the displeasure to use, though I will confess I never had to suffer through RCS and CVS.

It's certainly the only one which managed to destroy data. 'twas a good thing I'd already used other source control systems before VSS, because that experience would definitely have made me flee from source control systems for a while (much as Java did static type systems).


We used to call it Visual Source Shredder because of how easy it was to corrupt the database.


Are you serious? I'm not even sure if Visual SourceSafe is better than "Once a week I copy all the files into a ZIP and date it" as a strategy for version management.


For one thing, zips are far less likely to destroy data than SourceSafe, which was famous for doing just that.


Jeff Atwood had a lot of negative things to say about Visual SourceSafe back in 2006: https://blog.codinghorror.com/source-control-anything-but-so...

There's some good links there that describe the trouble VSS caused.


Lol the comments. I went through Perforce and ClearCase too.


I try to forget ClearCase.

It haunts my nightmares.

When we were migrating to git, some of the more senior engineers came up with a workflow that was described as "the best implementation of ClearCase in git that I've ever seen". It was meant as a compliment.


Worst. VCS. Ever! Can't believe anyone ever loved VSS - it was constantly corrupting the repository, people would lock programs and go on vacation, just a constant struggle.

We switched to CVS and it was immediately better, followed by SVN a few years later. I still use git like a fancy subversion, just with multiple repositories :-)


I remember checking in PDF templates into that thing and then never being able to open them ever again because VSS just randomly corrupted files.


We used it at my first job, up until a disgruntled contractor locked every single file in our repo on the day he got fired. We couldn't do a damn thing for like a week, until we migrated to... TFS, sadly, but even that was an improvement.


> But hey, if the Subversion system just works and doesn’t bother anyone, there’s no reason to make any changes, right? The reality is that there’s an ongoing maintenance cost to any software, and that goes extra for public-facing services on the internet.

This is a poor excuse. I understand that MSFT/Github feels compelled to provide a convincing reason to users, but this is not it. 99.9% of problems are caused by developers making changes or pushing new code. I've been writing software for close to two decades now--if you don't touch it, it's not going to break on its own.


The "ongoing maintenance cost of software" can also be the additional complexity in the mental model of developers that have to deal with nearby parts of the codebase. Even if no maintenance is needed for a specific feature, it doesn't always make sense to keep around.

There's also knock-on effects, e.g. an offering like this could attract the type of user that would add a lot of additional support burden for the value that the company _and_ the user gets out of the feature.


GitHub hasn't heard of Docker? It seems they could use it to run Subversion without any effect on their git hosting.

Just kidding, but remember back in the DotCloud days when they pointed out you could run Perl? That was pretty cool. https://news.ycombinator.com/item?id=2518526

> In 2010, when GitHub introduced Subversion support, the version control landscape was very different.

That was waaaaay before Microsoft acquired it.

> But hey, if the Subversion system just works and doesn’t bother anyone, there’s no reason to make any changes, right? The reality is that there’s an ongoing maintenance cost to any software, and that goes extra for public-facing services on the internet. As the use of GitHub has evolved and the number of Subversion requests has declined dramatically, we plan to focus our efforts entirely on Git.

Reminds me of how Microsoft dumped Atom (and then fauxpen-sourced VSCode before dumping the last remnants of Atom).


> Reminds me of how Microsoft dumped Atom (and then fauxpen-sourced VSCode before dumping the last remnants of Atom).

I'm confused by this. Microsoft released the VS Code source in 2015, long before they acquired GitHub in 2018 and sunset Atom in 2022.


They moved Atom into de-facto maintenance mode much earlier than that, when VSCode was much less fauxpen-source than it is now.


Just repeating "fauxpen source" doesn't mean anything to me. What actual change are you referring to?


Big overview: https://ghuntley.com/fracture/

A couple points in time:

Pylance, installed by default, won't be open-sourced. Microsoft keeps calling VSCode open source. https://github.com/microsoft/pylance-release/issues/4

Same for .NET https://twitter.com/migueldeicaza/status/1537175065380495367


To add to the list, the C/C++ extension, Remote SSH, Remote Containers, and IntelliCode are also closed source.



