Personally I would be hard pressed to bother contributing to a project not on GitHub at this point. There is a certain workflow and interaction model that GitHub projects use that non-GitHub ones do not and it is simply not worth the time investment to learn those other projects. Not only that but it allows me to easily point at my work and go "I did that" when talking with folks.
Though I wonder if "homogeneous ecosystem" effects/issues could rear their ugly head... just a random thought.
git diff master..bugfix > bugfix.patch # or `format-patch`
# now attach/upload bugfix.patch
# make sure you click around github.com to create the third fork
git remote add unnecessary-third-fork $THIRDFORK
git push unnecessary-third-fork bugfix
firefox $THIRDFORK # now click around to file a PR
# now wait for your PR to be merged
# now click around on github.com to delete $THIRDFORK
# ... unless you just leave things lying around
As much as I hate centralization, especially when the central entity is a for-profit corporation running closed software, often that ends up giving you a standardized experience that makes things easier. "Easier" doesn't have to mean fewer steps; I agree that the GitHub workflow you describe isn't simpler, but if you've done it a few times, it's mechanical and you don't need to think about it. GH even provides a command-line tool that lets you avoid most of the click-around-on-website steps.
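For instance, a sketch of that CLI workflow, assuming the official `gh` tool; "owner/repo" and the branch name are placeholders:

```shell
# Rough sketch using GitHub's official "gh" CLI; "owner/repo" and
# "bugfix" are placeholders, not real names.
gh repo fork owner/repo --clone   # fork and clone in one step
cd repo
git checkout -b bugfix
# ...edit and commit...
git push origin bugfix
gh pr create --fill               # open the PR without touching the browser
```

It still isn't fewer steps than mailing a diff, but none of them involve clicking around a website.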
To stray outside the lines with some meta-commentary: it's nice to get a well thought out response instead of the sort of kneejerk rooting-for-my-home team that's on display in the wasteland of intellectual dishonesty in the comments below.
For the most part it's the same workflow. Also, if it's a trivial change (like fixing/appending something in documentation) you don't even need to leave the browser.
Discovery is another issue... it's far easier to use GitHub semi-socially than most other platforms. Something I both love and hate is that GH doesn't have direct-message functionality. On the one hand, I wouldn't want to be bothered with a ton of end-user emails about the same issues over and over... on the other, after you've waited a week on a bug-fixing PR, it's not fun either.
Whereas when using Github you click a button, get your own copy, do whatever you want to it, and then click a button to open a pull request. You _tried_ to make it look like using GitHub.com is somehow... complicated. But it's dead simple, and you even added steps, like "waiting for the PR to merge", that are the same with a mailing list anyway.
I get it. You might like the mailing list better to avoid a single company handling all of the OSS contributions. But let's not ignore the actual good aspects of github by making up stuff. If you want to convince people to _not_ use github it's going to take more than this.
Adding a remote is generally a one-time cost and is unneeded for every PR, so adding that command (along with all the associated comments) makes it appear more complicated. The reality for most GitHub users is that they simply have to do:
`git push origin <branch name>`
> Adding a remote is generally a one-time cost
It's not a constant cost, unless you're saying you only ever intend to contribute to one project ever. It's a fixed cost that you will pay N times, where N is the number of projects you contribute to.
Having to create a fork per PR is a rather antiquated way of doing it. In my experience, you can almost always push to origin and create a new PR from the branch, but maybe I've just been lucky with the projects I contribute to.
> It's not a constant cost, unless you're saying you only ever intend to contribute to one project ever. It's a fixed cost that you will pay N times, where N is the number of projects you contribute to.
It's a constant cost in the same way that looking up where to submit your patch to is a constant cost. You will pay both N times, where N is the number of projects you contribute to.
Why am I having to repeat myself here? You can never push to origin unless it's your own project or your team's project.
> It's a constant cost in the same way that looking up where to submit your patch to is a constant cost. You will pay [...] N times, where N is the number of projects you contribute to.
In other words, it's not a constant cost.
Because you are incorrect and not reading the responses.
>you set origin to the branch you own...
They are referring to origin as the forked repository. E.g. if I contribute to nixpkgs (the NixOS package repository), I only have to fork it once, use that as my origin, and can create branches and submit PRs.
So, you are both right. If you contribute many times to the same repo, you only have to fork once. If you do a lot of drive-by contributions, you'll end up forking a lot of repositories.
(I fully agree that GitHub has a lot of overhead compared to git format-patch/diff. GitHub et al. also have some benefits in terms of communication. At any rate, diff/format-patch are not that hard, so I think any git user should learn it.)
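Concretely, the fork-once setup is roughly the following (names follow the nixpkgs example above; "me" is a placeholder username):

```shell
# One-time, per project: your fork is "origin", the project is "upstream".
# "me" is a placeholder GitHub username.
git clone git@github.com:me/nixpkgs.git
cd nixpkgs
git remote add upstream https://github.com/NixOS/nixpkgs.git
# Per contribution after that:
git fetch upstream
git checkout -b my-fix upstream/master
# ...edit and commit...
git push origin my-fix            # then open the PR from this branch
```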
That doesn't contradict anything I've written here, or anything I've written in years past on exactly this topic. But this _entire_ branch of conversation started with someone quibbling that I didn't rank configuration of remotes as a zero-cost operation. So, no, we're not both right.
I agree it's a cost per repo you contribute to. However, you can also do it reasonably cheaply with scripts. I recall you have to use hub in addition to the git command line, but once you get it set up it's basically zero extra commands if you bake it into a clone. Run a script that forks to your GitHub username and clones to your local box, do your normal modifications and commits, then git push origin and click on the URL to get dropped into the upstream PR workflow.
The fork bit only needs to be done once.
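A sketch of such a script, assuming the hub CLI mentioned above (the repo argument and remote naming are assumptions, not defaults):

```shell
#!/bin/sh
# Hypothetical "fork and clone" helper using the hub CLI.
# $1 is "owner/repo"; nothing here is an official workflow.
set -e
hub clone "$1"                    # hub understands owner/repo shorthand
cd "$(basename "$1")"
hub fork --remote-name origin     # fork under your user; fork becomes "origin"
# ...hack and commit on a branch, git push origin <branch>,
# then "hub pull-request" drops you into the PR flow.
```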
That's surely equivalently complicated as adding a second remote.
Almost every pull request I've made to a project on GitHub has started with a "git clone http://github.com/example/example.git", since they start with bringing down the source code and finding the bug in the project. Sometimes it's something I can fix, so I then need to fork the project on GitHub, add a remote (or replace origin with my fork's location), and make the commits.
That's not too difficult, but it's not easier than sending a diff to a mailing list. If any discussion is necessary, it's easier to keep track of that on GitHub. It's also much easier to see the patch 3 years later, if the maintainer wasn't interested — that's the big feature which makes GitHub (or its competitors) worthwhile to me.
(A long time ago I sent a patch to Git itself to the Git mailing list, and it was about 6 months before it was applied. However, it was applied, so they must have had some way of keeping track.)
The origin you push to is your fork of the project.
Fork, push, pull request.
I'm going to skip ahead here. You're going to replace the `add unnecessary-third-fork` command with `set-url origin $THIRDFORK`. Either that, or you swap for a `git clone $THIRDFORK` so "origin" is set as a result of the clone.
How many steps do you need to eliminate before you can match the cost of the first sequence (2 steps)? How many steps does your advice eliminate? What is the total number of steps involved in the GitHub approach? I'll wait for your answer this time.
You know those email chains that just continue to fork? Where people reply to the original (or the first couple replies) with their wall of text after a couple other replies have already trickled through?
That's what a collaborative, stateful PR solves.
> # now attach/upload bugfix.patch
Where do you "attach" your patch file?
Also, if it's not GitHub/GitLab/gerrit/reviewboard/etc. and not mailing list, what other workflow for code contribution are you talking about then?
The Github website and its fork/pull request flow has increased my productivity and the amount of things I contribute to or can maintain with some level of sanity 100x easily.
If my project uses Git, I can easily accept a patch. If someone happens to give me a patch against some old version that doesn't apply to HEAD, I can just "git reset --hard" to that version, apply the patch, and then rebase with "git rebase".
I would expect most people to be making patches out of their own git repo (using "git format-patch") anyway; they should be able to rebase first.
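A self-contained sketch of that recovery dance (all names are made up; run it in a scratch directory):

```shell
# Sketch: a patch made against an old tag is applied at that tag,
# then rebased forward onto the current tip. All names are invented.
set -e
git init -q demo && cd demo && git checkout -q -b master
git config user.email you@example.com && git config user.name You
echo one > file && git add file && git commit -q -m "v1 state"
git tag v1
echo two >> file && git add file && git commit -q -m "later work"
git checkout -q -b old-fix v1                 # contributor's old base
echo fix > fix.txt && git add fix.txt && git commit -q -m "bugfix against v1"
git format-patch --stdout v1 > ../bugfix.patch
git checkout -q master
git checkout -q -b incoming v1                # reset to the patch's base...
git am -q ../bugfix.patch                     # ...apply it cleanly there...
git rebase -q master                          # ...then replay onto current HEAD
```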
Then there is hooking up PRs to your automation setup.
This is where the GitHub approach shines imo.
One way to do it would be an "upload patch" option on others' repositories, where GitHub forks the repository for you under the hood, possibly creates a branch for you, and applies your patches linearly to that branch. It opens a pull request to the targeted branch of the upstream repository from your branch. Then when the pull request is closed, it cleans things up for you (temporary branch, fork) under the hood, if desired.
And you do need that fork. If you want any kind of CI/CD stuff on the repo you sort of need to pull the changes in from a third party source to make sure nothing bad will happen.
now try to keep your patches up to date with constant rebases and comments from reviewers. Maybe some parts are ok, but some are not so you go back and forth for a couple of weeks. Fewer people will want to go through this extra hassle that they don't even get paid for.
And I do just leave forks and branches lying around :/
I like "old tech". I use emacs, my mailer is mutt, I don't like HTML email, I like IRC, I like using a terminal, I don't like how the web is eating everything.
Still, my experience using mailing lists is just garbage.
Random anecdotes from working with project using mailing lists for patch reviews:
- I find a patchset that seems interesting, but I wasn't subscribed to the ML back when it was posted. Now I need to dig up the mails on some archive out there. I want to see if there were important comments/revisions on these patches? Well here goes 30 minutes of clicking on "next by thread" to sift through the entire discussion, hoping not to miss anything.
- Every project has slightly different guidelines for contributing. Should I put somebody in copy? Run some script on the patch beforehand? Is there a special procedure for contributing patches? Here comes 15 minutes sifting through the "contributing" doc to figure out the modus operandi. I still get it wrong from time to time on projects I don't frequently contribute to (mainly because I get confused between different projects and forget the idiosyncrasies). And of course you need to figure out the exact mailing list to use, whether you need to subscribe to post on it etc...
- You get some feedback on your patch and need to create a new revision? Oh boy, that's where the fun begins. Don't forget to set the "--in-reply-to=" to your git command line if you want your patches to thread correctly! Also some projects prefer that you add a revision number to your patch set, but I actually forgot how to do that and a quick browse through git format-patch's man page didn't help me. Boy, this sure is easy and straightforward! To think of those losers on GH who just have to push their updated commits on their branches and the PR is automatically updated.
- Okay now you've amended your patchset and integrated your modifications. But the patchset is large and the modifications are mainly small coding style issues. Do I send the patch right now, at the risk of getting comments on two separate threads, one outdated, and also risking spamming the mailing list if I do other modifications in a row? Wouldn't it be better buffering the changes and pushing a big new patchset later once I get more feedback? But then the other reviewers will work with outdated code... Wow, it sure does feel like the proper tool to work with! I'm so glad I'm not using github's PR system right now.
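For reference, the re-roll incantations are roughly the following (self-contained sketch; the message-id is a placeholder):

```shell
# Sketch: re-rolling a patch series. "-v2" marks the series as version 2
# ("[PATCH v2] ...", files named "v2-0001-*"). All names are invented.
set -e
git init -q demo2 && cd demo2 && git checkout -q -b master
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m base && git tag base
echo x > f && git add f && git commit -q -m "my change"
git format-patch -v2 base
# Threading the new version under the old one is a send-email option
# (the message-id below is a placeholder):
#   git send-email --in-reply-to='<v1-message-id@example.com>' v2-*.patch
```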
I can't justify this. I'll defend IRC over Slack/Discord to the bitter end, mutt over gmail, Emacs over VS Code but I just can't comprehend how mailing lists are still a thing, much less patch mailing lists. I actually have some modifications to software that I use that I didn't bother upstreaming because I can't be arsed to figure out where those patches need to be sent and how. On the other hand I have already submitted PRs for small, non-important one line changes on github because it's so simple and frictionless.
I want mailing lists to die. I want patch mailing lists to die a painful death.
so, in general, it works like shit, but some people are not willing to accept this, because "it works for me".
Both versions involve clicking.
Both versions involve command-line steps.
The difference is that the GitHub version requires more of both, needlessly. That's the point of what I wrote. That's the only point.
I don't understand this context where I'm being forced to defend an argument that's been foisted upon me and that I never made and never even thought of trying to make.
He wants me to call him, as if escalating this will be worth it...
He didn't even answer your very reasonable question above. Don't you wonder why that is?
Feeling misinterpreted can be very personal. That said, in both my initial reading and my re-reading you come off worse than he does.
> He wants me to call him, as if escalating this will be worth it...
Sometimes text-conversation-on-the-internet escalates when neither party intends it, and switching to other modalities can be valuable. That said, it's not at all clear to me that it's a good idea here compared to dropping it.
> He didn't even answer your very reasonable question above. Don't you wonder why that is?
I presume it's because it got lost in the noise.
To continue saying otherwise (explicitly, even) is a case of outright intellectual dishonesty.
Personally I find it a pain to deal with github-only projects. Why should I have to sign up to a social network for coders when I could just send a patch to the development mailing list? It's more depressing than surprising that Microsoft paid $7.5B for that.
So that is to say, mailing lists in their original inception didn't require "tracking" as a prerequisite for engagement. They were really just re-mailing robots: you write a message to a robot, and it sends it to others. Those others do "reply all", so that you receive the reply even though the robot doesn't have you in the list. The robot stays in the loop because it is CC'd, and so the list subscribers can track the discussion.
When I have some question, or want to report a bug, I don't actually want to track all of the activities in that project's mailing list. It is rude to expect that of me. And anyway, there are web archives of mailing lists!
All the mailing lists that I operate are in this classic open manner.
So, three to five mailing lists. Vs. the convenience of a single login and actual conversations you can follow in issues/PRs. With a one-click PR request if I want to send a patch (that can be easily discussed, annotated, cross-linked to issues and other PRs).
> You're tracking dozens of projects? Following every twist and turn of dozens of projects sounds like a huge time-suck, but maybe we just work differently.
The thing is, GitHub provides granular access. You can just star a project, and return to it when you want. You can follow a specific issue. You can follow every twist and turn of a project.
Depending on my current interest or area of research, I can deal with any number of active projects which I may "snooze" when I no longer need them (but will still get notifications on conversations/issues/PRs I'm involved in).
And to do all that I don't need to be a part of multiple mail lists with no control of what gets sent or muted.
I also have direct access to all the PRs and issues where I'm involved without needing to remember which mail list it was on. And without needing to discover those mailing lists in the first place (is it an <x>-users mailing list? an <x>-dev? an <x>-development? an <x>-contrib? an <x>-patch? what are the rules? etc.)
Github is extremely convenient for a very large number of otherwise tedious tasks.
The Git project itself, the Cygwin project and Gawk project all use primarily mailing lists. Again while I prefer the GitHub system all 3 of these projects are responsive to emails and changes are made quickly.
Great if you disagree, and I'm sure there are some others, but I would be willing to bet you're in the minority.
How big of a barrier is writing tests or documentation?
Or setting up the whole environment to actually build something pulled from Github?
You know that some Github projects have nasty things in them like Makefiles, and all that code and configuration requires tedious text editing.
It matters a lot, actually; a lot of free/open source software is licensed that way because the projects themselves are ideologically predisposed.
While that does not hold true for certain (even large) projects like Linux, it certainly holds true for Apache (historically) and GNU.
To put it another way: if you found out GNU coreutils were hosted on Windows machines using IIS web servers, then you would probably consider that the people making the software (or, certainly, those hosting it) are ideologically at odds with the project and are hypocritical.
So, I mean, you get to choose: if you go the Linux way and say "we are open source for pragmatic reasons", then there's no doublethink. If you say "we believe that all software should be free" while simultaneously forcing your users to contribute using closed-source software on a proprietary platform, then you're not practicing your ideology, and worse, you're forcing that non-practice on your developers and users.
Do you somehow consider copyleft style licenses "more ideological" than those which are not? That is probably more telling about your own views on licensing than it is on ideologies.
FreeBSD is not copyleft, and they work actively on eliminating GPL code from their base. Debian is mostly under the free software umbrella, but welcomes BSD code in their base.
Stallman is about as ideological as you can get, but has been known to argue for the MIT license in some cases. Just to mention a few examples.
Of course you can decide not to release your code under a free software license, but that rather defeats the purpose of running a free software project ;-)
Looking at it another way, more permissive licences grant more options to downstream users than copyleft licenses. It doesn't make sense to argue that you are offering only copyleft licenses to benefit those downstream users individually (and I know of nobody who makes that claim). Instead, the argument is that it is better for the group that everybody has equal restrictions and can't use proprietary code to gain an advantage over everyone else.
This is actually one of the reasons why, if anyone asks me to sign the copyright for my side projects over to my employer, I'm happy to do so: on the provision that all of my work is licensed under the GPL. It barely matters to me at that point who owns the copyright (though to protect yourself further you should ensure that no one person has copyright over the entire work).
IMHO, although the choice to write free software at all is often ideological, the choice of using a copyleft license or not is usually pragmatic -- at least among those who understand why copyleft licenses are written the way they are.
Mind you, we maintain private mirrors and have restrictions on some of the GitHub access/workflows (eg. ICLA on file, and 2FA required). We still need to track provenance, and must be able to operate independently, if it comes to that.
-- Greg Stein
From a pragmatic standpoint, if (theoretically) running on Windows/IIS allowed the GNU coreutils project to save enough money to _actually further their goals_, I'd say they'd be foolish not to host that way.
When taking an ideological stance, there are practical considerations to consider. There will always be more and less effective ways to get one's point across.
I'm reminded, a bit, of this comic: https://thenib.com/mister-gotcha. Sometimes, you have to participate in the thing you're rallying against, because it's the most effective way to gain traction for your cause.
The fact that Github itself is not F/OSS matters even less, as Github is the only user of Github's source code; they're the copyright holder and unencumbered by any licensing restrictions.
For GH's on-prem enterprise product this does not apply, and there is a much stronger case for it to be F/OSS, since there GH is using copyright to restrict the freedom of others.
Does GNU audit every contributor to its libraries to ensure they only personally use open source hardware/software? If not, then I'd have a big issue with the ideological stance of these organizations.
> So, I mean, you get to choose: if you go the Linux way and say "we are open source for pragmatic reasons", then there's no doublethink. If you say "we believe that all software should be free" while simultaneously forcing your users to contribute using closed-source software on a proprietary platform, then you're not practicing your ideology, and worse, you're forcing that non-practice on your developers and users.
With that same rationale, it's also doublethink if every system an ideologically-aligned FOSS project runs isn't itself FOSS-based. And realistically, that's not how the market works, and it makes it extremely limiting to find contributors to maintain systems over the long term.
Kind of reminds me of the discussion around people going personally carbon-neutral. Are you going to realistically audit every interaction you have every day to ensure the level of carbon you are consuming? No, you look at the largest/most material areas that you can control and use alternatives there.
That's for Apache to decide, but consider:
Git itself was written by Linus Torvalds because Linux (the kernel) was at the time using a non-open-source version control system called BitKeeper, and its non-open-source nature was causing increasing problems in the Linux developer community: ideological problems, and eventually practical ones too, since the licensing prevented some devs from building tools.
When Git was written and the Linux kernel devs jumped to it en masse, it was like a breath of fresh air.
BitKeeper has credit for many of the ideas found in Git before Git. (Aside: credit also should go to Mercurial for ideas). One of the things that made Git better than BitKeeper was people were free to build more tools on it, which is entirely due to the open vs. closed source licensing, as well as a general attitude of welcome vs. unwelcome towards those tools.
GitHub is different because it already builds on Git, and works with tools that anyone can build on top of Git.
But some things which are really essential to thriving developer community are locked away in GitHub, so it's still limiting what people can build with it. (For example, can you innovate on how Issues are handled? Somewhat, but not in every way that is useful.)
In some ways GitHub is like any other walled garden. You can't fork it yourself.
The very specific flash-point was Linus throwing an unjustified hissy-fit defending Larry McVoy of BitKeeper, who was being difficult about Andrew Tridgell (Tridge of Samba fame) "reverse engineering" the data traffic of BitKeeper for interoperability (really just sniffing client packets with wireshark or something).
Whether the name "git" pertains to Tridge or Linus himself who in retrospect decided he acted like a spoiled brat is still not known :)
I was there, and I was told in private mail by the author of BitKeeper that I did not have permission to use it, because of my work on repository analysis software that looked like it would get too close for comfort.
That's without any reverse engineering. I never used BitKeeper, or connected to the server, or read the infamous "help" text.
It meant I couldn't participate in kernel development in the same way as most folks.
I wasn't the only one, and that's what I mean by practical problems, not just ideological.
I wanted to point out that it was not a very loose "pragmatic and/or ideological" argument back then, but there were very specific actions and respected actors involved.
You obviously, being directly involved, are aware of the specifics, but it might be easy for a casual reader to place it in a "Ah, the Free Software people were ruining a good free thing even back then" context, whereas pretty much the opposite happened.
 and you too, by implication, for which my apologies as well.
That's interesting. I read your comment as making out that it was _merely_ one person (Tridge) who had a problem with BitKeeper and BK didn't deserve the flak, but I read your later comment as making out that BK did deserve it.
He connected to a BK port and typed "help".
BK helpfully output a bunch of protocol help.
He used that to implement something minor (I think for archival?) and McVoy wasn't having it.
It wasn't even "wireshark", but simply a telnet session indeed.
But GitHub doesn't have this problem.
Both sprang in large part from Linus' description of what requirements he had in order to consider using a version control system for Linux.
They were both released extremely close together, and I don't remember why I associate Mercurial ideas as being something Git learned from. Maybe I'm wrong about it.
I think Linus said at some point that if only performance would have been sufficient for the Linux kernel, there would have been no git (but don't quote me on that).
a) Github will have leverage over the project.
b) They make themselves vulnerable to the outrage du jour. If an outrage mob forms against someone in the Apache Project, Github may kick them off their platform - they are known to have done so in the past. Apache will then have to decide what to do. The path of least resistance will be to just let them go.
People who anticipate such a course of events will have to live in constant fear of all their effort coming to waste and their community being pulled from them by an outsider. And people who anticipate such fear may never join in the first place.
"FOSS" includes Windows developers, Mac developers, iOS developers, Java developers, people working for bigtech corporations, and some bigtech firms themselves. (Some parts of these firms some of the time, anyway.)
That said, your point still stands.
1) Every OSS project has its bureaucrats who contribute a modest amount of code but want to have an immodest amount of power. For these bureaucrats GitHub is like heaven: They appear productive, have power to silence discussions etc.
2) They work for someone or know someone who is associated with GitHub or in the Microsoft embracing strike force.
3) Legitimate reason: They want to show their employers a metric how much they contribute.
I'm pretty cynical about the current state of OSS. Major idealistic contributions are a thing of the past. It is all about attaching oneself to a project, getting hired somewhere, and then stopping contributions.
No, pretty obviously not. For the domain, this has already been answered. For the servers, it's trivial to move your setup to a different hoster with no visible effects to the outside.
> If so, we should use an open source distributed protocol.
Like ... git and email? Yeah, we should.
Also, with git you have a repository as soon as you start using it. If you mean something like a centralized repository server: No, you don't actually need that. You can serve a git repository from your workstation just fine for others to pull from. Or you can just move changes around via email.
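For example (a sketch; the hostname and paths are made up):

```shell
# Export everything under ~/src read-only over the bare git protocol:
git daemon --base-path="$HOME/src" --export-all --reuseaddr &
# A collaborator can then pull directly from your machine:
git pull git://my-workstation.example/myproject master
```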
Also, there are way more ways to use email than for mailing lists. You could also use it as a transport for machine-readable messages between git frontends or whatever.
In any case, I don't get what point you are trying to make.
I'm not sure how "just give people direct access to my local git" fills that massive hole, so I assumed your mention of email implicitly included something like a listserv. Otherwise your suggestion seems entirely unworkable to me?
So one of us is missing something. Are you really proposing p2p git + email alone with no other infrastructure? Do you think that's at all optimal?
Though it is maybe useful to distinguish "central repository that publishes the authoritative version" and "central repository that contributors push to", two functions that GitHub kind of purposefully confuses.
The former is what is useful for discovery and catching up, but doesn't need accounts/authentication (for the "public side" of things), it just needs a stable public identity and availability. The latter is technically completely unnecessary with git: There is absolutely no problem with merging a branch from a repository hosted on a completely different server than the authoritative repository. It's just a business decision of GitHub to build a closed system that requires you to enter into a contract with them in order to submit a merge request that is limited to branches hosted on their platform, to create an artificial network effect for their platform.
After all, the primary problem is not that people choose to host a project on GitHub. It's that they demand (or allow GitHub to make that demand with them as voluntary hostages) that you also host your branches on GitHub, or else you can't contribute. If I am a happy Bitbucket customer, or I happen to run my own git server, there is no way for me to submit a merge request to GitHub specifying a branch hosted wherever I happen to be hosting my git repositories.
But no, I was not suggesting any particular implementation, I was just pointing out that those two protocols do exist and can be used, in many ways, as a basis for decentralized Free Software development. And while p2p git certainly is not the solution to all problems, it can be a perfectly useful tool--and with some more tooling around it possibly moreso than what is practical today.
Apache maintains clones of all our GitHub org's repositories. GitHub has no leverage over our repositories. We have a fallback mechanism for contributors to push to our server, if they deny GitHub T&Cs.
Apache has the support of GitHub and Microsoft, from the CEOs of both, and through the organizations.
I'm very glad to hear of the fallback mechanism.
And yes, lots of our communities have been asking for better GitHub support (read: access to its tools). So we made it happen for them.
Absolutely yes. In another thread, someone mentioned that MediaGoblin is basically dead now. I went to look at their repo and it's hosted on Savannah. That definitely hurts involvement.
Mediagoblin is dying not so much because it's on Savannah but because basically there are other things that got ActivityPub before it and could replace it (Pixelfed, Mastodon, Write Freely). Nobody really wants to work on Mediagoblin now because the alternatives are pretty much all better.
Yeah. We've had this topic come up a number of times in the Wine project. We still use a mailing list and git-am patches for our contributions. We have a few hard-liner FOSS types who would strongly reject a GitHub solution (including myself), but a self-hosted Gitlab solution may be accepted. But in the end there just isn't much evidence that the change would be beneficial for the project. If you can't be bothered to figure out how to send an email or attach a patch to the bug tracker, are you really going to usefully contribute to Wine? It's an active topic of debate, but just being "easier to contribute" isn't clearly a good thing.
Just as an example, when the Inkscape project migrated to GitLab, I noticed something that was not optimal in their CI definition and contributed a change right away. In mailing-list-based development, that CI script would not be visible. Most projects even hide their internal tooling.
Also, unless you were born in a certain era and have had access to the internet since X, there is a good chance you have never been exposed to Mailman or how the mailing-list flow works.
You're not wrong, and neither is the parent post! MediaGoblin is in "unofficial retirement", but that's because we made progress in unexpected ways: good ways, but not where we thought we were going. Allow me to lay out what happened and what the history is here.
About four (or was it five?) years ago, MediaGoblin was still a very active project and a lot of it worked, but we still didn't have working federation support. At the time we were looking at a lot of different protocols and it wasn't clear which approach was the right one, but Evan Prodromou had written up the Pump API document: https://github.com/pump-io/pump.io/blob/master/API.md
Even though pump.io didn't have the highest uptake, it seemed to have the cleanest design and addressed many issues that OStatus had. Evan did StatusNet which is what's also now called GNU Social, and has done more work to advance the federated social web than anyone else, and given how clean the design looked and that I trusted Evan, I thought this was the right approach. So we used the funds from the second crowdfunding campaign we ran and hired Jessica Tallon, who had written PyPump (and understood the practical details better than me at the time, I was learning as I went), to do the implementation. We got as far as getting MediaGoblin and Pump.io to talk to each other and pump.io clients to even work on MediaGoblin.
But there was still a problem... nobody else was using the Pump API but our two projects, and at this point all these different projects on the fediverse were speaking different protocols (and sometimes not even compatibly speaking the same protocol)... what I would call in talks a "fractured federation". I heard Evan Prodromou mention he was going to be co-chairing the W3C Social Working Group, and I asked whether Jessica and I could participate; we were brought in as what are called "invited experts". At that point Erin Shepherd had transformed the Pump API document into a prototype W3C spec document called "ActivityPump", and that was the direction Jessica and I got pulled into.
There were a lot of smart people in the group, and my assumption was that they probably all knew what they were doing. I told Jessica "we can just show up for an hour a week to make sure they're on track and doing what we need, and then we can focus on MediaGoblin". I didn't know the phrase "revolutions are run by the people who show up" back then, but I certainly do now... Jessica and I got drawn in as co-editors of the ActivityPub standard. We had raised enough money from the second crowdfunding campaign to pay Jessica for a full year (I didn't take any money from that campaign), but we stretched it to two years by Jessica and me contracting part time for Open Tech Strategies (great people, btw). This was helpful because when one of us was working on standards stuff, the other could do some work on MediaGoblin as a project, and there was a lot to do.
But as time went on and deadlines became more urgent, standardizing ActivityPub consumed more and more time. Eventually it became my full-time job; I would work 40-50 hours a week on ActivityPub and do 10-20 hours of contracting on the side to pay the bills. It was clear we were doing something important and there was a real opportunity.
But ActivityPub grew into three and a half years of standardization work, and as I said, we could only stretch out paying Jessica for two, so she had to find paying work, and it wasn't possible for us to split our time to manage both. In the meantime, even though all this was happening for MediaGoblin's sake, I found less and less time to work on the project itself.

Even worse, Gitorious (which we had previously been hosted on) went down, and we were unsure where to move. A community member volunteered to do the work to move us to Savannah, and we took it. MediaGoblin wasn't using Gitorious's issues/merge-request tools anyway; the way people made contributions was to make a new git branch, publish it anywhere, and link that branch on the issue tracker, where we'd do code review and eventually merge it in. In that sense we were already using git in a more distributed manner (the way git was intended, I'd even argue)... but I do think we lost something in the move from Gitorious to Savannah. Many people didn't know where to host branches, and Gitorious (along with many similar services) offered a one-click fork, so you didn't have to learn or debate how/where to host things if you didn't already have a preference.

Our server infrastructure also languished... we previously had some volunteers helping with the infrastructure, but they ran low on time; a server migration went badly (it's still in a bad state, tbh); spam filled up our wiki and Trac instances; and it was all a huge headache I didn't really have time to deal with. And I wasn't there to help steward the project the way I used to... I did appoint a co-maintainer (Breton) who did great work, but I guess I drove a lot of the energy for the project, and when I stopped working on it actively, the community languished.
We went from dozens of active contributors to practically none over the course of ActivityPub's standardization.
It wasn't clear that it was worth it; towards the end of ActivityPub's standardization it looked like we wouldn't even make it, and I thought I had wasted years of my life. Then Mastodon picked it up, then PeerTube, then etc. etc., and we suddenly had dozens of ActivityPub implementations. It turned out it was worth it, and we finally had a fediverse that talked to each other. MediaGoblin did make a large contribution to the federated social web, but not in the way I expected... it was a driving force, rather than the project people ran.
Still, afterwards I came back (with a stronger sense than ever of how finite and fragile time is) and had to debate: should I pick up and run full swing with MediaGoblin again? The project could pick back up; with effort we could merge the languishing federation branch, I could try to drum up excitement in the community again, and we might even make it.
But the webdev world shifted, and so did I. IPFS and WebTorrent didn't exist when MediaGoblin started, and PeerTube did the smart thing of integrating them into the project; it felt like they handled our ideas better than we did there. There were also all these other projects (Pixelfed, Funkwhale) which, while not delivering all the media types in one package (why the heck not? I still don't understand that), seemed to be doing the same thing we were and were actually already federating... with the protocol we built for MediaGoblin's own needs, no less! And web applications aren't typically built as request-response systems any more (I for one was tired of that, and had become disillusioned in my belief that Python was a great asynchronous language), and I just didn't feel excited about the codebase anymore. What to do?
I had another idea, and I called up several of my closest free software friends to make sure the path I was suggesting wasn't an awful one. The main success I've had turned out to be not in the applications I built but in showing how to grow distributed systems, and I now understand the deficiencies of the current federated social web (and how we can improve, building on the base we have). So that's where Spritely came from, and why I'm building it as a series of demos. More here: https://dustycloud.org/blog/spritely/ and the first documented demo here: https://gitlab.com/spritely/golem/blob/master/README.org
So what led to MediaGoblin's "unofficial retirement"? I think it's true that
a) the standardization effort of ActivityPub, while done for MediaGoblin, accidentally led to a loss of energy in MediaGoblin's community
b) there was a falloff in code/infrastructure hosting and other challenges related to that
c) other projects picked up on what MediaGoblin was doing and arguably did it better, using ActivityPub even, and finally
d) I still believe there are serious problems and deficiencies in the current federated social web that are addressable, and so I started the Spritely project to document and demonstrate a path forward there.
You could focus on just any one of those, but I think the story is clearest when it's told all together.
Anyway, it's free software! If someone is interested in revitalizing the project and community, I'm still interested in that happening... maybe reach out to me and we can figure out how to continue. I'm easily found: https://dustycloud.org/contact/
Uh, maybe, except that none of the things you list actually do anything close to the core promise of MediaGoblin. MediaGoblin was going to be the libre Flickr/YouTube replacer. Nothing focusing on federation networks has ever gotten even close to that, so it definitely wasn't beaten by those. At best now there's Piwigo, which by the way, is on GitHub.
1) Is not even close to Flickr. It's way more like Instagram, which is an entirely different and unrelated thing.
2) Barely has docs. Everything just says "to do".
3) Doesn't even have a website! pixelfed.org just says "coming soon" since last year. It has 400 different language translations though, all telling you the same nothing.
How is that in any way superior?
Flickr lets you organize your photos and videos together. Peertube and Pixelfed don't do that. You keep suggesting one-off social sharing feed services, but those are entirely orthogonal to what Flickr and the now defunct MediaGoblin provide.
Your suggested replacements do entirely different things than what they're alleged to be replacing, and at least one of them doesn't even tell you how to run or use it.
git clone https://git.savannah.gnu.org/git/mediagoblin.git
Put it on GitHub and see how everything starts to get better.
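For what it's worth, the mirroring itself is mechanical. Here's a sketch using local stand-in repos so it runs offline; in practice the first URL would be the Savannah one above, and `github-stand-in.git` would be an empty repo you create on github.com first:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Local stand-ins (substitute the real Savannah/GitHub URLs in practice).
git init -q --bare -b master savannah-stand-in.git
git init -q --bare -b master github-stand-in.git

# Seed the "Savannah" side with one commit.
git init -q -b master work && cd work
git config user.email you@example.com
git config user.name "You"
echo hi > README && git add README && git commit -qm "initial import"
git push -q ../savannah-stand-in.git master
cd "$tmp"

# The actual mirroring: clone everything, push everything.
git clone -q --mirror savannah-stand-in.git mirror.git
git -C mirror.git push -q --mirror ../github-stand-in.git

# Keeping the mirror fresh later is just:
#   git -C mirror.git remote update && git -C mirror.git push --mirror <github url>
git --git-dir=github-stand-in.git log -1 --format=%s   # prints: initial import
```

`--mirror` copies all refs (branches and tags), which is what makes this a real mirror rather than a one-branch copy.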
Usually when people say this they are referring to either GitLab or Gitea. It's not like all of GitLab is fully open source, though; it's open core. I'm sure the community edition is a sizable amount of the code, but I'd imagine a decent chunk of gitlab.com's source is proprietary. (I'm sure somebody will quibble with this in a following comment, but the original argument is one of free-and-open purism.)
GitLab CE and Gitea don't solve the hosting issue, though. Of course Apache could probably pay somebody to manage that, but then you've introduced extra overhead for collaboration that most people aren't inclined to take on.
I think this is not a fair assessment: GitLab's Community Edition is fully open source and is a full product. The fact that you can add proprietary elements for a licensing fee (which are also "open" in that they are readable, debuggable, etc.) is not at all the same as hosting on a platform which is entirely proprietary.
While gitlab.com runs the Enterprise Edition, it's 100% possible to self-host using only the FOSS version if you wish.
2. What does the registration process look like?
OAuth helps with federated authentication. It's not federated registration.
2. Click “sign in with provider”, log in as normal, or just get logged in if you already have a cookie.
3. These are functionally identical; it’s two clicks to create an account on GitLab if you use an external identity provider.
I’m pretty sure it’s MIT. This page says it is.
How easy it is to actually do that depends on your Ruby skills and your wallet.