- https://news.ycombinator.com/item?id=17006503 (has a lot on why it was forked from Gogs and whether using the fork is still a good idea)
Tools like this primarily provide a web client to a repository that is not intended as a working copy, plus some optional non-git collaboration tools, such as issue tracking and an inbox of pull requests (i.e. suggested patches). These are not unique tools; there are many other options for issue tracking, and email works for patches.
I make this point because these solutions are trying to replicate smaller or larger chunks of github rather than provide alternative ways to use git.
That said, Gitea/Gogs is pretty, and might hold the hand of people only used to github. It is great that it is self-contained as a single binary, so it can be used with minimal configuration, like other front-ends.
But I worry that the emphasis on tools like this suggests we have accepted that github-flavoured centralised version control ('gitversion', maybe, or 'gcs'), rather than git, is now the de facto standard for version control. Am I paranoid? Does it even matter?
I'd argue that even a single developer really benefits from a "centralized" repository. It helps with maintainability, backups, and syncing between machines (oh, my workstation wasn't on, can't update my laptop...).
Pulling code from random developers' machines might sound pretty neat in isolation, but that is the exception, an edge case to something else. And that something else should most likely be a centralized repository.
When people say they love decentralized version control, I often get the impression that what they really love is local commits. There is nothing that says you can't have local commits in a centralized version control system; it is just that being decentralized is a neat solution to many problems. But that is an implementation detail very few would care about, had it not been hyped to death.
Now, this is something entirely different from the whole developer community putting everything in one basket (github); that's the only type of centralization worth worrying about.
Isn't that just a straw man argument?
To the extent that it applies to distributed version control, it seems like an invented problem (i.e. I don't know anyone who uses a dvcs that pulls from 'random' developers). To the extent that it is true (we are all probably guilty of using code from developers we don't know and have not properly vetted), it seems like just as much of a problem, if not more, with github-esque projects.
> I often get the impression that what they really love is local commits
That's a really good point I have not considered before. So would you say that git's main contribution to the development community is showing the power of a VCS with local commits? And that's why github is dominating core git?
That was in fact one of the arguments a few true believers used in favor of DVCS when it was still a novelty.
The thing I think truly gives developers comfort, in addition to all the nice features it makes simple, is LOCKSS (Lots Of Copies Keep Stuff Safe). When everyone has a full copy of the repo, even if you generally work through an intermediary, the central repo being blown away or taken over by a bad actor doesn't mean you need to reconstruct the history of the code from whatever single-snapshot checkouts individuals happen to have.
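And keeping that safety net costs almost nothing; a sketch, with a placeholder URL:

    # keep a full backup copy of every branch, tag and note
    git clone --mirror https://example.com/project.git project-backup.git
    # refresh it periodically, e.g. from cron
    cd project-backup.git && git fetch --prune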
That said, I will agree that the workflows enabled, as well as the power of a common platform, are probably what really let git and github become the new default.
edit: Also, I imagine being able to paw through the entire history and bisect entirely locally are vital features for a small (but, especially for open source, essential) minority of developers.
The point being that, short of choosing a convention for which person’s copy (or which machine, etc.) counts as the primary source of record (which would just be a poor man’s centralization, with the admin costs and maintenance of any centralization tooling now a burden on the team rather than a third-party product), interacting in an ad hoc way with even just my known team’s set of distributed clones of the project turns into a bookkeeping problem nobody wants.
I can also imagine a non-centralized model leading to many more complicated workflow failure modes. I’m just thinking about how often novices get stuck on rebasing errors or squash commits incorrectly, and about the bikeshedding arguments over whether it’s ever ok to revert master or whether you should intentionally keep mistakes in the history and correct them with new commits. I can imagine it being even worse when there are fewer conventions, since conventions are often the only way to avoid these debates about preferred git hygiene.
But the servers are necessary, as far as I know, unless you want to enumerate IP addresses and scan for open ports. Which is going to be interesting with IPv6.
In theory, yes. If they go down, though, you could configure your client to point at another well-known client.
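Repointing an existing clone is a one-liner, for what it's worth (the URL here is just a placeholder):

    # swap the remote this clone syncs with
    git remote set-url origin https://other-host.example/project.git
    git fetch origin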
Of course some (many) people are using forks, but all sync directly or indirectly with this dude's version.
Linux kernel dev branch single source of truth is https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
The stable branches have their single source of truth repositories, too.
There is a single source of truth for a particular version of Linux.
A few days ago I read about the multiple Linux distributions that each maintain their own fork with special patches, etc.
Sure, you CAN use git without a web UI. We had that at work for a while and it was not a good experience at all compared to using gitlab, which we have now; it gives us so many features that let us get things done faster.
I'm not arguing against your suggestion, but these are things that need to be solved if this workflow is ever going to see mainstream adoption. Network infrastructure and security has been geared towards centralization for a long time. That's going to need to change if we want to empower distributed applications of this nature (which I believe we do).
Well, actually I wasn't suggesting that, just emphasizing git's distributed nature.
What I do suggest is a federated model: a large number of small hosts, akin to Mastodon. The real issue, as you noted, would be discovery. Centralized solutions have it nicely solved, but it's hard to do in general with decentralized ones.
As opposed to the traditional version control model where, e.g. every commit effectively requires a rebase against the remote and the history cannot be retrieved without a connection to the remote.
The "D" in "DVCS" is about having many copies of the repository, not about having _no_ central repository, which is still a core part of having an effective delivery workflow and very much encouraged by the baked-in concept of a default remote repository.
It's a distinction of technology, not of workflow.
This is really a common misunderstanding about what makes DVCS an effective concept.
In retrospect I think it makes complete sense to tie together code, documentation, and issues. They evolve together; maybe they ought to be versioned together.
I feel like code reviews naturally should be stored alongside the code they're reviewing, like the PoC shown by git-appraise from Google.
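If I remember right, git-appraise builds on machinery that's already in git (notes refs); even without the tool you can sketch the idea with plain git notes (the ref name here is arbitrary):

    # attach a review comment to a commit, stored in the repo itself
    git notes --ref=reviews add -m "LGTM, but please add a test" HEAD
    git notes --ref=reviews show HEAD
    # notes are refs, so they can be pushed and fetched like anything else
    git push origin refs/notes/reviews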
The web itself seems to be becoming more centralised: more people spend more time on fewer websites. Websites are increasingly being hosted on a tiny number of mega cloud services, rather than millions of independent server rooms.
Centralisation is a real concern.
I'd have more confidence in it if it could self-host, so I could see gitea running inside a gitea instance and that was the main workflow. As it is, it feels like the maintainers aren't prepared to eat their own dog food just yet. That's fine, but I'll take a pass until that's fixed.
So for small projects I wouldn't let this put you off; for larger ones, or those where for some other reason you want more than just good source control, make a judgement based on the features you expect to need.
Of course it is git in the back-end, so if you start off needing just an interface onto source control but later want features that gitlab/github/others have, migrating the repositories should take almost no effort.
I'm hosting Gitea (together with around 10 other services) on a 1/1 instance (1 CPU, 1 GB RAM) at Hetzner. Those can be had for € 2,50 per month.
> it's easier to have contributions on GitHub since it doesn't require most people to register a new account
That's the big issue.
> That's the big issue.
I don't think so: Github can work as an OAuth provider, and Gitea supports OAuth integration, so a self-hosted instance can still authenticate people with their Github accounts.
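If I remember the CLI right, wiring that up is a one-liner on the Gitea side (the key/secret come from an OAuth app you register on GitHub; the values here are placeholders):

    gitea admin auth add-oauth --name github --provider github \
      --key YOUR_CLIENT_ID --secret YOUR_CLIENT_SECRET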
But can the Gitea devs scale it to support potential growth whose costs go beyond $10 a month? That's part of the reality too.
The UI is somewhat slower, though; code highlighting shows up a second after the page renders.
Sorry, but that's dumb. Go offer them a decent free server and paid hosting and I'm sure they'll accept your offer.
Your example is not well-worded; it's a demand. I'd expect a well-worded issue to share some story about how you were working to achieve some goal, so you tried setting up gitea to self-host the gitea repo, and were surprised that it didn't work (and share the errors etc. that you saw). Then the devs can prioritise your issue by understanding how blocked you are by the missing behaviour, and even propose a workaround or alternative way of achieving your goal.
'Title: do this thing' is not polite or well-worded.
"Let's self-host gitea so potential users can easily see how awesome it is" is more polite, far easier to see the benefit of, and a trash-fire of proper composition.
Judging from Github stars, Gogs is also vastly more popular.
gogs is mostly maintained by a single developer, and has less activity than gitea.
Compare releases and features.
Edit: This list has been created on a best-effort basis; if you find any wrong information, a pull request is always welcome.
It's been running for a year; I've never had a problem with it.
It only consumes 20MB of RAM and is extremely fast.
Easy to deploy (I've used Ansible).
Running the git service is probably a bit expensive, and you lose the network effect of github.
They could probably migrate to gitlab though.
"We’re a growing group of former Gogs users and contributors who found the single-maintainer management model of Gogs frustrating and thus decided to make an effort to build a more open and faster development model."
This is a guide to run it with Docker
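The short version, if you just want to try it (the ports and data path here are my choices, not requirements):

    docker run -d --name=gitea \
      -p 3000:3000 -p 222:22 \
      -v /var/lib/gitea:/data \
      gitea/gitea:latest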
GitHub is a hosting platform for git repositories, which has collaboration tools (such as issue tracking and code review) built into it.
There are still plenty of things you might want centralized on a server somewhere, but it seems like a lot of the value add of GitHub, GitLab, and now Gitea is in making git repos easier to manage and interact with.
It's interesting to think about how far you could decentralize that, ideally with a "cambrian explosion" of OSS and indie-software clients.
I think that storing repositories in loose format would make them much faster to read, but maybe I'm missing something. Any thoughts?
When you say 'read the repo', it makes me think you're more interested in the behaviour of cloning from a remote.
Loose objects would avoid the need to inspect packfiles, but… that code's all written in C and mmaps the contents & does fast seeks. Most likely the slow parts are reconstituting objects from packs (also mmapped C with fast seeks) or delta-compressing objects for git-upload-pack to send to clients. Going to loose objects doesn't help if the remote still burns CPU creating a pack: try using a dumb remote instead of a smart one? You're trading CPU for network now.
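For reference, a 'dumb' remote is just static files plus one maintenance command; something like:

    # inside the bare repo: regenerate the metadata dumb clients rely on
    git update-server-info
    # then serve the repo directory with any static web server, e.g.
    # python3 -m http.server 8000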
If you're more interested in improving performance in a clone: loose objects avoid the need for the (fast, mmapped C) packfile read. The index still has to track whether you've changed any checked-out file in the working directory, and if there are a lot of files, it's going to be big.
So on the server you should only ever have packfiles, and in order to efficiently read packfiles you read the index (idx) file. I'm not positive, but I think that this file needs to be read in its entirety in order to access an object. Even if you don't have to read the whole file, it's probably best because you generally read more than one object at a time (e.g. if you display a list of files in HEAD you read the commit pointed to by HEAD, read its tree, and read all of the blobs in its tree).
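You can poke at that yourself: git will dump a pack's contents via its index (the pack filename is whatever hash happens to be in your repo):

    # list each object in the pack with its type, size and offset
    git verify-pack -v .git/objects/pack/pack-*.idx | head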
My thought with using loose files rather than packfiles is that you wouldn't suffer the memory overhead of the lookup: you just open the file at `objects/some/object` and parse it.
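That part is easy to demonstrate: a loose object is just a zlib-deflated "<type> <size>\0<body>" blob on disk. A quick sketch, assuming the object hasn't been packed yet (fresh commits are written loose):

    oid=$(git rev-parse HEAD)
    python3 -c 'import sys, zlib; print(zlib.decompress(open(sys.argv[1], "rb").read())[:80])' \
      ".git/objects/${oid:0:2}/${oid:2}"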
The real solution here is probably to get a server with more RAM and cache repositories. I'd be interested to hear what GitHub does.
Parsing the packfile indexes is ridiculously fast; even in a memory-constrained environment the OS will manage loading data from disk so you only use a few pages. Inflating objects from packs is slower & will trash your memory; rendering to HTML will be even worse.
Perhaps 1GB is not enough RAM to host a webviewer of the firefox repo? Maybe if you generate a static site version of it…
You're right, 1GB is not enough but it's all that I have so I have to make do.
Plus storage, sure
With this MS acquisition, that proposition is starting to become a problem and an uncool dependency on MS.
When the remote is shared (with other devs or tools), then you have the hassle of provisioning accounts, updating keys when they get lost, implementing ACLs, setting them, recording who changes what refs for audit trails, keeping the thing available and backed up, managing disk space + garbage collection, etc. The time you spend on those interruptions is time (and concentration) that you're not spending on what you want to do. That's where the value proposition of GitHub comes in…
GitHub then has the value-adds/lock-in of easy webhook integrations, gh-pages branch, issues, wiki, and the web UI.
If all you want is somewhere to push code that's always available & private to you, then I'd look into using some cloud-based object store to host your repo. If you want to share the repo with other devs/tools… let me know where to find 99 other people willing to spend $5/month on this kind of thing.
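Or skip the object store entirely: any box you can ssh into will do (host and paths are placeholders):

    # one-time, on the server
    ssh user@host 'git init --bare repos/myproject.git'
    # locally
    git remote add origin user@host:repos/myproject.git
    git push -u origin master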
There's some level of irony in a GitHub alternative that's hosted on GitHub.
Self-hosting also takes resources such as time and money, and not all open source projects have the latter.
Why fork the project instead of keeping a single one?
Also there were some governance issues IIRC.