I'm grateful for Gitea. It performs well even on slow computers (a fitlet2 with an Atom x7-E3950 and 16 GB of RAM, in my case). Gitlab, on the other hand, doesn't, which is a shame, because I love Gitlab (using it at work) :).
I think the issue with Gitlab resource usage isn't the application itself, but the sheer complexity of Gitlab and the amount of services needed to run the whole thing. It even starts Prometheus and Grafana and stuff like that.
https://docs.gitlab.com/ee/development/architecture.html
In general, the larger the project the more likely that all obscure corners of a language are utilized somewhere in the project; these same obscure corners are typically the weakest point of alternative implementations and are likely to cause issues.
Nah, it's definitely Ruby. It uses Postgres + Redis + an S3-like object store for most of the data heavy lifting. There is some additional ops complexity in Gitaly, their git backend, but that wouldn't be a huge source of resource usage in a simple setup.
I'd say that an open-source project has a (much) longer horizon than a typical for-profit startup. The startup needs to iterate incredibly quickly to adapt to the market and capture it before the money runs out. Then it is usually acquired, and the code base becomes someone else's headache. It can even be rewritten wholesale, as happened with Twitter.
A typical open-source project is written by its users; if you want a feature, contribute, don't demand. The project can (but doesn't always!) choose tools that are best in the long term and fit the problem domain better, even if they take more time to get started and get involved with, and longer to get an MVP out the door.
Go checks both boxes: it lets you move fast, but it's also a reasonable long-term choice. I used to shy away from Go-based projects, but after 1.18 I have to admit that Go has become acceptably pleasant to use!
Nah, the issue with the resource usage of Gitlab is not caused by Ruby. It is the architecture they chose. You could build the same resource hog in Go if you wanted to.
Did you have bad experiences with Verdaccio? I've always thought Verdaccio looked like an undervalued project, pretty amazing given how much people usually pay npm for private packages.
"I want to use a regex to filter for valid mail addresses"
"There is a function for that why not use that?"
"Invisible unicode characters!"
"That's not a problem in automatic mail verification"
*ignore everyone*
*push*
*pull*
I don't understand why so many developers do so many stupid things with emails. Sure, there's some trivial validation you can do that makes sense - make sure it matches (.+)@(.+), make sure the domain part exists, etc.
But far too many sites do extra 'validation' using weird restrictive regex, and it prevents you from using valid email addresses - things like restricting chars so I can't use '+', or so I can't use a one char local part, or a one char domain, or a newer TLD that doesn't fit some dodgy regex they copied from a random stack overflow comment...
I really wish that developers would just do really simplistic checks - eg make sure it has an @ in it and something either side, and then just send a validation email as part of their signup flow. If you do it as the first step in the flow, you validate control of the email address and you ensure you don't end up with junk accounts on your system where someone typo'd their email and will never be able to activate it.
edit: the really stupid thing is that it takes considerably more effort to add this restrictive and usually wrong validation than it does to do it right :(
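Since Gitea is written in Go, the minimal check described above can be sketched roughly like this (the function name is my own, and the zero-width rejection addresses the invisible-Unicode complaint earlier in the thread; real validation is still the confirmation email):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// looksLikeEmail does only the minimal structural check: one '@' with
// non-empty local and domain parts, and no control or invisible format
// characters. Everything else is left to the confirmation email.
func looksLikeEmail(addr string) bool {
	at := strings.LastIndex(addr, "@")
	if at <= 0 || at == len(addr)-1 {
		return false
	}
	for _, r := range addr {
		// Cc catches control characters, Cf catches zero-width and
		// other invisible "format" characters like U+200B.
		if unicode.IsControl(r) || unicode.Is(unicode.Cf, r) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeEmail("a+tag@example.dev"))      // true: '+' is fine
	fmt.Println(looksLikeEmail("user@exam\u200bple.com")) // false: zero-width space
	fmt.Println(looksLikeEmail("no-at-sign"))             // false
}
```

Anything stricter than this (banning '+', short local parts, unknown TLDs) rejects addresses that are perfectly valid per the RFCs.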
For the dystopian cyberpunk future where a few companies rule the world, yup.
I will stay with passwords that only a quantum computer can break :)
To elaborate a bit more, as we are already off topic:
Biometric traits are a big nono for me.
And another device just shifts the password problem to said device (same for email account instead of device). Once someone has access to the device (secured via one password) they have access to everything.
So a normal trade in convenience for security.
Passwords are solved for me, apart from idiotic rulesets on the other end.
> For the dystopian cyberpunk future where a few companies rule the world, yup.
oauth with google isn't the only option. There are oodles of oauth providers out there, and you can even set up your own.
> Biometric traits are a big nono for me.
Nobody is talking about biometrics but you.
> And another device just shifts the password problem to said device
Passwordless doesn't mean 2fa, and even if you _are_ talking about another device, the other device doesn't have to require a password.
> (same for email account instead of device).
They're not the same actually, not at all. Firstly passwordless doesn't imply access to another _device_, (and even in the case of WebAuthn it doesn't even imply access to another service). The most common case of Oauth allows for the service provider to trust any number of providers, who may or may not require a password. It can be a hardware key, it can be a private key in software that a restricted process has access to, or yes it can be a password.
> Once someone has access to the device (secured via one password) they have access to everything.
If someone has access to your device and password, it doesn't matter if you use unique passwords for everything, pretty much every service in existence will happily let you reset your password with access to the original email account.
> So a normal trade in convenience for security.
Hard disagree here. My biggest risk vector is third-party websites insecurely handling credentials and leaking them. If they require passwords, my password gets leaked, which means I need unique passwords per site, which in turn means I'm going to rely on software to manage those credentials for me. And if I'm relying on software to manage those credentials, isn't it _more secure_ to reduce the possibility of human error, clipboard scraping, or incorrect file permissions on my local unencrypted file of passwords (because if it's encrypted, I need a password for that too, right)?
If you google passwordless, biometric solutions are one of the options that come up, hence I mentioned them, not because I was trying to put words in your mouth.
If someone has access to my password, they either got it by torture, from an unhashed or insufficiently hashed store on the other end, or by breaking encryption.
A simple dongle, which may not even need a password, is easier to get hold of.
2FA can make sense, passwordless does not.
The risk of 3rd party screwing up, doesn't go away, it's just shifted to another 3rd party, which again, you have to trust.
I use a different email address with a unique password for anything that's important and where another person having access could harm me. Forums and such are not a part of that.
So let's agree to disagree. I'll stay with passwords for everything that's important and for most things that are really important, apart from banking that is, I don't even have a 3rd party involved.
A long, easy-to-remember, but hard-to-guess password that is securely hashed on the other end is an even better solution, one that is proven and battle-tested.
No, it isn't. A password can be phished, a hardware key can't, among a ton of other benefits. The hardware key is orders of magnitude better than a password.
I don't know, my USB key is on my keychain. Turns out that if you couple your GitHub login to your way of entering your house, you never forget your GitHub login.
Another decent solution is to seed the key, and keep the seed written down or in your password manager. Though I only know of the hacker version of SoloKey that lets you do that. Now you've got the convenience of passwordless login, and the peace of mind that you can always get a new spare.
This is an excellent idea, and I'm still waiting for BitWarden to implement soft-WebAuthn. That way I can just unlock my password manager (or, really, type in a passphrase that will generate the private key) and my browser can take care of all the authentication.
No need to store passwords, you can securely have one password for all sites. You lose the ability of rotating it if it gets stolen, but it's unlikely to get stolen if you never enter it anywhere else.
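The "one passphrase generates the private key" idea can be sketched as deterministic key derivation. This is only an illustration of the concept, assuming Ed25519 and stretching the passphrase with bare SHA-256 for brevity; a real implementation would use a memory-hard KDF such as Argon2 plus a salt:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

// keyFromPassphrase derives a deterministic Ed25519 key pair from a
// passphrase. NOTE: bare SHA-256 is used here only to keep the sketch
// short; use a memory-hard KDF (scrypt/Argon2) with a salt in practice.
func keyFromPassphrase(passphrase string) ed25519.PrivateKey {
	seed := sha256.Sum256([]byte(passphrase)) // 32 bytes = ed25519.SeedSize
	return ed25519.NewKeyFromSeed(seed[:])
}

func main() {
	k1 := keyFromPassphrase("correct horse battery staple")
	k2 := keyFromPassphrase("correct horse battery staple")
	// The same passphrase always yields the same key, so a lost
	// token can be re-provisioned from the remembered phrase.
	fmt.Println(k1.Equal(k2)) // true
}
```

The trade-off mentioned above applies: the key is only as strong as the passphrase, and there is no rotation story if the passphrase leaks.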
I would welcome something, but not the so called solutions that go under the buzzword "passwordless".
Shoulder-surfing isn't a problem for me, phishing worked on me when I was 12 and the internet was new, and keyloggers... yup, possible, although unlikely given my choice of OS.
But alas, I digress and indeed it became personal, so let's stop it here, as I am not interested in personal discussions on the internet.
Why close the pull request outright instead of just requesting that an additional validation against a set of disallowed characters be implemented?
Of course, I get the feeling that the developers just don't want to support "odd" e-mail addresses, because it doesn't affect them personally. They might be using just firstname.lastname@provider.com, or somenickname@provider.com, instead of using tags (+), slashes or other such symbols. Ergo, they might not care, at least when there are other issues that need to be solved and worked on - this gets de-prioritized.
Agreed, I really do not get why this PR was merged. It seems pretty obvious to me that the proper solution is to add a check for disallowed Unicode characters.
I witnessed the split of gogs and gitea and while the maintainer of gogs was indeed rarely active and merged PRs very slowly, they at least examined the code and made a proper review with suggestions.
The gitea guys wanted to move a lot faster and in the process neglected quite a bit of code review and product planning. Apparently this continues to this day as evidenced by the PR you linked.
This, and the many trivial CVEs that were discovered in Gitea made me look for alternatives. I think that right now the best free and OSS solution is https://github.com/theonedev/onedev. It even has CI/CD built-in which is cool!
Can also recommend checking out OneDev. It seemed much lighter-weight to me than Gitlab, and it didn't choke when handling large amounts of data the way Gitea seems to. I haven't used the CI beyond some basic tests, but it looks pretty capable from what I've seen.
Ah sorry, Drone CI (https://www.drone.io/), to run automated tests / builds from repos in Gitea. Gitea does not require it, but even in my homelab setup I like to auto-build / test the stuff I commit.
I'm using this for personal projects, so I'd prefer a free solution. GitLab has a self-hosted version but it's not clear what's available and what requires a license.
The second reason is that GitLab's minimal resource requirements[0] are just overkill compared to OneDev[1] which can run in a single container with an embedded database.
FWIW, GitLab has an entirely OSS community edition, and you can run the enterprise edition just without licenses - I do this personally.
You have to run your own CI runners, but if you can self-host GitLab that ought to be no problem ;)
I agree though, it's very resource heavy - my personal instance has 8G RAM and 4 vCPUs and only two people use it. It has that much RAM because without it, it would randomly fail (usually during/after package upgrades).
An open core model is just bad. Gitlab shows advertisements for features your edition does not offer, and if you want to implement an already existing paid feature yourself, you won't be able to get it mainlined.
Gitlab is capable software (albeit slow and prone to security issues, especially in enterprise features), and it's what most people are used to right now, which is why we offer it. But for personal use, Gogs, Gitea, or OneDev (or many other solutions I won't even try to list) are way better. I run Gitea on my 2G-RAM VPS next to a dozen other services; I would need double or triple the resources to run Gitlab alone.
It's not ideal, I agree - the main reason I use it over lighter & more open services is the integrated CI. I've considered replacing it with sourcehut, but I don't have available hardware to dedicate to the VM-based CI.
onedev I did not know about, and looks exceptionally tempting on that basis.
Genuine question: why has CI become so prominent that even small projects cannot live without it? You have a small instance with two people using it. Why is CI a deal breaker?
I used Gogs before Gitea, and when it split I watched for a few months and then switched.
The Gogs maintainer was unreachable for months at a time, more than once, while the Gitea fork is very active. It implements features I care about, and even though they move fast and break configs, the problems are documented; I've had only a few minor issues over the years, and a few of them weren't even their fault.
So I'm a happy gitea user even when there are things I dislike.
I agree with this too. I witnessed the split too and feel exactly the same.
Interestingly, NotABug has not switched to Gitea and is still running Gogs.
We had this conversation at a startup I worked for that built a feedback tool. We often had conversations with early adopters about how to improve the site. An external designer then stated the obvious to us: we should simply use _our own_ tool to gather that feedback.
Remember that this thing is developed by people under the jurisdiction of the Chinese government. They host their source on Chinese servers and are firmly under the control of a near-enemy state.
I would avoid using it if at all possible. You do not want your source code on a platform operated by those notorious for stealing intellectual property.
Open source isn't important; the primary developers are. The source can be open, but if everyone who works on it is compromised, the source is compromised, unless you go and read all of it yourself.