Gogs – Go Git Service (gogs.io)
341 points by of on March 28, 2016 | 183 comments



> How to use downloads?

> 1. Extract the archive.

> 2. cd into the directory just created.

> 3. Execute ./gogs web and you’re done.

Can't beat the simplicity of running Go applications. It's funny, because running a compiled binary is so incredibly basic to computing, and yet 90% of the time installing a new shiny toy on a server involves dealing with 342525 dependencies, half of which broke because god knows what dependency wasn't targeting the right version. And you sit there debugging dependencies and errors and pulling your hair out for two hours, trying to get someone else's mess to work, wondering how we got this so wrong.
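Since nobody here can download a real release in-thread, this sketch builds a stand-in tarball first (the "binary" is a fake script that just prints a line); only the last three commands are the actual quoted steps:

```shell
# Build a stand-in release archive; a real one would come from gogs.io
# (the gogs "binary" here is a fake that just prints a line).
mkdir -p gogs
printf '#!/bin/sh\necho "listening on 0.0.0.0:3000"\n' > gogs/gogs
chmod +x gogs/gogs
tar -czf gogs_stand_in.tar.gz gogs
rm -rf gogs

# The three quoted steps:
tar -xzf gogs_stand_in.tar.gz   # 1. extract the archive
cd gogs                         # 2. cd into the directory just created
./gogs web                      # 3. execute ./gogs web
```

The archive name is hypothetical; a real Gogs release listens on port 3000 by default rather than printing one line and exiting.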


A lot of Java software is "(1) download the jar file, (2) java -jar thefile.jar". Agree it's good for the user, but when people talk about distributing a statically-linked program the "what about security updates" question invariably comes up.


A lot of Java software is also "(1) download the tar, (2) run bin/whatever which is a 500 line bash script for setting up the jvm environment"


6 of one, half a dozen of the other.

As long as it's behind a simple interface and it works, I don't care what it actually does.


Well, it's more like:

(2) run bin/whatever which is a 500 line bash script for setting up the jvm environment that has a 50% chance of not working if you are on a distro that's not Ubuntu or RHEL, and probably also fails if it takes a filename and you give it a filename containing a space.


Well that sounds like a personal problem. Who puts spaces in a filename on UNIX? That's just bad form, and anyway, if you did decide to throw prudence and best practices out the window, you should properly escape it.
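For the record, the fix on the script side is just consistent quoting; a minimal sketch (the `size_of` helper and filename are made up):

```shell
# A filename with a space: unquoted expansions split it into two words.
touch "my file.txt"

size_of() {
    # Quote every expansion; `wc -c < $1` unquoted may break
    # (bash reports an "ambiguous redirect").
    wc -c < "$1"
}

size_of "my file.txt"
```

Quoting `"$1"` everywhere is cheaper than arguing about which filenames users "should" create.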


And, just like that, you are part of a botnet.


The truth is, nobody personally vets the source of all the software they use. Even people who compile everything themselves aren't reading the source code that came over the wire. It baffles me when people fret over "don't blindly run that curl|bash installation script because security!" when nobody would bat an eye at a .deb downloaded from the same web site.


There are many that take an interest in securing their workstations, just as there are many negligent individuals who blindly run things they download from somewhere.

A minimum amount of care when installing things (checking signatures/checksums, reading install scripts, etc) should be common sense and not doing that is grossly negligent.
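The checksum half of that minimum care looks something like this (filenames are stand-ins; the "download" is created locally so the commands can actually run):

```shell
# Stand-in for a downloaded artifact:
printf 'release contents\n' > gogs.tar.gz

# The publisher ships a checksum file next to the artifact
# (it must arrive over a trusted channel, or it verifies nothing):
sha256sum gogs.tar.gz > gogs.tar.gz.sha256

# The consumer verifies before extracting; exits non-zero on mismatch:
sha256sum -c gogs.tar.gz.sha256
```

A GPG signature (`gpg --verify`) adds the missing piece: it binds the checksum to a key, so an attacker who replaces the artifact can't simply replace the checksum file too.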


What's the point of reading an install script if you don't also read the source of what the script is installing? That's like a proofreader only editing the intro paragraph and then signing off on the entire essay.


While I'm all for downvoting the grandparent, I'd say reading the installer script is wise even if you trust the source. Developers aren't good at shell, much less portable shell, and such scripts are usually far inferior in quality to the main application code (written in pretty much anything that's not shell). Since an installer typically manipulates privileged locations on your system, it's also playing with fire. Taking a few minutes to establish whether it was written competently can save a lot of pain down the road!


While that's true, you can also say the same about the preinst/postinst in that .deb package hosted on someone's PPA. Also, while every .deb/.rpm must be installed using root privileges, many (not all) shell script installers don't require root and just perform a local install, making them less dangerous by nature.


The point is that many install scripts do insecure things, such as downloading over unencrypted channels, demanding sudo without a good reason, and generally not being careful to have minimal impact on the machine.

At the minimum I want to know what it does and where it puts stuff.

This assumes that the software package itself is trusted, which is (still?) a reasonable assumption to make for open-source and big name companies.

By the way, it's poor form to downvote someone just for disagreeing with them.
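The "know what it does and where it puts stuff" step can be as cheap as downloading to a file first instead of piping curl straight into sh (the URL and script below are local stand-ins, not a real installer):

```shell
# Stand-in for: curl -fsSL https://example.com/install.sh -o install.sh
printf '#!/bin/sh\necho "installing into $HOME/.local"\n' > install.sh

# Look first: where does it write, does it demand sudo, what does it fetch?
grep -nE 'sudo|rm -rf|curl|wget' install.sh || echo "no red flags matched"

# Only then run it:
sh install.sh
```

The grep is a crude triage, not an audit, but it already answers the two questions above before any code executes.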


Sure, but if I'm really worried, egress filtering will catch it or the VM won't be allowed to connect out to any non-whitelisted hosts.

Security is about defense in depth, not just "don't run anything you haven't read in detail" - have you read the code of the web browser you posted with?


Unless you audit the code you download, you could still be pwned. Compiling it yourself or downloading from GitHub isn't some magic talisman.


It is still easier with static compiles if you ask me:

"Make sure you're running Gogs 1.64.2 or above"

"Make sure you're running Gogs 1.64.2, OpenSSL 1.2e, libcrypt 3.73, leftpad 2.0..."

Especially when the deployment of said binaries doesn't involve anything installed on the host OS.


"Make sure you run Foo 1.7, Bar 2.1, Baz 2.3, ..."

"Make sure you update to OpenSSL 1.2e and restart all services."

If you think of more than one program using the same module, shared libraries make a lot of sense.


You do run into false dependencies causing needless headaches with dynamic libraries though. There's a vulnerability in hash X in OpenSSL. One (or worse zero) of my 15 installed applications that depend on OpenSSL actually uses hash X, but in order to upgrade, I have to carefully manage those 15 dependencies. With statically linked dependencies, the hash X code wouldn't be in any of the other binaries and I only update the one (or perhaps don't have to update at all).


What do you mean by managing dependencies? You upgrade to an OpenSSL version that fixes hash X and be done with it? I don't understand your point.


I guess under the assumption that everyone follows sane versioning practices, maintains alias symlinks, and/or has really solid package management and compatibility testing, this isn't such an issue. If the system admin is compiling a lot of their own libraries and binaries that don't necessarily follow those best practices, it becomes more of a problem. The system admin doesn't know what functionality in the library is used by each binary and whether or not the upgrade is really necessary.

If the binaries are self-contained and statically linked, the maintainer of the actual software can stay abreast of security updates in the libraries they are using and ship updates only when vulnerabilities affect functionality that they use. The sysadmin then only has to download updated binaries if truly necessary, and tracking dependencies on the end-user system becomes unneeded.

I'm not really arguing that it's a better setup. It does mean that software maintainers have to stay active and on top of security patches. The complexity is inherent; it's just a matter of shifting it around onto different people.


Can't upgrade if the fix was dropping the hash because it's fundamentally flawed.


That would be an ABI incompatible change. These are only allowed when increasing the compatibility version (SONAME). You would immediately notice such a change. For the usual package managers that would even require to create a new package.


They do, but static linking means the developer actually does have control of the environment. What happens if you ship the latest and greatest binary but the shared libraries aren't updated by the user?


That is why libraries guarantee API/ABI compatibility within a version number (SONAME). The package manager of your distribution will be responsible for updating the required dependencies if needed.

Of course, if the user chooses to install software outside of the package manager, it is their responsibility to upgrade dependencies accordingly.


> It is still easier with static compiles if you ask me

You'd be surprised how many users have no idea whether they are running the 32bit or 64bit version of their OS. For them, running a Go binary will be anything but easy.


You'll have the same problem with dynamic linking. It's even worse when your user has to know which version of Python/Java/etc they need to run your software on. The best case for your user is a static native binary behind a package repository.


Now I have to install java and keep it up to date, in addition to the jar.


Only if you don't deliver it alongside the application, which is what statically compiling the Go runtime means.


Are there many java applications that bundle the jre?


I really wish more applications would do it. Here we ended up running two different RDS servers, with users having to manage running applications from different environments simultaneously.

Because some Salesforce product required a specific version of Java and Oracle's PMS software required a specific, different version of Java. And both vendors only support their products by having it run with only the correct version of Java installed.

And then I have this VM with this ancient version of Java I can't even find for download any more so that I can use the management interface for Brocade FC switches.


Yes, it is quite common in desktop applications to do so.

Usually you get the option to use the bundled one or whatever is available if one is detected already on the system.

To the point that it is one of the options of the new Java application packager introduced with Java 8.

Also in all server deployments that we do for our customers, Java is just another configuration of the whole application packaging.

Also many companies don't mind paying for commercial Java AOT compilers to native code, which then puts their Java code in the same league as Go.


Yeah, the missing bit is that this should be done by OS vendors which sadly is not true.


If package management is important to you, you might want to try an operating system that integrates it natively.


The point of "run anywhere" is to not have to search for a special operating system.


I wouldn't conflate the "run anywhere" aspect with OS-level package management.

How operating systems handle updates is unrelated to Java, and affects other applications on the system as well. Think about other "run anywhere" technology like Flash, PDF (Adobe Reader), Microsoft Office, Browsers -- they also don't have a great security story and need constant updates. Adobe and Sun decided to ship their software with "Update Managers". Everybody loved them (not), but at least they fulfilled their purpose. 10 years later, Google added background updates with binary patching to Chrome on Windows and Mac OS (not needed on Linux). They decided to just install updates without the user's consent, because that will at least keep them safe. Google can get away with it because everybody expects them to know what they're doing. But I certainly don't want downloading code at runtime to become the norm.

That's why distributing updates for third-party apps should be an operating system feature, as it is in all popular Linux distributions. Microsoft and Apple, please feel free to copy.


> That's why distributing updates for third-party apps should be an operating system feature, as it is in all popular Linux distributions. Microsoft and Apple, please feel free to copy.

That's exactly what I meant when I said

> Yeah, the missing bit is that this should be done by OS vendors which sadly is not true.


Don't forget (0) install the right JVM. This has caused me much grief in the past, at least on Linux.


Assuming you have the JVM installed.


Yes. Digging up dependencies is the worst. Managing another fucking build tool is the worst. To be honest, anything short of clicking a thing is the worst. Because there's no reason it can't be as simple as clicking a thing.

I've always believed that 100% of all dependencies should be included in projects. Whether that is raw source code, amalgamated source code, static libs, or dynamic libs I don't care. Give to me the things I need to run and/or build your project. Don't make me hunt shit down.


In itself, bundling dependencies is not necessarily a bad thing, but it often leads to really bad practices.

Once you bundle dependencies, you take on the job of tracking those dependencies for security fixes and bug fixes. Otherwise you end up exposing bugs and security issues to your users.

Too often, I've seen applications (especially in the "Enterprise" Java world) with hard-pinned versions from 2, 3, 4 and even more years old, with tons of bugs and tons of security issues.

Bundling makes sense in many cases; for example, if you distribute a product, it lets you control most of the stack and factor maintenance across several platforms.

Not bundling also makes sense; for example, if you are a distribution, you have tight control over what you put in, and it reduces the maintenance effort because you don't have to track x, y, z each in versions n, m, o.

Bundling or not is a trade-off; either way, it must be done properly.

Dependency handling is a complex task, and yeah, it sucks. And the problem won't be solved technically; it will be solved with proper organization, conventions and normalization.


> I've always believed that 100% of all dependencies should be included in projects.

Including libc, Mesa, and libX11?

It's pretty untenable to do this in the limit. Eventually you're forced into using a good package manager, and once you have one you might as well use it for the rest of your dependencies too.


I make video games. There's no such thing as a package manager that supports Linux, OS X, Windows, iOS, Playstation, and Xbox. So from my perspective no. No you aren't forced into something that doesn't exist. ;)

There is a line. You shouldn't include operating systems. Nor language tools (C++ compiler, Python installer, etc). Nor hardware SDKs (DirectX, OpenGL, etc). But beyond that? My vote is strongly in favor of including dependencies.

Compiling OpenSSL on Windows is a god damn nightmare. I shouldn't have to fucking install Strawberry Perl. SQLite is beautiful in comparison. It's two .c files and two .h files. You can also download pre-compiled libs if you want.

Almost all open source projects could easily be single file. If it can be single file then, in my opinion, it should be.


I don't know if I'd go quite that far, but yeah if you have to do stuff cross-platform, packaging your own dependencies is a great solution.


GitBucket [1], a Git hosting system built in Scala, is as simple as typing "java -jar gitbucket.war"

[1] https://github.com/gitbucket/gitbucket


A 40MB download plus the requirement that a Java Runtime (~100MB) be present is not quite the same. Though to their credit, they have packaged all jar dependencies besides the JRE in WEB-INF/lib.


That's how WAR files get built. It's a very standard mechanism for deploying Java apps.

I think having Java on your machine is a reasonable requirement. I mean the equivalent in Ruby would be to have Ruby, install all gems and THEN launch the app.


Why is Java on the machine a reasonable requirement? The Gogs service does not need Go to run, so having Java is an extra requirement for GitBucket.

I know about packaging and running Java web apps. I am pointing out that compared to Gogs, a Java-based solution has the extra requirement of a JRE, which I have to keep patched and updated. And this is independent of any fix/improvement to GitBucket itself.


Your patched-and-updated argument cuts both ways.

Are you saying that if Go gets updated, you will not have to rebuild your Gogs binary? Go does not provide a better security infrastructure than Java. I grant that just running a binary is cool, but as I said, having Java is OK as far as requirements go IMHO; others may have different opinions.

At least it's better than installing a Ruby app.


If you actually want to run this software, JVM is a reasonable requirement: it's well-known, well-tested, easy to install, and comes in one package.

It's not zero requirements, though. (Go binaries still need platform-specific libc, AFAIK, but it's usually also a reasonable requirement.)


Actually, they mostly don't. Go doesn't use libc, fortunately.


One runtime dependency isn't bad, but it's still worse than zero. :/


Did libc stop being a runtime dependency when I wasn't looking?


I don't know if libc ever was a Go dependency, but it isn't today.


I spoke imprecisely; you are correct. That said, of the software that has escaped the Golang pits and weaseled its way into systems I am forced to maintain, I don't think any of it lacks a cgo, and thus a libc, dependency.


Runtimes like ruby and python are quite commonly found in the default installation of many distributions (and OSX, IIRC). Java is not.


Except with go I don't have to have anything else installed. And that is the way it should be.


It's mostly a result of trying to avoid duplication. It's a non-issue for a server running one service, but quite relevant on, for example, a dev workstation with dozens of projects, or for smaller tools – imagine if every system command brought its own dependencies.

But the trend seems to be towards sacrificing disk space for isolation, e.g. npm, which makes it more or less impossible to use a system-wide 'node_modules'.


> Can't beat the simplicity of running Go applications.

... as long as you downloaded the right binary.


This is a dev tool; devs usually know which arch they are running. Besides, you can detect the arch using JavaScript, like Google does on the Chrome download page: check window.navigator.platform.indexOf("64"), then show the right link by checking window.navigator.platform.indexOf("Linux") and so on for each OS. Easy as pie :)


This is true of any application; still better than a dynamic binary


If this is so desirable, why haven't people been shipping open source C and C++ software this way? It's not like Go invented static linking.

Not just C and C++, either -- Python, Ruby etc software could be shipped as statically linked executables.


People often _do_ ship C and C++ software this way with dpkg.

Static linking is sometimes discouraged because it makes it harder to identify what needs to be upgraded in case of (for example) security vulnerabilities in core libraries like libc.

Could you elaborate on how Python software can be shipped as a statically linked executable? I'm not familiar with an easy way to do that, and it would help me quite a bit.


It's not really a static Python .exe, but PyInstaller can create a single executable that embeds all of its dependencies in the file. Except for a small delay at startup it works really well.


This is how I distribute Grow (grow.io), a Python program. It's worked really well so far, but it's also produced a few issues in development, for me.

For example, PyInstaller itself changes drastically from version-to-version, and I've previously had to spend hours picking away at why "compiled" Grow worked before but no longer works post-PyInstaller updates.

Overall happy though, and much happier to write in Python and distribute a single executable without requiring folks to muck around with Python versions, pip, dependencies, etc.


The python distribution story is far behind Go. This article goes into more details: https://glyph.twistedmatrix.com/2015/09/software-you-can-use...

PS: I'm the original author of PyInstaller. I obviously love it but it's a gross hack around the fact that Python doesn't have a serious distribution story.


I've been using Gogs now for a few months and can't recommend it enough as opposed to GitLab. It does have some rough edges (for example the many instances where its CSRF protection feature misfires), but generally it's a very solid and performant git frontend. Although a plugin or widget system would be nice, it was also easy to extend the Gogs UI simply by editing its very straight-forward template files.


Same here. Gitlab is soooo extremely slow. Even on their own site many pages take several seconds to load. And no one really seems to care.

Gogs is a great relief.


GitLab.com is slow because of operational issues, like everything running from one NFS server. GitLab on your own server with enough memory should be fast. The slowness of GitLab.com is unacceptable to us, work to improve it is ongoing in https://gitlab.com/gitlab-com/operations/issues/42


Can't confirm that. I have to work with GitLab with different customers on several independent project teams. No installation feels fast. Memory is >2GB. I also have it here on a machine with 4GB. No page loads in under 1 s. And the Ajax requests seem to take even longer than a full page reload. It's a pity that you now load almost everything with Ajax and there's no option to turn that off. I often find myself clicking a second time on a link because the browser spinner doesn't move and your progress bar is not really where my focus is.

I had been using 5.1 for quite some time before. It wasn't really fast either, but things have gotten much worse with newer versions.


Thanks for your feedback. As we added more and more functionality to GitLab, it probably has gotten slower. For a few months now we have had 2 people working full time on improving performance. As you can see in https://gitlab.com/gitlab-com/operations/issues/42 our focus is GitLab.com performance, but many of the changes should also improve the speed of on-premises installations. Changes include fewer calls to git, better caching of markdown rendering, etc. We already saw a lot of improvement in GitLab 8.5 https://about.gitlab.com/2016/02/22/gitlab-8-5-released/ and we plan to continue making progress here.


Sounds good. But honestly, since I've read this, my faith in the Ruby/Rails stack (or is it the GitLab devs?) is somewhat limited: http://doc.gitlab.com/ce/operations/sidekiq_memory_killer.ht...

A quite forthright confession of memory leaks - with, IMO, questionable countermeasures.


Blame us more than the stack. These memory leaks are hard to diagnose. Our meta issue to do this is https://gitlab.com/gitlab-org/gitlab-ce/issues/3700 We're currently more focused on improving responsiveness and plan to address the memory leaks and a multithreaded app server https://gitlab.com/gitlab-org/gitlab-ce/issues/3592 after that.


Are you still on Azure or have you moved on? You had several blog/Twitter posts about how unreliable Azure is and documented some quirks like failed reboots. But arguably you got Azure cloud for free. Don't look a gift horse in the mouth. But what's worse? I've been reading about the problems since November 2015. Maybe invest in Google/AWS cloud and don't run it on a single NFS server.


We're still on Azure. We got a lot of help from Microsoft to work out the things we encountered. They have been very helpful and we plan to stay on Azure for now. Getting rid of the single NFS server is our next step.


Properly tuned NFS should be extremely fast... I've managed NFS clusters for HPC that exceeded 800Mbit/s and 20k IOPS


The problem is that we should distribute the load over multiple servers https://gitlab.com/gitlab-com/operations/issues/1 Right now the lonely repository server has over 200k context switches per second.


Your engineers probably know this, but CephFS is not well known for handling high IOPS loads; make sure to architect accordingly :)


Thanks for the advice, do you maybe have a link with more details?


Mirantis gave a talk last year at OpenStack with real-world numbers... They had to scale to 500 storage nodes to achieve 50k IOPS aggregate.

https://www.openstack.org/summit/vancouver-2015/summit-video... https://www.slideshare.net/mobile/mirantis/ceph-talk-vancouv...


Wow, thanks for those numbers and the links, I've added them to https://gitlab.com/gitlab-com/operations/issues/1#note_44836...


Tried both Gogs and GitLab at work. Gogs is faster, but GitLab has the most features and the best chance of being introduced. Also, we are happily using Mattermost, which is created by the same people. Gollum also looks like a solid choice of wiki.

Also note that the latest version of GitLab is a lot faster than it used to be.


Glad to hear our focus on improving speed is paying off, we'll continue it in https://gitlab.com/gitlab-com/operations/issues/42


Would be nice to have a list of features which Gogs offers and GitLab is missing! We are happy using GitLab...


I would guess the only features Gogs has over GitLab are low resource usage and easy installation, which for someone like me, who does solo side projects and likes to commit to my home server, is reason enough.


That was the deciding part for us in the end. While Gogs is super lightweight, GitLab is so feature rich, stable and user focused that it's the only piece of Ruby running in our stack.

We were right in the decision phase for a CI and had already set up a Jenkins machine when GitLab integrated their CI, which has been super simple, reliable and well integrated so far, no comparison to the monster that Jenkins is. Recent work on performance has been stellar, setup was quick, including backups, and upgrades have been super painless; they really got the important parts nailed down.

Can totally recommend it, and it's really got the traction to both catch up with GitHub and also offer unique features. The pull/merge request workflow isn't as refined, and better code review support would be high on my wishlist, but overall it has been a great choice so far with few regrets.


Thanks for your kind words! What do you think we can improve in the pull/merge request workflow and in code review support?


make ?w=1 a default behavior

haven't tried other alternatives but a happy gitlab user. you guys rock. the latest gitlab is much faster.


Great suggestion, I created https://gitlab.com/gitlab-org/gitlab-ce/issues/14723

Glad to hear you're happy with GitLab.


I wonder if Gogs will have to change its ui.. https://news.ycombinator.com/item?id=11374786


They should. Github obviously spent countless hours perfecting their UI. Other apps shouldn't be allowed to copy it. (I have no affiliation with Github, other than that I'm a developer and respect the hard work they've put into their product.)


It is even worse than copyrighted APIs. Such an attitude would allow Microsoft to sue KDE for the Windows look-and-feel and LibreOffice for the Office look-and-feel, even without patents. Good features and ideas should be copied as widely as possible instead of reinventing the wheel for the sake of difference.


Someone needs to pay the bills.


Does the argument really change if they spent a weekend on the UI?


Yes, because if the design is trivial, or obvious given the problem domain, then the thing being copied doesn't meet the bar for being a creative work.


Good design is obvious because you can reuse many existing ideas. What if you had to redesign tabs, search fields, logos that bring you to the homepage, scrollbars, buttons, focus, text cursors and all the other things from scratch?

If there is no objective way to set the bar you mentioned, we end up with double standard.


I'm surprised it's not using material design.


We downloaded and installed Gogs, Gitlab, and Bitbucket. I liked Gogs the best, but Gitlab seemed more enterprise-y, and Bitbucket had an issue we could never figure out. We're trying to replace TFS, so we'll probably end up with Gitlab.

But since I was the one doing the installing, I sure wish we'd gone with Gogs. It was 5 minutes from start to finish.


What can we improve in GitLab to make it better? Did the install take too long? Did you try our Omnibus packages or a source install?


I tried installing both the Omnibus package and from source. Both times I could not get it completely working, on CentOS 5.3 x64. I eventually got everything but git commits over SSH working, but I quit trying after it looked like I had to give up port 22.

I was just installing for evaluation, and hopefully our sysadmins will do the final install.

Gogs was just 'install Go', 'run binary', 'link to database/git ssh user' done.


I'm sorry to hear you had problems installing GitLab. Maybe it was an SE Linux problem? https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc... You can configure an alternative ssh port for the Omnibus packages in https://gitlab.com/gitlab-org/gitlab-ce/blob/30e4d3ce9a18340... If you encounter problems again, please email support@gitlab.com and include a link to this comment.


I think I remember that most of my issues were SE Linux related. We liked Gitlab and are going with it, I just personally found Gogs a lot easier and simpler to get going and the missing features weren't that important to us. Thanks for the helpful links.


Thanks for the feedback. Glad to hear you're going with GitLab. If you have any ideas on how to improve our SE Linux experience, please let me know.


Why did you have to install Go?


I've tried and failed three times to get Gitlab to send me a sign-up email. I can't get it to work. I can't figure out how to disable the confirmation emails either. I'd be using a self-hosted Gitlab to try to fix the speed issues but because of this issue, I cannot. So, instead, I'm a refugee on your public service until I convince my coauthor to eat another repository migration to a private Gogs instance.

I got gogs working in 5 minutes like OP said.

Also Gogs is >>>>>>>>>>>>>>>>>>>>>> faster. It baffles me that I can be pushing an initial commit to an empty repository and my terminal will hang for 10-15 seconds. What's going on with that?


Thanks for the feedback. Aroch posted a good Mailgun config in https://news.ycombinator.com/item?id=11374883 but your post gave me the idea to disable confirmation emails by default https://gitlab.com/gitlab-org/gitlab-ce/issues/14684

I'm not sure what is up with the initial commit issue you describe and made an issue to investigate https://gitlab.com/gitlab-org/gitlab-ce/issues/14685

Thanks again!


If you're self-hosting, I would use mailgun with the below config in your /etc/gitlab/gitlab.rb

https://gist.github.com/ar0ch/b33a9232776504bd83ac


I spent an hour or so with GitLab and disliked the amount of whitespace. I shouldn't have to scroll so much to see what's in a repo.

</minor-annoyance>


Thanks for the feedback. I added your comment to https://gitlab.com/gitlab-org/gitlab-ce/issues/14548


We use JIRA, so by default we are shoe-horned into Stash/Bitbucket. I've used GitLab on my own and found it beyond easy to use and very reliable. I see GitLab people posting every once in a while on Hacker News, and they seem very interested in improving the product overall. Contrast that with Atlassian, who seem to have no interest in improving their core product, only in enabling new revenue-driving products or costly extensions. So thanks.


Glad to hear you like GitLab. Did you know that we merged our excellent JIRA support into the open source GitLab CE? See http://doc.gitlab.com/ce/project_services/jira.html


The things I miss the most using GitLab vs GitHub are the instantaneous merges and the syntax highlighting within diffs when reviewing an MR. I'm on an old-ish version of GitLab, though, and it's running on a single VPS.


We have had syntax highlighting within diffs since 8.4 https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/2109 so please consider upgrading.

I don't understand what you mean by instantaneous merges; did merges seem faster in the past? We do have a feature that allows you to merge when the build is successful, saving many people a lot of time http://doc.gitlab.com/ce/workflow/merge_when_build_succeeds....


We're on 8.1, I think. We are running a largeish C# project with largeish diffs; when I click "merge" the app spins for a good few seconds or more before completing the merge. I don't think I've ever seen that sort of delay on GitHub. I wonder if you could pre-merge in a separate system branch, then do some git magic to rearrange things once the user actually clicks merge?


Pre-merging when you look at the merge request is an interesting idea. Technically that will work but it will add complexity and use more resources. Feel free to make a feature request for it. At this point I think there is lower hanging fruit we can pick to improve performance.


I wouldn't prioritize it if I were you either ;)


Thanks for understanding :)


I'd like to run Gitlab on FreeBSD and never could get it working. So for my side work projects where all the hosting is on FreeBSD I'm using Gogs. A port would fix that.


Thanks for the feedback. All the FreeBSD support right now is unofficial https://github.com/gitlabhq/gitlab-public-wiki/wiki/Unoffici... and https://gitlab.com/gitlab-org/gitlab-recipes/tree/master/ins...

We would love to support it officially but we do need customers to express the same. We're currently talking to a client that might make this happen, so I'm hopeful.


What issue did you have with Bitbucket, if you don't mind my asking? :)


When viewing merge history, we got a generic modal error. I opened a bug with Atlassian, but we never got it resolved. Their first response wasn't the solution and I had limited time/energy to devote to debugging it.


That's entirely understandable--and thank you so much for raising it with us. I'd be curious to know what the issue ID is; my JIRA-fu is failing for what you specified. That said, again, thanks for giving us a shot. :)


Looks like it wasn't merges, but commits, that were the issue. 12986 looks like the issue ID, but it has been closed.


If you're still open for suggestions/alternatives, have a look at Phabricator. August-December of 2014 I looked at several different solutions for my company (though primarily my focus was on code review features - today we also use it in some areas for task management and documentation/wiki). I nearly moved forward with GitLab before (re)discovering Phabricator. I like to keep up to date with new features in the alternatives (GitLab, GitBucket, Gogs, Reviewboard, etc.) but at this point I can't imagine wanting to move off Phab.

Homepage, http://phabricator.org/

Their self-hosted install (browse but they prefer that you don't create/test data on this install), https://secure.phabricator.com/

They have paid hosting as well, http://phacility.com/


If you're migrating from TFS VC (that is, from pure TFS VC), try Mercurial instead of Git. It is (IMO) much friendlier to people just coming to DVCS, which is a big plus.

For repository hosting, since you appear to be a Windows shop, try HgLab[0] (disclaimer: I am the creator of HgLab).

[0]: https://hglabhq.com


I wrote about how and why I switched to Gogs from GitLab and Bitbucket here: https://aaronparecki.com/2016/02/13/18/


I switched to gogs from GitLab and haven't looked back. The installation literally took 10 minutes including all of the sysadmin work.

Main reason was the resource utilization of GitLab was just too high. iirc, it was actually the CEO of GitLab that recommended the switch... ;)


Yes, if you are resource constrained (less than 2GB of memory) and using it privately I think Gogs is a great choice.


Has been so far, thanks again for the recommendation!


Sorry, but even with 2GB Gitlab is painfully slow. And it also happens on gitlab.com where I assume resources are optimized. Loading times of up to 3 seconds for a stupid list of issues are not really acceptable for a web application in 2016. In fact, they never were.


For extra ease and security (two things not often found together!) try setting it up in a Sandstorm server, works with one click and is sandboxed from the rest of your server. https://sandstorm.io/


Took me about 3 minutes to have it running on a free account.


Thumbs up, docker image (https://github.com/gogits/gogs/tree/master/docker) works great and is easy to setup. I've had it running for a while on a machine and stopped paying for github :)


We run Gogs on a bog-standard $10 VPS and it works great; quick, snappy, reliable. Very efficient.

Can't recommend it enough. Resource usage matters.


Been using this for a while now and very happy with it. I decided to stop using Github for personal projects due to political disagreements with the company and the fact that git should really not be centralised.

Gogs has probably 95% feature parity with Github and it is a lot faster (is Github still Ruby? That would explain it ...)

I run a personal Kubernetes cluster for services and getting Gogs up and running was super-simple: https://git.tazj.in/tazjin/infrastructure/src/master/gogs/go...


How can you compare speeds when you can't run GitHub on your setup? Running a multimillion user service is vastly different from running your own personal server, right?


End-user usability feel (perceived speed) is all that matters, no ?


Sure, but you can't conclude that the language is the source of GitHub's (relative to this) lesser speed.


Why would ruby explain it? Do you think most of the time you spend waiting for GitHub pages to load is due to CPU load on the application layer? Why would we expect this to be the case?


I don't think grandparent has any concrete reason to think that Ruby is at fault. When you said "due to CPU load on the application layer" you are probably already thinking further than s/he did. My guess is that s/he's just performing a middle-eyebrow dismissal based on some negative feeling towards Ruby.


But you lose the "social" aspect of GitHub, right? How do you deal with this?


When contributing to other projects I still use Github as it's of course up to the authors to decide where they want to host their code.

For my own projects it doesn't matter so much, few people contribute to my Emacs configuration or dotfiles :)


Just curious, where (AWS, DO, GCE or other) do you run your Kubernetes cluster ?


GCE, it works extremely well there!


I have been using gogs at home for a few months now as well. It has been great for all of my needs. Like version controlling little python scripts I am playing around with, or my resume, term papers, etc in LaTeX. Stuff that will never be public that I would like to have a little more control over.


Yes! I've been using it for this purpose as well, and I love it. Easy to install/update, great GUI for all my needs, and runs on my home server. If you aren't using it at home, you need to start yesterday.


If anyone is interested in trying it, the packages are pretty good at keeping your installation up to date:

https://gogs.io/docs/installation/install_from_packages

It is what I use for my misc things at home.




I use gogs and github. Both are great git backends, featuring simple collaboration management. The issues and wiki, however, feel too primitive to be useful. I can't really decide if they are already bloated or if they have not reached MVP yet.

Anyway, I would love to see better integration on both with their mandatory complements, such as kanban, CI, ...


Kudos on the polished project release. I am relatively new to go, and I am curious about the technology stack behind such a webapp. How does it work under the hood?

- How do you develop such web apps with html, css, javascript, go, etc. all interacting with each other?

- How are static assets packaged in a single binary?

- Any simple tutorial or stack walkthrough you would recommend me reading?

thanks!


- As it happens, this is also on the front page: https://astaxie.gitbooks.io/build-web-application-with-golan...

- I also recommend going through Go's example wiki tutorial: https://golang.org/doc/articles/wiki/

- By default, Go binaries are statically linked. If by static assets you mean CSS, JS, etc., those are usually just deployed alongside the binary.


Why is software still being named for the language it was written in?


For everyday software this is a fair criticism, but this is software made FOR DEVELOPERS. All things being equal between two solutions, I am going to take the one implemented in a technology I know and can hack on more easily.

Also, many of the most popular solutions in this space are written in what some people would call "low performance languages"; since the author here is using a "high performance language", it's just good marketing too.


I thought it was because of "Go"ogle but then I clicked through the link. It's pretty interesting that they've launched this because I feel like this is GitHub's bread and butter for enterprise.


So the haters at /r/programming know to stay away?


https://notabug.org/ on a fork of Gogs


Do you know of any continuous integration systems that fit well with Gogs?

Something small, easy to setup, easy to use. Just as Gogs is.

All I want is integration with git/gogs (webhook?), a status page (with detailed build/test info, especially for failures), and a status image (for the readme in gogs).


I use gogs at home, but run GitLab at work - it just has so many more features and is far more customisable. Also GitLab CI rocks; it seems to scale really well - 60 active developers on a tiny VM, it's lightning quick, and we can easily do more than 100 releases a day.


Glad to hear that you like GitLab CI. Continuous Delivery FTW!


I've been running this on my raspberry pi for a few months now, it works great!


How viable would alternate backends be? Like say, AWS CodeCommit?


The installation doc is not complete; it's not quite as simple as 'unzip; ./gogs web'. You need to create a database, set up users, etc. Those really should be documented for a good first-time experience.

Also, after installation it redirects to localhost:3000 instead of my-remote-host:3000, so you get a dead page right after installing.

Yes, these are easy to fix, but it would be good if they were documented.
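For reference, both issues come down to a handful of settings in `custom/conf/app.ini`. The values below are illustrative (the key names follow the Gogs configuration cheat sheet; the hostname and credentials are placeholders):

```ini
; custom/conf/app.ini -- illustrative values, not defaults
[server]
DOMAIN    = git.example.com
HTTP_PORT = 3000
; ROOT_URL is what fixes the post-install redirect to localhost
ROOT_URL  = http://git.example.com:3000/

[database]
; DB_TYPE can also be mysql or sqlite3 (sqlite3 needs no separate setup)
DB_TYPE = postgres
HOST    = 127.0.0.1:5432
NAME    = gogs
USER    = gogs
PASSWD  = secret
```

With `DB_TYPE = sqlite3` you can skip the database/user creation entirely, which gets you close to the advertised unzip-and-run experience.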


Is there any Gogs.com hosted version?


Yeah, there's https://notabug.org/


Nope. They have an instance at https://try.gogs.io but it's only a demo. The only public instance that I'm aware of is http://notabug.org but

- they only accept free projects

- they're still using a very old version of Gogs


Does it run on Heroku?


Probably not without major changes - it looks like it stores a lot of stuff (including git repos and uploaded attachment files) on the file system.


Github killer free app is on Github :)


It kills me when people build GitHub alternatives that are hosted on GitHub.


Why? Github is about distribution and collaboration with a community. This product seems to be about private self-owned hosting.


Then why aren't they hosting on their own product?


Because that would be public hosting, parent just explained that gogs is great at _private_ hosting. When it comes to public hosting of open source projects with network effect, github is still king. I don't think anyone denies that.


I'd answer, but I worry you wouldn't read my second reply either.


because most people already have a github account


Different scale?


(Pitching my own company https://cloudron.io here)

If you want a single click install on a _private_ server and with a custom domain - https://cloudron.io/appstore.html?app=io.gogs.cloudronapp. We track Gogs releases actively and keep it up to date with no effort on your part.

My own repos are hosted on gogs - https://git.girish.in

Edit: Since I got asked a couple of times about this (wow, you guys are fast), anyone can write apps for the Cloudron. It's docker based and you can find the Gogs app code here https://github.com/cloudron-io/gogs-app


How does this relate to that other similar tool the name I just forgot?


Sorry, which tool :-) ? Docker compose?

Cloudron gives you a private server (we use DigitalOcean right now) on which you can install web apps. We automate everything about maintaining your server - DNS, certs, app updates, backups etc. In short, we want to make it possible for everyone to have their own server. Cloudron is more a consumer product than a development tool (of course, we have tooling that enables developers to create apps).



The products are similar. Do give both of them a try.

We have a demo at: https://my-demo.cloudron.me (username: cloudron password: cloudron)


Yes, I meant Sandstorm!


I like this very much. I do not want it for me right now (I don't have a use case) but I really want it to get traction. Are you doing well as a business?

(I know this is not something I should be asking, I'll not be sad if you do not answer)


That's fine :-) We are doing quite well and have a few paying customers. And lots of people evaluating the product.



