Hacker News
Software Forge Performance Index (forgeperf.org)
30 points by ddevault on April 17, 2020 | hide | past | favorite | 17 comments



Not surprising at all that SourceHut came out on top, but the gap between Github and Gitlab is interesting.

This is probably too much to ask -- but it would be great if this page were eventually expanded to note the location Lighthouse was run from. I'd be curious to see whether there's any variation across countries.

In my experience, running Lighthouse locally vs. Google's PageSpeed Insights gave non-trivially different results. I never did figure out whether that's purely down to the different location it runs from, or whether I just failed to exactly replicate the configuration PageSpeed Insights uses.


I ran this from Philly, in the same datacenter as SourceHut. Simulated network conditions in Lighthouse should level the playing field, but to be sure I had a friend run the same tests from Europe and got results which were not meaningfully different.

https://www.netbsd.org/~joerg/report-de.tar.xz

Would be curious to see more results - it's pretty straightforward to run these tests yourself:

https://git.sr.ht/~sircmpwn/forgeperf
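For anyone scripting runs like these, here's a minimal sketch of pulling category scores out of a Lighthouse JSON report (the field names follow Lighthouse's JSON report format; the sample data below is invented, not taken from forgeperf):

```python
import json

# Invented fragment shaped like a Lighthouse JSON report
# (real reports come from e.g. `lighthouse <url> --output=json`).
sample_report = json.loads("""
{
  "categories": {
    "performance": {"score": 0.97},
    "accessibility": {"score": 0.88}
  }
}
""")

def scores(report):
    """Return category scores scaled to the 0-100 range the index displays."""
    return {name: round(cat["score"] * 100)
            for name, cat in report["categories"].items()}

print(scores(sample_report))  # {'performance': 97, 'accessibility': 88}
```

Lighthouse itself reports each category score as a 0-1 float, so the only work here is scaling and rounding.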

Here's someone who ran a small subset of the tests against their personal Gitea instance:

https://radical.town/@adasauce/104014456097931550


I've run it (from London) and you seem to be right -- nothing significantly different from the other results.

https://gofile.io/?c=vRdxAZ


> Disclaimer: This website is maintained by SourceHut.

Note that I'm not insinuating that the data is incorrect ... but comparisons like this sometimes omit unflattering data. It's open source, so if anyone sees KPIs that should be added, they can submit patches.


I tried to avoid this, for example deliberately including blame - which sr.ht does not support - and a few hard pages that we slip out of the 90-100 range for. I am definitely interested in expanding the test suite with harder stuff. I want to use this to set goals for performance improvements and feature development - not just to show off how fast sr.ht is. I also want to get those accessibility numbers up - right now, Bitbucket has consistently impressive scores across the board on that metric.


I'm not really enough of an expert to know which operations were missing and/or which might have really different implementations across the different systems. It "seemed" pretty consistent to me ... and leaving the yellow-orange values in the SH column was a good sign too. I have extensive experience with both GitHub and GitLab, and I can say that, in general, the performance of the operations matches my experience. I don't want to claim that's a fair comparison either, as we self-host GitLab and the performance (or lack thereof) could be our own fault.

In any case, thanks for putting this together, and kudos for allowing the community to help add, adjust, and comment on the metrics you've chosen.


Codeberg's servers, for example, are in Germany. But this was run in the USA, on SourceHut itself, so it's no wonder SourceHut has the highest performance score when things like TTFB factor in. That seems a bit unfair to me.


I spoke about this here:

https://news.ycombinator.com/item?id=22900649

Basically, (1) simulated network conditions in lighthouse should account for this, and (2) the results are not substantially different when run from Germany:

https://www.netbsd.org/~joerg/report-de.tar.xz


Codeberg is basically Gitea, right? I thought Gitea would perform much better -- it feels fast compared to GitLab, and about on par with GitHub. Bitbucket feels slow even without a benchmark.


I agree that Bitbucket provides a nightmarish experience, but does this take into account that it's a single-page app, and thus after the initial load the traffic will be much lighter?


Based on the data quoted here, it would take a very large amount of browsing after the initial load to cancel out the difference, which is often a difference of 50-100 times the SourceHut data.
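To illustrate that amortization argument with back-of-the-envelope arithmetic (the ratios below are hypothetical, not measured from any forge):

```python
def break_even_views(spa_initial_ratio, spa_nav_ratio):
    """Number of page views at which a hypothetical SPA's cumulative
    transfer drops to or below a plain multi-page site's.

    All sizes are relative to one plain page view (= 1.0 unit):
    spa_initial_ratio -- SPA first-load size vs. one plain page
    spa_nav_ratio     -- SPA per-navigation payload vs. one plain page
    """
    assert spa_nav_ratio < 1.0, "SPA never wins unless navigations are lighter"
    n = 1
    spa, mpa = float(spa_initial_ratio), 1.0
    while spa > mpa:
        n += 1
        spa += spa_nav_ratio   # each later click fetches a small payload
        mpa += 1.0             # each later click fetches a full page
    return n

# With a made-up 50x-heavier first load and 10x-lighter navigations,
# the SPA only pulls ahead after 56 page views in one session.
print(break_even_views(50, 0.1))
```

With the 50-100x first-load gap quoted above, the break-even point lands deep into a single browsing session, which is the parent comment's point.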

Having used BitBucket since 2013, I've seen their performance ebb over the years. The most recent redesign made the biggest difference. I left some comments in their beta period noting that the performance was painful in areas, and it did improve some by the time it hit release, but was still noticeably slower than it had been before the redesign.

I would be interested in some sort of analysis of performance differences across web frameworks. Looking at BitBucket's code in the dev tools, they're using React + Redux currently. BackboneJS's web site lists BitBucket as using Backbone, so I would deduce the old, less slow design was the one that used Backbone. I've worked on both a snappy Backbone SPA and a lethargic AngularJS SPA, as well as in-between sites with Angular and React. But the business domains differ, so I don't have as apples-to-apples of a comparison as BitBucket's switch from Backbone to React would provide.

It would be an interesting experiment to get several teams of developers with similar levels of experience in different frameworks - Angular, React, Backbone, Vue, JQuery, all implementing sites to the same business specifications, and compare the relative performance (in both features delivered and page responsiveness) a year or two in. Practical to perform that experiment, probably not, but I'd certainly read the results.


I'm not sure if it does or not, but either way it seems like initial page load is the important part. SourceHut is not a single page app, so to compare apples to apples you'd want to load each page individually. Also, it still takes forever to switch between things on Bitbucket after the initial page load. In general for simple things like this with distinct pages, I'm pretty convinced that "single page app" just means that you've made your site infinitely more complicated, more likely to have issues, and more likely to have accessibility problems, for both developers and users for no benefit other than using some trendy new thing.


The idea of an SPA is to avoid sending 'useless' data (like the HTML layout) on every page change, and instead send only the strictly necessary data as a JSON payload -- especially when coupled with good caching. It also offloads computation (templating) from the server to clients.

So yes, SPAs are generally heavier up front, but they are more powerful. It's a matter of tradeoffs.
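A toy sketch of what that tradeoff looks like on the wire (both payloads are invented examples, not taken from any real forge):

```python
# A full-page response repeats the layout chrome on every navigation...
full_html = (
    "<html><head><title>repo: src/main.c</title></head><body>"
    "<nav>site chrome repeated on every single page load</nav>"
    "<main><h1>src/main.c</h1><pre>int main(void) { return 0; }</pre></main>"
    "</body></html>"
)

# ...while an SPA, after its first load, fetches only the data that changed.
json_payload = '{"path": "src/main.c", "content": "int main(void) { return 0; }"}'

print(len(full_html), len(json_payload))  # per-navigation bytes, HTML vs. JSON
```

The per-navigation payload is smaller, but the comparison only pays off once the (much larger) initial bundle has been amortized.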


As long as they've existed, SPAs have been faster, but only in theory ;)

I don't think I've _ever_ seen one that's faster than the normal webapp it replaced. Except possibly for Google's first attempt on gmail, which was pretty snappy. Fortunately they've since replaced it with a sluggish behemoth, so order has been restored to the universe.


With the possible exception you mentioned (I switched to FastMail long ago to get back to that snappy feeling) my experience with SPAs is that they all add 10 megs of JavaScript up front and several dozen kbs of JSON or similar per click to save a couple tens of bytes per page of HTML. I pulled those numbers out of the air obviously, but I'm not sure that they're even exaggerations…


Or you could just make the HTML and what not simpler and then who cares? The useless data is fine if you don't already have the problem of sending tons of extra crap that nobody needs. Single page stuff can still use templating on the server side.


Is the bitbucket server in Australia or something?



