
Software Forge Performance Index - ddevault
https://forgeperf.org/
======
rossvor
Not surprising at all that SourceHut came out on top, but the gap between
Github and Gitlab is interesting.

This is probably too much to ask -- it would be great if, in the future, this
page were expanded with the location each Lighthouse run was made from. I'd be
curious to see whether there's any variation across countries.

In my experience, running Lighthouse locally vs. Google's PageSpeed Insights
gave non-trivially different results. I never did figure out whether that was
purely down to the different locations the tests run from, or whether I simply
failed to exactly replicate the configuration PageSpeed Insights uses.
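For what it's worth, the Lighthouse CLI lets you pin the throttling configuration explicitly, which helps rule out config drift when comparing runs (the URL is just a placeholder; the values shown are Lighthouse's default simulated mobile settings):

```shell
# Dry run: build and print the invocation rather than executing it,
# since a real run needs Node and Chrome. Pipe to sh to actually run.
URL="https://example.org/"

CMD="npx lighthouse $URL \
--throttling-method=simulate \
--throttling.rttMs=150 \
--throttling.throughputKbps=1638.4 \
--throttling.cpuSlowdownMultiplier=4 \
--output=json --output-path=report.json"

echo "$CMD"
```

With `--throttling-method=simulate`, Lighthouse models the network instead of actually shaping traffic, so two machines with the same config should produce much closer numbers than raw load times would.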

~~~
ddevault
I ran this from Philly, in the same datacenter as SourceHut. Simulated network
conditions in Lighthouse should level the playing field, but to be sure I had
a friend run the same tests from Europe and got results which were not
meaningfully different.

[https://www.netbsd.org/~joerg/report-de.tar.xz](https://www.netbsd.org/~joerg/report-de.tar.xz)

Would be curious to see more results - it's pretty straightforward to run
these tests yourself:

[https://git.sr.ht/~sircmpwn/forgeperf](https://git.sr.ht/~sircmpwn/forgeperf)
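If you do run the tests yourself, the interesting numbers are the per-category scores in Lighthouse's JSON report (`categories.<name>.score`, on a 0-1 scale). A minimal sketch of pulling them out -- the sample report here is made up, but the structure is the standard Lighthouse JSON shape:

```python
import json

# Stand-in for a real report produced by `lighthouse <url> --output=json`.
# The "categories" map carries each category's score as a 0-1 float.
sample_report = json.dumps({
    "requestedUrl": "https://example.org/",
    "categories": {
        "performance":   {"score": 0.98},
        "accessibility": {"score": 0.87},
    },
})

def category_scores(report_json: str) -> dict:
    """Return {category: score on the familiar 0-100 scale}."""
    report = json.loads(report_json)
    return {name: round(cat["score"] * 100)
            for name, cat in report["categories"].items()}

print(category_scores(sample_report))
# {'performance': 98, 'accessibility': 87}
```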

Here's someone who ran a small subset of the tests against their personal
Gitea instance:

[https://radical.town/@adasauce/104014456097931550](https://radical.town/@adasauce/104014456097931550)

~~~
rossvor
I've run it (from London) and you seem to be right -- nothing significantly
different from the other results.

[https://gofile.io/?c=vRdxAZ](https://gofile.io/?c=vRdxAZ)

------
smoyer
> Disclaimer: This website is maintained by SourceHut.

Note that I'm not insinuating that the data is incorrect ... sometimes
comparisons like this omit unflattering data though. It's open source so if
anyone sees KPIs that should be added they can submit patches.

~~~
ddevault
I tried to avoid this, for example deliberately including blame - which sr.ht
does not support - and a few hard pages that we slip out of the 90-100 range
for. I am definitely interested in expanding the test suite with harder stuff.
I want to use this to set goals for performance improvements and feature
development - not just to show off how fast sr.ht is. I also want to get those
accessibility numbers up - right now, Bitbucket has consistently impressive
scores across the board on that metric.

~~~
smoyer
I'm not really enough of an expert to know which operations were missing and/or
which might have substantially different implementations across the systems. It
"seemed" pretty consistent to me ... and leaving the yellow-orange values in
the SourceHut column was a good sign too. I have extensive experience with both
GitHub and GitLab, and I can say that, in general, the performance of those
operations matches my experience. I don't want to claim that's a fair
comparison either, as we self-host GitLab and the performance (or lack thereof)
could be our own fault.

In any case, thanks for putting this together, and kudos for allowing the
community to help add, adjust, and comment on the metrics you've chosen.

------
jlelse
Codeberg's servers, for example, are in Germany, but this test was run in the
USA on SourceHut itself, so it's no wonder SourceHut has the highest
performance score when things like TTFB factor in. That seems a bit unfair to
me.

~~~
ddevault
I spoke about this here:

[https://news.ycombinator.com/item?id=22900649](https://news.ycombinator.com/item?id=22900649)

Basically, (1) simulated network conditions in lighthouse should account for
this, and (2) the results are not substantially different when run from
Germany:

[https://www.netbsd.org/~joerg/report-de.tar.xz](https://www.netbsd.org/~joerg/report-de.tar.xz)

------
rurban
Codeberg is basically Gitea, right? I thought Gitea would perform much better,
because it feels fast compared to GitLab, and about on par with GitHub.
Bitbucket feels slow even without a benchmark.

------
z0mbie42
I agree that Bitbucket provides a nightmarish experience, but does the
benchmark take into account that it's a single-page app, and thus after the
initial load the traffic will be much lower?

~~~
SamWhited
I'm not sure whether it does, but either way the initial page load seems like
the important part. SourceHut is not a single-page app, so to compare apples to
apples you'd want to load each page individually. Also, Bitbucket still takes
forever to switch between things after the initial page load. In general, for
simple sites like this with distinct pages, I'm pretty convinced that "single
page app" just means you've made your site infinitely more complicated, more
likely to have issues, and more likely to have accessibility problems, for both
developers and users, with no benefit other than using some trendy new thing.

~~~
z0mbie42
The idea of an SPA is to avoid sending 'useless' data on every page change
(like the HTML layout) and to send only the strictly necessary data as a JSON
payload, especially when coupled with good caching. It also offloads
computation (templating) from the server to clients.

So generally, yes, SPAs are heavier up front, but they are more powerful. It's
a matter of tradeoffs.
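That tradeoff can be made concrete with some back-of-the-envelope numbers (all sizes here are invented for illustration, not measurements of any real forge): the SPA pays a large one-time bundle cost, then smaller per-navigation payloads, so it only wins on total transfer after many page changes.

```python
# Illustrative only: invented sizes, not measurements of any real site.
BUNDLE_KB = 2000      # one-time SPA JavaScript bundle
JSON_PER_NAV_KB = 30  # JSON payload per in-app navigation
HTML_PER_NAV_KB = 60  # full server-rendered page per navigation

def transfer_kb(navigations: int, spa: bool) -> int:
    """Total KB transferred after `navigations` page changes."""
    if spa:
        return BUNDLE_KB + navigations * JSON_PER_NAV_KB
    return navigations * HTML_PER_NAV_KB

# With these numbers the SPA only breaks even after 67 navigations:
break_even = next(n for n in range(1, 10_000)
                  if transfer_kb(n, spa=True) <= transfer_kb(n, spa=False))
print(break_even)  # 67
```

Whether that break-even point is ever reached in a typical session is exactly the question the initial-page-load scores are probing.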

~~~
rsynnott
As long as they've existed, SPAs have been faster, but only in theory ;)

I don't think I've _ever_ seen one that's faster than the normal webapp it
replaced -- except possibly Google's first version of Gmail, which was pretty
snappy. Fortunately they've since replaced it with a sluggish behemoth, so
order has been restored to the universe.

~~~
SamWhited
With the possible exception you mentioned (I switched to FastMail long ago to
get back to that snappy feeling), my experience with SPAs is that they all add
10 megs of JavaScript up front and several dozen KB of JSON or similar per
click, to save a couple tens of bytes of HTML per page. I pulled those numbers
out of the air, obviously, but I'm not sure they're even exaggerations…

------
PaulHoule
Is the bitbucket server in Australia or something?

