
Stagit – A static Git page generator - raldu
https://git.codemadness.org/stagit/
======
bachmeier
This is not a very informative link. Would be nice to have a description of
what it is and why the reader should be interested in it.

~~~
ldonley
It looks like this is the GitHub repo for the project:
[https://github.com/oxalorg/stagit](https://github.com/oxalorg/stagit)

~~~
daptaq
Actually [http://git.2f30.org/stagit/](http://git.2f30.org/stagit/) is the
real host; what you linked is just a mirror. If you go up one level on the
site, you can find a list of projects hosted by the 2f30 team, including
stagit, which was made by Hiltjo Posthuma (one of the Suckless guys, iirc).

------
chriswarbo
Very nice! Personally I'm using git2html (e.g. see
[http://chriswarbo.net/git/git2html/git](http://chriswarbo.net/git/git2html/git)
) with a few modifications. In particular, I've had to limit it to the HEAD
commit only, since the disk usage becomes crazy (it seems to render the
contents of every file to a separate HTML page for each commit).

I'll take a look at stagit, since it might be saner (plus the resulting pages
look nicer, and I can stop maintaining my own patches to git2html).

What I'd really like to see is a JS implementation of the "dynamic" features
like diffing, so rather than having to pre-compute and store them, they can be
generated on demand on the client-side by fetching the required git objects,
etc. over HTTP (would presumably require a clone to be accessible on the same
domain as the HTML). That way, everyone gets the plain pages like now, but
those enabling JS can browse repos more thoroughly.

~~~
DorothySim
> What I'd really like to see is a JS implementation of the "dynamic" features
> like diffing

That's possible. I've made something like that (dynamically fetching git info
via the dumb HTTP protocol) using Git.js [0], although for a different
reason, and it worked very well. One caveat: the decompression must be
handled in a WebWorker, or the UI gets stuck pretty easily. But objects can
be fetched on demand, so it's kind of like Microsoft's Git Virtual FS. As
you've said, cross-origin policies apply, so either host the viewer on the
same site as the repos or add appropriate CORS headers.

[0]: [https://github.com/yonran/git.js](https://github.com/yonran/git.js)

I see git.js even has a "repo-viewer" demo [1]. Although it's very primitive,
it shows the ref list and diffs.

[1]: [https://github.com/yonran/git.js/blob/master/demos/repo-viewer/index.html](https://github.com/yonran/git.js/blob/master/demos/repo-viewer/index.html)
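
For anyone curious what the dumb HTTP protocol involves under the hood, here is a minimal Python sketch (not JS, and not Git.js's API; just an illustration of the mechanism). It assumes the server has run `git update-server-info` and that the requested object is stored loose rather than in a packfile:

```python
# Sketch of reading a git repo over the "dumb" HTTP protocol: plain
# static-file fetches, no server-side smarts. Requires the server to
# run `git update-server-info` and the object to be loose (unpacked).
import urllib.request
import zlib


def fetch_refs(base_url):
    # info/refs is a plain text file: "<sha>\t<refname>" per line.
    with urllib.request.urlopen(base_url + "/info/refs") as resp:
        text = resp.read().decode()
    return {ref: sha for sha, ref in
            (line.split("\t") for line in text.splitlines())}


def parse_loose_object(raw):
    # A loose object is zlib-deflated "<type> <size>\0<body>".
    data = zlib.decompress(raw)
    header, _, body = data.partition(b"\x00")
    obj_type, size = header.split(b" ")
    assert int(size) == len(body)
    return obj_type.decode(), body


def fetch_object(base_url, sha):
    # Loose objects live at objects/<first 2 hex chars>/<remaining 38>.
    url = "%s/objects/%s/%s" % (base_url, sha[:2], sha[2:])
    with urllib.request.urlopen(url) as resp:
        return parse_loose_object(resp.read())
```

A browser port of this is exactly where the WebWorker caveat bites: `zlib.decompress` becomes pako or similar, and it must run off the main thread.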

------
trqx
Same tool for gopher: [http://git.2f30.org/stagit-gopher/file/README.html](http://git.2f30.org/stagit-gopher/file/README.html)

------
nishs
This is great. A slightly more feature-rich alternative is Gitiles, which is
used by the Chromium and Android projects.

[https://gerrit.googlesource.com/gitiles/](https://gerrit.googlesource.com/gitiles/)

------
wrigby
Looks pretty nice! The readme says it's not useful for repos with >2000
commits, but I wonder if we could hack together some sort of cache to
facilitate delta updates.
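
The bookkeeping for such a cache could be tiny. A hedged sketch (the cache file name and helper names are invented for illustration): remember the newest commit already rendered per repo, and only regenerate pages for commits after it.

```python
# Toy delta-update cache: skip re-rendering commits we've already seen.
# File layout and names are invented, not part of stagit.
import json
import os

CACHE = "last-rendered.json"


def load_cache(path=CACHE):
    # Map of repo name -> last commit hash rendered.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}


def commits_to_render(repo, all_commits, cache):
    # all_commits: oldest-first list of commit hashes for the repo.
    last = cache.get(repo)
    if last in all_commits:
        return all_commits[all_commits.index(last) + 1:]
    return all_commits  # cache miss (or history rewritten): full rebuild


def mark_rendered(repo, commit, cache, path=CACHE):
    cache[repo] = commit
    with open(path, "w") as f:
        json.dump(cache, f)
```

Falling back to a full rebuild on a cache miss keeps the scheme safe against force-pushes, at the cost of occasionally redoing work.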

~~~
sdesol
> we could hack together some sort of cache for it to facilitate delta
> updates?

Crunching diffs at scale is actually an extremely I/O-intensive operation,
and I can say this from experience, since my product does exactly that. You
can see an example of what I mean at

[https://public.gitsense.com/insight/github?r=Microsoft/vscod...](https://public.gitsense.com/insight/github?r=Microsoft/vscode#b%3Dgithub%3AMicrosoft%2Fvscode%3Amaster%26t%3Dcommits)

If you scroll down, you can see how the indexed information is used. The red,
green, and blue bars denote how many lines were deleted, added, and changed.
The red, green, and blue bars in the second row show how many of those
deleted, added, and changed lines were non-blank, non-comment lines.

If you want to crunch diffs at scale, you really have to throw CPU cores,
RAM, and multiple SSDs at the problem. More CPU cores means more diffs
processed in parallel; more RAM means better kernel page caching; and
multiple SSDs in a RAID 0 configuration means better read and write speeds.
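
To make the parallelism point concrete, here is a toy Python sketch (illustrative only; this is not GitSense's actual pipeline) that fans per-file-pair diff counting out across CPU cores:

```python
# Toy sketch of parallel diff crunching: count added/deleted lines for
# each (old, new) file pair across all CPU cores.
import difflib
from concurrent.futures import ProcessPoolExecutor


def count_changes(pair):
    old_lines, new_lines = pair
    sm = difflib.SequenceMatcher(None, old_lines, new_lines)
    added = deleted = 0
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag in ("replace", "delete"):
            deleted += i2 - i1
        if tag in ("replace", "insert"):
            added += j2 - j1
    return added, deleted


def crunch(pairs, workers=None):
    # One worker process per core by default; each diff is CPU-bound,
    # so this scales with cores until disk reads become the bottleneck.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(count_changes, pairs))
```

In a real pipeline the pairs would be streamed from object storage rather than held in memory, which is where the RAM-for-page-cache and RAID 0 advice comes in.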

~~~
mrmondo
Do you think it’s possible to calculate the relationship between the depth of
the git data and the number of I/O operations?

This is purely out of interest: our storage performs at around 2-4M random 4K
read IOPS per node, and I was wondering (just for fun) how it might handle a
good I/O-crunching task like this.

~~~
sdesol
> Do you think it’s possible to calculate the relationship between the depth
> of the git data and the number of I/O operations

It's actually pretty straightforward, since you know exactly what operations
have to be done. With very good I/O, you may find the CPU becomes your
bottleneck.

~~~
mrmondo
Yeah, then it becomes even more interesting to me ;) especially when it comes
to NUMA etc...

------
progman
You can get something like that in your shell, too. Just run

    git log --reverse | perl githead.pl

where githead.pl is

    my ($date, $old);
    while (my $line = <>) {
        chomp $line;
        next unless $line;
        $date = "$1 $2 $3" if $line =~ m|Date:\s+(.+?)\s+(.+?)\s+(.+?)\s+|;
        if ($line =~ m/^\s+/) {
            if ($date ne $old) {
                printf("%-10s%s\n", $date, $line);
                $old = $date;
            } else {
                printf("%-10s%s\n", '', $line);
            }
        }
    }
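
For anyone who would rather not untangle the Perl, here is a rough Python equivalent (same idea: print the date only before the first message line of each day):

```python
# Rough Python equivalent of githead.pl: read `git log --reverse` on
# stdin and prefix each indented commit-message line with its date,
# printing the date only once per day.
import re
import sys


def githead(lines):
    date = old = None
    out = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        m = re.match(r"Date:\s+(\S+)\s+(\S+)\s+(\S+)", line)
        if m:
            date = " ".join(m.groups())  # e.g. "Mon Jan 1"
        elif line[0] in " \t":           # indented commit-message line
            prefix = date if date != old else ""
            out.append("%-10s%s" % (prefix, line))
            old = date
    return out


if __name__ == "__main__":
    print("\n".join(githead(sys.stdin)))
```

Usage is the same as the Perl version: `git log --reverse | python githead.py`.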

------
kpcyrd

    git clone git://git.codemadness.org/stagit

Can we please stop doing that? Use https:// or anonymous SSH.

