
An All-in-One DAG Toolkit: The strongly-connected components algorithm - how-about-this
http://vaibhavsagar.com/blog/2017/06/10/dag-toolkit/
======
mrkgnao
Yes! More "real-world" Haskell is always good. This seems like a good time to
bring up Shake, Neil Mitchell's build system:

[http://shakebuild.com/](http://shakebuild.com/)

It can be used like CMake (as a build script generator) or like Ninja. For
many projects (including building Ninja), it's faster than Ninja. Chromium is
a notable exception: it has a huge set of Ninja files, and the Shake parser is
not as fast as Ninja's parser.

From the docs:

> [...] compiling LLVM on Windows under mingw they both take the same time to
> compile initially, and Ninja takes 0.9s for a nothing to do build vs Shake
> at 0.8s. Shake is slower at parsing Ninja files, so if you have huge .ninja
> files (e.g. Chromium) Shake will probably be slower. Shake does less work if
> you don't specify deps, which is probably why it is faster on LLVM (but you
> should specify deps -- it makes both Shake and Ninja faster). As people
> report more results I am sure both Shake and Ninja will be optimised.

[https://github.com/ndmitchell/shake/blob/master/docs/Ninja.m...](https://github.com/ndmitchell/shake/blob/master/docs/Ninja.md)
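To give a flavour, a minimal Shake build script looks something like this (file names invented for illustration, and I'm going from memory of the docs, so treat it as a sketch):

    import Development.Shake
    import Development.Shake.FilePath

    -- Minimal illustrative build: rebuild hello.o whenever hello.c changes.
    main :: IO ()
    main = shakeArgs shakeOptions $ do
      -- the default target
      want ["hello.o"]

      -- rule: any .o file is produced from the .c file next to it
      "*.o" %> \out -> do
        let src = out -<.> "c"
        need [src]                       -- declare the dependency
        cmd "gcc" ["-c", src, "-o", out] -- run the compiler

`need` is what records the dependency, so touching the .c file makes the .o rebuild on the next run.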

A Summer of Haskell project is underway to migrate GHC to a Shake-based build
system.

PS. Wow, Annie Cherkaev's website looks really different now.

~~~
vaibhavsagar
Shake is awesome! I've been thinking about moving as much of my personal
infrastructure to Haskell as possible: XMonad for window management, Hakyll
for blogging, Pandoc for documents and presentations, Shake for builds, Turtle
for shell scripts, and so on. It's been fun to contemplate, at least. Propellor
for configuration might also be cool, but I think Nix is a better solution
there. There's a lot of "real-world" applicable Haskell out there!
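To pick one of those: a Turtle "shell script" can be as small as this (a toy sketch in the spirit of the Turtle tutorial, nothing project-specific):

    import Turtle

    -- a "shell script" in Haskell: print when the current directory was last modified
    main :: IO ()
    main = do
      dir  <- pwd          -- current working directory
      time <- datefile dir -- its modification time
      print time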

------
wallacoloo
Where did the convention of storing variables local to the algorithm on the
vertices/edges being processed, as opposed to in sets/maps local to the
_function_, originate? You can see it with things like `w.onStack` and, to a
lesser degree, `v.lowlink` and `v.index`. I see this convention all the time
in descriptions of graph algorithms, despite it being less generalizable: you
can no longer run the algorithm on different graphs in parallel if they share
any vertices, nor can you run it over an immutable graph. Is this motivated
purely by performance?
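For comparison, nothing stops you from keeping that state in maps local to the function. A rough Haskell sketch of Tarjan's algorithm done that way (a State monad over Data.Map, purely illustrative and not the article's code) might look like:

    import           Control.Monad       (forM_, unless, when)
    import           Control.Monad.State
    import qualified Data.Map.Strict     as M

    type Graph v = M.Map v [v]    -- adjacency lists; assumes every vertex is a key

    data Tarjan v = Tarjan
      { counter  :: Int           -- next index to hand out
      , indices  :: M.Map v Int   -- instead of v.index
      , lowlinks :: M.Map v Int   -- instead of v.lowlink
      , stack    :: [v]           -- the algorithm's stack
      , onStack  :: M.Map v Bool  -- instead of w.onStack
      , sccs     :: [[v]]         -- finished components
      }

    stronglyConnected :: Ord v => Graph v -> [[v]]
    stronglyConnected g = sccs (execState (mapM_ visit (M.keys g)) start)
      where
        start = Tarjan 0 M.empty M.empty [] M.empty []

        visit v = do
          seen <- gets (M.member v . indices)
          unless seen (strongconnect v)

        strongconnect v = do
          i <- gets counter
          modify $ \s -> s { counter  = i + 1
                           , indices  = M.insert v i (indices s)
                           , lowlinks = M.insert v i (lowlinks s)
                           , stack    = v : stack s
                           , onStack  = M.insert v True (onStack s) }
          forM_ (M.findWithDefault [] v g) $ \w -> do
            seen <- gets (M.member w . indices)
            if not seen
              then do strongconnect w
                      lw <- gets ((M.! w) . lowlinks)
                      modify $ \s -> s { lowlinks = M.adjust (min lw) v (lowlinks s) }
              else do onS <- gets (M.findWithDefault False w . onStack)
                      when onS $ do
                        iw <- gets ((M.! w) . indices)
                        modify $ \s -> s { lowlinks = M.adjust (min iw) v (lowlinks s) }
          lv <- gets ((M.! v) . lowlinks)
          iv <- gets ((M.! v) . indices)
          when (lv == iv) $ do                  -- v is the root of a component
            st <- gets stack
            let (above, rest) = span (/= v) st
                component     = v : above
            modify $ \s -> s { stack   = drop 1 rest
                             , onStack = foldr (\u -> M.insert u False) (onStack s) component
                             , sccs    = component : sccs s }

It threads more plumbing around than mutating fields on the vertices, which is my guess for why the per-vertex style dominates in textbook pseudocode.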

