Shake is awesome! I've been thinking about moving as much of my personal infrastructure to Haskell as possible: XMonad for window management, Hakyll for blogging, Pandoc for documents and presentations, Shake for builds, Turtle for shell scripts, etc. It's been fun to contemplate, at least. Propellor for configuration might also be cool, but I think Nix is a better solution there. There's a lot of "real-world" applicable Haskell out there!
Where did the convention of storing the algorithm's working state on the vertices/edges being processed, rather than in sets/maps local to the function, originate? You see it with things like `w.onStack` and, to a lesser degree, `v.lowlink` and `v.index`. I see this convention all the time in descriptions of graph algorithms, despite it being less generalizable: you can no longer run the algorithm in parallel on different graphs that share vertices, nor can you run it over an immutable graph. Is this motivated purely by performance?
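For concreteness, here is a minimal sketch (not from any particular source; it assumes `Int` vertices and an adjacency-map graph) of Tarjan's SCC algorithm written the other way: `index`, `lowlink`, and `onStack` live in maps threaded through a State monad, so the graph itself is never mutated and can be shared or reused freely.

```haskell
import qualified Data.Map.Strict as M
import Control.Monad (forM_, unless, when)
import Control.Monad.State

type Vertex = Int
type Graph  = M.Map Vertex [Vertex]   -- adjacency lists; every vertex is a key, never mutated

-- All per-vertex state lives here, local to the traversal, not on the vertices.
data TState = TState
  { counter :: Int
  , indexOf :: M.Map Vertex Int       -- plays the role of v.index
  , lowOf   :: M.Map Vertex Int       -- plays the role of v.lowlink
  , stack   :: [Vertex]
  , onStack :: M.Map Vertex Bool      -- plays the role of w.onStack
  , sccs    :: [[Vertex]]
  }

tarjan :: Graph -> [[Vertex]]
tarjan g = sccs $ execState (mapM_ visit (M.keys g)) (TState 0 M.empty M.empty [] M.empty [])
  where
    visit v = do
      seen <- gets (M.member v . indexOf)
      unless seen (strongConnect v)

    strongConnect v = do
      i <- gets counter
      modify $ \s -> s { counter = i + 1
                       , indexOf = M.insert v i (indexOf s)
                       , lowOf   = M.insert v i (lowOf s)
                       , stack   = v : stack s
                       , onStack = M.insert v True (onStack s) }
      forM_ (M.findWithDefault [] v g) $ \w -> do
        wSeen <- gets (M.member w . indexOf)
        if not wSeen
          then do strongConnect w
                  lw <- gets ((M.! w) . lowOf)
                  modify $ \s -> s { lowOf = M.adjust (min lw) v (lowOf s) }
          else do wOn <- gets (M.findWithDefault False w . onStack)
                  when wOn $ do
                    iw <- gets ((M.! w) . indexOf)
                    modify $ \s -> s { lowOf = M.adjust (min iw) v (lowOf s) }
      lv <- gets ((M.! v) . lowOf)
      iv <- gets ((M.! v) . indexOf)
      when (lv == iv) $ do                       -- v is the root of an SCC: pop it off
        st <- gets stack
        let (component, rest) = case span (/= v) st of
              (pre, x:post) -> (pre ++ [x], post)
              (pre, [])     -> (pre, [])
        modify $ \s -> s { stack   = rest
                         , onStack = foldr (`M.insert` False) (onStack s) component
                         , sccs    = component : sccs s }
```

For example, `tarjan (M.fromList [(1,[2]),(2,[3]),(3,[1]),(4,[3])])` should group {1,2,3} into one component and {4} into another. The cost of this style is exactly the question's suspicion: an extra O(log n) map lookup wherever the field-on-vertex version does a pointer dereference.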
http://shakebuild.com/
Shake can be used like CMake (as a build script generator) or like Ninja. For many projects (including building Ninja itself), it's faster than Ninja. Chromium is a notable exception: it has a huge set of Ninja files, and the Shake parser is not as fast as Ninja's parser.
From the docs:
> [...] compiling LLVM on Windows under mingw they both take the same time to compile initially, and Ninja takes 0.9s for a nothing to do build vs Shake at 0.8s. Shake is slower at parsing Ninja files, so if you have huge .ninja files (e.g. Chromium) Shake will probably be slower. Shake does less work if you don't specify deps, which is probably why it is faster on LLVM (but you should specify deps -- it makes both Shake and Ninja faster). As people report more results I am sure both Shake and Ninja will be optimised.
https://github.com/ndmitchell/shake/blob/master/docs/Ninja.m...
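To make the "used like CMake" half concrete, here is a small build script in the style of the Shake manual; the file layout, `_build` directory, and gcc flags are illustrative, not anything from the thread:

```haskell
import Development.Shake
import Development.Shake.FilePath
import Development.Shake.Util

main :: IO ()
main = shakeArgs shakeOptions{shakeFiles="_build"} $ do
    want ["_build/run" <.> exe]

    phony "clean" $ do
        putNormal "Cleaning files in _build"
        removeFilesAfter "_build" ["//*"]

    -- Link every object file into the final executable.
    "_build/run" <.> exe %> \out -> do
        cs <- getDirectoryFiles "" ["//*.c"]
        let os = ["_build" </> c -<.> "o" | c <- cs]
        need os
        cmd_ "gcc -o" [out] os

    -- Compile each .c file, recording header dependencies via a makefile fragment.
    "_build//*.o" %> \out -> do
        let c = dropDirectory1 $ out -<.> "c"
        let m = out -<.> "m"
        cmd_ "gcc -c" [c] "-o" [out] "-MMD -MF" [m]
        neededMakefileDependencies m
```

The `.o` rule feeds the compiler-generated `.m` file back to Shake via `neededMakefileDependencies`, which plays roughly the same role as the `deps` setting the quoted docs recommend when running Ninja files.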
A Summer of Haskell project is underway to migrate GHC to a Shake-based build system.
PS. Wow, Annie Cherkaev's website looks really different now.