
Graph algorithms via SuiteSparse:GraphBLAS: triangle counting and K-truss (2018) [pdf] - espeed
http://faculty.cse.tamu.edu/davis/GraphBLAS/HPEC18/Davis_HPEC18.pdf
======
lmeyerov
Has anyone compared GraphBLAS to nvgraph, custinger, gunrock, etc.?

~~~
espeed
Re: nvgraph, NVIDIA is one of Tim Davis's research sponsors and uses his code,
as does Google. Among other things, he also wrote the open-source NVIDIA CULA
sparse matrix library. See the list of libraries at the bottom of this page
(scroll down):
[http://faculty.cse.tamu.edu/davis/research.html](http://faculty.cse.tamu.edu/davis/research.html)

Re: cuStinger, it's no longer developed under that name; it's now Hornet, which
uses CUB by NVIDIA Research:
[https://github.com/hornet-gt/hornet](https://github.com/hornet-gt/hornet)

Re: gunrock, they plan to add GraphBLAS as part of their backend:
[https://github.com/gunrock/gunrock/issues?utf8=%E2%9C%93&q=i...](https://github.com/gunrock/gunrock/issues?utf8=%E2%9C%93&q=is%3Aissue+graphblas)

Re: GraphBLAS, see my comment from yesterday...

Log(Graph): A Near-Optimal High-Performance Graph Representation (2018)

[https://news.ycombinator.com/item?id=18099520](https://news.ycombinator.com/item?id=18099520)

For an overview of GraphBLAS in the context of Heterogeneous High-Performance
Computing (HHPC) systems such as NVIDIA GPUs or Intel Xeon Phis, see the 2015
talk Scott McMillan ([https://insights.sei.cmu.edu/author/scott-mcmillan/](https://insights.sei.cmu.edu/author/scott-mcmillan/)) gave at the
CMU Software Engineering Institute:

Graph Algorithms on Future Architectures [video]
[https://www.youtube.com/watch?v=-sIdS4cz7-4](https://www.youtube.com/watch?v=-sIdS4cz7-4)

And a few years back, Jeremy Kepner did a mini-course on D4M (the precursor to
GraphBLAS). The videos and material are on MIT OCW...

MIT D4M: Mathematics of Big Data and Machine Learning [video]
[https://www.youtube.com/watch?v=iCAZLl6nq4c&list=PLUl4u3cNGP...](https://www.youtube.com/watch?v=iCAZLl6nq4c&list=PLUl4u3cNGP62DPmPLrVyYfk3-Try_ftJJ&index=1)

Discussion:
[https://news.ycombinator.com/item?id=18105931](https://news.ycombinator.com/item?id=18105931)
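Re: the triangle-counting kernel the linked paper benchmarks: it reduces to a
masked matrix multiply over the adjacency matrix. A minimal dense NumPy sketch
of that formulation (illustration only; SuiteSparse:GraphBLAS does the same
thing sparsely, applying the mask inside a `GrB_mxm` rather than after it):

```python
import numpy as np

# Adjacency matrix of an undirected graph (symmetric, zero diagonal).
# This small example is a 4-cycle plus one chord, which contains
# exactly 2 triangles: {0,1,2} and {0,2,3}.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=np.int64)

# C[i,j] counts common neighbors of i and j, kept only where the edge
# (i,j) exists -- the elementwise product plays the role of the mask
# that GraphBLAS applies during the multiply instead of afterwards.
C = (A @ A) * A

# Each triangle is counted 6 times (3 vertices x 2 edge directions).
triangles = C.sum() // 6
print(triangles)  # -> 2
```

The mask is the whole point performance-wise: it restricts the multiply to
entries where an edge already exists, so the sparse implementation never
materializes the full A*A product.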

~~~
lmeyerov
... That makes it sound like the implementations are not meant for direct
long-term use, since folks assume they're uncompetitive with the tuned GPU
versions that show up in Hornet et al., and in practice they'll be used when no
GPU equivalent is available, or out of convenience? Likewise, even when used
directly, it will be by framework/library devs, so some sort of Hornet-BLAS?

To be clear, the work has been interesting to me for years, so this is purely
a practitioner's question, as we are not in a position to ship-all-the-things.

