
Scalable and Sustainable Deep Learning via Randomized Hashing - oco101
https://arxiv.org/abs/1602.08194
======
xiphias
They used CPU for both the sparse and dense approaches. It would be interesting to
see a price/performance comparison of dense GPU vs. sparse CPU, especially as
more specialized architectures for dense matrix operations are coming out.

------
captaindiego
Looks like there's a GitHub repo for this paper:
[https://github.com/rdspring1/LSH_DeepLearning](https://github.com/rdspring1/LSH_DeepLearning)

------
JacobiX
Locality-sensitive hashing is a well-known technique for approximate nearest-neighbor
search and dimensionality reduction. Combining it with a DNN may reduce the complexity of
processing the dataset. But I don't know whether, in this setting, the parameters of the
LSH are learnable via backprop? A rough sketch of how the hashing typically fits in is below.
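
For context, here is a minimal sketch (not the paper's or the linked repo's code; all names
and dimensions are illustrative assumptions) of SimHash-style LSH used to pick a sparse set
of active hidden units. In schemes like this the random projections defining the hash are
fixed rather than trained; only the layer weights are updated by backprop, and the hash
buckets are periodically rebuilt as the weights change.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden, n_bits = 128, 1024, 8          # illustrative sizes, not from the paper

W = rng.standard_normal((d_hidden, d_in)) * 0.01   # learnable layer weights (trained by backprop)
P = rng.standard_normal((n_bits, d_in))             # fixed random projections = the hash parameters

def simhash(v):
    """Signed random projection: map a vector to an n_bits fingerprint."""
    return tuple((P @ v > 0).astype(np.int8))

# Pre-hash each hidden unit's weight vector into buckets
# (rebuilt periodically during training as W changes).
buckets = {}
for j in range(d_hidden):
    buckets.setdefault(simhash(W[j]), []).append(j)

def sparse_forward(x):
    """Compute activations only for units whose hash bucket matches the input's."""
    active = buckets.get(simhash(x), [])
    h = np.zeros(d_hidden)
    h[active] = np.maximum(W[active] @ x, 0.0)      # ReLU only on the selected units
    return h, active

x = rng.standard_normal(d_in)
h, active = sparse_forward(x)
print(f"{len(active)} of {d_hidden} units active")
```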

------
dang
Url changed from
[https://www.sciencedaily.com/releases/2017/06/170601135633.h...](https://www.sciencedaily.com/releases/2017/06/170601135633.htm),
which points to this.

