We are Sergiy, Davit and Jason, founders of Snark AI (https://snark.ai). We provide low-cost GPUs for Deep Learning training and deployment on semi-decentralized servers.
We started Snark AI during our PhD programs at Princeton University. As deep learning researchers, we constantly ran into a shortage of GPU resources. Renting GPUs on the cloud didn't fit our budget, and purchasing GPU cards was difficult -- at that time, crypto-miners were buying up much of the supply. Then we noticed that GPU mining profits lag far behind public cloud GPU prices.
On top of that, we figured out a way to run neural network inference and crypto-mining simultaneously without hurting the mining hash rate. This is a little counterintuitive, but anti-ASIC hashing algorithms are designed to be extremely memory-intensive, which leaves a good chunk of the CUDA cores idle. We can use that leftover compute to run neural network inference at very low cost, which can be a lifesaver for large-scale inference workloads. More details: http://snark.ai/blog
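To make the idea concrete, here is a minimal, illustrative CUDA sketch (not our actual implementation -- kernel names and workloads are stand-ins we made up). A memory-bound kernel plays the role of the anti-ASIC hash, a compute-bound kernel plays the role of inference, and each is queued on its own CUDA stream so the hardware scheduler can overlap them on one GPU:

```cuda
// Illustrative sketch only: overlapping a memory-bound workload with a
// compute-bound one on separate CUDA streams. Requires a CUDA-capable GPU.
#include <cuda_runtime.h>

__global__ void memoryBoundKernel(float *buf, int n) {
    // Mostly DRAM traffic, little arithmetic -- leaves CUDA cores underused,
    // like an anti-ASIC hashing algorithm.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] = buf[((long long)i * 7919) % n] + 1.0f;
}

__global__ void computeBoundKernel(float *out, int n) {
    // Heavy arithmetic, little memory traffic -- uses the otherwise idle
    // cores, like dense neural network inference.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = (float)i;
        for (int k = 0; k < 1024; ++k) x = x * 1.000001f + 0.5f;
        out[i] = x;
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t mining, inference;
    cudaStreamCreate(&mining);
    cudaStreamCreate(&inference);

    // Kernels on independent streams may execute concurrently when the
    // GPU has spare resources for both.
    memoryBoundKernel<<<n / 256, 256, 0, mining>>>(a, n);
    computeBoundKernel<<<n / 256, 256, 0, inference>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(mining);
    cudaStreamDestroy(inference);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

Because the two kernels stress different resources (memory bandwidth vs. ALUs), running them together costs the memory-bound one almost nothing -- that is the effect we exploit.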
At the same time, we provide low-cost raw hardware access for neural network training. We aim to be up to 10 times cheaper than on-demand instances on public clouds, undercutting preemptible/spot instances by up to 2x. When a GPU is idle, our algorithms switch it back to mining to offset costs. Try it out at https://lab.snark.ai, with 10 hours of free GPU time. Accessing the hardware takes a single command line after `pip3 install snark`; usage docs are at https://github.com/snarkai/snark-doc. We are also building a hub for neural networks, similar to Docker Hub. It's still a work in progress, but you can browse a couple of examples at https://hub.snark.ai/explore.
We would love to get your feedback on what it's like to train deep networks on our platform and then deploy them.