Hacker News
Vectordash: GPU instances for deep learning (vectordash.com)
45 points by frlnBorg 8 months ago | 11 comments



What's your licensing situation with Nvidia regarding their prohibition [1] on datacenter deployment for 'consumer' cards?

[1] https://news.ycombinator.com/item?id=16002068


It does not sound like they are deploying in datacenters: https://vectordash.com/hosting/

That said, the license also has this:

No Sublicensing or Distribution. Customer may not sell, rent, sublicense, distribute or transfer the SOFTWARE; or use the SOFTWARE for public performance or broadcast; or provide commercial hosting services with the SOFTWARE.

which seems to prohibit Vectordash's individual hosts from participating.


Ah right, hadn't seen that. Thanks! If the Vectordash team is reading, I'd make the nature of the service a bit clearer to potential users. There's no mention I can find outside the 'hosting' page that these aren't your machines.


Gotcha! I’ll update the copy to make that a bit clearer.


I have no real idea, but sometimes it's better to ask forgiveness rather than permission (I'm not in any way associated with this service).


If I understand correctly, the instances available are containerized instances that users run (i.e., the system matches hosts to guests and takes a cut).

Beyond being dangerous on multiple levels, there doesn't seem to be any guarantee of storage or network bandwidth/traffic. Having a multi-TFLOP GPU to train with is hardly useful if you can't get the training data on the device in a reasonable amount of time, or hold that data in local storage.


We ensure each instance has ample storage (min 50GB), internet speeds, and hardware specs such that the GPU is the bottleneck! If a user isn’t satisfied with an instance, then there’s no charge whatsoever :)


With more GPU-in-the-cloud offerings coming online, is there a utility to dump GPU memory to see if your cloud provider has wiped it between customers?


I actually read a pretty interesting paper on exactly this recently. We load/unload the drivers for every instance, which in turn also wipes the GPU's memory. There's a tool I wrote to test just this, albeit I haven't uploaded it to GitHub yet. Might do that sometime this weekend.
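For anyone curious what such a tool looks like before the author uploads theirs: a minimal sketch (my own, not the commenter's unpublished tool) is to allocate a large buffer with cudaMalloc, deliberately never write to it, copy it back to the host, and count nonzero bytes. If the driver or a previous tenant left data behind, the buffer won't come back all zeros. Note this is only a heuristic — some drivers zero allocations regardless of whether the previous customer's memory was otherwise recoverable.

```cuda
// Sketch: probe freshly-allocated GPU memory for residual (nonzero) data.
// Build with: nvcc probe.cu -o probe   (requires a CUDA-capable GPU)
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 256 * 1024 * 1024;  // probe 256 MiB
    unsigned char *d_buf = nullptr;
    if (cudaMalloc(&d_buf, bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    unsigned char *h_buf = (unsigned char *)malloc(bytes);
    // Deliberately skip any memset or kernel launch: whatever we read
    // back is whatever the allocator handed us.
    cudaMemcpy(h_buf, d_buf, bytes, cudaMemcpyDeviceToHost);
    size_t nonzero = 0;
    for (size_t i = 0; i < bytes; ++i)
        if (h_buf[i]) ++nonzero;
    printf("%zu of %zu probed bytes were nonzero\n", nonzero, bytes);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}
```

Running this immediately after another workload finishes (and before any driver reload) is the interesting case; a high nonzero count there would suggest the provider isn't scrubbing memory between customers.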


Do you happen to recall the title of the paper? I'd be interested (in the utility as well, if you do happen to upload it to GitHub). Thanks!




