
Would be cool to build an “LLM@home” project like folding@home or SETI@home (rip), where tons of folks could donate their GPUs and train something huge and FOSS. I don’t know enough about how these models are trained though. Could it be chunked up and distributed in that way, then stitched/merged back together?
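
In principle it looks like federated learning: each volunteer trains a local copy of the model on a shard of the data, and a coordinator periodically averages the weights back into a global model (FedAvg). A minimal sketch of that merge step, assuming PyTorch (the worker checkpoint names are made up for illustration):

    # Hypothetical FedAvg-style merge: average the parameter
    # tensors uploaded by several volunteer workers.
    import torch

    def merge_worker_states(state_dicts):
        # Element-wise mean of each parameter across workers.
        merged = {}
        for key in state_dicts[0]:
            merged[key] = torch.stack(
                [sd[key].float() for sd in state_dicts]
            ).mean(dim=0)
        return merged

    # Coordinator side (illustrative):
    # global_sd = merge_worker_states([w1_sd, w2_sd, w3_sd])
    # model.load_state_dict(global_sd)

The catch is how often you have to do that averaging and how much data each round moves, which is where the bandwidth objections below come in.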



https://stablehorde.net/ comes somewhat close.


Golem has been building this since 2017

https://www.golem.network/

They also have an option to get paid in crypto for your GPU power.

The challenge is that the AI software architectures aren't designed to run over the Internet.



Always figured it would be too slow. Distributed training on clusters is usually done with 100+ Gb/s interconnects (e.g. InfiniBand or NVLink).
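
Back-of-the-envelope math bears this out (model size and link speeds below are assumptions, just to show the gap):

    # Rough per-step gradient sync cost for naive data parallelism.
    # All numbers are illustrative assumptions, not measurements.
    params = 7e9                 # assume a 7B-parameter model
    payload_bits = params * 16   # fp16 gradients, ~14 GB per sync

    home_bps = 1e9               # optimistic 1 Gb/s home uplink
    cluster_bps = 400e9          # e.g. 400 Gb/s InfiniBand NDR

    print(payload_bits / home_bps, "s per sync at home")        # ~112 s
    print(payload_bits / cluster_bps, "s per sync on cluster")  # ~0.28 s

A roughly 400x slowdown per sync is why volunteer schemes would have to sync rarely (local SGD / FedAvg rounds) or compress gradients aggressively rather than all-reduce every step.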



