
Do you have some good search terms to get started down the rabbit hole?



Probably the biggest recent result: https://arxiv.org/abs/2209.04836 (author thread: https://twitter.com/SamuelAinsworth/status/15697194946455265...)

See also: https://github.com/learning-at-home/hivemind

and, closer to OP's interest in incentive structures: https://docs.bittensor.com/

The latter two intend to beat latency with Mixture-of-Experts (MoE) models. If the results of the former hold, they show that with a simple algorithmic transformation you can merge two independently trained models in weight space and get performance functionally equivalent to a model trained monolithically.
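
Roughly, the merge looks like this. An illustrative PyTorch sketch, not the paper's code: the two-layer MLP, the layer names, and the single-shot matching are my assumptions. The paper poses the unit matching as an assignment problem over permutations, which scipy's linear_sum_assignment stands in for here:

    import torch
    from scipy.optimize import linear_sum_assignment

    class MLP(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(16, 32)
            self.fc2 = torch.nn.Linear(32, 4)
        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    def align_and_merge(sd_a, sd_b, alpha=0.5):
        # Weight matching: find the permutation of B's hidden units that
        # best matches A's, posed as a linear assignment problem.
        wa, wb = sd_a["fc1.weight"], sd_b["fc1.weight"]
        cost = -(wa @ wb.T).numpy()          # negate to maximize similarity
        _, perm = linear_sum_assignment(cost)
        perm = torch.as_tensor(perm)
        # Permuting hidden units (rows of fc1, matching columns of fc2)
        # leaves B's function unchanged but aligns it with A.
        sd_b = dict(sd_b)
        sd_b["fc1.weight"] = sd_b["fc1.weight"][perm]
        sd_b["fc1.bias"] = sd_b["fc1.bias"][perm]
        sd_b["fc2.weight"] = sd_b["fc2.weight"][:, perm]
        # After alignment, plain interpolation in weight space.
        return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

    merged = MLP()
    merged.load_state_dict(align_and_merge(MLP().state_dict(), MLP().state_dict()))

The permutation step costs nothing in accuracy, so all the work is in finding the alignment that drops both models into the same loss basin before averaging.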


I too would like to go down this rabbit hole. I'm going to poke around using the terms “distributed learning” and “federated learning” (they're different areas, but somewhat related, as far as I understand).
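
For a concrete anchor on the second term: the canonical federated-learning algorithm is FedAvg, where clients train locally on data that never leaves them and a server averages the resulting weights. A minimal sketch; the model, loaders, and unweighted mean are placeholders, not any particular library's API:

    import copy
    import torch

    def local_update(global_model, loader, epochs=1, lr=0.01):
        # Each client starts from the current global weights and trains
        # on its own (private) data.
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.state_dict()

    def fed_avg(global_model, client_loaders, rounds=10):
        for _ in range(rounds):
            states = [local_update(global_model, dl) for dl in client_loaders]
            # The server sees only weights, never raw data. Unweighted
            # mean here; the original paper weights by client dataset size.
            avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
                   for k in states[0]}
            global_model.load_state_dict(avg)
        return global_model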


This one is a few years old, but seems interesting: https://arxiv.org/abs/1802.05799v2
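
That's the Horovod paper, which popularized ring-allreduce for data-parallel training. A sketch of the core step using plain torch.distributed rather than Horovod's own API; assumes a process group is already initialized (e.g. launched via torchrun):

    import torch
    import torch.distributed as dist

    def average_gradients(model):
        # After each worker's backward pass on its own shard of data,
        # allreduce the gradients so every worker applies the identical
        # averaged update.
        world = dist.get_world_size()
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world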



