
That is a staggering rate of increase. I can see a future where this is less centralized; learning could happen in "phases" where a local device improves its model given local data and reports back something centrally that can be combined and used to train a shared model.
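The "phases" idea above is essentially federated averaging: each device takes a few training steps on its local data, reports only its updated weights, and a server combines them into the shared model. A minimal sketch, with all names and the toy linear-regression task being illustrative assumptions rather than any real framework's API:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's phase: gradient steps for linear regression on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def combine(updates, sizes):
    """Server phase: average device weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy rounds: three devices whose data share the relationship y = 2*x0 + 1*x1.
# Only weights travel to the server; the raw (X, y) data stays on each device.
rng = np.random.default_rng(0)
shared = np.zeros(2)
for _ in range(20):  # communication rounds
    updates, sizes = [], []
    for _ in range(3):  # devices
        X = rng.normal(size=(40, 2))
        y = X @ np.array([2.0, 1.0])
        updates.append(local_update(shared, X, y))
        sizes.append(len(y))
    shared = combine(updates, sizes)

print(shared)  # approaches [2.0, 1.0]
```

In a real deployment the reported updates would also be encrypted or securely aggregated so the server never sees any single device's contribution in the clear.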

This requires ML hardware to be miniaturized the way non-ML compute has been, and when that happens we'll have the learnings from the current edge-computing push to draw on. In the meantime, I'm excited to see what developments are made on both the hardware and software sides.

This is called federated learning[0], at least by Google. I don't know whether they've rolled it out to more products or how well it works. It would be interesting to see this done in open source.

[0] https://ai.googleblog.com/2017/04/federated-learning-collabo...

Thank you! I was trying to find that before posting but forgot their naming of it.

Find a solid proof-of-work system for sharing signed data in this manner and you will change the world. Especially if you can re-combine the shared model with the local model.

Sounds like what https://www.openmined.org/ is working on.
