Hacker News

I'd probably phrase it as "can" dramatically reduce the amount of data you need rather than "does". Getting transfer learning to work in any kind of reliable way is still very much open research, and the systems I've seen are heavily dependent on basically every variable involved: the specific data sets, domains, model architectures, etc., with sometimes pretty puzzling failures.

I don't doubt Google has managed to make something useful work, though I'm more skeptical of how general the ML tech is. One advantage of an API like this is that it gives Google control over many of those variables. I'm not sure if this is what it does, but you could even start out by building a transfer-learning system that's heavily tailored to transferring from one specific fixed model, which, coupled with some Google-level engineering/testing resources, could produce much more reliable performance than in the general case.
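To make the "transfer from one specific fixed model" idea concrete, here's a toy sketch (all names and numbers are made up for illustration, not any real Google API): a pretrained feature extractor is frozen, and only a small head is trained on a handful of target-domain examples.

```python
# Hypothetical minimal sketch of transfer learning from a fixed pretrained
# model: freeze the feature extractor, train only a small linear head on a
# tiny target-domain dataset. Everything here is illustrative.

# Pretend these weights came from pretraining on a large source domain.
pretrained_w = [0.5, -0.3, 0.8]

def features(x, w=pretrained_w):
    # Frozen feature extractor: a fixed linear map followed by ReLU.
    return [max(0.0, wi * x) for wi in w]

# New head, trained from scratch on the target task: predict y = 2*x.
head = [0.0, 0.0, 0.0]

def predict(x):
    return sum(h * f for h, f in zip(head, features(x)))

# Tiny "target domain" dataset -- the point of transfer learning is that
# this can be far smaller than what training from scratch would need.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

lr = 0.05
for _ in range(500):
    for x, y in data:
        err = predict(x) - y
        f = features(x)
        # Gradient step on the head ONLY; pretrained_w stays frozen.
        head = [h - lr * err * fi for h, fi in zip(head, f)]

print(round(predict(1.0), 2))
```

Because the extractor is fixed, the head sees a stable feature space, which is exactly why tailoring to one specific pretrained model can behave more predictably than transfer between arbitrary model pairs.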




Disclosure: I work at Google on Kubeflow

As you can see here[1], we do provide quite a bit of information about the accuracy and training of the underlying model.

Additionally, AutoML already (often) provides better-than-human-level performance[2]. Your comment about a transfer-learning system heavily tailored to one specific fixed model is basically what it's doing: it takes something domain-specific (vision) and lets you transfer it to your own domain.

[1] https://youtu.be/GbLQE2C181U?t=1m15s

[2] https://static.googleusercontent.com/media/research.google.c...


I was about to type a very similar comment; this captures much of what I had in mind.

I've also seen transfer learning used to justify insufficient validation, resulting in strange generalization failures.



