I'd probably phrase it as transfer learning "can" dramatically reduce the amount of data you need rather than "does". Getting transfer learning to work in any kind of reliable way is still very much open research, and the systems I've seen are heavily dependent on basically every variable involved: the specific datasets, domains, model architectures, etc., sometimes with pretty puzzling failures.
I don't doubt Google has managed to make something useful work, though I'm more skeptical of how general the ML tech is. One advantage of an API like this is that it allows control over many of those variables. I'm not sure if this is what it does, but you could even start out by making a transfer-learning system that's heavily tailored to transferring from one specific fixed model, which, coupled with some Google-level engineering/testing resources, could produce much more reliable performance than in the general case.
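To make that concrete, here's a minimal sketch of the kind of thing I have in mind, written in Keras purely for illustration. The MobileNetV2 backbone, the class count, and train_ds are all placeholders I'm assuming, not anything from the actual Google API:

    import tensorflow as tf

    # Sketch of transfer learning from one fixed, pretrained vision model:
    # freeze the backbone and train only a small task-specific head, which is
    # why far less labeled data is needed than training from scratch.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # the "specific fixed model" stays fixed

    num_classes = 5  # assumed label count for the new, small dataset

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # only this head trains
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # train_ds is assumed: a small tf.data.Dataset of (image, label) batches
    # model.fit(train_ds, epochs=5)

The point is that when the backbone, preprocessing, and training recipe are all pinned down like this, you've removed most of the variables that make transfer learning flaky in the general case, which is exactly the kind of control an API like this can exercise behind the scenes.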
As you can see here[1], we do provide quite a bit of information about the accuracy and training of the underlying model.
Additionally, AutoML already (often) provides better-than-human-level performance[2]. Your point about a transfer-learning system heavily tailored to one specific fixed model is basically what it's doing: it takes something domain-specific (vision) and lets you transfer it to your own domain.