Sure, that's correct, but it's unrelated to what we were talking about: your example is about the general concept of transfer learning onto task-specific annotated data, not about domain-specific pretrained models.
For example, if you want a domain-specific model for the legal domain, you can pre-train a large self-supervised model on every legal document you can get your hands on, instead of a general mix of news, fiction, blogs, and everything else. That domain-specific model may then be a more efficient starting point for your task-specific classifier, with however many (or few) annotated examples you have, than the general model.
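Concretely, something like the rough sketch below (using HuggingFace Transformers; the base checkpoint, file names, label count, and column names are just placeholders for whatever your own corpus and annotated set look like): continue masked-LM pretraining on raw legal text, then fine-tune that checkpoint on the small labelled set.

```python
# Rough sketch: domain-adaptive pretraining on a legal corpus, then fine-tuning
# a classifier on a small annotated set. Paths, base model, and num_labels are
# placeholders; the labelled CSV is assumed to have "text" and "label" columns.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(base)

# 1) Continue self-supervised (masked-LM) pretraining on raw legal text.
legal = load_dataset("text", data_files={"train": "legal_corpus.txt"})["train"]
legal = legal.map(
    lambda b: tok(b["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)
mlm_model = AutoModelForMaskedLM.from_pretrained(base)
Trainer(
    model=mlm_model,
    args=TrainingArguments("legal-mlm", num_train_epochs=1, per_device_train_batch_size=8),
    train_dataset=legal,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("legal-mlm")
tok.save_pretrained("legal-mlm")

# 2) Fine-tune the domain-adapted checkpoint on the (few) labelled examples.
labelled = load_dataset("csv", data_files={"train": "labelled_clauses.csv"})["train"]
labelled = labelled.map(
    lambda b: tok(b["text"], truncation=True, max_length=512), batched=True
)
clf = AutoModelForSequenceClassification.from_pretrained("legal-mlm", num_labels=2)
Trainer(
    model=clf,
    args=TrainingArguments("legal-clf", num_train_epochs=3, per_device_train_batch_size=8),
    train_dataset=labelled,
).train()
```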
Legal documents are a minuscule fraction of the corpus the large general model is trained on. A model pretrained only on them won't have the broad conceptual fluency the large model has. It's like raising a baby on nothing but legal briefs and expecting her to become a good lawyer.