"Finally, we excluded trials that were registered with ClinicalTrials.gov more than one month after beginning enrolment, in order to ensure that the decision to register a study occurred separately from the decision to publish the study results. This decision was made because it has been shown that in spite of guidelines endorsing prospective trial registration, trials are often not registered for many months after the initiation of participant enrollment. In some of these cases, the registration of the trial is performed in preparation for publication and after the decision is made to publish the results. To the extent that this occurs, the inclusion of trials with delayed registration would bias our results. We limited this source of bias by focusing on trials for which registration was not delayed."
I worry that the article is still examining a biased group. As it stands, over two thirds of the large trials (3,710 out of 5,427) were eliminated for registering after enrollment began. It stands to reason, then, that late registration is standard industry practice. The remaining third that was studied is clearly bucking that common practice, so I'd suggest it's not a representative sample.
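A quick arithmetic check on the figures quoted in the comment (3,710 excluded out of 5,427 large trials):

```python
# Sanity check on the quoted fractions: 3,710 of 5,427 large trials
# excluded for late registration (figures as quoted in the comment above).
excluded, total = 3710, 5427
remaining = total - excluded
print(f"excluded:  {excluded / total:.1%}")   # ~68.4%, i.e. over two thirds
print(f"remaining: {remaining / total:.1%}")  # ~31.6% (1,717 trials)
```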
That said, I commend them for the study and can't suggest a better way to mitigate bias. I think it clearly shows a pervasive problem in the industry. I'd just be wary of citing the percentages too fervently.
I think the idea is that the proportion of studies left unpublished because of negative results is constant across all time scales, so the early-registering subsample would still be informative about the overall rate.
The data here are probably sufficient to do that meta-analysis properly: look at publication rate as a function of the time between the start of the study and registration, rather than applying an arbitrary one-month cutoff.
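A minimal sketch of that analysis, assuming you had per-trial records of registration delay and publication status (the trial records and delay bins below are hypothetical illustrations, not data from the study):

```python
# Sketch: publication rate as a function of registration delay,
# instead of a single one-month cutoff. All records here are made up.
from collections import defaultdict

trials = [
    # (months between enrollment start and registration, published?)
    (0, True), (0, False), (1, True), (2, False),
    (3, False), (6, True), (6, False), (12, False),
]

# Hypothetical delay bins; a real analysis would choose these from the data.
bins = {"0-1 mo": (0, 1), "2-6 mo": (2, 6), "7+ mo": (7, 10**9)}

counts = defaultdict(lambda: [0, 0])  # bin label -> [published, total]
for delay, published in trials:
    for label, (lo, hi) in bins.items():
        if lo <= delay <= hi:
            counts[label][0] += published
            counts[label][1] += 1

for label in bins:
    pub, total = counts[label]
    print(f"{label}: {pub}/{total} published ({pub / total:.0%})")
```

If publication rate fell off smoothly with delay, that would support the commenter's concern that the early registrants are atypical; a flat curve would support the "constant across time scales" reading.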