A) Pre-registering the study design, including the statistical analysis. Otherwise, attaching a big "Exploratory! Additional confirmation needed!" label.
B) Properly powering the study. That means gathering a sample large enough that the chance of a false negative isn't just a coin flip (a rough power calculation is sketched after this list).
C) Making the data and analysis (scripts, etc.) publicly available where possible. It's truly astounding that this is not a best practice everywhere.
D) Making the analysis reproducible without black magic. That includes C) as well as a more complete methods section and more automation of the analysis (one can call it automation, but I see it mainly as reproducibility).
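
To make B) concrete, here is a minimal power-calculation sketch using statsmodels. The effect size (Cohen's d = 0.5), alpha = 0.05, and the 80% power target are purely illustrative assumptions, not values from any particular study:

```python
# A minimal sketch of point B, assuming a two-sample t-test design and
# illustrative values: effect size d = 0.5, alpha = 0.05, target power = 0.80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.5 with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group for 80% power: {n_per_group:.0f}")   # ~64

# Conversely, with only 20 per group the study is badly underpowered:
# a false negative is more likely than a detection.
power_at_20 = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"power with n = 20 per group: {power_at_20:.2f}")  # ~0.34
```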
Replicating the entire study is great, but it's also inefficient when the replication is perfect (which is the goal). Two identical and independent experiments will have both a higher false negative rate and a higher false positive rate than a single experiment with twice the sample size. And it's unclear how to evaluate them when the results conflict (unless one does a proper meta-analysis, but then why not just run a bigger single experiment?).
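
To put rough numbers on the false-negative side of that comparison, here is a small Monte Carlo sketch. It assumes a true effect of d = 0.5, n = 30 per group per study, and the strict reading that replication "succeeds" only when both studies are individually significant at alpha = 0.05; all of those numbers are illustrative assumptions.

```python
# Rough Monte Carlo: two independent n=30 studies that must both reach
# significance, versus one n=60 study. True effect d = 0.5, alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, sims = 0.5, 30, 0.05, 20_000

def significant(n_per_group):
    # Simulate one two-sample study under the true effect and test it.
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(d, 1.0, n_per_group)
    return stats.ttest_ind(a, b).pvalue < alpha

both_replicate = np.mean([significant(n) and significant(n) for _ in range(sims)])
single_double = np.mean([significant(2 * n) for _ in range(sims)])

print(f"power, two n=30 studies (both significant): {both_replicate:.2f}")  # ~0.22
print(f"power, one study with n=60 per group:       {single_double:.2f}")   # ~0.78
```

Under those assumptions, the pair of n = 30 studies detects the effect only about a quarter of the time, versus roughly three quarters for the single n = 60 study.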