The paper is meant to compare architecture against architecture at a similar model size and on a similar dataset, to inform future architecture design decisions
Its main claim, with those variables held roughly constant, is that the architecture has significantly lower training and inference cost without a performance penalty
If you want to compare any <20B model against evals of GPT-3.5 / GPT-4 / 100B-class models, that's another paper altogether