No, if you read their technical paper, it's frequentist sequential testing with false discovery rate control, which is a fairly recent development (I mean, 25 years old is pretty new in statistics).
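For context, the "25 years old" figure points at the classic FDR procedure, Benjamini-Hochberg (1995). Here's a minimal sketch of plain batch BH in Python, purely illustrative; the sequential/always-valid version their paper presumably uses is more involved than this:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure for FDR control.

    Returns a boolean array: True where the null is rejected, keeping the
    expected false discovery rate at or below `alpha` (assuming independent
    or positively dependent tests).
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # rank p-values ascending
    ranked = p[order]
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds               # p_(k) <= (k/m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank meeting the bound
        reject[order[: k + 1]] = True          # reject everything up to that rank
    return reject

# toy example: a few genuinely small p-values among near-nulls
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, alpha=0.05))   # rejects the first two
```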
I think all the OP is trying to point out is that it either agrees with Bayesian methods or it's wrong... so at best it's not materially new, and at worst it rests on questionable assumptions.