Specifically, it's unclear why one would eschew the high degree of structure in OFDM (structure designed expressly to enable analytical approaches to time/frequency/channel estimation) in favor of a general-purpose learning technique.
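To make concrete what that structure buys, here is a minimal sketch of a classical cyclic-prefix correlator in the style of van de Beek et al.; the function name, parameters, and interface are illustrative choices of mine, not anything taken from the manuscript:

```python
import numpy as np

def cp_sync(r, n_fft, n_cp):
    """Cyclic-prefix correlation sketch (van de Beek-style).

    Exploits the fact that the last n_cp samples of each OFDM symbol are a
    copy of the first, spaced n_fft samples apart. Returns a coarse
    symbol-timing estimate and a fractional carrier-frequency offset in
    subcarrier units. Illustrative only, not the paper's method.
    """
    num_lags = len(r) - n_fft - n_cp
    metric = np.empty(num_lags, dtype=complex)
    for d in range(num_lags):
        seg = r[d:d + n_cp]
        # sum of conj(r[d+k]) * r[d+k+n_fft] over the CP window
        metric[d] = np.vdot(seg, r[d + n_fft:d + n_fft + n_cp])
    d_hat = int(np.argmax(np.abs(metric)))            # timing: correlation peak
    cfo_hat = np.angle(metric[d_hat]) / (2 * np.pi)   # fractional CFO estimate
    return d_hat, cfo_hat
```

A few lines of correlation yield both a timing estimate and a fractional CFO in closed form, with well-characterized performance; a learned detector should at minimum be benchmarked against this kind of baseline.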
I suspect this results in a much lower-performing, much more expensive algorithm relative to the state of the art, and the lack of a relative performance comparison is telling here. (Detection "below the noise floor" sounds impressive, but in practice that is how many, if not most, digital radio systems already operate: any receiver with processing gain from despreading or matched filtering does exactly this.)
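As a back-of-the-envelope illustration (my numbers, not the authors'): coherent integration over N samples buys roughly 10*log10(N) dB of processing gain, which is why, for example, GPS receivers routinely acquire signals sitting well below the thermal noise floor in the occupied bandwidth:

```python
import math

# Rough processing-gain arithmetic (illustrative numbers, not from the paper):
# coherently correlating over N samples raises post-correlation SNR by ~10*log10(N) dB.
n_samples = 1023                      # e.g. one GPS C/A code period of 1023 chips
gain_db = 10 * math.log10(n_samples)  # ~30.1 dB of coherent processing gain
print(f"coherent processing gain over {n_samples} samples: {gain_db:.1f} dB")
```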
It's also unclear whether the method actually provides fine time/frequency offset estimates. Those are the numbers a receiver needs, not just "was there a signal".