It is easier to configure (fewer parameters and no need for a window function), offers more flexibility (exponential frequency bands, e.g. for musical scales), and can reach the Gabor-Heisenberg uncertainty limit without artifacts.
The only downside is that you need to know the entire signal in advance, so it can only be used for recordings.
Shameless self-promo of my implementation: https://github.com/Lichtso/CCWT
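For readers wondering why the whole signal is needed: the FFT-based way to compute a CWT multiplies the full signal's spectrum by a frequency-domain wavelet kernel, one band at a time. This is not code from the linked repo, just a minimal NumPy sketch of that idea (the Gaussian kernel standing in for a Morlet-style wavelet, and the `cwt_band` name, are my own assumptions):

```python
import numpy as np

def cwt_band(signal, center_freq, bandwidth, sample_rate):
    """One frequency band of a CWT, computed by multiplying the
    full-signal spectrum by a Gaussian (Morlet-like) kernel.
    Needs the entire signal up front -- hence 'recordings only'."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / sample_rate)
    # Gaussian window centred on the band's frequency
    kernel = np.exp(-0.5 * ((freqs - center_freq) / bandwidth) ** 2)
    # Inverse FFT yields the complex analytic band signal
    return np.fft.ifft(spectrum * kernel)

# Toy usage: pull the 440 Hz band out of a two-tone signal
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1000 * t)
band = cwt_band(x, 440.0, 50.0, sr)
```

The magnitude of `band` stays near 0.5 (the analytic amplitude of a unit sine) while the 1000 Hz tone is suppressed by the kernel.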
You can window the wavelet, then slide the finite-duration wavelet by a few samples at a time, even if the wavelet is hundreds to thousands of samples long. The same thing happens in the STFT: each part of the original signal shows up in many overlapping FFTs.
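A minimal NumPy sketch of that sliding scheme, under my own assumptions (a Gaussian-tapered complex sinusoid as the windowed wavelet, 8 cycles long, and a hop of 16 samples):

```python
import numpy as np

def windowed_wavelet(center_freq, sample_rate, n_cycles=8):
    """A finite-length Morlet-style wavelet: a complex sinusoid
    tapered by a Gaussian window."""
    length = int(n_cycles * sample_rate / center_freq)
    t = (np.arange(length) - length / 2) / sample_rate
    window = np.exp(-0.5 * (t / (t[-1] / 2)) ** 2)  # Gaussian taper
    return window * np.exp(2j * np.pi * center_freq * t)

def slide(signal, wavelet, hop):
    """Slide the finite wavelet across the signal `hop` samples at
    a time, taking one inner product per position."""
    n = len(wavelet)
    return np.array([
        np.vdot(wavelet, signal[i:i + n])         # conj(wavelet) . chunk
        for i in range(0, len(signal) - n + 1, hop)
    ])

sr = 8000
t = np.arange(2 * sr) / sr
x = np.sin(2 * np.pi * 440 * t)
coeffs = slide(x, windowed_wavelet(440.0, sr), hop=16)
```

For a steady 440 Hz tone the coefficient magnitudes come out essentially constant, as expected: only the phase rotates as the wavelet slides.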
Again, I don't know the implementation details of wavelet transforms. Maybe I'll look into your repo when I have time. What are your asymptotic and practical runtimes?
But you can downsample the signal in the frequency domain, meaning you mostly pay for the output resolution.
The project: https://github.com/aguaviva/GuitarTuner
Online demo: https://aguaviva.github.io/GuitarTuner/GuitarTuner.html
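The frequency-domain downsampling mentioned above can be sketched in a few lines of NumPy: keep only the lowest FFT bins and inverse-transform at the reduced length, which acts as an ideal low-pass plus decimation. This is my own illustration, not code from GuitarTuner:

```python
import numpy as np

def downsample_fft(signal, factor):
    """Downsample by keeping only the low-frequency FFT bins and
    inverse-transforming at the shorter length (ideal low-pass +
    decimate), so the cost scales with the *output* resolution."""
    n = len(signal)
    m = n // factor
    spectrum = np.fft.fft(signal)
    # Keep the m lowest-frequency bins: positive half and negative half
    kept = np.concatenate([spectrum[: m // 2], spectrum[-(m - m // 2):]])
    # Rescale so amplitudes are preserved at the new length
    return np.fft.ifft(kept) * (m / n)

sr = 8000
t = np.arange(sr) / sr
x = np.cos(2 * np.pi * 100 * t)
y = downsample_fft(x, 4)   # 2000 samples, still a 100 Hz cosine
```

The 100 Hz cosine survives intact at a quarter of the sample rate, since it sits well below the new Nyquist frequency of 1000 Hz.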
Why did you implement your own FFT instead of using WebAudio?
When I get some time I might make Spectro use a Wasm FFT implementation like PulseFFT (https://github.com/AWSM-WASM/PulseFFT) for better performance. At the moment I'm using jsfft (https://github.com/dntj/jsfft) inside a web worker, which definitely isn't as efficient as a native implementation.
Caleb Joseph is the original author: https://github.com/calebj0seph