No, Dirac is based on wavelet transforms, not Fourier transforms; the math behind them is quite different. This sparse Fourier transform (which the authors are now calling the sFFT) would be more useful for traditional FFT/DCT-based compression algorithms. Though for the most part, the hardware for even very demanding real-time Fourier-based compression (H.264 at HD resolutions) already exists.
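To see how different the math is, here's one level of the Haar transform, the simplest wavelet: just local averages and differences, with no complex exponentials anywhere. (Dirac itself uses biorthogonal wavelets, not Haar; this is only a sketch of the family's structure.)

```python
def haar_step(x):
    """One level of the Haar wavelet transform: pairwise averages (low-pass
    half) and differences (high-pass half). Structurally nothing like a
    Fourier transform -- no twiddle factors, purely local arithmetic."""
    assert len(x) % 2 == 0
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, diff

avg, diff = haar_step([4, 6, 10, 12, 8, 8, 0, 2])
print(avg)   # low-pass half:  [5.0, 11.0, 8.0, 1.0]
print(diff)  # high-pass half: [-1.0, -1.0, 0.0, -1.0]
```

A full wavelet transform just recurses on the low-pass half, which is why its structure (and hardware) looks nothing like an FFT butterfly network.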
The sparse Fourier transform is also probably not useful for any of that stuff, because accurate results are needed, not just the one or two dominant tones. Transforms are not the bottleneck at all in video encoding, and even in audio encoding, split-radix FFTs and the like are very fast.
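You can see why "a few dominant tones" isn't enough for codecs with a quick sketch: keep only the k largest-magnitude frequency bins (the kind of sparse answer an sFFT targets) and measure the reconstruction error on a signal with broadband content. This is not the actual sFFT algorithm, just an illustration of what a k-sparse spectrum can and can't capture; the signal and sizes here are made up.

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) DFT, fine for a small illustrative signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

random.seed(0)
n = 128
# Two strong tones plus broadband noise, like real audio/video content.
signal = [math.sin(2 * math.pi * 5 * t / n)
          + 0.5 * math.sin(2 * math.pi * 20 * t / n)
          + 0.3 * random.gauss(0, 1)
          for t in range(n)]

spectrum = dft(signal)

def topk_error(k):
    """Keep only the k largest-magnitude bins, invert, report relative error."""
    keep = set(sorted(range(n), key=lambda i: abs(spectrum[i]))[-k:])
    approx = [spectrum[i] if i in keep else 0 for i in range(n)]
    recon = idft(approx)
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(signal, recon)))
    den = math.sqrt(sum(a * a for a in signal))
    return num / den

for k in (4, 32, 128):
    print(f"k={k:3d}  relative error {topk_error(k):.3f}")
```

With k=4 the two tones (four conjugate bins) are recovered but all the broadband energy is thrown away; only keeping everything drives the error to zero, which is exactly why lossy codecs quantize a full transform rather than take a sparse one.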
High-res Dirac video encoding is definitely possible in real time; you just need a very optimized encoder, which doesn't really exist yet. Dirac's main speed cost comes from the overlapped-block motion compensation, not the transform.
The other observation I'd make is that, beyond raw operation counts, FFTs work well in hardware because their regular structure maps onto efficient implementations.
Implementing an FFT on a chip has two components: the logic/computing elements (governed by O(n log n)) and the routing of signals between those elements. It turns out the size and speed of the FFT are mainly determined by the routing, not by the logic, and there is a tractable routing solution for a reasonable number of points. The asymptotic computational complexity becomes secondary when the cost of the implementation is dominated by these non-computational aspects.
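A rough back-of-the-envelope sketch of why routing dominates: count the butterflies (logic) in a radix-2 FFT against the total distance between paired inputs (a crude routing proxy I'm defining here for a fully parallel layout; real floorplans are more subtle).

```python
import math

def fft_layout_stats(n):
    """For a radix-2 FFT of size n (a power of two), count butterflies
    (the O(n log n) logic cost) and sum the index distance between paired
    butterfly inputs across all stages (a crude wire-length proxy)."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    stages = int(math.log2(n))
    butterflies = stages * (n // 2)
    # In stage s, butterflies pair elements 2**s apart; the long wires of
    # the late stages dominate, giving roughly n^2/2 total span.
    wire_span = sum((n // 2) * (1 << s) for s in range(stages))
    return butterflies, wire_span

for n in (256, 1024, 4096):
    b, w = fft_layout_stats(n)
    print(f"n={n:5d}  butterflies={b:8d}  total wire span={w:10d}")
```

The logic count grows as n log n, but the wire-span proxy grows roughly as n²/2, so past modest sizes the interconnect, not the arithmetic, sets the area and speed, which is the point being made above.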
Assuming these techniques are suitable for H.264, developing H.264 hardware decoders using the sFFT could reduce power consumption. Even though the problem of hardware FFTs is "solved", there is still room for improvement.