I don’t think I’ve ever come across an algorithms book that introduces the FFT without discussing fast polynomial multiplication.
That’s literally what the FFT does: fast polynomial multiplication.
If you come at it from an engineering angle (e.g., as an electrical engineer), it doesn't look like that at all. In fact, the focus is on filters and spectral analysis. It looked something like this when I was at Georgia Tech:
First, you learn about the convolution operation that, given any input function x(t), gives the output of any linear time-invariant circuit as y(t). The convolution integral is nasty; professors make you feel the pain a bit. Then they introduce the Laplace transform to make the computation much easier. Then you move on to the continuous-time, continuous-frequency Fourier transform. Talk about the frequency domain a bunch, learn filter topologies, etc. Circuits class over.
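To make that concrete, here's a rough numpy sketch of the convolution integral y(t) = ∫ x(τ) h(t - τ) dτ approximated on a time grid; the first-order low-pass impulse response and the 50 Hz input are made-up illustrative choices, not anything from an actual course:

    import numpy as np

    # Sketch of y(t) = integral of x(tau) * h(t - tau) d tau for an LTI system,
    # approximated as a Riemann sum on a uniform time grid.
    dt = 1e-4                                     # grid spacing (s)
    t = np.arange(0, 0.05, dt)                    # time axis
    tau_rc = 1e-3                                 # assumed time constant of the filter (s)
    h = (1.0 / tau_rc) * np.exp(-t / tau_rc)      # impulse response of a first-order low-pass
    x = np.sin(2 * np.pi * 50 * t)                # example input signal

    # Discrete approximation of the convolution integral
    y = np.convolve(x, h)[: len(t)] * dt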
Now comes a signal processing class where you learn about discrete-time signals. First, talk lots about sampling. Then, get introduced to discrete-time convolution. Now, learn the z-transform and the discrete-time Fourier transform (DTFT) for a discrete-time, continuous-frequency signal. Mostly discuss filtering and spectral analysis. Intro to signal processing class over.
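For concreteness, a small sketch of those two pieces: the discrete-time convolution sum and the DTFT evaluated on a dense grid of continuous frequencies. The short signal and two-tap filter here are arbitrary examples:

    import numpy as np

    # Discrete-time convolution: y[n] = sum_k x[k] * h[n - k]
    x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # arbitrary example signal
    h = np.array([0.5, 0.5])                  # arbitrary two-tap filter
    y = np.convolve(x, h)

    # DTFT of x: X(e^{jw}) = sum_n x[n] e^{-jwn}, a function of *continuous* frequency w
    n = np.arange(len(x))
    w = np.linspace(-np.pi, np.pi, 1001)      # dense frequency grid (rad/sample)
    X = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])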
Learn about sampling the DTFT in the frequency domain. This is the DFT, which is usually presented as a sum that would require O(n^2) operations to compute. Learn that multiplying DFTs corresponds to circular convolution, and learn about zero padding to get ordinary convolution. Finally, get presented with the radix-2 Cooley-Tukey FFT algorithm. The focus is still signals and spectral analysis. Talk lots about windowing. You may get a mention that convolution corresponds to polynomial multiplication here. Or maybe they talk about grade-school multiplication; it's really the same thing as polynomial multiplication with a carry. Senior-level signal processing class over.
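A sketch of that last step in numpy, with random test vectors: the DFT written as its O(n^2) sum, the FFT computing the same thing, and zero padding turning circular convolution into ordinary convolution:

    import numpy as np

    # Naive DFT straight from the definition: X[k] = sum_n x[n] exp(-2j*pi*k*n/N),
    # which costs O(n^2) multiplications.
    def dft(x):
        N = len(x)
        n = np.arange(N)
        W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N DFT matrix
        return W @ x

    x = np.random.randn(8)
    h = np.random.randn(8)
    assert np.allclose(dft(x), np.fft.fft(x))          # FFT: same result in O(n log n)

    # Multiplying DFTs bin-by-bin corresponds to *circular* convolution...
    circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

    # ...so zero-pad out to len(x) + len(h) - 1 to get ordinary (linear) convolution.
    N = len(x) + len(h) - 1
    lin = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real
    assert np.allclose(lin, np.convolve(x, h))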
Yes, that's true. But if you come at it as a sampled continuous signal and spectrum, you don't think of the FFT as fast polynomial multiplication. Neither viewpoint is wrong; they're just different lenses for looking at different problems.
I think I still think of it that way. In EE you are convolving a filter with a signal in the time domain, which you can think of as a polynomial. Doing the FFT, then an elementwise (vector) multiply, then the IFFT accomplishes the same thing, just by way of the FFT.
How can you think of a continuous-time signal as a polynomial, though? I don't know of a continuous-time analogy; the polynomial view seems to work only for the discrete case. And FIR filter convolution is rarely computed by FFT in FPGAs (in streaming applications).
Look at a system doing DSP, for example. Let's say you have an input RF signal that goes into an analog band-pass filter, a mixer, an analog low-pass filter, an ADC, and an FPGA. The FPGA implements a FIR filter - let's say a Hilbert transform using a distributed-arithmetic algorithm. You are getting envelope information out of the FPGA. There's no FFT anywhere in the FPGA to compute this filter output.
You are seeing some weird noise in your DSP output. You dig a little by looking at the output of the ADC and then at the log-magnitude of its FFT. You see some suspicious spurs popping up, but at really strange discrete frequencies. You show the spectrum to an RF engineer, then you two go to the lab and put an analog spectrum analyzer on the input to the ADC. Sure enough, there are mixing products of your input carrier, mixer frequency, and sampling clock frequency. The weird spur frequencies make sense now - you are seeing folding of the mixing products from the sampling. You tweak the sampling clock to move the spurs' digital frequencies.
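To put rough numbers on the folding (all frequencies here are made up, just to illustrate the arithmetic): a mixing product above Nyquist aliases to |f - round(f/fs)*fs|, so it shows up at a frequency that looks unrelated to anything in the design, and nudging the sampling clock moves it:

    # All values hypothetical, just to show where a mixing product folds after sampling.
    fs = 100e6                  # sampling clock (Hz)
    f_carrier = 70e6            # input carrier
    f_lo = 95e6                 # mixer LO
    f_spur = f_carrier + f_lo   # one mixing product: 165 MHz, well above Nyquist

    f_alias = abs(f_spur - round(f_spur / fs) * fs)
    print(f_alias / 1e6)        # 35.0 -> the "strange" discrete frequency in the FFT

    fs = 98e6                   # tweak the sampling clock...
    print(abs(f_spur - round(f_spur / fs) * fs) / 1e6)   # ...and the spur moves (31.0)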
In that long but real-world example, I don't see anywhere that thinking of things in terms of polynomials would be helpful. It's also going to lead to confusion of terminology with your RF engineer - they tend to think in continuous terms.
Sorry, I think I misspoke. I didn't mean that I think of it in those terms normally. Working with RF every day, I think of it as time and frequency domain in the sense that you were describing. But mathematically I can see how the polynomial view makes sense. Also, I was only thinking about discrete time, since we are talking about the FFT.
So, interesting thing, that. While I was introduced to the FFT in the same manner (as an algorithm for fast polynomial multiplication, in a class taught by the CS department), my colleagues with electrical engineering backgrounds are completely unaware of this use of the FFT. They use it as a change of basis to directly observe and manipulate frequency. The EEs I work with are much more familiar with the relationship between the Fourier transform of a function and what the original function looks like.
Yep. Grade-school multiplication of two large numbers is the same thing (except for the carries) as convolution of two signals in the time domain. The numbers' digit sequences are the signals. You already know that pointwise multiplication in the frequency domain is equivalent to (and faster than) convolution of those signals in the time domain. So that's what FFT multiplication is about.
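A toy sketch of that idea (my own, and nothing like what real big-number libraries do at production scale): treat the decimal digits as signal samples, do the convolution with an FFT, then propagate the carries afterwards:

    import numpy as np

    # Toy FFT-based multiplication: digits are the "signal" samples, linear
    # convolution of the digit sequences is grade-school multiplication without
    # carries, and the carries get propagated at the end.
    def fft_multiply(a, b):
        da = [int(d) for d in str(a)][::-1]        # least-significant digit first
        db = [int(d) for d in str(b)][::-1]
        N = len(da) + len(db) - 1                  # length needed for linear convolution
        prod = np.fft.irfft(np.fft.rfft(da, N) * np.fft.rfft(db, N), N)
        sums = np.rint(prod).astype(int)           # column sums, before carrying
        carry, out = 0, []
        for s in sums:                             # propagate carries, one digit at a time
            carry, r = divmod(s + carry, 10)
            out.append(str(r))
        while carry:
            carry, r = divmod(carry, 10)
            out.append(str(r))
        return int("".join(reversed(out)))

    assert fft_multiply(12345678901234567890, 98765432109876543210) == \
           12345678901234567890 * 98765432109876543210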
It's only useful for VERY large numbers (thousands of digits), which is why most people never encounter it.
Note that a “signal” in this context is just a trigonometric polynomial over a periodic interval.
If you think of your periodic interval as representing angle measure, and the points in the interval as points on the unit circle in the complex plane, then your trigonometric polynomial can alternately be thought of as a Laurent polynomial in the complex plane. https://en.wikipedia.org/wiki/Laurent_polynomial
What the FFT does is convert between the values of your function at n roots of unity in the complex plane and the coefficients of the Laurent polynomial interpolating those values.
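A quick numerical check of that statement, using numpy's sign convention (forward transform with e^{-2πikn/N}); the coefficients here are random, and an ordinary polynomial in z is used for simplicity. The negative powers of a Laurent polynomial just land in the upper bins, since z^{-m} = z^{N-m} at the roots of unity:

    import numpy as np

    # np.fft.fft maps coefficients c[0..N-1] of p(z) = sum_n c[n] z^n to the values
    # of p at the N-th roots of unity z_k = exp(-2j*pi*k/N) (numpy's convention),
    # and np.fft.ifft interpolates those values back to the coefficients.
    N = 8
    c = np.random.randn(N)                              # random polynomial coefficients
    roots = np.exp(-2j * np.pi * np.arange(N) / N)      # the sample points on the unit circle
    values = np.array([np.polyval(c[::-1], z) for z in roots])  # evaluate p directly

    assert np.allclose(values, np.fft.fft(c))           # evaluation == forward FFT
    assert np.allclose(np.fft.ifft(values), c)          # interpolation == inverse FFT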
I'm guessing most people aren't introduced to FFT in an algorithms course though, they're introduced to it as the computational way to calculate an FT.
You're probably right -- and even when I saw it elsewhere it was still within the CS department.
For instance, we glossed over it during my Digital Image Processing course. The professor did introduce it as a form of fast polynomial multiplication, but we didn't dive too deep into Cooley-Tukey because frequency analysis was the focus of the work.