History and Derivation of the Fast Fourier Transform (michaeljflynn.net)
84 points by mjfl | 28 comments



In computational seismology we've long had our own creation story for the FFT:

"The first machine computation with this algorithm known to the author was done by Vern Herbert, who used it extensively in the interpretation of reflection seismic data. He programmed it on an IBM 1401 computer at Chevron Standard Ltd., Calgary, Canada in 1962. Herbert never published the method. It was rediscovered and widely publicized by Cooley and Tukey in 1965." [1]

[1] Claerbout, J., 1985, Fundamentals of Geophysical Data Processing, p. 12.


The Claerbout book also gives a 27-line Fortran program to perform an FFT.
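
Not the book's Fortran, but for flavor, here's a minimal recursive radix-2 Cooley-Tukey sketch in Python (my own; it assumes the input length is a power of two):

    import cmath

    def fft(x):
        # Recursive radix-2 Cooley-Tukey FFT; assumes len(x) is a power of 2.
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])  # transform of even-indexed samples
        odd = fft(x[1::2])   # transform of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor
            out[k] = even[k] + w * odd[k]
            out[k + n // 2] = even[k] - w * odd[k]
        return out

You can sanity-check it against numpy.fft.fft on any power-of-two-length input.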


I remember a math professor who retired in the late 1980s and gave away all of his books. Looking through them, I saw one from the early 1960s on numerical methods for certain classes of problems.

Several chapters started off by describing a problem, explaining that ideally you'd do a Fourier transform, and then coming up with a special-case hack that worked well enough, and ran fast enough, to actually be used.

The FFT made all of this knowledge obsolete.


Especially in math, a distinction is often drawn between

    FT    f(real) -> F(real)   (Fourier transform)
    DFS   f(real) -> F(int)    (discrete Fourier series)
    DTFT  f(int)  -> F(real)   (discrete-time Fourier transform)
    DFT   f(int)  -> F(int)    (discrete Fourier transform)
Only one of these, the DFT, is easily handled by a computer, but they all have an important role in both theory and practice. For example, why does JPEG use a Discrete Cosine Transform rather than a Discrete Fourier Transform? What are the pitfalls of approximating a continuous domain with discrete samples? Why are engineers obsessed with window functions? Thinking in the continuous domain will lead you to the right answer. You can't easily understand the DFT (or FFT) without the FT and DTFT. The DFS has its own uses.
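
As a concrete illustration of the window-function point, here's a small numpy sketch (my example, not from the article): a sampled sinusoid that doesn't complete a whole number of cycles "leaks" energy across the whole spectrum, and a window tames the leakage.

    import numpy as np

    n = 64
    t = np.arange(n)
    # 4.5 cycles in 64 samples: not periodic in the window, so the DFT leaks
    x = np.sin(2 * np.pi * 4.5 * t / n)

    plain = np.abs(np.fft.rfft(x))
    windowed = np.abs(np.fft.rfft(x * np.hanning(n)))

    # Far from the true frequency, the windowed spectrum is much smaller
    print(plain[20], windowed[20])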

To many (applied) mathematicians, the fact that a computer uses the FFT algorithm to compute a DFT is largely immaterial and not particularly worth mentioning, because they are more interested in the questions in the preceding paragraph than in questions like "how many points can I take on X hardware while maintaining Y performance?"


Also notice the awesome "Sparse FFT" https://groups.csail.mit.edu/netmit/sFFT/ which takes time roughly `k log(n/k)` to recover the `k` largest coefficients. So often a complete Fourier transform is computed only to immediately discard nearly all of the values (compression, for example).
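
To make the "discard nearly all of the values" point concrete, here's a toy numpy sketch; note this is plain keep-the-k-largest compression using a full FFT, not the sFFT algorithm itself:

    import numpy as np

    n, k = 1024, 8
    t = np.arange(n)
    # A signal dominated by a few frequencies, plus a little noise
    x = np.sin(2 * np.pi * 50 * t / n) + 0.5 * np.sin(2 * np.pi * 120 * t / n)
    x += 0.01 * np.random.randn(n)

    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-k:]  # indices of the k largest coefficients
    Xk = np.zeros_like(X)
    Xk[keep] = X[keep]

    xr = np.fft.ifft(Xk).real
    print(np.max(np.abs(x - xr)))  # small reconstruction error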



I recommend Fast Algorithms for Digital Signal Processing, by Blahut, 1985 for a mind-expanding survey of all the work in this area up to that time. Winograd FFT is even lower complexity than C-T. And the book spends a lot of time on fast DFTs over integer fields of size a Mersenne or Fermat prime. It's tragic how many transistors we've wasted doing floating point math on what is essentially an integer problem.
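
For a taste of the integer-field idea, here's a naive O(N^2) number-theoretic transform sketch in Python over the Fermat prime 17 (my own toy example; a real implementation would use a fast butterfly structure and a larger prime such as 2^16 + 1):

    # A DFT over the integers mod p, with a primitive n-th root of unity
    # playing the role of e^(-2*pi*i/n). Everything is exact: no floats.
    p, n = 17, 8  # 17 is a Fermat prime; 9 is a primitive 8th root of 1 mod 17
    w = 9

    def ntt(a, root):
        return [sum(a[j] * pow(root, j * k, p) for j in range(n)) % p
                for k in range(n)]

    a = [3, 1, 4, 1, 5, 9, 2, 6]
    A = ntt(a, w)
    # Inverse transform: use the inverse root and multiply by n^(-1) mod p
    inv_w = pow(w, -1, p)  # modular inverse (Python 3.8+)
    inv_n = pow(n, -1, p)
    back = [(inv_n * v) % p for v in ntt(A, inv_w)]
    print(back == a)  # True: an exact round trip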


My impression is Winograd FFT is still O(N log N), i.e. exactly the same complexity as Cooley-Tukey. That said, they work best on different N, which is possibly what you're referring to?


A nice table from the book at [1]. For block length 1024, CT radix 2 requires ~20k multiplies and ~30k adds. Block length 1008 Winograd requires ~4k multiplies and ~35k adds.

[1] https://books.google.com/books?id=8sbS2Bi4BmIC&pg=PA396&lpg=...
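
As a sanity check on those numbers: the textbook count for radix-2 Cooley-Tukey is (N/2) log2 N complex multiplies, and a complex multiply costs 4 real multiplies, which lands right at the ~20k figure (ignoring savings from trivial twiddle factors):

    import math

    n = 1024
    complex_muls = (n // 2) * int(math.log2(n))  # 5120 complex multiplies
    real_muls = 4 * complex_muls                 # 4 real multiplies each
    print(real_muls)                             # 20480, i.e. ~20k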


I see. That sounds like a smaller constant factor, rather than a lower complexity.


Is it just me being a n00b, or are there a few errors in the equations? Like the exponential definitions of sin and cos, a missing negative sign in the proof for m ≠ n, maybe more later, not sure. Somewhat perplexing for someone learning the stuff, BUT then again, I'm in medicine, so it might actually just be me.

Relevant for other learners are the digital signal processing lectures on Coursera, highly recommended, with Python notebooks and quizzes: https://www.coursera.org/learn/dsp


You are correct on the sine and cosine thing; the minus sign is switched. Will fix, thanks.


dumbish question: if cosine is simply sine but offset by pi/2 radians, why are we approximating these functions with an infinite sum of sines and cosines and not doing it just with sines?


Well, you may not like this answer, but:

If cosine is just sine shifted by 90°, then we really do just use sines. Except we write cosine instead of a sine with a 90° shift because it looks less confusing :)

The real goal is to arrive at the exponential representation, where the sines and cosines are absorbed into the exponential functions, which is needed later to write down the Fourier integrals.

Your question boils down to preference. The two things you name are, in fact, equal. Math people just choose to write cos(x) instead of sin(x + pi/2).


The "true" Fourier transform is to write the function as an infinite sum of e^(2 pi i n x) with complex coefficients and n an integer. (This is the "true" Fourier transform because of a connection between periodic functions and circles, and because the representation theory of the circle group says that these are the so-called irreducible representations. This is extremely abridged -- the point is just to mention relevant words.)

Due to Euler's formula, there is a correspondence between a linear combination of e^(2 pi i n x) and e^(-2 pi i n x) and a linear combination of cos(2 pi n x) and sin(2 pi n x). In other words, both pairs of functions span the same subspace (with complex coefficients).

We need all of the exponentials to represent a periodic function, with positive and negative n, and so we need both cos and sin. If you leave out cos, you can only represent odd functions, and leaving out sin, even functions.

But: if your question is just why not use sin(x) and sin(x + pi/2) instead of sin(x) and cos(x), well, it's the same thing. Just because you write cos(x) as sin(x + pi/2) doesn't mean it's not still cos(x)!

(Fun fact: cosine means "complementary sine," the sine of the complementary angle in the triangle.)
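
Here's a quick numeric illustration of that correspondence (my own sketch): a real cosine is exactly a sum of two complex exponentials with conjugate coefficients, per Euler's formula.

    import numpy as np

    x = np.linspace(0, 1, 100, endpoint=False)
    n = 3
    # Euler: cos(2 pi n x) = (e^(2 pi i n x) + e^(-2 pi i n x)) / 2
    lhs = np.cos(2 * np.pi * n * x)
    rhs = (np.exp(2j * np.pi * n * x) + np.exp(-2j * np.pi * n * x)) / 2
    print(np.allclose(lhs, rhs.real), np.max(np.abs(rhs.imag)))  # True, ~0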


One reason is that in floating-point processing, in some cases adding a constant to an argument can truncate and lose the original argument, so using a single function will fail in a class of pathological edge cases:

    import numpy as np

    a = np.pi / 2
    print(a)                        # 1.5707963267948966
    print(a == a + 1e-15)           # False
    print(a == a + 1e-16)           # True: 1e-16 is below half an ulp here, so it's absorbed
    print(np.sin(1e-16))            # 1e-16
    print(np.cos(np.pi/2 + 1e-16))  # 6.12323399574e-17
    print(np.sin(1e-16) == np.cos(np.pi/2 + 1e-16))  # False


Because with a Fourier series/transform you only get to play with the amplitudes of the sine and cosine functions, not the offset. If you only had cosine functions in your sum you'd only be able to represent symmetric functions (and antisymmetric if only sine).

Except actually I lied: if you sum both sine and cosine functions, you do get to specify offsets. Think of this trig identity: sin(x + offset) = [cos(offset)] sin(x) + [sin(offset)] cos(x). Think of the terms in brackets as just being amplitudes. By having both sine and cosine in the sum, you can represent any arbitrary offset just by changing their relative amplitudes. This is one of those things that is nicer when you think about it in terms of complex numbers.
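
A quick numeric check of that identity (a sketch of mine, using numpy):

    import numpy as np

    x = np.linspace(0, 2 * np.pi, 1000)
    offset = 0.7
    lhs = np.sin(x + offset)
    rhs = np.cos(offset) * np.sin(x) + np.sin(offset) * np.cos(x)
    print(np.allclose(lhs, rhs))  # True: the amplitude pair encodes the phase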


The series \sum a_n\cos(n\theta) + \sum b_n\sin(n\theta) and \sum a_n\sin(n\theta + \pi/2) + \sum b_n\sin(n\theta) are literally, exactly the same; it is purely a matter of convention which one you choose to write. In more programming-type language, 'cosine' is just syntactic sugar (or 'sine' is, if you prefer; or, I would rather say, both are, for the real and complex parts of the exponential); but sometimes syntactic sugar is handy!


I believe the reason is that cosine is even, i.e. cos(-x) = cos(x), and sine is odd, sin(-x) = -sin(x). A sum of even functions is still even, and a sum of odd functions is still odd. If you have an even function, i.e. one symmetric about the y-axis like f(x) = x^2, then you can approximate it with only cosines, but in general functions are neither odd nor even, so you need a mix of sines and cosines.
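
You can see the parity point numerically (a small sketch of mine): the DFT of an even real signal has purely real coefficients (cosines only), and that of an odd real signal purely imaginary ones (sines only).

    import numpy as np

    n = 64
    t = np.arange(n)
    even = np.cos(2 * np.pi * 3 * t / n) ** 2  # even around t = 0 (mod n)
    odd = np.sin(2 * np.pi * 3 * t / n)        # odd around t = 0 (mod n)

    print(np.max(np.abs(np.fft.rfft(even).imag)))  # ~0: cosine content only
    print(np.max(np.abs(np.fft.rfft(odd).real)))   # ~0: sine content only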


It will depend on the implementation and processor whether it's faster to calculate sin and cos of a given angle simultaneously, or sin(w) and sin(w + pi/2), but you are right that you could do it all with sin or cos only.

But there is already a different thing called a discrete cosine transform :)


A Chevron geophysicist coded it before Tukey, but did not publish it.


I am genuinely interested in the source for that if you have it.


Here [1] is a mathforum.org thread discussing the unpublished predecessors of the FFT. Vern Herbert of Chevron is mentioned, but so are several others. The mathforum thread has a link to a pdf that supposedly contains more information, but that link is dead.

[1] http://mathforum.org/kb/message.jspa?messageID=1640228


Correct. Since I am a geophysicist, I met him a few times. He was proud of his achievements despite the limited recognition. That's a defect of the US R&D system, which promotes competitive secrecy over scientific gain.


I missed this subthread before posting my comment -- it was Vern Herbert in 1962, according to a book by Jon Claerbout. Details in my other comment.


Here's another history, of sorts, of the FFT, one that traces similar mathematical techniques back to Gauss [1]. A snippet:

"In a recently published history of numerical analysis, H. H. Goldstine attributes to Carl Friedrich Gauss, the eminent German mathematician, an algorithm similar to the FFT for the computation of the coefficients of a finite Fourier series. Gauss' treatise describing the algorithm was not published in his lifetime; it appeared only in his collected works as an unpublished manuscript. "

[1] http://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf


Great article. The FFT, with its many applications (especially convolution-based ones such as matched filtering), is one of my favorite algorithms, and possibly one of the more important ones developed in the 20th century.

I'd put the Simplex method as a close 2nd.


There are a couple of misspellings of "Fourier." One example is the table about halfway through the article, which spells it "Fouirer."



