An Interactive Introduction to Fourier Transforms
403 points by thunderbong 9 days ago | 44 comments
Previous submissions: https://hn.algolia.com/?q=An+Interactive+Introduction+to+Fou...

There are three really important things about the Fourier transform in my mind. Two are math and one is engineering.

- The Fourier transform preserves energy (Parseval's theorem: the norm of the transformed function is the same as the norm of the original).
- There exists an inverse transform to get the original function back.
- Once you grasp that magnitude/phase describe patterns in the function, you can gain powerful intuition about the transform and how to use it as an analytical and design tool.

Those first two properties tell us that the transform preserves information; basically, it's another way of looking at the same thing to gain more insight without loss. The third is something not harped on enough in engineering courses, and failure to teach it is, in my mind, one reason so many people think controls/signal processing is black magic.

A big followup question here is: are there other transforms for which energy is preserved and an inverse exists? The answer is yes, there are infinitely many. The third property is more useful, which raises the question: which of those other transforms are useful?

An example of this is a cousin of the Fourier transform called the Discrete Cosine Transform (DCT), which is critical in compression, classification, and machine learning (especially contemporary speech recognition). It's not as straightforward as Fourier, since the result isn't as obvious as breaking the energy down into patterns; rather, it breaks the energy into decorrelated parts. In other words, it preserves the energy while representing it in decorrelated bins.
The strongest of those bins is the most important part, which is why compression technology works by taking the DCT and tossing out low-magnitude components (it preserves the most important energy). It also shows how the DCT can work for machine learning, where it decomposes the input into an equivalent but decorrelated representation, so inputs aren't shared across different parts of something like a neural net.

There are other equally cool orthogonal transforms. I like the Hilbert transform myself, because it can extract really useful info like signal envelopes and can be used to make wacky noises, like a frequency shifter.

Don't forget that Heisenberg's uncertainty principle can be derived as a property (theorem?) of the Fourier transform.

I found the Laplace transform intriguing as well. The Hilbert transform sounds intriguing.

edit: fixed link

Laplace transforms, being essentially a more powerful superset of Fourier transforms, are awesome. They also make calculus way easier, differential equations specifically. As an electrical engineer, I love me some Laplace transforms.

> a more powerful superset

What do you mean? How is an integral over half of R more powerful than an integral over all of R?

Parent is right. The Laplace transform is a generalization of Fourier's. There are two types of Laplace transforms, the two-sided and the one-sided. The two-sided is defined on all of R. The s variable is a complex frequency of the form s = σ + jw. Setting σ = 0 yields the Fourier transform. The two-sided transform is equivalent to the one-sided if the signal is causal. A causal signal f(t) is zero for all t < 0. All signals in the real world are causal. This is convenient, as the exponential e^(-st) = e^(-(σ + jw)t) gets really big for values of σ close to minus infinity, which renders the whole integral non-convergent. We can avoid that if we make sure the signal is causal.

Good summary.
I would add the bandwidth inequality, which is akin to the Uncertainty Principle in physics: it shows that a function cannot be both time-bounded and frequency-bounded.

Can you recommend any good textbooks? "The Scientist and Engineer's Guide to Digital Signal Processing" is a bit too verbose and hand-wavy for my liking; I'm looking for something more succinct and rigorous.

Two texts I keep on my desk that are a bit more rigorous than most:

- Oppenheim & Schafer, Discrete-Time Signal Processing (the Bible of DSP)
- Manolakis & Ingle, Applied Digital Signal Processing (good discussion of orthogonal transforms)

A lot of what I know about transform analysis comes from self-study of linear algebra/vector spaces, with some reading here and there in commonly cited papers. You might want to pick up a text on that subject; it's the same idea but with more rigor than an engineer would use.

There's also a book I haven't worked all the way through yet. It's dated, heavily based in EE concepts (a non-negligible amount of circuit theory), and extremely rigorous: Theory of Linear Physical Systems by Ernst Guillemin. I picked it up last week actually and quite like it. It has a lot of information and approaches with Fourier/Laplace methods, which is interesting since it predates the FFT and has so much material on concepts that engineers 50 years ago would need to build their intuition with, instead of through tooling. I picked it up for the network theory/dynamical systems angle (which relates to some stuff I'm working on), but the rigor is definitely higher than what you'd see in those more digestible books.

Practical Signal Processing by Owen might be a lead worth looking into. It was recommended by Ossmann in his HackRF tutorial videos (he created the HackRF). It's gonna fall near the extreme end of succinctness. I had since purchased the book but haven't gone more than a couple chapters deep. It definitely looks like it lets you dive into the meat of DSP.
Don't know if it will fit the bill for rigorous. I also didn't pay anything near what it's currently listing for on Amazon.

You could try Arfken and Weber's "Mathematical Methods for Physicists". It's a common reference for physicists. While it doesn't focus on engineering (or signal) applications, it's both fairly rigorous and succinct enough for practical use. Looks like there are PDFs online.

Excellent list; the only thing I would add is the convolution theorem!

It's good that they link to 3Blue1Brown's video as well. I was already decently well versed in DSP and information theory before I saw that video, and it still managed to teach me a very unique perspective on the Fourier transform. These visualizations are very useful for those without an intuition built up. This is the exact way to think about things if you need to work in the frequency domain. I do wish they mentioned phase. It's always glossed over when teaching the Fourier transform, but it is incredibly essential to describing a coherent signal.

love the interactive demos.

one fundamental thing i always feel is missing with all these videos and articles about the spinny circles set end to end with different phases and amplitudes is: why on earth do such configurations happen to have the capacity to approximate any function you prescribe?? to me this is the entire mystery behind fourier transforms. the spinny circles are kind of unusual to look at, but do nothing to illuminate why convergence of fourier series happens, and for exactly this reason i am of the opinion that this meme analogy is not useful for beginners beyond entertainment. of course the details for convergence of fourier series are the entire topic of classical harmonic analysis. one hand-wavy way to make sense of it is to first sample, and then identify that the dft matrix for the vector space of sampled signals is a basis.
kind of similarly, int sin x sin nx dx from 0 to 2pi is 0 for all n ≠ 1, and the span of {sin nx, cos nx} is somehow dense in some function space. although that isn't really very illuminating, since to the uninformed it amounts to a numerical coincidence. every single article of this sort that i have seen falls flat in this respect, and i feel like this most interesting part has been obscured.

> why on earth do such configurations happen to have the capacity to approximate any function you prescribe??

Well, they don't. Only certain classes of functions with bounded variation will have a convergent Fourier series approximation. I think the best way for these demos to introduce this stuff would be to focus on the one with the steps, and point out the Gibbs phenomenon around the jumps (ringing). Then show that for smooth blobby things you are all good. I think historically a lot of mathematicians had a lot of problems with Fourier's methods applied to the heat equation for these types of reasons (which initial/boundary conditions are OK, etc.), hence we have the whole field of harmonic analysis now... it is hard to give a satisfying high-level explanation, but I would love to see someone try.

a Dirichlet kernels proof I just looked up, which works for integrable periodic functions, is accessible to a really smart high schooler, but still involves a lot of minutiae: http://math.uchicago.edu/~may/REU2012/REUPapers/Cuddy.pdf. You can define a function called the "n-th order Dirichlet kernel" which, when you convolve it with f, yields the n-th order Fourier series approximation for f. I guess the missing intuition is that Dirichlet kernels behave more and more like a delta as the order goes up, which you can prove using algebra and calculus.
Intuitively you know what convolution with a delta looks like, so it is easy to believe that convolution with something near a delta is numerically similar. There is a nice visual on the wiki article of a sequence of Dirichlet kernels looking more and more like a delta.

1. The only thing that matters in engineering is the discrete Fourier transform (DFT). That's what anyone who wants to calculate anything must use.
2. The fast Fourier transform (FFT) is used to calculate the DFT. Cue up Matlab or LabView.
3. Knowing the units of the abscissa in sample and frequency space is next.
4. Studying the sampling theorem so hard you can describe it in a number of different contexts, rigorously, is critical.
5. Knowing what aliasing and antialiasing are is critical.
6. Knowing how to use transforms to interpolate is next.
7. Knowing how to mess around with the real and imaginary parts of the complex DFT to improve the S/N is a good achievement.
8. Convolutional filters.
9. Smoothing data sets with non-causal filters.
10. What windowing functions, and why?

Have I left anything out?

Yes. The problem is assuming the only thing that matters with the Fourier transform is the DFT. Point #1 is incorrect. Sometimes sampling theorems are misleading (not incorrect). There are cases where it is highly advantageous to compute a continuous Fourier transform without a DFT.

I think my answer was highly oriented towards handling real data. But even in simulation, any calculations are discrete.

It helps tremendously to understand continuous Fourier transforms, because most of the theorems have immediate discrete analogs.

Where could someone go (ideally online) to learn those things and practice them?
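Points 1 through 5 in the list above can be sketched in a few lines. This is a minimal illustration (using numpy, which is an assumption; the thread mentions Matlab and LabView) of DFT bin units and of aliasing: a tone above the Nyquist frequency shows up at the wrong bin.

```python
import numpy as np

fs = 100.0             # sample rate, Hz
n = 100                # number of samples
t = np.arange(n) / fs

# A 70 Hz tone sampled at 100 Hz is above Nyquist (50 Hz)
# and aliases down to |70 - 100| = 30 Hz.
x = np.sin(2 * np.pi * 70.0 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)   # abscissa in physical units: Hz

peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # 30.0 -- the alias, not the true 70 Hz
```

An anti-aliasing filter before sampling exists precisely to remove energy above fs/2 so this misattribution cannot happen.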
"The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W. Smith, available online here: http://www.dspguide.com/pdfbook.htm

One of my colleagues lent me a hardcover copy; very good explanations. I found it a huge help for a recent project I was working on (a sonic vibration sensor). It was the clearest explanation of everything I was able to find. Fourier transforms were covered during my engineering degree, but it had been some years since I'd needed to concern myself with them. Top Google hits were not really helping, and I found this textbook invaluable.

I recommend the Coursera course Digital Signal Processing by Martin Vetterli: https://www.coursera.org/learn/dsp. He does a good job of explaining the math behind the DFT. Any finite signal can be expressed as a linear combination of complex exponentials - that was the aha! moment for me. Once you have that under your belt, you might find Audio Signal Processing for Music Applications by Xavier Serra a fun course to complete: https://www.coursera.org/learn/audio-signal-processing.

I personally learned mostly from experiments in LabView. The Smith book is excellent.

This is very cool. Tangent idea: it seems like the way we describe Earth's movement in space (rotation + orbit + precession) is akin to the image of the hand that draws itself; instead of describing the whole trajectory, we describe it in terms of "circular components" (rotation, orbit, and precession; yes, I'm aware the orbit is not perfectly circular). The closest visualization of Earth's "full trajectory" in space that I've been able to find is a video on YouTube (https://youtu.be/0jHsq36_NTU), which unfortunately is a bit exaggerated and not very accurate. Has anyone seen something better than the above?

Great job on the article! Really succinct and a good introduction to the topic. It would've really helped me in undergrad when I first was learning about the Fast Fourier Transform.

Two easy questions: 1.
Metaphysically, is everything actually constructed from sine waves? Or is this totally arbitrary (e.g., just as easy to construct things from square waves)? 2. Neural firing approximates sine waves in the brain (e.g., alpha waves). Is a single neuron firing a square wave with a small duty cycle, or something else altogether?

1. Square waves also work. That's the Hadamard transform. 2. Individual neurons produce very short spikes (action potentials). EEGs measure those spikes in aggregate, which creates a very noisy and not at all sinusoidal signal. The Greek letters denote particular frequency bands, so if a graph of "alpha waves" looks somewhat sinusoidal, it's because the signal has been filtered to suppress other frequencies.

Wow. Fantastic. 1. Can I ask where you learned that? So happy to know this. By virtue of the 0/1 approach, it seems almost more fundamental than sine waves. Or maybe negative infinity to infinity is more fundamental? Anyhow, that's just metaphysics. I'm also curious what it implies for digital computing. Here is a nice article that taught it better than Wikipedia: https://www.mathworks.com/help/signal/ug/walshhadamard-trans... 2. Alpha on a single EEG channel can look pretty sinusoidal. Is there a measure of sinusoidality that would allow me to assert how sinusoidal they are? :)

awesome answers to my easy questions, thanks!

Bug: if you move the harmonic-count slider to the left and redraw the wave, the slider moves to the right... If you press Play right after redrawing the wave, the harmonics are there. If you keep spamming Play as the right side of the image animates into a filtered wave, the quieter (not necessarily upper) harmonics gradually fade into silence.

The write-up is so amazing. The animated diagrams are so amazing too. I wonder how many weeks it would have taken to create such a write-up.

Well, this doesn't actually explain the FFT at all, but it could be interesting to use the circle-like thing on speech.
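The square-wave answer above can be made concrete: the Walsh-Hadamard basis consists of ±1 "square" vectors, and like the Fourier basis it is orthogonal, so it has an inverse and preserves energy. A minimal numpy sketch (the 8-point Sylvester construction; the example signal is arbitrary):

```python
import numpy as np

# Sylvester construction of the Hadamard matrix: square-wave-like
# +1/-1 basis vectors, analogous to sinusoids in the Fourier basis.
H = np.array([[1.0]])
for _ in range(3):                 # three doublings -> an 8x8 matrix
    H = np.block([[H, H], [H, -H]])

x = np.array([4., 2., 2., 0., 0., 0., 0., 0.])
X = H @ x / np.sqrt(8)             # orthonormal forward transform

# Orthogonality gives both an inverse and Parseval-style
# energy preservation, just as for the DFT.
x_back = H.T @ X / np.sqrt(8)
print(np.allclose(x, x_back))                   # True
print(np.isclose(np.sum(x**2), np.sum(X**2)))   # True
```

Because the matrix entries are only ±1, the transform needs no multiplications at all, which is part of why it is attractive for digital hardware.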
This is such a nice intro, really well done.

This is the best introduction to FFTs I've ever seen. I wish I had this when I was an undergrad.

Best illustration ever for the saying, "Give me 19 parameters and I'll fit an elephant!"

Thanks for this article. It has Fourier-transformed my brain waves!

Small nitpick: you cannot reconstruct a square wave even with infinitely many Fourier coefficients, due to the Gibbs phenomenon (https://en.m.wikipedia.org/wiki/Gibbs_phenomenon).

Your link seems to contradict you?

> Informally, the Gibbs phenomenon reflects the difficulty inherent in approximating a discontinuous function by a finite series of continuous sine and cosine waves. It is important to put emphasis on the word finite because even though every partial sum of the Fourier series overshoots the function it is approximating, the limit of the partial sums does not. The value of x where the maximum overshoot is achieved moves closer and closer to the discontinuity as the number of terms summed increases so, again informally, once the overshoot has passed by a particular x, convergence at that value of x is possible.

There is no contradiction in the overshoot converging to a non-zero amount while the limit of the partial sums has no overshoot, because the location of that overshoot moves. We have pointwise convergence, but not uniform convergence. For a piecewise C1 function, the Fourier series converges to the function at every point except at the jump discontinuities. At the jump discontinuities themselves, the limit converges to the average of the values of the function on either side of the jump. This is a consequence of Dirichlet's theorem.

the second paragraph spells it out? at the jump the series converges to the average rather than either value.

Meta nitpick: the Gibbs phenomenon is a statement about any arbitrary finite decomposition. The limit of the waveform as the number of terms goes to infinity is in fact exactly a square wave.
As usual, infinity complicates things; the series has pointwise convergence but not uniform convergence in such cases.

Doesn't it still converge in the L2 norm? (The L2 distance between two functions is the square root of the integral of their squared difference.)

sure you can, up to a negligible set

One bit of pedantry to complain about: this (really clever) essay is really only describing Fourier decomposition, the idea that any function can be described in a vector space of sinusoidal basis functions. The Fourier transform is an algorithm that exploits some (also really clever, and obviously of critical practical impact) factoring representations to compute that sum of functions in N*log(N) time in the number of data points, instead of the naive N^2 summation of products.

You're describing the "Fast" Fourier Transform, which is a specific algorithm for efficiently calculating the Discrete Fourier Transform of a signal. The plain ol' Fourier transform is also a bit different from what's described in this blog post. Fourier transforms can be thought of as an extension of the Fourier series described there. The Fourier series shown there all have not only finitely many terms, but also a finite frequency spacing between successive terms (e.g. 1 Hz, 2 Hz, 3 Hz, ...). Fourier transforms build up signals from _all_ frequencies, so they are expressed as an integral over frequency components instead of a sum, even if it's an infinite sum.
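The distinction drawn above can be demonstrated directly: the DFT is just a linear map, and the FFT is a faster algorithm for computing that same map. A minimal numpy sketch comparing the naive O(N^2) summation of products against np.fft.fft (the test signal is arbitrary random data):

```python
import numpy as np

def naive_dft(x):
    """O(N^2) DFT: for each bin k, sum x[n] * exp(-2j*pi*k*n/N)."""
    n = len(x)
    k = np.arange(n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)  # full N x N DFT matrix
    return w @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# The FFT computes the identical linear map, just in O(N log N).
print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True
```

The agreement holds for any input; the two differ only in operation count, which is why the FFT made transform methods practical on real data sets.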
