
An Interactive Introduction to Fourier Transforms - thunderbong
http://www.jezzamon.com/fourier/
======
alberto_ol
Previous submissions
[https://hn.algolia.com/?q=An+Interactive+Introduction+to+Fourier+Transforms](https://hn.algolia.com/?q=An+Interactive+Introduction+to+Fourier+Transforms)

------
holy_city
There are three really important things about the Fourier transform in my
mind. Two are math and one is engineering.

\- the Fourier Transform preserves energy (Parseval's theorem: the norm of
the transformed function is the same as the norm of the original)

\- there exists an inverse transform to get the original function back

\- once you grasp that magnitude/phase describe patterns in the function you
can gain powerful intuition about the transform and how to use it as an
analytical and design tool.

Those first two properties tell us that the transform preserves information,
basically it's another way of looking at the same thing to gain more insight
without loss. The third is something not harped on enough in engineering
courses, and failure to teach it is, in my mind, one reason so many people
think controls/signal processing is black magic.
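Both of those first two properties are quick to check numerically. A minimal
numpy sketch (assuming numpy's default FFT normalization, where Parseval's
theorem holds up to a factor of N):

```python
import numpy as np

# A hypothetical test signal -- any finite sequence works.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

X = np.fft.fft(x)

# Property 1: Parseval's theorem -- energy is preserved
# (with numpy's convention, up to a factor of N).
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
assert np.isclose(energy_time, energy_freq)

# Property 2: the inverse transform recovers the original exactly
# (to floating-point precision), so no information is lost.
x_back = np.fft.ifft(X).real
assert np.allclose(x, x_back)
```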

A big followup question here is, are there other transforms for which energy
is preserved, and there exists an inverse? The answer is yes, there are
infinitely many. It's the third property that makes a transform genuinely
useful, which raises the question: which of those other transforms are
useful?

An example of this is a cousin of the Fourier Transform called the Discrete
Cosine Transform which is critical in compression, classification, and machine
learning (especially contemporary speech recognition). It's not as
straightforward as Fourier, since the result isn't as obvious as breaking
down the energy into sinusoidal patterns. Instead, it breaks the energy into
_decorrelated_ parts: it preserves the energy while representing it in
decorrelated bins. The strongest of those bins carry the most important
information, which is why compression works by taking the DCT and tossing
out low-magnitude components (this keeps the most important energy). The
same property makes it useful for machine learning: it decomposes the input
into an equivalent but decorrelated representation, so inputs aren't shared
across different parts of something like a neural net.
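A rough numpy/scipy sketch of that energy-compaction idea (the smooth test
signal and the number of kept coefficients are arbitrary choices for
illustration):

```python
import numpy as np
from scipy.fft import dct, idct

# A smooth "blobby" signal: most of its DCT energy lands in a few bins.
t = np.linspace(0, 1, 256)
x = np.exp(-((t - 0.5) ** 2) / 0.01)

c = dct(x, norm='ortho')   # orthonormal DCT-II preserves energy
assert np.isclose(np.sum(x ** 2), np.sum(c ** 2))

# "Compression": keep only the 16 largest-magnitude coefficients.
keep = 16
small = np.argsort(np.abs(c))[:-keep]
c_compressed = c.copy()
c_compressed[small] = 0.0

# 16 of 256 coefficients still capture almost all of the energy.
x_approx = idct(c_compressed, norm='ortho')
err = np.sum((x - x_approx) ** 2) / np.sum(x ** 2)
assert err < 0.01
```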

There are other equally cool orthogonal transforms, I like the Hilbert
transform myself because it can extract really useful info like signal
envelopes and be used to make wacky noises, like a frequency shifter.
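Envelope extraction with the Hilbert transform is only a few lines with
scipy (the 2 Hz modulation on a 50 Hz carrier here is an arbitrary test
signal):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(fs) / fs                               # one second of samples
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)     # slow 2 Hz modulation
x = envelope * np.sin(2 * np.pi * 50 * t)            # 50 Hz carrier

# The magnitude of the analytic signal recovers the envelope.
recovered = np.abs(hilbert(x))
assert np.allclose(recovered, envelope, atol=1e-6)
```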

~~~
brooksbp
Can you recommend any good textbooks?

"The Scientist and Engineer's Guide to Digital Signal Processing" is a bit too
verbose and hand wavy for my liking; looking for something more succinct and
rigorous.

~~~
holy_city
Two texts I keep on my desk that are a bit more rigorous than most:

\- Oppenheim & Schafer _Discrete Time Signal Processing_ (the Bible of DSP)

\- Manolakis & Ingle, _Applied Digital Signal Processing_ (good discussion of
orthogonal transforms).

A lot of what I know about transform analysis comes from self study of linear
algebra/vector spaces with some reading here and there in commonly cited
papers. Might want to pick up a text on that subject, it's the same idea but
more rigor than an engineer would use.

There's also a book I haven't worked all the way through yet and is dated,
heavily based in EE concepts (non negligible amount of circuit theory) and
extremely rigorous called _Theory of Linear Physical Systems_ by Ernst
Guillemin. I picked it up last week actually and quite like it.

It has a lot of information and approaches with Fourier/Laplace methods, which
is interesting since it predates the FFT and has so much information on
concepts that engineers 50 years ago would need to build their intuition with
instead of through tooling. I picked it up for the network theory/dynamical
systems angle (which relates to some stuff I'm working on) but the rigor is
definitely higher than what you'd see in those more digestible books.

------
willis936
It’s good that they link to 3Blue1Brown’s video as well. I was already
decently well versed in DSP and information theory before I saw that video,
and it still managed to teach me a genuinely new perspective on the Fourier
transform.

These visualizations are very useful for those without an intuition built up.
This is the exact way to think about things if you need to work in the
frequency domain.

I do wish they mentioned phase. It’s always glossed over when teaching the
Fourier transform, but it’s essential to describing a coherent signal.
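One way to see how essential phase is: keep the magnitude spectrum, discard
the phase, and the signal is destroyed. A small numpy sketch, using a random
test signal for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(512)

X = np.fft.fft(x)

# Keep both magnitude and phase: perfect reconstruction.
x_full = np.fft.ifft(X).real
assert np.allclose(x_full, x)

# Keep the magnitude spectrum but throw away all phase information.
x_nophase = np.fft.ifft(np.abs(X)).real

# Magnitude alone is not enough -- the result bears little
# resemblance to the original signal.
err = np.linalg.norm(x - x_nophase) / np.linalg.norm(x)
assert err > 0.5
```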

------
enthdegree
love the interactive demos.

one fundamental thing i always feel is missing with all these videos and
articles about the spinny circles set end to end with different phases and
amplitude is: why on earth do such configurations happen to have the capacity
to approximate any function you prescribe?? to me this is the entire mystery
behind fourier transforms. the spinny circles are kind of unusual to look at,
but do nothing to illuminate to me why convergence of fourier series happens,
and for this reason exactly i am of the opinion that this meme analogy is not
useful for beginners beyond entertainment.

of course the details for convergence of fourier series are the entire topic
of classical harmonic analysis. one hand-wavy way to make sense of it is to
first sample and then identify that the dft matrix for the vector space of
sampled signals is a basis. kind of similarly, int dx sin x sin nx from 0 to
2pi is 0 for all n ≠ 1, and the span of {sin nx, cos nx} is somehow dense in some
function space. although that isn't really very illuminating since to the
uninformed it amounts to a numerical coincidence. every single article of this
sort that i have seen falls flat in this respect and i feel like this most
interesting part has been obscured.

~~~
salty_biscuits
> why on earth do such configurations happen to have the capacity to
> approximate any function you prescribe??

Well, they don't. Only certain classes of functions, e.g. those of bounded
variation, will have a convergent Fourier series approximation. I think the
best way for these demos to introduce this stuff would be to focus on the
one with the steps and point out the Gibbs phenomenon around the jumps
(ringing), then show that for smooth blobby things you are all good. I think
historically a lot of mathematicians had a lot of problems with Fourier's
methods applied to the heat equation for these types of reasons (which
initial/boundary conditions are ok, etc.), hence we have the whole field of
harmonic analysis now...
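The Gibbs ringing is easy to demonstrate numerically. A small numpy sketch
using partial sums of a square wave's Fourier series (the grid and term
counts are arbitrary illustration choices):

```python
import numpy as np

# Partial Fourier sums of the square wave (pi/4) * sign(sin x):
# s_n(x) = sum_{k=0}^{n-1} sin((2k+1)x) / (2k+1).
x = np.linspace(1e-4, 0.5, 8000)   # dense grid near the jump at x = 0

def partial_sum(n):
    k = np.arange(n)
    return (np.sin(np.outer(x, 2 * k + 1)) / (2 * k + 1)).sum(axis=1)

# The overshoot near the jump does NOT shrink as terms are added; it
# converges to roughly 18% of the half-jump (the Gibbs constant).
for n in (20, 80, 320):
    overshoot = partial_sum(n).max() / (np.pi / 4) - 1.0
    assert 0.15 < overshoot < 0.20
```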

------
aj7
1\. The only thing that matters in engineering is the discrete Fourier
transform (DFT). That’s what anyone who wants to calculate anything must
use.

2\. The fast Fourier transform (FFT) is used to calculate the DFT. Cue up
Matlab or LabVIEW.

3\. Knowing the units of the abscissa in sample and frequency space is next.

4\. Studying the sampling theorem so hard you can describe it in a number of
different contexts, rigorously, is critical.

5\. Knowing what aliasing and antialiasing are is critical.

6\. Knowing how to use transforms to interpolate is next.

7\. Knowing how to mess around with the real and imaginary parts of the
complex DFT to improve the S/N is a good achievement.

8\. Convolutional filters.

9\. Smoothing data sets with non-causal filters.

10\. What windowing functions, and why?

Have I left anything out?
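As a quick illustration of points 4 and 5, aliasing can be shown in a few
lines of numpy (the sample rate and tone frequencies here are arbitrary):

```python
import numpy as np

fs = 100                      # sample rate (Hz)
t = np.arange(fs) / fs        # one second of samples

# A 70 Hz tone sampled at 100 Hz is above the 50 Hz Nyquist limit, so
# it aliases: its samples are identical to those of a 30 Hz tone
# (70 = 100 - 30, with inverted phase for a sine).
x_70 = np.sin(2 * np.pi * 70 * t)
x_30 = np.sin(2 * np.pi * 30 * t)
assert np.allclose(x_70, -x_30, atol=1e-9)

# The DFT agrees: the peak shows up in the 30 Hz bin, not at 70 Hz.
peak_bin = np.argmax(np.abs(np.fft.rfft(x_70)))
assert peak_bin == 30
```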

~~~
ta1234567890
Where could someone go (ideally online) to learn those things and practice
them?

~~~
bigger_cheese
"The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W
Smith available online here:
[http://www.dspguide.com/pdfbook.htm](http://www.dspguide.com/pdfbook.htm)

One of my colleagues lent me a hardcover copy. Very good explanations; I
found it a huge help for a recent project I was working on (a sonic
vibration sensor).

It was the clearest explanation of everything I was able to find. Fourier
transforms were covered during my engineering degree, but it had been some
years since I'd needed to concern myself with them. Top Google hits were not
really helping, and I found this textbook invaluable.

------
ta1234567890
This is very cool.

Tangent idea: it seems like the way we describe Earth's movement in space
(rotation + orbit + precession) is akin to the image of the hand that draws
itself; instead of describing the whole trajectory, we describe it in terms
of "circular components" (rotation, orbit, and precession; yes, I'm aware
the orbit is not perfectly circular).

The closest visualization of Earth's "full trajectory" in space, that I've
been able to find, is a video on YouTube
([https://youtu.be/0jHsq36_NTU](https://youtu.be/0jHsq36_NTU)), which
unfortunately is a bit exaggerated and not very accurate.

Has anyone seen something better than the above?

------
pequalsnp
Great job on the article! Really succinct and a good introduction to the
topic. Would've really helped me in undergrad when I first was learning about
the Fast Fourier Transform.

------
dr_dshiv
Two easy questions:

1\. Metaphysically, is everything actually constructed from sine waves? Or
is this totally arbitrary (e.g., just as easy to construct things from
square waves)?

2\. Neural firing approximates sine waves in the brain (e.g., alpha waves).
Is a single neuron firing a square wave with a small duty cycle, or
something else altogether?

~~~
yorwba
1\. Square waves also work. That's the Hadamard transform.

2\. Individual neurons produce very short spikes (action potentials). EEGs
measure those spikes in aggregate, which creates a very noisy and not at all
sinusoidal signal. The Greek letters denote particular frequency bands, so if
a graph of "alpha waves" looks somewhat sinusoidal, it's because the signal
has been filtered to suppress other frequencies.
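A small scipy sketch of point 1: the Hadamard matrix's rows are ±1 "square
wave" patterns, and like the Fourier transform it is orthogonal (up to
scaling), energy-preserving, and exactly invertible:

```python
import numpy as np
from scipy.linalg import hadamard

# The 8x8 Hadamard matrix: rows are +/-1 "square wave" patterns.
H = hadamard(8)

# Orthogonal up to scaling: H H^T = n I.
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
coeffs = H @ x / 8            # forward Walsh-Hadamard transform
x_back = H @ coeffs           # inverse: H is its own inverse up to 1/n
assert np.allclose(x, x_back)

# Parseval-style energy preservation (with this normalization):
assert np.isclose(np.sum(x ** 2), 8 * np.sum(coeffs ** 2))
```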

~~~
dr_dshiv
Wow. Fantastic.

1\. Can I ask where you learned that? So happy to know this. By virtue of
the 0/1 approach, it seems almost more fundamental than sine waves. Or
maybe negative infinity to infinity is more fundamental? Anyhow, that's just
metaphysics. I'm also curious what it implies for digital computing.

Here is a nice article that taught it better than Wikipedia:
[https://www.mathworks.com/help/signal/ug/walshhadamard-transform.html](https://www.mathworks.com/help/signal/ug/walshhadamard-transform.html)

2\. Alpha on a single EEG channel can _look_ pretty sinusoidal. Is there a
measure of sinusoidality that would allow me to assert how sinusoidal they
are?

:) awesome answers to my easy questions, thanks!

------
jimbo1qaz
Bug: If you move the harmonic-count slider to the left and redraw the wave,
the slider moves to the right... If you press Play right after redrawing the
wave, the harmonics are there. If you keep spamming Play as the right side of
the image animates into a filtered wave, the quieter (not necessarily upper)
harmonics gradually fade into silence.

------
hi41
The write-up is amazing, and so are the animated diagrams. I wonder how many
weeks it would have taken to create such a write-up.

------
PavlikPaja
Well, this doesn't actually explain the FFT at all, but it could be
interesting to use the circle-like thing on speech.

------
ykevinator
This is such a nice intro, really well done.

------
gHosts
Best illustration ever for the saying, "Give me 19 parameters and I'll fit an
Elephant!"

------
atarian
This is the best introduction to the FFT I've ever seen. I wish I had this
when I was an undergrad.

------
hongo
Thanks for this article. It has Fourier-transformed my brainwaves!

------
catchmrbharath
Small nitpick: you cannot reconstruct a square wave even with infinitely
many Fourier coefficients, due to the Gibbs phenomenon
([https://en.m.wikipedia.org/wiki/Gibbs_phenomenon](https://en.m.wikipedia.org/wiki/Gibbs_phenomenon)).

~~~
seiferteric
Your link seems to contradict you?

> Informally, the Gibbs phenomenon reflects the difficulty inherent in
> approximating a discontinuous function by a finite series of continuous sine
> and cosine waves. It is important to put emphasis on the word finite because
> even though every partial sum of the Fourier series overshoots the function
> it is approximating, the limit of the partial sums does not. The value of x
> where the maximum overshoot is achieved moves closer and closer to the
> discontinuity as the number of terms summed increases so, again informally,
> once the overshoot has passed by a particular x, convergence at that value
> of x is possible.

> There is no contradiction in the overshoot converging to a non-zero amount,
> but the limit of the partial sums having no overshoot, because the location
> of that overshoot moves. We have pointwise convergence, but not uniform
> convergence. For a piecewise C1 function the Fourier series converges to the
> function at every point except at the jump discontinuities. At the jump
> discontinuities themselves the limit will converge to the average of the
> values of the function on either side of the jump. This is a consequence of
> the Dirichlet theorem.[11]

~~~
mlevental
The second paragraph spells it out? At the jump, the series converges to the
average rather than to either value.

------
ajross
One bit of pedantry to complain about: this (really clever) essay is really
only describing Fourier Decomposition, the idea that any function can be
described in a vector space of sinusoidal basis functions. The Fourier
_Transform_ is an algorithm that exploits some (also really clever, and
obviously of critical practical impact) factoring representations to compute
that sum of functions in N*log(N) time in the number of data points instead of
the naive N^2 summation of products.

~~~
jauntbox
You're describing the "Fast" Fourier Transform, which is a specific algorithm
for efficiently calculating the Discrete Fourier Transform of a signal.

The plain ol' Fourier Transform is also a bit different from what's
described in this blog post. Fourier Transforms can be thought of as an
extension of the Fourier Series described there. The Fourier Series shown
there all have not only finitely many terms, but also a finite frequency
spacing between successive terms (e.g. 1 Hz, 2 Hz, 3 Hz, ...). Fourier
Transforms build up signals from _all_ frequencies, so they are expressed as
an integral over frequency components instead of a sum, even an infinite
one.
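That finite frequency spacing is easy to see with numpy's helper for DFT bin
frequencies (the sample rate and lengths here are arbitrary choices):

```python
import numpy as np

fs = 1000   # sample rate (Hz)

# With N samples, the DFT bins are spaced fs/N apart -- a discrete,
# Fourier-series-like grid of frequencies.
for N in (100, 200, 400):
    freqs = np.fft.rfftfreq(N, d=1 / fs)
    spacing = freqs[1] - freqs[0]
    assert np.isclose(spacing, fs / N)

# Taking N larger makes the grid finer and finer; in the limit, the sum
# over discrete bins becomes the Fourier transform's integral.
```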

