
Fourier transform for dummies - wfn
https://math.stackexchange.com/questions/1002/fourier-transform-for-dummies
======
thomasahle
This is what made it 'click' for me: [http://betterexplained.com/articles/an-
interactive-guide-to-...](http://betterexplained.com/articles/an-interactive-
guide-to-the-fourier-transform/)

Ever since I found Better Explained I have recommended it to anyone wanting to
get actual intuition for high school / undergraduate level math concepts.

~~~
jakejake
Thanks for that - looks like an interesting site for all kinds of math-related
subjects.

I found another page that I felt offers a nice overview of the Fourier
transform and various practical uses, without getting into the actual
implementation details - [http://nautil.us/blog/the-math-trick-behind-
mp3s-jpegs-and-h...](http://nautil.us/blog/the-math-trick-behind-mp3s-jpegs-
and-homer-simpsons-face)

~~~
jaynos
This is the one that made it click for me...even after learning them 4 or 5
times in assorted calc classes. I actually came to the comments here hoping
that someone had posted the link because I lost it.

------
joosters
I understand Fourier transforms, but the top rated answer IMO is a terrible
and confusing way of introducing them to someone.

~~~
tptacek
... because...

~~~
joosters
Because it introduces and relies upon far too many other mathematical
concepts: angular frequencies, complex numbers, integration, and lots of
formulas. It was a '...for dummies' question, and you can explain FTs with
much simpler means.

The example that worked best for me is with audio sampling. A demonstration of
how you can build up a 'wiggly line' to match an audio sample by adding
together multiple waves of different frequencies. This then leads into talking
about a real world example - mp3 encoding.
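The "build up a wiggly line by adding waves" idea can be sketched in a few lines of plain Python. This is a toy illustration (not how MP3 encoders actually work): the partial Fourier series of a square wave, where adding more sine terms makes the sum hug the target more closely.

```python
import math

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a square wave: sum of odd sine harmonics."""
    return sum(4 / (math.pi * k) * math.sin(k * t)
               for k in range(1, 2 * n_terms, 2))

# Sample points inside (0, pi), where the square wave equals 1
ts = [0.1 + 0.05 * i for i in range(60)]

def max_err(n):
    """Worst-case gap between the wave sum and the square wave."""
    return max(abs(1.0 - square_partial_sum(t, n)) for t in ts)

# More waves added together -> a better match to the 'wiggly line'
print(max_err(3), max_err(30))
```

Dropping the small high-frequency terms from such a sum is, loosely, what lossy audio coding does with the coefficients.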

~~~
zodiac
In my experience, math.stackexchange likes mathematical rigour and presupposes
some mathematical maturity. There are a fair number of professional
mathematicians using the site.

Sure, if you just want a "for dummies" understanding, a hand-wavy example like
the second highest voted answer suffices. But if you want to learn it properly
(prove theorems, etc), you need to know concepts like complex numbers and
integration.

~~~
mturmon
If you wanted something deeper than the square wave example, you could
explain _why the complex exponential is used_. This would provide important
insight, because a smart reader might say: OK, fine, square wave
approximation, but why don't we use step functions, or Gaussian-shaped bumps,
or anything else as basis functions?

By the Stone-Weierstrass theorem
([http://en.wikipedia.org/wiki/Stone–Weierstrass_theorem](http://en.wikipedia.org/wiki/Stone–Weierstrass_theorem))
we know that lots (really, lots!) of sets of functions would work, not just
complex exponentials.

The story I tell myself about that is, "complex exponentials are the (unique)
eigenfunctions of linear time-invariant systems".

Simply put, this means that if S(.) is a linear system built from
derivatives, integrals, and scalings and shifts in time, then

    S(exp(i omega t)) = const * exp(i omega t)

I.e., you put in a complex exponential, you get out a (scaled) complex
exponential. That's not going to happen, in general, for other basis
functions.

This property is (under some not-too-restrictive conditions) essentially a
defining property of the complex exponential. (Notice that d/dt [exp(a t)] =
a exp(a t) -- which turns out to have a converse as well.)

Then, because S is linear, a linear combo of complex exponentials (on the
input of S) transforms into a weighted combo of the same exponentials on the
output.
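This eigenfunction property is easy to check numerically. A minimal pure-Python sketch, where the LTI system is a circular convolution with an arbitrary three-tap kernel `h` (any kernel would do): feed in a sampled complex exponential and the output is the input scaled by one constant.

```python
import cmath

N = 64
omega = 5  # integer frequency, so exp(2*pi*i*omega*n/N) is N-periodic

# Input: sampled complex exponential
x = [cmath.exp(2j * cmath.pi * omega * n / N) for n in range(N)]

# An arbitrary LTI system: circular convolution with a fixed kernel
h = [0.5, 0.3, 0.2]

def S(sig):
    return [sum(h[k] * sig[(n - k) % N] for k in range(len(h)))
            for n in range(N)]

y = S(x)

# Eigenfunction property: y[n] = const * x[n], the SAME const for every n.
# (The constant is the filter's frequency response at omega.)
ratios = [y[n] / x[n] for n in range(N)]
print(ratios[0])
```

A Gaussian bump or step function pushed through the same `S` would come out with a different shape, not merely rescaled, which is the contrast the comment is pointing at.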

To me, that's the deeper reason that Fourier analysis is important. Those
sines and cosines are mathematically inevitable, right from the moment in high
school when you learned d/dx e^x = e^x.

Noted on that SE page (the OP) is a related but subsidiary property: the
Fourier bases are also the eigenfunctions of the wave equation. But the wave
equation is linear, so this follows from the above reasoning.

I imagine, but I don't know, that the usefulness of the Fourier basis for
quantum-mechanical analysis also follows from linearity.

------
lutusp
Topic aside, I'm now officially jealous of those who post to
StackExchange/Mathematics, because they can post Latex and see it rendered:

<p>$$z(t) = \int_{-\infty}^{\infty}R(\omega) e^{i\omega t}
\mathrm{d}\omega$$</p> ->
[https://i.imgur.com/fLFlv7f.png](https://i.imgur.com/fLFlv7f.png)

I can do this when I create a page on my site (I also have MathJax installed),
but none of the discussion groups I post to supports Latex rendering. I keep
thinking this will eventually become universal, but IMHO "eventually" is
taking too long.

Anyway: [http://www.mathjax.org/](http://www.mathjax.org/) (it's free)

------
quarterwave
Start with different people talking all at once - some loudly, some softly. We
can try and make out what someone is saying, but it's not easy. So we put each
person on a different radio station & we then 'tune' our radio with a little
knob and dial.

If our ears could hear radio frequencies - like AM1070 - then each station by
itself would sound like a dull drone or a tuning fork. It's dull because it's
repetitive - it sounds the same this second as it sounded the previous one. To
make the drone interesting we multiply it with a person's voice signal (say
from a microphone or rolling tape) - then the drone kind of goes wah-wuh-wuh-
wah, and this 'modulated' signal is a lot more interesting.

That's basically the Fourier transform - a sum of Speech.Station signals,
where Speech is interesting and Station is repetitive.

\---

We can play this same game with the speech itself - or music. Think of each
note do-re-mi-... as radio stations, and the pressure applied to each key of
the piano as the speech. So, music can be represented by a combination of the
form KeyPressure.MusicalNote. The KeyPressure sequence can be recorded in a
music sheet. The notes themselves are boring (as the kids say in The Sound of
Music, the notes don't _mean_ anything), it's the "multiplication and sum"
with the KeyPressure sequence that makes music.

Again, note the summation over x.y terms, where x is interesting and y is
repetitive - that's the Fourier transform.
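The "tuning the radio" step of this analogy can be sketched numerically: put two signals on two carrier frequencies, add them onto one "wire", then recover one by multiplying by its carrier and averaging, so the other station cancels out. (Using constants `a` and `b` as stand-ins for the speech envelopes is a deliberate simplification.)

```python
import math

N = 1000
t = [2 * math.pi * n / N for n in range(N)]

# Two "speech" amplitudes riding on two "station" carriers
a, b = 0.7, -0.3
wire = [a * math.cos(10 * tk) + b * math.cos(25 * tk) for tk in t]

# "Tune in" station 10: multiply by its carrier and average.
# The cross terms with station 25 average to zero (orthogonality);
# only the matching station survives.
tuned = 2 * sum(wire[n] * math.cos(10 * t[n]) for n in range(N)) / N
print(tuned)  # recovers a = 0.7
```

That multiply-and-average against each "station" in turn is exactly the inner product the Fourier transform computes.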

Historical sidelight: The simplest sheet music is Morse code, dah-dit. When
telegraph became a booming business there was a shortage of wires to handle
the volume of messages. People wanted a way to send more Morse per wire.
Alexander Graham Bell had a bright idea - this was a few decades before radio
- he took several Morse streams & put each dit-dah-dah-dit-dah sequence onto
separate notes (like do-re-mi), and then sent the lot down a single wire. As
he played with his system he had the deep insight that if the Morse code was
replaced by sheet notation, then he could transmit music down a wire - and
that became the telephone!

------
malanj
Winamp's audio visualiser really helped me to understand the idea of time and
frequency domains.

It was pretty crude, but it was a nice way to see how a piece of music looked
in the frequency domain. I kept that mental model in my head all the way
through engineering.

------
dnautics
You can take a general function and break it into its 'component' polynomial
terms; this is called the Taylor series.

You can also take a periodic function and break it into its 'component' sines
and cosines. This is called the Fourier series.

The Fourier transform is _essentially similar_ to the Fourier series, except
you're allowed to have non-integer frequency multiples.

You can see how the FT and FS are useful for analyzing all sorts of things
that are periodic. MP3s, for example, use a similar scheme to encode music
(it's called the discrete cosine transform), and compression is enabled by
deleting information that your ear will fill in - but this information is
most easily selected and deleted by looking at the MP3 in terms of frequency
and amplitude over time.

~~~
jjoonathan
> You can take a general function and break it into its 'component'
> polynomial terms; this is called the Taylor series.

> The Fourier transform is essentially similar to the Fourier series, except
> you're allowed to have non-integer frequency multiples.

Not really. Taylor series are "optimal" in a completely different way from
Fourier series. Since the "decomposition strategy" used by Fourier series is
completely general (there are polynomial equivalents of the Fourier Series)
it's worth drawing the distinction.

The Taylor Series gets all of its information from the neighborhood of a
single point. Thought experiment: consider the curve e^x. Now set it to 0
everywhere outside of the interval (-e,e) where e is arbitrarily small. Now
take the taylor series of the curve at the point x=0. You get back the
original curve e^x -- the Taylor Series completely ignores the fact that your
curve no longer looks anything like e^x! You can do even worse than that:
paste a tiny, almost-linear segment of 1+sin(x) into e^x around the origin.
You couldn't tell the difference looking at the curve (they both look like
e^x) but the Taylor series would converge to the sin, not to the e^x curve you
"transplanted" the patch into. Taylor series are good for answering the
question "How does f(x) change if I nudge x a bit?" but they are very bad at
answering the question "what polynomial best approximates f(x) over its
domain?".

If you want to answer the latter question, you should use linear algebra and
change of basis / orthogonal filter banks / whatever you want to call it. This
is the trick used by the Fourier Series and the Discrete/Fast Fourier
Transform. It works for polynomials too: google "Legendre polynomials" or
"Hermite polynomials" for more information on polynomial analogs of the
Fourier Series (if you haven't had linear algebra, I highly recommend Axler's
book, which will teach you how to use generalized inner products to perform
this trick). There's a second type of polynomial approximation called
"Lagrange interpolation" which approximates a function using polynomials by
evaluating it at certain points rather than by integrating it. Lagrange
interpolations are only distantly related to Legendre/Hermite/etc polynomials
but both strategies answer the question of "what polynomial best approximates
f(x) over its domain?" much better than Taylor series.
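The contrast between the two questions can be checked numerically. A pure-Python sketch (the degree, interval, and simple trapezoid quadrature are all arbitrary choices): a degree-3 Legendre projection of e^x over [-2, 2], a "whole domain" fit, versus the degree-3 Taylor polynomial at 0, which only sees the neighborhood of one point.

```python
import math

def legendre(k, t):
    """P_k(t) on [-1, 1] via the three-term recurrence."""
    p0, p1 = 1.0, t
    if k == 0:
        return p0
    for j in range(1, k):
        p0, p1 = p1, ((2 * j + 1) * t * p1 - j * p0) / (j + 1)
    return p1

def f(x):
    return math.exp(x)

# Legendre coefficients by projection: c_k = (2k+1)/2 * integral of
# f(2t) P_k(t) dt on [-1, 1], using trapezoid quadrature (x = 2t).
M = 4000
def coef(k):
    s = 0.0
    for i in range(M + 1):
        t = -1 + 2 * i / M
        w = 0.5 if i in (0, M) else 1.0
        s += w * f(2 * t) * legendre(k, t)
    return (2 * k + 1) / 2 * s * (2 / M)

cs = [coef(k) for k in range(4)]

def leg_approx(x):
    return sum(cs[k] * legendre(k, x / 2) for k in range(4))

def taylor3(x):  # degree-3 Taylor of e^x at 0
    return 1 + x + x * x / 2 + x ** 3 / 6

xs = [-2 + 4 * i / 200 for i in range(201)]
err_leg = max(abs(f(x) - leg_approx(x)) for x in xs)
err_tay = max(abs(f(x) - taylor3(x)) for x in xs)
print(err_leg, err_tay)  # the global (Legendre) fit wins over the interval
```

Both are degree-3 polynomials; only the notion of "best" differs, which is exactly the distinction drawn above.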

Also, if you ever want to learn how MP3/JPG compression actually works, you'll
want the DCT, which is another relative of the FFT. If you know the relevant
linear algebra, the distinction is trivial (DCT has implied "mirror" symmetry
at the endpoints rather than "wrap" symmetry so you don't get ringing if the
color changes from one edge of a block to the other). DCT, FFT, Legendre
polynomial approximation, Hermite polynomial approximation, etc all use the
same simple trick (cue "one weird trick" ad). If you don't know the relevant
linear algebra, it's a bunch of black magic. But the relevant math is
accessible enough that I highly recommend looking into it. I've already
mentioned Axler's book (Linear Algebra Done Right) -- it's accessible to
anyone with a semester or two of calculus under their belt and it'll get you
to the point of understanding how all of these transforms are basically just
rotation matrices for high-dimensional spaces (one dimension for every point
you evaluate the function at) within a few chapters.

The key is to understand the distinction between vectors/operators as
understood by mathematicians and the blocks of numbers called vectors/matrices
by engineers. The latter are like a shadow cast by the former. Learn about the
former and you'll see that the DCT, FFT, etc are shadows of the exact same
concept as the familiar triplets of real numbers, just cast at a different
angle, if that makes sense.

~~~
ArbitraryLimits
> Lagrange interpolations are only distantly related to Legendre/Hermite/etc
> polynomials

Actually ... take an nth degree polynomial from your favorite orthogonal set.
For each of its n+1 zeros, construct by Lagrange interpolation the polynomial
which is 1 at that zero and 0 at the others. Then that set of n+1
interpolating polynomials has the same relation to the orthogonal polynomials
of degree 0 through n that (periodized) sincs have to sinusoids, i.e. you can
decompose an n degree or lower polynomial in either basis, and the two
representations are related by something like a Fourier transform.
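This correspondence can be illustrated with the 3-point Gauss-Legendre rule, whose nodes are the zeros of P_3 (nodes 0, ±sqrt(3/5) and weights 8/9, 5/9 are the standard textbook values). The quadrature weights make the low-degree Legendre polynomials exactly orthogonal as vectors of samples at those nodes, the analog of sinusoids being orthogonal under the DFT.

```python
import math

# 3-point Gauss-Legendre rule: zeros of P_3 with the standard weights
nodes = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]
weights = [5 / 9, 8 / 9, 5 / 9]

def P(k, t):
    """Legendre polynomials P_0, P_1, P_2."""
    return [1.0, t, (3 * t * t - 1) / 2][k]

# Discrete orthogonality at the nodes: the rule is exact for degree <= 5,
# so sum_i w_i P_j(x_i) P_k(x_i) equals the continuous inner product:
# 0 for j != k, and 2/(2j+1) for j == k.
for j in range(3):
    for k in range(3):
        s = sum(w * P(j, xi) * P(k, xi) for xi, w in zip(nodes, weights))
        expected = 2 / (2 * j + 1) if j == k else 0.0
        assert abs(s - expected) < 1e-12
print("discrete orthogonality holds at the Gauss nodes")
```

So a quadratic can be stored either as its three values at the nodes or as its three Legendre coefficients, and the weighted sums above are the change of basis between the two, just as the parent comment describes.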

~~~
jjoonathan
Huh, cool. I was familiar with using the division algorithm to prove that
Gauss quadrature worked, but I never chased the notion any further. I suppose
you're right, the DCT does the exact same thing when it samples a function at
a finite number of points.

------
revelation
The first occurrence of "frequency domain" is in the bottom quarter of the
page.

~~~
j2kun
I always hated that term. And signal. Took me way too long to figure out that
signal just means function.

~~~
JTon
Does the use of the word "signal" stem from its application in communications
theory? I've never found it strange using the word signal, but it might be
because of my EE background. Frequency domain and time domain never irked me
either :/

~~~
ArbitraryLimits
"Signal" has always bugged me because it has the connotation of a function of
_time_, whereas historically Fourier series were first used to represent
functions of _space_.

~~~
wglb
But the point is that the transform shows you two representations, one time-
based, the other frequency-based, of the same signal.

And notice that time is a component in each of these.

~~~
zodiac
It doesn't _need_ to be time- and frequency-based. For example, in quantum
mechanics you transform from position space to momentum space. I think the
complaint was that the word "signal" excludes this latter case, while
"function" is more general.

------
wglb
How about, instead of the discredited history of epicycles: "Fourier
transforms are a way to translate from the frequency domain to the time
domain and back"?

~~~
Florin_Andrei
TBH, what made me understand the whole "frequency domain to time domain, and
back again" thing was actually the F transform. When I hear that expression, I
go "oh yeah, Fourier transform".

Honestly, for me, the sine curve + sine curve + sine curve + ... explanation
was perfectly good even on an intuitive level. Perhaps that's because I've
spent some of my adolescence in front of a computer doing tons of
visualizations and graph plots, and because at the same time I was doing quite
a bit of electronics (radio frequency and so on) and audio synthesis. At some
point, you just go "yeah, I see it".

Fun anecdote: A square wave is made entirely of odd harmonics (1, 3, 5, 7,
...). There are no even harmonics in it. When you can see why intuitively,
you've understood Fourier.
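The anecdote is easy to verify with a naive DFT in plain Python. The intuition: a square wave has half-wave antisymmetry (shift by half a period and it flips sign), while any even harmonic is unchanged by that shift, so the even harmonics must cancel exactly.

```python
import cmath

N = 64
# One period of a square wave with exact half-wave antisymmetry:
# x[n + N/2] == -x[n]
x = [1.0] * (N // 2) + [-1.0] * (N // 2)

# Naive DFT
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

# Even harmonics vanish; odd harmonic k has magnitude falling off like 1/k
print([round(abs(X[k]), 3) for k in range(8)])
```

The printed magnitudes show zeros at k = 0, 2, 4, 6 and a roughly 1 : 1/3 : 1/5 pattern at k = 1, 3, 5.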

