
The Fourier Transform and its Applications - kercker
https://see.stanford.edu/Course/EE261
======
yagyu
There is this hard question that is often glossed over in a first course on
Fourier analysis: We learn that the Fourier transform of f(t) is an integral
of exp(iwt)f(t) over all t. But when we look at a table of Fourier transform
pairs, we find things like the Fourier transform of a constant function (it's
the Dirac delta). The integral does not converge? WTF?

I really think Prof. Osgood hits a sweet spot of not sweeping this under the
carpet, and not becoming a course on functional/integration theory. I'm a
postdoc in theoretical physics, and use these transforms regularly, and I have
yet to find a more honest but still useful introduction (although I understand
that real mathematicians cringe at my abuse of mathematics... :) )

edit: typo

~~~
dnautics
The other somewhat confusing thing is that algorithms like the discrete Fourier
transform/FFT are actually Fourier series.

~~~
leni536
There are three of them.

1. Fourier transform: unbounded real domain <-> unbounded real domain

2. Fourier series: bounded real domain <-> unbounded integer domain

3. Discrete Fourier transform (DFT): bounded integer domain <-> bounded
integer domain

FFT is an algorithm for quick calculation of the DFT, which is _not_ the
Fourier series.
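The distinction is easy to see in code. Here is a minimal sketch in Python of
point (3), the naive O(N^2) DFT (the FFT computes exactly the same values, just
faster; the pure-tone test signal is only illustrative):

```python
import cmath

def dft(x):
    # Naive O(N^2) DFT: maps N samples (bounded integer domain)
    # to N frequency bins (bounded integer domain).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure tone with 3 cycles over N = 8 samples lands entirely in bin k = 3.
N = 8
tone = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
X = dft(tone)
peaks = [k for k, c in enumerate(X) if abs(c) > 1e-9]
print(peaks)  # [3]
```

Both the input and the output are finite lists: nothing here lives on an
unbounded real domain, which is why the DFT is its own transform and not the
Fourier series.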

~~~
enriquto
You are missing the discrete-time Fourier transform (DTFT), which is the dual
of your point (2): unbounded integer domain <-> bounded real domain.

Together, these eight domains can be arranged on a commutative cube, all of
whose edges have beautiful meanings: sampling, interpolation, etc. The fact
that the cube is commutative gives a lot of theorems relating each transform
and these operations.

------
tptacek
In case this is useful to anyone else:

There's probably an easier way to do it, but if you want to get faster
playback (2x is about as fast as I can comprehend this lecturer at), this line
seems to do the trick:

        document.getElementById("myElement").querySelector("video").playbackRate = 2

~~~
jsprogrammer

        document.getElementsByTagName('video')[0].playbackRate = 2;
    

Not tested.

~~~
jholman
Here's my general-purpose speed-adjustment bookmarklet:

        javascript:void%20function(){document.querySelector(%22video%22).playbackRate=parseFloat(prompt(%22Set%20the%20playback%20rate%22))}();

And I find that about +70% is the speed I most-frequently want, so here's my
one-click "x1.7":

        javascript:void%20function(){document.querySelector(%22video%22).playbackRate=1.7}();

Both of these adjust the speed of all _non-iframe-embedded_ HTML5 video
elements on the current page.

Of course, because of XSS-prevention rules, they won't help with iframe-
embedded players. For that I guess you're gonna need a plugin.

~~~
jsprogrammer
`querySelector` only returns the first matching element (according to the
Mozilla docs).

------
farhanhubble
This is an exceptional course. In my CS undergrad days, some 7 years ago, when
I wanted to make sense of Fourier Series and Transform, this course was my
savior. The mathematics textbook had a dry and unintuitive treatment of the
topics and electrical engineering, physics and computer science books mostly
relegated the topics to a few pages in an appendix.

I found this course and printed out the PDF of the lecture notes as I didn't
have enough bandwidth for watching the videos. The lecture notes had a wealth
of information about the idea of a Fourier Series expansion, its connection
to the Transform and the applications in diverse fields.

Prof. Osgood had great examples. In one such example he showed that the
temperature of a ring must be a periodic function of position, as the structure
of the ring is periodic in space, and then went on to derive an equation for
the temperature of the ring that was a Fourier Series.

It was from these notes that I learned a great deal about convolution too. The
new website looks much better organized. I'd recommend that every CS/EE student
at least watch the first few lectures.

~~~
Zombieball
> electrical engineering, physics and computer science books mostly relegated
> the topics to a few pages in an appendix.

Out of curiosity, is this really true? Perhaps I misunderstand you, but I have
always been under the impression that Fourier Transform was a fundamental
building block covered extensively in all EE curriculum?

I took Computer Engineering with some classes shared between Comp. Eng. and EE
students. I probably had 2 full-semester classes dedicated to signal
processing and fourier transforms. Additionally I took a math course that
covered Fourier and Laplace transforms.

~~~
notalaser
I don't know about physics, but it's definitely not true about electrical
engineering. The Fourier transform is one of those things that is absolutely
fundamental to any branch of electrical engineering, you run into it when
doing anything, from microelectronics to power distribution. And there's
typically no way you get an EE diploma without a soul-wrenching course on
signals & systems.

As a personal anecdote: I ended up taking _three_ courses: one on signals, one
on systems, one on circuits (...and systems!). During those troubled uni
years, trying to see if I can still remember the Fourier transform was my "am
I too drunk?" benchmark. When I could no longer do it, I went home.

~~~
uiri
_And there's typically no way you get an EE diploma without a soul-wrenching
course on signals & systems._

Similar to the person you're replying to, my education is in computer
engineering rather than electrical engineering. I took the "signals and
systems" course which was a treatment of Fourier series, Fourier transforms
and convolutions. Since the final exam was open book, I did well by primarily
being able to quickly look up the information relevant to each question on the
exam. Convolutions did seem a little bit convoluted but the course didn't seem
particularly soul-wrenching (compared to, say, the intro circuits course or
the intro electronics course). Perhaps I was spared the soul-wrenching bits
(e.g. I took a stats course instead of the course on random signals) or
perhaps my university's electrical engineering department is subpar.

~~~
notalaser
The soul-wrenching part depends a lot on the professor, I suppose. My Signals
teacher was absolutely stellar and put a lot of effort into giving us a solid
understanding of the theoretical underpinnings of her course, so it was pretty
math-intensive. The exam, as I remember it, was pretty modest in terms of
difficulty, but the course was pretty hard. I aced every homework, lab and
exam, but that's largely because I loved that course. Everyone I knew
struggled with it. It ate a good 4-6 hours off every week, in addition to the
time spent on assignments & co. (sometimes more, but by my own making, i.e.
because I'd hack on something alluded to, but not taught in that course. Like
I said, I really liked that class).

My Circuits and Systems professor, on the other hand, wasn't too interested in
this. He obviously cared about us knowing the math, but insisted that he was
teaching a Circuits class, not a Mathematics course. His course seemed a lot
easier as a consequence. I was never that good at math. I learned it because
you can't do engineering without it, but it has always been my weak spot.

(Edit: I don't mean it in the "how much is 12% out of 45 USD? zomg I suck at
math" way, that's 5.4 USD, I'm not an idiot. I mean it in the "vector calculus
made sense, I think, but I could never have made a career out of it as I did
with writing software and designing electronic devices" kind of way.)

My Systems professor was smart but had long stopped caring about that course
and his students, so I learned most of the material by myself. At the time, I
thought he was an idiot, but I've come to understand him in time. His course
wasn't too torturous, largely because I skipped most of the classes.

Most of my colleagues (particularly those in CompEng) didn't take separate
courses on Signals and Systems though. They had a single, bigger Signals &
Systems course, that was taught by a couple of hardcore professors, both well
into their sixties and both of them very draconian. One of them also taught an
introductory Electronic Devices and Circuits course that I took, so I knew him
firsthand: he was a very good professor, but he was about as flexible as a
chopstick. A year when less than a third of his students failed the final exam
was a really good year.

------
jomamaxx
I 'took' the Stanford NLP course online; it was great.

Doing this one now.

Funny - I did Fourier in Comp. Eng., but it was jammed together with so many
other things that I felt overwhelmed by all of it. I did enough to do decently
on the exam, but I feel a lot of it was lost on me, and I forgot much of it.

Using Fourier in most computer applications is very rare!

I'm excited to have a project I need to use it for - and weirdly stoked to be
watching this video.

I love it when I have a reason to learn something awesome ...

~~~
jomamaxx
The prof noted that it's almost all EE folks, with a few others, but didn't
mention Comp Sci. Can anyone comment on why there were seemingly no Comp Sci
people in this class? Wouldn't this be 'core' to Comp Sci?

~~~
GrumpyYoungMan
>The prof noted that it's almost all EE folks, with a few others, but didn't
mention Comp Sci. Can anyone comment on why there were seemingly no Comp Sci
people in this class? Wouldn't this be 'core' to Comp Sci?

Why would it? Core CS deals primarily with discrete mathematics. Outside of a
DSP or image processing course, there's little reason for a CS student to
encounter the Fourier or other transforms.

~~~
hood_syntax
I took a class on Scientific Computing (required) which went over a lot of
core algorithms in various fields, including FFT. I wouldn't be surprised if a
good portion of programs had something similar.

------
hackinthebochs
I went through this entire lecture series and I can't recommend it enough.
Osgood is one of the most engaging professors I've ever listened to. I would
happily learn anything from this guy. My only disappointment was that he
doesn't teach anything else!

------
zaf
The web page didn't work for me in Safari. YouTube works fine:
[https://www.youtube.com/watch?v=gZNm7L96pfY](https://www.youtube.com/watch?v=gZNm7L96pfY)

------
jokoon
I think I remember that the Fourier transform is used to send a signal across
an analog line, for DSL modems, after it's converted from binary.

Since analog lines may not have the same frequency response when it comes to
interference, the modem usually tests the analog line at various frequencies,
and only keeps the frequency intervals that are good enough to transmit on.

I wonder if the FFT is used for Wi-Fi, since Wi-Fi passes through an analog
signal.

Although I'm not even sure whether what I'm talking about is true, or whether
it relates to Fourier.

~~~
freyr
Yes, Wi-Fi uses the FFT. In wireless systems, the method is known as OFDM
(orthogonal frequency division multiplexing).
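For the curious, the core of OFDM is just a DFT pair: the transmitter places
data symbols on orthogonal subcarriers with an inverse DFT, and the receiver
separates them again with a forward DFT. A toy Python sketch (no channel,
cyclic prefix, or noise; `symbols` is an illustrative stand-in for real
modulated data):

```python
import cmath

def dft(x, sign=-1):
    # Forward DFT for sign=-1, un-normalized inverse for sign=+1.
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [v / N for v in dft(X, sign=+1)]

# Transmitter: one complex data symbol per subcarrier; the inverse DFT
# turns the whole set into a single time-domain OFDM symbol.
symbols = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j]
time_signal = idft(symbols)

# Receiver: a forward DFT separates the orthogonal subcarriers again.
recovered = dft(time_signal)
print(max(abs(r - s) for r, s in zip(recovered, symbols)) < 1e-9)  # True
```

A real modem also prepends a cyclic prefix so that the channel's convolution
becomes circular, hence a simple per-subcarrier multiplication: the same
convolution theorem at work.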

------
Xcelerate
30 lectures on Fourier transforms and not one mention of the Fourier transform
over compact groups? (Which is arguably the most generalized/abstract
formulation of the concept of a Fourier transform.) A lot of recent research
has been done on FFTs over groups like the rotation group and the symmetric
group, and this work has led to significant progress on problems that were
previously considered intractable.

~~~
JadeNB
> the Fourier transform over compact groups … is arguably the most
> generalized/abstract formulation of the concept of a Fourier transform.

A good-spirited correction: anyone who knows enough to know this much
generality knows that calling anything mathematical the "most generalised"
version of a concept is begging to be corrected. Source: I do harmonic
analysis on non-compact groups, and suspect that the folks over at the
n-category café probably regard my kind of work as dangerously pedestrian
(see, for example,
[https://golem.ph.utexas.edu/category/2010/11/integral_transf...](https://golem.ph.utexas.edu/category/2010/11/integral_transforms_and_pullpu.html)).

~~~
Xcelerate
Haha, fair point (although in my defense, I did qualify my statement with
"arguably").

That's interesting that you do harmonic analysis on non-compact groups; I've
been meaning to learn more about that actually. I've been working on group
synchronization problems related to the ideas in this paper
([http://arxiv.org/pdf/1505.03840.pdf](http://arxiv.org/pdf/1505.03840.pdf)),
which briefly mentions how one might extend the concept to non-compact groups,
but doesn't elaborate further.

~~~
JadeNB
Sorry for the delayed response; I'm "submitting too fast".

> I did qualify my statement with "arguably"

I always tell my students—and, though it's a joke, I think that it's more true
than casual consideration might suggest—that the correct answer (though not
one that they're allowed to use on tests!) to any question in mathematics is
"it depends." The more elementary the question, the more profound it sounds
that you can see angles from which different answers can be correct. (Of
course, backing this up with actual profound re-interpretations is the hard
part. :-) )

> That's interesting that you do harmonic analysis on non-compact groups; I've
> been meaning to learn more about that actually. I've been working on group
> synchronization problems related to the ideas in this paper
> ([http://arxiv.org/pdf/1505.03840.pdf](http://arxiv.org/pdf/1505.03840.pdf)),
> which briefly mentions how one might extend the concept to non-compact
> groups, but doesn't elaborate further.

As a decidedly pure mathematician, that's outside my wheelhouse, although I
did have the pleasure of having some of the applications explained to me by
one of the authors of one of the papers on cryo-electron microscopy cited in
the linked article. I'm afraid that the idea of embedding a non-compact group
in a larger, compact group is foreign to my way of thinking about things, so
(in case you were implicitly asking) unfortunately I can't shed any light
there.

------
jdp123
Very, very highly recommended. Once you understand the basis of this concept
you open your mind to a whole new area of math, and things like the finite
element method for PDEs and wavelet transforms follow much more naturally.

------
drostie
One of the fun ones, and this may or may not be covered by the link, is
generating the "cumulants" for probability density functions. The basic idea
in (basic, continuous) probability theory is that you have these things called
"random variables" X,Y,Z which now need to be thought of taking on values with
various probabilities; and the way we do that is to use calculus (where the d-
in "dx" is a special notation coming from the word "difference"; "dx" means "a
tiny little change in x"). The basic idea is that there is some _joint
density_ j(x, y, z) such that the probability that simultaneously x < X < x +
dx, AND y < Y < y + dy, AND z < Z < z + dz, is simply given by j(x, y, z) dx
dy dz.

With a bit of calculation you conclude that if we want to analyze Z = X + Y,
then the probability-density for Z alone must be h(z) = ∫ dx j(x, z - x);
there are a couple ways to do this but my favorite is to use Z = X + Y as a
schematic to write a Dirac δ-function j(x, y, z) = j(x, y) δ(z - x - y), then
the standard "what is the probability-density for Z alone? integrate over both
x and y!" rule comes into play. But you can also reason it out by hand, if you
want.

Anyway, then you come up with this cool definition of independence; X and Y
are independent if j(x, y) factors nicely into f(x) g(y); independent
probabilities tend to multiply. So that's cool.

Now here's where the Fourier transform comes in; consider using a convention
where we denote the Fourier transform and with the same function-name but
using square brackets rather than parentheses for its application,

f[q] = ℱ{p → q} f(p) = ∫ dp exp(-i p q) f(p).

When we Fourier-transform h(z) for Z = X + Y we get:

h[a] = ∫ dz exp(-i a z) h(z) = ∫ dx dy exp(-i a (x + y)) j(x, y)

and if the two are independent random variables we find that h[a] = f[a] g[a];
the sum of independent random variables has a product of Fourier transforms
precisely because the above sum takes the form of a convolution.
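The convolution-becomes-multiplication step is easy to check numerically with a
discrete example. A small Python sketch, using two fair dice as the independent
variables and the same exp(-i a z) sign convention as above:

```python
import cmath
from itertools import product

pmf = {v: 1 / 6 for v in range(1, 7)}  # fair six-sided die

# Density of Z = X + Y by direct convolution of the two PMFs.
conv = {}
for x, y in product(pmf, pmf):
    conv[x + y] = conv.get(x + y, 0.0) + pmf[x] * pmf[y]

def transform(p, a):
    # Discrete analogue of h[a] = integral of exp(-i a z) h(z) dz.
    return sum(prob * cmath.exp(-1j * a * v) for v, prob in p.items())

# The transform of the convolution equals the product of the transforms.
a = 0.7
lhs = transform(pmf, a) * transform(pmf, a)
rhs = transform(conv, a)
print(abs(lhs - rhs) < 1e-12)  # True
```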

Taking a logarithm, we have that χ(a) = log h[a] = log f[a] + log g[a], and
any properties of the logarithm of the Fourier transform of the density
function must be additive amongst independent random variables. And this gives
the _cumulants_ of the random variables: pseudo-linear expressions (linear in
independent random variables f(X + Y) = f(X) + f(Y), with f(k X) = q(k) * f(X)
for some fixed q) which characterize the different "moments" of the random
variables in question. In fact if we look carefully at the definition h[a] = ∫
dz exp(-i a z) h(z) we see that the variable P = k Z, having distribution b(p)
= h(p/k) / k, generates b[a] = h[k a].

Calculating the zeroth, χ(0) = log h[0] = log 1 = 0. No biggie. Continuing the
Maclaurin expansion gives χ'(a) = h'[a]/h[a], and at a=0 that's -i E(Z). We
knew that the expectation value was linear, so no surprise there. The next
term is (-i)^2 (h''[a] h[a] - h'[a] h'[a])/(h[a] h[a]), which at a=0 works out
to just E(Z^2) - E(Z)^2, the familiar expression for the variance; so variance is
pseudolinear (in this case q(k) = k^2). The point is, then you can just go on.
There are in fact infinitely many pseudolinear cumulants which scale like q(k)
= k^n, coming from this Maclaurin series, with a pattern based in fact
entirely on the pattern of derivatives that comes out of log(f(x)).

As a freebie you have a proof of the central limit theorem. Take N independent
identically-distributed random variables with log-characteristic functions χ(a),
call them each X_i, form their mean S = sum_i X_i / N, and from the properties
we've already seen in this comment its log-characteristic function must be
ψ(a) = N χ(a / N). Do the Maclaurin series to 2nd order and ignore the latter terms
for large N; you get convergence to a parabola, which means that the Fourier
transform of the probability density is... a Gaussian! And the inverse Fourier
transform of that Gaussian is just another Gaussian, so for large N the result
must be a normal random variable with mean μ and variance σ²/N, which of
course is what they must be, because means are linear and variances are
pseudolinear.
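The cumulant bookkeeping above can be sanity-checked numerically. Since
cumulants of independent variables add, the standardized third cumulant
(skewness) of a sum of N iid copies must fall off like 1/sqrt(N) -- which is
exactly the sense in which the higher cumulants die and only the Gaussian's
mean and variance survive. A small Python check with an illustrative skewed
two-point distribution:

```python
def central_moments(p):
    # Mean plus 2nd and 3rd central moments of a discrete PMF; the 2nd
    # and 3rd central moments are also the 2nd and 3rd cumulants.
    mean = sum(v * q for v, q in p.items())
    m2 = sum((v - mean) ** 2 * q for v, q in p.items())
    m3 = sum((v - mean) ** 3 * q for v, q in p.items())
    return mean, m2, m3

def convolve(p, q):
    # PMF of the sum of two independent discrete variables.
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

base = {0: 0.9, 1: 0.1}  # deliberately skewed
_, m2, m3 = central_moments(base)
skew1 = m3 / m2 ** 1.5

# Sum of 4 iid copies: both cumulants scale by 4, so skewness halves.
p4 = base
for _ in range(3):
    p4 = convolve(p4, base)
_, m2, m3 = central_moments(p4)
skew4 = m3 / m2 ** 1.5
print(abs(skew4 - skew1 / 2) < 1e-9)  # True
```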

~~~
fgimenez
Thanks for this cool writeup. My undergrad probability professor had us prove
the CLT using cumulants, but I never kept the notes and had trouble finding
the same proof later in life. Do you know of any reference that goes in depth
about cumulants/FT/CLT?

~~~
drostie
Actually probably one of the best simple ones is Scholarpedia's article: in
particular it covers one of the things that are kind of at the periphery of a
non-applied statistician's mind when using cumulants: how the log-normal
distribution serves as a handy counterexample to just about everything you'd
like to prove about them (e.g. with some tweaking you can show that the log-
normal distribution shares its cumulant sequence with other distributions, so
the map from PDFs to cumulants is not even injective).

Something I've never seen rigorously explored: Cumulants appeared in my
Master's work in condensed matter, where they at times seemed to have a
mysterious tie to Feynman diagrams; that is, there seems to be some sort of
analogy between how the Nth cumulant "subtracts out" all of the components of
the Nth moment which are just products of the previous N-1 moments to obtain
something "additive", and a Feynman diagram containing a vertex joining N
particles. In particular Feynman diagrams for pairwise interactions seemed to
be eerily similar to cumulant expansions which did not have any 3rd or higher
moments. I confess I totally forgot this until I Googled today and saw a
question on Math Overflow about it, but it was too sketchy to show me a real
satisfying resolution of my mental curiosity. Similarly I remember seeing
Young tableaux in some lecture notes on cumulants and then the day after I was
working with Marcin Dukalski on some crazy approach he was taking in his
thesis, and he spontaneously started talking about "hm, what do the Young
tableaux look like for that?" and I was only able to help at all because of
some limited understanding from the previous day's reading! So... there's
definitely some mathematical resonances when you get to quantum theory.

