Then you just need to understand that the sines and cosines are a complete basis. So think about the sines and cosines for a while until you can say "yeah, they're orthogonal, and I can believe they're a complete basis for the kind of functions under consideration." Then, to promote this from "I can believe" to "obviously": for the discrete FT, orthogonality plus a counting/dimensionality argument suffices; for the continuous FT, you can look at Gaussians, say "obviously Gaussians are a complete basis for the kinds of functions under consideration," and then do the easy integrals to show that any Gaussian can be expressed as a linear combination of sines and cosines.
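The "easy integrals" here are just completing the square; for a width-one Gaussian, say:

```latex
\int_{-\infty}^{\infty} e^{-t^2}\, e^{-i\omega t}\,dt
  = e^{-\omega^2/4}\int_{-\infty}^{\infty} e^{-(t + i\omega/2)^2}\,dt
  = \sqrt{\pi}\; e^{-\omega^2/4}
```

i.e. the Gaussian's weight on the frequency-ω sinusoid is itself a Gaussian in ω, so it really is a (continuous) linear combination of sines and cosines.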
(This assumes you're interested in transforming reasonably smooth things like wavefunctions in chemistry, as opposed to trying to see how far you can push Fourier analysis into the netherworld of bizarre jagged twisted functions shown to exist by invoking the Axiom of Choice. If you want to do that, feel free to take a course from Terence Tao studying theorems whose prerequisites involve concepts like "countable.")
To add to this: if you want an intermediate path that'll give a bit more insight than just basic linear algebra, reading a bit about Hilbert spaces might be worthwhile too. Depending on which space(s) you choose to work in, it doesn't necessarily have to bog you down in coping with pathological cases. You probably will need to grok things like countability, limits, and convergence, though.
The benefit: you get to pin down exactly what it means to talk about things like 'basis' and in particular 'orthogonal basis' in a meaningful way when it comes to infinite-dimensional spaces of functions like the ones the Fourier transform works on. Turns out you need some extra tools to do this; intuitions based only on experience with finite-dimensional linear algebra will likely have some handwavy gaps. (Not that that necessarily matters :)
About the article though: I really liked it. It wouldn't teach me from scratch how the transform works, but every extra bit of intuition one can get from looking at a concept from a different point of view helps, and this adds some extra physical intuition in a really nice way.
And if you aren't working with wavefunctions regularly, as physicists and engineers might be, that framing gets abstruse fast (I'm referring to the use of the FT in econometrics and image processing). Then it's not at all clear that you should even want a sinusoidal basis with integer frequencies.
It was just that frequency = 1 / time. In this (barbarically reductive) conception, taking the FT is just a change of variable.
This relationship is one way to "derive" many of the standard Fourier facts.
For example, the scaling property: if x(t) has transform X(f), then x(at) has transform (1/|a|) * X(f/a). It also "explains" why time signals concentrated around t=0 tend to have lots of high-frequency content (f = 1/t = 1/0 = infinity), and vice versa.
It also "explains" why the inverse FT formula looks just like the forward FT formula (since if f = 1/t, then t = 1/f). And, for the same reason, most of the duality relationships between the two domains.
All with just arithmetic! You can dispense with linear algebra, not to mention complex arithmetic, groups, or measure theory.
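The scaling property can even be spot-checked numerically. A minimal sketch (Python with numpy/scipy; the function names are mine, and a Gaussian is used because it decays fast enough for naive numerical integration):

```python
import numpy as np
from scipy.integrate import quad

def ft(x, f):
    """Naive numerical Fourier transform of x at frequency f.
    The integrands below are negligible outside [-8, 8]."""
    re, _ = quad(lambda t: (x(t) * np.exp(-2j * np.pi * f * t)).real, -8, 8, limit=200)
    im, _ = quad(lambda t: (x(t) * np.exp(-2j * np.pi * f * t)).imag, -8, 8, limit=200)
    return re + 1j * im

x = lambda t: np.exp(-np.pi * t**2)  # a Gaussian
a, f = 2.0, 1.5
lhs = ft(lambda t: x(a * t), f)      # transform of the time-scaled signal
rhs = (1 / a) * ft(x, f / a)         # (1/a) * X(f/a)
print(abs(lhs - rhs))                # ~0: the scaling property holds
```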
That is beautiful.
And that's the hallmark of good teaching.
fourier_transform(signal sig(t), frequency freq):
  let sinu(t) = sinusoid with frequency freq
  let mult(t) = sig(t) * sinu(t)
  value = integral of mult(t) from -infinity to infinity
  return value
If the input signal sig(t) has no component at the frequency of the sinusoid, then integrating mult(t) from -infinity to infinity will give zero.
If the signal does have a component at that frequency, it will kind of resonate with the sinusoid and give a non-zero value. The value then depends on the magnitude of the signal and on how much it "resonates" with the sinusoid.
When you do this for a range of different frequencies freq, but using the same signal sig(t), you can plot how much the signal resonates with each frequency, and that plot is the Fourier transform.
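In runnable form, the pseudocode above might look like this (Python with numpy/scipy; I'm using the complex exponential as the "sinusoid", and a Gaussian-windowed cosine as a test signal so the integral converges -- both are my choices, not from the comment):

```python
import numpy as np
from scipy.integrate import quad

def fourier_transform(sig, freq):
    """Integrate sig(t) times a sinusoid of frequency freq over all time.
    (The Gaussian-windowed signal below is negligible outside [-6, 6].)"""
    mult = lambda t: sig(t) * np.exp(-2j * np.pi * freq * t)
    re, _ = quad(lambda t: mult(t).real, -6, 6, limit=200)
    im, _ = quad(lambda t: mult(t).imag, -6, 6, limit=200)
    return re + 1j * im

# A signal with a 3 Hz component, windowed by a Gaussian:
sig = lambda t: np.cos(2 * np.pi * 3 * t) * np.exp(-t**2)

print(abs(fourier_transform(sig, 3.0)))   # large: the signal "resonates"
print(abs(fourier_transform(sig, 10.0)))  # ~0: no component at 10 Hz
```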
Now, to find a life sciences journal to publish that in...
Therefore to multiply polynomials, one thinks of them as functions on G, uses the DFT to take you to functions on the dual of G, multiplies pointwise, then does an inverse transform to get you back to functions on G again. I'm skimming over lots of details and oversimplifying a bit, but what I just described is the process of using a convolution to multiply polynomials.
The really great thing is when n is a power of 2. Then you have this cool Cooley-Tukey algorithm called the Fast Fourier Transform to do the DFT (and inverse DFT) really fast (in time O(n log n) instead of O(n^2)). It works by recognising that computing a DFT is precisely the same thing as evaluating a polynomial at the n-th roots of unity. This can be done by repeatedly breaking the problem into halves and recognising that the same pattern of roots of unity occurs in the first half as in the second. By factoring that out, you can (recursively) save yourself half the work.
Again, oversimplified, but that's the nub of it.
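A hedged sketch of that pipeline in Python (using numpy's FFT; `poly_mul` and the padding scheme are my own illustration, not from the comment):

```python
import numpy as np

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first)
    by transforming, multiplying pointwise, and transforming back."""
    out_len = len(a) + len(b) - 1
    n = 1
    while n < out_len:
        n *= 2  # pad to a power of 2 so Cooley-Tukey applies cleanly
    fa = np.fft.fft(a, n)          # "functions on the dual of G"
    fb = np.fft.fft(b, n)
    c = np.fft.ifft(fa * fb).real  # pointwise product, then back
    return np.rint(c[:out_len]).astype(int)

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(poly_mul([1, 2], [3, 4]))  # [ 3 10  8]
```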
Particularly enlightening was the demodulation of a frequency modulated sine wave when the tuner was imperfectly matched to the carrier frequency. Looking at it on an oscilloscope was similar to watching an old TV with the Vertical Hold improperly set.
That made me start thinking in terms of a signal (sine wave) that was a cycle rather than a sinusoidal shape. Seeing that you can graph the amplitude and phase of any signal on the complex plane and that the frequency was the change in phase from one moment to the next was the aha! moment.
Then if you think about sampling: if you sample a sinusoid exactly at its peaks, that graphs as a constant point on the complex plane, but if you sample at any other, mismatched frequency, the point rotates and changes amplitude with respect to either axis. The further from the actual frequency you go, the more random the points look, and they average out to zero with fewer samples.
This would be a fun animation or java applet to make; I'm sure someone has done it.
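A tiny numerical version of that picture (Python/numpy; the names and parameters are just illustrative):

```python
import numpy as np

def sampled_phasor(signal_freq, sample_freq, n=32):
    """Sample a unit phasor e^{2*pi*i*f*t} at times k / sample_freq,
    giving n points on the complex plane."""
    t = np.arange(n) / sample_freq
    return np.exp(2j * np.pi * signal_freq * t)

# Sampling at exactly the signal frequency: every sample lands on the same point.
matched = sampled_phasor(5.0, 5.0)
print(np.allclose(matched, matched[0]))  # True

# Mismatched sampling: the points walk around the circle and average out near zero.
mismatched = sampled_phasor(5.0, 7.0)
print(abs(mismatched.mean()))            # small
```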
I came to the comments expecting to see nods of approbation at how cool this explanation was (I stopped taking math at about Calc 3, so no linear algebra for me) but instead I see people geeking out saying things like "to really understand you need to grasp the complex plane, and groups and DFTs and so forth."
Well, just so you know, for me the OP's intuitive explanation was enough.
Only 3 dimensions required, which is nice.
Substitute "-1" into the left side of that equation, and see that no real value of x will suffice. This is related to the fact that the imaginary unit i wasn't discovered; it was simply defined as an unknown quantity that squares to -1.
The real magical part is that i still works in more complicated situations: multiplying any real number by e^(ix) as x increases gradually transforms it into an imaginary number, and then into its own negative, behaving like a counter-clockwise rotation when visualized in the complex plane.
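That rotation is easy to check directly (Python's cmath; the variable names are mine):

```python
import cmath

z = 1.0                                     # a real number
quarter = z * cmath.exp(1j * cmath.pi / 2)  # x = pi/2: purely imaginary (~i)
half = z * cmath.exp(1j * cmath.pi)         # x = pi: its own negative (~-1)
print(quarter, half)
```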
It takes some serious entrepreneurial skills and mindset to embark on a problem which is seemingly impossible, and never giving up until the solution has been derived.
Inspirational, to say the least!
> It takes some serious entrepreneurial skills and mindset
> to embark on a problem which is seemingly impossible, and
> never giving up until the solution has been derived.
Let's utilize our synergy to create some entrepreneurial verticals!