Demystifying Fourier analysis (dsego.github.io)
279 points by dsego 71 days ago | 60 comments



A funky thing is that you can replace the 'N' sin/cos functions used in Fourier analysis with any set of 'N' orthogonal functions (aka basis functions). In this context, a Fourier transform is just a bunch of correlations of the function to be analysed with each basis function. Two functions are orthogonal if their correlation is zero.

PCM is "Fourier analysis" with impulse functions in place of sin/cos. CDMA replaces sin/cos with a set of (almost) orthogonal functions with a wide bandwidth.

From an information perspective, orthogonality of the basis functions means each component/coefficient is giving you new information about the signal. In real life you can only compute a finite number of coefficients, meaning you must have a finite amount of information, so there is an inherent uncertainty in the relationship between the signal being analysed and the output of Fourier analysis. (Physical quantities that have an uncertainty principle, such as position and momentum, are related to each other by a Fourier transform.)
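A minimal numpy sketch of the "Fourier coefficient = correlation with a basis function" view (the test signal and the frequency k are arbitrary choices):

  import numpy as np

  N = 256
  n = np.arange(N)
  signal = np.sin(2 * np.pi * 5 * n / N) + 0.5 * np.cos(2 * np.pi * 12 * n / N)
  k = 5
  basis_k = np.exp(-2j * np.pi * k * n / N)          # the k-th Fourier basis function
  coeff = np.sum(signal * basis_k)                   # correlation of the signal with the basis
  print(np.isclose(coeff, np.fft.fft(signal)[k]))    # True: same as the DFT bin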


No! This is a huge misconception that misses the essence of quantum mechanics. The Uncertainty is not due to finite information. It is an inherent property!

Uncertainty is due to (or better, "modeled by", because FT is math, not physics, and you could use wavelets or something instead) the fact that the Fourier transform (and similar transforms) is a (bidirectional) map between (local) points and (dispersed) waves, and vice versa. This holds true even with infinite precision/information.

http://math.uchicago.edu/~may/REU2021/REUPapers/Dubey.pdf

"Qualitatively, this means a narrow function has a wide Fourier transform, and a wide function has a narrow Fourier transform. In either domain, a wider function means there is literally a wide distribution of data, so there always exists uncertainty in one domain."


Mathematically, the Fourier transform obeys an uncertainty principle. Here is a nice and simple writeup:

https://www.johndcook.com/blog/2021/03/17/fourier-uncertaint...

The discrete Fourier transform (which is what OP is talking about) doesn't satisfy an uncertainty principle as far as I know. The concept doesn't really make sense.

You can talk about these concepts without bringing information or precision into the picture.


The discrete FT is just an approximation of the continuous FT, so it follows the same uncertainty principle. It doesn't have an additional uncertainty feature; it just has the same approximation error that any finite model of an infinite phenomenon would have. That's not "uncertainty", because you can get arbitrary precision by adding more samples. Uncertainty means that it is impossible to get below the uncertainty threshold, just like how if you squeeze a (theoretical) balloon full of incompressible fluid, you can never reduce its global volume to zero, even though you can reduce any local part of it to zero volume.


I'm happy to be wrong and learn about an uncertainty principle for the DFT. I'm just not aware of one. Your comment isn't very helpful for illuminating that, absent any mathematical detail.


I've never heard of one either. The first time I was introduced to an uncertainty principle for the DFT was in time-frequency analysis, which exists because you can't know both time-domain and frequency-domain information simultaneously. There are some shared concepts with QM, but they are not the same thing.


That a non-periodic signal, e.g. something which is nonzero only in a short span, requires an infinite number of periodic functions (i.e. a Fourier series/transform) to represent is better considered a limitation of the basis: it simply cannot represent such a signal, only approximate it by a signal with an infinite period, which is not the same as non-periodic to mathematicians.

The definition of the signal implies its expression in both the Fourier and spatial bases exactly, with no uncertainty.

For this to have an interpretation as uncertainty requires that the functions themselves have that interpretation.


> The discrete Fourier transform doesn't satisfy an uncertainty principle as far as I know. The concept doesn't really make sense.

It certainly does. There are different formulations of it (by Donoho-Stark and by Tao, that I know of). They work when the domain is a finite abelian group.


I guess I'm not totally surprised. Could you give a reference? I'm not sure what this would mean exactly or what the utility would be.


I guess the original article [0] is a good starting reference. A post [1] in Tao's blog gives a nice overview.

If you just want a short statement of the discrete uncertainty principle: if you define the discrete Fourier transform F(u) for functions u defined on a finite abelian group G, and you denote by |S(u)| the cardinality of the support of u, then you have the inequalities:

  |S(u)| · |S(F(u))| >= |G|
and (as a consequence)

  |S(u)| + |S(F(u))| >= 2 sqrt(|G|)
Notice that this contains as a particular case the discrete Fourier transform, where the abelian group in question is the integers modulo N.

This has a very practical and intuitive interpretation: if the signal u is very localized, then its spectrum F(u) cannot be very localized at the same time, for their supports must be large enough (with respect to the total size of the domain).
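As a quick numerical sanity check, here is a minimal numpy sketch for G = Z/64, with a comb signal chosen purely for illustration (it happens to hit the equality case of the first inequality):

  import numpy as np

  N = 64                                    # G = Z/N
  u = np.zeros(N)
  u[::8] = 1.0                              # comb signal: support of size 8
  U = np.fft.fft(u)                         # its DFT is also a comb of 8 spikes
  s_u = np.count_nonzero(np.abs(u) > 1e-9)
  s_U = np.count_nonzero(np.abs(U) > 1e-9)
  print(s_u, s_U, s_u * s_U >= N)           # 8 8 True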

[0] Donoho D. L. and Stark P. B., Uncertainty Principles and Signal Recovery, SIAM Journal of Applied Mathematics, 49 (1989), 906–931

[1] https://terrytao.wordpress.com/2010/06/25/the-uncertainty-pr...


Ah, very cool! Yeah, that makes sense. Appreciate you taking the time to write up a quick explanation and send a reference. I'll check it out.


> The Uncertainty is not due to finite information. It is an inherent property!

That's what they said, though? Although I suppose it could stand to be emphasized more:

Physical quantities that have an uncertainty principle, such as position and momentum, are inherently related to each other by a Fourier transform. At a quantum-mechanical level, position is the Fourier transform of momentum, and vice versa - anything that constrains the position to a narrow interval (including but not limited to measuring it very precisely) inherently also smears momentum over a wide range of possibilities, even if it was itself previously tightly constrained (and, again, vice versa).


Apologies if you consider this pedantic, but I prefer to phrase it in terms of the wavefunctions in position and momentum, rather than position and momentum, unadorned:

At a quantum-mechanical level, the wavefunction in the position domain is the Fourier transform of the wavefunction in the momentum domain, and vice versa - anything that constrains the position wavefunction to a narrow interval inherently also smears the momentum wavefunction...

Again, apologies if this seems pedantic. I know enough about quantum mechanics to be dangerous, but am no expert, so pedantic wording helps me keep things straight.


> Apologies if you consider this pedantic

Honestly, if anything I'd say the pedantry goes the other way: pedantically, the wavefunction in the position domain is what a particle's position is (and respectively for momentum); it's making a distinction between the two (or pretending that there's anything in the position domain other than wave functions) that's insufficiently pedantic.


Need to be careful about the difference between the discrete Fourier transform and the Fourier series here.

It sounds like you're talking about the discrete Fourier transform, but making some statements which apply to Fourier series.

For the DFT, your N orthogonal "functions" are really just orthogonal vectors in C^N. In this case, if they're normalized appropriately, they certainly provide an orthonormal basis for C^N. But this is just linear algebra. The same is true of other orthogonal matrices which are discretized versions of systems of orthogonal functions. Like the discrete Wavelet transform, for example.

On the other hand, with the Fourier series, to represent any 2pi-periodic square integrable function, you need an expansion over an infinite sequence of orthogonal basis functions. In this case, the complex exponentials with integer frequency.

But you can't just take N arbitrary orthogonal functions and expect them to be meaningful when it comes to analyzing 2pi-periodic square integrable functions. Generating a system of orthogonal functions which truly spans L^2 for some set is more involved than what you do in linear algebra. You can read about Sturm-Liouville eigenvalue problems for one way. If you want a system of orthogonal polynomials, it's possible to use Gram-Schmidt, but notice that this doesn't make any sense for a periodic interval (and simply running Gram-Schmidt doesn't actually prove that the resulting sequence of polynomials actually spans L^2...). Also, note that the construction of the continuous wavelets is very different.
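To make the linear-algebra point concrete, here is a minimal numpy sketch (N = 8 chosen arbitrarily) checking that the normalized DFT vectors form an orthonormal basis of C^N:

  import numpy as np

  N = 8
  n = np.arange(N)
  # Columns are the discretized exponentials e^{-2 pi i kn/N}, normalized by sqrt(N)
  F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
  print(np.allclose(F.conj().T @ F, np.eye(N)))   # True: an orthonormal basis of C^N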


From this perspective you can look at it as the dot-product notion of 'how much' of this thing is in that thing. You can also think of it as a 'rotation' operation - a change of basis into the frequency domain. That insight I garnered from graphics pioneer James Blinn.

A related funky thing is that if you choose the right functions you can engineer the analysis to require only shifts and adds which was a boon for making bespoke compression schemes on limited hardware. An example is the Slant transform used in an early Intel video codec. IIRC the Hadamard transform was also similarly useful where the coefficients were all 1s and -1s. I also seem to recall that some modern codecs have modes that use transforms that avoid the traditional DCT.
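For a sense of what "only shifts and adds" looks like, here is a minimal sketch of the fast Walsh-Hadamard transform in Python (the input vector is arbitrary; the length must be a power of two). The basis "functions" are sequences of +1/-1, so the whole transform needs only additions and subtractions:

  def fwht(x):
      # In-place fast Walsh-Hadamard transform: butterflies of adds and subtracts only
      x = list(x)
      h = 1
      while h < len(x):
          for i in range(0, len(x), h * 2):
              for j in range(i, i + h):
                  a, b = x[j], x[j + h]
                  x[j], x[j + h] = a + b, a - b
          h *= 2
      return x

  print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))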

One last thing, I might have dreamt this but I recall reading a NASA tech report from the 70s or 80s outlining a video compression scheme that used a non DCT approach that could be calculated with simpler hardware without multiplication. I remember thinking, wow they were out front but never commercialized that.


> You can also think of it as a 'rotation' operation - a change of basis into the frequency domain.

Is this only a rotation between two discrete states, or do intermediate angles exist?


Nice, I never thought of PCM and CDMA in terms of basis functions, but you're totally right. Here are some links for anyone interested: https://en.wikipedia.org/wiki/Pulse-code_modulation#Modulati... https://en.wikipedia.org/wiki/Code-division_multiple_access#...

For further reading on orthogonal functions, here is a quick table that shows the bases used for the Fourier series (FS), Fourier transform (FT), and Discrete Fourier transform (DFT) from my linear algebra book: https://minireference.com/static/excerpts/fourier_transforma...


.. and wavelets replace the sin|cos functions that echo out to infinity with tapered funclets that have 'better' (for some value of) local modelling properties with fewer edge effects.

As noted by femto, a key insight is that there can be many types of basis functions.


I'm late and I only skimmed the other replies to see if someone else mentioned it, but being orthogonal is not sufficient. They must also form a complete basis. For instance, a set of sin functions with the same phase at 0 would not be complete because all of them would be 0 at x=0, and thus any function that is non-zero at x=0 could not be represented.

The second point to make is that the Fourier transform can be exactly correct if the bandwidth of the signal being analyzed is finite. Or stated another way, signals with non-finite bandwidth require an infinite number of terms to represent. The famous case of the overshoot of reconstructed square waves is because square waves have infinite bandwidth. Adding more terms can reduce the energy of the area under the overshoot to an arbitrarily small limit, but it can never be zero with a finite number of terms.
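A minimal numpy sketch of that square-wave overshoot (the sample grid and the term counts are arbitrary): the peak hovers near 1.09 no matter how many odd harmonics you add.

  import numpy as np

  t = np.linspace(1e-4, 0.5, 5000)          # points just to the right of the jump at 0
  for n_terms in (10, 50, 250):
      k = np.arange(1, 2 * n_terms, 2)      # odd harmonics of the square wave
      partial = (4 / np.pi) * (np.sin(np.outer(t, k)) / k).sum(axis=1)
      print(n_terms, partial.max())         # overshoot stays around 1.089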


Yeah, I chuckle every time people keep rediscovering Hilbert's functional analysis and the idea of an orthogonal basis for function spaces (it goes well beyond Hilbert's wildest dreams too!!).

Also, re the person who said no to you... wave mechanics is written in Hilbert's language directly...


> you can replace the 'N' sin/cos functions used in Fourier analysis with any set of 'N' orthogonal functions

But why? It rarely makes sense to use such arbitrary functions that don't satisfy further properties! Typically you'd also want your functions to be the eigenfunctions of a symmetric linear operator which is naturally associated to your problem. Most often, the Laplacian (second derivative operator) of your domain. Thus the natural linear PDE on your space (diffusion, potential, wave equations and their variations) can be readily solved in the Fourier domain. The sequence of eigenvalues provides a lot of geometrical information about the shape of the domain.


Chebyshev polynomials are just sinusoids in disguise, but they also bring in some of the nice properties of polynomials and have helpful recurrence relations.
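A minimal numerical check of the "sinusoids in disguise" identity T_n(cos θ) = cos(nθ), using numpy's Chebyshev class (n = 5 is an arbitrary choice):

  import numpy as np
  from numpy.polynomial import chebyshev as C

  n = 5
  theta = np.linspace(0, np.pi, 1000)
  T_n = C.Chebyshev.basis(n)                                    # degree-n Chebyshev polynomial
  print(np.allclose(T_n(np.cos(theta)), np.cos(n * theta)))     # True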


sure, and they are the eigenfunctions of a Sturm-Liouville operator!


Yes, you can think of so many methods of signal analysis as searching of the right set of overcomplete basis functions that give you a nice sparse representation for your problem (machine learning included).


The functions as a group must also form a basis for the function space; them being orthogonal is insufficient for that. Any finite set is also often insufficient.


I never took functional analysis, but is orthogonality strictly required in a function space? In finite-dimensional vector spaces you don't need an orthogonal basis to have a complete basis, it is just cleaner. You only need linear independence. (1,2), (0,1) is just as much a basis of R^2 as (0,1), (1,0).


The 2-minute brute-force explanation of Fourier transforms in a recent Veritasium video was my "aha!" moment.

https://www.youtube.com/watch?v=nmgFG7PUHfo&t=464s


I can also highly recommend 3Blue1Brown's treatment of the topic: https://www.youtube.com/watch?v=spUNpyF58BY (...which I see now is cited in the video that you link to!)


I have a Bachelor's and Master's in Electrical Engineering and do some signal and image processing as part of my job. I've been working with FFTs since undergrad and I thought I had a good understanding of what it does and how it works. But that Veritasium video helped me put all the pieces together in a way that made things click.

The famous 3Blue1Brown video is also very strong and maybe explains better why complex numbers are used.

Of course both videos leave stuff out in the interest of time but they should both be part of a signal processing curriculum.


You might also enjoy this Engineering Guy video playlist on how a mechanical harmonic analyzer works.

https://youtube.com/playlist?list=PL0INsTTU1k2UYO9Mck-i5HNqG...


For an additional video exposition, I also like Eugene Khutoryansky's "Fourier Transform, Fourier Series, and frequency spectrum"

https://www.youtube.com/watch?v=r18Gi8lSkfM

In particular, the visualization of the addition of sine waves of different frequencies in three dimensions is pretty spectacular. It's similar to Mathologer's "Epicycles, complex Fourier series and Homer Simpson's orbit":

https://www.youtube.com/watch?v=qS4H6PEcCCA


I thought this was really well done, without any misleading oversimplifications. I do think the author missed the central trick of the Fourier transform though. He's right in pointing out that sounds aren't (typically) a bunch of sine waves, but the trick is that the Fourier frequencies are cleverly chosen so they're all orthogonal (i.e. you can't represent any of them in terms of the others). The complex-number form turns out to give you one of the simplest representations of this set of frequencies.

I was wondering if anyone has a more rigorous explanation of the in-between frequencies part. I tried to follow this explanation from a great concise but rigorous book hosted online

https://ccrma.stanford.edu/~jos//dft/Frequencies_Cracks.html https://ccrma.stanford.edu/~jos//proj/Bessel_Functions.html

but unfortunately in this section it seems to gloss over the proof (or maybe I don't get it). It just sorta tells you "Bessel functions"! Which isn't very satisfying - especially when you have no idea what a Bessel function is..

I have a problem where my signal (which has a bit of noise) has maybe one (or several) strong non-multiple frequencies (I likely know their approximate values a priori). There may be other frequencies, but they shouldn't be close in frequency space. What I expect to see is a similar spectrogram with this same ringing. But what I'd like to do next is work backwards and estimate the original frequency and, most importantly, the phase! Everyone talks about the frequency ringing, but everyone skips explaining what happens to the phase.

And since I haven't been able to find a nice proof of the ringing, I haven't been able to work it out for myself either

Does anyone happen to have any good pointers for me? Thank you :)


Citing from that page

> In other words, $J_k(\beta)$ is the amplitude of the $k$th harmonic in the Fourier-series expansion of the periodic signal $x_m(t)$.

Or from the parent page about FM

> It is well known that sinusoidal frequency-modulation of a sinusoid creates sinusoidal components that are uniformly spaced in frequency by multiples of the modulation frequency, with amplitudes given by the Bessel functions of the first kind [15]. As a special case, frequency-modulation of a sinusoid by itself generates a harmonic spectrum in which the $k$th harmonic amplitude is proportional to $J_k(\beta)$, where $k$ is the order of the Bessel function and $\beta$ is the FM index.

In other words, Bessel functions come up in Fourier transforms of frequency modulation signals, because integrals. The calculations have been figured out centuries ago.

Regarding phase and arbitrary frequencies, it's easier to understand in the discrete time case: a vector of N complex numbers (time domain samples) goes in, it's multiplied by a matrix, and a vector of N complex numbers (Fourier coefficients) comes out.

Regardless of the input signal, such a transform can be inverted (up to rounding errors and the like) by multiplication with the inverse of the forward matrix: the difference between signals of different frequencies is that if the period is a whole fraction of the N samples, the projections fall on one Fourier coefficient instead of spreading around all of them. Aligning the signal's period with the window length N allows reproducing the whole infinite-duration signal from an N-coefficient slice, because all slices are identical.
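A minimal numpy sketch of that distinction (the frequencies of 8 and 8.37 cycles per window are arbitrary choices):

  import numpy as np

  N = 64
  n = np.arange(N)
  on_bin  = np.exp(2j * np.pi * 8.00 * n / N)    # whole number of cycles in the window
  off_bin = np.exp(2j * np.pi * 8.37 * n / N)    # period not a whole fraction of N
  print(np.count_nonzero(np.abs(np.fft.fft(on_bin))  > 1e-6))   # 1: a single coefficient
  print(np.count_nonzero(np.abs(np.fft.fft(off_bin)) > 1e-6))   # 64: spread over all bins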


Oh yikes yikes yikes. Yes, I wrote that question way too fast and mixed things up. You're totally right, Bessel functions are related to FM and not this ringing. I got things confused. Sorry about that

"Aligning the signal's period with the window length M allows reproducing the whole infinite duration signal from a N coefficient slice because all slices are identical."

Yeah but this isn't really a solution. I mean, yes.. you can cut your data to be a multiple of the frequency you're looking for, do an FFT, look at the phase. If you didn't know the exact frequency you could also do a "search" truncating your data at different values and looking for a peak. But that's.. ugly.

The problem is you end up throwing away data. In my use case I have maybe 2-3 periods of the frequency I'm interested in. Truncating my data throws away a lot of information. I'd like to do an FFT of all the data I have (it's actually not evenly spaced, so I'd probably have to interpolate it), then I'd like to look at the ringing directly and estimate the original frequency and phase. Ideally after that I'd somehow calculate some uncertainties on those values.


>anyone has a more rigorous explanation of the in-between frequencies part

I think it's because the DFT is one period of the sampled version of the DTFT of your discrete input signal, or equivalently one period of the DTFT of the periodic continuation of your discrete input. (Equivalence here is that convolving by a dirac comb in time domain is multiplying by a dirac comb = sampling in the frequency domain)

When you use a windowing function (e.g. multiplication by a rect) to convert an infinite continuous signal into a time-limited one, the Fourier spectrum of the original becomes convolved by the spectrum of the window (e.g. sinc). For a given sample length (window size), I guess if you capture the signal at a period boundary then your discretization of the DTFT is done exactly at peaks and zeroes. Otherwise if there's a mismatch, then you're not sampling the DTFT exactly at the peaks/zeroes, so you get the spectral leakage. See [1] for a nice interactive example. Figure 3 shows as input a pure sine. If this were infinite, the Fourier transform would be a Dirac delta at the corresponding frequency. But since it's windowed by a fixed window of 12 samples, the DTFT is the Dirac convolved with a sinc.

Another perspective might be that if you don't capture it at a period boundary, when you create a periodic summation of your input you're introducing higher frequencies at the discontinuity.

[1] https://jackschaedler.github.io/circles-sines-signals/dft_le... [2] https://geo-ant.github.io/blog/2021/dft-spectral-leakage-and... [3] https://dspillustrations.com/pages/posts/misc/spectral-leaka... [4] https://dsp.stackexchange.com/questions/34211/is-spectral-le...

Or you can just grind through the computation and see that the power spectrum is indeed sinc-like https://ccs.fau.edu/~bressler/EDU/STSA/Modules/VIII.pdf


Can't edit, but I just realized that when you go from continuous to discrete via the sampling step you also introduce replication in the frequency domain. So you have

Continuous sin wave: Spectrum is a single dirac

Sampling: Infinite samples of sin wave - spectrum is periodic train of impulses

Windowing: Fixed number of samples - spectrum is sinc convolved with that impulse train. The DTFT would end here.

DFT: Windowed samples are then periodically extended - spectrum is spectrum of DTFT multiplied by a comb.

Even assuming nyquist is satisfied, the tails of the sinc would still interact with each other, so the shape of the DTFT isn't _just_ a sinc, right? It'd be almost a sinc but with some additional wonkiness from the tails?


I understand what you're saying on a high level, but it's tricky for me to then translate that into practice

One issue is that all this interactive stuff always just gives you a "frequency domain" when really you have complex values and weights.

At the end of the day I need to find a way to work backwards. From a sinc ringing back to both a frequency and phase.

Say I have data that's got some high-frequency, relatively low-amplitude noise, and I have say several thousand samples. The samples in my case are actually not exactly evenly spaced in time, unfortunately, but I can interpolate. And I want to extract a strong, very low frequency component. It's got maybe 10 oscillations in my dataset.

If I do an FFT I'll get some sinc around 10Hz. There is other stuff in the spectrum, but not much around 10Hz. Now I want to work from that sinc and get back to an original frequency and phase. How do I go about that? (without truncating my data)

In my use case it might be a bit trickier. There are probably several distinct frequencies, but the phase needs to be known very accurately. Normally when playing with FFTs it's a bit problematic when it comes to determining frequency, because the naive interpretation limits you to discrete values. But when it comes to the phase offset there is no such limit. The limit is determined by the accuracy of your input (though I don't actually know how to go from data error values to a phase error value. The error propagation is a little confusing to calculate).


I'm not quite sure I follow: evaluating the fourier transform (whether continuous or discrete) at a given frequency omega gives you a complex value that encodes both the magnitude and the phase. Usually for the sake of plotting we just take the magnitude of this. Given these values for all frequencies, the FT et al. are perfectly invertible and you'll get back to your original sampled signal.

But it seems like you want to go from the spectrum of the windowed signal to the spectrum of the "true" signal. I.e. undo the effects of the sinc ringing, which would involve reconstructing a posteriori the most probable signal. I don't know much here, but for DTFT maybe deconvolution techniques are something to look at? Alternatively for the DFT since we only have information about discrete frequencies, some sort of technique that tries to interpolate samples at the edges to ensure the periodic summed waveform doesn't have discontinuities?


Oh also this is only semi-related, but the following is also really unintuitive to me: given a windowed function, it could have been constructed from any number of possible functions. But in the fourier domain, we're guaranteed that the spectrum of the windowed function is simply the spectrum of the original function convolved with sinc (which I just think of as "smoothing"). So the paradox is that the portions of the original function outside of the window can be "anything", even something completely wonky, and yet somehow when convolving its spectrum with sinc we always end up with the same thing.

Trying to puzzle this out: Let the "original" function be a superposition of the windowed portion like f1 and a non-windowed portion like f2:

   orig = rect * f1 + (1 - rect) * f2


Then we have

   F{orig} = F{rect * f1} + F{(1 - rect) * f2}
Now F{1 - rect} is (dirac - sinc) so we have

   F{orig} = F{rect * f1} + F{(1 - rect) * f2} = F{f1} ⊛ sinc + (dirac - sinc) ⊛ F{f2}
Since convolution is distributive

   F{orig} = F{f1} ⊛ sinc + (dirac - sinc) ⊛ F{f2} = (F{f1} ⊛ sinc) - (F{f2} ⊛ sinc) + F{f2}

The above I've plotted to verify, and it's intuitive so far.

   rect * orig = f1

   F{orig} ⊛ sinc = sinc ⊛ ((F{f1} ⊛ sinc) - (F{f2} ⊛ sinc) + F{f2})


   = sinc ⊛ (F{f1} ⊛ sinc) - sinc ⊛ (F{f2} ⊛ sinc) + sinc ⊛ F{f2}
   = sinc ⊛ sinc ⊛ F{f1} - sinc ⊛ sinc ⊛ F{f2} + sinc ⊛ F{f2}
But sinc ⊛ sinc = sinc (easiest to see in the Fourier domain), so the f2 terms cancel out and we get sinc ⊛ F{f1}.

So I think the real magic here is that for any function f(x), f(x) - sinc ⊛ f(x) is a function that gives 0 when again convolved with sinc. The math works out so that sinc is precisely the function which captures that "uncertainty" as to what the portions outside the window are. Really mind-blowing.

---

But back on topic to your question, note that even truncating your data doesn't get rid of the sinc ringing, it's always there as an artifact of the windowing, it's just not always revealed by the discrete sampling of the spectrum. The sinc ringing itself tells you the measure of uncertainty. In terms of how to practically use this information of a DFT though, probably just taking the spectrum values of magnitude above sinc main-lobe/side-lobe ratio would be good enough? Or interpolate the DFT values to reconstruct the DTFT spectrum and then resample it yourself? I can't be of much help here, I'm more comfortable with the theory than practical usage.


To expand on the insight, you may want to see this video about wavelets, which relates them to the Fourier transform.

It explains wavelets as an intermediate point between pure waves (all time, no frequency representation) and the Fourier transform (all frequencies, no time representation), with wavelets having part of both.

https://www.youtube.com/watch?v=jnxqHcObNK4


"time" is present in both frequency and position. The difference is how time is sampled. Fourier Transform maps between global frequency space sampling and (repeating wave for all time) and local positional/point space sampling (perfect impulse).

Wavelets map to an intermediate space, as you say, with smeared points/impulses and damped waves.


In this respect, are wavelets similar to how the STFT is done with a Hann window or the like?


There are a couple of incorrect assertions near the beginning that should be addressed:

1. "That is, sound waves are not made out of sines as real-word phenomena."

Sinusoids are the solutions to the harmonic oscillator differential equation (\ddot{x} = -w^2 x). The harmonic oscillator is an excellent model, or a component of excellent models, for many natural phenomena, including vibrating strings and columns of air, pressure waves in fluids, the list goes on. The utility of sinusoids is not due simply to their convenient mathematical properties, it's because many natural signals are sparse in the frequency domain, precisely because sinusoids do describe real world phenomena.

2. "A piston on a crankshaft produces "pure" sinusoidal up-and-down motion when operating at constant angular velocity."

Uh, no. The motion of a piston is approximately sinusoidal as you decrease the radius of the crank, but in no case is it a "pure" sinusoid.
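A minimal numpy sketch of that (crank radius 1 and rod length 3 are arbitrary): the spectrum of the exact slider-crank motion shows harmonics beyond the fundamental, so the motion is only approximately sinusoidal, and gets closer to a pure sinusoid as r/l shrinks.

  import numpy as np

  r, l = 1.0, 3.0                                  # crank radius, connecting-rod length
  theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
  x = r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)   # piston position
  X = np.abs(np.fft.rfft(x - x.mean())) / len(theta)
  print(X[1], X[2], X[4])   # fundamental plus clearly non-zero 2nd and 4th harmonics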


Another way to phrase my response to #1 above is that sinusoids are the natural modes of many systems. That is, you put a sinusoid in, and you get a sinusoid of the same frequency out. It may change in amplitude and/or phase, but it still has the same fundamental shape. This is again because these systems act like harmonic oscillators or systems of harmonic oscillators.

So why harmonic oscillators? The same analysis shows up any time you have a restoring force proportional to, and in the opposite direction of, the displacement. This shows up in springy things (recall the old mechanical engineer's adage: everything's a spring until it breaks), and fields of all sorts (electric, gravitational, magnetic, pressure).


Nice! I used this website by Jack Schaedler as an intro in my signal processing course, also highly recommended:

https://jackschaedler.github.io/circles-sines-signals/index....


Nice. I used All Signal Processing to get an intuitive feel through visualization.

There's a website, but the videos are more easily accessible through YouTube. Still worth checking out for the table of contents and quizzes.

https://youtube.com/user/allsignalprocessing


I only really understood it while looking at imaging: how the base frequencies overlap to form an image, and how the phase information is much more important for signal reconstruction than the amplitude.

While images have an additional dimension it still helped me to really figure out what was going on.

Since it is so well documented, I recommend looking at the DCT that is employed in JPEG. The principle is the same, just the frequency decomposition works a bit differently.
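For the curious, a minimal sketch of a JPEG-style 8x8 block transform using scipy's DCT routines (the block contents and the number of kept coefficients are arbitrary choices, not the actual JPEG quantization scheme):

  import numpy as np
  from scipy.fft import dctn, idctn

  block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for 8x8 pixel data
  coeffs = dctn(block, norm='ortho')                  # 2D DCT-II, as in JPEG
  coeffs[4:, :] = 0                                   # throw away high frequencies
  coeffs[:, 4:] = 0
  approx = idctn(coeffs, norm='ortho')                # inverse transform
  print(np.abs(block - approx).max())                 # small error for a smooth block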


It's not for nothing that at college, the course in Fourier Series & Boundary Value Problems was better known as Mystery Math. There was a single key insight that made the entire course suddenly clear as a bell, and if you were lucky, that insight came _before_ the final exam.


so, uh, you gonna tell us the insight?


Maybe that it's "just" a change of basis, onto a basis made of orthogonal periodic functions.


The change of basis being done with orthogonal projection, derived from defining the scalar product of two functions as the integral of their product: f · g = ∫ f(x)g(x) dx over some interval.
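A minimal numerical check of that scalar product on [0, 2π] (trapezoidal integration on an arbitrary grid): distinct harmonics are orthogonal, a harmonic with itself is not.

  import numpy as np

  x = np.linspace(0, 2 * np.pi, 100001)
  dot = lambda f, g: np.trapz(f(x) * g(x), x)        # f · g = ∫ f(x) g(x) dx
  print(dot(np.sin, lambda t: np.sin(2 * t)))        # ~0: orthogonal
  print(dot(np.sin, np.cos))                         # ~0: orthogonal
  print(dot(np.sin, np.sin))                         # ~pi: the squared norm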


Really great explanation, but one thing is missing that plagues every other explanation of Fourier analysis: signals are not infinite.

The time window you use for analysis matters just as much as the number of frequency bins, but everyone always ignores the time window for some reason.


Is there something missing in the two sections near the end about the sinc function and limited time intervals?

https://dsego.github.io/demystifying-fourier/#sinc-function-...


I did sound analysis in the submarine service and always appreciated how the FFT worked though I lacked/lack the math background to fully understand it. The article was great, got through the first part, bookmarked for later.


I get the math and the explanations and all that but what I don't get is how do you express a complex sound signal as a function of f? Is it a long array with t and the amplitude?


In reality, you can't. Since the Fourier transform was designed for a continuous signal, it needs some modifications to work in a digital world where, instead of a continuous signal, you periodically sample the waveform at regular intervals and use the discrete Fourier transform (DFT) or its more efficient little brother, the fast Fourier transform (FFT). Both of these work off N samples of the waveform at a time.


Afaik, in digital sound recordings it's basically an array of sampled amplitudes. And you need to know the sampling rate in advance, e.g. 44100 samples per second. This is, I think, called PCM https://en.wikipedia.org/wiki/Pulse-code_modulation
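A minimal numpy sketch of that representation (the two test tones are arbitrary): the "signal" is literally just an array of amplitudes, and the FFT tells you which frequencies are in it.

  import numpy as np

  fs = 44100                                    # samples per second
  t = np.arange(0, 1.0, 1 / fs)                 # one second of sample times
  signal = 0.6 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
  spectrum = np.fft.rfft(signal)
  freqs = np.fft.rfftfreq(len(signal), 1 / fs)
  print(freqs[np.argsort(np.abs(spectrum))[-2:]])   # strongest bins: 880 Hz and 440 Hz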


Yes.


Great that you've tried to learn by teaching this. I'm going to read it in depth later on. Thanks and well done so far (based on the bits I've read).



