The funny thing is that the specialized math that people use to describe filters is actually rather simple to work with once you master some initially unfamiliar ideas. It can really seem like a magic trick.
As the saying goes: EQ doesn't cause phase shift, phase shift causes EQ.
The conceptually simplest filter for a beginner, though, is the moving average filter: keep a list of the last n samples and average them. When a new sample comes in, throw out the oldest.
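A minimal sketch in Python (my own illustration; the function name and window size are assumptions, not anything from the thread):

    from collections import deque

    def moving_average(samples, n=4):
        """Average the last n samples; the deque drops the oldest automatically."""
        window = deque(maxlen=n)
        out = []
        for x in samples:
            window.append(x)
            out.append(sum(window) / len(window))
        return out

    # A noisy edge gets smoothed out:
    print(moving_average([0, 0, 8, 0, 0, 8, 8, 8], n=4))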
In the end, many low-pass filters are some variant of this: adding one or more samples from the past to the current sample using certain weights, maybe with feedback thrown into the mix.
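The "feedback thrown in" version is barely longer. A sketch of a one-pole IIR low-pass (the coefficient a is an assumed weight, not something from the thread):

    def one_pole_lowpass(samples, a=0.2):
        """y[n] = a*x[n] + (1 - a)*y[n-1]: one past output fed back with a weight."""
        y, out = 0.0, []
        for x in samples:
            y = a * x + (1 - a) * y   # feedback: the previous output is reused
            out.append(y)
        return out

    # A step input ramps up gradually instead of jumping:
    print(one_pole_lowpass([0, 0, 1, 1, 1, 1, 1, 1]))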
I saw a talk recently where an experienced engineer was arguing that for some modern applications, it's better in both cost and quality to convert the analog signal to digital, filter it in the digital domain, then convert it back to analog (analog signal -> ADC -> digital filter -> DAC -> analog signal).
I don't remember the exact domain, possibly very high frequencies.
sure, you do that whenever you can, which is pretty much anytime you don't have very high frequencies
in order to get to where you don't have very high frequencies, you need analog filtering on the input and the output, and you need adcs and dacs suitable for the application. you might also need attenuation, amplification, impedance transformation, and/or mixing on the input and output. all this stuff is analog and requires this kind of analysis to understand
but whenever you can do stuff in the digital domain, you do, because you can make the error introduced by digital computation as small as you want, and you can redesign your signal processing after your space probe has passed neptune, where a soldering iron would be inconvenient. and you can fit arbitrarily complex filtering and other signal processing into an arbitrarily small package, near enough; what would be another precision capacitor in an analog circuit is just another 16-bit coefficient loaded from your terabyte microsd card in a dsp setup, if you're processing a slow enough signal
One reason you can get away with this sometimes is that ADCs themselves act like a low-pass RC filter. The ADC input is itself a capacitor that gets charged to the voltage that needs to be measured, and it has an appreciable resistance (which can be supplemented with an additional input resistor). Sometimes that's all the low-pass filtering you need to prevent aliasing, and the rest can often be done in the digital domain.
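For a back-of-the-envelope check, the cutoff of that input RC network is f_c = 1/(2πRC). A quick sketch with made-up part values (the numbers are illustrative assumptions, not from the comment):

    import math

    R = 10e3    # source/series resistance in ohms (assumed value)
    C = 10e-12  # ADC sampling capacitor in farads (assumed value)

    f_c = 1 / (2 * math.pi * R * C)         # first-order RC cutoff
    print(f"cutoff ~ {f_c / 1e6:.2f} MHz")  # ~1.59 MHz with these values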
But if the signal you're measuring is very weak (or poorly matched to the ADC range), you probably need some kind of amplifier for it anyway, and in that case you may as well make a filter out of it at the same time to maximize the signal-to-noise ratio.
This is the default solution for anything even slightly complicated. At very high frequencies (RF), you may be forced to use passive filters. The range where it makes sense to use active op-amp filters has shrunk considerably.
You get ease of prototyping and modification, and excellent performance. These systems may still have analog filter components (anti-aliasing, reconstruction), but with digital components so cheap, good, and fast, this "mostly digital" technique is almost always the best option.
The most common type of amplifier at audio frequencies usually works the same way, these days. ADC -> DAC. It fully decouples (signal-wise) the input and output (no possibility of distortion). And component tolerance is less important. It's also more power efficient, potentially exceeding 90%, since power loss only occurs when the transistors generating the output switch on/off, and at audio frequencies transistors switch almost instantly.
Good article. I think it's about how I thought of it too in my EE coursework. It was maybe 1-2 classes before we were told about filters (though I'd seen many online before that) that I realized they worked like frequency-dependent voltage dividers. I think that's the key. Voltage dividers are easy.
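To make the divider picture concrete, here's a sketch (component values are my assumptions): an RC low-pass is a plain voltage divider whose bottom leg is the capacitor's impedance Z_C = 1/(jωC), so the division ratio falls as frequency rises.

    import math

    R = 1e3     # ohms (assumed)
    C = 100e-9  # farads (assumed); cutoff ~1.6 kHz

    for f in (10, 100, 1_000, 10_000):           # Hz
        w = 2 * math.pi * f
        Zc = 1 / (1j * w * C)                    # capacitor impedance, falls with f
        H = Zc / (R + Zc)                        # ordinary voltage-divider ratio
        print(f"{f:>6} Hz: |H| = {abs(H):.3f}")  # gain shrinks with frequency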
> On Wikipedia, unsuspecting visitors are usually greeted by some variant of this:
As someone who studied two years of electronics, I don't remember everything, and certainly not the explanation of the Laplace transform.
But I certainly remember writing like mad for a long time while trying to catch what the teacher said, only to be told, somewhere around lunchtime I think, that this wasn't actually important for engineering or even the exams, only included because it was interesting.
One of a few reasons why I changed from electronics engineering to computer systems even if it meant a year more of studies and looked like a poorer choice wrt job security (at the time).
I passed that course, but it's partially why I applaud everyone who starts with the simple and interesting instead of diving head first into the mathier parts, trying to shake off most students before they even know why the subject is interesting.
The Laplace transform is an easy(ish) way to turn differential equations into algebra over complex numbers.
The response of a capacitor or inductor is a differential equation with an exponential solution. To get the steady-state frequency response, you substitute s = iω, where ω is just the angular frequency of a sine wave.
(It's actually jω in EE, because reasons, but it means the same thing.)
So if you build a network of capacitors, resistors, and inductors, or springs and masses, or anything else that uses the same maths, you can turn it into a simple algebraic equation in s.
This is called the transfer function. It literally defines the frequency and phase response with some simple algebra.
In practice you don't bother with the calculus and just go straight to s. A single capacitor filter is 1/(1+sRC). Add more stages - the rules are pretty simple - and you get a more complex equation.
Then you can put in ω = 2πf for f = 200 Hz or whatever, and the transfer function gives you a complex number. Its magnitude tells you how big the output is at that frequency, and its argument how much it's phase shifted.
If you sweep ω and graph the result, you get the complete frequency and phase response.
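A sketch of that sweep for the single-capacitor example above (R and C values are my assumptions):

    import cmath, math

    R, C = 1e3, 100e-9                    # assumed values; cutoff ~1.6 kHz

    def H(f):
        s = 1j * 2 * math.pi * f          # evaluate on the s = jw axis
        return 1 / (1 + s * R * C)        # the single-capacitor 1/(1+sRC)

    for f in (100, 1_000, 10_000):        # Hz
        h = H(f)
        db = 20 * math.log10(abs(h))           # gain in dB
        deg = math.degrees(cmath.phase(h))     # phase shift in degrees
        print(f"{f:>6} Hz: {db:6.1f} dB  {deg:6.1f} deg")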
Really all you're doing is replacing a static DC test voltage with a sine wave "probe" across the frequencies you're interested in.
There's a bit more to it - which is where poles and zeroes appear - but not much.
(It's also related to the Fourier transform, but that's a whole other thing.)
It's incredibly impressive that all of this was invented in the early 19th century by a handful of French aristocrats and a couple of Germans, more or less as a hobby.
I just don't like how the school decided to use three lecture hours to derive it, and only afterwards explain what it was for and that this was only a demo of how one could arrive at the same result.
We also derived the relativistic equations, but there we were told up front what was going on.
I’m not going to try and defend applied math courses taught to engineering students. The quality varies—some are superb, some are pretty meh, some are just bad, and it can be pretty difficult to tell the difference as a student, especially when a good course assumes you had a good one for a prerequisite when you only had a meh one.
I am going to point out that repeatedly asking “why?” will get you into very mathy weeds very quickly, such that “foul” calculus will be the least of your problems. Like here:
> As it turns out, most analog filters are relatively distortion-free only in one special case — a perfectly steady sine waveform[.]
Ever wonder why? And not just in the sense of showing that’s the right answer, in the sense of setting off without the knowledge of the answer and deriving it. Well, the starting point is that the filter is linear and time-translation-invariant (i.e. doesn’t have a clock built into it); the fact that it is causal is somewhat important but not immediately so. But then things get interesting, and if you’re lucky you’ll get to hear such wonderful phrases as “group characters” and “representation theory”.
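One way to see the linear-and-time-invariant point numerically (a sketch I'm adding, not from the comment): run a steady sine and a square wave through the same LTI filter. The sine comes out as a sine at the same frequency, merely scaled and phase-shifted; the square wave comes out visibly reshaped.

    import math

    def one_pole(xs, a=0.05):
        """A causal, linear, time-invariant filter: y[n] = a*x[n] + (1-a)*y[n-1]."""
        y, out = 0.0, []
        for x in xs:
            y = a * x + (1 - a) * y
            out.append(y)
        return out

    fs, f0 = 1000, 5                      # sample rate and test frequency (assumed)
    sine = [math.sin(2 * math.pi * f0 * n / fs) for n in range(fs)]
    square = [1.0 if s >= 0 else -1.0 for s in sine]

    # Steady state: the filtered sine is still a pure sine (scaled, phase-shifted);
    # the filtered square wave has become exponential ramps, i.e. it is distorted.
    print(one_pole(sine)[900:903])
    print(one_pole(square)[900:903])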
I don’t blame people who got burned by their math courses, but I do want to emphasize that these kinds of “why?” questions are exactly what math is there for. If the math book in front of you doesn’t look like it works that way, take another one and try again—just don’t expect the specific question you have to be answered immediately.
(I’m assuming I don’t have to explain the virtues of repeatedly asking “why?” on a forum called Hacker News.)
Those references to "group characters" and "representation theory" cracked me up, in a good way. For those who have an engineer's exposure to Fourier series and transforms, these would be the logical next steps to appreciate the big picture.
A pedagogic device I have some success with is keeping things concrete: showing that a shift in time (index) of a very high-dimensional vector is the same as multiplying by a fixed matrix. Same with the difference operator, summation operators, and so on. Then one can think of the singular value or eigen-decomposition of those matrices; that's the point where students realize why these operators "become" point-wise multiplications and divisions.
Works with those who have some experience with finite dimensional matrices.
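A small sketch of that device (my construction, using a circular shift so the matrix stays simple, and assuming numpy is available): the shift is a fixed matrix, the DFT basis diagonalizes it, and that is exactly where "shift becomes point-wise multiplication".

    import numpy as np

    N = 8
    S = np.roll(np.eye(N), 1, axis=0)     # fixed matrix: (S @ x)[n] == x[n-1]

    x = np.random.randn(N)
    assert np.allclose(S @ x, np.roll(x, 1))

    # The DFT basis diagonalizes S: F S F^-1 is diagonal, with the
    # eigenvalues exp(-2j*pi*k/N) on the diagonal.
    F = np.fft.fft(np.eye(N))             # DFT matrix (symmetric, so F @ x is the DFT)
    D = F @ S @ np.linalg.inv(F)
    assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-9)

    # Hence, in the Fourier basis, "shift" is point-wise multiplication:
    k = np.arange(N)
    assert np.allclose(np.fft.fft(np.roll(x, 1)),
                       np.exp(-2j * np.pi * k / N) * np.fft.fft(x))
    print("shift == point-wise multiplication in the DFT basis")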
> But I certainly remember writing like mad for a long time while trying to catch what the teacher said, only to be told, somewhere around lunchtime I think, that this wasn't actually important for engineering or even the exams, only included because it was interesting.
Depends what you are getting trained for. An acceptable cobbler who can patch up a broken shoe into something usable would have different pedagogic needs than one who is training to become a designer of the next-gen shoe. A vibrant economy needs both kinds.
That said, a big part of 'hacking' used to be about the pleasure of deeply understanding a topic, not necessarily just as means of earning or employment.
Was it Feynman who said something about what the utility of physics is and why people engage in it? I'm paraphrasing his comment from faulty memory: sex is useful for reproduction, but that's not why we engage in it.
Many here are into it for that other reason of hacking.
Much agree, and one of the reasons to try to make hacking and fun practical stuff part of a course is that the knowledge is stickier. You'll remember the Moog filter you built on a breadboard which made weird noises after you calculated the RC values wrong. Maybe not so much the perfect stopband you got a solver to make for you in a DSP simulator. Feynman was all about that teaching realism. Also, it takes about six months to a year of tinkering to really get "the pleasure of deeply understanding a topic".
Many uni departments scrapped the electronics labs because it's all a bit messy, expensive, and time-consuming, and I think the demand for people who teach it is getting smaller, at least here in the UK. Last time I was asked to give an electronics class it was mainly digital switching of loads and hooking up I2C sensors. Still, we had fun. That "just get the certificate" mentality of education doesn't leave much room for the play factor.
The tension between who is the university for (students or their potential employer) and the tension around what is the university for (knowledge acquisition or for earning an accreditation) is a problem that society has not solved very well.
I had hoped that online learning platforms such as Coursera and edX would solve this by occupying the "for students" niche and the "for knowledge acquisition at the student's preferred pace" niche. It did not quite turn out that way.