Signal processing is key to embedded machine learning (edgeimpulse.com)
104 points by janjongboom | 32 comments





Actually, signal processing is already used for most machine learning on audio signals, including speech recognition. The reason is that ML algorithms, including deep learning, have a hard time learning the information you can get from a discrete Fourier transform.

Audio data in the time domain is just too noisy for most machine learning, and doing some signal processing as a preprocessing step often helps a lot.
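For example, the usual preprocessing step is a short-time log magnitude spectrum. This is just a minimal sketch assuming NumPy and a 1-D waveform; the frame and hop sizes are illustrative, not taken from any particular system:

    import numpy as np

    def log_spectrogram(x, frame_len=512, hop=256, eps=1e-10):
        # Frame the waveform, window each frame, and take the log magnitude of its DFT.
        # The resulting 2-D array is a much friendlier ML input than the raw samples.
        window = np.hanning(frame_len)
        n_frames = 1 + (len(x) - frame_len) // hop
        frames = np.stack([x[i * hop : i * hop + frame_len] * window
                           for i in range(n_frames)])
        spectrum = np.abs(np.fft.rfft(frames, axis=1))  # one magnitude spectrum per frame
        return np.log(spectrum + eps)                   # log compression, as in MFCC front ends

    # e.g. features = log_spectrogram(raw_audio)  # raw_audio: 1-D NumPy array of samples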

Here it seems like he works with non-audio data, where this is less common.


This is just saying that signal processing is vital to the input sensors, which doesn't seem new.

Yes, ML is dependent on getting data. Signal processing is vital to that.

I thought this was describing a new application of signal processing in ML.


It's not the key so much as the fundamentals we've been applying for decades...

This is an awesome topic, but I'm somewhat annoyed they didn't dive into what kind of DSP and instead turned the article into an advertisement.

Does anyone have any good further reading on the topic? (Books, articles, classes, anything really.)



Author here. Our main focus right now is on vibration and non-voice audio. For vibration we use spectral analysis: Butterworth filters, an FFT over the result, looking at the peaks and their locations in the FFT output, and then at spectral power buckets. For audio we look at MFCCs (sometimes paired with a bandpass filter). The source for these is in https://github.com/edgeimpulse/processing-blocks (Python) and https://github.com/edgeimpulse/inferencing-sdk-cpp (optimised C++). What's tough on our end is that we need to run the DSP in real time on very constrained devices.
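Roughly, the vibration path looks something like the sketch below. This is a simplification with SciPy/NumPy; the cutoff frequency, peak count, and bucket count are illustrative defaults rather than the values from the repos above:

    import numpy as np
    from scipy import signal

    def vibration_features(x, fs, cutoff_hz=250.0, n_peaks=3, n_buckets=8):
        # 1. Butterworth low-pass to strip high-frequency noise
        sos = signal.butter(2, cutoff_hz, btype='low', fs=fs, output='sos')
        filtered = signal.sosfilt(sos, x)

        # 2. Magnitude spectrum of the (windowed) filtered signal
        spectrum = np.abs(np.fft.rfft(filtered * np.hanning(len(filtered))))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)

        # 3. Heights and locations of the strongest spectral peaks
        peak_idx, _ = signal.find_peaks(spectrum)
        top = peak_idx[np.argsort(spectrum[peak_idx])[-n_peaks:]]
        peak_feats = np.concatenate([freqs[top], spectrum[top]])

        # 4. Total power in evenly spaced frequency buckets
        power = spectrum ** 2
        buckets = [power[idx].sum() for idx in np.array_split(np.arange(len(power)), n_buckets)]

        return np.concatenate([peak_feats, buckets])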

Unfortunately I have very few good resources for learning more. I've done lots of experimentation in MATLAB, just getting raw sensor data and playing with the Signal Processing Toolbox that they sell.


The best DSP intro I've found, especially for the non-math whiz is Steven W. Smith's DSP Guide: http://www.dspguide.com/pdfbook.htm

I own a copy of this book. It is quite good as a reference once one has some DSP experience. I wouldn't recommend it as an intro.

The book is quite good in that it is to the point and provides a good roadmap, but in doing so it often omits concrete examples and instead writes out the algorithms in mathese. Knowing calculus should be enough.


The Kalman filter is basically an ML algorithm. The key here is to implement the already-known linear optimization/approximation versions of it in common libraries.
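To make that concrete, the scalar version fits in a few lines. This is the textbook form with made-up noise variances, not any particular library's implementation:

    def kalman_1d(measurements, q=1e-4, r=0.1, x0=0.0, p0=1.0):
        # Minimal scalar Kalman filter: estimate a slowly varying value from noisy samples.
        # q = process-noise variance, r = measurement-noise variance (both arbitrary here).
        x, p = x0, p0
        estimates = []
        for z in measurements:
            p = p + q                  # predict: uncertainty grows by the process noise
            k = p / (p + r)            # Kalman gain: how much to trust the new measurement
            x = x + k * (z - x)        # update the estimate toward the measurement
            p = (1.0 - k) * p          # shrink the uncertainty accordingly
            estimates.append(x)
        return estimates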

As an ML practitioner I am curious about how you decide what is ML vs. what is signal processing. I really can't tell. It's the same freaking problem, be it information theory, machine learning, or signal processing. All of us are stuck at the same impasse: performing optimal rate-distortion on a continuous-valued signal efficiently and uniformly.

Probably a matter of tradition, not substance. Same with control theory.

(Curious who downvoted me, when we are in agreement... Maybe the buttons are too small and people just mis-click.)


Don't worry about downvotes much. They can be quite erratic at times. I upvoted both of yours.

It always comes down to representation. If you can use a deterministic, efficient algorithm to represent the data in a more amenable manner, then the ML system will have a much easier time "making sense" of the patterns inherent in the data compared to a system that has to learn some abstract transformation from raw data to useful representations.

My concern with a lot of signal processing techniques used in ML is that sometimes they presuppose things that may not be true.

That is, signal processing has the Nyquist rate, and it typically knows there is an underlying signal. Does ML have either?


> That is, signal processing has the Nyquist rate, and it typically knows there is an underlying signal. Does ML have either?

What does this question mean? Every band-limited signal has a Nyquist rate. Most signals of interest are well-contained within some finite bandwidth (e.g., human voice). Sampling above this rate will get you very little.

If you're building an ML model to process a certain class of sampled signal and you know, for example, 99% of the signal energy falls within a certain frequency range, that should guide your choice of sample rate. If you're sampling at too high a rate, your input layers may have far more parameters than are needed or useful.
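As a rough check, you can pull that number straight out of a representative recording. A minimal sketch, assuming NumPy; the 99% threshold is just the example figure above:

    import numpy as np

    def bandwidth_for_energy(x, fs, fraction=0.99):
        # Frequency below which `fraction` of the signal energy lies,
        # taken from the power spectrum of a representative recording.
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        cumulative = np.cumsum(power) / power.sum()
        return freqs[np.searchsorted(cumulative, fraction)]

    # If this comes back around 4 kHz, sampling much above ~8 kHz mostly adds
    # input parameters rather than information.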

Whether or not a given ML input actually contains a signal of interest doesn't seem relevant to how you sample and preprocess the signal.


Most machine learning is not done on a band-limited signal. I've literally seen these tactics applied to demand forecasting, and I just can't see why they should apply.

Ah, I see what you mean. Yes, if you're not dealing with approximately bandlimited and sampled signals, then this wouldn't apply. The article is about embedded devices processing sensor data (microphones, motion/light sensors, accelerometers, etc.), and in those cases the signal of interest will often be bandlimited.

Completely agreed. In those cases, these tactics are required.

Well, natural signals that have an end aren't band-limited either. It's a mathematical abstraction that approximates many real-world scenarios well enough.

But our perception of many things can effectively be band-limited, with no loss of generality in working with the data. I'm unconvinced this is the case in the places ML is often used.

Note, I have to hedge and say I am not convinced they are inapplicable. Just not convinced they are applicable.

Also note, I hadn't gotten the article to load when I fired off my concern. I keep the concern, but ack that it is not applicable to this article.


As I said in another comment, I find it hard to separate what is signal processing vs. what is information theory vs. what is ML. I have heard the arguments that "if it's got trigonometry then it's signal processing" or "if it's one-dimensional then it's signal processing". I find these arguments pretty weak and unconvincing.

Officially I belong to the ML tribe, but all of these fields are tackling pretty much the exact same problem, and any breakthrough in one of them will translate to the others. The name of the topic has changed over the years, but the fundamental problem has remained the same. Call it yet another name: approximating/extracting an unknown function from samples.


"if its 1 dimensional then its signal processing"

This is clearly not the case, since image signal processing, 2D Fourier transforms, etc. are alive and well.


I agree they are all related. I just push back on the applicability of some techniques in places where we don't actually know there are signals, if that makes sense.

You can also use the continuous Fourier transform and not sample, so there won't be a Nyquist rate. I also think neural networks have some really interesting properties when viewed from a signal processing perspective. In fact, you can view the correct outputs of a neural network as a signal and the incorrect outputs as noise, and use conventional SP techniques to narrow down which parts of a network are informative.

Practically speaking, how would you use a continuous Fourier transform? Do you mean passing the signal through a "Fourier transform analog circuit" (does that even exist?) and then sampling the output?

Build a bank of bandpass filters with different center frequencies, and you have an analog Fourier transform. (Or at least the power spectrum equivalent; it's trickier to get the phase info.) Analog vocoders are one example of this and they were invented long before discrete FTs.
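For illustration, here's a digital sketch of that filter-bank idea with SciPy. The filter order, bandwidths, and center frequencies are arbitrary choices, not taken from any real vocoder design:

    import numpy as np
    from scipy import signal

    def filterbank_spectrum(x, fs, centers_hz, bandwidth_hz=100.0):
        # One bandpass filter per center frequency; the average output power of each
        # band approximates one bin of the power spectrum, vocoder-style.
        powers = []
        for fc in centers_hz:
            lo, hi = fc - bandwidth_hz / 2, fc + bandwidth_hz / 2
            sos = signal.butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
            band = signal.sosfilt(sos, x)
            powers.append(np.mean(band ** 2))
        return np.array(powers)

    # e.g. filterbank_spectrum(x, fs=8000, centers_hz=range(200, 3600, 200))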

See my other answer; I meant applying it analytically. Your idea is interesting, though, and I do know that it should be possible to build a Fourier-transform analog circuit. Once you sample, though, you are subject to a Nyquist frequency and you introduce noise into your signal, and the advantage of performing it with an analog circuit is lost.

Yeah, actually I thought a bit more about it as well, and any real (analog) circuit will have a frequency response that tapers off to zero as the frequency goes to infinity. So that basically determines a Nyquist frequency for any analog circuit, even if that frequency is so high that it has no practical implications.

I was curious about this. Still am. I tried thinking of several things it could mean, but so far I'm coming up short.

I think what he means is that in signal processing you can define the signal analytically and perform the Fourier transform analytically, without sampling. The Nyquist rate only comes in when you talk about digital signal processing.

This was what I meant initially, but I realized that for a lot of NNs it might be pretty difficult, if not impossible, to define the signal analytically.

Ah, if I take your meaning of analytically correctly, I think you are basically arguing for symbolic reasoning.

I do still see the appeal of that. Seems solidly not a promising path, at this point. :(




