Audio data in the time domain is just too noisy for most machine learning, and doing some signal processing as a preprocessing step often helps a lot.
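As a toy illustration of the kind of preprocessing I mean (a hypothetical sketch using NumPy; the `log_spectrogram` helper and all the parameter choices are made up for the example), moving from raw samples to a log-magnitude spectrogram makes a tone buried in noise stand out clearly:

```python
import numpy as np

def log_spectrogram(x, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping windowed frames and
    return log-magnitude FFT frames (a simple spectrogram)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)  # compress dynamic range

# One second of a noisy 440 Hz tone sampled at 8 kHz
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(fs)

S = log_spectrogram(x)
# In the time domain the tone is hard to see; in the averaged
# spectrum the bin nearest 440 Hz dominates.
peak_bin = S.mean(axis=0).argmax()
peak_hz = peak_bin * fs / 256  # bin width = fs / frame_len
```

Feeding `S` (or features derived from it) to a model is usually far easier than feeding it the raw waveform.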
Here it seems like he works with non-audio data, where this is less common.
Yes, ML is dependent on getting good data, and signal processing is vital to that.
I thought this was describing a new application of signal processing in ML.
Does anyone have any good further reading on the topic? (Books, articles, classes, anything really.)
Sorry, I don't know whether any of these are good. They're in my DSP bookmarks directory; I think I found them a while back when I was trying to relearn these topics.
Unfortunately I have very few good resources for learning more. I've done lots of experimentation in MATLAB, just getting raw sensor data and playing with the Signal Processing Toolbox that they sell.
The book is quite good in that it is to the point and provides a good roadmap, but to do so it often omits concrete examples and instead writes out the algorithms in mathese. Knowing calculus should be enough.
(Curious who downvoted me, when we are in agreement... Maybe the buttons are too small and people just mis-click.)
That is, signal processing has the Nyquist rate, and it typically assumes there is an underlying signal. Does ML have either?
What does this question mean? Every band-limited signal has a Nyquist rate. Most signals of interest are well-contained within some finite bandwidth (e.g., human voice). Sampling above this rate will get you very little.
If you're building an ML model to process a certain class of sampled signal and you know, for example, 99% of the signal energy falls within a certain frequency range, that should guide your choice of sample rate. If you're sampling at too high a rate, your input layers may have far more parameters than are needed or useful.
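A sketch of that reasoning on a toy signal (NumPy; the "voice-like" signal, the 1 kHz cutoff, and the decimate-by-4 choice are all made up for illustration): first measure where the energy actually lives, then pick the sample rate accordingly.

```python
import numpy as np

fs = 16000                      # original (possibly excessive) sample rate
t = np.arange(fs) / fs
# Toy "voice-like" signal: all energy well below 1 kHz
x = (np.sin(2 * np.pi * 200 * t)
     + 0.5 * np.sin(2 * np.pi * 600 * t)
     + 0.25 * np.sin(2 * np.pi * 900 * t))

# Measure what fraction of the spectral energy falls below 1 kHz
spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
frac_below_1k = spec[freqs <= 1000].sum() / spec.sum()

# Essentially all energy is below 1 kHz, so Nyquist says ~2 kHz
# sampling suffices; decimating by 4 (to 4 kHz) loses almost nothing.
# Crude box-filter anti-aliasing plus decimation:
x_ds = x.reshape(-1, 4).mean(axis=1)

# The input to the model is now a quarter the size, with
# nearly all of the original per-sample energy preserved.
energy_ratio = np.mean(x_ds ** 2) / np.mean(x ** 2)
```

In practice you'd use a proper anti-aliasing filter (e.g. `scipy.signal.decimate`) rather than a box average, but the point stands: sampling a 1 kHz-bandlimited signal at 16 kHz just multiplies your input dimensionality for no benefit.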
Whether or not a given ML input actually contains a signal of interest doesn't seem relevant to how you sample and preprocess the signal.
Note, I have to hedge and say I am not convinced they are inapplicable. Just not convinced they are applicable.
Also note, I hadn't gotten the article to load when I fired off my concern. I keep the concern, but ack that it is not applicable to this article.
Officially I belong in the ML tribe, but all of these fields are tackling pretty much the exact same problem, and any breakthrough in one of them will translate to the others. The name of the topic has changed over the years, but the fundamental problem has remained the same. Let's call it by yet another name: approximating/extracting an unknown function from samples.
This is clearly not the case, since image signal processing, 2D Fourier transforms, etc. are alive and well.
I do still see the appeal of that. It just doesn't seem like a promising path at this point. :(