By the looks of it, those functions extract features (like frequency peaks). You do that once per sound. The output could serve as input to an NN, in which case the library would act as a tokenizer for sound.
1) Tuning the hyperparameters of your audio preprocessing is a pain if it's a separate CPU preprocessing step: you have to redo the preprocessing every time you want to tune your audio feature hyperparams.
2) It's quite common to use torchaudio spectrograms, etc. purely because they are faster (I can link to a handful of recent high-impact audio ML github repos if you like)
3) If you use nnAudio, you can actually backprop the STFT or mel filters and tune them if you like. With that said, this is not so commonplace.
4) Sometimes the audio is GENERATED by a GPU. For example, in a neural vocoder, you decode the audio from a mel to a waveform. Then you compute the loss between the true and predicted audio mel spectrograms (see the sketch below). You can't do this with these C++ features. (Again, I can link a handful of recent high-impact audio ML GitHub repos if you like.)
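To make point 4 concrete: a minimal sketch, assuming PyTorch + torchaudio, with random tensors standing in for the real batch and the vocoder's output:

    import torch
    import torchaudio

    device = "cuda" if torch.cuda.is_available() else "cpu"
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=22050, n_fft=1024, hop_length=256, n_mels=80
    ).to(device)

    # Stand-ins: in practice true_audio comes from your dataset and
    # pred_audio from your vocoder's forward pass (so it carries a graph).
    true_audio = torch.randn(8, 22050, device=device)
    pred_audio = torch.randn(8, 22050, device=device, requires_grad=True)

    # Log-mel L1 loss, computed entirely on the GPU; gradients flow back
    # through the mel transform into the generated waveform.
    eps = 1e-5
    loss = torch.nn.functional.l1_loss(
        torch.log(mel(pred_audio) + eps),
        torch.log(mel(true_audio) + eps),
    )
    loss.backward()  # impossible if mel were a fixed CPU/C++ preprocessing step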
Seriously, nobody is lugging a GPU around to interact with their most frequently used micro-computing platform: their headphones, which already represent a new and extraordinary era of "accelerated component" market expansion.
The 7 microphones in your earpiece and the 6 speakers pushing air into your head are perhaps not quite as close to the GPU as they need to be .. but they already have a DSP, and there is already a silicon battle going on among the vendors.
>You can't do this with these C++ features.
Yes, and I think the point, in the end, is to use AI to write better C++ code and to design better, cheaper, smarter silicon, as always (and actually ship it) ..
It is INCREDIBLY common to use multi-scale spectral loss as the audio distance / objective measure in audio generation. These losses have some issues (e.g. they aren't always well correlated with human perception) but they are the known-current-best.
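A minimal hand-rolled sketch, assuming PyTorch (in practice people often reach for a tested implementation, e.g. auraloss's multi-resolution STFT loss):

    import torch

    def multiscale_spectral_loss(pred, true, fft_sizes=(512, 1024, 2048)):
        # Compare magnitude spectrograms at several resolutions: short FFTs
        # catch transients, long FFTs catch pitch and harmonics.
        loss = 0.0
        for n_fft in fft_sizes:
            window = torch.hann_window(n_fft, device=pred.device)

            def mag(x):
                return torch.stft(x, n_fft, hop_length=n_fft // 4,
                                  window=window, return_complex=True).abs()

            s_pred, s_true = mag(pred), mag(true)
            loss = loss + (s_pred - s_true).abs().mean()  # linear term
            loss = loss + (torch.log(s_pred + 1e-5) -
                           torch.log(s_true + 1e-5)).abs().mean()  # log term
        return loss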
Backpropping filter coefficients is clever, but it hasn't really caught on much. Google also tried with LEAF (https://github.com/google-research/leaf-audio) to have a learnable audio filterbank.
Anyway, in audio ML what is very common is:
a) Futzing with the way you do feature extraction on the input (oh, maybe I want CQT for this task, or a different-scale mel, etc.) (see the sketch after this list)
b) Doing feature extraction on generated audio output, and constructing loss functions from generated audio features.
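For (a), the practical win is making the front-end a swappable hyperparameter instead of a baked-in offline pass. A sketch: the MelSpectrogram calls are real torchaudio; the CQT line assumes nnAudio's features.CQT API, so treat that part as illustrative:

    import torchaudio

    def make_frontend(kind, sr=22050):
        if kind == "mel80":
            return torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=80)
        if kind == "mel40":
            return torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_mels=40)
        if kind == "cqt":
            from nnAudio import features  # optional dependency; API may differ
            return features.CQT(sr=sr)
        raise ValueError(kind)

    # Trying a different representation is now a one-line config change,
    # not a re-run of an offline extraction pass.
    frontend = make_frontend("mel80")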
So, as I said, I don't exactly see the utility of this library for deep learning.
With that said, it is definitely nice to have really high-speed, low-latency audio algorithms in C++. I just wouldn't market it as "useful for deep learning" because
a) during training, you need more flexibility than non-GPU, non-differentiable methods can offer
b) if you are doing "deep learning" then the model you run at inference will presumably be quite large, and there will be a million other things you'll need to optimize to get real-time inference, or inference on CPUs, to work well.
This is just my gut reaction. It seems like a solid project; I just question the one selling point of "useful for deep learning", that's all.
This is a really broad topic. I began studying it about 5 years ago.
Can you start by suggesting what task you want to do? I'll throw out some suggestions, but you can say something different. Also, you are welcome to email me (email in my HN profile):
* Voice conversion / singing voice conversion
* Transcription of audio to MIDI
* Classification / tagging of audio scene
* Applying some effect / cleanup to audio
* Separating audio into different instruments
etc
The really quick summary of audio ML as a topic is:
* Often people treat audio ML as vision ML, by using spectrogram representations of audio. That said, 1D (waveform-domain) models are sometimes just as good, if not better, but they require very specific familiarity with the audio domain.
* Audio distance measures (loss functions) are pretty crappy and not well correlated with human perception. You can say the same thing about vision distance measures, but a lot more research has gone into vision models, so we have better heuristics around vision stuff. With that said, multi-scale log mel spectrogram loss isn't that terrible.
* Audio has a handful of little gotchas around padding, windowing, etc. (see the sketch after this list)
* DSP is a black art and DSP knowledge has high ROI versus just being dumb and black boxy about everything.
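To make the padding gotcha concrete (assuming PyTorch): the frame count and frame timing of an STFT depend on the center/padding convention, so two spectrograms computed with different conventions silently fail to line up:

    import torch

    x = torch.randn(22050)  # 1 second at 22.05 kHz
    n_fft, hop = 1024, 256
    w = torch.hann_window(n_fft)

    centered = torch.stft(x, n_fft, hop_length=hop, window=w,
                          center=True, return_complex=True)   # pads n_fft//2 per side
    uncentered = torch.stft(x, n_fft, hop_length=hop, window=w,
                            center=False, return_complex=True)

    print(centered.shape[-1], uncentered.shape[-1])  # 87 vs 83 frames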
I'm considering doing some ML stuff for a mobile DJ app, like beat/BPM detection, instrument/vocal separation, etc. Have you seen anything recent that might be efficient enough to run on a mobile device and process a track in a reasonable amount of time (less than song length)?
I may not email, as it isn't a serious pursuit, more a curiosity. Thank you for the invitation! My current fascination is with separation and classification. And modular synthesis, where I guess the DSP stuff comes in when translating into the digital domain.
A GPU is useful, but DSPs are also still useful. For example, there is a compelling case for having frameworks like AudioFlux, JUCE and others around, to support portability and competitive realtime analysis, which matters in this domain, where things like Qualcomm's ADK are quite literally being put inside people's ears...
Not to say that big-AI shouldn't have audio analysis as a compelling sphere of application, but rather that, until the chips arrive, in-ear AI is less of a specification/requirement than in-ear DSP.
We don't need AI to isolate discrete audio components and do things with them, in-ear. Offline/big-AI, however, is still compelling. But we don't yet have GPU neckbands ..
"C/C++" is just English grammar being used to mean C and C++. Naturally, not everyone was that great in English class, especially those who never attended WG21 and WG14 meetings, or worked for said companies, and who enjoy being pedantic online about it.
To make it easier for those who skipped English classes:
"A forward dash can be used to state alternatives. A sentence that uses a forward slash in this way can be read to mean that any or all of the stated words could apply."
A library can be written in one or the other. Note that most of your "evidence" is job postings, which are generally written by non-technical folks who often mistake JavaScript for Java. But no, there is no "C/C++" language or library. There are skills that help you in both. There is code that compiles with both compilers. There is no WG21 for C/C++.
Yes, Visual Studio supports both C and C++, but those are, in fact, two different languages.
You'll struggle to find the links you promised at the end because C and C++ are run by two different groups, meaning you won't be able to link us to single sources.
Of course there is no "C/C++" language, only people who failed English grammar class and have yet to update their English parser and semantic analysis.
From Herb Sutter, a name whose relevance to WG21 I hope you know:
"Keynote: Safety, Security, Safety and C / C++ - C++ Evolution"
Context is important in English. Your example is, again, a different context.
A library is one or the other. Talking about safe systems languages, where C and C++ share memory safety issues, is very different from promoting a library. Thanks for the opportunity to clarify here. You're confusing context with lack of technical precision.
Nope, I just have better things in life to do than be pissed off on the Internet when people use English grammar rules correctly.
We both made it quite clear where we stand, so there is hardly any value in pointing out uses of the C/C++ expression by other key WG14 and WG21 members, papers, or products.
I only jumped in when I saw your inaccurate, condescending post. I'd hate for people to be misled by your confidence in making a pretty simple English mistake. Context always matters in English and precision matters in technical discussions.
"C/C++" has meaning in some contexts and reveals ignorance when used out of context. The post title here uses it incorrectly, but yes, there are ways to use it correctly. We disagree on that because you can't tell the difference in the two. So it is, but the actual explanation of usage is there for others who do care if they are perceived as non-technical in technical environments.
It is true that there is C code that is conforming C++ code. However, I would say that if you're using a C compiler with extern "C" in the headers for C++ linker compatibility (as this library does), then saying "C++" is about as misleading as saying a Rust library is C++ because you can link to that too.
As far as compatibility and "history" go, the languages are different enough now. There are both features in C that do not exist in C++, and code that is conforming C but would be UB in C++. Targeting "C/C++" (for real) is usually a dumb idea; it's better to pick one and settle on that.
If it's C, just say so. Everyone knows what extern "C" is; you don't need to confuse people.
Something very close, but that's not what you would expect from something that markets itself as a C++ library, IMHO. Especially in 2024, most people would hope (or assume) that "C++" means at least C++11.
Definitely doesn't count as _lying_, but still underwhelming.
Right, but C++ started as an extension of C and is mostly compatible and historically you could compile C with the C++ compiler. I don't think it's a good comparison.
> historically you could compile C with the C++ compiler.
not any C, only the C++-compatible subset.
    int* foo = malloc(sizeof(int));
has never worked in C++, for instance, while it's valid C (C++ forbids the implicit conversion from void*). Code that "worked" is code that people actually made the effort to express in a way compatible with a C++ compiler.
You must admit that "C/Python" doesn't quite have the same cachet as "C/C++". C and C++ also share the same name, C++ was born as a derivative of C ("C with Classes"), and they have similar syntax, logical constructs, etc. Python is not even a systems language.
- https://essentia.upf.edu/
- https://github.com/marsyas/marsyas
- https://github.com/ircam-ismm/pipo
- https://github.com/flucoma/flucoma-core/tree/main/include/al...