Unless I was allowed to use jackd, but that probably doesn't count.
BTW: I think you meant to say "parallel-port-to-DAC", where "DAC" all too often meant just a simple cobbled-together resistor ladder.
Who cares about HiFi when you can have actual sampled sounds instead of tinny beeps? :-)
It shows just how much care maintainers should put into creating a smoother build experience. Anyway, I hope the author of the article will post a follow-up with his new library.
(And it's only 10,000 lines.)
I place great stock in the organization of my files as well as my code, and rarely let them go above 1000 lines each. Perhaps this is a weakness.
My hopes have just been dashed. I was really hoping you'd put a smooth cross-platform build system at the top of your list.
I still wish you all the best anyway.
On the latency: do you (or anyone reading this) happen to know what the current state of low-latency audio (say <5 ms) on linux is? A couple of years ago we did an application for human auditory system research which required stable (as in, never dropping any buffers) low latency. The original idea was to do it for both windows and linux; the gui would be Qt anyway, so that covered the cross-platform part already. We were using an RME Multiface, and on windows we'd just hook it up and select e.g. 48kHz, a 128-sample buffer size, full duplex, and it just worked without any problems with the standard ASIO samples. On linux we initially hardly managed to get anything out at all. And once that was fixed, it was impossible to get down to the required latencies without dropping buffers. Even with the semi-realtime kernel patches, or whatever they're called.
Anyway, we gave up back then but I still wonder: was it just our lack of knowledge, or was linux back then (about 7 years ago) really not up to the task, or was it a driver problem? And what is the state today?
1. Install and set up JACK. Enjoy your lack of latency. I haven't profiled it scientifically, but I've tried microphone monitoring with filters layered on and couldn't detect any latency between me speaking and my voice coming out the speakers.
2. Modern linux uses pulse. Pulse is kind of jingoist and slightly hostile to foreign presence. You'll need to edit its default.pa file so it doesn't load the udev-detect module and grab the soundcard for itself, but instead loads the jack-source and jack-sink modules. You'll also need to write a script to do the module loading, because modern desktop linux doesn't have a race-free way to start daemons when a user logs in, so you'll have to run it manually once you log in. That script looks like this:
pacmd load-module module-jack-source channels=1
pacmd load-module module-jack-sink channels=2
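For reference, the default.pa edit looks roughly like this (an excerpt, not a full file; exact paths and shipped contents vary by distro):

```
# /etc/pulse/default.pa (excerpt)
# Stop Pulse from grabbing the soundcard for itself:
#load-module module-udev-detect
# (module-jack-source / module-jack-sink are loaded by the login
# script above, once jackd is actually running.)
```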
For best results, you should install a low-latency or realtime kernel. On Ubuntu, low-latency kernels are easy to find in the repos, and Fedora has a realtime kernel through the PlanetCCRMA repos (although that one randomly crashed on my last laptop, which was about 5 years old). On other distros you may have to compile your own patched kernel; there's plenty of documentation on this.
There are also distros aimed specifically at audio work that come set up out of the box with good low-latency settings (Ubuntu Studio and KXStudio, for example).
On my current laptop, I'm running jack at 48kHz, with 3 buffers of 64 samples, using a Focusrite Scarlett 2i4 interface. I haven't really noticed any significant dropouts so far. This is on Ubuntu with a low-latency kernel.
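For the curious, those settings amount to a nominal 4 ms of buffering. A sketch of what the jackd invocation and the arithmetic look like (the hw:USB device name is an assumption; list yours with `aplay -l`):

```shell
# Typical jackd invocation for these settings (device name is a guess):
#   jackd -d alsa -d hw:USB -r 48000 -p 64 -n 3
# Nominal buffering latency = periods * frames_per_period / sample_rate:
LATENCY_US=$((3 * 64 * 1000000 / 48000))
echo "${LATENCY_US} us"   # 4000 us = 4 ms
```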
With the PulseAudio JACK sink, pulse is basically piping its output into jack, which is especially convenient if I want to do something like record audio streaming from a browser.
More interestingly, if you are connected to an audio device, you can use a GraphListener. A GraphListener invokes a callback on you every time a block gets processed. As a consequence, it's also entirely possible to do things like live-stream over the internet. I'm debating the merits of a backend that advances the simulation without playing: I can see potential uses for it, I'm just not sure if they're good ones.
The problem is that the authoritative source sits inside the Firefox tree. Mozilla makes a standalone version (that doesn't require that tree) available on GitHub, but you're not going to have much luck contributing back if your contributions break the Firefox build.
The proposed solution would be to make the GitHub repo the authoritative source and patch the Firefox version. But that shifts the maintenance burden in a way that runs counter to the main developers' interests. I can see why there's no enthusiasm.
I've contributed to libcubeb and had no problems with the above. But my contributions were not of the "lol I don't like automake so let me replace your entire buildsystem" kind. Honestly if that's the kind of thing you're trying to do, I think almost any open source project will push back.