I agree this is bad engineering: don't use Android threads for real-time stuff. This should be running at module level.
A better approach, if the micro you are using allows it, is to have an interrupt when (for example) the I2S buffer is empty. I would then point the DMA to fetch the next buffer (already processed and mixed) and fire the DMA transfer.
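Roughly what I have in mind, as a sketch only (every name here is hypothetical; the exact HAL calls depend on the micro):

    /* Sketch of double-buffered I2S output: the ISR fires when the
     * current DMA transfer completes, points the DMA at the buffer
     * that was mixed in the meantime, and asks for the drained one
     * to be refilled. i2s_dma_start() and request_refill() are
     * hypothetical HAL hooks, not a real vendor API. */
    #include <stdint.h>

    #define BUF_SAMPLES 256

    void i2s_dma_start(const int16_t *buf, int n);  /* assumed HAL hook  */
    void request_refill(int16_t *buf);              /* signal mixer task */

    static int16_t buf[2][BUF_SAMPLES];
    static volatile int active = 0;    /* buffer the DMA is reading */

    void i2s_tx_complete_isr(void)
    {
        active ^= 1;                             /* swap buffers       */
        i2s_dma_start(buf[active], BUF_SAMPLES); /* fire next transfer */
        request_refill(buf[active ^ 1]);         /* mix next block now */
    }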
> at least 100ms
This invites serious latency problems. But I understand the approach if your buffer-filling/reading procedures are slow or unreliable.
I disagree with the timer thing. There are systems that provide precise timers for media (for example Win32 multimedia timers [1], which I've never used myself but know exist).
[1] https://docs.microsoft.com/en-us/windows/win32/multimedia/ab...
It doesn't mean there is 100ms of latency, it just means that 100ms of audio is buffered so that you have ~100ms of leeway about when your app's audio thread is scheduled. Changes to the audio stream, such as stop/start/volume control, can be achieved with much lower latency by rewriting the buffer, by applying the changes lower down the stack where the buffers are smaller, or both. By default PulseAudio will buffer ~2000ms of audio from clients [1].
Anything above 10ms (some say 20ms at most) is prohibitive for real-time audio processing, especially for musical instruments. Imagine an electronic drum set: if you hit the snare and the audio comes out of the speakers 100ms later, I bet you'll notice it :)
Ah but GP wasn't talking about real-time in that context ("And if you're not...").
You can still have 100ms buffer without 100ms latency: within 10ms of the drum being hit, write 100ms of the drum sample into the playback buffer and immediately trigger its playback (or write it into the buffer starting at the cursor position that is just about to be played).
The only trouble is when you need to modify some of that 100ms before it is played back, for example if the user hits another drum 50ms later. In that case it becomes more complex, you'd have to overwrite some of the existing buffer with a new mix of both drum samples. The complexity is not worth it for that kind of app.
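For illustration, that overwrite step might look something like this (the ring-buffer layout, play cursor, and saturation limits are all assumptions of the sketch):

    /* Sketch: mix a second drum sample into audio that is already
     * queued, starting at the current play cursor. A real version
     * would also have to synchronize with the hardware read position. */
    #include <stddef.h>
    #include <stdint.h>

    void mix_in(int16_t *ring, size_t ring_len, size_t play_pos,
                const int16_t *hit, size_t hit_len)
    {
        for (size_t i = 0; i < hit_len; i++) {
            size_t idx = (play_pos + i) % ring_len;
            int32_t s = (int32_t)ring[idx] + hit[i];  /* additive mix    */
            if (s >  32767) s =  32767;               /* clamp to 16-bit */
            if (s < -32768) s = -32768;
            ring[idx] = (int16_t)s;
        }
    }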
For a simple app like a video player, the audio stream is much more predictable, so you can buffer more. Volume changes and pausing can still be applied with no perceptible latency by modifying the existing buffered data [1].
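As a sketch of that kind of rewrite, a pause could be applied by ramping the already-buffered samples down to silence over a few milliseconds (names and buffer layout are again assumptions):

    /* Sketch: fade the queued samples to zero so a pause is heard
     * within a few ms instead of after the whole buffer drains;
     * anything queued beyond the ramp would then be zeroed or dropped. */
    #include <stddef.h>
    #include <stdint.h>

    void pause_by_rewrite(int16_t *ring, size_t ring_len,
                          size_t play_pos, size_t ramp_len)
    {
        for (size_t i = 0; i < ramp_len; i++) {
            size_t idx = (play_pos + i) % ring_len;
            float gain = 1.0f - (float)i / (float)ramp_len; /* linear ramp */
            ring[idx] = (int16_t)(ring[idx] * gain);
        }
    }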
> within 10ms of the drum being hit, write 100ms of the drum sample into the playback buffer
You can only do that if you can predict the future and fill the buffer with data you don't have yet (samples from the future). Otherwise, you still have to wait for 100ms of samples before they can be output. So if you have to wait 100ms for the samples, the output happens 100ms later, hence 100ms of latency.
100ms buffers can be fine for a video player. For real-time you (usually) do this: two buffers of 10ms each. While one buffer is playing, you fill the other with real-time data. After 10ms has passed, you start playing the buffer holding the real-time data while the first one gets refilled.
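A minimal sketch of that two-buffer loop, assuming a blocking device write (render_audio() and audio_write() are hypothetical stand-ins):

    /* Sketch: while the device drains one 10 ms period, the app
     * renders the other, then they swap. audio_write() is assumed to
     * queue the period and block while the device is still busy. */
    #include <stdint.h>

    #define RATE   48000
    #define PERIOD (RATE / 100)          /* 10 ms at 48 kHz = 480 frames */

    void render_audio(int16_t *buf, int frames);      /* fill live data */
    void audio_write(const int16_t *buf, int frames); /* blocking write */

    static int16_t period[2][PERIOD];

    void playback_loop(void)
    {
        int cur = 0;
        for (;;) {
            render_audio(period[cur], PERIOD); /* generate fresh samples */
            audio_write(period[cur], PERIOD);  /* device plays it next   */
            cur ^= 1;                          /* swap to the other one  */
        }
    }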