Timers in general will jitter.
Audio processing usually involves a callback to process an audio block (array of audio samples... floats).
To simplify things: this callback runs on a realtime thread and copies the block to the desired output, and you're guaranteed it will be continuous between calls (though you can cause dropouts/stutter if you hold the callback for too long).
With setTimeout/setInterval approaches, each callback WILL have some offset.
But if you need to synchronize things or your metronome needs to run with other audio, it WILL be jerky.
That's why a proper metronome would also be 'sample accurate'.
WebAudio / AudioContext was made exactly to solve that!
Similar to common audio processing standards, you end up getting a callback, and you usually need WebAssembly/C++ under the hood to keep the golden DSP rule: never allocate in the block-processing callback.
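To make that concrete, here's a minimal sketch of the pre-scheduled approach (the BPM, lookahead window, and oscillator "click" are illustrative assumptions, not anyone's actual implementation): a jittery setInterval only wakes the code up, while the AudioContext clock decides exactly when each tick sounds.

    // Sketch: sample-accurate ticks scheduled ahead of time on the AudioContext clock.
    const ctx = new AudioContext();
    const bpm = 120;
    const secondsPerBeat = 60 / bpm;
    let nextTickTime = ctx.currentTime + 0.1; // first tick shortly after start

    function scheduleClick(time) {
      // A short oscillator burst stands in for a real click sample.
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      osc.frequency.value = 1000;
      gain.gain.setValueAtTime(1, time);
      gain.gain.exponentialRampToValueAtTime(0.001, time + 0.03);
      osc.connect(gain).connect(ctx.destination);
      osc.start(time); // sample-accurate start
      osc.stop(time + 0.03);
    }

    // The timer jitters, but it only needs to keep the schedule topped up.
    setInterval(() => {
      while (nextTickTime < ctx.currentTime + 0.2) { // ~200ms lookahead
        scheduleClick(nextTickTime);
        nextTickTime += secondsPerBeat;
      }
    }, 25);

(In practice you'd create or resume the AudioContext from a user gesture because of autoplay policies.)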
btw, on Firefox 69 (macOS) only the pre-scheduled audio made a sound :)
The next attempt is AudioWorklet, which runs in a separate JS context and on the audio thread. Sadly it's still exclusive to Chrome, so I can't be bothered to invest heavily in it.
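For reference, a bare-bones AudioWorklet looks roughly like this (the file and processor names are made up for illustration):

    // clicker-processor.js: runs on the audio thread, in its own JS context.
    class ClickerProcessor extends AudioWorkletProcessor {
      process(inputs, outputs) {
        const out = outputs[0][0]; // first channel of the first output
        for (let i = 0; i < out.length; i++) {
          // Write samples here, e.g. a click whenever a sample counter crosses a beat boundary.
          out[i] = 0;
        }
        return true; // keep the processor alive
      }
    }
    registerProcessor('clicker-processor', ClickerProcessor);

    // On the main thread:
    // const ctx = new AudioContext();
    // await ctx.audioWorklet.addModule('clicker-processor.js');
    // new AudioWorkletNode(ctx, 'clicker-processor').connect(ctx.destination);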
> btw, on Firefox 69 (macOS) only the pre-scheduled audio made a sound :)
Similar experience here, same version. Also, "stop" didn't stop the pre-scheduled audio; it continued until 20 ticks.
Once you get the hang of it, it’s all about DSP: FFT, filter design, etc.
Also, most actual DSP code is C/C++, depending on deployment needs.
Another approach is to try Faust or Pure Data. Look those up and see what looks like a good starting point for you.
I don't know what the status is with shared array buffers and WASM. At one point I was very excited about the potential of WASM to get around some of the problems above, but last I checked, this was being partially held back by the same security concerns.
- It's fast (the #1 metronome app on the iOS App Store taking 7+ seconds to start up on my iPhone was the reason I built this)
- It has keyboard shortcuts on desktop (use arrow keys to adjust BPM and space bar to toggle playing)
- It's a progressive web app so you can add it to your home screen and it'll behave like an app
- At 60bpm it's clearly noticeable.
- At 120bpm every 12-15th tick is noticeably delayed.
- At 300bpm it's playing two ticks at very close to the same time about every 5th tick.
I want to weigh in about this whole perceivable jitter thing.
I think it’s important when making music-related programming decisions to recognize there’s a whole area of perception in between consciously noticing “hey, that metronome is off!” and the jitter actually being imperceptible. In that area the feel and impact of music can be altered while no one can pinpoint why.
For safety’s sake I think sub-millisecond timing in controllers and things like metronomes needs to be the standard.
The ideal should be audio rate accuracy, when it can be reasonably achieved.
It’s really bad to fall in the trap of thinking that just because no one can point out a problem, it isn’t having an effect. Especially with audio where people have trouble explaining what they are hearing.
I am not sure exactly what the tests and methodologies are that different people refer to, but I do know that when I have brought up this concern to instrument and software developers who are operating at the cutting edge of this stuff and really should know the answer, they never bother to debate it until you get to the difference between sub-millisecond and audio-rate.
It is axiomatic that anything that has less resolution than audio rate can be perceived, under the correct circumstances.
For example, if you had two metronomes which each played the same wide-frequency burst and had independent jitters on their start time, the combined sound would likely shift in timbre due to the phase relationship of the summed waves.
If you had those two outputs going to a stereo output, one on the left channel, and one on the right, the resulting effect should be that the "click" will randomly pan around the soundfield in the listener's perception.
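Some back-of-the-envelope numbers behind those two scenarios (purely illustrative; the 0.5ms offset is just an assumed amount of jitter):

    // Summed into one channel, a delayed copy acts like a comb filter:
    // the first cancellation notch sits at 1 / (2 * delay).
    const offsetSec = 0.0005;                 // 0.5ms of jitter between the two clicks
    const firstNotchHz = 1 / (2 * offsetSec); // = 1000 Hz, well inside the audible band

    // Split across left/right, interaural time differences of only ~0.6-0.7ms already
    // correspond to a fully lateralized source, so jitter of this size can audibly
    // move the click around the stereo field.
    console.log(firstNotchHz);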
So, I guess it also depends not only on your use case but also on how much of a hassle it is to get the right resolution. I would be really sad if I subtly messed up some musician's sense of time for years in the future because they were practicing diligently with some jittery metronome I made.
This has happened to me on many occasions. Having a part that needs to be in-the-pocket accidentally shifted (due to MIDI latency on weird gear or bad sample editing) just throws everything off in a way that's really hard to even identify.
I use an encapsulated "steller" library to do this with JS on the web. Steller can sync graphics with audio too.
Really wish the joint "get time stamp" function that gives the current DOMHighResTimeStamp and the synchronized AudioContext currentTime were available on all browsers.
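Where it is available, that seems to be AudioContext.getOutputTimestamp(); here's a rough sketch of mapping audio-clock time to the DOM clock, with a cruder fallback (the fallback assumes the two clocks advance together, which can drift):

    // Map a time on the audio clock to a performance.now()-style timestamp,
    // e.g. to schedule a visual flash for an already-scheduled tick.
    const ctx = new AudioContext();

    function audioTimeToPerformanceTime(audioTime) {
      if (ctx.getOutputTimestamp) {
        const { contextTime, performanceTime } = ctx.getOutputTimestamp();
        return performanceTime + (audioTime - contextTime) * 1000;
      }
      // Fallback: anchor to "now" on both clocks (coarser, can drift over time).
      return performance.now() + (audioTime - ctx.currentTime) * 1000;
    }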
Steller - https://github.com/srikumarks/steller
Tala Keeper - http://talakeeper.org/talas.html (targeted at South Indian classical music forms, so not all may get the terms)
Edit: Fixed autocorrect errors
Edit: https://hello-magenta.glitch.me seems to be the use case. Cool and inevitable that this would be worked-on, but after spending years with both algorithmically-supported composition and all-human composition, I'm skeptical. There is a special sauce that machines will never grok. Or, I'm wrong.
In a 60fps app that's as much as 5 frames (~83ms) out of sync. Users would notice that.
setInterval's minimum interval is ~15ms, so shouldn't something like...
setInterval(() => Date.now() % DESIRED_TICK_INTERVAL < 15 && tickTheMetronome(), 15)
...get you within 15ms accuracy, and it can probably be reduced in Chrome et al?
The above example will indefinitely beat at 60bpm (10ms accuracy).
I did start writing a blog post about it but never finished it. This article is a much better and more in-depth analysis of the problem anyhow.
And if you can, move away from the browser to native.
And if you can, use a hardware MIDI clock or just a standalone metronome.
I used to use Google's metronome, but the constant hiccups were annoying. I wonder if they are using Web Audio scheduling or setInterval().