While neat, all the Web Audio stuff I've seen fundamentally relies on setTimeout() for sequence playback. This only works at all because the API lets you schedule playback events ahead of time against its special high-resolution timer. To see where it falls down, press play and then switch to a different browser tab.
The audio processing itself keeps running fine in the background, but the API provides no facility at all for sequencing or for user-supplied control-rate data that runs as part of the audio graph. That's unfortunate, and it makes it hard to build real applications.
If somebody knows a solution for scheduling playback events while in a background tab, please speak up. In my own experiments, the best I could come up with was synthesizing control-like signals inside a ScriptProcessorNode, though I gave up before I got it working. The other option would have been to start scheduling 500ms ahead once the tab goes into the background, but then you have to know the switch is coming.
Background: I am working on a live music competition game (with video) for freestyle rap. We require an accurate tick in order to synchronize the beat playback across many clients during a freestyle rap battle.
When developing client-side JavaScript I get the impression that time is running on a thin sheet of rubber. I've also wanted a more reliable tick, but I usually have to move on to other parts of what I'm working on.
One useful technique is to create a fixed tick that all clients can receive; at least that lets you nail down part of the rubber sheet. I emit a shared tick from the websocket server, then calculate, once a second, how it differs from client time. To give clients a synchronized event, I "schedule ahead" using setTimeout and this client-server time diff.
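The offset arithmetic behind that scheme is simple enough to sketch. Assuming the server broadcasts its timestamp once a second and you can measure the round trip (function names here are made up; the half-RTT latency estimate is the usual NTP-style assumption):

```javascript
// Estimate the server-minus-client clock offset (ms) from a server timestamp,
// the client clock at the moment it arrived, and the measured round-trip time,
// assuming one-way latency is half the RTT.
function clockOffset(serverTime, clientTimeAtReceive, rttMs) {
  return serverTime + rttMs / 2 - clientTimeAtReceive;
}

// How long to wait (ms) before firing an event scheduled for a given time on
// the *server's* clock, given the offset and the current client clock.
function msUntil(serverTargetTime, offset, clientNow) {
  return (serverTargetTime - offset) - clientNow;
}

// Usage sketch: setTimeout(fireBeat, msUntil(target, offset, Date.now()));
```

Smoothing the offset over several samples (e.g. keeping the one with the lowest RTT) helps, since any single round trip can be skewed by network jitter.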
Works in the latest version of Chrome on OS X for me. Unfortunately, I think this line in drummachine.js shows it won't really work in FF/IE:
context = new webkitAudioContext();
(I believe FF has a separate mozAudioContext(); can't wait until this stuff is standard across browsers.)
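The usual way to soften the prefix problem is to feature-detect the constructor instead of hard-coding webkitAudioContext. A small sketch (taking the global object as a parameter just for testability; the moz name is speculative, per the comment above):

```javascript
// Return whichever AudioContext constructor the environment exposes, or null.
function getAudioContextCtor(global) {
  return global.AudioContext ||
         global.webkitAudioContext ||
         global.mozAudioContext ||  // speculative; see caveat above
         null;
}

// Usage in a page:
//   var Ctor = getAudioContextCtor(window);
//   var context = Ctor ? new Ctor() : null;  // null => no Web Audio support
```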
MIDI is a pretty recent addition alongside AudioContext, so I don't know when you'll ever see it in Firefox or Safari or anything like that. Maybe Safari will get it faster if it's done at the WebKit level and not the Chromium level...
It's too bad it totally chokes as soon as the tab goes into the background. I wonder whether Google has done any work to ensure it's safe to let web pages send arbitrary byte data to MIDI devices? It seems like that wouldn't necessarily be safe; maybe they're filtering the data behind the scenes?