Reverb is the result of delayed sound arrivals, and in large spaces the feed to distant speakers is deliberately delayed to minimize dead spots caused by wave interference.
Which is to say musical experience is complicated, and hence live sound design is complex in ways that aren't obvious to a casual observer.
The rule of thumb is that sound travels about one foot per millisecond (30cm per millisecond). So at 30m the sound already takes roughly 90ms to arrive, and ten milliseconds of latency wouldn't change the spatial perception very much even without correction. Of course, that 90ms is also plenty of time to correct for the latency if you want to be spot on.
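Rough numbers, if anyone wants to play with them. A back-of-envelope Python sketch (speed of sound taken as 343 m/s; the delay-tower framing is my illustration, not from the article):

```python
# Back-of-envelope: sound at ~343 m/s is roughly 1 ft (30 cm) per millisecond.
SPEED_OF_SOUND_M_S = 343.0

def propagation_delay_ms(distance_m: float) -> float:
    """Time for sound to cover distance_m, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def tower_delay_ms(distance_m: float, system_latency_ms: float = 0.0) -> float:
    """Electronic delay to feed a distant speaker so its output lands in
    step with the acoustic wavefront from the stage; any existing system
    latency is simply absorbed into that budget."""
    return propagation_delay_ms(distance_m) - system_latency_ms

print(round(propagation_delay_ms(30.0), 1))  # 87.5 -> the ~90ms above
print(round(tower_delay_ms(30.0, 10.0), 1))  # 77.5 -> 10ms of latency absorbed
```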
My wild-ass guess is that a microphone on the drone could be used to sync the remote speakers with the piano speakers and phase-align the sources.
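Conceptually that's just finding the cross-correlation peak between the mic capture and the reference feed, then delaying whichever source is early. A toy sketch of the idea (my own illustration, nothing from the article; assumes numpy and a fixed 48kHz sample rate):

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumed)

def estimate_lag_ms(reference: np.ndarray, captured: np.ndarray) -> float:
    """How far `captured` lags `reference`, from the cross-correlation peak."""
    corr = np.correlate(captured, reference, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(reference) - 1)
    return lag_samples / FS * 1000.0

# Toy check: delay a 100ms noise burst by 10ms (480 samples) and recover it.
ref = np.random.randn(4800)               # stand-in for the piano feed
cap = np.concatenate([np.zeros(480), ref])  # "drone mic" signal, 10ms late
print(estimate_lag_ms(ref, cap))  # ~10.0
```

A real system would need something more robust than a raw correlation peak (reverb and noise smear it), but the principle is the same.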
Basically, ordinary audio signal processing succumbed to cheap compute about twenty years ago.
Haas would like to disagree: if you experience a second wavefront within ~2-50ms, you'll still localise to the first, even if the second is up to 10dB louder.
Unless I misunderstand the psycho-acoustic point of the comment, it would seem that we are in agreement about the effects of 10ms latency on audio experience.
My intent was that 10ms of latency corresponds to about 3m. The difference between perceiving reflections at 33m/110ms versus 30m/100ms is unlikely to have a significant impact on a live music experience...though it might matter in a recording studio or in live performance monitoring.
Sure, but you can't begin to play the sound until you have enough bits to decode what was sampled, and that always means waiting at least a couple of milliseconds. Conventional analog radio lets you play out the modulating waveform with sub-millisecond delay.
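To put numbers on that floor: the receiver has to buffer at least one codec frame before it can decode anything, so the frame size alone sets a lower bound on latency. Quick illustration (the frame sizes here are my assumption, not from the article):

```python
def frame_latency_ms(frame_samples: int, sample_rate_hz: int) -> float:
    """Minimum wait before decoding can start: one full codec frame."""
    return frame_samples / sample_rate_hz * 1000.0

# Hypothetical frame sizes, just to show the order of magnitude:
print(frame_latency_ms(128, 48_000))  # ~2.7ms -> "a couple of milliseconds"
print(frame_latency_ms(480, 48_000))  # 10.0ms -> a typical 10ms codec frame
```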
My understanding is that that's why, until now, no serious electric piano has ever offered Bluetooth audio connectivity for the player.
[Edit]: they use another tech with zero latency, as described in the article. Thanks for the replies.