This works for a listener down the chain, but obviously can't work for performers playing together. The article mentions a producer listening to remotely located performers as they were playing together, but fails to mention how these remotely located performers can sync to each other in the first place.
I still fail to understand why this is a thing. Two possibilities:
1) The beat is created live by a human performer who can't meaningfully hear the other performer(s) in time; he or she is stuck playing blind.
2) The beat is pre-recorded - sampled or electronically generated on a sequencer. Then what's the use case in the first place? The other performer can download it beforehand and play over it live.
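On point 1, a back-of-the-envelope latency calculation shows why hearing the other performer "in time" is physically hard over distance. This is a sketch with assumed numbers: signal speed in fiber is taken as ~200 km/ms (about 2/3 the speed of light), the 15 ms processing budget and the city distances are illustrative, and the ~30 ms playability threshold is the figure commonly cited in networked-music research, not a hard limit.

```python
# Rough check on why remote performers can't stay in sync.
# All numbers are illustrative assumptions, not measurements.

def one_way_latency_ms(distance_km: float,
                       fiber_speed_km_per_ms: float = 200.0,
                       processing_ms: float = 15.0) -> float:
    """Propagation delay in fiber (~200 km/ms) plus an assumed fixed
    budget for A/D conversion, buffering, and routing."""
    return distance_km / fiber_speed_km_per_ms + processing_ms

# Commonly cited ballpark: beyond ~25-30 ms of one-way delay,
# musicians start to drag the tempo and lose lock with each other.
ENSEMBLE_THRESHOLD_MS = 30.0

for label, km in [("same city", 50), ("mid-distance", 1150), ("coast to coast", 4500)]:
    delay = one_way_latency_ms(km)
    verdict = "playable" if delay <= ENSEMBLE_THRESHOLD_MS else "too late to react"
    print(f"{label:>14}: ~{delay:.1f} ms one-way -> {verdict}")
```

The point is that only the short-haul case fits under the threshold even with generous assumptions; at continental distances the propagation delay alone eats most of the budget, so no amount of codec or buffer tuning saves a genuinely live two-way jam.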
All this is done to deliver something that mimics a live performance (but isn't, because the band members can't hear each other in real time) to a listen-only audience at the end of the chain. What's the advantage in doing so? What's the use case?