The fact that this does bidirectional audio is icing on the cake; can't wait to share this with some quarantined bands.
Magic, well done.
We now all live in different cities. Due to the pandemic and lockdowns we ended up reconnecting, and despite the distance we are jamming and working on songs like we never left the drummer's basement.
Strongly recommend jamulus.io. It is fast enough to actually jam on music with other players.
Roundtrip latency was under 30ms, which you definitely feel but your brain can compensate for.
The funny overall effect, though, is that because you're delayed and everyone else is delayed, you end up sort of waiting for the other player, and they end up waiting for you. So instead of the classic band problem where everyone speeds up, everyone progressively slows down as the song progresses.
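For a sense of where that ~30 ms round trip comes from, here's a rough latency budget. All the numbers below are illustrative assumptions (typical small audio buffers, a low-latency codec frame, a same-region network path), not measurements from Jamulus or SonoBus:

```python
# Rough latency budget for a networked jam session.
# Every figure here is an assumption for illustration only.

capture_buffer_ms = 2.7        # 128 samples at 48 kHz on the sending side
codec_frame_ms = 2.5           # a low-latency Opus frame
network_one_way_ms = 8.0       # a decent same-region internet path
playback_buffer_ms = 2.7       # 128 samples at 48 kHz on the receiving side

one_way = capture_buffer_ms + codec_frame_ms + network_one_way_ms + playback_buffer_ms
round_trip = 2 * one_way
print(f"one-way ~= {one_way:.1f} ms, round trip ~= {round_trip:.1f} ms")
```

With those assumptions you land right around a 30 ms round trip, which matches the "you feel it but can compensate" experience described above.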
We haven't tried it with our drummer actually playing drums; he's just been recording against other parts we provide. I don't think it'll work very well, because his acoustic kit will be too loud: he'll hear it directly as well as the delayed drum sound, which for him will feel like a stuttery mess.
Yes, that sounds right. Whether this sort of approach works depends a lot on whether your instrument is naturally quiet enough that you can just focus on what you hear from the server. People are actually surprisingly good at adjusting to instruments that are "slow to sound"; just like you can bounce a basketball on the beat with a little practice, even though it takes many milliseconds for the ball to hit the ground after leaving your hand, you can learn to play keyboard etc with a similar delay.
We had a Zoom video going at the same time so we could look at each other. What tripped my brain up the most was looking for rhythm cues from the video while playing: even my own video would not be in sync, and I would lose the zone where I could play in delayed time and have to restart.
You can make music together at this distance (even though it's perceptible).
Of course if you're playing with quantization on, or sequencers, it's not as much of a problem
Unless you get a Quantum A.I. that can predict seconds into the future... mine just said Musk will start a Q.A.I. company with some shady zero-equity A-round disguised as a pre-sale. Oh wait, that's not my QuantumAI, it's just my Twitter tab.
Anyone promising faster-than-light communication might as well be claiming a patent on a perpetual motion machine.
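There's a real kernel under the joke: latency over distance has a hard physical floor. Even ignoring routing, queuing, and buffering entirely, a signal in optical fibre travels at roughly 2/3 the speed of light in vacuum (refractive index around 1.47), which sets a minimum round-trip time you can't engineer away:

```python
# Lower bound on network round-trip time imposed by physics.
# The 1000 km distance is just an example; the fibre refractive
# index of ~1.47 is a typical textbook value.

C_VACUUM_KM_PER_MS = 299.792   # speed of light in vacuum, km per millisecond
FIBRE_INDEX = 1.47             # typical refractive index of optical fibre

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip through fibre, ignoring all equipment delays."""
    speed = C_VACUUM_KM_PER_MS / FIBRE_INDEX
    return 2 * distance_km / speed

print(f"{min_rtt_ms(1000):.1f} ms")
```

For two players about 1000 km apart that floor is already close to 10 ms round trip, so a meaningful chunk of any jam-session latency budget is spent before software even enters the picture.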
Have to link to that because I chuckled at your post.
I could just listen to music on my phone directly, but it becomes such a pain because watching a YouTube video or anything with audio on my computer completely breaks my flow.
Granted this is a completely niche persnickety problem, but what is this site if not for solving niche persnickety problems.
My friend who happens to be an acclaimed tubist just noted that he could use this for online music lessons. So there's earnestly a wide variety of use cases here.
I'm in the exact same boat - completely resonate with your desire to avoid breaking flow - and I doubt I would have thought of this solution if it weren't for your post above.
Needless to say, I love this. The only thing that's stopping it from being truly wonderful is an Android app <3
Needs some work still...
I wonder if you can't grab the latency correction factor from snapcast in real time and somehow apply it to your video stream as well? This has been raised before, it seems, and in another issue the snapcast author recommends looking at RTP-based streaming instead of snapcast.
Seems like snapcast may not be ideal for this after all :)
I'm not sure snapcast can't sync down to a few ms. If it can, the remaining issue would be syncing video with the audio clients, which certainly sounds feasible if integrated into the video player. That's what Jellyfin does: https://github.com/jellyfin/jellyfin-web/pull/1011 (I tried to help a bit with that one).
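For anyone who wants to experiment with reading those latency values out of snapcast: as I understand it, snapserver exposes a JSON-RPC control interface on TCP port 1705 with a Server.GetStatus method. The port, method name, and reply shape below are my reading of that API, so check them against your snapcast version's docs before relying on this:

```python
# Hedged sketch: read per-client latency settings from a running snapserver.
# Assumes snapcast's JSON-RPC control interface on TCP port 1705 and the
# Server.GetStatus method; verify both against your version's documentation.
import json
import socket

def parse_latencies(status: dict) -> dict:
    """Extract {client name: configured latency in ms} from a Server.GetStatus reply."""
    return {
        client["host"]["name"]: client["config"]["latency"]
        for group in status["result"]["server"]["groups"]
        for client in group["clients"]
    }

def get_client_latencies(host: str = "localhost", port: int = 1705) -> dict:
    """Query snapserver over its JSON-RPC TCP control port (newline-delimited)."""
    request = {"id": 1, "jsonrpc": "2.0", "method": "Server.GetStatus"}
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((json.dumps(request) + "\r\n").encode())
        reply = b""
        while not reply.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return parse_latencies(json.loads(reply))
```

Feeding those numbers into a video player's sync offset is then "just" plumbing, which is roughly what the Jellyfin PR above does on the player side.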
But, I guess they're not open source...
One thing nice about ROC is its pulseaudio plugin for native integration as a sink.
I wish there was a chat integration. I'd just like to be able to say: this is a cool tune.
I’ve been dreaming of a scenario where iPhones, or really any mobile device, could be set up as a two-way wireless mic/monitor system.
The idea came up because of things moving to video conference solutions. Any time you have multiple people at one end of the camera your audio almost by necessity becomes some mash up of microphone and speakers. The half-duplex nature of acoustic echo cancelled audio in this setting drives me insane.
I’ve done wild setups of splitting ear buds to a bunch of people but the cables become a mess. It would be so nice for each person in the room to just plug their earbuds with mic into their mobile and then somehow point it at the device hosting the video conference.
I’ve experimented with OBS.ninja for this but it has the same limitation that mic input dies once the device screen is locked.
And yes, definitely everyone on earbuds. Software or even DSP echo cancellation drives me crazy and I’m trying to eliminate it from the equation as much as possible.
Thanks for this project!
What work needs to be done to move the iOS app to the app store rather than beta? I'm not an iOS developer but if the issue is money I would like to help out!
To clarify: AirPods audio quality works fine paired to a Mac, but when you also want to use the mic (e.g. on a Zoom call), the mic audio quality is horrible. Some Bluetooth limitation on Macs? Not sure, but it's a known problem. Audio and mic quality on the iPhone, however, is great. No problem at all.
So that's the reason I'm asking whether it's possible to connect both audio and mic to the Mac via the iPhone to work around this limitation, i.e. Mac <> SonoBus <> iPhone <> AirPods.
Some devices that are designed for high quality audio and microphone usage simultaneously get around this limitation by actually exposing two Bluetooth "devices" simultaneously, one used exclusively for the microphone and the other for receiving high-quality audio. (Some gaming headsets use this technique.)
So this is why I was wondering if using Sonobus + iPhone might work around this issue?
The reason AirPods "worsen" on calls is that they switch to a Bluetooth profile with 8 kHz audio.
A possible workaround is setting the AirPods as output and your Mac's internal microphone as input.
It'll have other drawbacks for latency/echo cancellation.
* doesn't need Jack
* (optional) compressed audio using the Opus codec
* public and private groups
* automatic resampling and reblocking between peers
* (optional) dynamic jitter buffer adjustment
* built-in panning, mixing and some FX (compressor, eq, reverb, etc.)
* record output to disk
* also available as a VST plugin
* and probably more
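On the "dynamic jitter buffer adjustment" bullet above: here is a toy sketch of the general technique (not SonoBus's actual algorithm, which I haven't read). The classic approach, in the spirit of RFC 3550's jitter estimator, keeps a smoothed estimate of packet inter-arrival variation and sizes the playout buffer a few deviations above it, so the buffer grows on flaky networks and shrinks on stable ones:

```python
# Toy illustration of dynamic jitter buffer sizing -- not SonoBus's
# implementation. Smooth the variation in packet transit times with an
# exponential moving average and target a buffer depth a safety factor
# above that estimate.

class JitterBuffer:
    def __init__(self, alpha: float = 1 / 16, safety_factor: float = 4.0):
        self.alpha = alpha              # smoothing gain for the jitter estimate
        self.safety = safety_factor     # how many "jitters" of headroom to keep
        self.jitter_ms = 0.0            # smoothed jitter estimate
        self.last_transit_ms = None     # transit time of the previous packet

    def on_packet(self, transit_ms: float) -> float:
        """Feed one packet's network transit time; return the target buffer depth (ms)."""
        if self.last_transit_ms is not None:
            delta = abs(transit_ms - self.last_transit_ms)
            self.jitter_ms += self.alpha * (delta - self.jitter_ms)
        self.last_transit_ms = transit_ms
        return self.safety * self.jitter_ms
```

On a perfectly steady stream the target stays near zero (minimum latency); when transit times start bouncing around, the target rises to absorb the variation at the cost of added delay, which is exactly the trade-off a jam tool has to manage.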
Also, don't underestimate the value of a good UI ;-)
Seems to me a killer feature is the ability to run as a VST plugin inside of a DAW or OBS, so users can incorporate SonoBus into their DAW's workflow rather than setting up a new thing entirely.
The thing I looked for, hoped to see, but didn't find is a standalone non-GUI version, or a library implementing the protocol. I would love to drop this onto a headless Pi racked up as a Eurorack module, broadcasting low-latency output from hardware to join in. Talk about an amazing jam session.
Are there any plans for a library or a command-line version?
Good luck on that trademark case
Latin prefix + single letter:
Latin prefix + short word:
The auto functionality does not "adjust" to the network capabilities.
Other than that, it is great!
I'm assuming the first date is from the commit date, which here is Aug 2, 2020.